With the growing market for mobile devices and Virtual, Augmented, and Mixed Reality (VR/AR/MR) applications, headphone-based Three-Dimensional (3D) audio is becoming increasingly important. Head-Related Transfer Functions (HRTFs), which represent the acoustic filtering of incoming sounds by the listener's morphology, are essential for creating virtual sound images reproduced via headphones.
Various measurement systems have been proposed to rapidly record individual HRTFs from different directions, but most of them require expensive hardware setups. In addition, these systems are usually limited to estimating direction-dependent HRTFs at a fixed source-listener distance. In this thesis, an MR-based mobile system is proposed for the fast estimation of distance- and direction-dependent HRTFs using only a single loudspeaker.
Perceived externalization, i.e., the perception of sound images as located outside the head, is one of the most important features for building immersive Virtual Acoustic Environments (VAEs). It is well known that reverberation and the spectral information of direct sound components are two essential cues related to perceived externalization. This thesis further studies the relative impact of these two cues in contralateral versus ipsilateral ear signals on the externalization of lateral sound images. Based on the outcomes of these studies, a series of experiments is designed to build a quantitative model that explains the interplay of these acoustic cues in the externalization of lateral sound sources.
Due to the challenge of measuring individual HRTFs for every listener, non-individual HRTFs are commonly applied in binaural rendering systems in combination with simple room models. However, sound sources synthesized from frontal and rear directions are then difficult to perceive as well externalized. This thesis proposes an advanced binaural rendering system that enhances the externalization of frontal and rear sound images based on localization- and externalization-related auditory cues.