Conference Agenda


All times are shown in the conference time zone (EEST).

Session Overview
SA-P: Spatial audio
Wednesday, 23/Sept/2020:
10:50am - 11:10am

Session Chair: Konrad Kowalczyk
Location: Virtual platform

10:50am - 10:55am

Non-Line-of-Sight Time-Difference-of-Arrival Localization with Explicit Inclusion of Geometry Information in a Simple Diffraction Scenario

Sönke Südbeck, Thomas Krause, Jörn Ostermann

Leibniz Universität Hannover, Germany

Time-difference-of-arrival (TDOA) localization is a technique for finding the position of a wave-emitting object, e.g. a car horn. Many algorithms have been proposed for TDOA localization under line-of-sight (LOS) conditions. In the non-line-of-sight (NLOS) case, the performance of these algorithms usually deteriorates. There are techniques to reduce the error introduced by the NLOS condition, which, however, do not directly take into account information on the geometry of the surroundings. In this paper, an NLOS TDOA localization approach for a simple diffraction scenario is described, which incorporates information on the surroundings into the equation system. An experiment with three different loudspeaker positions was conducted to validate the proposed method. The localization error was less than 6.2 % of the distance from the source to the closest microphone position. Simulations show that the proposed method attains the Cramér-Rao lower bound for low enough TDOA noise levels.
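The conventional LOS baseline that the abstract contrasts against can be sketched as a least-squares search over candidate source positions. Everything below (microphone layout, source position, grid) is a hypothetical toy setup, not the paper's experiment:

```python
import numpy as np

# Hypothetical 2-D setup: four microphones and a source (not from the paper).
mics = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
src = np.array([1.3, 2.1])
c = 343.0  # speed of sound in m/s

# Ideal LOS TDOAs relative to microphone 0.
d = np.linalg.norm(mics - src, axis=1)
tdoa = (d[1:] - d[0]) / c

# Brute-force grid search minimizing the sum of squared TDOA residuals.
xs = np.linspace(0, 4, 201)
ys = np.linspace(0, 4, 201)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        p = np.array([x, y])
        dd = np.linalg.norm(mics - p, axis=1)
        r = (dd[1:] - dd[0]) / c - tdoa
        err = np.sum(r**2)
        if err < best_err:
            best, best_err = p, err

print(best)  # close to the true source at (1.3, 2.1)
```

Under NLOS conditions the measured TDOAs no longer match the straight-line distances used in the residual, which is why such baselines degrade and why the paper instead builds the diffraction geometry into the equation system.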

10:55am - 11:00am

Localization and Categorization of Early Reflections for Estimating Acoustic Reflection Coefficients

Robert Hupke, Sebastian Lauster, Nils Poschadel, Marcel Nophut, Stephan Preihs, Jürgen Peissig

Leibniz University Hannover, Institute of Communications Technology, Hannover, Germany

Knowledge of room acoustic parameters such as frequency- and direction-dependent reflection coefficients, room volume, or geometric characteristics is important for the modeling of acoustic environments, e.g. to improve the plausibility of immersive audio in mixed-reality applications or to transfer a physical acoustic environment into a completely virtual one. This paper presents a method for detecting first-order reflections in three dimensions from spatial room impulse responses recorded with a spherical microphone array. Using geometric relations, the estimated direction of arrival (DOA), and the time difference of arrival (TDOA), the order of the respective mirror sound source is determined.

After categorization of the incident reflections with respect to the individual walls of the room, the DOA and TDOA information of the first-order mirror sound sources can be used to estimate the frequency-dependent reflection coefficients of the respective walls using a modal beamformer. The results of our estimation from real measurements are evaluated and compared to the results of a simulation, with a focus on the categorization of incident reflections.
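The geometric relation between a wall, its first-order mirror (image) source, and the resulting DOA/TDOA at a receiver can be illustrated with a minimal sketch; the wall, source, and receiver positions below are hypothetical, not the paper's measurement setup:

```python
import numpy as np

# Hypothetical shoebox geometry (not the paper's room): mirror the source
# across the wall x = 0 to obtain the first-order image source.
src = np.array([2.0, 1.5, 1.2])
rcv = np.array([3.0, 2.5, 1.2])
img = src * np.array([-1, 1, 1])  # reflection across the x = 0 wall

c = 343.0  # speed of sound in m/s
t_direct = np.linalg.norm(src - rcv) / c
t_refl = np.linalg.norm(img - rcv) / c
tdoa = t_refl - t_direct  # extra delay of the wall reflection vs. direct sound

# Apparent direction of arrival of the reflection: toward the image source.
doa = (img - rcv) / np.linalg.norm(img - rcv)

print(tdoa, doa)
```

Inverting this relation is the core of the categorization step: a detected reflection whose DOA points back through the x = 0 wall, with a delay consistent with the image-source distance, is assigned to that wall.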

11:00am - 11:05am

Binaural Rendering From Distributed Microphone Signals Considering Loudspeaker Distance in Measurements

Naoto Iijima, Shoichi Koyama, Hiroshi Saruwatari

Graduate School of Information Science and Technology, The University of Tokyo

A method of binaural rendering from distributed microphone recordings is proposed that takes into consideration the loudspeaker distance used when measuring the head-related transfer function (HRTF). In general, to reproduce binaural signals from the signals captured by multiple microphones in the recording area, the captured sound field is represented by plane-wave decomposition. Thus, the HRTF is approximated as a transfer function from a plane-wave source in binaural rendering. To incorporate the distance in HRTF measurements, we propose a method based on the spherical-wave decomposition of a sound field, in which the HRTF is assumed to be measured from a point source. Results of experiments using HRTFs calculated by the boundary element method indicated that the accuracy of binaural signal reproduction by the proposed method based on spherical-wave decomposition was higher than that of the plane-wave-decomposition-based method. We also evaluate the performance of signal conversion from distributed microphone measurements into binaural signals.
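The difference between the two sound-field models can be seen directly in the free-field transfer functions: a plane wave has unit amplitude everywhere, while a point source at finite distance carries a 1/r amplitude decay and curved-wavefront phase. A toy comparison, with all values (frequency, source distance, ear offset) chosen purely for illustration:

```python
import numpy as np

f = 1000.0                # frequency in Hz (illustrative)
c = 343.0                 # speed of sound in m/s
k = 2 * np.pi * f / c     # wavenumber
r = 1.5                   # hypothetical loudspeaker distance in HRTF measurement (m)

# Plane wave from direction u, observed at ear position x: exp(-j k u·x)
u = np.array([1.0, 0.0, 0.0])
x = np.array([0.0, 0.09, 0.0])  # rough ear offset from head center
H_plane = np.exp(-1j * k * (u @ x))

# Point source at s = r*u: 3-D Green's function exp(-j k |x - s|) / (4*pi*|x - s|)
s = r * u
dist = np.linalg.norm(x - s)
H_point = np.exp(-1j * k * dist) / (4 * np.pi * dist)

# The finite-distance model keeps the 1/r decay that the plane-wave model drops.
print(abs(H_plane), abs(H_point))
```

Matching the rendering model to a point source at the actual measurement distance, rather than an idealized plane wave, is the motivation for the spherical-wave decomposition in the abstract.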

11:05am - 11:10am

Compressing Head-Related Transfer Function databases by Eigen decomposition

Juan Camilo Arévalo Arboleda, Julián Villegas

University of Aizu, Japan

A method to reduce the memory footprint of Head-Related Transfer Functions (HRTFs) is introduced. Based on an eigendecomposition of HRTFs, the proposed method is capable of reducing a database comprising 6,344 measurements from 36.30 MB to 2.41 MB (about a 15:1 compression ratio). Synthetic HRTFs in the compressed database were set to have less than 1 dB spectral distortion between 0.1 and 16 kHz. The differences between the compressed measurements and those in the original database do not seem to translate into degraded perceptual localization accuracy. The high degree of compression obtained with this method allows the inclusion of interpolated HRTFs in databases, easing real-time audio spatialization in Virtual Reality (VR).
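The underlying idea, storing a small eigenvector basis plus per-measurement weights instead of full spectra, can be sketched with a principal-component (eigen)decomposition on toy data. Matrix sizes and the retained-component count below are illustrative, not the paper's:

```python
import numpy as np

# Toy sketch (not the authors' implementation): stack HRTF magnitude spectra
# as rows, keep the top-k principal components, and store only the basis
# plus per-measurement weights.
rng = np.random.default_rng(0)
n_meas, n_bins, k = 500, 256, 16  # illustrative sizes

H = rng.standard_normal((n_meas, n_bins))
mean = H.mean(axis=0)
U, S, Vt = np.linalg.svd(H - mean, full_matrices=False)

basis = Vt[:k]                  # k x n_bins eigenvectors
weights = (H - mean) @ basis.T  # n_meas x k coefficients per measurement
H_rec = weights @ basis + mean  # reconstruction from the compressed form

# Storage: basis + weights + mean instead of the full matrix.
stored = basis.size + weights.size + mean.size
ratio = H.size / stored
print(round(ratio, 1))
```

The achievable ratio grows with the number of measurements, since the fixed-size basis is amortized over all of them; the 15:1 figure in the abstract additionally depends on the distortion budget used to pick the number of retained components.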

Conference: IEEE MMSP 2020