Psychoacoustics & Experimental Audiology

Our EEG system (actiCHamp, Brain Products) consists of 64 active scalp electrodes, several passive electrodes for electrooculography (EOG), and a StimTrak unit that allows audio signals to be recorded along with the EEG. For auditory stimulus presentation, we use insert earphones (ER-2, Etymotic Research), which minimize electrically induced noise and provide a wide bandwidth.

AABBA is an open group of scientists collaborating on the development and application of models of human spatial hearing.

AABBA's goal is to promote exploration and development of binaural and spatial models and their applications.

AABBA members are academic scientists willing to participate in our activities. We meet annually for open discussions and progress presentations, and especially encourage members to bring students and young scientists associated with their projects to our meetings. Our activities consolidate into joint publications and special sessions at international conferences. As a tangible outcome, we contribute validated (source) code for published models of binaural and spatial hearing to our collection of auditory models, the Auditory Modeling Toolbox (AMT).


  • Executive board: Piotr Majdak, Armin Kohlrausch, Ville Pulkki

  • Regular members:
    • Aachen: Janina Fels, ITA, RWTH Aachen
    • Bochum: Dorothea Kolossa, Ruhr-Universität Bochum
    • Cardiff: John Culling, School of Psychology, Cardiff University
    • Copenhagen: Torsten Dau & Tobias May, DTU, Lyngby
    • Dresden: Ercan Altinsoy, TU Dresden
    • Ghent: Sarah Verhulst & Alejandro Osses, Ghent University
    • Guangzhou: Bosun Xie, South China University of Technology, Guangzhou
    • Helsinki: Ville Pulkki & Nelli Salminen, Aalto University
    • Ilmenau: Alexander Raake, TU Ilmenau
    • Kosice: Norbert Kopčo, Safarik University, Košice
    • London: Lorenzo Picinali, Imperial College, London
    • Lyon: Mathieu Lavandier, Université de Lyon
    • Munich I: Werner Hemmert, TUM München
    • Munich II: Bernhard Seeber, TUM München 
    • Oldenburg I: Bernd Meyer, Carl von Ossietzky Universität Oldenburg
    • Oldenburg II: Mathias Dietz, Carl von Ossietzky Universität Oldenburg
    • Oldenburg-Eindhoven: Steven van de Par & Armin Kohlrausch, Universität Oldenburg
    • Paris: Brian Katz, Sorbonne Université
    • Patras: John Mourjopoulos, University of Patras
    • Rostock: Sascha Spors, Universität Rostock
    • Sheffield: Guy Brown, The University of Sheffield
    • Tabriz: Masoud Geravanchizadeh, University of Tabriz
    • Toulouse: Patrick Danès, Université de Toulouse
    • Troy: Jonas Braasch, Rensselaer Polytechnic Institute, Troy
    • Vienna: Bernhard Laback & Robert Baumgartner, Austrian Academy of Sciences, Wien
    • The AMT (Umbrella Project): Piotr Majdak
  • Honorary member and founder: Jens Blauert

AABBA group as of the 12th meeting, 2020, in Vienna.


Annual meetings are held at the beginning of each year:

  • 12th meeting: 16-17 January 2020, Vienna.
  • 11th meeting: 19-20 February 2019, Vienna.
  • 10th meeting: 30-31 January 2018, Vienna.
  • 9th meeting: 27-28 February 2017, Vienna.
  • 8th meeting: 21-22 January 2016, Vienna.
  • 7th meeting: 22-23 February 2015, Berlin.
  • 6th meeting: 17-18 February 2014, Berlin.
  • 5th meeting: 24-25 January 2013, Berlin.
  • 4th meeting: 19-20 January 2012, Berlin.
  • 3rd meeting: 13-14 January 2011, Berlin.
  • 2nd meeting: 29-30 September 2009, Bochum.
  • 1st meeting: 23-26 March 2009, Rotterdam.


  • Upcoming: Structured Session "Binaural models: development and applications" at the Forum Acusticum 2020, Lyon.
  • Special Session "Binaural models: development and applications" at the ICA 2019, Aachen.
  • Special Session "Models and reproducible research" at the Acoustics'17 (EAA/ASA) 2017, Boston.
  • Structured Session "The Technology of Binaural Listening & Understanding" at the ICA 2016, Buenos Aires.
  • Structured Session "Applied Binaural Signal Processing" at the Forum Acusticum 2014, Kraków.

Contact person: Piotr Majdak

Psychoacoustics and Experimental Audiology

The project Hearing with Cochlear Implants investigates the basic functions of auditory perception in normal-hearing listeners and in cochlear-implant users. Bilateral implantation enables left-right localization of sound sources in the horizontal plane, which also improves speech intelligibility in background noise. Up/down and front/back localization, however, remain a problem. Normal-hearing listeners obtain localization information from the pinna and evaluate the so-called head-related transfer functions (HRTFs) for spatial hearing. Following numerical simulation and acoustic measurement of HRTFs, we investigate how localization information can be incorporated into the stimulation strategy of the implant electrodes, so that bilaterally implanted listeners can also perceive localization in the sagittal planes. Fast and reliable auditory front/back localization of sound sources is considered particularly important for implanted children moving in road traffic.

Our results and the knowledge gained are adopted by cochlear-implant manufacturers worldwide, among them Med-El, an Austrian company headquartered in Innsbruck.

Current main projects:

  • YIRG Dynamates: Dynamic auditory predictions in human and non-human primates
  • Born2Hear: Development and Adaptation of Auditory Spatial Processing Across the Human Lifespan
  • SOFA: Spatially Oriented Format for Acoustics (2013-)


Past main projects:

  • SpExCue: Role of Spectral Cues in Sound Externalization: Objective Measures & Modeling (2016-2019)
  • ITD PsyPhy: Bilateral Cochlear Implants: Physiology and Psychophysics (2015-2019)
  • BiPhase: Binaural Hearing and the Cochlear Phase Response (2013 - 2017)
  • POTION: Perceptual Optimization of Audio Time-Frequency Representations and Coding (2014 - 2016)
  • LocaPhoto: Virtual Acoustics: Localization Model & Numeric Simulations (2012 - 2015)
  • ITD MultiEl: Binaural-Timing Sensitivity in Multi-Electrode Stimulation (2013 - 2015)
  • HRTF Imp: Time-Frequency Implementation of HRTFs (2012 - 2014)

Further information can be found in our project list.


Virtual Acoustics: Localization Model & Numeric Simulations (LocaPhoto)

LocaPhoto consisted of three parts: geometry acquisition, HRTF calculation, and HRTF evaluation by means of a localization model.


Geometry acquisition

First, we evaluated the potential of various 3-D scanners by comparing the 3-D meshes they produced for several listeners (Reichinger et al., 2013). As a common reference for comparison, we created "reference" meshes by taking silicone impressions of listeners' ears and scanning them in a high-energy computed-tomography scanner. Not all 3-D scanners were able to produce meshes of the required quality, which limits their applicability in practical end-user situations.

Further, we worked on a procedure to generate 3-D meshes directly from 2-D photos by means of photogrammetric-reconstruction algorithms. Under selected conditions, we obtained 3-D meshes from which perceptually valid HRTFs could be calculated (publication in preparation).

HRTF calculation

While working on geometry acquisition, we developed, implemented, and evaluated a procedure to efficiently calculate HRTFs from a 3-D mesh. The resulting software package, Mesh2HRTF, consists of a Blender plugin for mesh preparation, an executable application based on the boundary-element method, and a Matlab tool for HRTF post-processing (Ziegelwanger et al., 2015a). For the evaluation, HRTFs calculated from the reference meshes were compared to acoustically measured HRTFs; differences between conditions were assessed via model predictions and sound-localization experiments. We showed that, in the proximity of the ear canal, meshes with an average edge length of 1 mm or less are required, and that a small area acting as the virtual microphone in the calculations yields the best results (Ziegelwanger et al., 2015).

To further speed up the calculations, we applied non-uniform a-priori mesh grading. This method reduces the number of elements in the mesh to as few as 10,000 while still yielding perceptually valid HRTFs (Ziegelwanger et al., 2016). With this method, HRTF calculations in under an hour become achievable.
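The 1-mm edge-length criterion mentioned above can be checked on a candidate mesh before starting a costly boundary-element calculation. Below is a minimal sketch of such a check; the vertex and face arrays are hypothetical toy inputs (a real mesh would be loaded, e.g., from an STL file).

```python
# Sketch: compute the average edge length of a triangular mesh, e.g. to check
# it against the ~1 mm criterion near the ear canal reported by
# Ziegelwanger et al. (2015). The toy mesh below is a placeholder.
import numpy as np

def average_edge_length(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Mean length of all unique edges of a triangular mesh.

    vertices: (V, 3) float array of vertex coordinates (here in mm).
    faces:    (F, 3) int array of vertex indices per triangle.
    """
    # Collect the three edges of every triangle as sorted index pairs,
    # then drop duplicates shared between neighboring triangles.
    edges = np.concatenate([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    edges = np.unique(np.sort(edges, axis=1), axis=0)
    lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    return float(lengths.mean())

# Toy example: one right triangle with 1 mm legs.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2]])
print(average_edge_length(verts, tris))  # (1 + 1 + sqrt(2)) / 3 ≈ 1.138 mm
```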

HRTF evaluation

Given the large number of parameters in the numerical calculations, hundreds of calculated HRTF sets had to be tested. Evaluating HRTF quality is a complex task because it involves many percepts, such as directional sound localization, sound externalization, apparent source width, distance perception, and timbre changes. Ideally, one would like HRTFs that generate virtual auditory scenes as realistic as natural scenes. As a model evaluating such a "degree of realism" was out of reach, we focused on a very important and well-explored aspect: directional sound localization.

For sound localization in the lateral dimension (left/right), few aspects require HRTF individualization. The listener-specific interaural time difference (ITD), i.e., the broadband interaural difference between the sound's times of arrival, can contribute, though. Thus, we first created a 3-D time-of-arrival model that describes the ITD with a few parameters derived from a listener's HRTFs (Ziegelwanger and Majdak, 2014).
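To illustrate what such a broadband ITD is, the sketch below estimates it from a pair of head-related impulse responses (HRIRs) via cross-correlation. This is a generic textbook estimator, not the parametric time-of-arrival model of Ziegelwanger and Majdak (2014), and the HRIRs are synthetic placeholders.

```python
# Sketch: broadband ITD estimate from left/right HRIRs via cross-correlation.
# ITD is defined here as t_left - t_right; negative values mean the sound
# reaches the left ear first.
import numpy as np

def itd_from_hrirs(h_left, h_right, fs):
    """Return the ITD in seconds (t_left - t_right)."""
    xcorr = np.correlate(h_left, h_right, mode="full")
    lags = np.arange(-(len(h_right) - 1), len(h_left))  # lag axis in samples
    return lags[np.argmax(xcorr)] / fs

fs = 48000
pulse = np.zeros(64); pulse[0] = 1.0
h_l = np.roll(pulse, 5)   # left ear: arrival at sample 5
h_r = np.roll(pulse, 29)  # right ear: arrival at sample 29
print(itd_from_hrirs(h_l, h_r, fs))  # negative: left ear leads by 24 samples
```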

For sound localization in sagittal planes (top/down, front/back), individualization of HRTFs is a major issue. Sagittal-plane localization is still not completely understood, but the role of the dorsal cochlear nucleus (DCN) was already known at the beginning of LocaPhoto. We therefore developed a model that predicts sagittal-plane sound-localization performance based on the spectral processing found in the DCN. It was rigorously evaluated in various conditions and was found to predict listener-specific localization performance quite well (Baumgartner et al., 2014).
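The core idea of such template-based models can be caricatured in a few lines: the incoming spectrum is compared against stored listener-specific template spectra for candidate polar angles, and the best match wins. The sketch below is a heavily simplified stand-in for the actual model of Baumgartner et al. (2014) (which, among other things, maps spectral distances to response probabilities); all spectra and angles are synthetic placeholders.

```python
# Sketch of the template-comparison stage of a sagittal-plane localization
# model (highly simplified; spectra and angles below are synthetic).
import numpy as np

def predict_polar_angle(target_spec, templates, angles):
    """Pick the candidate polar angle whose template best matches the target.

    target_spec: (F,) magnitude spectrum in dB.
    templates:   (A, F) template spectra in dB, one row per candidate angle.
    angles:      (A,) candidate polar angles in degrees.
    """
    # Compare spectral gradients (band-to-band differences) rather than raw
    # levels: this emphasizes spectral shape, akin to DCN-type processing.
    g_target = np.diff(target_spec)
    g_templates = np.diff(templates, axis=1)
    distances = np.mean(np.abs(g_templates - g_target), axis=1)
    return angles[np.argmin(distances)]

rng = np.random.default_rng(0)
angles = np.array([0, 30, 60, 90])            # candidate polar angles (deg)
templates = rng.normal(0, 5, size=(4, 32))    # fake template spectra (dB)
target = templates[2] + rng.normal(0, 0.5, 32)  # noisy observation of 60 deg
print(predict_polar_angle(target, templates, angles))  # 60
```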

In LocaPhoto, this model allowed us to evaluate many numerically calculated HRTFs. It also uncovered surprising properties of human sound localization (Majdak et al., 2014). The model is implemented in the Auditory Modeling Toolbox (Søndergaard and Majdak, 2013) and has been used for various evaluations (Baumgartner et al., 2013), such as positioning loudspeakers in loudspeaker-based sound reproduction (Baumgartner and Majdak, 2015). It also serves as the basis for a 3-D sound-localization model (Altoè et al., 2014) and for a model addressing sensorineural hearing loss (Baumgartner et al., 2016).


Austrian Science Fund (FWF, P 24124-N13)


February 2012 - October 2016


  • Baumgartner, R., Majdak, P., Laback, B. (2016): Modeling the Effects of Sensorineural Hearing Loss on Sound Localization in the Median Plane, in: Trends in Hearing 20, 1-11.
  • Ziegelwanger, H., Kreuzer, W., Majdak, P. (2016): A priori mesh grading for the numerical calculation of the head-related transfer functions, in: Applied Acoustics 114, 99-110.
  • Baumgartner, R., Majdak, P. (2015): Modeling Localization of Amplitude-Panned Virtual Sources in Sagittal Planes, in: J. Audio Eng. Soc. 63, 562-569.
  • Ziegelwanger, H., Kreuzer, W., Majdak, P. (2015): Mesh2HRTF: An open-source software package for the numerical calculation of head-related transfer functions, in: Proceedings of the 22nd International Congress on Sound and Vibration (ICSV). Florence, Italy, 1-8.
  • Ziegelwanger, H., Majdak, P., Kreuzer, W. (2015): Numerical calculation of head-related transfer functions and sound localization: Microphone model and mesh discretization, in: The Journal of the Acoustical Society of America 138, 208-222.
  • Altoè, A., Baumgartner, R., Majdak, P., Pulkki, V. (2014): Combining count-comparison and sagittal-plane localization models towards a three-dimensional representation of sound localization, in: Proceedings of the 7th Forum Acusticum. Kraków, Poland, 1-6.
  • Baumgartner, R., Majdak, P., Laback, B. (2014): Modeling Sound-Source Localization in Sagittal Planes for Human Listeners, in: The Journal of the Acoustical Society of America 136, 791-802.
  • Majdak, P., Baumgartner, R., Laback, B. (2014): Acoustic and non-acoustic factors in modeling listener-specific performance of sagittal-plane sound localization, in: Frontiers in Psychology 5, 319(1-10).
  • Baumgartner, R., Majdak, P., Laback, B. (2013): Assessment of sagittal-plane sound localization performance in spatial-audio applications, in: Blauert, J. (ed.), The Technology of Binaural Listening. Berlin-Heidelberg-New York (Springer), 93-119.
  • Reichinger, A., Majdak, P., Sablatnig, R., Maierhofer, S. (2013): Evaluation of Methods for Optical 3-D Scanning of Human Pinnas, in: Proceedings of the 3D Vision Conference 2013, Third Joint 3DIM/3DPVT Conference. Seattle, WA, 390-397.
  • Søndergaard, P., Majdak, P. (2013): The Auditory Modeling Toolbox, in: Blauert, J. (ed.), The Technology of Binaural Listening. Berlin-Heidelberg-New York (Springer), 33-56.

Contact for more information:

Piotr Majdak (Principal Investigator)

Michael Mihocic (HRTF measurement)

Cochlear Implant

A cochlear implant (CI) is a surgically implanted electronic device that provides a sense of sound to a person who is profoundly deaf or severely hard of hearing. Cochlear implants are often referred to as bionic ears.

As of December 2012, approximately 324,000 people worldwide had received cochlear implants; in the U.S., roughly 58,600 adults and 38,000 children are recipients. Some recipients have bilateral implants to allow for stereo hearing. However, barriers such as cost prevent many patients from acquiring the device.

Cochlear implants may help provide hearing in patients who are deaf due to a lack of, or damage to, sensory hair cells in the cochlea. In such patients, they can often enable sufficient hearing for unaided understanding of speech. The quality of sound differs from natural hearing, with less sound information being received and processed by the brain. Nevertheless, many patients are able to hear and understand speech and environmental sounds, and newer devices and processing strategies may allow recipients to hear better in noise, enjoy music, and even use their implant processors while swimming.

(Source: adapted from


Here are some short audio examples comparing acoustic CI simulations to normal hearing. Try to identify the text (spoken in German) and the songs from the simulation first!

After listening to the cochlear-implant simulations, you will notice that they sound strange and "tinny". With a little practice, however, you will become familiar with the sounds and be able to understand them almost as well as the unprocessed originals.
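The "tinny" character of such simulations comes from discarding spectral fine structure and keeping only a few band envelopes. A common way to produce acoustic CI simulations is a noise vocoder; the sketch below is a generic textbook vocoder, not the specific strategy of Goupell et al. (2008), and its band edges and test signal are illustrative only.

```python
# Sketch: a minimal noise vocoder as used for acoustic CI simulations.
# The signal is split into a few frequency bands; each band's envelope
# modulates band-limited noise, discarding the spectral fine structure.
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Crude brick-wall bandpass via FFT masking (sufficient for a demo)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f >= hi)] = 0
    return np.fft.irfft(X, n=len(x))

def noise_vocode(x, fs, edges=(100, 500, 1000, 2000, 4000)):
    """Return an n-channel noise-vocoded version of x, normalized to +-1."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass_fft(x, fs, lo, hi)
        # Envelope: rectify and smooth with a ~10 ms moving average.
        n_win = int(0.01 * fs)
        env = np.convolve(np.abs(band), np.ones(n_win) / n_win, mode="same")
        # Carrier: band-limited noise in the same channel.
        carrier = bandpass_fft(rng.standard_normal(len(x)), fs, lo, hi)
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
t = np.arange(fs) / fs
# A speech-like test tone: 300 Hz carrier with a slow amplitude modulation.
speechlike = np.sin(2 * np.pi * 300 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
y = noise_vocode(speechlike, fs)
```

Increasing the number of bands makes the simulation progressively more intelligible, which mirrors the practice effect described above.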

[Table of audio examples: five female voices, one male voice, and numerous music excerpts, each paired with its CI simulation and the original sound.]

Download all sound examples as mp3:

Zip-Archive (30 MB)


(Source of the simulation strategy: Goupell et al. 2008)

A semi-anechoic room (6.2 m × 5.5 m × 2.96 m) is available for acoustic measurements. Psychoacoustic experiments, such as sound-localization experiments, can also be performed there.

The room is equipped with:

  • a 22-loudspeaker array for HRTF measurements;
  • a 64-microphone array for various recording approaches (beamforming, near-field holography);
  • a virtual-environment setup for tests with subjects in a virtual visual and acoustic environment, controlled in real time with a head tracker;
  • equipment for speech recordings for speaker identification.


A vertical circular array of 22 loudspeakers generates acoustic signals from almost every direction; the listener is seated on a computer-controlled swivel chair.