P66 Session 2 (Tuesday 13 January 2026, 14:10-16:40)
Using real-time acoustic resynthesis and virtual reality to understand the needs of hearing-impaired listeners in conversational settings
Evaluating the effectiveness of hearing aid algorithms in conversational settings is a challenging task. Setting up realistic scenarios involves substantial preparation and effort on the part of both researchers and participants. Additional challenges arise when trying to design environments that are both controllable and reproducible. One potential solution is to use virtual reality in combination with real recorded data.
This study uses a novel dataset, CHiME9-ECHI, which provides close-talk microphone recordings and head-tracking data. Using these data, we are testing the feasibility of evaluating hearing aid algorithms in synthetic environments, in which recorded ground-truth speech and head-tracking data from four conversational partners are combined with spatial audio processing to replicate the audio as it would be received at the hearing aid microphones worn by one of the participants.
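To illustrate the resynthesis step, the sketch below renders a static snapshot of such a scene by convolving each partner's clean close-talk speech with a head-related impulse response (HRIR) chosen from the listener's current head orientation. This is a minimal sketch only: the function names and the nearest-neighbour HRIR lookup are illustrative choices of ours, the geometry is simplified to 2-D, and a real pipeline would update the rendering frame by frame as the tracking data changes; none of this is the CHiME9-ECHI tooling itself.

```python
import numpy as np
from scipy.signal import fftconvolve

def nearest_hrir(hrirs, azimuth_deg):
    """Pick the HRIR pair measured closest to the requested azimuth.
    `hrirs` maps azimuth in degrees -> (left_ir, right_ir) arrays."""
    key = min(hrirs, key=lambda a: abs(a - azimuth_deg))
    return hrirs[key]

def talker_azimuth(talker_xy, listener_xy, listener_yaw_deg):
    """Talker direction relative to the listener's look direction
    (a 2-D simplification of the head-tracking data)."""
    dx = talker_xy[0] - listener_xy[0]
    dy = talker_xy[1] - listener_xy[1]
    return np.degrees(np.arctan2(dy, dx)) - listener_yaw_deg

def resynthesise_scene(talkers, listener_xy, listener_yaw_deg, hrirs):
    """Spatialise each partner's clean close-talk speech and sum the
    results into the two-channel signal at the listener's ears."""
    mix = None
    for speech, talker_xy in talkers:
        az = talker_azimuth(talker_xy, listener_xy, listener_yaw_deg)
        left_ir, right_ir = nearest_hrir(hrirs, az)
        binaural = np.stack([
            fftconvolve(speech, left_ir)[: len(speech)],
            fftconvolve(speech, right_ir)[: len(speech)],
        ])
        mix = binaural if mix is None else mix + binaural
    return mix
```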
With this synthetic environment, we can implement “virtual hearing aids” that use the audio and sensor information to replicate the source enhancement strategies of physical hearing aids, such as beamforming and other target-speaker enhancement techniques. A user can then wear a virtual reality headset, and the system can process the tracking data to produce a simulated hearing aid experience in a conversational scenario.
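To make the “virtual hearing aid” idea concrete, here is a minimal sketch of one classic enhancement strategy, a time-domain delay-and-sum beamformer steered toward the target talker using a direction derived from the head-tracking data. This is an illustrative baseline under a plane-wave assumption, not the specific algorithm evaluated in the study; the array geometry and variable names are ours.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second

def delay_and_sum(mic_signals, mic_positions, target_dir, fs):
    """Steer a time-domain delay-and-sum beamformer at a target talker.

    mic_signals:   (n_mics, n_samples) array from the virtual hearing aid
    mic_positions: (n_mics, 3) array in metres, relative to the array centre
    target_dir:    unit vector toward the talker (from head tracking)
    fs:            sample rate in Hz
    """
    # Relative arrival time of a plane wave from target_dir at each mic;
    # mics nearer the talker receive the wavefront earlier.
    tau = -(mic_positions @ target_dir) / SPEED_OF_SOUND
    tau -= tau.min()  # make all alignment shifts non-negative
    n = mic_signals.shape[1]
    out = np.zeros(n)
    for sig, t in zip(mic_signals, tau):
        shift = int(round(t * fs))
        out[: n - shift] += sig[shift:]  # advance each channel to align the target
    return out / len(mic_signals)
```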
The long-term goal of this system is to allow hearing aid algorithms to be assessed more easily and reproducibly. By providing a means to adjust parameters and algorithms rapidly and iteratively, we can better identify which characteristics align hearing aids with the needs and preferences of their users, both in general and individually.
In the future, we expect that similar systems may help hearing aid manufacturers identify which customizations to offer their users. They could also support a fitting process in which a user virtually experiences challenging noise environments and tunes parameters in real time, creating a customized profile that could be uploaded to their hearing aids for a listening experience better suited to their personal needs.
At the workshop, we hope to present some early audio examples of conversations reprocessed using these techniques.
This project is supported by RNID.