P48, Session 2 (Tuesday 13 January 2026, 14:10-16:40)
Potential of deep neural networks for hearing loss compensation and noise reduction: In search of the best configuration
Recent applications of auditory models with hearing loss follow a 'closed-loop' approach: deep learning is used to optimize the parameters of a neural network by comparing a normal-hearing branch and a hearing-impaired branch of the auditory model, thereby obtaining a new, unrestricted (and previously unknown) hearing loss compensation (HLC). The potential of this approach has already been demonstrated by Leer et al. (2024, doi:10.48550/arXiv.2403.10428) using a physiological auditory-nerve model, and by Drgas & Bramsløw (SPIN2025, doi:10.5281/zenodo.15433865) using a simpler loudness model.
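As a minimal illustration of the closed-loop idea, the sketch below (PyTorch) trains a compensation network by minimising the distance between the normal-hearing representation of clean speech and the hearing-impaired representation of the compensated speech. All names, the toy auditory model and the flat hearing loss are placeholders, not the cited authors' implementations.

```python
# Closed-loop training sketch (PyTorch). The compensator, the toy auditory
# model and the flat hearing loss are placeholders, not the cited methods.
import torch

class Compensator(torch.nn.Module):
    """Toy DNN mapping an input waveform to a compensated waveform."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv1d(1, 64, 9, padding=4), torch.nn.ReLU(),
            torch.nn.Conv1d(64, 1, 9, padding=4),
        )

    def forward(self, x):                    # x: (batch, 1, samples)
        return self.net(x)

def toy_auditory_model(wave, loss_db=0.0):
    """Very crude stand-in for an auditory/loudness model: a compressive
    magnitude spectrogram with a flat hearing-loss attenuation."""
    spec = torch.stft(wave.squeeze(1), n_fft=512, hop_length=128,
                      window=torch.hann_window(512), return_complex=True).abs()
    return torch.log1p(spec * 10 ** (-loss_db / 20))

nh_model = lambda w: toy_auditory_model(w, loss_db=0.0)    # normal hearing
hi_model = lambda w: toy_auditory_model(w, loss_db=40.0)   # flat 40 dB loss

def closed_loop_loss(clean, compensator):
    """Distance between NH and HI internal representations; minimising it
    drives the compensated, impaired percept toward the normal one."""
    target = nh_model(clean)                    # NH branch: clean speech
    estimate = hi_model(compensator(clean))     # HI branch: compensated speech
    return torch.nn.functional.mse_loss(estimate, target)

# One training step (data loading omitted):
compensator = Compensator()
optimizer = torch.optim.Adam(compensator.parameters(), lr=1e-4)
loss = closed_loop_loss(torch.randn(2, 1, 16000), compensator)
loss.backward()
optimizer.step()
```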
Neural noise reduction (NR) can also be included by adding noise only to the input of the hearing-impaired branch. Standard and neural HLC+NR can be combined in different ways, either in one joint deep neural network (DNN) or in two separate DNNs. Likewise, different loss functions and DNN architectures can be applied, both of which are critical for the result.
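A schematic view of the separate versus joint arrangements, under the assumption that noise enters only on the hearing-impaired side; nr_net, hlc_net and joint_net are hypothetical stand-ins, not the networks used in the study.

```python
# Separate (two-DNN) vs joint (one-DNN) processing chains; the nets are
# hypothetical placeholders and noise is added only to the HI-branch input.
import torch

nr_net = torch.nn.Conv1d(1, 1, 9, padding=4)      # stand-in noise-reduction DNN
hlc_net = torch.nn.Conv1d(1, 1, 9, padding=4)     # stand-in compensation DNN
joint_net = torch.nn.Conv1d(1, 1, 9, padding=4)   # stand-in joint NR+HLC DNN

def separate_chain(clean, noise):
    """Two DNNs in cascade: NR first, then HLC."""
    denoised = nr_net(clean + noise)
    return hlc_net(denoised)

def joint_chain(clean, noise):
    """One DNN performing NR and HLC jointly."""
    return joint_net(clean + noise)

# In training, the closed-loop loss compares hi_model(chain output) against
# nh_model(clean), so the normal-hearing branch never sees the added noise.
```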
The aim of the work was to test the early application of a loudness model (AUDMOD: Bramsløw, SPIN2024, doi:10.5281/zenodo.10473561) for DNN-based HLC and noise reduction in different configurations: 1) Neural NR + Neural HLC, 2) Neural NR + NAL-NL2 HLC, 3) Joint HLC + NR, 4) No NR + Neural HLC, 5) NAL-NL2 HLC (standard reference). A selection of cost functions was applied during training.
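For reference, the five conditions can be written as a simple lookup table; the labels are illustrative only and not taken from the study.

```python
# The five test conditions ("nal_nl2" denotes the standard NAL-NL2
# prescription, None means the stage is bypassed).
CONFIGS = {
    1: {"nr": "neural", "hlc": "neural",  "joint": False},
    2: {"nr": "neural", "hlc": "nal_nl2", "joint": False},
    3: {"nr": "neural", "hlc": "neural",  "joint": True},   # joint HLC+NR DNN
    4: {"nr": None,     "hlc": "neural",  "joint": False},
    5: {"nr": None,     "hlc": "nal_nl2", "joint": False},  # standard reference
}
```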
For HLC, the resulting signals were evaluated and compared to the National Acoustic Laboratories non-linear prescription, version 2 (NAL-NL2), using spectral measurements and relevant objective metrics such as the short-time objective intelligibility measure (STOI) and the hearing-aid speech perception index (HASPI). Comparisons of total loudness and loudness patterns are also presented.
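A minimal sketch of the objective evaluation step, assuming the open-source pystoi package for STOI; the signals below are random placeholders, and a HASPI score would be obtained analogously from a reference implementation (for example in the pyclarity toolkit), which is not shown here.

```python
# Evaluation sketch using the open-source pystoi package (pip install pystoi);
# the signals are random placeholders, not the study's material.
import numpy as np
from pystoi import stoi

fs = 16000                                              # assumed sample rate
rng = np.random.default_rng(0)
clean = rng.standard_normal(3 * fs)                     # placeholder clean speech
processed = clean + 0.1 * rng.standard_normal(3 * fs)   # placeholder HLC/NR output

score = stoi(clean, processed, fs, extended=False)      # 0..1, higher is better
print(f"STOI = {score:.3f}")
# HASPI additionally requires the listener's audiogram; a Python reference
# implementation exists in the pyclarity toolkit (assumption, not shown).
```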