SPIN2026: No bad apple!

P13, Session 1 (Monday 12 January 2026, 15:00-17:30)
Reverberation impairs neural stream segregation of concurrent speech

Anna-Lena Krause, Sabina Wiley
Department of Cognitive Neuroscience, Maastricht University, Netherlands

Lars Hausfeld
Department of Cognitive Neuroscience, Maastricht University, Netherlands
Maastricht Brain Imaging Centre, Netherlands

Background: In everyday acoustic settings, listeners must often attend to one speaker among several concurrent talkers. The auditory system relies on cues such as spatial location and pitch to segregate the target from the distractor stream. Although previous studies have examined these mechanisms using brief, anechoic speech stimuli, the neural dynamics supporting speech segregation in reverberant and naturalistic environments remain unclear.

Methods: Participants (N = 18) listened to 30-second audiobook segments containing two concurrent speakers while high-density EEG was recorded. Speech mixtures were systematically manipulated across three acoustic dimensions: reverberation (high vs. low), spatial separation (co-located vs. separated), and pitch separation (small vs. large), using simulated binaural room impulse responses. Behavioral responses assessing intelligibility and subjective difficulty were recorded. Neural tracking was assessed using multivariate temporal response functions (mTRFs) based on direct, reverberant, and combined speech envelope models.
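The neural-tracking analysis described above can be illustrated with a minimal sketch: a temporal response function (TRF) maps time-lagged copies of the speech envelope onto the EEG via ridge regression, and prediction accuracy is the correlation between predicted and recorded EEG. This is a generic single-channel illustration with simulated data, not the authors' pipeline; the lag range, ridge parameter, and sampling rate are arbitrary choices for the example.

```python
import numpy as np

def lag_matrix(stim, lags):
    """Design matrix whose columns are time-lagged copies of the stimulus."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stim[:n - lag]  # positive lags only in this sketch
    return X

def fit_trf(stim, eeg, lags, ridge=1.0):
    """Estimate TRF weights by ridge regression (regularised least squares)."""
    X = lag_matrix(stim, lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg)

# Simulate 30 s of data at 64 Hz: EEG = envelope convolved with a known TRF + noise.
rng = np.random.default_rng(0)
fs, dur = 64, 30
n = fs * dur
envelope = np.abs(rng.standard_normal(n))        # stand-in for a speech envelope
true_trf = np.zeros(16)
true_trf[4] = 1.0                                # single peak at lag 4 (~62 ms)
lags = range(16)
eeg = lag_matrix(envelope, lags) @ true_trf + 0.1 * rng.standard_normal(n)

# Fit the TRF and compute prediction accuracy (Pearson r), as in envelope tracking.
w = fit_trf(envelope, eeg, lags, ridge=10.0)
pred = lag_matrix(envelope, lags) @ w
r = np.corrcoef(pred, eeg)[0, 1]
```

In practice such models are fit per electrode with cross-validated regularisation (e.g. via the mTRF-Toolbox or MNE-Python's `ReceptiveField`), and separate direct, reverberant, and combined envelope regressors would each yield their own prediction accuracy.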

Results: Behaviorally, listeners’ intelligibility significantly declined under high reverberation (p < 10⁻⁸), while spatial and pitch separation exerted minimal effects. EEG prediction accuracy revealed significant main effects of both reverberation (p = .019; high < low) and model type (p < 10⁻¹⁰; combined, direct > reverberant), indicating reduced neural tracking in more reverberant environments. Examination of the combined mTRF model showed early fronto-central responses (30–130 ms in low, 30–180 ms in high reverberation) for both target and distractor speech, suggesting prolonged early auditory processing under degraded conditions. A later, left-lateralized attentional enhancement (210–270 ms) emerged for target versus distractor speech in the low-reverberation condition but disappeared when reverberation was high. For reverberant speech, two target-selective processing stages (90–160 ms and 200–260 ms) were present in low but not in high reverberation, indicating increased difficulty and disrupted stream segregation.

Conclusions: These findings suggest that under low reverberation, the auditory system can separate target and interfering speech more effectively, possibly supported by reverberant cues that aid stream segregation of direct and reverberant speech. In contrast, strong reverberation appears to act as an additional masker, limiting the auditory system’s ability to dissociate target and distractor streams and reducing overall speech intelligibility.

Last modified 2025-11-21 16:50:42