SPIN2026: No bad apple!

P43
Session 1 (Monday 12 January 2026, 15:00-17:30)
Speech-in-speech load triggers inattentional deafness

Franck Élisabeth, Guillaume Andéol, Véronique Chastres, Élodie Bayle
Institut de Recherche Biomédicale des Armées, Brétigny sur Orge, France

Isabelle Viaud-Delmon
CNRS, IRCAM, Sorbonne Université, Ministère de la Culture, Sciences et Technologies de la Musique et du son, STMS, Paris, France

Clara Suied
Institut de Recherche Biomédicale des Armées, Brétigny sur Orge, France

In multitalker scenes, tracking one speech stream while suppressing others taxes selective attention and working memory. We argue that this speech-in-speech load alone is sufficient to induce inattentional deafness, i.e., a failure to notice clearly audible, task-irrelevant sounds, even when those sounds are well above threshold. Here, we developed a new paradigm, which can be run online, to study inattentional deafness under auditory-only conditions. We combined a multi-speaker Coordinate Response Measure (CRM) task with an N-back task to keep listeners continuously engaged with the target speech while competing sentences played to the opposite ear. On the final trial, an unexpected, non-speech critical sound was presented at the same level as the sentences; awareness was probed only at the end of the experiment to capture inattentional deafness (one-shot paradigm). Listeners performed the speech task above chance across N levels, confirming engagement, yet still missed the critical sound in the majority of cases (78% misses on average). Spatial attention modulated detection, consistent with the idea that resource allocation to the attended stream shapes what reaches awareness from elsewhere in the scene. Overall, this pattern demonstrates that speech competition alone creates sufficient cognitive load to generate inattentional deafness.
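To make the trial structure concrete, the Python sketch below builds one block of dual-task trials: each trial pairs a CRM target with a competing sentence routed to the opposite ear and carries an N-back match flag, and the unexpected non-speech critical sound is attached only to the final trial. All names and parameters (Trial, build_block, the number of trials, the N-back level) are illustrative assumptions, not the authors' actual stimuli or implementation.

# Minimal sketch of the one-shot dual-task block described above.
# Names and parameters are illustrative assumptions, not the authors' code.
import random
from dataclasses import dataclass

COLORS = ["blue", "red", "green", "white"]
NUMBERS = list(range(1, 9))

@dataclass
class Trial:
    target_color: str              # CRM keywords in the attended ear
    target_number: int
    masker_color: str              # competing CRM sentence, opposite ear
    masker_number: int
    nback_match: bool              # does the target repeat the one N trials back?
    critical_sound: bool = False   # unexpected non-speech probe (final trial only)

def build_block(n_trials: int = 30, n_back: int = 2, seed: int = 0) -> list:
    """Build one block: continuous CRM + N-back engagement, with the
    unexpected critical sound appended only to the final trial."""
    rng = random.Random(seed)
    history, trials = [], []
    for i in range(n_trials):
        color, number = rng.choice(COLORS), rng.choice(NUMBERS)
        match = len(history) >= n_back and history[-n_back] == (color, number)
        trials.append(Trial(
            target_color=color,
            target_number=number,
            masker_color=rng.choice(COLORS),
            masker_number=rng.choice(NUMBERS),
            nback_match=match,
            critical_sound=(i == n_trials - 1),  # one-shot: last trial only
        ))
        history.append((color, number))
    return trials

if __name__ == "__main__":
    block = build_block()
    print(sum(t.critical_sound for t in block), "critical trial(s) out of", len(block))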
