P28 (Session 2, Tuesday 13 January 2026, 14:10-16:40)
Differences in multimodal communicative behavior of hard-of-hearing and hearing individuals in social and non-social background noise
Communication is inherently multimodal, requiring interlocutors to combine signals from the auditory and the visual domains to ensure mutual understanding, joint meaning-making, and ultimately communicative success. Hard-of-hearing individuals, who face difficulties in the auditory domain, might therefore rely more on visual communicative signals, such as gestures and facial expressions – especially in situations with loud background noise, which are perceived as particularly challenging. To date, it has not been empirically investigated how different forms of visual communication, such as sign-supported speech, gesturing, or lip reading, interact for hard-of-hearing people in dialogue situations with background noise. Furthermore, it is not well established whether the characteristics of their multimodal communicative behavior, i.e., the kinematic properties of their gestures and the acoustic properties of their speech, interact with each other and differ from those of hearing individuals conversing in the same situations.
In the present study, we analyze video and audio recordings of conversations within hard-of-hearing dyads and within hearing dyads, who engaged in three rounds of conversation (free dialogue, joint decision-making task, director-matcher task) while being exposed to varying background noise (no noise, social noise, non-social noise) in the lab. For each gesture instance per participant, we calculate the following kinematic features: maximum height, volume, sub-movements, rhythmicity, peak velocity, maximum distance from the body, and use of McNeillian space. For each speech instance per participant, we also calculate intensity and pitch. Using linear discriminant analysis (LDA), we investigate whether differences in these kinematic and acoustic features are large enough to reliably classify the hearing status of participants (hard-of-hearing vs. hearing), as well as the type of background noise. Further, we investigate whether there is an interaction effect between the two.
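To make the classification step concrete, the following is a minimal sketch, not the authors' code, of an LDA classification of hearing status from per-gesture kinematic features using scikit-learn; the file name, column names, and the participant-wise cross-validation scheme are illustrative assumptions rather than details reported in the abstract.

```python
# Hedged sketch: classifying hearing status from per-gesture kinematic features
# with LDA. Assumes a tidy table in which each row is one gesture instance;
# all names below are illustrative, not taken from the study materials.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GroupKFold, cross_val_score

df = pd.read_csv("gesture_features.csv")  # assumed file name
kinematic_cols = [
    "max_height", "volume", "n_submovements", "rhythmicity",
    "peak_velocity", "max_distance_from_body", "mcneillian_space",
]
X = df[kinematic_cols].to_numpy()
y = df["hearing_status"].to_numpy()        # "hard_of_hearing" vs. "hearing"
groups = df["participant_id"].to_numpy()   # keep each participant in one fold

# Cross-validate by participant so that gestures from the same person never
# appear in both the training and the test fold.
lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, groups=groups, cv=GroupKFold(n_splits=5))
print(f"Mean classification accuracy: {scores.mean():.2f}")
```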
A preliminary LDA based on the kinematic features classified hard-of-hearing and hearing participants with an accuracy of 80%, indicating that the gesture behavior of the two groups does indeed differ. The role of speech, and potential modulations of multimodal communication by background noise type, remain to be determined. Future analyses will also examine whether kinematic and acoustic characteristics are predictive of communicative success, which we operationalize as a combination of self-report measures (questionnaires) and task measures (accuracy and reaction time).
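As a pointer to how the speech side of the analysis might look, here is a hedged sketch of extracting the two acoustic features named in the methods, intensity and pitch, for a single speech instance; the use of librosa, the file name, and the pitch range are assumptions, not details taken from the abstract.

```python
# Hedged sketch: frame-wise intensity and pitch for one speech instance.
# Tooling and parameters are illustrative assumptions.
import numpy as np
import librosa

y, sr = librosa.load("speech_instance_01.wav", sr=None)  # assumed file name

# Intensity: frame-wise RMS energy, converted to dB.
rms = librosa.feature.rms(y=y)[0]
intensity_db = librosa.amplitude_to_db(rms, ref=1.0)

# Pitch: fundamental frequency via probabilistic YIN; unvoiced frames are NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)

print(f"Mean intensity: {intensity_db.mean():.1f} dB")
print(f"Mean pitch (voiced frames): {np.nanmean(f0):.1f} Hz")
```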
With this research, we shed new light on the differences in communicative behavior between hearing and hard-of-hearing people in challenging listening conditions. By identifying which multimodal communicative signals differentiate the two groups most clearly, we ultimately aim to develop informed recommendations for creating a more inclusive, pleasant, and welcoming communicative experience in situations with background noise.