Abstract
We used auditory and visual masks to investigate how the availability of speech signals governs speech perception. Stimuli were videos of a talker uttering sentences. The auditory mask was speech-shaped noise; the visual mask was a circular patch obscuring the talker's mouth region. Auditory signals were quantified by the glimpse proportion (GP) and visual signals by visual entropy (VE), a measure based on visual change. For speech identification, auditory stimuli mixed with the noise at -3 dB SNR were paired with the talker's static or moving face (full vs. masked). Speech identification was more accurate with the moving face (a visual benefit), and the benefit was greater for the full face than for the masked face. The correlation between GP and speech identification scores was highest in the static-face condition. The visual benefit was correlated with VE, but only when VE correlated highly with the mid-frequency energy of the auditory speech signal.
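The abstract names three quantitative steps: mixing speech with noise at -3 dB SNR, computing the glimpse proportion, and computing visual entropy. The sketch below illustrates one plausible reading of each, assuming the common glimpse-proportion definition (fraction of spectro-temporal cells whose local SNR exceeds a threshold) and a hypothetical frame-difference entropy for VE; the paper's exact formulations are not reproduced here, and the threshold and bin count are assumed values.

```python
# Minimal sketch of the measures named in the abstract (not the authors'
# implementation). Assumptions: a 3 dB local-SNR glimpse threshold and a
# frame-difference histogram entropy for VE.
import numpy as np

def mix_at_snr(speech, noise, snr_db=-3.0):
    """Scale the noise so the speech+noise mixture has the requested SNR."""
    ps = np.mean(speech ** 2)            # speech power
    pn = np.mean(noise ** 2)             # noise power
    gain = np.sqrt(ps / (pn * 10 ** (snr_db / 10.0)))
    return speech + gain * noise

def glimpse_proportion(speech_spec, noise_spec, threshold_db=3.0):
    """Fraction of time-frequency cells whose local SNR exceeds threshold_db.

    speech_spec, noise_spec: magnitude spectrograms (freq x time) of the
    clean speech and the noise. threshold_db is an assumed value.
    """
    eps = 1e-12                          # avoid division by zero / log(0)
    local_snr = 20.0 * np.log10((speech_spec + eps) / (noise_spec + eps))
    return float(np.mean(local_snr > threshold_db))

def visual_entropy(frames, bins=32):
    """Hypothetical VE: Shannon entropy of pixel-wise frame differences.

    frames: array (n_frames, height, width) of grayscale video frames.
    """
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0)).ravel()
    hist, _ = np.histogram(diffs, bins=bins)
    p = hist / max(hist.sum(), 1)        # normalize to a probability mass
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```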
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015), 10-14 August 2015, Glasgow, Scotland, UK |
| Publisher | University of Glasgow |
| Number of pages | 5 |
| ISBN (Print) | 9780852619414 |
| Publication status | Published - 2015 |
| Event | International Congress of Phonetic Sciences |
| Duration | 10 Aug 2015 → … |
Conference
| Conference | International Congress of Phonetic Sciences |
|---|---|
| Period | 10/08/15 → … |
Keywords
- speech perception
- auditory perception
- visual perception