Abstract
Speech communication is multimodal by nature. It is well known that hearing people use both auditory and visual information for speech perception (Reisberg et al. 1987). For deaf people, visual speech constitutes the main speech modality. Listeners with hearing loss who have been orally educated typically rely heavily on speechreading, based on visual information from the lips and face. However, lipreading alone is not sufficient, because many speech units share similar visual lip shapes. Indeed, even the best speechreaders identify no more than 50 percent of phonemes in nonsense syllables (Owens and Blazek 1985) or in words or sentences (Bernstein et al. 2000). This chapter deals with Cued Speech, a manual augmentation of the visual information provided by lipreading.
Original language | English |
---|---|
Title of host publication | Audiovisual Speech Processing |
Editors | Gérard Bailly, Pascal Perrier, Eric Vatikiotis-Bateson |
Place of Publication | U.K. |
Publisher | Cambridge University Press |
Pages | 104-120 |
Number of pages | 17 |
ISBN (Print) | 9781107006829 |
Publication status | Published - 2012 |
Keywords
- Cued Speech
- auditory perception
- deaf
- lipreading
- means of communication
- speech perception
- visual perception