Temporal organization of Cued Speech production

D. Beautemps, M.-A. Cathiard, V. Attina, C. Savariaux

    Research output: Chapter in Book / Conference Paper › Chapter

    Abstract

    Speech communication is multimodal by nature. It is well known that hearing people use both auditory and visual information for speech perception (Reisberg et al. 1987). For deaf people, visual speech constitutes the main speech modality. Listeners with hearing loss who have been orally educated typically rely heavily on speechreading based on lip and facial visual information. However, lipreading alone is not sufficient, because distinct speech units often share similar visual lip shapes. Indeed, even the best speechreaders identify no more than 50 percent of phonemes in nonsense syllables (Owens and Blazek 1985) or in words or sentences (Bernstein et al. 2000). This chapter deals with Cued Speech, a manual augmentation of the visual information available through lipreading.
    Original language: English
    Title of host publication: Audiovisual Speech Processing
    Editors: Gérard Bailly, Pascal Perrier, Eric Vatikiotis-Bateson
    Place of publication: U.K.
    Publisher: Cambridge University Press
    Pages: 104-120
    Number of pages: 17
    ISBN (Print): 9781107006829
    Publication status: Published - 2012

    Keywords

    • Cued Speech
    • auditory perception
    • deaf
    • lipreading
    • means of communication
    • speech perception
    • visual perception
