Audio-visual speech perception off the top of the head

    Research output: Contribution to journal › Article

    Abstract

    The study examined whether people can extract speech-related information from the talker's upper face, presented using either normally textured videos (Experiments 1 and 3) or videos showing only the outline of the head (Experiments 2 and 4). Experiments 1 and 2 used within- and cross-modal matching tasks. In the within-modal task, observers were presented with two pairs of short silent video clips showing the top part of a talker's head. In the cross-modal task, pairs of audio and silent video clips were presented. The task was to determine the pair in which the talker said the same sentence. Performance on both tasks was better than chance for the outline as well as the textured presentation, suggesting that judgments were primarily based on head movements. Experiments 3 and 4 tested whether observing the talker's upper face would help identify speech in noise. The results showed that viewing the talker's moving upper head produced a small but reliable improvement in speech intelligibility; however, this effect was reliable only for the expressive sentences, which involved greater head movements. The results suggest that people are sensitive to speech-related head movements that extend beyond the mouth area and can use these to assist in language processing.
    Original language: English
    Number of pages: 11
    Journal: Cognition
    Publication status: Published - 2006

    Keywords

    • Head
    • Movements
    • Speech perception
    • Visual perception

