Motherese by eye and ear: infants perceive visual prosody in point-line displays of talking heads

Christine Kitamura, Bahia Guellai, Jeesun Kim

    Research output: Contribution to journal › Article › peer-review


    Abstract

    Infant-directed (ID) speech provides exaggerated auditory and visual prosodic cues. Here we investigated whether infants are sensitive to the match between the auditory and visual correlates of ID speech prosody. We presented 8-month-old infants with two silent line-joined point-light displays of faces speaking different ID sentences, together with a single vocal-only sentence matched to one of the displays. Infants looked longer to the matched than the mismatched visual signal when full-spectrum speech was presented, and also when the vocal signals contained speech low-pass filtered at 400 Hz. When the visual display was separated into rigid (head only) and non-rigid (face only) motion, infants looked longer to the visual match in the rigid condition, and to the visual mismatch in the non-rigid condition. Overall, the results suggest that 8-month-olds can extract information about the prosodic structure of speech from voice and head kinematics and are sensitive to their match, and that they are less sensitive to the match between lip and voice information in connected speech.
    Original language: English
    Article number: e111467
    Number of pages: 8
    Journal: PLoS One
    Volume: 9
    Issue number: 10
    DOIs
    Publication status: Published - 2014

    Open Access - Access Right Statement

    © 2014 Kitamura et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
