Perceiving visual prosody from point-light displays

Erin Cvejic, Jeesun Kim, Chris Davis

Research output: Chapter in Book / Conference Paper › Conference Paper › peer-review

1 Citation (Scopus)

Abstract

This study examined the perception of linguistic prosody from augmented point-light displays derived from motion tracking of six talkers producing different prosodic contrasts. In Experiment 1, we determined perceivers' ability to use these abstract visual displays to match prosody across modalities (audio to video) when the non-matching visual display was segmentally identical and differed only in prosody. The results showed that perceivers were able to match the auditory speech to these limited face-motion prosodic displays at better than chance levels; however, performance varied greatly across the stimuli of different talkers. A subjective perceptual rating task (Experiment 2) demonstrated that variation across talkers in the acoustic realization of prosodic contrasts may account for some of this difference; however, a combination of the salience of acoustic and visual prosodic cues is likely to be driving matching performance.
Original language: English
Title of host publication: Proceedings of the International Conference on Audio-Visual Speech Processing (AVSP2011), Aug 31 - Sep 3, 2011, Volterra, Italy
Publisher: KTH, Computer Science and Communication
Pages: 15-20
Number of pages: 6
ISBN (Print): 9789175010809
Publication status: Published - 2011
Event: International Conference on Audio-Visual Speech Processing
Duration: 31 Aug 2011 → …

Conference

Conference: International Conference on Audio-Visual Speech Processing
Period: 31/08/11 → …

