Abstract
Visual cues to speech prosody are available from a speaker’s face; however, the form and/or location of such cues are likely to be inconsistent across speakers. Given this, the question arises as to whether such cues are general enough to signal the same prosody information across speakers, and if so, where and what these cues are. To investigate this, this study used visual-visual and auditory-visual matching tasks requiring participants to select pairs of stimuli that were produced with the same prosody within- and across-speakers when visual information was limited to the upper or lower face. Experiment 1 tested within-speaker prosody matching when the speaker's lower face was presented. The results showed highly accurate matching performance. Taken together with the results of our previous study, which presented the upper face in the same tasks [Cvejic, Kim & Davis, 2010, Speech Commun. 52, 555-564], these data provided a baseline against which to evaluate cross-speaker prosody matching (Experiment 2). In Experiment 2, both lower and upper face stimuli were presented. In comparison to within-speaker matching, performance was lower for cross-speaker matching but still greater than chance. Overall, the results suggest that both the upper and lower face provide general non-speaker-specific as well as speaker-specific visual cues to prosody.
Original language | English |
---|---|
Title of host publication | Speech Prosody 2010: Proceedings of the 5th International Conference on Speech Prosody, Doubletree Hotel Magnificent Mile, Chicago, May 11-14, 2010 |
Publisher | Creative Commons |
Number of pages | 4 |
Publication status | Published - 2010 |
Event | International Conference on Speech Prosody - Duration: 11 May 2010 → 14 May 2010 |
Conference
Conference | International Conference on Speech Prosody |
---|---|
Period | 11/05/10 → 14/05/10 |
Keywords
- speech prosody
- auditory-visual speech perception