Audiovisual and lexical cues do not additively enhance perceptual adaptation

Shruti Ullas, Elia Formisano, Frank Eisner, Anne Cutler

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

When listeners experience difficulty in understanding a speaker, lexical and audiovisual (or lipreading) information can be a helpful source of guidance. These two types of information embedded in speech can also guide perceptual adjustment, also known as recalibration or perceptual retuning: listeners can use these contextual cues to temporarily or permanently reconfigure internal representations of phoneme categories, allowing them to adjust to and understand novel interlocutors more easily. These two types of perceptual learning, previously investigated largely in isolation, are highly similar in allowing listeners to use speech-external information to make phoneme boundary adjustments. This study explored whether the two sources may work in conjunction to induce adaptation, thus emulating real life, in which listeners are indeed likely to encounter both types of cue together. Listeners who received combined audiovisual and lexical cues showed perceptual learning effects similar to listeners who received only audiovisual cues, while listeners who received only lexical cues showed weaker effects than the two other groups. The combination of cues did not lead to additive retuning or recalibration effects, suggesting that lexical and audiovisual cues operate differently with regard to how listeners use them for reshaping perceptual categories. Reaction times did not significantly differ across the three conditions, so none of the forms of adjustment was either aided or hindered by processing time differences. Mechanisms underlying these forms of perceptual learning may diverge in numerous ways despite similarities in experimental applications.
Original language: English
Pages (from-to): 707-715
Number of pages: 9
Journal: Psychonomic Bulletin and Review
Volume: 27
Issue number: 4
DOIs
Publication status: Published - 1 Aug 2020

Bibliographical note

Publisher Copyright:
© 2020, The Author(s).

Open Access - Access Right Statement

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Keywords

  • audio-visual aids
  • lipreading
  • visual perception

