Lexically guided retuning of visual phonetic categories

Patrick van der Zande, Alexandra Jesse, Anne Cutler

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Listeners retune the boundaries between phonetic categories to adjust to individual speakers' productions. Lexical information, for example, indicates what an unusual sound is supposed to be, and boundary retuning then enables the speaker's sound to be included in the appropriate auditory phonetic category. This study investigated whether lexical knowledge that is known to guide the retuning of auditory phonetic categories can also retune visual phonetic categories. In Experiment 1, exposure to a visual idiosyncrasy in ambiguous audiovisually presented target words in a lexical decision task indeed resulted in retuning of the visual category boundary based on the disambiguating lexical context. Experiment 2 tested whether lexical information retunes visual categories directly, or indirectly through generalization from retuned auditory phonetic categories. Here, participants were exposed to auditory-only versions of the same ambiguous target words as in Experiment 1. Auditory phonetic categories were retuned by lexical knowledge, but no shifts were observed for the visual phonetic categories. Lexical knowledge can therefore guide retuning of visual phonetic categories, but lexically guided retuning of auditory phonetic categories does not generalize to visual categories. Rather, listeners adjust auditory and visual phonetic categories to talker idiosyncrasies separately.
    Original language: English
    Pages (from-to): 562-571
    Number of pages: 10
    Journal: Journal of the Acoustical Society of America
    Volume: 134
    Issue number: 1
    Publication status: Published - 2013
