Using EEG and stimulus context to probe the modelling of auditory-visual speech

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

We investigated whether internal models of the relationship between lip movements and the corresponding speech sounds [Auditory-Visual (AV) speech] could be updated via experience. AV associations were indexed by early and late event-related potentials (ERPs) and by oscillatory power and phase locking. Different AV experience was produced via a context manipulation. Participants were presented with valid (the conventional pairing) and invalid AV speech items in either a 'reliable' context (80% AV-valid items) or an 'unreliable' context (80% AV-invalid items). The results showed that, in the reliable context, there was N1 facilitation for AV compared to auditory-only speech. This N1 facilitation was not affected by AV validity. Later ERPs showed a difference in amplitude between valid and invalid AV speech, and there was a significant enhancement of power for valid versus invalid AV speech. These response patterns did not change across the context manipulation, suggesting that the internal models of AV speech were not updated by experience. The results also showed that the facilitation of N1 responses did not vary as a function of the salience of the visual speech (as previously reported); post-hoc analyses instead indicated that N1 facilitation varied according to the relative time of the acoustic onset, suggesting that, for AV events, the N1 may be more sensitive to AV timing relationships than to the form of the visual speech.
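The abstract names the EEG measures (ERPs such as the N1, oscillatory power, and phase locking) but not an analysis pipeline. The following is a minimal sketch, assuming an MNE-Python workflow; the file name, trigger codes, N1 window, and frequency range are illustrative assumptions and not taken from the paper.

```python
# Hedged sketch: ERP (N1) and time-frequency power / inter-trial phase
# locking for AV speech conditions. Not the authors' pipeline; all file
# names, event codes, and analysis windows below are assumptions.
import numpy as np
import mne

# Hypothetical continuous EEG recording and trigger codes.
raw = mne.io.read_raw_fif("av_speech_raw.fif", preload=True)
events = mne.find_events(raw)
event_id = {"AV_valid": 1, "AV_invalid": 2, "A_only": 3}

# Epoch around acoustic onset with a pre-stimulus baseline.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.6,
                    baseline=(None, 0), preload=True)

# ERPs: condition averages, then mean amplitude in an assumed N1 window
# (roughly 80-150 ms after acoustic onset).
evoked_av = epochs["AV_valid"].average()
evoked_a = epochs["A_only"].average()
n1_av = evoked_av.copy().crop(0.08, 0.15).data.mean()
n1_a = evoked_a.copy().crop(0.08, 0.15).data.mean()
print(f"N1 mean amplitude, AV: {n1_av:.2e} V vs A-only: {n1_a:.2e} V")

# Oscillatory power and inter-trial phase locking (ITC) via Morlet
# wavelets over an assumed 4-30 Hz range.
freqs = np.arange(4, 31, 1)
n_cycles = freqs / 2.0
power, itc = mne.time_frequency.tfr_morlet(
    epochs["AV_valid"], freqs=freqs, n_cycles=n_cycles,
    return_itc=True, average=True)
```

Comparing `power` and `itc` objects computed per condition (e.g. AV_valid versus AV_invalid) would correspond to the valid-versus-invalid power and phase-locking contrasts described in the abstract.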
Original language: English
Pages (from-to): 220-230
Number of pages: 11
Journal: Cortex
Volume: 75
DOIs
Publication status: Published - 2016

Keywords

  • audiovisual
  • oscillations
  • speech

