An MEG study of auditory-visual speech processing
Single Paper Presentation
Chris Davis
MARCS Auditory Laboratories, University of Western Sydney
Daniel Kislyuk
Laboratory of Computational Engineering, Helsinki University of Technology, Finland
Mikko Sams
Laboratory of Computational Engineering, Helsinki University of Technology, Finland
Jeesun Kim
MARCS Auditory Laboratories, University of Western Sydney
Abstract ID Number: 29
Full text: Not available
Last modified: March 4, 2007
Presentation date: 07/05/2007 8:40 AM in Quad General Lecture Theatre
(View Schedule)
Abstract
A recent ERP study (Van Wassenhove, Grant & Poeppel, 2005) reported that activity in the auditory cortices is suppressed when a hearer concurrently sees the talker speaking (visual speech). This suppression occurred 50-200 ms after stimulus onset, indicating early integration of audiovisual (AV) speech. However, the AV suppression occurred whether or not the auditory and visual speech matched, suggesting that the effect might not be speech-specific. In this study we recorded neuromagnetic responses to Auditory Only (AO) and AV speech, and to matched non-speech auditory stimuli, with a 306-channel whole-scalp neuromagnetometer (MEG) while participants performed a speech detection task. Evoked auditory responses in the AO speech condition, peaking around 100 ms (N100m), were modeled as Equivalent Current Dipoles (ECDs). MEG allowed AO and AV activation to be determined separately for the left and right auditory cortices. Behavioural data showed an AV advantage in reaction time and hit rate for both the speech and non-speech stimuli. The MEG data showed that the N100m auditory ECD was reduced in the AV condition for both the speech and non-speech stimuli (and equally in the left and right hemispheres). This shows that visual speech suppresses the response of the auditory cortex irrespective of whether the auditory stimuli are speech or not.