Spatio-temporal dynamics of multisensory speech processing: An investigation with fMRI, MEG and intracranial EEG
Single Paper Presentation
Thomas Thesen
Department of Neurology, New York University
Peter Hansen
University of Oxford
Rick Reale
University of Wisconsin
Ian Holliday
University of Aston
John Brugge
University of Wisconsin
Ruth Campbell
University College London
Robert Osterbauer
University of Oxford
Krish Singh
University of Aston
Matthew Howard
University of Iowa
Hiroto Kawasaki
University of Iowa
Hiroyuki Oya
University of Iowa
Gemma Calvert
University of Bath
Abstract ID Number: 201
Full text: Not available
Last modified: May 21, 2006
Presentation date: 06/20/2006 8:30 AM in Hamilton Building, McNeil Theatre
Abstract
A long-standing debate in the field of multisensory speech perception concerns the level of processing at which the auditory and visual sensory streams converge. We investigated the spatio-temporal dynamics of natural audio-visual (AV) speech perception using a paradigm that combined multiple neuroimaging techniques: functional MRI (fMRI), magnetoencephalography (MEG) and intracranial electroencephalography (iEEG). The high spatial specificity and resolution of fMRI were used to localize cortical areas involved in AV speech perception and integration. The fMRI results were then used to constrain the inverse solution of the MEG source model, yielding millisecond timing information about activity in these areas during multisensory integration. After identifying multisensory integration sites and their time courses in the posterior superior temporal lobe, we focused on this area using the high temporal and spatial resolution of intracranial EEG. Consistent results across all imaging modalities support the involvement of primary and secondary auditory cortex in the integration of AV speech. Moreover, evoked fields/potentials and frequency analyses of the MEG and iEEG data suggest that visual speech influences auditory cortex at both the earliest and later stages of cortical processing of natural AV speech signals. A novel spatio-temporal model of AV speech perception is introduced.
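The fMRI-constrained MEG inverse solution described above is commonly realized as a weighted minimum-norm estimate in which fMRI-active source locations receive a larger prior variance than the rest of the source space. The abstract gives no implementation details, so the NumPy sketch below is illustrative only: the function name, the 90%/10% prior weighting, and the SNR-based regularization are assumptions, not the authors' method.

```python
import numpy as np

def fmri_weighted_mne(meg_data, gain, fmri_active, noise_cov,
                      off_weight=0.1, snr=3.0):
    """Minimal sketch of an fMRI-weighted minimum-norm estimate.

    meg_data    : (n_sensors, n_times) measured MEG fields
    gain        : (n_sensors, n_sources) forward (lead-field) matrix
    fmri_active : (n_sources,) boolean mask of fMRI-defined sources
    noise_cov   : (n_sensors, n_sensors) sensor noise covariance
    off_weight  : prior variance for sources outside the fMRI mask
                  (0.1 mirrors a common 90%/10% weighting; an
                  assumption, not a value reported in the abstract)
    snr         : assumed amplitude SNR used for regularization
    """
    # Diagonal source covariance R: fMRI-active sources keep full
    # prior variance, all other sources a reduced one.
    r = np.where(fmri_active, 1.0, off_weight)
    R = np.diag(r)
    # Regularization parameter from the assumed SNR.
    lam2 = 1.0 / snr**2
    # Weighted minimum-norm inverse operator:
    #   W = R G^T (G R G^T + lam2 * C)^-1
    grg = gain @ R @ gain.T
    inv_op = R @ gain.T @ np.linalg.inv(grg + lam2 * noise_cov)
    # Source time courses at MEG temporal (millisecond) resolution.
    return inv_op @ meg_data
```

Restricting or up-weighting the prior in this way is what lets the fMRI localization lend its spatial specificity to the MEG estimate while the MEG data supply the millisecond time courses.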