A combined ECoG, MEG and fMRI investigation of audio-visual speech
Poster Presentation
Gemma Calvert
Psychology, University of Bath
Thomas Thesen
Psychology, University of Bath
Krish Singh
Neurosciences Research Institute, University of Aston
Peter Hansen
University Laboratory of Physiology, University of Oxford
Ian Holliday
Department of Psychology, University of Aston
Abstract ID Number: 129
Abstract
Behavioural studies have shown that the audible and visible components of speech combine seamlessly to enhance speech comprehension. In a series of fMRI studies, we previously reported multisensory interactions in auditory and visual cortex, as well as the superior temporal sulcus (STS), during bimodal speech perception, in which the response to audio-visual speech exceeded the sum of the responses to each sensory channel alone. The limited temporal resolution of fMRI, however, precluded determination of the time course (and hence the direction of information flow) of these interactions. By adopting a multi-technique approach (intracranial EEG, MEG and fMRI) with a single audio-visual speech paradigm, we have exploited the complementary temporal and spatial resolutions of these methods to reveal the time course of the multisensory interactions with high spatial precision. In addition, we have characterised the changes in cortical oscillatory power associated with the perception of audio-visual speech.
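The superadditivity criterion described in the abstract (the bimodal response exceeding the sum of the unimodal responses, AV > A + V) can be illustrated with a minimal sketch. This is not the authors' analysis pipeline; the function name, array names, margin parameter, and example values are assumptions introduced purely for illustration.

```python
import numpy as np

def superadditive_mask(resp_a, resp_v, resp_av, margin=0.0):
    """Flag voxels/sensors whose audio-visual (AV) response exceeds
    the sum of the unimodal auditory (A) and visual (V) responses.

    resp_a, resp_v, resp_av : arrays of response amplitudes
    (e.g. fMRI percent signal change), one value per voxel/sensor.
    margin : optional slack so negligible numerical excesses don't count.
    """
    resp_a, resp_v, resp_av = map(np.asarray, (resp_a, resp_v, resp_av))
    # Superadditivity criterion: AV > A + V, evaluated element-wise.
    return resp_av > (resp_a + resp_v + margin)

# Hypothetical values for three regions (e.g. auditory cortex, visual cortex, STS):
a  = np.array([1.2, 0.3, 0.8])   # auditory-only response
v  = np.array([0.2, 1.1, 0.7])   # visual-only response
av = np.array([1.9, 1.6, 2.1])   # audio-visual response
print(superadditive_mask(a, v, av))  # [ True  True  True ]
```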