A combined ECoG, MEG and fMRI investigation of audio-visual speech

Gemma Calvert, Psychology, University of Bath

Abstract
Behavioural studies have shown that the audible and visible components of speech combine seamlessly to enhance speech comprehension. In a series of fMRI studies, we previously reported multisensory interactions in auditory and visual cortex, as well as in the superior temporal sulcus (STS), during bimodal speech perception, in which the bimodal response exceeded the summed responses to each sensory channel alone. The temporal resolution of fMRI, however, precluded determination of the time course, and hence the direction of information flow, of these interactions. By applying a multi-technique approach (intracranial EEG, MEG and fMRI) to a single audio-visual speech paradigm, we have been able to exploit the superior temporal resolution of the electrophysiological methods and the superior spatial resolution of fMRI to reveal the time course of these multisensory interactions with high spatial precision. In addition, we have characterised the changes in cortical oscillatory power associated with the perception of audio-visual speech.
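The multisensory interaction criterion mentioned above, a bimodal response exceeding the sum of the two unimodal responses (AV > A + V, sometimes called superadditivity), can be illustrated with a minimal sketch. The array names and synthetic values below are assumptions for illustration only, not the study's actual analysis pipeline.

import numpy as np

# Hypothetical per-voxel response amplitudes (e.g., fMRI beta weights) for
# auditory-only (A), visual-only (V), and audio-visual (AV) speech conditions.
rng = np.random.default_rng(0)
n_voxels = 1000
beta_a = rng.normal(1.0, 0.3, n_voxels)   # auditory-only responses
beta_v = rng.normal(0.8, 0.3, n_voxels)   # visual-only responses
beta_av = beta_a + beta_v + rng.normal(0.2, 0.3, n_voxels)  # bimodal responses

# Superadditivity criterion: flag a voxel as showing a multisensory
# interaction when its bimodal response exceeds the summed unimodal
# responses (AV > A + V).
superadditive = beta_av > (beta_a + beta_v)
print(f"{superadditive.mean():.1%} of voxels exceed the A + V sum")

In practice such a test would be applied to condition estimates from a fitted model rather than raw amplitudes, but the comparison itself is exactly this inequality evaluated per voxel (or per sensor/source for MEG and ECoG).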
