Cross-modal interactions of audio-visual information defined by time and space: A functional neuroimaging study
Poster Presentation
*Shahin Zangenehpour
Neuropsychology/Cognitive Neuroscience Unit, Montreal Neurological Institute, Montreal, Canada
Robert J Zatorre
Neuropsychology/Cognitive Neuroscience Unit, Montreal Neurological Institute, Montreal, Canada
Abstract ID Number: 133
Last modified: June 25, 2005
Abstract
The principles underlying cross-modal integration of auditory-visual information have been explored in several animal models using neurophysiological and behavioural approaches. However, cross-modal integration remains poorly understood, particularly in the human nervous system. We used a speaker-LED array that fits inside a positron emission tomography (PET) scanner to study the behavioural and functional aspects of audiovisual interactions in the human brain. We designed stimuli composed of five noise bursts and five flashes of light, each lasting 2 seconds in total, with randomly variable duration of each of the five elements. We parametrically varied the temporal asynchrony and spatial disparity of the audio-visual components of these bimodal stimuli either independently or simultaneously. Here, we present results from an ongoing study of audio-visual interactions defined by spatiotemporal parameters at the behavioural and functional neuroimaging levels. We will discuss the neural correlates of audio-visual integration involved in the processing of these bimodal stimuli. We will also discuss the role of spatial and temporal congruity, and whether or not those aspects of signal processing depend on dissociable neural systems.
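The stimulus design described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' actual stimulus-generation software): it draws five random burst durations that sum to a fixed 2-second total, then pairs auditory and visual onsets with a parametric temporal asynchrony and a scalar spatial-disparity placeholder. All function names, the minimum-duration constraint, and the parameter values are assumptions for illustration only.

```python
import random

def burst_durations(n_bursts=5, total_s=2.0, min_s=0.1, seed=None):
    """Draw n_bursts random durations summing to total_s seconds.

    Each burst gets at least min_s; the remaining slack is split at
    random cut points (min_s and the cut-point scheme are illustrative
    assumptions, not parameters reported in the abstract).
    """
    rng = random.Random(seed)
    slack = total_s - n_bursts * min_s
    cuts = sorted(rng.uniform(0, slack) for _ in range(n_bursts - 1))
    # Differences between consecutive cut points partition the slack.
    parts = [a - b for a, b in zip(cuts + [slack], [0.0] + cuts)]
    return [min_s + p for p in parts]

def bimodal_onsets(durations, asynchrony_s=0.0, disparity_deg=0.0):
    """Pair auditory and visual onset times for each burst/flash.

    The visual stream is shifted by asynchrony_s and displaced by
    disparity_deg, a scalar stand-in for speaker/LED positions in
    the array; varying these independently or together mirrors the
    parametric manipulation described in the abstract.
    """
    onsets, t = [], 0.0
    for d in durations:
        onsets.append({"audio_s": t,
                       "visual_s": t + asynchrony_s,
                       "visual_offset_deg": disparity_deg})
        t += d
    return onsets

durs = burst_durations(seed=1)
trial = bimodal_onsets(durs, asynchrony_s=0.25, disparity_deg=15.0)
```

The total trial length stays fixed at 2 seconds while the five element durations vary randomly, which is the property the abstract's design requires.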