Audiovisual integration during object recognition: an ER-fMRI study
Poster Presentation
Alexandra Fort, INSERM U280
Peter C. Hansen, University Laboratory of Physiology, University of Oxford
Thomas Thesen, University Laboratory of Physiology, University of Oxford
Gemma A. Calvert, Department of Psychology, University of Bath

Abstract ID Number: 32
Full text: Not available
Last modified: March 11, 2005
Abstract
Behavioural studies have shown that correspondence in time is a key factor in determining whether two or more sensory cues will be perceived as emanating from a common object. By manipulating the temporal onset of audiovisual objects in an event-related fMRI study, we aimed to elucidate the brain areas responsible for the detection of crossmodal synchrony during an object recognition task. Subjects were instructed to determine on each trial whether an auditory, visual or audiovisual stimulus corresponded to one of two previously learned objects. In the audiovisual condition, the visual component could be presented either simultaneously with the auditory component (AV) or with a delay of 300 ms (AV-asy). Comparison of the AV condition against the combined sum of the two unimodal conditions failed to reveal any brain areas exhibiting a superadditive response, consistent with previous event-related imaging studies of crossmodal object integration (e.g. Beauchamp et al., 2004). However, areas where the AV response exceeded either the A or V condition alone included the superior colliculus and both auditory and visual cortices. These areas also formed part of the network exhibiting a stronger response for AV vs AV-asy, which additionally included the SMA, pulvinar and sensorimotor cortex adjacent to the central sulcus. The results of this study implicate a network of multisensory integration sites that operate at both the input (sensory) and output (motor) stages of crossmodal object recognition.
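The two voxel-wise contrasts described in the abstract can be sketched as simple numerical criteria. This is an illustrative sketch only, not the study's actual analysis pipeline; the function names and beta values below are assumptions for demonstration:

```python
# Hypothetical illustration of the two voxel-wise criteria contrasted in the
# abstract, applied to made-up response (beta) estimates for one voxel.

def superadditive(beta_av, beta_a, beta_v):
    """Superadditivity criterion: the AV response exceeds the SUM of the
    two unimodal responses (the contrast that revealed no areas here)."""
    return beta_av > beta_a + beta_v

def max_criterion(beta_av, beta_a, beta_v):
    """Max criterion: the AV response exceeds the LARGER of the two
    unimodal responses (the weaker contrast under which the superior
    colliculus and sensory cortices emerged)."""
    return beta_av > max(beta_a, beta_v)

# Made-up beta estimates: AV beats either unimodal response alone,
# but not their sum.
beta_a, beta_v, beta_av = 1.0, 0.8, 1.5
print(superadditive(beta_av, beta_a, beta_v))  # False (1.5 is not > 1.8)
print(max_criterion(beta_av, beta_a, beta_v))  # True  (1.5 > 1.0)
```

The sketch makes concrete why the max criterion can identify candidate integration sites even when the stricter superadditivity test fails, which is the pattern the abstract reports.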