Deciding when not to integrate: an investigation of the spatiotemporal limits of auditory-visual integration
Neil Roach, Visual Neuroscience Group, School of Psychology, The University of Nottingham, University Park, Nottingham NG7 2RD UK
Abstract
When a particular stimulus property can be encoded by more than one sensory system, combining estimates from different modalities provides an effective means of noise reduction. However, these benefits only apply if the estimates being integrated relate to a common source. In contrast, integrating information associated with independent objects or events has the potential to be highly disadvantageous. To minimise mismatches between sensory signals, mechanisms of multisensory integration implement a limited tolerance to discrepancies between estimates in the spatial and temporal domains. At present, relatively little is known about the factors that determine the limits of this tolerance. We investigated the integration of auditory and visual information in the classic ventriloquist effect (spatial judgements) and in an interval bisection task (temporal judgements). By systematically mapping out the tolerance of cross-modal effects in each task to audio-visual discrepancies, we show that the spatial and temporal limits of integration (i) span multiple just-noticeable-difference (JND) units in either modality; (ii) are robust to changes in stimulus properties; (iii) remain reasonably constant across different tasks; and (iv) act independently of one another. These results suggest that the brain implements a remarkably rigid strategy for ensuring cross-modal correspondence during integration.
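The noise-reduction benefit mentioned at the start of the abstract is usually formalised as reliability-weighted (maximum-likelihood) cue combination. The sketch below is illustrative only, not the authors' model or analysis: it shows how averaging an auditory and a visual estimate with weights inversely proportional to their variances yields a combined estimate whose variance is lower than either input's. All function and variable names here are hypothetical.

```python
def integrate(est_a, var_a, est_v, var_v):
    """Reliability-weighted average of an auditory and a visual estimate.

    Weights are inversely proportional to each estimate's variance, the
    standard maximum-likelihood cue-combination rule (illustrative sketch).
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    combined = w_a * est_a + w_v * est_v
    # The combined variance is always below the smaller input variance,
    # which is the noise-reduction benefit of integrating the two cues.
    combined_var = (var_a * var_v) / (var_a + var_v)
    return combined, combined_var


# Hypothetical numbers: a noisy auditory location estimate and a more
# reliable visual one, as in a ventriloquist-style spatial judgement.
est, var = integrate(est_a=10.0, var_a=4.0, est_v=12.0, var_v=1.0)
# The more reliable visual cue dominates (weight 0.8), and the combined
# variance (0.8) is lower than either input variance (4.0 and 1.0).
```

Of course, this benefit presupposes that both estimates arise from a common source; blindly applying the rule to signals from independent events would pull both estimates toward a meaningless compromise, which is exactly the mismatch the tolerance windows described in the abstract guard against.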