Neural basis of Bayes-optimal multisensory integration: theory and experiments

Alexandre Pouget, Brain and Cognitive Sciences, University of Rochester

Abstract
Humans can combine multisensory inputs near optimally. This is quite remarkable considering that sensory inputs often come in very different formats, such as the voice of a speaker and the image of the speaker's lip movements. Moreover, sensory modalities are not equally reliable, and their degree of reliability can change from one instant to the next. I will present a neural model, based on basis functions and probabilistic population codes, that can solve both of these problems optimally. The model makes two major predictions: (1) multisensory neurons should be mostly additive, and (2) their receptive fields are not necessarily in spatial correspondence. Both of these predictions are at odds with the current dogma in the multisensory integration literature, which holds that many neurons are superadditive and that the receptive fields of most multisensory neurons are in spatial alignment. I will show that, in fact, experimental data support our predictions.
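The core computational idea behind the abstract is that optimal cue combination weights each modality by its reliability, and that with probabilistic population codes (Poisson-like variability) this weighting falls out of simply summing the two population responses, which is why the model predicts additive multisensory neurons. The sketch below is a hypothetical illustration of that idea under Gaussian and Poisson-like assumptions, not the model's own code; the function names and example numbers are invented for illustration.

```python
import numpy as np

# Minimal sketch (hypothetical): Bayes-optimal combination of two cues with
# Gaussian likelihoods weights each cue by its reliability (inverse variance).

def fuse_gaussian_cues(mu_vis, var_vis, mu_aud, var_aud):
    """Optimally combine visual and auditory estimates of the same variable."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_aud)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_aud   # posterior mean
    var = 1.0 / (1.0 / var_vis + 1.0 / var_aud)    # posterior variance, never larger than either cue's
    return mu, var

# Example: a reliable visual cue (variance 1) and a noisy auditory cue (variance 4).
mu, var = fuse_gaussian_cues(mu_vis=0.0, var_vis=1.0, mu_aud=2.0, var_aud=4.0)
print(mu, var)   # 0.4, 0.8 -- the estimate is pulled toward the more reliable cue

# With probabilistic population codes (independent Poisson-like counts r and
# tuning curves f_i(s) whose summed activity is roughly flat in s), the
# posterior over the stimulus s is
#     log p(s | r) = sum_i r_i * log f_i(s) + const,
# which is linear in the spike counts. Combining two modalities therefore
# amounts to summing their population responses:
#     log p(s | r_vis + r_aud) = log p(s | r_vis) + log p(s | r_aud) + const.

def log_posterior_ppc(r, tuning):
    """Unnormalized log posterior from counts r (n_neurons,) and tuning curves
    f_i(s) stored as tuning (n_neurons, n_stimuli)."""
    logp = r @ np.log(tuning)
    return logp - logp.max()
```

In this sketch the additivity prediction is explicit: because the log posterior is linear in the spike counts, the optimally combined posterior is read out from the sum of the unisensory responses, with no superadditive interaction required.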
