Maximum likelihood integration of audiovisual speed signals

Adam Bentvelzen, School of Psychology, University of Sydney

Abstract
We investigated audiovisual speed perception to test whether the maximum likelihood estimation (MLE) model accounts for perception when discrepant auditory and visual speeds are paired. MLE proposes a weighted sum of visual and auditory speed estimates, with each estimate weighted by its reliability. Apparent motion was used (between LEDs for vision, and between virtual auditory locations for audition). Random positional jitter degraded the motion signals and varied the reliability of the speed percepts. Overall, speed discrimination was more precise in vision. When auditory and visual motions of discrepant speeds were paired, the audiovisual speed percept and its precision were measured and compared with MLE predictions. Generally, perceived audiovisual speed fell between the two unimodal speeds and was more precisely discriminated than either. However, bimodal speed and precision matched MLE predictions only when the unimodal weights were similar. When one modality dominated strongly (by more than 3:1), performance followed the more precise modality alone. These findings extend previous tests of MLE to the speed domain but show that MLE does not apply under all conditions: when the unimodal weights differed markedly, audiovisual speed perception reverted to performance based on the visual input alone.
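The MLE model referenced above makes standard quantitative predictions: the bimodal estimate is a reliability-weighted average of the unimodal estimates, with each weight inversely proportional to that modality's variance, and the combined variance is lower than either unimodal variance. A minimal sketch of these predictions (function and variable names are illustrative, not from the study):

```python
def mle_combine(s_vis, sigma_vis, s_aud, sigma_aud):
    """MLE cue integration: return the predicted bimodal speed
    estimate and its standard deviation, given the unimodal
    estimates and their standard deviations."""
    r_vis = 1.0 / sigma_vis**2          # visual reliability (inverse variance)
    r_aud = 1.0 / sigma_aud**2          # auditory reliability
    w_vis = r_vis / (r_vis + r_aud)     # weights sum to 1
    w_aud = r_aud / (r_vis + r_aud)
    s_bimodal = w_vis * s_vis + w_aud * s_aud
    # Combined variance is 1/(sum of reliabilities): always below
    # the smaller unimodal variance.
    sigma_bimodal = (1.0 / (r_vis + r_aud)) ** 0.5
    return s_bimodal, sigma_bimodal

# Example: equally reliable cues -> percept midway between the two
# speeds, with precision improved by a factor of sqrt(2).
print(mle_combine(10.0, 2.0, 14.0, 2.0))  # → (12.0, ~1.414)
```

When one reliability greatly exceeds the other (e.g. the >3:1 weight ratio mentioned above), the predicted weight on the weaker cue approaches zero, so the MLE prediction itself converges toward the dominant modality; the study's finding is that observed performance reached that single-modality level rather than showing the additional precision gain MLE still predicts.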
