A Bayesian model for estimating body orientation from vestibular and visual information

Paul MacNeilage, UC Berkeley, Vision Science

Abstract
Otolith signals are ambiguous cues to body orientation because they are affected by inertial forces when the body is accelerated. The ambiguity can be resolved with added visual information indicating orientation and acceleration with respect to the earth. We have been investigating how noisy vestibular and visual signals are combined. Here we present a statistically optimal Bayesian model of this process. We represent the likelihoods associated with sensory measurements in a 2D body orientation / body acceleration space. The likelihood function associated with the otolith signal traces a curve through this space because there is no unique solution for orientation and acceleration based on the otolith signal alone; the most likely estimates are those that satisfy the gravitoinertial force equation, F = G + I. Likelihood functions associated with other sensory signals can resolve this ambiguity. In addition, we propose two priors, one acting along each dimension of the orientation/acceleration space: the idiotropic prior and the no-acceleration prior. These priors are consistent with behavioral observations, specifically the Aubert effect and the somatogravic illusion. Recent experiments confirm predictions of the model: 1) visual signals affect interpretation of the otolith signal; 2) less reliable signals are weighted less; and 3) combined estimates are more precise than single-cue estimates.
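The model described above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it assumes a 1D shear approximation of the otolith signal (F = G sin(theta) + I), Gaussian sensory noise, and Gaussian priors with made-up widths. The otolith likelihood forms a curve in the orientation/acceleration grid, and the visual likelihood plus the two priors pick out a unique MAP estimate.

```python
import numpy as np

# Grid over the 2D body orientation / body acceleration space.
theta = np.linspace(-90.0, 90.0, 181)   # orientation (deg from upright)
accel = np.linspace(-5.0, 5.0, 101)     # linear acceleration (m/s^2)
TH, AC = np.meshgrid(theta, accel, indexing="ij")

G = 9.81  # gravitational acceleration

# Otolith likelihood: the otoliths sense gravitoinertial force, F = G + I.
# Every (orientation, acceleration) pair producing the same shear force is
# equally consistent with the measurement, so the likelihood traces a curve.
# (1D shear approximation and the noise level are illustrative assumptions.)
f_meas = G * np.sin(np.deg2rad(20.0))   # e.g. static 20-deg body tilt
sigma_oto = 0.5
f_pred = G * np.sin(np.deg2rad(TH)) + AC
L_oto = np.exp(-0.5 * ((f_pred - f_meas) / sigma_oto) ** 2)

# Visual likelihood: vision provides a noisy direct cue to orientation,
# which resolves the otolith ambiguity.
theta_vis, sigma_vis = 20.0, 10.0
L_vis = np.exp(-0.5 * ((TH - theta_vis) / sigma_vis) ** 2)

# Two priors, one along each dimension of the space:
# idiotropic prior (orientation near upright) and no-acceleration prior.
P_idio = np.exp(-0.5 * (TH / 30.0) ** 2)
P_noacc = np.exp(-0.5 * (AC / 1.0) ** 2)

# Posterior is the (normalized) product; the MAP is the combined estimate.
post = L_oto * L_vis * P_idio * P_noacc
post /= post.sum()
i, j = np.unravel_index(np.argmax(post), post.shape)
print("MAP orientation (deg):", theta[i], " MAP acceleration (m/s^2):", accel[j])
```

Because the idiotropic prior pulls the orientation estimate toward upright, the MAP tilt comes out slightly below the true 20 deg, consistent with the Aubert effect; weakening the visual cue (larger sigma_vis) shifts more weight to the priors, illustrating reliability-based weighting.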
