Multisensory self-motion estimation: old ideas and new data

Stuart Smith, School of Psychology, University College Dublin

Abstract
Moving through one’s environment is a naturally multisensory task
involving a coordinated set of sensorimotor processes that encode and
compare information from visual, vestibular, proprioceptive,
motor-corollary, and cognitive inputs. Interaction between visual and
vestibular information in the perception of self-motion has been
reported in the literature for over 50 years [e.g. Battersby et al.,
1956]. The importance of visual inputs for estimation of self-motion
direction (heading) was first recognised by Gibson (1950), who postulated
that heading could be recovered by locating the focus of expansion (FOE)
of the radially expanding optic flow field coincident with forward
translation (a numerical sketch of this idea appears below). We have
recently shown [Stone, Smith & Bush, 2004] that
humans with intact vestibular function can estimate their direction of
linear translation using vestibular cues alone with as much certainty as
they do using visual cues. Here we report the results of an ongoing
study of self-motion estimation that investigates whether visual and
vestibular information can be combined in a statistically optimal
fashion (the standard maximum-likelihood formulation is given below). We
discuss our results from the perspective that successful
execution of self-motion behaviour requires the computation of one’s own
spatial orientation relative to the environment. Nearly 20 years ago
Larry Young and colleagues [Borah, Young & Curry, 1988] showed that an
internal model based on the Kalman filter could provide a qualitative
account of multisensory contributions to human spatial orientation; a
minimal sketch of such a filter is given below.
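
As a concrete illustration of Gibson's proposal (and not a
reconstruction of any method used in the study), heading under pure
forward translation can be recovered by locating the image point from
which the flow vectors radiate. The following least-squares sketch in
Python is purely illustrative; the synthetic flow field, expansion gain,
and noise level are all assumed.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true focus of expansion (FOE) in image coordinates.
true_foe = np.array([1.5, -0.8])

# Sample image points and build a radial flow field expanding from the
# FOE, v_i proportional to (p_i - FOE), plus assumed measurement noise.
pts = rng.uniform(-10, 10, size=(200, 2))
flow = 0.1 * (pts - true_foe) + rng.normal(0, 0.05, size=(200, 2))

# Each flow vector defines a line through p_i along v_i that should pass
# through the FOE. With n_i a unit normal to v_i, the constraint is
# n_i . (foe - p_i) = 0; stacking all constraints gives a linear
# least-squares problem with normal equations (sum n_i n_i^T) foe =
# sum n_i n_i^T p_i.
normals = np.column_stack([-flow[:, 1], flow[:, 0]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
A = normals[:, :, None] * normals[:, None, :]   # outer products n_i n_i^T
b = (A @ pts[:, :, None]).squeeze(-1)           # n_i n_i^T p_i
foe_hat = np.linalg.solve(A.sum(axis=0), b.sum(axis=0))

print("true FOE:", true_foe, " estimated FOE:", np.round(foe_hat, 2))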
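In the cue-combination literature, "statistically optimal" is
conventionally taken to mean maximum-likelihood integration, in which
each cue's estimate is weighted by its reliability (inverse variance).
The standard formulation, stated here as background rather than as a
detail of the study, is

\hat{h} = w_{\mathrm{vis}}\,\hat{h}_{\mathrm{vis}}
        + w_{\mathrm{vest}}\,\hat{h}_{\mathrm{vest}},
\qquad
w_{\mathrm{vis}} = \frac{1/\sigma_{\mathrm{vis}}^{2}}
                        {1/\sigma_{\mathrm{vis}}^{2} + 1/\sigma_{\mathrm{vest}}^{2}},
\qquad
w_{\mathrm{vest}} = 1 - w_{\mathrm{vis}},

with predicted combined variance

\sigma_{\mathrm{comb}}^{2}
  = \frac{\sigma_{\mathrm{vis}}^{2}\,\sigma_{\mathrm{vest}}^{2}}
         {\sigma_{\mathrm{vis}}^{2} + \sigma_{\mathrm{vest}}^{2}}
  \le \min\left(\sigma_{\mathrm{vis}}^{2},\, \sigma_{\mathrm{vest}}^{2}\right).

The combined estimate is thus predicted to be at least as precise as the
better single cue; the empirical test is whether bimodal discrimination
thresholds follow this variance prediction.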
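Borah, Young & Curry's internal-model account casts spatial orientation
as optimal state estimation. The one-dimensional Kalman filter below,
which fuses a visual and a vestibular-style reading of the same slowly
drifting state, is a minimal sketch of the principle only; the dynamics
and all noise parameters are assumed for illustration and are not taken
from the 1988 model.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumed, not from Borah et al., 1988).
q = 0.01                      # process noise variance of the latent state
r_vis, r_vest = 0.04, 0.09    # measurement noise variances of the two cues
n = 100

# Simulate a slowly drifting true state (e.g. heading) and two noisy
# sensor readings of it.
x_true = np.cumsum(rng.normal(0.0, np.sqrt(q), n))
z_vis = x_true + rng.normal(0.0, np.sqrt(r_vis), n)
z_vest = x_true + rng.normal(0.0, np.sqrt(r_vest), n)

x_hat, p = 0.0, 1.0           # initial state estimate and its variance
estimates = []
for z_v, z_ve in zip(z_vis, z_vest):
    # Predict: random-walk dynamics leave the mean unchanged, variance grows.
    p += q
    # Update with each cue in turn; for independent scalar measurements
    # this is equivalent to a single stacked update.
    for z, r in ((z_v, r_vis), (z_ve, r_vest)):
        k = p / (p + r)           # Kalman gain = relative reliability
        x_hat += k * (z - x_hat)  # correct the prediction toward the reading
        p *= (1.0 - k)            # posterior variance shrinks after each cue
    estimates.append(x_hat)

rmse = np.sqrt(np.mean((np.array(estimates) - x_true) ** 2))
print(f"RMSE of fused estimate: {rmse:.3f}")

Note that the Kalman gain plays the same role as the reliability weights
above; for a static state with no process noise, repeated updates reduce
to exactly the maximum-likelihood weighting.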
