T3.1 Quantifying the weights of multisensory influences on postural control across development
Mark A. Schmuckler University of Toronto Scarborough
Balance control is fundamentally a multisensory process. Starting in infancy, people are sensitive to a variety of perceptual inputs for controlling balance, including the proprioceptive and kinesthetic inputs traditionally believed to control balance, along with visual (e.g., presence versus absence of visual input, imposed optic flow) and haptic (e.g., light fingertip contact) information. Given such findings, one of the principal questions now facing researchers interested in posture involves quantifying the weighting, and potential reweighting, of sensory inputs across varying task environments and across developmental time. Work in my laboratory over the years has explored the impact of a variety of such sensory components in different task environments, including lit versus dark environments, moving-room paradigms, varying conditions of haptic input, and proprioceptive manipulations of the length and width of the base of support, in children ranging in age from 3 to 9 years. The current talk focuses on recent work that has aggregated the findings across these multiple experiments, using these data to address the weighting of varying sensory inputs for postural control, and how this weighting might change over time. Specifically, sensory inputs were quantified by predicting measures of postural stability as a function of dummy codes for a range of visual, haptic, and proprioceptive inputs. These analyses revealed interesting developmental differences in the relative weights of sensory information across children ranging in age from 3 to 9 years, and adults. Such modeling thus enables the quantification of developmental trajectories in children’s relative use of varying visual, haptic, and proprioceptive inputs in postural control.
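The dummy-coding approach described above can be sketched, under illustrative assumptions, as an ordinary least-squares regression. The condition codes, coefficients, and simulated data below are hypothetical, not the study's:

```python
import numpy as np

# Hypothetical sketch: predict a postural-sway measure from dummy-coded
# sensory conditions. All condition effects here are made up for illustration.
rng = np.random.default_rng(0)
n = 200
vision = rng.integers(0, 2, n)   # 1 = lights on, 0 = dark
haptic = rng.integers(0, 2, n)   # 1 = light fingertip contact available
narrow = rng.integers(0, 2, n)   # 1 = narrow base of support
sway = 10 - 2.0 * vision - 1.5 * haptic + 3.0 * narrow + rng.normal(0, 1, n)

# Design matrix with an intercept column; least-squares fit.
X = np.column_stack([np.ones(n), vision, haptic, narrow])
beta, *_ = np.linalg.lstsq(X, sway, rcond=None)
# beta[1:] are the estimated "weights" of each sensory manipulation;
# comparing such weights across age groups traces developmental reweighting.
print(beta)
```

Fitting the same model separately per age group, and comparing the resulting coefficients, is one plausible way such relative weights could be contrasted across development.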
T3.2 Infant learning in vision and beyond
Chia-huei Tseng Research Institute of Electrical Communication, Tohoku University
Learning in a multisensory world is challenging, as information from different sensory dimensions may be inconsistent and confusing. By adulthood, learners optimally integrate bimodal (e.g., audio-visual, AV) stimulation using both low-level (e.g., temporal synchrony) and high-level (e.g., semantic congruency) properties of the stimuli to boost learning outcomes. However, it is unclear how this capacity emerges and develops. One of the challenges is the lack of a proper research paradigm for infants.
To approach this question, we designed a novel paradigm to examine whether preverbal infants are capable of utilizing high-level properties in grammar-like rule acquisition. In this paradigm, we first habituate infants to an audio-visual bimodal temporal sequence that instantiates an A-A-B rule. The audio-visual relevance and consistency can vary along perceptual (e.g., visual motion and rising auditory frequency), cognitive (e.g., syllables), or semantic (e.g., emotional categories) dimensions. I will describe the design rationale, and how our results show that, similar to adults, preverbal infants’ learning is influenced by a high-level multisensory integration gating system, pointing to a perceptual origin of the bimodal learning advantage that was not previously acknowledged.
T3.3 Crossmodal association of auditory and visual material properties in infants
Yuta Ujiie Research and Development Initiative, Chuo University, Tokyo, Japan
The human perceptual system can extract the visual properties of an object’s material from auditory information. Neuroimaging studies in monkeys have shown that the neural basis underlying this multisensory association develops through experience of exposure to materials. In humans, however, the development of this neural representation remains poorly understood. We therefore addressed this question using near-infrared spectroscopy (NIRS), a functional brain-imaging technique, to examine brain activity in response to audiovisual material matching in 4- to 8-month-old infants. In this presentation, I will show our finding that preverbal 4- to 8-month-old infants exhibit a mapping between auditory and visual material properties in the right temporal region. This indicates that audiovisual material information involves relatively high-order processing, of the kind implicated in sound symbolism. I will also suggest that the development of the association of multisensory material properties may depend on the material’s familiarity during the first half year of life.
T3.4 Visual and somatosensory hand representation through development
Lucilla Cardinali Fondazione Istituto Italiano di Tecnologia, U-VIP (Unit for Visually Impaired People), Genova, Italy
Our body changes in size and shape throughout life. Childhood is a crucial period during which the body grows substantially. How does the brain keep track of such changes? How is visual and somatosensory information used to create an accurate representation of the body? Here we investigated the accuracy of hand representation in children using a visual and a haptic task. Eighty children between 5 and 10 years old judged the size of their own hand against a series of fake hands presented either visually or haptically. Using a staircase method, we found that all children underestimated the size of their hand. The amount of underestimation increased with age and was modality independent, that is, present in both the visual and the haptic task. Response variability was higher in the haptic condition than in the visual one. Finally, the representation bias was body-specific, as it was not present when the same children estimated (visually or haptically) the size of objects. Distortions in hand representation have previously been reported in adults; however, this is the first study to show the presence of such a distortion in children too. Crucially, this underestimation is smaller than that previously described in adults, suggesting that the gap between the real body and the represented body increases throughout life.
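A staircase of the general kind mentioned above can be sketched as follows. The hand sizes, step size, trial count, and simulated observer are hypothetical illustrations, not the study's parameters:

```python
import random

# Minimal 1-up/1-down staircase sketch: the comparison (fake) hand size
# converges on the point of subjective equality (PSE). A simulated observer
# with a true PSE of 0.85 x the real hand size stands in for a participant.
random.seed(0)
REAL_HAND = 17.0                  # cm, hypothetical real hand length
TRUE_PSE = 0.85 * REAL_HAND       # internal (underestimated) hand size

def observer_says_fake_is_bigger(fake_size):
    # Noisy comparison of the fake hand against the internal hand size.
    return fake_size + random.gauss(0, 0.3) > TRUE_PSE

fake = REAL_HAND                  # start the staircase at the real size
step = 0.5                        # cm adjustment per trial
sizes = []
for _ in range(60):
    sizes.append(fake)
    fake += -step if observer_says_fake_is_bigger(fake) else step

pse_estimate = sum(sizes[-20:]) / 20   # average of late trials
print(round(pse_estimate, 1))
```

Because the simulated observer underestimates hand size, the staircase settles below the real hand length, mirroring the pattern of underestimation the abstract reports.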
T3.5 The role of allocentric information in the development of spatial navigation across childhood
Luigi F. Cuturi Istituto Italiano di Tecnologia
The triangle completion task is a navigational, path-integration task often used to study how spatial navigation abilities integrate multisensory cues to build an allocentric representation of space. This methodology allows us to investigate the capability of updating one's own position in space relative to the available sensory cues. Here, we tested children (ages 6-11) with a triangle completion task to study how well they accomplish spatial updating after turning angles of 45°, 90°, or 135° to the right or left. Additionally, we tested how an allocentric reference (an auditory cue) might influence performance. Trajectories were recorded by means of the Kinect (a motion-sensing device by Microsoft) and the EyesWeb platform (Volpe et al., 2016). After being guided by the experimenter along the first two legs of the triangle (150 cm and 220 cm long, respectively), blindfolded participants were asked to return to the start position without support, thus completing the triangle along the third leg. Each turn was verbally signaled and indicated by gently pushing the participant toward the target direction. The task was framed as a forest-exploration narrative to make it enjoyable for children. Our results show that younger children performed worse than older peers, indicating the role of developmental stage in understanding the turned angle. In particular, indices such as the distance between the ending point of the trajectory and the start position, and how stably children maintained a straight heading while moving (i.e., directness), show that performance is worse at the earlier stages of development (ages 6-8). The development of spatial updating skills across the lifespan throws light on the ability to discriminate angles by walking, and on how the integration of external cues could be used to develop a learning platform for teaching angles by walking.
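The geometry of the task can be made concrete: given the two guided legs (150 cm and 220 cm, as above) and the turn angle, the ideal homing distance follows from the law of cosines. The function name below is ours, for illustration:

```python
import math

def ideal_return_leg(leg1, leg2, turn_deg):
    """Length of the third leg that closes the triangle.

    After walking leg1, turning by turn_deg, and walking leg2, the
    interior angle at the turn vertex is (180 - turn_deg) degrees,
    so the law of cosines gives the homing distance.
    """
    interior = math.radians(180.0 - turn_deg)
    return math.sqrt(leg1**2 + leg2**2 - 2 * leg1 * leg2 * math.cos(interior))

# Leg lengths from the abstract; a 90-degree turn reduces to the
# Pythagorean case, and sharper turns require shorter homing legs.
print(round(ideal_return_leg(150, 220, 90), 1))
```

Comparing a child's actual endpoint against this ideal third leg is one natural way to derive the endpoint-distance index the abstract describes.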
T3.6 Sensory dominance and multisensory integration as screening tools in aging
Pawel J. Matusz Information Systems Institute at University of Applied Sciences Western Switzerland (HES-SO Valais)
Naturalistic environments are inherently multisensory, and the advantages of multisensory information for brain and behavioural processing are well established. In turn, healthy aging has a pervasive impact on the brain’s structure and function, and sensory, perceptual, memory, and executive functions seem particularly impacted by aging-related changes. However, these insights emerge from an almost exclusively unisensory literature, while multisensory benefits for information processing are typically enhanced in healthy older compared to healthy younger individuals. These contradictory results could potentially be reconciled by studying sensory-dominance patterns; their importance for multisensory benefits has been shown across the lifespan, and changes therein would be consistent with healthy and neurodegenerative aging-related changes in the brain. Thus, we compared healthy young (HY), healthy older (HO), and mild cognitive impairment (MCI) individuals on a simple audio-visual detection task. Neuropsychological tests assessed individuals’ learning and memory impairments. First, patterns of sensory dominance emerged only in the healthy and abnormal aging groups, in the form of a propensity for auditory-dominant behaviour (i.e., detecting sounds faster than flashes). Notably, multisensory benefits were larger only in healthy older than in younger individuals who were visually dominant. Second, the multisensory detection task offered added benefits as a time- and resource-economic MCI screening tool. Specifically, a receiver operating characteristic (ROC) analysis demonstrated that a correct MCI diagnosis (derived from the Mini-Mental State Examination, MMSE) could be reliably achieved based on the combination of indices of multisensory integration and of sensory dominance alone. These results provide much-needed clarification regarding the presence of enhanced multisensory benefits in both healthily and abnormally aging individuals.
As such, our findings highlight the potential importance of sensory profiles in determining multisensory benefits in healthy and abnormal aging. Crucially, these results also open an exciting possibility for multisensory detection tasks to be used as a cost-effective complementary screening tool for dementia.
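The ROC logic behind such a screening tool can be illustrated with a minimal sketch. The synthetic scores, group effects, and the simple additive combination rule below are assumptions for illustration, not the study's data or indices:

```python
import random

# Illustrative sketch: a screening score combining a multisensory-integration
# index and a sensory-dominance index, evaluated with a hand-rolled ROC AUC.
random.seed(1)

def participant(mci):
    msi = random.gauss(1.0 if mci else 0.0, 1.0)  # integration index (made up)
    dom = random.gauss(0.5 if mci else 0.0, 1.0)  # auditory-dominance index
    return msi + dom, mci                         # combined screening score

data = [participant(mci) for mci in [0] * 50 + [1] * 50]

def roc_auc(scores_labels):
    """AUC = probability that a random MCI case outscores a random control."""
    pos = [s for s, y in scores_labels if y == 1]
    neg = [s for s, y in scores_labels if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(round(roc_auc(data), 2))
```

An AUC well above 0.5 on such combined indices is the kind of evidence that would support using the detection task as a complementary screen alongside the MMSE.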