The Multisensory Space – Perception, Neural representation and Navigation

Organizer: Daniel Chebat1 & Shachar Maidenbaum2
1Ariel University
2Columbia University

Abstract: We perceive our surrounding environment using all of our senses in parallel, building a rich multisensory representation. This multisensory representation can be used to move through our environment and interact spatially with our surroundings. Vision is the sense best suited to assist spatial perception, but how essential is it to the process by which we navigate? And what happens when it is lacking, or unreliable? In this symposium we wish to explore different aspects of this process, the role of vision and visual experience in guiding it, and the neural correlates thereof. We have put together a strong panel of speakers who have devoted their careers to the study of perceptual and spatial learning, the processing of sensory information, and multimodal integration, with an emphasis on sensory deprivation. Dr Chebat will open, introducing the topic and describing the use of sensory substitution devices to perceive space, amodality, and training-induced plastic changes in the brain. He will be followed by Dr Ptito and Dr Kupers, who will discuss anatomical, metabolic and functional changes in the brain of people who are congenitally blind, and the cascade of resulting changes in the processing of olfactory, tactile and auditory information. Dr Olivier Collignon will then discuss behavioral and brain reorganization linked with sensory deprivation and how this reorganization impacts the perception of space by people who are blind. Dr Maidenbaum will turn from sensory impairment to the use of Virtual Reality tools to manipulate the relative cues between vision, audition and proprioception during navigation and spatial memory tasks. Dr Amedi will close the symposium by discussing a wider theoretical view on these topics, with an emphasis on brain organization and reorganization arising from sensory impairment and manipulation.

 


S8.1 The modality independent nature of the human brain’s spatial network

Daniel Chebat, Ariel University


Spatial navigation in the absence of vision has been investigated from a variety of approaches that have advanced our understanding of spatial knowledge acquisition by the blind, including their abilities, strategies, and corresponding mental representations. Our previous work demonstrated the recruitment of primary visual areas in congenitally blind (CB) individuals, but not in sighted blindfolded or late blind (LB) individuals, which may enable them to use sensory substitution devices (SSDs) efficiently. Using a combination of functional and anatomical neuroimaging techniques, our recent work has demonstrated the impact of amodal cortical processing in guiding spatial learning. Comparisons of performance between congenitally blind people and sighted people using sensory substitution devices in perceptual and sensory-motor tasks uncovered the striking ability of the brain to rewire itself during perceptual learning and to learn to interpret novel sensory information even during adulthood, not just in congenitally blind individuals, but in late blind and even sighted individuals as well. Specifically, we demonstrate that regions typically considered "visual" scene-selective regions can be recruited through sensory substitution during a navigation task in both congenitally blind individuals and their sighted blindfolded counterparts. We argue that scene-selective regions, and the navigation network in general, perform modality-independent spatial computations that do not require visual input to accomplish spatial tasks.


 

S8.2 Structural, metabolic and functional changes in the congenitally blind brain

Ron Kupers and Maurice Ptito
BRAINlab, Department of Neuroscience, Panum Institute, University of Copenhagen


For human and non-human primates, vision is one of the most privileged sensory channels used to interact with the outside world. The importance of vision is strongly embedded in the organization of the primate brain, as about one third of its cortical surface is involved in visual functions. It is therefore not surprising that the absence of vision from birth, or the loss of vision later in life, has major consequences for the structural and functional organization of the brain. In this talk, we will first describe a number of brain imaging studies from our lab using (functional) magnetic resonance imaging, diffusion imaging, positron emission tomography and magneto-encephalography that reveal some of the structural, metabolic and functional changes that accompany the loss of vision. These studies demonstrate that the absence of vision causes massive structural changes that take place not only in the visually deprived cortex but also in other brain areas. They further reveal that the visually deprived cortex becomes responsive to a wide variety of non-visual sensory inputs. Recent studies even showed an important role of the visually deprived cortex in cognitive and language processes. Next, we will present recent behavioral studies from our lab indicating that congenitally blind individuals show increases in acuity for tactile, thermal, gustatory and olfactory processes.


 

S8.3 Space without sight

Olivier Collignon 
Institute of Neuroscience (IoNS) of the University of Louvain, Center for Mind/Brain Sciences (CIMeC) at the University of Trento


Vision typically provides the most reliable information about our surrounding space. What happens when you cannot rely on this sensory input due to blindness? I will describe the behavioral and brain reorganizations that occur in blind people for the processing of space. First, I will show that blindness typically triggers enhanced spatial discrimination in the preserved senses and a reorganization of the neural network supporting such abilities. Aside from these quantitative differences, I will also demonstrate that congenitally blind individuals have a qualitatively different way of representing space. Such fundamental qualitative differences in blind people cascade onto the way they use space in relation to higher cognitive functions, like representing numbers or ordering items in working memory.


 

S8.4 Spatial perception and interaction with manipulated sensory reliability

Shachar Maidenbaum 
Columbia University NY


Vision is considered the dominant sensory channel that humans use for spatial tasks. However, what happens when this channel clashes with others, e.g. when vision becomes less reliable than other sensory channels like audition or proprioception?

We explored this via spatial tasks in several virtual environments, which enabled us to control subjects' sensory input and to manipulate the reliability of the visual input. Identical environments were repeated under several conditions: navigating using only audition (via sensory substitution), using only vision, and in a series of "clash" trials in which the auditory channel was always fully reliable but the visual channel was not. Subjects were not instructed that the visual information might be unreliable and did not know which condition each trial was in. We found that all subjects self-learned to disregard unreliable visual information and rely on audition when needed, reflected also by increased suspicion of the visual information (e.g. scanning walls for masked openings). All subjects could complete all levels under all conditions. However, despite the potential ability to solve all levels equally by disregarding the visual input and using only audition, subjects reported significantly different levels of difficulty across the conditions, and a strong subjective preference for having the visual information even when aware that some of it, or even all of it, was false. We then used a head-mounted display to explore the effect of matched/mismatched proprioceptive and visual cues, finding that matching significantly boosted reports of immersion but had a smaller effect on task performance. These results demonstrate not only the ability to dynamically learn a new skill via an augmented sensory channel and to disregard the main modality typically used for it, but also the ingrained importance of the visual channel for human navigation in both natural and unnatural multisensory conditions.


 

S8.5 Task selectivity as a comprehensive principle for brain organization – including in early sensory regions

B. Heimler, S. Hofstetter, S. Maidenbaum, A. Amedi
Hebrew University of Jerusalem


In the last decades, convergent evidence from studies with sensory-deprived populations such as blind and deaf adults has shown that most of the known specialized regions in higher-order 'visual' and 'auditory' cortices maintain their anatomically consistent category-selective properties in the absence of visual/auditory experience, when input is provided by other senses carrying category-specific information. In this talk I will explore how early in the visual hierarchy the preservation of visual task selectivity through other modalities extends: does it include retinotopic regions, or only higher-order ones? Can this plasticity extend even to the earliest regions of the visual pathway, such as V1? My main focus will be on early sensory cortices as a model to unravel whether the whole brain is a task-machine, or whether this notion explains only the organization of higher-order sensory cortices. I will present evidence that the reorganization of early sensory cortices following sensory deprivation, and especially blindness, seems to suggest a negative answer to this question, as the deprived V1 has repeatedly been shown to be activated by memory and language (task-switching plasticity). However, I will also present recent data from our lab from navigation tasks in virtual reality demonstrating functional recruitment of peripheral vs. foveal V1 for non-visual navigation, demonstrating task preservation in one of the earliest steps in the visual pathway. Furthermore, I will demonstrate that non-visual navigation recruits additional retinotopic regions, such as dorsal V6, regardless of visual experience. These new results challenge this negative conclusion and propose novel ways to conceptualize and test task-machine organization in these cortices. Finally, I will discuss the implications of our results for both basic research and clinical rehabilitation settings.


 


 

Sunday, June 17