6th Annual Meeting of the International Multisensory Research Forum

Crossmodal Facilitation Effect in Spatial Attention and Multisensory Display of Spatial Information using HRTF
Poster Presentation

Ju Hwan Lee
Department of Psychology, Yonsei University, SEOUL, KOREA

Kwang Hee Han
Department of Psychology, Yonsei University, SEOUL, KOREA

     Abstract ID Number: 108
     Full text: PDF
     Last modified: June 24, 2005

Abstract
In everyday life, we select valuable input by attending to it selectively amid a great deal of competing information. For instance, we usually turn our eyes toward the information source when someone suddenly calls our name at a crowded cocktail party. In this and many similar situations, information initially processed in one sensory modality improves the processing of stimuli presented in other modalities at the same spatial location; such spatial orienting thus has crossmodal consequences. In the present study, we empirically investigated the feasibility of applying these crossmodal consequences of spatial orienting to target-detection systems, such as the radar of fighter aircraft, by presenting visual and auditory displays of spatial information either simultaneously or asynchronously. Two experiments were conducted in which visual stimuli were presented either alone or together with auditory cues (at four SOAs) generated with Head-Related Transfer Function (HRTF) techniques, which capture the differences between the two signals arriving at a listener's ears from a sound source. Our data show that even though the auditory spatial information is virtual sound presented from a non-real location, performance with a valid, simultaneous crossmodal display is faster than with visual-only, invalid, or non-spatial auditory cues across SOAs. Finally, our experiments suggest that valid, simultaneous crossmodal presentation of spatial information, exploiting the facilitation afforded by crossmodal links, remains effective with stereo equipment such as the headphones already fitted in existing systems, through virtual-sound generation techniques.
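The virtual sound sources described above rely on the interaural differences that HRTFs encode. As a minimal sketch of the underlying idea (not the authors' actual stimulus-generation pipeline), the two dominant cues can be approximated directly: the interaural time difference (ITD) via Woodworth's spherical-head model, and a crude interaural level difference (ILD); the head-radius value and the ILD scaling below are illustrative assumptions, and real HRTFs additionally apply direction-dependent spectral filtering.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, air at ~20 °C
HEAD_RADIUS = 0.0875     # m, a commonly assumed average head radius

def interaural_cues(azimuth_deg):
    """Approximate ITD (seconds) and ILD (dB) for a source at the
    given azimuth (0 deg = straight ahead, positive = right)."""
    az = math.radians(azimuth_deg)
    # Woodworth's spherical-head model for the interaural time difference
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    # Illustrative level difference: a few dB, scaled by lateral offset
    ild = 6.0 * math.sin(az)
    return itd, ild

def spatialize(mono, sample_rate, azimuth_deg):
    """Render a mono signal (list of floats) as a (left, right) stereo
    pair using only an ITD sample delay and an ILD gain split."""
    itd, ild = interaural_cues(azimuth_deg)
    delay = int(round(abs(itd) * sample_rate))      # far-ear delay in samples
    gain_near = 10 ** (abs(ild) / 40)               # split the ILD across ears
    gain_far = 10 ** (-abs(ild) / 40)
    near = [s * gain_near for s in mono] + [0.0] * delay
    far = [0.0] * delay + [s * gain_far for s in mono]
    if azimuth_deg >= 0:        # source on the right: left ear is the far ear
        return far, near
    return near, far

left, right = spatialize([1.0, 0.5, 0.25], 44100, 45.0)
```

For a source 45 degrees to the right, the left channel is delayed and attenuated relative to the right, which is enough for a listener over headphones to hear the cue as lateralized toward the cued location.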
