Audio-tactile, visuo-tactile, and audio-visual temporal synchrony perception
Poster Presentation
Waka Fujisaki
NTT CS labs
Shin'ya Nishida
NTT CS labs
Abstract ID Number: 28
Full text: Not available
Last modified: May 30, 2007
Presentation date: 07/05/2007 10:00 AM in Quad Maclauren Hall
Abstract
To explore whether cross-modal temporal synchrony perception is established by a common mechanism regardless of the combination of modalities, we compared temporal synchrony-asynchrony discrimination performance for audio-tactile, visuo-tactile, and audio-visual signal pairs in the same participants. The visual and auditory stimuli were a luminance-modulated Gaussian blob and an amplitude-modulated white noise, each modulated by either single pulses or repetitive pulse trains. The tactile stimulus was the same waveform presented to the forefinger through a vibration generator. The results showed that the temporal limits of synchrony-asynchrony discrimination were similar for the audio-visual and visuo-tactile pairs (~4 Hz for repetitive pulse trains), but that the limit for the audio-tactile pair was significantly higher (~8 Hz or above). This finding seems to disagree with the hypothesis that cross-modal synchrony judgment is mediated by a single common mechanism. At present, we cannot completely exclude the possible existence of specialized low-level sensors for audio-tactile synchrony detection. However, in all three modality pairs, performance was better with single pulses than with repetitive pulse trains. This temporal crowding effect, along with other properties of audio-tactile synchrony judgments (e.g., feature invariance), suggests that the principle underlying temporal synchrony judgment may be common across modality pairs (salient feature matching), with performance limited by the temporal resolution of each modality.