On the automaticity of audio-visual links in spatial attention capture

Veronica Mazza, Dipartimento di Psicologia dello Sviluppo e della Socializzazione, Università di Padova

Abstract
Three experiments evaluated the degree of automaticity of crossmodal spatial attention shifts by assessing the intentionality criterion.
We used the orthogonal cueing paradigm, in which a lateralised stimulus in a given modality was followed after 100 or 700 ms by a target in the same or a different modality. In all experiments, the first stimulus was uninformative regarding the target location. In Experiment 1, where both the location and the modality of targets were unpredictable, we replicated Spence and Driver's (1997) basic results: discrimination of visual targets was faster when they followed uninformative auditory stimuli at the same location at the short interval. By contrast, visual stimuli did not facilitate auditory target discrimination (but see Ward, McDonald & Lin, 2000). This result was replicated in Experiment 2, where the target location was blocked so that participants could orient their attention in advance, and in Experiment 3, where the target modality was also blocked.
Our results suggest that this sort of crossmodal orienting is automatic, as it occurred even when participants were given full advance information about the target, which should have allowed them to ignore the uninformative auditory stimuli. This, in turn, is consistent with the notion that peripheral auditory stimuli are very powerful in attracting visual attention.

