MODIFYING SPEECH IDENTIFICATION THROUGH MCGURK INCONGRUENCE VS. SENSORY ADAPTATION
Paul Bertelson, Free University Brussels, Cognitive Neuroscience Unit
Abstract
Exposure to incongruent auditory-visual pairs of speech tokens can produce both recalibration and selective adaptation of speech identification. In an earlier study, exposure to an ambiguous auditory token (intermediate between /aba/ and /ada/) dubbed onto the video of a face articulating either /aba/ or /ada/ recalibrated the perceived identity of auditory targets in the direction of the visual component, while exposure to congruent non-ambiguous /aba/ or /ada/ pairs created selective adaptation, i.e. a shift of perceived identity in the opposite direction (Bertelson, Vroomen, & de Gelder, 2003). Here, we examined the build-up course of the aftereffects produced by the same two types of bimodal adapters over a range of 1 to 256 presentations. The aftereffects of non-ambiguous congruent adapters increased linearly across that range, whereas those of ambiguous incongruent adapters followed a curvilinear course, first rising and then declining with increasing exposure. This late decline might reflect selective adaptation to the recalibrated ambiguous sound, showing that the two phenomena can occur within the same task context.