Recalibration in Speech Perception: Lipread vs. Lexical Information
Poster Presentation
Sabine van Linden
Tilburg University, Department of Psychology
Abstract ID Number: 53
Full text: Not available
Last modified: March 15, 2006
Presentation date: 06/20/2006 10:00 AM in Hamilton Building, Foyer
Abstract
The identification of a phoneme can be biased by both lipread speech (the McGurk effect) and lexical information (the Ganong effect). Both information sources can also recalibrate auditory speech identification, as demonstrated by aftereffects. For example, exposure to an ambiguous sound intermediate between /aba/ and /ada/ dubbed onto a face articulating /aba/ (or /ada/) increases responses consistent with the visual stimulus on subsequent identification trials (Bertelson, Vroomen, & de Gelder, 2003). Others have found comparable aftereffects when the ambiguous sound is embedded in a word. Here, we directly compared the biases and aftereffects induced by lipread and lexical information using the same materials and procedures. This allowed us to test whether there is a fundamental difference between bottom-up perceptual information and top-down lexical knowledge.
The immediate bias effects were larger for lipread information than for lexical information. However, the aftereffects induced by lipread and lexical information did not differ in magnitude, dissipation rate, or stability over time. Thus, bottom-up lipread and top-down lexical information affect online speech perception differently, but they induce similar recalibration effects.