Influence of voicing, background noise and nature of the visual input on the RT facilitation to discriminate speech syllables
Poster Presentation
Julien Besle
INSERM U280, Lyon, France
Jean-Luc Schwartz
Institut de la Communication Parlée, Grenoble, France
Marie-Hélène Giard
INSERM U280, Lyon, France
Abstract ID Number: 167
Full text: Not available
Last modified: March 19, 2006
Presentation date: 06/19/2006 4:00 PM in Hamilton Building, Foyer
Abstract
Seeing the lip movements of a talker is known to speed up speech processing. However, it has been argued that this facilitation may be attributed to the visual information provided by the initial gesture cueing the speech sound. We therefore tested whether such a facilitation in reaction times (RTs) can be obtained with visual cues providing only temporal information about the speech sound: subjects had to discriminate between two auditory or audiovisual syllables with identical lip gestures, differing only in their voicing. In addition, in half of the experimental blocks, we replaced the mouth with a rectangle whose surface varied with the open-mouth area, to evaluate the specificity of the cueing effect. We also manipulated the level of background noise.
Depending on the voicing and noise level, we observed either benefits or costs in reaction times for audiovisual compared to auditory syllables. In every condition, however, the RT gain was larger (or the cost smaller) for natural gestures than for rectangle deformations, suggesting that audiovisual facilitation in speech processing cannot be accounted for solely by a cueing effect of the visual signal, and further that it shows some degree of specificity to speech stimuli.