Imitation and cross-modal integration in speech perception
Poster Presentation
Maurizio Gentilucci
Neuroscienze, Università di Parma
Paolo Bernardis
Neuroscienze, Università di Parma
Luigi Cattaneo
Neuroscienze, Università di Parma
Abstract ID Number: 90
Full text: Not available
Last modified: March 21, 2005
Abstract
We aimed to determine whether audiovisual integration in speech perception is based either on imitation or on supra-modal binding functions. In the first experiment, observers were required to repeat a string of phonemes presented acoustically, visually (i.e., an actor mimed pronunciation of the string), or audiovisually. In the visual presentation, the observers' lip kinematics were influenced by the actor's lip kinematics; in the acoustic presentation, their voice spectra were influenced by the actor's voice spectra. In the audiovisual presentation, these effects decreased. In a second experiment, which used the McGurk paradigm, three distinct response patterns were observed: fusion of the two stimuli, repetition of the acoustically presented string of phonemes, and, less frequently, repetition of the visually presented string. Analysis of the latter two responses showed that the voice spectra always differed from those in the congruent audiovisual presentation (i.e., when the articulatory mouth gestures were congruent with the string of phonemes) and approached those of the other modality. The lip kinematics were influenced by those mimed by the actor, but only when executed to pronounce a labial consonant. The data suggest that both imitation and supra-modal integration participate in perception at different stages of stimulus processing.