Audio-visual integration of letters and speech: From unimodal to bimodal subjective representation
Single Paper Presentation
Hans Colonius
Department of Psychology, Oldenburg University
Adele Diederich
School of Humanities and Social Science
Abstract ID Number: 42
Full text: Not available
Last modified: March 14, 2006
Presentation date: 06/19/2006 4:00 PM in Hamilton Building, Foyer
Abstract
Learning the correspondences between letters (graphemes) and speech sound units (phonemes) of a language is a crucial step in reading acquisition. Recent neurophysiological and neuroimaging studies suggest that multisensory brain areas play a role in the audio-visual integration of graphemes and phonemes similar to what has been observed for the integration of speech information with lip movements. In psychophysical experiments, the simultaneous presentation of visual and auditory target graphemes and phonemes leads to faster reaction times and more accurate recognition and discrimination performance than unimodal presentations. Little, however, is known about the subjective representation of graphemes and phonemes underlying these crossmodal effects. Is the subjective bimodal representation simply an amalgamation of unimodal features? Or do the crossmodal effects suggest the existence of bimodal characteristics not present in any unimodal context? Here we present a novel measurement technique to address these issues without requiring explicit assumptions about the set of relevant features. It is based on a version of the theory of dissimilarity developed by Dzhafarov and Colonius that permits the reconstruction of subjective distances among stimuli of arbitrary complexity from their pairwise discriminability. The approach is demonstrated on data from an experiment on audio-visual integration of letters and speech.
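To illustrate the general idea of reconstructing subjective distances from pairwise discriminability, the following minimal sketch implements one standard variant of Fechnerian Scaling of discrete object sets in the Dzhafarov-Colonius framework: psychometric increments are computed from a matrix of "judged different" probabilities and then cumulated along shortest chains of stimuli. It is not the authors' exact procedure, and the probability matrix below is hypothetical toy data assumed to satisfy Regular Minimality in canonical form (each diagonal entry is the minimum of its row and column).

```python
import numpy as np

def fechnerian_distances(P):
    """Overall Fechnerian distances from discrimination probabilities.

    P[a, b] = Pr("a and b are judged different"), assumed to be in
    canonical form (Regular Minimality: P[a, a] is minimal in row a
    and column a).

    Steps:
      1. psychometric increments of the first kind:
         D1[a, b] = P[a, b] - P[a, a]
      2. oriented Fechnerian distances G1: shortest-path closure of D1
         (dissimilarity cumulation over chains of intermediate stimuli)
      3. overall Fechnerian distance G[a, b] = G1[a, b] + G1[b, a]
    """
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    D1 = P - np.diag(P)[:, None]            # increments of the first kind
    G1 = D1.copy()
    for k in range(n):                       # Floyd-Warshall over chains
        G1 = np.minimum(G1, G1[:, k][:, None] + G1[k, :][None, :])
    return G1 + G1.T                         # symmetrized overall distances

# Hypothetical toy data for four audio-visual letter/speech stimuli.
P = np.array([
    [0.10, 0.60, 0.80, 0.90],
    [0.55, 0.12, 0.50, 0.85],
    [0.75, 0.55, 0.08, 0.40],
    [0.88, 0.80, 0.45, 0.11],
])
print(np.round(fechnerian_distances(P), 2))
```

The resulting distance matrix can then be submitted to multidimensional scaling or clustering to compare the geometry of unimodal and bimodal stimulus representations, which is the kind of question the abstract raises.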