Cross-modal perception of emotion by face and voice: an ERP study

Michela Balconi, Department of Psychology, Catholic University of Milan

Abstract
Emotion perception is a case of processing cues from multiple channels. In particular, we focus on the simultaneous processing of tone of voice and facial expression. Behavioral and neuropsychological studies indicate that, when we decode emotions on the basis of congruent visual and vocal information, a cross-modal bias similar to that found in speech reading occurs. Indeed, when the visual and auditory stimuli are incongruent, subjects integrate the two sources. This integration is observed even when subjects are explicitly required to ignore one of the sources. It has been suggested that this cross-modal integration arises at a very early perceptual stage of information processing. Moreover, the processing of emotional cues may take place outside the scope of awareness. To investigate this hypothesis, we conducted an ERP study comparing the subliminal and supraliminal perception of simultaneous visual (facial expressions of happiness, sadness, fear, anger, surprise, and disgust) and auditory (words pronounced in an affective tone) emotional stimuli, in both congruent and incongruent conditions. Differences in ERPs (peak and latency variations) and behavioral responses (RTs) were found as a function of condition (congruence/incongruence) and stimulation type (supraliminal/subliminal).