INTEGRATION OF FACIAL EXPRESSIONS AND EMOTIONAL VOCALIZATIONS TAKES PLACE IN UNIMODAL VISUAL AREAS

Hanneke Meeren, Cognitive and Affective Neuroscience Laboratory, Tilburg University, Tilburg, The Netherlands

Abstract
Facial expressions, emotional vocalizations, and emotional body language are the primary tools of human emotional communication, yet how these signals are combined is not well understood. The correspondence between facial and vocal expressions is easily recognized, but how the visual and auditory channels are integrated at the neural level remains unclear. Traditionally, multisensory integration has been assumed to be a higher-order process that occurs in multimodal regions only after the sensory signals have undergone extensive processing in a hierarchy of unisensory cortical regions. Recent findings, however, challenge this assumption and suggest a role for “unimodal” sensory areas. We recorded event-related potentials in 15 subjects who watched video clips of angry and happy facial expressions accompanied by congruent or incongruent emotional vocalizations. We show that the early visual P1 component is already sensitive to successful audiovisual integration at 107 ms after stimulus onset: the P1 was enhanced when the face and vocalization did not match compared to when they formed a unified emotional percept. Importantly, these effects were not caused by low-level properties of the stimuli, since they were absent for the summated unimodal conditions. Our findings demonstrate that audiovisual integration of dynamic emotional signals takes place already during early stages of processing in unimodal visual cortex.
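The control comparison with the summated unimodal conditions follows the additive-model logic common in ERP studies of multisensory integration: if the bimodal congruency effect merely reflected low-level stimulus differences, the same difference should reappear when the unimodal face-only and voice-only responses are simply summed. A minimal sketch of that logic on mock data is given below; the array shapes, the 90-120 ms P1 window, and all condition names are illustrative assumptions, not the authors' actual recordings or analysis pipeline.

```python
import numpy as np

# Hypothetical mock ERPs: trials x time samples at 1000 Hz, 0-300 ms epoch.
# These arrays stand in for occipital-channel data; nothing here reproduces
# the study's actual measurements.
rng = np.random.default_rng(0)
n_trials, n_samples, sfreq = 60, 300, 1000.0
times = np.arange(n_samples) / sfreq  # seconds after stimulus onset

def mock_erp():
    return rng.normal(0.0, 1.0, (n_trials, n_samples))

# Bimodal (audiovisual) conditions.
av_congruent = mock_erp()    # e.g. angry face + angry vocalization
av_incongruent = mock_erp()  # e.g. angry face + happy vocalization

# Unimodal conditions used to build the additive control.
face_angry, face_happy = mock_erp(), mock_erp()
voice_angry, voice_happy = mock_erp(), mock_erp()

def p1_amplitude(trials, window=(0.090, 0.120)):
    """Mean amplitude per trial in an assumed P1 window (90-120 ms)."""
    mask = (times >= window[0]) & (times <= window[1])
    return trials[:, mask].mean(axis=1)

# Congruency effect in the bimodal recordings: incongruent minus congruent.
bimodal_effect = (p1_amplitude(av_incongruent).mean()
                  - p1_amplitude(av_congruent).mean())

# Control: the same contrast built from summed unimodal responses. If the
# bimodal effect reflected only low-level stimulus properties, it should
# reappear here; if it reflects genuine integration, it should not.
summed_congruent = 0.5 * (p1_amplitude(face_angry + voice_angry).mean()
                          + p1_amplitude(face_happy + voice_happy).mean())
summed_incongruent = 0.5 * (p1_amplitude(face_angry + voice_happy).mean()
                            + p1_amplitude(face_happy + voice_angry).mean())
control_effect = summed_incongruent - summed_congruent

print(f"bimodal P1 congruency effect:   {bimodal_effect:+.3f}")
print(f"summed-unimodal control effect: {control_effect:+.3f}")
```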

