Crossmodal integration of emotional faces and voices in Pervasive Developmental Disorder: an ERP study
Maurice Magnee, Child- and Adolescent Psychiatry, University Medical Center Utrecht, the Netherlands
Abstract
Pervasive developmental disorder (PDD) refers to a group of behavioral disorders of which autism is the most severe. A lack of social and emotional reciprocity counts among the most characteristic social-cognitive impairments of PDD and has been well documented for the processing of facial expressions. Here we investigated to what extent a deficit in the recognition of facial expressions is associated with abnormal integration between the emotion seen in the face and heard in the voice. Electrophysiological responses to facial expressions and to face-voice pairs that were either emotionally congruent or incongruent were measured in adult PDD patients and matched controls. Increased P1 and N170 amplitudes were seen in response to fearful faces as compared to happy faces in both groups, indicating normal processing of facial expressions among PDD patients. ERP responses to audiovisual presentation showed increased occipito-temporal amplitudes in response to fearful voices at 200 ms, but only when presented in combination with a fearful face. This effect was observed in the control group, but not in the patient group. Given the importance of rapid audiovisual integration of emotional information for social competence, the absence of such modulation in PDD might contribute to the observed deficits in emotional behavior.