Crossmodal bias effects in perception of human body language

Jan Van den Stock, Cognitive and Affective Neuroscience Laboratory, Tilburg University, Tilburg, The Netherlands

Abstract
Research on emotions expressed by the whole body has been sparse. In two experiments we measured recognition of whole-body expressions of emotion presented in combination with emotional sound fragments. In the first experiment, subjects were presented with a sentence spoken in an emotional tone of voice while, at the same time, a still image of a whole-body expression was shown. The emotion expressed by the body was either congruent or incongruent with that of the voice. Subjects were instructed to rate the emotion in the voice. Results indicate that perception of the vocal expression is biased towards the emotion expressed by the body.
In the second experiment, we combined dynamic images of body emotions with sound fragments. The sounds consisted of either human vocalisations or animal sounds (birds chirping and dogs barking). Preliminary testing indicated that the emotions were recognized equally well whether conveyed by human vocalisations or by animal sounds. Sound fragments were combined with emotionally congruent or incongruent video images, and the task was to categorise the emotion expressed by the body. Results show that categorisation of the body expression is influenced by incongruent human sounds but not by incongruent animal sounds. This indicates that semantic congruency by itself is not sufficient to explain crossmodal bias effects.
