The ability to both produce and decode such signals is critical for social living, but these processes are far from simple. The human face and voice are flexible and dynamic, capable of producing hundreds of subtle and fleeting expressions. Furthermore, the production and meaning of emotion expressions are modulated in the short term by interpersonal context and in the long term by learning and culture. It is therefore no surprise that the smooth exchange of emotional signals can go awry, as it does in cross-cultural encounters or for people with certain motor, emotional, or cognitive disorders.
My research combines cognitive and affective science to understand the dynamic processes involved in the important and fragile tasks of producing and perceiving emotion expressions. I examine three aspects of emotion expressions: how perceivers process their meaning, the social functions they serve, and how culture shapes their manifestation.
People are experts at perceiving subtle nonverbal expressions and inferring the underlying emotions. Evidence suggests that part of this challenging perceptual task is accomplished through embodied simulation, in which the perceiver's brain and facial muscles partially recreate the perceived expression and its associated emotion (Wood, Rychlowska, Korb, & Niedenthal, 2016). Our lab and others have demonstrated that interfering with people's facial muscles while they view facial expressions reduces the accuracy with which they perceptually discriminate among expressions and judge their emotional meaning. For instance, wearing a gel facemask that restricts facial movement disrupts people's ability to detect a facial expression presented alongside a highly similar distractor (Wood, Lupyan, Sherrin, & Niedenthal, 2015). We also have a long-term collaboration with the UW Facial Nerve Clinic that will allow us to test, among other things, the impact of facial paralysis on emotion perception and experience (Korb, Wood, Banks, Agoulnik, Hadlock, & Niedenthal, 2016).
Laughter is a ubiquitous social signal that can smooth interactions (Wood & Niedenthal, under review). We identified three distinct social tasks accomplished by laughter and smiles (Martin, Rychlowska, Wood, & Niedenthal, 2017): rewarding the desirable behavior of others (reward), soothing and signaling nonthreat (affiliation), and indirectly challenging the status or behavior of others (dominance).
One study related participants' judgments of the social intentions of 400 laughter samples to the laughs' acoustic features, which I extracted using acoustic analysis software (Wood, Martin, & Niedenthal, 2017). Distinct acoustic profiles predicted the three proposed social functions, suggesting that systematic changes to the voice during laughter can convey reward, affiliation, or dominance. These relationships often depended on the sex of the expresser: louder laughter, for instance, was perceived as more affiliative in males but less affiliative in females. Upcoming work will examine the social and behavioral consequences of different naturally occurring laughter forms.
Recent work from my lab identifies cultural factors underlying observed differences in how emotion expressions are manifested. We argue that cultures arising at the intersection of other cultures, such as the U.S., initially lacked a clear social structure, shared norms, and a common language (Niedenthal, Rychlowska, & Wood, 2017). Such cultures would have had to rely more heavily on emotion expressions, establishing a cultural norm of expressive clarity. For instance, 83 countries have contributed significantly to the current population of the U.S., a highly heterogeneous country, whereas only 5 countries contributed to Russia's current population, making it relatively homogeneous. We re-analyzed data from 92 cross-cultural emotion recognition studies (involving 82 unique countries) and showed that facial expressions from more heterogeneous cultures are better recognized cross-culturally than those from homogeneous cultures (Wood, Rychlowska, & Niedenthal, 2016).