The Use of Common Codes When Interpreting Facial Emotional Expressions of Self and Other

John Barresi, Cindy Hamon-Hill, Dalhousie University, Halifax, NS, Canada

Abstract

Participants evaluated whether emotions expressed in facial displays by self and a stranger were responses to particular emotion-eliciting photos or not. Performance on self was superior to a stranger when paired eliciting stimuli produced different emotions (e.g. sad vs cute), but not the same emotion (e.g. both amusing), supporting a “common code” rather than a memory account.

Bibliography

Barresi, J., & Moore, C. (1996). Intentional relations and social understanding. Behavioral and Brain Sciences, 19, 107–122.

Barresi, J., & Moore, C. (2008). The neuroscience of social understanding. In J. Zlatev, T. Racine, C. Sinha, & E. Itkonen (Eds.), The Shared Mind: Perspectives on Intersubjectivity (pp. 36–66). Amsterdam: John Benjamins.

Carr, L., Iacoboni, M., Dubeau, M.-C., Mazziotta, J. C., & Lenzi, G. L. (2003). Neural mechanisms of empathy in humans: A relay from neural systems for imitation to limbic areas. PNAS, USA, 100, 5497–5502.

Hamon-Hill, C., & Barresi, J. (2010). Does motor mimicry contribute to emotion recognition? Behavioral and Brain Sciences, 33, 447–448.

Iacoboni, M. (2008). Mirroring People: The New Science of How We Connect with Others. New York, NY: Farrar, Straus & Giroux.

Knoblich, G., & Flach, R. (2001). Predicting the effects of actions: Interactions of perception and action. Psychological Science, 12, 467–472.

Oberman, L. M., Winkielman, P., & Ramachandran, V. S. (2007). Face to face: Blocking facial mimicry can selectively impair recognition of emotional expressions. Social Neuroscience, 2, 167–178.

Prasad, S., & Shiffrar, M. (2009). Viewpoint and the recognition of people from their movements. Journal of Experimental Psychology: Human Perception and Performance, 35, 39–49.

Prinz, W. (1997). Perception and action planning. European Journal of Cognitive Psychology, 9, 129–154.

Conclusions

Because performance on self was superior to other when eliciting stimuli produced different emotions, but not when they produced the same emotion, we infer that a common code between perception and production of actions, not memory, produced this difference. If memory were involved, superiority of self over other should occur for both same and different emotion conditions. Moreover, the interfering effect of the pen when processing emotional expressions of both self and other provides further support for a common coding interpretation of the perception of emotions. Activation of facial muscles reduced efficiency in processing emotional expressions of self, but more so of another person. This suggests that perceptual processing of emotion expressions of another person may rely more heavily on the motor system than is required for self.

Background

• That there is a common code involved in the production and perception of action has both theoretical (Barresi & Moore, 1996; Prinz, 1997) and empirical (e.g. Iacoboni, 2008; Barresi & Moore, 2008) support. One implication of this account is that our interpretation of perceived actions becomes refined to the extent that we share ‘expertise’ in performing those particular actions. This expertise should apply particularly well when making judgments about our own past actions (Knoblich & Flach, 2001; Prasad & Shiffrar, 2009).

• Neuroscientific research treating emotions as forms of actions has found support for the use of action plans and common codes in the perception of facial expressions of emotions of others (Carr et al., 2003). In the present behavioral study we tested whether one’s action plans contribute to one’s representation of a perceived emotion in self and others. Participants made judgments about whether images of their own facial expressions, as well as those of a stranger, were in response to particular emotion-eliciting images. We proposed that recognizing emotional states of facial displays involves a process of matching 1st-person internal information about one’s own emotional response with 3rd-person visual information about the expressed emotion in a facial display.

• It was hypothesized that if one’s own action plans are used in interpreting emotional expressions, then participants, being experts vis-à-vis themselves, should be more efficient in processing their own emotional expressions compared to those of an unfamiliar person.

• To test whether memory of one’s previous response to a stimulus gave an advantage to self, pairs of eliciting stimuli that were likely to produce the same emotional response were compared to pairs that were likely to produce different emotional responses. If memory were a factor, then self should show an advantage in both pair types. However, if action matching gave self an advantage, it would be expected to occur for different but not for same emotion pairs.

• Another variable of interest in the present study involved interference due to the use of facial muscles in a concurrent task while making emotion judgments. Previous studies found some evidence that biting on a pen or pencil reduced efficiency in processing facial emotions (Hamon-Hill & Barresi, 2010; Oberman, Winkielman, & Ramachandran, 2007). We hypothesized that biting on the pen would interfere with normal perceptual processing that involves the motor system and feedback from facial muscles, and thus result in poorer overall performance. The question of interest here was whether this effect, if replicated, would operate differently for self and other.


Method

Participants: N = 58, 92% female, mean age = 21 (range 16–40)

Procedure:

Part A: Participants were recorded while responding naturally to 24 emotion-eliciting images. Facial responses to 12 pre-selected photos were edited into 2-second clips containing the apex of the affective response. Responses of each participant were paired with the responses of one stranger (other).

Part B: Testing phase: Participants viewed multiple trials of the 12 photos, each followed by a facial display of either self or other, and were asked whether the emotion display was in response to that particular stimulus (match) or not (mismatch). Photos and facial displays were paired to evoke the same (e.g. amusing) or different (e.g. sad vs cute) emotions. Different emotion pairs were of the same or different valence. Over 2 blocks of 96 trials, participants performed the task while biting on a pen (n = 28) or not (n = 30).
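The match/mismatch scoring rule described above (hits and correct rejections scored 1, misses and false alarms scored 0) can be sketched as follows. The function name and the trial records are illustrative, not taken from the study.

```python
from collections import Counter

def score_trial(trial_type: str, response: str) -> tuple[str, int]:
    """Classify one trial of the match/mismatch task.

    On a match trial, responding 'match' is a hit (accuracy 1) and
    'mismatch' is a miss (0); on a mismatch trial, 'match' is a false
    alarm (0) and 'mismatch' is a correct rejection (1).
    """
    if trial_type == "match":
        return ("hit", 1) if response == "match" else ("miss", 0)
    return ("false_alarm", 0) if response == "match" else ("correct_rejection", 1)

# Hypothetical trial records (trial type, participant response):
trials = [("match", "match"), ("match", "mismatch"),
          ("mismatch", "match"), ("mismatch", "mismatch")]
outcome_counts = Counter(score_trial(t, r)[0] for t, r in trials)
accuracy = sum(score_trial(t, r)[1] for t, r in trials) / len(trials)
```

Tallying outcomes this way yields the hit and false-alarm counts that the d’ and c-bias analyses in the Results are computed from.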

Results

[Figure panels: d’ and c-bias]

d’ analysis on stimuli of different emotions replicated effects of pen, p = .04, and self vs other, p = .004. Biting on a pen reduced discrimination more so for other than for self, p = .02.

Participants were better at discriminating stimuli of different emotions and different valence than in the other two conditions, p < .001. d’ on same emotion did not differ from zero.

Participants were biased to respond ‘match’ to paired stimuli of different emotions but the same valence, as well as to paired stimuli of the same emotion, p = .02.
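The d’ and c-bias measures reported above follow the standard signal-detection formulas d’ = z(H) - z(F) and c = -(z(H) + z(F)) / 2, where H and F are the hit and false-alarm rates. A minimal sketch (the counts and the 0.5 log-linear correction are illustrative assumptions, not necessarily the authors’ pipeline):

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse standard-normal CDF, z(p)

def dprime_and_c(hits: int, misses: int,
                 false_alarms: int, correct_rejections: int) -> tuple[float, float]:
    """Signal-detection sensitivity (d') and criterion (c) from trial counts.

    Adding 0.5 to each cell (a common log-linear correction, assumed here)
    keeps the rates away from 0 and 1, where z would be infinite.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)                    # hit rate
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = _z(h) - _z(f)            # discrimination sensitivity
    c_bias = -(_z(h) + _z(f)) / 2.0    # negative c = bias toward 'match'
    return d_prime, c_bias

d, c = dprime_and_c(20, 4, 4, 20)  # symmetric hypothetical counts: d' ≈ 1.83, c ≈ 0
```

A d’ near zero, as reported for the same-emotion condition, means participants could not discriminate matches from mismatches, regardless of any response bias captured by c.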

[Figure condition labels: sad vs cute; cute vs amusing; amusing]

DV: RT (from stimulus onset) and Accuracy (Hits & Correct Rejections = 1, Misses & False Alarms = 0). Data submitted to a repeated-measures ANOVA with pen as a between-subjects factor. Figures display means and SE.
