
The role of spatial frequency information in the recognition of facial expressions of pain

Shan Wang*, Christopher Eccleston, Edmund Keogh

University of Bath

(in press, PAIN)

* Correspondence:

Shan Wang

Department of Psychology & Centre for Pain Research,

University of Bath

Bath, BA2 7AY

United Kingdom;

Tel: +44 (0) 1225 385434

Fax: +44 (0) 1225 386752

E-mail: [email protected]

Running Head: Recognising facial expressions of pain

Type of manuscript: Original Article

Number of pages: 32

Number of figures/tables: 12

Disclosure statement: Shan Wang received a PhD scholarship from the University of Bath to

conduct this research.


Abstract

Being able to detect pain from facial expressions is critical for pain communication.

Alongside identifying the specific facial codes used in pain recognition, there are other types

of more basic perceptual features, such as spatial frequency (SF), which refers to the amount

of detail in a visual display. Low-SF carries coarse information, which can be seen from a

distance, and high-SF carries fine-detailed information that can only be perceived when

viewed close up. Since this type of basic information has not been considered in the

recognition of pain, we investigated the role of low- and high-SF information in the

decoding of facial expressions of pain. Sixty-four pain-free adults completed two independent

tasks: a multiple expression identification task of pain and core emotional expressions, and a

dual expression “either-or” task (pain vs. fear, pain vs. happiness). While both low-SF and

high-SF information make the recognition of pain expressions possible, low-SF information

seemed to play a more prominent role. This general low-SF bias would seem advantageous

for detecting potential threats, as facial displays are degraded when viewed

from a distance or in peripheral vision. One exception was found, however, in the “pain-fear”

task, where responses were not affected by SF type. Together, this not only indicates a

flexible role for SF information that depends on task parameters (goal context), but also suggests

that in challenging visual conditions, we perceive an overall affective quality of pain

expressions rather than detailed facial features.

Key Words: pain; facial expression; recognition; spatial frequency


Summary

Both low-spatial-frequency and high-spatial-frequency information make the recognition of

pain expressions possible, with low-spatial-frequency information playing a more prominent

role.


1 Introduction

Pain is broadcast into our social world through verbal and nonverbal channels of

communication. Cues of pain function to elicit help or comfort, and alert others to potential

threats in the environment [15,31,40,79]. Facial expressions are a primary non-verbal channel

of pain communication, and so need to be clearly and unambiguously recognised. However,

there are various ways in which pain expressions can be decoded. One method has been to

objectively identify a unique set of facial codes on a component basis, and use these to

differentiate pain from core emotions [23–24,50]. Other approaches are to consider the

processing of faces in a global manner [80], such as when making value judgements about

pain expressions and their authenticity [7,16,34,47–49], levels of severity [20,41,42,46,51],

or simply whether they are different from non-noxious emotional states [36]. Overall, pain

expressions can be identified at an acceptable level (70%): pain recognition seems

comparable to fear, although not as easy to recognise as anger or surprise (>80%) [36,63].

As well as these higher-level social-cognitive features, faces are also images that vary

in basic perceptual information, such as luminance (brightness), spatial frequency (detail) and

stereoscopic disparity (depth) [35]. Of these, spatial frequency (SF) [35,62] is considered

particularly important, as it relates to the amount of detailed information present in an image

[3,5,11,32,37,60]. Low-SF typically conveys coarse information (e.g., contour of the face,

distance between the eyes), whereas high-SF conveys fine-detailed information (e.g., local

facial elements, creases). SF information is processed very early on in visual perception,

through distinctive visual pathways and cortical mechanisms [6,25,73], and contributes to

higher-level processing and understanding. In clear and intact representations (broad-SF),

both types of information are available to observers [35]. However, visual stimuli are often

degraded, with some information unavailable. For example, faces viewed at distance or in the


periphery lack fine-detailed high-SF information. While coarse low-SF information is

always retained in a natural environment, fine-detailed high-SF information is enhanced

only when faces are viewed close up.

Studies of facial expressions of emotion suggest that they can be recognised, and

judgements made, even when only low-SF information is provided [8,10,13,39,60]. However,

it is currently unclear whether SF information affects our ability to accurately detect and

recognise pain expressions, and in turn whether this is similar or different to core emotions.

We, therefore, examined whether pain faces can be recognised in degraded visual conditions,

by varying the amount of SF information available. Since task parameters can affect

expression recognition from SF information [60], we employed two different tasks, which

reflect different forms of visual perception [61]: a multiple expression identification task, and

a dual expression “either-or” task. We hypothesized that pain would be recognised in faces

even when limited SF information (e.g. low-SF) is available, and at a similar level to core

emotions. Since the multiple expression identification task is more complex, we assumed

more fine-detailed information is required [6], and so predicted pain recognition would be

more impaired by reducing high-SF information.

2 Method

2.1 Design

Participants completed a multiple expression identification task and a dual expression

“either-or” task, both of which employed a mixed-groups design. The within-groups variables

were the type of SF information available (broad-SF vs. low-SF vs. high-SF) and expression

type (which varied according to task, see below). A between-groups variable was also

included, and consisted of the participant’s gender (male vs. female). This was included


because men and women are known to decode facial expressions in different ways

[27,28,38]. The dependent variables were accuracy and response time, depending on the task

under investigation.

2.2 Participants

Sixty-four healthy adult participants (33 females) were recruited from the University of

Bath staff and student population. The sample had a mean age of 26.48 years (SD = 5.93). All

participants had normal or corrected-to-normal vision, and reported being pain-free and free

from any psychiatric or neurological conditions.

2.3 Face stimulus materials: selection and preparation

In order to examine the effect of SF information on our two expression recognition

tasks, a stimulus set needed to be generated. This section describes how we selected the stimuli

for use in these tasks, and how the original stimulus set was constructed and validated, before

describing the validation procedure used in the present study.

We chose to use posed facial expressions of pain and core emotions. Whilst there are

known differences between posed and genuine expressions [7,48], there is an extensive

literature showing that the use of posed images is common and acceptable in expression

recognition research [1]. Indeed, the vast majority of emotion recognition research relies on

well controlled, validated, posed expressions, where the utility of using such expression

stimuli is well established [63]. Since posed and genuine expressions do vary, we should be

mindful of this when drawing conclusions, and will return to this point in the discussion.

2.3.1 Description of the original stimulus set


Rather than generating a new stimulus set, an existing set of facial expressions including

pain was used, as they have already been validated as being good examples of the core

expressions to be considered in the present study. Whilst there are now a number of possible

stimulus sets available (e.g. [63]), the STOIC database of facial expressions [56] was chosen

because images are standardised in terms of size, colour and luminance level, and have been

successfully used in various studies [9,29,30,54,55,57,78].

The original STOIC facial expression stimulus set included pain, six basic emotions

(anger, disgust, fear, happiness, sadness, and surprise), and neutral faces, all presented by ten

models (five females and five males). All basic stimuli are standardised grey-scale images

(256×256 pixels) with calibrated luminance level and elliptical masks to exclude non-facial

cues. To create the database, Roy and colleagues collected a total of 7000 video clips from 34

actors, of which 1088 video clips were selected on the basis of observers’ ratings (most

genuine). From these, a static stimulus (image) set was extracted based on the apex (most

expressive frame) of each video. In the validation studies, observers rated each stimulus for

intensity of anger, disgust, fear, happiness, sadness and surprise. For each expression, the

most recognisable videos and images that possessed similar affective intensities were

selected. The final STOIC database comprised 80 videos (mean rating proportions for all

expressions > 0.80) and 80 images (mean rating proportions for all expressions > 0.78) from

10 actors (five females and five males), with each actor facially expressing all eight

expressions (six basic emotions, pain and neutral).

Although the original STOIC database included both dynamic (videos) and static

stimuli (images) of the facial expressions, the static stimuli were used in this study. This was

because exposure duration has previously been found to influence the perception of SF

information in human face processing [4,58]. Since stimulus exposure time is associated with


the video-clip duration, using dynamic stimuli (videos) would introduce a more complex

array of factors into the study.

2.3.2 Further validation of stimuli

One of the goals of the current investigation was to compare pain expressions against

core emotions in two separate tasks. For the dual expression “either-or” task, a comparison

expression was required to compare with pain (see details in Dual expression “either-or”

task). Therefore a validation task was conducted to gain additional ratings on valence and

arousal, which would not only be used to select our comparison stimuli, but also confirm the

validity (recognition accuracy) of the STOIC expressions.

2.3.2.1 Validation procedure

An additional ten healthy adult participants (5 females) were recruited from the

University of Bath staff and student population. The sample had a mean age of 25.10 years (SD =

3.00). All participants had normal or corrected-to-normal vision, and reported being pain-free

and free from any psychiatric or neurological conditions. Participants were not paid for taking

part in the validation task. Informed consent was obtained from all participants prior to taking

part in the study.

The task was designed and controlled using E-Prime Professional 2.0. Stimuli were

displayed in their original size of 7.62 × 7.62 cm on a 19" LCD screen with resolution of

1280 × 1024 pixels and a refresh rate of 60 Hz. Participants’ viewing distance was

approximately 60 cm with a visual angle of 3.63°. Each participant completed 80 trials (one

face image per trial; 8 expressions × 10 models per expression). In each trial, participants

were shown a fixation cross at the centre of the screen for 500 msec followed by a face

stimulus. Participants were asked to identify the expression of the face from a list of 8 options

(1 = happiness, 2 = sadness, 3 = disgust, 4 = surprise, 5 = pain, 6 = anger, 7 = fear and 8 =


neutral) by pressing the corresponding key on the keyboard (1 to 8, respectively). They then

rated the valence and arousal level of the expression on two 7-point scales by pressing the

corresponding key labelled on the keyboard: with respect to valence, -3 = clearly unpleasant

to +3 = clearly pleasant; and for arousal, -3 = highly relaxed to +3 = highly aroused. The face

image remained on the screen in each trial until participants responded. The stimuli were

shown in a random order.

2.3.2.2 Validation Results: Stimulus recognisability

The number of hits (correct responses) was calculated for each participant. No outliers

were found, and non-parametric tests were used for data analysis. One-sample binomial tests

revealed that all the expressions were identified at a better than chance level (test proportion

= 12.5%; all p’s < .001, Cohen’s g’s > 0.48). To examine this further, a confusion matrix on

the number of responses was created (see Table 1), where we were able to compare the

number of responses to target and non-target expressions. For example, for pain stimuli, the

proportion of pain responses was 78%, and the sum of responses to anger, disgust, fear,

happiness, neutral, sadness and surprise expressions was 22%. One-sample binomial tests

revealed that all the target expressions could be accurately identified (test proportion = 50%;

for anger, p < .001, Cohen’s g = 0.35; disgust, p < .05, Cohen’s g = 0.11; fear, p < .05,

Cohen’s g = 0.11; happiness, p < .001, Cohen’s g = 0.48; neutral, p < .001, Cohen’s g = 0.29;

pain, p < .001, Cohen’s g = 0.28; sadness, p < .001, Cohen’s g = 0.33; and surprise p < .001,

Cohen’s g = 0.40). For completeness and comparison with previously published studies, the

simple hit rate and the unbiased hit rate [75] were also calculated using the following

formulas.

Simple hit rate = Number of hits / Number of expressions presented;


Unbiased hit rate = (Number of hits)² / [(Number of expressions presented) × (Number of expressions perceived)].
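
For illustration, both measures can be computed directly from a confusion matrix such as Table 1. The MATLAB sketch below (not the analysis script used in the study) shows the calculation with a small hypothetical 3 × 3 matrix rather than the actual Table 1 counts:

```matlab
% Hypothetical confusion matrix: rows = expression presented, columns = response chosen.
C = [ 8 1 1;      % e.g. pain trials
      2 7 1;      % e.g. fear trials
      0 1 9 ];    % e.g. happiness trials

hits      = diag(C);        % correct responses for each presented expression
presented = sum(C, 2);      % how often each expression was presented (row sums)
perceived = sum(C, 1)';     % how often each response category was chosen (column sums)

simpleHitRate   = hits ./ presented;                        % hits / presentations
unbiasedHitRate = (hits .^ 2) ./ (presented .* perceived);  % formula given above
```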

As can be seen in Table 2, the accuracy of responses was at an acceptable level across

the different categories, confirming that the expression sets reflect the intended expression.

------Insert Tables 1 & 2 around here------

2.3.2.3 Validation Results: Expression selection for dual expression task

In order to determine which expressions to use in the dual expression “either-or” task,

we focused on how similar and/or different the expressions were judged to be. To achieve

this, the valence and arousal ratings for each expression were entered into the following

formula (for means see Table 2):

D = √[(Arousal_pain − Arousal_emotion)² + (Valence_pain − Valence_emotion)²];

Here D is the difference (or distance) between two expressions, and Arousal and

Valence are respectively the mean values of arousal and valence ratings for the expressions.
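
As a concrete illustration, the short MATLAB snippet below applies the formula to made-up mean ratings (these are not the values from Table 2):

```matlab
% Hypothetical mean ratings on the -3 to +3 scales (illustration only).
arousalPain = 1.5;  valencePain = -2.0;   % pain expression
arousalEmo  = 1.2;  valenceEmo  = -1.8;   % a comparison emotion
D = sqrt((arousalPain - arousalEmo)^2 + (valencePain - valenceEmo)^2);  % ~0.36 here
```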

The expression type that was most similar to pain (i.e., least different) was the fear expression

(D = 0.36; see Figure 1). As a comparison, the expression set that showed the greatest

difference from pain was also included, which in this case was happiness (D = 5.29). Of note,

as our data showed that happiness differed from pain on more than one variable, selecting

happiness as a comparison expression to pain meant that if a difference is found involving

happiness expressions, then it is not possible to say whether this was due to its valence or

arousal. Rather, all that can be concluded is that the happiness expressions were rated as most

different. However, in maximising the difference from the pain expression, we can examine

whether this affects the impact of SF information on judgements. If there is a large difference

between expressions then we would expect the reliance on fine detail to be reduced, and so


removal of high SF information should have less of an effect on image identification. This

point will be returned to in the discussion.

------Insert Figure 1 around here------

2.3.3 Stimuli manipulation of spatial frequencies

In order to investigate the role of low-SF or high-SF information, the SF of our images

was manipulated to separate the low-SF and high-SF information from the original intact

stimuli. Since the SF ranges continuously from low to high, filters with selected cut-off

values were created to access certain ranges of SF. For example, a filter that allows all SF

lower than a cut-off value to pass through, and removes all SF higher than the value is called

a low-pass filter. Conversely, a filter that allows all SF higher than a cut-off value to pass

through and removes all the lower SF is called a high-pass filter. Low-SF information was

therefore obtained by applying a low-pass filter with a low cut-off value, and high-SF

information was obtained by applying a high-pass filter with a higher cut-off value.

Selection of cut-off values was consistent with the literature [76]: 8 cycles per face was used

for the low-pass cut-off, and 32 cycles per face was used for the high-pass cut-off.

All images were filtered by low- and high-pass Gaussian filters. The manipulation was

carried out in MATLAB 2012. First, each original image was Fourier transformed into its

frequency domain. Second, low- and high-pass Gaussian filters were created with cut-off

values of 8 and 32 cycles per face, respectively. Third, a low- or high-pass filter was applied

to the Fourier-transformed image. Fourth, the inverse Fourier transform was used to transform

the filtered image from its frequency domain to the spatial domain, and the final outcome was the

filtered image. The low- and high-pass Gaussian filters are described as:

G_Low-pass = 1 / (2πσ²) × e^(−(x² + y²) / (2σ²));

G_High-pass = 1 / (2πσ²) × (1 − e^(−(x² + y²) / (2σ²))).

In both equations, e^(−(x² + y²) / (2σ²)) is the two-dimensional Gaussian kernel function, and

1 / (2πσ²) is a normalisation constant which ensures that the average grey level of the image

remains the same after filtering. σ determines the width of the filter, where

σ = cut-off value / (½ × image size).

After filtering, all the images were adjusted to have the same luminance level. Each

actor (10 in total) displayed each facial expression (8 in total) at each SF level (3 in total).

Thus, a total of 240 images were produced, which consisted of 80 broad-SF stimuli

(unfiltered), 80 low-SF stimuli (low-passed), and 80 high-SF stimuli (high-passed). For an

example of an SF-filtered face image, please refer to Figure 1 in Vuilleumier et al.’s study

[74].
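
A minimal MATLAB sketch of the four-step filtering pipeline described above is given below. It is an illustration rather than the authors' original script: the file name is a placeholder, the image is assumed to be a 256 × 256 grey-scale face, and the filter width σ is set from the cut-off values using the relation given above.

```matlab
% Sketch of Gaussian low-/high-pass filtering in the frequency domain.
% Assumes a 256x256 grey-scale image and cut-offs of 8 and 32 cycles per face.
img = double(imread('face.png')) / 255;      % 'face.png' is a placeholder file name
N   = size(img, 1);                          % image size in pixels (assumed even, e.g. 256)

% Frequency coordinates in cycles per pixel (one face spans the image width).
[fx, fy] = meshgrid((-N/2 : N/2 - 1) / N);
r2 = fx.^2 + fy.^2;

sigmaLow  = 8  / (0.5 * N);                  % filter width from the low-pass cut-off
sigmaHigh = 32 / (0.5 * N);                  % filter width from the high-pass cut-off

Hlow  = exp(-r2 ./ (2 * sigmaLow^2));        % Gaussian low-pass filter
Hhigh = 1 - exp(-r2 ./ (2 * sigmaHigh^2));   % Gaussian high-pass filter

% Steps 1-4: FFT, apply the filter, then inverse FFT back to the spatial domain.
F      = fftshift(fft2(img));
lowSF  = real(ifft2(ifftshift(F .* Hlow)));
highSF = real(ifft2(ifftshift(F .* Hhigh)));
```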

2.4 Tasks

The tasks involved a multiple expression identification task with eight categories, and a

dual expression “either-or” task. Both tasks were included to allow examination of different

types of expression recognition when processing a visual stimulus (e.g., a facial expression)

[61]. The dual expression “either-or” task allows us to distinguish one category of stimuli

from another by searching for at least one distinctive difference between the two expressions.

However, such a strategy would not lead to success in the multiple expression identification

task, which requires additional steps and arguably a higher level of specificity to accurately

identify the expression. Therefore, we assumed that different strategies and information

utilisation would be involved in these two tasks, and that the identification task is more

complex since it requires more information for success.


Both the multiple and dual expression tasks were designed and controlled using E-

Prime Professional 2.0. The apparatus and settings were the same as in the validation task.

The tasks were as follows:

2.4.1 Multiple expression identification task

An expression identification task was used to examine whether pain could be identified

from faces in challenging visual conditions. We also included the set of core basic emotions

(i.e., anger, disgust, fear, happiness, sadness, surprise [22]) and a neutral facial expression for

comparison. A total of 240 images were used, which consisted of 80 broad-SF images, 80

low-SF images, and 80 high-SF images. Each participant completed 240 trials with a break

after every 60 trials.

In each trial, participants were shown a fixation cross at the centre of the screen for 500

msec followed by a face stimulus. Face stimuli were presented one at a time, each for a

fixed presentation time of 300 msec. Under natural viewing conditions, on average, humans

move their eyes every 300 msec to extract enough visual information for further processing

[33]; thus 300 msec is roughly comparable to the visual processing during a single fixation

episode in natural viewing conditions [21]. Each of the stimuli was randomly jittered over ±

0.3° to prevent participants from fixating on a particular feature. Following each face

stimulus, a list of eight options of expressions (1 = happiness, 2 = sadness, 3 = disgust, 4 =

surprise, 5 = pain, 6 = anger, 7 = fear and 8 = neutral) was shown. Participants were required

to choose the expression of the face by pressing the corresponding key (1 to 8) on the

keyboard. The options of expressions remained on the screen until participants made a

response. Following a response, a blank page was presented for 500 msec to reduce the

adaptation effect on the retina, then the next trial began with the fixation cross. The stimuli

were shown in a random order.


Recognition accuracy served as the key outcome variable on this task. As there were

eight possible responses, which could increase the manual response latency and its variance,

response time was not examined. Since the task was not time dependent, and instructions

were displayed in each trial, a practice session was not included.

2.4.2 Dual expression “either-or” task

This task considered whether participants could distinguish a facial expression of pain

from a basic emotion in challenging visual conditions. This is an “either-or” task that requires

participants to discriminate whether a given face was showing expression A or expression B.

We investigated this in two versions of the task: one was between pain and a basic emotion

perceived as similar to pain (i.e., fear), and the other was between pain and a basic emotion

perceived as different from pain (i.e., happiness). The choice of comparison expressions was

determined by responses gained in the validation, which allowed us to consider the

similarity/difference between pain and emotions by their rated valence and arousal levels.

The stimuli in this task comprised broad-SF, low-SF, and high-SF images of pain,

fear, and happiness facial expressions. There were 30 images for each facial expression (i.e.,

there were 10 actors displaying each expression, and each actor/expression was presented in

three different states: broad-SF, low-SF and high-SF). This task consisted of three different

discrimination conditions: pain-fear (similar condition), pain-happiness (different condition),

and fear-happiness (counter-balanced section). Each condition was presented as a blocked set

of trials, each comprising 240 trials (each image appeared four times).

In each trial, a fixation cross was presented for 500 msec at the centre of the screen

followed by a face stimulus. The face stimulus was presented for 300 msec and randomly

jittered over ± 0.3° to prevent participants from fixating on a particular feature. Participants

were asked to discriminate between two different expressions by pressing the corresponding


button on a Serial Response Box (SRBox) as accurately and as quickly as possible. For

example, in the pain-happiness condition, participants indicated whether the facial expression

shown was pain or happiness. A response could be made within 2000 msec of the onset of

stimulus, after which the trial terminated and moved onto the next trial (i.e., with or without

response). A 2000 msec limit is recommended for studies of this type as a reasonable cut-off

to minimize the effect of response time outliers [52]. It is also considered long enough to

allow participants to make manual responses after conscious processing of a visual stimulus

[43,70,71]. Once a response had been made, a blank screen was displayed for 500 msec to

reduce any adaptation effects.

The stimuli in each condition were presented randomly, and the order of conditions was

counterbalanced across participants. Each participant was required to complete the three

conditions with a break scheduled between each one. A practice session of 10 trials

consisting of anger and surprise facial expressions preceded the main experimental

conditions.

2.5 Procedure

Ethics committee approval was granted by the Department of Psychology Ethics

Committee (Ref. 13-002) and the Department of Health Ethics Committee (Ref. EP 12/13 64) of

the University of Bath. Informed consent was obtained from all participants prior to taking

part in the study. All participants were asked to complete the multiple expression task first,

followed by the dual expression “either-or” task1. This fixed order was chosen over

1 The current study was a discrete part of a larger PhD thesis by the first author, in which a battery of self-report measures was employed, including the Anxiety Sensitivity Index-3 (ASI-3) [68], the Pain Catastrophising Scale [66], the Interpersonal Reactivity Index (IRI) [18,19], and the Positive and Negative Affect Schedule (PANAS) [77]. However, we do not report these measures here as they were not relevant to the specific goals of this manuscript. We would, however, be happy to provide further details on request.


counterbalancing to avoid any potential priming effect from a subset of the stimuli. The

participants were given £5 each.

2.6 Data analysis

For the identification task, the dependent variable was accuracy (number of hits). Data

were entered into mixed-groups analysis of variance (ANOVA) to examine the effect of

perceptual information (broad-SF vs. low-SF vs. high-SF) and expression type (anger vs.

disgust vs. fear vs. happiness vs. neutral vs. pain vs. sad vs. surprise) on identification

accuracy. Participant gender was included as a between-group variable. Simple effects

analyses were applied when significant interactions were found. Post hoc analyses with

Bonferroni-type correction were conducted when required, and the corrected cut-off point for

each analysis was calculated as 0.05 divided by the number of comparisons (e.g. when there

are three comparisons, the corrected cut-off point is 0.05/3 = 0.0167). The significance levels

after Bonferroni-type adjustment (p < .05, p < .01 or p < .001) and the effect sizes (Cohen’s

d) are reported for each comparison. This procedure was applied throughout the analyses.

For the dual expression “either-or” task, dependent variables were accuracy and

response time (RT). Data were analysed separately for each of the three presentation pair

conditions. For each paired condition, data were entered into mixed-groups ANOVA to

examine the effect of perceptual information (broad-SF vs. low-SF vs. high-SF) and

expression type (fear vs. pain or happiness vs. pain) on discrimination accuracy and RT.

Participant gender was entered as a between-groups variable. Simple effects analyses were

applied when significant interactions were found. Post hoc analyses followed the same principles

as described above for the multiple expression task.


3 Results

3.1 Multiple expression identification task

3.1.1 Data screening

The number of hits (correct responses) was calculated for each participant. No outliers

were found for the overall number of hits for each participant, with z-scores lying within an

acceptable range i.e., between -3.29 and 3.29 [67]. The data were normally distributed, with

acceptable z-scores of skewness and kurtosis between -1.96 and 1.96 [14], and were

approximately homogeneous. For factors where sphericity could not be assumed, F-ratios

with adjusted degrees of freedom and p-values are reported below.

For completeness and easy comparison across studies, the simple hit rates and unbiased

hit rates were calculated and are reported in Table 3.

------Insert Table 3 around here------

3.1.2 Role of low-SF and high-SF information

Means and standard deviations of the number of hits for female and male participants in

each condition are presented in Table 4. One sample binomial tests revealed that expressions

were identified above chance level (12.5%) in all conditions by both female and male

participants (all p’s < .001, Cohen’s g’s > 0.28).

------Insert Table 4 around here------

Data were then entered into a 3 (SF information: broad-SF vs. low-SF vs. high-SF) × 8

(expressions: anger vs. disgust vs. fear vs. happiness vs. neutral vs. pain vs. sadness vs.

surprise) × 2 (participant gender: female vs. male) mixed-groups ANOVA. Statistical


analysis revealed a significant main effect of SF information, F (2, 124) = 42.64, p < .001,

ηp² = .41. Participants produced more hits on faces with broad-SF information (mean = 58.69,

SD = 6.75) than both low-SF (mean = 55.09, SD = 6.03; p < .001, Cohen’s d = 0.56) and

high-SF information (mean = 52.78, SD = 7.32; p < .001, Cohen’s d = 0.84), and the number

of hits was higher when presented with low-SF rather than high-SF information (p < .01,

Cohen’s d = 0.34).

The main effect of facial expression type was also significant, F (6.03, 374.07) = 56.38,

p < .001, ηp² = .48 (see Figure 2). The recognition of pain expressions was significantly lower

than those found for happiness or surprise faces (both p’s < .001, Cohen’s d’s > 1.12), but not

different to the remaining expressions (all p’s > .26). In terms of the non-pain comparisons,

the most accurately identified expressions were happiness and surprise, which were

significantly better than the other expressions (all p’s < .001, Cohen’s d’s > 0.75); and a

greater number of hits were found for happiness faces compared to surprise faces (p < .001,

Cohen’s d = 1.54). In addition, the recognition of anger and sadness was more accurate than

those for disgust (both p’s < .001, Cohen’s d’s > 0.75) and fear (both p’s < .01, Cohen’s d’s >

0.55). Also, the neutral faces were better recognised than disgust (p < .01, Cohen’s d = 0.59).

------Insert Figure 2 around here------

A significant interaction was found between SF information × expression type, F

(13.22, 819.68) = 7.03, p < .001, ηp² = .10; see Figure 3. Simple effects analysis was applied

to examine the effect of perceptual information on each expression type. The perceptual

information had a significant effect on pain (F (2, 62) = 24.72, p < .001, ηp² = .45), anger

(F (2, 62) = 13.68, p < .001, ηp² = .31), disgust (F (2, 62) = 16.88, p < .001, ηp² = .36),

fear (F (2, 62) = 5.04, p < .01, ηp² = .14), neutral (F (2, 62) = 8.34, p < .001, ηp² = .22),

and sadness (F (2, 62) = 18.69, p < .001, ηp² = .38); but not on happiness (F (2, 62) = 0.43,

p = .65) or surprise (F (2, 62) = 2.35, p = .10). Different patterns emerged based on the type

of expression under


investigation. For pain and disgust expressions there were more hits for broad-SF information

than low-SF (both p’s < .05, Cohen’s d’s > 0.24) and high-SF information (both p’s < .001,

Cohen’s d’s > 0.67), and better recognition for low-SF information compared to high-SF

information (both p’s < .01, Cohen’s d’s > 0.37). For anger, fear and sadness expressions,

recognition was better for broad-SF information than low-SF (all p’s < .05, Cohen’s d’s >

0.35) and high-SF information (all p’s < .05, Cohen’s d’s > 0.35); however, the difference

between low-SF and high-SF information was not significant for these expressions (all p’s

> .21). For neutral faces, the hits for low-SF information were less than broad-SF and high-

SF information (both p’s < .01, Cohen’s d’s > 0.40), and there was no significant difference

between broad-SF and high-SF information (p = 1.00).

------Insert Figure 3 around here------

In terms of gender differences, the main effect of gender was also found to be

significant, F (1, 62) = 4.06, p < .05, ηp² = .06: females (mean = 170.88, SD = 17.18)

produced more hits than males (mean = 161.97, SD = 18.20). The interaction between

expression type × gender was also significant, F (6.03, 374.07) = 2.13, p < .05, ηp² = .03; see

Figure 4. Simple effects analysis was applied to examine the gender difference within each

expression type. The gender difference was only significant in identification of disgust (F (1,

62) = 13.10, p < .001, ηp² = .17), where females (mean = 18.64, SD = 4.17) performed better

than males (mean = 14.06, SD = 5.85). The gender differences in other expressions were not

significant, all F’s < 1.22, all p’s > .27. The interactions between SF information and

participants’ gender (F (2, 124) = 0.31, p = .74), and SF information, expression, and

participants’ gender (F (13.22, 819.68) = 1.35, p = .18) were not significant. No other

significant effects were found.

------Insert Figure 4 around here------


3.2 Dual expression “either-or” task

Three different analyses were conducted, based on the different expression pairings:

pain-fear, pain-happiness and fear-happiness.

3.2.1 Data screening

One male participant did not complete this task. Data were screened to remove trials

with response times shorter than 200 msec or longer than 2000 msec. The overall number of

hits was calculated for each participant in each discrimination condition. One participant

(female) was removed due to low response accuracies in both pain-happiness and fear-

happiness discrimination, with z-scores lower than -3.29. One sample binomial tests were

applied in all three pairing conditions, which revealed that both expressions in each pairing

were identified above chance level (50%) by both female and male participants (all p’s

< .001, Cohen’s g’s > 0.35). Simple hit rates and unbiased hit rates were calculated (after

screening) and are reported in Table 5.

------Insert Table 5 around here------

The mean RTs were calculated for each participant under the different levels of the

experiment. A second female participant was removed due to long RTs in the pain-fear

section (z-scores of the mean RTs exceeded 3.29 at a number of levels). Final data for this task

were from a sample of 61 participants (31 females). Distributions had an acceptable level of

skewness for the number of hits and RTs [52], and were approximately homogeneous.

Corrected F values are reported for factors where sphericity could not be assumed.

3.2.2 Pain-fear

Accuracy and RT data were entered, respectively, into two separate 3 (SF information:

broad-SF vs. low-SF vs. high-SF) × 2 (expressions: pain vs. fear) × 2 (participant gender:


female vs. male) mixed-group ANOVAs. Means and standard deviations can be found in

Tables 6 and 7.

------Insert Table 6 & 7 around here------

Analysis of the accuracy data revealed no significant main or interaction effects, all F’s

< 3.02, all p’s > .05.

The RT analysis, however, revealed a significant main effect of expression type, F (1,

59) = 19.00, p < .001, ηp² = .24. Here, responses to pain faces (mean = 695 msec, SD = 135)

were faster than those found for fear faces (mean = 732 msec, SD = 135). There was no

significant effect of SF information (F (2, 118) = 2.06, p = .13), or gender (F (1, 59) = 3.36, p

= .07). None of the interactions was significant (all F’s < 2.06, all p’s > .13).

3.2.3 Pain-Happiness

A similar analysis was conducted on accuracy and RT data, with means and standard

deviations presented in Tables 6 and 7.

Statistical analysis of the accuracy data revealed a significant main effect of SF information,

F (2, 118) = 14.37, p < .001, ηp² = .20. There were more hits for broad-SF information (mean

= 74.46, SD = 3.26) than for either high-SF (mean = 72.15, SD = 5.39; p < .001, Cohen’s d =

0.52) or low-SF information (mean = 73.05, SD = 4.74; p < .01, Cohen’s d = 0.35). However,

the difference between high-SF and low-SF information was not significant (p = .13). The

main effect of expression type was significant, F (1, 59) = 4.63, p < .05, ηp² = .07, with

accuracy being better for happiness faces (mean = 111.25, SD = 6.33) compared to pain faces

(mean = 108.41, SD = 9.48).

A significant interaction was also found between SF information and expression type,

F (1.85, 109.38) = 3.69, p < .05, ηp² = .06 (see Figure 5). Simple effects analysis was applied


to examine the effect of SF information on each expression type. The SF information had

significant effects on pain (F (2, 58) = 10.78, p < .001, ηp² = .27) and happy faces (F (2, 58) =

3.48, p < .05, ηp² = .11). For pain faces, using broad-SF information was more accurate

compared to when using high-SF (p < .001, Cohen’s d = 0.49) or low-SF information (p

< .01, Cohen’s d = 0.32); and using low-SF information was better than using high-SF

information (p < .05, Cohen’s d = 0.20). However, there was no difference in hits for happy

faces with broad-SF, high-SF and low-SF information (all p’s > .08). There were no

significant gender differences, F (1, 59) = 0.12, p = .73. All other interactions were non-

significant (all F’s < 1.60, p’s > .21).

------Insert Figure 5 around here------

The RT analysis revealed a significant main effect of SF information, F (2, 118) =

32.45, p < .001, ηp² = .36. Performance was faster when presented with broad-SF information

(mean = 623 msec, SD = 107) compared with either low-SF (mean = 648 msec, SD = 114; p

< .001, Cohen’s d = 0.22) or high-SF information alone (mean = 659 msec, SD = 104; p

< .001, Cohen’s d = 0.34); but the difference between low-SF and high-SF was not

significant (p = .10).

The main effect of expression type was also significant, F (1, 59) = 10.53, p < .01,

ηp² = .15. Happiness faces (mean = 634 msec, SD = 106) were identified faster than pain faces

(mean = 652 msec, SD = 111). There was no significant gender difference (F (1, 59) = 0.06,

p = .81), and none of the interactions was significant (all F’s < 1.10, all p’s > .29).

3.2.4 Fear-happiness

Although not the primary focus of this study, for completeness analyses were also

conducted on the fear-happiness accuracy and mean RT data, each entered into a 3

(SF information: broad-SF vs. low-SF vs. high-SF) × 2 (expressions: fear vs. happiness) × 2


(participant gender: female vs. male) mixed-groups ANOVA. Means and standard deviations

are presented in Tables 6 and 7.

Analysis of the accuracy data revealed a significant interaction of SF information,

expression type, and participant gender, F (2, 118) = 5.12, p < .01, ηp² = .08. Simple effects

analyses were applied to examine the gender differences. A significant gender difference

was found in recognising happy faces presented with broad-SF information (F (1, 59) = 4.52, p

< .05, ηp² = .07), in which females produced more hits than males. There was no gender

difference in other conditions (all F’s < 1.74, p > .19).

None of the main effects or other interactions was significant (all F’s < 1.89,

all p’s > .15).

Statistical analysis of the RT data revealed a significant main effect of SF information, F

(2, 118) = 19.75, p < .001, ηp² = .25. Broad-SF faces (mean = 609 msec, SD = 97) were

identified faster than low-SF (mean = 625 msec, SD = 103; p < .001, Cohen’s d = 0.16) and

high-SF faces (mean = 634 msec, SD = 101; p < .001, Cohen’s d = 0.25), but the difference

between low-SF and high-SF information was not significant (p = .17). The main effect of

expression was significant, F (1, 59) = 30.06, p < .001, ηp² = .34. Happiness faces (mean =

608 msec, SD = 95) were identified faster than fear faces (mean = 638 msec, SD = 106).

There was no significant gender difference (F (1, 59) = 0.22, p = .64), and no

significant interactions (all F’s < 1.79, all p’s > .17).

4 Discussion

Different types of SF information (low-SF vs. high-SF) affected the recognition of

facial expressions of pain. As expected, the recognition of pain faces was best when low-SF

and high-SF information were available together, and reduced when either type was removed.


This pattern was also found for most of the core emotional expressions, with two exceptions:

happiness and surprise (see below). This general perceptual information effect has previously

been reported in emotion recognition [10], although not in pain.

Whilst losing SF information can reduce the recognition accuracy of pain, performance

was still above chance level showing that the presence of either low-SF or high-SF

information is sufficient for the accurate recognition of pain. In addition, when compared to

most emotional expressions used here, the identification accuracy for pain expressions was

similar: only happiness and surprise expressions were better recognised (which may be due to

differences in valence and arousal; see Figure 1). Pain may be similar to other negative

emotional expressions in how readily it is identified [36,63], in that both low-SF and high-SF

information are required to resolve recognition with a good level of accuracy. Interestingly,

however, reducing the amount of low-SF information available had a more conspicuous

influence on the identification of pain (alongside disgust) in comparison with basic emotions.

Together, these findings not only support the view that the loss of specific types of perceptual

information reduces our ability to detect and identify pain from facial expressions, but they

also indicate that pain and the core emotions share similar perceptual requirements [79].

The influence of SF information on expression recognition was also dependent on the

task being performed. This confirms previous reports that task parameters are important when

looking at SF effects [60,65]. When conducting the multiple expression task, identification of

pain expressions was reduced by a loss of both types of SF information. However, the effect

of losing low-SF information was greater. When asked to discriminate between two

contrasting expressions, a more complex pattern was found that depended upon the

expression pairs involved. When expression pairs were perceived as being very different (in

both arousal and valence) from one another (i.e., pain and happiness), the role of low-SF and

high-SF information was similar to that found in the multiple expression task. Removal of


either low-SF or high-SF information reduced pain recognition accuracy, with a stronger

impact found when low-SF information was removed, but had no effect on the accurate

recognition of happiness expressions. However, when expression pairs were perceived as

similar to each other (i.e., pain and fear), the removal of both types of SF information was

less important. One possible explanation for this difference within the discrimination task

might be because the distinctiveness of low-SF or high-SF information contained in

expressions perceived similar to each other (i.e., pain and fear) is limited or too subtle to

make a difference. Even though pain and fear are encoded by different facial action units [24],

they show a similar amount of negative affect at a similar arousal level, and both states might be

experienced (perhaps simultaneously) when in pain [17,40].

An additional interesting finding from the pain-fear discrimination task was that even

though both expressions were considered similar, faster responses were made to pain faces

compared to those expressing fear. This implies that even when perceptual parameters are

similar, pain expressions seem to be easier to process and/or stand out as being more

distinctive. Reasons why pain judgments were made faster than fear judgments are unclear,

but may reflect the fact that pain closely signals physical (sensory) harm and so is prioritized,

whereas fear responses may occur to a range of (non-physical) threats also. The reliability of

this pain judgment effect is certainly worth considering further in future studies.

A key question addressed here was whether low-SF or high-SF information would

play a prominent role in identifying facial expressions of pain. For the multiple expression

task, we found that responses were more accurate when presenting low-SF over high-SF

information. This is consistent with views that low-SF, which conveys coarse information

may facilitate the global processing of faces [26]. However, this study also showed that the

low-SF information advantage was only found for one other type of expression, namely

disgust, in the identification task. Furthermore, this advantage was only found for pain within


the different pair (i.e. pain and happiness) in the dual emotion “either-or” task. Again this

suggests that the relative contribution of low-SF and high-SF information depends on both

the type of expression and task parameters. If low-SF information is predominantly used for

recognising pain faces, this may have a social advantage. A person who is experiencing pain

may not always be able to display facial expression to others in a clear way. For example,

challenging visual conditions may mean viewing time is brief, faces are obscured, or they are

viewed at distance or in the periphery. In such situations, high-SF information is reduced and

only information conveyed by low-SF is available; it would therefore be adaptive to be able

to quickly detect and accurately identify pain using low-SF information. Mechanisms may

have also evolved to facilitate this. Indeed, sub-cortical visual processing pathways have been

proposed to transfer coarsely degraded (low-SF) information to the amygdala [74], which has

previously been found to play a pivotal role in processing social cues and threatening facial

expressions [59], and in the judgement of others’ suffering [45,64,72].

The current study also produced some unexpected effects. For example, although losing

low-SF or high-SF information generally seemed to affect recognition accuracy, this was not found

for happiness and surprise, both of which were the best-identified expressions. Whilst an

advantage for happiness has been found before [8,10,39], this is not as well reported for

surprise. Equally puzzling was the failure to find gender differences in the effects of SF

information on expression identification. Whilst male-female differences in expression

recognition have been previously reported [28,44,53,69], we did not find consistent evidence

for gender differences. A third unexpected finding was that the low-SF advantage over high-

SF information for pain and emotional expressions was found for accuracy, but not response

times. Others have also reported that recognition response times for emotional expressions

are either similar [74], or when differences are found, are inconsistent [2,39,70]. Whilst this

may again be due to variation in task parameters and expressions, studies, including the one


reported here, also utilize measurement techniques that prevent the dissociation between

visual processing and response execution [12,70]. This may therefore disguise a subtle

temporal advantage of processing low-SF or high-SF information. More sophisticated

measures, such as saccadic responses and event-related potentials (ERPs), may be helpful in

revealing any temporal advantage that may exist.

There are some limitations and interesting future directions to be considered. Stimuli

were not genuine facial expressions, but instead images created using actors. Our confidence

in extrapolating our findings outside of the laboratory is limited, and the use of authentic

expressions in real world settings will bring new challenges to the quality and range of

expressions. A long-term solution will be to collate results from different studies, utilizing

different methods and techniques to assess the consistency of such effects. Another potential

methodological modification is to consider presentation times, which were set to 300 msec in

this study. Exposure time and image information (in SF) both play a role in face processing

[4,58]. For example, it has been shown that when exposure time is increased, correct

detection of faces from high-SF information increases, but decreases for low-SF faces [4].

Similarly, it would be interesting to examine the role of SF when viewing dynamic

expressions, and whether low- and high-SF information aids expression identification at

different stages of perception. A third area for development could be to consider the

somewhat arbitrary (but standard) SF cut-off values used to create the stimuli. Future

research could consider the actual SF cut-off thresholds for coarse and fine-detailed

information in processing facial expressions, including pain. If our results are consistent, then

they suggest that alongside facial action units, other types of information are important in the

processing of pain, and other facial expressions. Reducing SF information has more of an

effect on the recognition accuracy of negative facial expressions, including pain, than on non-

negative expressions (surprise, happiness). Finally, forced choice paradigms were used in


both of our tasks in the present study, which could have resulted in inflated recognition rates

compared with those found when using open response paradigms [36,63]. It may be worth

considering the use of open response paradigms in future research for degraded expressions.
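To make the SF manipulation concrete, the sketch below shows one common way of producing low-SF and high-SF versions of a face image with a Gaussian filter in the Fourier domain. It is a minimal illustration only: the cut-off values (8 and 24 cycles per image) and the function name are assumptions for exposition, not necessarily the parameters or code used to create the stimuli in this study.

# Minimal sketch of a spatial-frequency (SF) manipulation for face images.
# Cut-off values below are illustrative, not those used in this study.
import numpy as np

def sf_filter(image, cutoff_cycles, keep="low"):
    """Return a low-SF or high-SF version of a grayscale image (2-D array)."""
    rows, cols = image.shape
    # Frequency of each Fourier coefficient, expressed in cycles per image.
    fy = np.fft.fftfreq(rows) * rows
    fx = np.fft.fftfreq(cols) * cols
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    # Gaussian low-pass transfer function centred on zero frequency.
    low_pass = np.exp(-(radius ** 2) / (2.0 * cutoff_cycles ** 2))
    transfer = low_pass if keep == "low" else 1.0 - low_pass
    return np.fft.ifft2(np.fft.fft2(image) * transfer).real

face = np.random.rand(256, 256)                           # stand-in for a face image
low_sf = sf_filter(face, cutoff_cycles=8, keep="low")     # coarse information only
high_sf = sf_filter(face, cutoff_cycles=24, keep="high")  # fine detail only
broad_sf = face                                           # unfiltered (all frequencies)

In practice, filtered images would typically also be equated for mean luminance and contrast before presentation, a step omitted here for brevity.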

In sum, this is a novel investigation into the role of SFs, which convey different types of perceptual information, in the identification of facial expressions of pain. Pain faces could be identified reliably from limited perceptual information, although performance was best when both types of information were available. Low-SF information played the more prominent role in identifying pain, indicating that in challenging visual conditions individuals perceive an overall affective quality of pain expressions rather than detailed facial features.


Acknowledgments

This research was funded by a Graduate School PhD Scholarship provided by the

University of Bath to the first author. EK and CE have also received unrelated research funding support from Reckitt Benckiser Healthcare (UK) Limited and the Engineering and Physical Sciences Research Council (EPSRC).

The authors would like to thank Professor Chris Budd, Department of Mathematics,

University of Bath and Dr Nathan Smith, Department of Electrical Engineering, University of

Bath for helping to establish the image spatial frequency manipulation methods.

The authors would also like to thank Professor Frederic Gosselin, Department of

Psychology, University of Montreal for granting permission to use the STOIC database.

Conflict of interest statement

The authors have no conflicts of interest to declare in relation to this article.


References

[1] Adolphs R. Recognizing Emotion from Facial Expressions: Psychological and Neurological Mechanisms. Behav Cogn Neurosci Rev 2002;1:21–62.

[2] Aguado L, Serrano-Pedraza I, Rodriguez S, Roman FJ. Effects of Spatial Frequency Content on Classification of Face Gender and Expression. Span. J. Psychol. 2010;13:525–537.

[3] Ahissar M, Hochstein S. The reverse hierarchy theory of visual perceptual learning. Trends Cogn. Sci. 2004;8:457–64.

[4] Bachmann T. Different trends in perceptual pattern microgenesis as a function of the spatial range of local brightness averaging. Towards an empirical method for the differentiation between global and local levels of form as related to processing in real time. Psychol. Res. 1987;49:107–11.

[5] Bar M. A cortical mechanism for triggering top-down facilitation in visual object recognition. J. Cogn. Neurosci. 2003;15:600–609.

[6] Bar M. Visual objects in context. Nat. Rev. Neurosci. 2004;5:617–29.

[7] Bartlett MS, Littlewort GC, Frank MG, Lee K. Automatic decoding of facial movements reveals deceptive pain expressions. Curr. Biol. 2014;24:738–43.

[8] Becker DV, Neel R, Srinivasan N, Neufeld S, Kumar D, Fouse S. The vividness of happiness in dynamic facial displays of emotion. PLoS One 2012;7.

[9] Blais C, Roy C, Fiset D, Arguin M, Gosselin F. The eyes are not the window to basic emotions. Neuropsychologia 2012;50:2830–2838.

[10] Bombari D, Schmid PC, Schmid Mast M, Birri S, Mast FW, Lobmaier JS. Emotion recognition: The role of featural and configural face information. Q. J. Exp. Psychol. 2013;66:2426–2442.

[11] Bullier J. Integrated model of visual processing. Brain Res. Rev. 2001;36:96–107.

[12] Calvo MG, Nummenmaa L. Time course of discrimination between emotional facial expressions: the role of visual saliency. Vision Res. 2011;51:1751–9.

[13] De Cesarei A, Codispoti M. Spatial frequencies and emotional perception. Rev. Neurosci. 2013;24:89–104.

[14] Clark-Carter D. Quantitative Psychological Research: The Complete Student’s Companion. Taylor & Francis, 2009.

[15] Craig KD. Social communication of pain enhances protective functions: a comment on Deyo, Prkachin and Mercer (2004). Pain 2004;107:5–6.


[16] Craig KD, Hyde SA, Patrick CJ. Genuine, suppressed and faked facial behavior during exacerbation of chronic low back pain. Pain 1991;46:161–171.

[17] Crombez G, Vlaeyen JW, Heuts PH, Lysens R. Pain-related fear is more disabling than pain itself: evidence on the role of pain-related fear in chronic back pain disability. Pain 1999;80:329–39.

[18] Davis MH. A multidimensional approach to individual differences in empathy. 1980.

[19] Davis MH. Measuring individual differences in empathy: Evidence for a multidimensional approach. J Pers Soc Psychol 1983;44:113–126.

[20] Deyo KS, Prkachin KM, Mercer SR. Development of sensitivity to facial expression of pain. Pain 2004;107:16–21.

[21] DiCarlo JJ, Maunsell JH. Form representation in monkey inferotemporal cortex is virtually unaltered by free viewing. Nat. Neurosci. 2000;3:814–21.

[22] Ekman P, Friesen W V. Constants across cultures in the face and emotion. J Pers Soc Psychol 1971;17:124–129.

[23] Ekman P, Friesen W V, Ellsworth P. Emotion in the human face: Guidelines for research and an integration of findings. Oxford, England: Pergamon Press, 1972 xii, 191.

[24] Ekman P, Friesen W V, Hager JC. The Facial Action Coding System. Salt Lake City: A Human Face, 2002.

[25] Van Essen DC, Deyoe EA. Concurrent processing in the primate visual cortex. In: Gazzaniga M, editor. The Cognitive Neurosciences. Cambridge: Bradford Book, 1995. pp. 383–400.

[26] Goffaux V, Rossion B. Faces are “spatial” – holistic face perception is supported by low spatial frequencies. J. Exp. Psychol. Hum. Percept. Perform. 2006;32:1023–1039.

[27] Hall JA. Gender effects in decoding nonverbal cues. Psychol. Bull. 1978;85:845–857.

[28] Hall JA, Matsumoto D. Gender Differences in Judgments of Multiple Emotions From Facial Expressions. Emotion 2004;4:201–206.

[29] Hammal Z. Log-Normal and Log-Gabor descriptors for expressive events detection and facial features segmentation. Inf. Sci. (Ny). 2014;288:462–480.

[30] Hammal Z, Kunz M. Pain monitoring: A dynamic and context-sensitive system. Pattern Recognit. 2012;45:1265–1280.

[31] Harrigan JA, Rosenthal R, Scherer KR. The New Handbook of Methods in Nonverbal Behavior Research. Oxford University Press, 2005.

[32] Hegdé J. Time course of visual perception: coarse-to-fine processing and beyond. Prog. Neurobiol. 2008;84:405–39.


[33] Henderson JM, Hollingworth A. Eye movements during scene viewing: an overview. In: Underwood G, editor. Eye Guidance in Reading and Scene Perception. New York: Elsevier, 1998. 269–283.

[34] Hill ML, Craig KD. Detecting Deception in Facial Expressions of Pain: Accuracy and Training. Clin. J. Pain 2004;20:415–422.

[35] Hole G, Bourne V. Face Processing: Psychological, Neuropsychological, and Applied Perspectives. OUP Oxford, 2010.

[36] Kappesser J, Williams AC de C. Pain and negative emotions in the face: judgements by health care professionals. Pain 2002;99:197–206.

[37] Kauffmann L, Ramanoël S, Peyrin C. The neural bases of spatial frequency processing during scene perception. Front. Integr. Neurosci. 2014;8:37.

[38] Keogh E. Gender differences in the nonverbal communication of pain: A new direction for sex, gender, and pain research? Pain 2014;155:1927–1931.

[39] Kumar D, Srinivasan N. Emotion perception is mediated by spatial frequency content. Emotion 2011;11:1144–1151.

[40] Kunz M, Lautenbacher S, LeBlanc N, Rainville P. Are both the sensory and the affective dimensions of pain encoded in the face? Pain 2012;153:350–358.

[41] Kunz M, Mylius V, Schepelmann K, Lautenbacher S. On the relationship between self-report and facial expression of pain. J. Pain 2004;5:368–376.

[42] LeResche L, Dworkin SF, Wilson L, Ehrlich KJ. Effect of temporomandibular disorder pain duration on facial expressions and verbal report of pain. Pain 1992;51:289–295.

[43] Liu J, Harris A, Kanwisher N. Stages of processing in face perception: an MEG study. Nat. Neurosci. 2002;5:910–916.

[44] Montagne B, Kessels RC, Frigerio E, Haan EF, Perrett D. Sex differences in the perception of affective facial expressions: Do men really lack emotional sensitivity? Cogn. Process. 2005;6:136–141.

[45] Ochsner KN, Zaki J, Hanelin J, Ludlow DH, Knierim K, Ramachandran T, Glover GH, Mackey SC. Your pain or mine? Common and distinct neural systems supporting the perception of pain in self and other. Soc. Cogn. Affect. Neurosci. 2008;3:144–160.

[46] Patrick CJ, Craig KD, Prkachin KM. Observer judgments of acute pain: Facial action determinants. J Pers Soc Psychol 1986;50:1291–1298.

[47] Poole GD, Craig KD. Judgments of genuine, suppressed, and faked facial expressions of pain. J Pers Soc Psychol 1992;63:797–805.

[48] Prkachin KM. Dissociating spontaneous and deliberate expressions of pain: signal detection analyses. Pain 1992;51:57–65.


[49] Prkachin KM. Effects of deliberate control on verbal and facial expressions of pain. Pain 2005;114:328–338.

[50] Prkachin KM. The consistency of facial expressions of pain: a comparison across modalities. Pain 1992;51:297–306.

[51] Prkachin KM, Berzins S, Mercer SR. Encoding and Decoding of Pain Expressions - a Judgment Study. Pain 1994;58:253–259.

[52] Ratcliff R. Methods for dealing with reaction time outliers. Psychol. Bull. 1993;114:510–32.

[53] Rotter NG, Rotter GS. Sex differences in the encoding and decoding of negative facial emotions. J. Nonverbal Behav. 1988;12:139–148.

[54] Roy C, Blais C, Fiset D, Gosselin F. Visual information extraction for static and dynamic facial expression of emotions: an eye-tracking experiment. J. Vis. 2010;10:531.

[55] Roy C, Roy S, Fiset D, Hammal Z, Blais C, Rainville P, Gosselin F. Recognizing static and dynamic facial expressions of pain: Gaze-tracking and Bubbles experiments. J. Vis. 2008;8:710.

[56] Roy S, Roy C, Fortin I, Ethier-Majcher C, Belin P, Gosselin F. A dynamic facial expression database. J. Vis. 2007;7:944.

[57] Roy S, Roy C, Hammal Z, Fiset D, Blais C, Jemel B, Gosselin F. The use of spatio-temporal Information in decoding facial expression of emotions. J. Vis. 2008;8:707.

[58] Ruiz-Soler M, Beltran F. Face perception: An integrative review of the role of spatial frequencies. Psychol. Res. 2006;70:273–292.

[59] Sander D, Grafman J, Zalla T. The human amygdala: an evolved system for relevance detection. Rev Neurosci 2003;14:303–316.

[60] Schyns PG, Oliva A. Dr. Angry and Mr. Smile: when categorization flexibly modifies the perception of faces in rapid visual presentations. Cognition 1999;69:243–265.

[61] Sekuler R. Spatial vision and form perception. In: Blake R, editor. Perception. London: McGraw-Hill, 1994. pp. 151–192.

[62] Shapley R, Lennie P. Spatial frequency analysis in the visual system. Annu. Rev. Neurosci. 1985;8:547–83.

[63] Simon D, Craig KD, Gosselin F, Belin P, Rainville P. Recognition and discrimination of prototypical dynamic expressions of pain and emotions. Pain 2008;135:55–64.

[64] Simon D, Craig KD, Miltner WHR, Rainville P. Brain responses to dynamic facial expressions of pain. Pain 2006;126:309–318.


[65] Smith ML, Merlusca C. How task shapes the use of information during facial expression categorizations. Emotion 2014;14:478–87.

[66] Sullivan MJL, Bishop SR, Pivik J. The pain catastrophizing scale: Development and validation. Psychol. Assess. 1995;7:524.

[67] Tabachnick BG, Fidell LS. Using Multivariate Statistics. Pearson Education, Limited, 2012.

[68] Taylor S, Zvolensky MJ, Cox BJ, Deacon B, Heimberg RG, Ledley DR, Abramowitz JS, Holaway RM, Sandin B, Stewart SH, Coles M, Eng W, Daly ES, Arrindell WA, Bouvard M, Cardenas SJ. Robust Dimensions of Anxiety Sensitivity: Development and Initial Validation of the Anxiety Sensitivity Index-3. Psychol. Assess. 2007;19:176–188.

[69] Thayer J, Johnsen BH. Sex differences in judgement of facial affect: A multivariate analysis of recognition errors. Scand. J. Psychol. 2000;41:243–246.

[70] Thorpe S, Fize D, Marlot C. Speed of processing in the human visual system. Nature 1996;381:520–2.

[71] Thorpe SJ. Neuroscience: Seeking categories in the brain. Science 2001;291:260–263.

[72] Vachon-Presseau E, Roy M, Martel MO, Albouy G, Chen J, Budell L, Sullivan MJ, Jackson PL, Rainville P. Neural processing of sensory and emotional-communicative information associated with the perception of vicarious pain. Neuroimage 2012;63:54–62.

[73] De Valois RL, Albrecht DG, Thorell LG. Spatial frequency selectivity of cells in macaque visual cortex. Vision Res. 1982;22:545–59.

[74] Vuilleumier P, Armony JL, Driver J, Dolan RJ. Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nat. Neurosci. 2003;6:624–631.

[75] Wagner HL. On measuring performance in category judgment studies of nonverbal behavior. J. Nonverbal Behav. 1993;17:3–28.

[76] Watier NN, Collin CA, Boutet I. Spatial-frequency thresholds for configural and featural discriminations in upright and inverted faces. Perception 2010;39:502–513.

[77] Watson D, Clark LA, Tellegen A. Development and validation of brief measures of positive and negative affect: the PANAS scales. J Pers Soc Psychol 1988;54:1063–1070.

[78] Willenbockel V, Lepore F, Nguyen DK, Bouthillier A, Gosselin F. Spatial Frequency Tuning during the Conscious and Non-Conscious Perception of Emotional Facial Expressions – An Intracranial ERP Study. Front. Psychol. 2012;3.

[79] Williams AC. Facial expression of pain: an evolutionary account. Behav Brain Sci 2002;25:439–488.


[80] Young AW, Hellawell D, Hay DC. Configurational information in face perception. Perception 1987;16:747–759.


Figure Captions

Figure 1: Allocation of expressions in terms of the valence and arousal ratings. Fear is

the most similar expression to pain, and happiness is the most different from pain.

Figure 2: Identification accuracy for each facial expression in the multiple expression

identification task (error bars reflect SEM).

Figure 3: The identification accuracy for each facial expression displayed by broad-

SF, low-SF and high-SF information in the multiple expression identification task

(error bars reflect SEM).

Figure 4: Gender difference in identification accuracy for each facial expression in the

multiple expression identification task (error bars reflect SEM).

Figure 5: Identification accuracy for happiness and pain expressions displayed by

broad-SF, low-SF and high-SF information in the dual expression “either-or” task

(error bars reflect SEM).


[Figure 1 about here: scatter plot of the eight expressions (anger, disgust, fear, happiness, neutral, pain, sadness, surprise) positioned by valence (x-axis, –3 to 3) and arousal (y-axis, –3 to 3)]
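As a simple check on this allocation, the distance of each expression from pain in valence–arousal space can be computed from the mean ratings in Table 2. The sketch below is our own illustration rather than part of the original analysis; it confirms that fear lies closest to pain and happiness furthest away.

# Euclidean distances from pain in valence-arousal space, using the
# mean ratings reported in Table 2.
import math

RATINGS = {                      # (valence, arousal) means from Table 2
    "Anger":     (-1.68, 1.73),
    "Disgust":   (-1.73, 1.45),
    "Fear":      (-2.04, 2.26),
    "Happiness": ( 1.98, -1.48),
    "Neutral":   ( 0.19, -0.69),
    "Pain":      (-2.08, 1.90),
    "Sadness":   (-1.64, 1.11),
    "Surprise":  (-0.02, 1.16),
}

def distance_from_pain(expression):
    """Distance between an expression and pain in valence-arousal space."""
    vp, ap = RATINGS["Pain"]
    v, a = RATINGS[expression]
    return math.hypot(v - vp, a - ap)

for name in sorted((e for e in RATINGS if e != "Pain"), key=distance_from_pain):
    print(f"{name:<10s} {distance_from_pain(name):.2f}")
# Fear is nearest to pain (~0.36); happiness is furthest (~5.28).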


[Figure 2 about here: number of hits (y-axis, 0–30) for each of the eight expressions]


[Figure 3 about here: number of hits (y-axis, 0–10) for each expression under broad-SF, low-SF and high-SF conditions; asterisks mark significant differences]


[Figure 4 about here: number of hits (y-axis, 0–30) for each expression by female and male participants; asterisk marks a significant difference]


[Figure 5 about here: number of hits (y-axis, 34–38) for pain and happiness under broad-SF, low-SF and high-SF conditions; asterisk marks a significant difference]


Table 1 Confusion matrix of judgements to the expressions in the validation task

Stimuli      Judgements
             Anger   Disgust   Fear   Happiness   Neutral   Pain   Sadness   Surprise
Anger           85         6      2           0         0      4         2          1
Disgust          6        61      1           0         1     17        13          1
Fear             1         3     61           0         1      6         3         25
Happiness        0         0      0          98         0      1         1          0
Neutral          1         0      0           3        79      2        14          1
Pain             1         3      1           9         0     78         5          3
Sadness          0         0      0           1        11      5        83          0
Surprise         1         0      3           0         4      2         0         90


Table 2 Means (and standard deviations) of simple hit rate, unbiased hit rate, and valence and arousal rating for each expression in the validation task

                        Anger          Disgust        Fear           Happiness      Neutral        Pain           Sadness        Surprise
Simple Hit Rate (%)     85.00          61.00          61.00          98.00          79.00          78.00          83.00          90.00
Unbiased Hit Rate (%)   76.05          50.97          54.72          86.52          65.01          52.90          56.93          65.85
Valence                 -1.68 (0.61)   -1.73 (0.44)   -2.04 (0.56)   1.98 (0.59)    0.19 (0.42)    -2.08 (0.70)   -1.64 (0.35)   -0.02 (0.47)
Arousal                 1.73 (0.52)    1.45 (0.38)    2.26 (0.54)    -1.48 (0.86)   -0.69 (0.92)   1.90 (0.65)    1.11 (0.48)    1.16 (0.48)
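The unbiased hit rate in Table 2 follows Wagner [75]: the squared frequency of correct judgements for a category divided by the product of the corresponding stimulus (row) and judgement (column) totals of the confusion matrix. The sketch below is our own illustration rather than the original analysis script; it recovers values close to those reported in Table 2, and small discrepancies (e.g. for surprise) may reflect rounding in the published matrix.

# Wagner's unbiased hit rate, computed from the stimulus-by-judgement
# confusion matrix in Table 1.
import numpy as np

LABELS = ["Anger", "Disgust", "Fear", "Happiness",
          "Neutral", "Pain", "Sadness", "Surprise"]

# Rows = stimulus category, columns = judgement (values from Table 1).
CONFUSION = np.array([
    [85,  6,  2,  0,  0,  4,  2,  1],   # Anger
    [ 6, 61,  1,  0,  1, 17, 13,  1],   # Disgust
    [ 1,  3, 61,  0,  1,  6,  3, 25],   # Fear
    [ 0,  0,  0, 98,  0,  1,  1,  0],   # Happiness
    [ 1,  0,  0,  3, 79,  2, 14,  1],   # Neutral
    [ 1,  3,  1,  9,  0, 78,  5,  3],   # Pain
    [ 0,  0,  0,  1, 11,  5, 83,  0],   # Sadness
    [ 1,  0,  3,  0,  4,  2,  0, 90],   # Surprise
])

def unbiased_hit_rates(matrix):
    """Hu = (correct judgements)^2 / (row total * column total), per category."""
    diag = np.diag(matrix).astype(float)
    row_totals = matrix.sum(axis=1)
    col_totals = matrix.sum(axis=0)
    return diag ** 2 / (row_totals * col_totals)

for label, hu in zip(LABELS, unbiased_hit_rates(CONFUSION)):
    print(f"{label:<10s} {hu:.2%}")   # e.g. Anger ~76%, Pain ~53%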


Table 3 The simple hit rate and unbiased hit rate for each expression displayed by broad-SF, low-SF and high-SF information in the multiple expression identification task

                          Female Participants               Male Participants
                          Broad-SF   Low-SF   High-SF       Broad-SF   Low-SF   High-SF
Simple Hit Rate (%)
  Anger                      76.36    65.45     64.24          73.23    68.39     64.52
  Disgust                    68.48    66.06     51.82          53.87    46.13     40.65
  Fear                       64.24    60.30     57.58          62.90    52.26     54.84
  Happiness                  96.06    94.55     94.85          96.45    97.10     95.81
  Neutral                    64.85    58.48     68.18          65.16    60.97     64.84
  Pain                       71.21    65.76     58.18          66.77    60.97     52.26
  Sadness                    80.30    69.39     65.76          73.23    68.71     64.52
  Surprise                   80.91    86.97     78.79          78.71    79.35     78.06
Unbiased Hit Rate (%)
  Anger                      67.05    51.04     47.29          60.89    47.53     40.83
  Disgust                    46.20    44.72     33.56          36.28    29.71     24.16
  Fear                       47.62    43.01     44.11          43.34    33.20     38.36
  Happiness                  87.50    80.38     70.68          89.01    80.07     70.96
  Neutral                    51.02    38.92     38.07          48.39    38.80     36.61
  Pain                       49.66    46.78     36.87          35.72    33.89     26.29
  Sadness                    51.78    42.95     41.60          45.05    43.43     44.65
  Surprise                   59.35    58.05     55.22          59.28    54.38     55.40


Table 4 Means (and standard deviations) of the number of hits for expressions displayed by broad-SF, low-SF and high-SF information in the multiple expression identification task.

             Female Participants                          Male Participants
             Broad-SF      Low-SF        High-SF          Broad-SF      Low-SF        High-SF
Anger        7.64 (1.75)   6.55 (2.02)   6.42 (1.79)      7.32 (2.53)   6.84 (2.22)   6.45 (2.42)
Disgust      6.85 (1.46)   6.61 (1.73)   5.18 (1.93)      5.39 (2.32)   4.61 (2.01)   4.06 (2.38)
Fear         6.42 (1.79)   6.03 (2.11)   5.76 (2.39)      6.29 (2.00)   5.23 (1.93)   5.48 (2.16)
Happiness    9.61 (0.75)   9.45 (0.83)   9.48 (0.97)      9.65 (0.55)   9.71 (0.46)   9.58 (0.67)
Neutral      6.48 (1.37)   5.85 (1.18)   6.82 (1.40)      6.52 (1.39)   6.10 (1.42)   6.48 (1.65)
Pain         7.12 (2.10)   6.58 (2.00)   5.82 (2.08)      6.68 (1.51)   6.10 (2.33)   5.23 (2.45)
Sadness      8.03 (1.07)   6.94 (1.64)   6.58 (1.66)      7.32 (2.29)   6.87 (1.69)   6.45 (1.96)
Surprise     8.09 (1.67)   8.70 (1.16)   7.88 (1.54)      7.87 (2.00)   7.94 (1.44)   7.81 (1.70)


Table 5 The simple hit rate and unbiased hit rate for the dual expression “either-or” task

                                        Female Participants               Male Participants
                                        Broad-SF   Low-SF   High-SF       Broad-SF   Low-SF   High-SF
Simple hit rate (%)
  Pain vs. fear          Pain              89.77    88.22     89.95          85.77    86.16     85.79
                         Fear              89.94    90.45     91.86          88.86    89.52     87.22
  Pain vs. happiness     Pain              93.46    91.10     90.54          92.47    90.04     87.20
                         Happiness         93.66    92.80     92.14          94.31    92.80     93.22
  Fear vs. happiness     Fear              94.58    95.04     94.24          94.48    92.05     92.40
                         Happiness         95.68    93.17     92.78          92.06    94.64     92.82
Unbiased hit rate (%)
  Pain vs. fear          Pain              80.71    79.66     82.55          75.94    76.78     74.73
                         Fear              80.76    79.97     82.73          76.57    77.58     74.93
  Pain vs. happiness     Pain              87.48    84.39     83.29          87.12    83.38     80.91
                         Happiness         87.58    84.73     83.58          87.34    83.80     81.96
  Fear vs. happiness     Fear              90.44    88.65     87.50          87.15    86.99     85.73
                         Happiness         90.55    88.47     87.38          86.85    87.30     85.79


Table 6 Means (and standard deviations) of the number of hits for each expression displayed by broad-SF, low-SF and high-SF information in the dual expression “either-or” task.

                                   Female Participants                             Male Participants
                                   Broad-SF       Low-SF         High-SF           Broad-SF       Low-SF         High-SF
Pain vs. fear        Pain          35.39 (4.26)   34.94 (4.14)   35.68 (3.52)      33.97 (3.66)   33.83 (4.12)   33.80 (3.95)
                     Fear          35.58 (4.46)   35.84 (3.77)   36.19 (3.83)      35.10 (4.47)   35.30 (4.29)   34.13 (5.12)
Pain vs. happiness   Pain          37.19 (2.66)   36.26 (3.41)   35.87 (3.75)      36.87 (2.57)   35.87 (3.32)   34.73 (4.77)
                     Happiness     37.29 (2.25)   37.03 (2.35)   36.55 (3.49)      37.57 (1.72)   36.93 (2.27)   37.13 (2.39)
Fear vs. happiness   Fear          37.55 (3.21)   37.68 (2.27)   37.32 (2.65)      37.67 (1.95)   36.67 (3.59)   36.87 (3.68)
                     Happiness     38.03 (1.60)   37.00 (2.39)   36.94 (2.77)      36.70 (3.09)   37.67 (2.32)   37.03 (2.90)


Table 7 Means (and standard deviations) of response time (in msec) for each expression displayed by broad-SF, low-SF and high-SF information in the dual expression “either-or” task.

                                   Female Participants                    Male Participants
                                   Broad-SF    Low-SF      High-SF        Broad-SF    Low-SF      High-SF
Pain vs. fear        Pain          654 (117)   661 (102)   667 (106)      717 (155)   740 (163)   734 (162)
                     Fear          707 (118)   697 (108)   718 (116)      756 (165)   762 (152)   756 (157)
Pain vs. happiness   Pain          622 (114)   652 (124)   665 (115)      638 (113)   671 (115)   668 (111)
                     Happiness     617 (102)   632 (116)   652 (107)      616 (117)   638 (117)   650 (102)
Fear vs. happiness   Fear          634 (108)   649 (119)   646 (114)      613 (105)   635 (109)   650 (110)
                     Happiness     604 (90)    610 (84)    629 (82)       584 (103)   608 (114)   610 (118)
