
NeuroImage 57 (2011) 1542–1551


Emotion modulates the effects of endogenous attention on retinotopic visual processing

Ana Gomez a, Marcus Rothkirch a, Christian Kaul b,c, Martin Weygandt d, John-Dylan Haynes d, Geraint Rees b, Philipp Sterzer a,d,e,⁎

a Department of Psychiatry, Campus Charité Mitte, Charité – Universitätsmedizin Berlin, Germany
b Institute of Cognitive Neuroscience & Wellcome Trust Centre for Neuroimaging, University College London, UK
c Department of Psychology and Center for Neural Science, New York University, USA
d Bernstein Center for Computational Neuroscience, Berlin, Germany
e Berlin School of Mind and Brain, Berlin, Germany

⁎ Corresponding author at: Department of Psychiatry, Campus Charité Mitte, Charitéplatz 1, D-10117 Berlin, Germany. Fax: +49 30 450517944. E-mail address: [email protected] (P. Sterzer).

1053-8119/$ – see front matter © 2011 Elsevier Inc. All rights reserved. doi:10.1016/j.neuroimage.2011.05.072

Article info

Article history: Received 5 January 2011; Revised 15 April 2011; Accepted 25 May 2011; Available online 2 June 2011.

Keywords: Visual perception; Emotion; Attention; Functional magnetic resonance imaging; Retinotopic mapping

Abstract

A fundamental challenge for organisms is how to focus on perceptual information relevant to current goals while remaining able to respond to goal-irrelevant stimuli that signal potential threat. Here, we studied how visual threat signals influence the effects of goal-directed spatial attention on the retinotopic distribution of processing resources in early visual cortex. We used a combined blocked and event-related functional magnetic resonance imaging paradigm with target displays comprising diagonal pairs of intact and scrambled faces presented simultaneously in the four visual field quadrants. Faces were male or female and had fearful or neutral emotional expressions. Participants attended covertly to a pair of two diagonally opposite stimuli and performed a gender-discrimination task on the attended intact face. In contrast to the fusiform face area, where attention and fearful emotional expression had additive effects, neural responses to attended and unattended fearful faces were indistinguishable in early retinotopic visual areas: When attended, fearful face expression did not further enhance responses, whereas when unattended, fearful expression increased responses to the level of attended face stimuli. Remarkably, the presence of fearful stimuli augmented the enhancing effect of attention on retinotopic responses to neutral faces in remote visual field locations. We conclude that this redistribution of neural activity in retinotopic visual cortex may serve the purpose of allocating processing resources to task-irrelevant threat-signaling stimuli while at the same time increasing resources for task-relevant stimuli as required for the maintenance of goal-directed behavior.


Introduction

Human perception is characterized by a striking ability to discern relevant from irrelevant information. Relevance depends on the present goals of an individual, but also on the potential significance of goal-unrelated stimuli, such as unattended events that signal threat. A crucial task in perception is therefore to strike a balance between focusing on the current task while remaining able to respond to potential harm. The allocation of processing resources in accordance with current task requirements has been conceptualized as endogenous attention (Corbetta and Shulman, 2002). Endogenous visual attention enhances neural responses in retinotopic cortex and functionally specialized extrastriate visual areas (Kastner et al., 2009; Reynolds and Chelazzi, 2004). Behaviorally, directing endogenous attention covertly to a particular location improves stimulus detection and discrimination, and enhances spatial resolution (Carrasco, 2006).

It is also well established that threat-signaling visual stimuli are processed preferentially. For example, fearful faces are detected more easily (Frischen et al., 2008) and can enhance spatial attention effects on perception (Phelps et al., 2006). Converging evidence from different behavioral and neuroimaging methodologies suggests that emotional stimuli are processed, at least partially, in the absence of attention and even of awareness (Pessoa, 2005; Vuilleumier, 2005). While most previous studies focused on the role of the amygdalae in the detection of threat signals, emotional information processing in visual cortex has been explored less systematically. Functional neuroimaging studies have repeatedly shown that emotional information enhances responses in visual cortex (Peelen et al., 2007; Pessoa et al., 2002; Sabatinelli et al., 2005; Vuilleumier et al., 2001), but less is known about the interaction of emotional information with endogenous attention in visual cortex. Vuilleumier et al. (2001) used functional magnetic resonance imaging (fMRI) to investigate the effects of endogenous spatial attention and emotion on responses in high-level extrastriate visual cortex by modulating these factors independently. Attending to faces versus houses enhanced responses to faces in the face-responsive fusiform cortex regardless of expression, but responses to fearful compared to neutral faces were stronger, irrespective of attention. In extrastriate visual cortex, emotional information thus seems to modulate activity independently of attention.

Here, we asked how retinotopically localized emotional visual information influences the effects of endogenous spatial attention in early retinotopic cortex. During fMRI, neutral or fearful faces were presented simultaneously in two visual field quadrants and scrambled versions of these faces in the remaining two quadrants. Attention was directed to one intact/scrambled face pair as indicated by a central cue. We could thus assess how emotion and endogenous spatial attention interacted in retinotopic visual cortical processing. We reasoned that if preferential processing of visual threat-related cues serves selection of appropriate actions, such as reorienting, then emotional information anywhere in the display should evoke retinotopically specific enhancement of neural activity corresponding to the location of a stimulus. Moreover, if emotion and attention exert their effects on visual processing independently as previously suggested, emotional information should enhance retinotopic visual processing over and above the well-known enhancing effects of endogenous attention on retinotopic cortex.

Materials and methods

Participants

Fifteen healthy right-handed participants (nine females, aged 21–39) with normal or corrected-to-normal vision participated in the study after giving written informed consent as approved by the local ethics committee. Three participants (two females) were excluded from the study due to inability to maintain central fixation, as revealed by eyetracking recordings that showed systematic saccadic eye movements towards the face stimulus locations throughout the experiment.

Stimuli and design

Stimuli were generated using MATLAB (Mathworks Inc.) and the COGENT 2000 toolbox (www.vislab.ucl.ac.uk/Cogent/index.html) and projected from an LCD projector (ProExtra Multiverse Projector, Sanyo Electric Co. Ltd; refresh rate 60 Hz) onto a screen at the head-end of the scanner that was viewed via a mirror attached to the head coil directly above the participants' eyes (viewing distance 59 cm). The size of the screen was 24.9°×18.6° of visual angle. We used a combined blocked and event-related fMRI paradigm subdivided into 6 experimental runs of 7 min duration. Each run comprised 12 blocks of 35 s duration separated by a resting period of 10 s during which only a diagonal gray fixation cross (1.8° of visual angle) and gray placeholders in the four possible stimulus locations (see below and Fig. 1) were displayed. At the beginning of each block, one of the two diagonal bars of the fixation cross darkened to indicate which diagonal stimulus pair the participant should attend to while keeping central fixation. That is, throughout each 35 s block participants covertly attended either to the stimulus location pair in the upper left and lower right visual field quadrant, or to the pair in the upper right and the lower left quadrant (Fig. 1). The fixation cross remained on the screen throughout the experiment to ensure central fixation, which was essential for the covert attention task. The block order was fully randomized. Each block contained 4 trials with target displays that were presented for 250 ms with a randomly jittered interstimulus interval of 3–9 s duration. The target displays consisted of four stimuli presented simultaneously in the four visual field quadrants (4.7° eccentricity). Stimuli consisted of a selection of eight face images, 4 female and 4 male, taken from the standardized series developed by Ekman and Friesen (Pictures of Facial Affect 1976, Palo Alto, CA: Consulting Psychologists Press). There was an intact and a scrambled version of each face image. Faces were fitted into an elliptical shape (visual angle 4.1°×6.2°) that eliminated background and hair. In each target display, two intact face images and two scrambled face images were shown, and an intact face was always paired with a scrambled face on one diagonal of the display. That is, there were always pairs of an intact and a scrambled face on the attended and unattended diagonals of the display, respectively. Faces could be either male or female. The occurrence of male or female faces was completely randomized, that is, the two intact faces in a given display could be both male, both female, or one male and one female. Participants performed a speeded gender discrimination task on the intact face stimulus that appeared in the attended diagonal using right-hand index or middle finger key presses on a custom-made MRI-compatible button box. The face images could have either fearful or neutral emotional expressions. There were three main conditions: In one third of all trials, a fearful face appeared in an attended location while a neutral face appeared in an unattended location (emotion attended). In another third, a neutral face appeared in an attended location and another neutral face appeared in an unattended location (no emotion). Finally, in yet another third of trials a neutral face appeared in an attended location and a fearful face in an unattended location (emotion unattended). Except for the diagonal pairing of intact and scrambled faces (see above), stimulus locations across trials and the order of trial types were fully randomized.
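For illustration only, a minimal Python sketch of how such a randomized trial sequence could be generated, assuming equal numbers of the three main conditions per run and a uniformly jittered interstimulus interval; the function and field names are hypothetical and not taken from the original study:

```python
import random

CONDITIONS = ["emotion_attended", "no_emotion", "emotion_unattended"]

def make_run(n_blocks=12, trials_per_block=4, seed=None):
    """Build one run: each block has an attended diagonal and 4 trials,
    each trial carrying a condition label and a jittered ISI."""
    rng = random.Random(seed)
    # Equal numbers of the three main conditions across the run's trials.
    conditions = CONDITIONS * (n_blocks * trials_per_block // len(CONDITIONS))
    rng.shuffle(conditions)
    run = []
    for block in range(n_blocks):
        attended_diagonal = rng.choice(["upper-left/lower-right",
                                        "upper-right/lower-left"])
        for _ in range(trials_per_block):
            run.append({
                "block": block,
                "attended_diagonal": attended_diagonal,
                "condition": conditions.pop(),
                # Intact face randomly in one of the two quadrants of each diagonal.
                "intact_face_in_first_quadrant": rng.random() < 0.5,
                "isi_s": rng.uniform(3.0, 9.0),   # jittered 3-9 s interstimulus interval
                "target_duration_s": 0.25,        # 250 ms target display
            })
    return run

if __name__ == "__main__":
    trials = make_run(seed=1)
    print(len(trials), "trials;", trials[0])
```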

This design enabled us to analyze the data according to two principles: First, behavioral data and fMRI responses in non-retinotopically organized brain regions (fusiform gyrus, amygdalae) could be analyzed according to the three above-mentioned main conditions emotion attended, no emotion, and emotion unattended. Second, in retinotopic cortex the stimulus geometry with one stimulus in each quadrant of the visual field allowed us to separately extract the fMRI signal evoked by each of the four stimuli in their respective retinotopic cortical representations, resulting in six possible conditions for each retinotopic stimulus representation. This is illustrated in Fig. 2, which shows these six possible conditions in relation to the retinotopic representation of the stimulus in the right upper quadrant. The difference between non-retinotopic and retinotopic analyses is best illustrated with an example: For non-retinotopic areas such as the FFA, panels (A) and (F) in Fig. 2 belong to the same condition, as in both stimulus configurations a fearful face is attended and a neutral face is unattended. For the retinotopic representations of the upper right stimulus location in areas V1–V3, in contrast, panels (A) and (F) obviously represent different conditions: In (A), an attended fearful face is present in the right upper quadrant, while there is an unattended neutral face present elsewhere in the display; in (F), an unattended neutral face is present in the same visual field location, while there is an attended fearful face present elsewhere in the display. Thus, we could determine the responses to face stimuli in each retinotopic stimulus representation separately as a function of whether the stimulus was attended or not, and whether it was fearful or neutral. Moreover, retinotopic responses to neutral face stimuli could be analyzed according to whether the other face in the display was neutral or fearful. This led to six conditions for the analysis of the retinotopic fMRI responses to face stimuli (see Fig. 2): (1) attended emotional face; (2) unattended emotional face; (3) attended neutral face with another neutral face in the display; (4) unattended neutral face with another neutral face in the display; (5) attended neutral face with an emotional face in the display; and (6) unattended neutral face with an emotional face in the display.
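A minimal sketch (not from the original paper; names are hypothetical) of how a trial could be relabeled into these six retinotopic conditions for one intact-face location, given which diagonal was attended and which of the two intact faces was fearful:

```python
def retinotopic_condition(face_attended, face_fearful, other_face_fearful):
    """Return the condition label (1-6, as in Fig. 2) for one intact face,
    given whether that face was attended, whether it was fearful, and
    whether the other intact face in the display was fearful."""
    if face_fearful:
        return 1 if face_attended else 2     # (un)attended emotional face
    if not other_face_fearful:
        return 3 if face_attended else 4     # neutral face, only neutral faces shown
    return 5 if face_attended else 6         # neutral face, fearful face elsewhere

# Example: the upper-right face is neutral and attended, while the other
# intact face in the display is fearful -> condition 5.
assert retinotopic_condition(face_attended=True, face_fearful=False,
                             other_face_fearful=True) == 5
```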

While the focus of our study was on the effects of emotion and attention on retinotopic visual processing of faces, the simultaneous presentation of scrambled faces allowed us to verify the effectiveness of our manipulation of endogenous spatial attention with stimuli involving no emotional or configural facial information (i.e., features that were expected to interact with attention effects); and also to assess possible effects of emotional information on stimuli that were task-irrelevant. It should be noted, however, that the direct comparison of emotion effects on task-relevant (face) and task-irrelevant (scrambled face) stimuli was not critical to our research question.

Fig. 1. Experimental design. The direction of spatial attention was blocked over four trials and randomized over twelve blocks. Prior to each block, one of the two diagonal bars of the fixation cross darkened to indicate which diagonal stimulus pair the participant should attend to while keeping central fixation. Thus, throughout blocks participants covertly attended either to the stimulus location pair in the upper left and lower right visual field quadrant, or to the pair in the upper right and the lower left quadrant. Each block contained 4 trials with target displays that were presented for 250 ms with a randomly jittered interstimulus interval of 3–9 s duration. The target displays consisted of four stimuli presented simultaneously in the four visual quadrants (see Fig. 2). Upon appearance of the target display, participants performed a gender-discrimination task on the intact face of the attended diagonal stimulus pair. After each block a 10 s resting period followed with the diagonal gray fixation cross and the four gray placeholders on display.

fMRI data acquisition

Images were acquired on a TRIO 3T scanner (Siemens, Erlangen, Germany) equipped with a 12-channel head coil. Functional images were obtained with a gradient echo-planar imaging sequence (repetition time = 2.26 s; echo time = 25 ms). Whole-brain coverage was obtained with 38 contiguous slices (voxel size = 3×3×3 mm). The main experiment consisted of 6 runs of 195 volumes each. Additionally, we acquired a T1-weighted structural image (MPRAGE, voxel size 1×1×1 mm) and several functional localizer scans. For retinotopic meridian mapping, we performed two scans of 244 volumes each, during which participants viewed contrast-reversing (4 Hz) checkerboard stimuli that covered either the horizontal or the vertical meridian and were presented in 22.6 s blocks interleaved with 11.3 s rest periods. To functionally localize the retinotopic representations of the face stimuli in the main experiment, we performed two retinotopic region-of-interest (ROI) localizer scans of 165 volumes each, during which a contrast-reversing (4 Hz) black-and-white oval checkerboard (visual angle 4.1°×6.2°) was presented in alternating 11.3 s blocks in the four visual quadrants (4.7° eccentricity). Finally, to identify the fusiform face area (FFA) in the mid-fusiform gyrus, we performed a standard localizer scan of 204 volumes during which black-and-white face and house stimuli were presented foveally in alternating 13.6 s blocks, interleaved with 9.0 s rest periods. It should be noted that the existence of the FFA as a brain region specialized in face processing is controversial (e.g., Bukach et al., 2006). We use the term FFA pragmatically as referring to a region in the mid-fusiform gyrus functionally defined by greater fMRI responses to faces compared to other objects (Kanwisher and Yovel, 2006). Eye movements were monitored online during the main experiment using an infrared video eye tracker with a sampling rate of 60 Hz (SMI IVIEW X™ MRI-LR, SensoMotoric Instruments, Teltow, Germany), custom-adapted for use in the MRI scanner environment, to ensure that participants were able to hold fixation.

Analysis

Imaging data

Data were analyzed using SPM5 (www.fil.ion.ucl.ac.uk/spm). Preprocessing was performed following standard methods implemented in SPM5 (Ashburner and Friston, 1997). After discarding the first four images of each run to allow for magnetic saturation effects, the remaining images were slice-time-corrected and realigned to the first image. The structural T1 image was co-registered to the functional scans, and all images were normalized into standard MNI space. Data from the main experiment and from the FFA localizer were spatially smoothed with a 5 mm full-width at half maximum (FWHM) Gaussian kernel for analyses of FFA and amygdala responses. For the analysis of responses in retinotopic areas V1–V3 unsmoothed data were used to retain the fine-grained retinotopic information in these areas and to avoid ‘cross-talk’ from patches of cortex in opposite locations in the walls of sulci. Data from retinotopic meridian mapping and from the retinotopic ROI localizer were smoothed using a small (3 mm) FWHM Gaussian kernel in order to facilitate delineation of borders on cortical flatmaps between areas V1–V3 and of ROIs in these areas, respectively.

Fig. 2. Experimental conditions. In each target display, two intact face images and two scrambled face images were shown, and an intact face was always paired with a scrambled face on one diagonal of the display. Participants were cued by the black arrow at fixation to attend to one diagonal stimulus pair. There were six possible conditions for the analysis of responses to face stimuli in retinotopic cortex: (A) attended emotional face; (B) unattended emotional face; (C) attended neutral face with another neutral face in the display; (D) unattended neutral face with another neutral face in the display; (E) attended neutral face with an emotional face in the display; (F) unattended neutral face with an emotional face in the display. The dotted gray line in each panel A–F illustrates which stimulus pair was attended. Here the six possible conditions for the stimulus in the right-upper quadrant (surrounded by a black line for illustration) of the visual field are shown. Note that these six conditions could occur in all four stimulus locations, and fMRI responses were collapsed accordingly across the four corresponding retinotopic stimulus representations in each visual area V1, V2 and V3 for statistical analysis.

We analyzed the data from the main experiment in a two-stage procedure. In a first step, each participant's data were analyzed voxelwise using the general linear model (GLM). The model included separate regressors for each possible trial-type in each quadrant of the display to enable separate analysis of activations in retinotopic cortical representations of each stimulus location. For each of the four stimulus locations, the six possible trial types were modeled separately (see Stimuli and design section and Fig. 2). In addition, the respective version of each condition with scrambled images was modeled separately for each stimulus location. Regressors for each experimental condition were modeled as stick functions convolved with the canonical hemodynamic response function implemented in SPM5. Motion parameters defined by the realignment procedure were added to the model as six separate regressors. Parameter estimates for each regressor at every voxel were determined using multiple linear regression and scaled to the global mean signal of each run across conditions and voxels. The parameter estimates therefore represent signal change relative to the global brain signal (% global brain signal). We removed low-frequency fluctuations by a high-pass filter with a cutoff at 128 s and used an autoregressive model of order one (AR(1) + white noise) to correct for temporal autocorrelation in the data. Data from the retinotopic meridian mapping scans, the retinotopic ROI localizer, and the FFA localizer were analogously analyzed using the GLM approach implemented in SPM5. The conditions of interest were modeled as boxcar regressors convolved with the canonical hemodynamic response function.
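As a rough illustration of this kind of event regressor, a sketch assuming a standard double-gamma approximation to the canonical hemodynamic response (not the exact SPM5 implementation); all parameter values and names below are illustrative:

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, peak=6.0, undershoot=16.0, ratio=1/6.0):
    """Approximate canonical hemodynamic response (double gamma), t in seconds."""
    return gamma.pdf(t, peak) - ratio * gamma.pdf(t, undershoot)

def event_regressor(onsets_s, n_scans, tr=2.26, dt=0.1):
    """Convolve stick functions at the given onsets with the HRF and
    resample at the scan times (one value per acquired volume)."""
    t_hi = np.arange(0, n_scans * tr, dt)              # fine time grid
    sticks = np.zeros_like(t_hi)
    sticks[np.searchsorted(t_hi, onsets_s)] = 1.0       # 1 at each event onset
    hrf = double_gamma_hrf(np.arange(0, 32, dt))
    conv = np.convolve(sticks, hrf)[: t_hi.size]
    scan_times = np.arange(n_scans) * tr
    return np.interp(scan_times, t_hi, conv)

# Example: a regressor for three events in a 195-volume run (TR = 2.26 s).
reg = event_regressor(onsets_s=[12.0, 45.5, 80.0], n_scans=195)
print(reg.shape, reg.max())
```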

In a second step, parameter estimates for each experimental condition were extracted from the ROIs defined on the basis of the independent functional localizer scans in each participant individually. ROIs in visual areas V1, V2, and V3 were defined using data from retinotopic meridian mapping and the retinotopic ROI localizer. First, meridian data were used to delineate the ventral and dorsal portions of areas V1, V2, and V3 of each hemisphere after segmentation and cortical flattening following standard methods using Freesurfer (http://surfer.nmr.mgh.harvard.edu). Within each of these retinotopic areas, the representations of each of the four stimulus locations in the main experiment were delineated in V1, V2 and V3 on the basis of activations from the independent retinotopic ROI localizer thresholded at p < .01, uncorrected for multiple comparisons. FFA ROIs were defined as contiguous voxels in the fusiform gyrus that responded significantly more to faces than to houses in the FFA localizer scan at a threshold of p < .001, uncorrected, and were delineated manually using MRIcron (http://www.sph.sc.edu/comd/rorden/MRicron/). In view of its role in emotional face processing (Phelps and LeDoux, 2005), even though not our primary focus, we also evaluated fMRI responses in the amygdalae. We used standard bilateral amygdala ROIs derived from the WFU Pickatlas (http://www.fmri.wfubmc.edu/cms/software). Finally, parameter estimates from the main experiment were extracted and averaged from those voxels within the retinotopic, FFA, and amygdala ROIs that were generally responsive to our stimulus paradigm, as determined with a t-test for the main effect of all stimulus presentations at a liberal threshold of p < .05, uncorrected.

Fig. 3. Behavioral results. Trials were sorted according to whether they contained an emotional face or not, and whether the face was in an attended location of the visual field or not. This resulted in three main groups of trials: 1) emotional face in an attended location (emo attended), 2) neutral faces only (no emo) and 3) emotional face in an unattended location (emo unattended). Significant differences between these three conditions were revealed for accuracy (A) but not for reaction time (B). Accuracy was significantly lower for both emotion attended and emotion unattended conditions. *p < .05, (*)p < .1, planned post-hoc t-tests, Bonferroni-corrected. Error bars denote standard errors corrected for between-subject variability (Cousineau, 2007).

For the assessment of activations in the retinotopic cortical stimulus representations, parameter estimates were sorted according to the 6 possible conditions (Fig. 2) for intact faces and the corresponding conditions for scrambled images, for each of the four stimulus locations. Please note that the six conditions occurred at all four stimulus locations. For the analysis of each of the six conditions, parameter estimates were therefore pooled across the four corresponding retinotopic stimulus representations in each visual area V1, V2 and V3. These pooled parameter estimates thus provided for each condition a composite measure of fMRI signal irrespective of visual field quadrant, yet preserving the retinotopic specificity of responses to a given stimulus relative to the remaining three stimuli in each display. This procedure resulted in one average parameter estimate per condition and participant for each visual area V1, V2, and V3. In contrast, for the analysis of activations in non-retinotopic regions, i.e., FFA and amygdalae, parameter estimates could not be separated according to retinotopic stimulus location and were thus collapsed into the three main conditions emotion attended, no emotion, and emotion unattended (see Stimuli and design section). For statistical inference at the group level, parameter estimates from the ROIs were subjected to one-, two-, or three-way repeated-measures ANOVA as appropriate. Effects were considered significant at p < .05. Greenhouse–Geisser correction was applied in cases of significant (p < .05) sphericity violation as evidenced by Mauchly's sphericity test. For further exploration of significant results from ANOVA, planned post-hoc t-tests were used. Resulting p values were corrected for multiple comparisons using Bonferroni correction. We corrected for the number of post-hoc comparisons performed subsequent to each ANOVA, i.e., 6 comparisons for the ANOVA testing for effects in the FFA and 9 comparisons for the ANOVA testing for effects in retinotopic cortex.
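A minimal sketch (illustrative, not the authors' code) of Bonferroni-corrected planned post-hoc comparisons on per-participant parameter estimates; the data dictionary and comparison list are hypothetical:

```python
import numpy as np
from scipy import stats

def posthoc_paired_ttests(estimates, comparisons):
    """estimates: dict mapping condition name -> per-participant values (1D arrays).
    comparisons: list of (condition_a, condition_b) pairs tested after the ANOVA.
    Returns t values and p values Bonferroni-corrected for len(comparisons) tests."""
    n_tests = len(comparisons)
    results = {}
    for a, b in comparisons:
        t, p = stats.ttest_rel(estimates[a], estimates[b])   # paired t-test
        results[(a, b)] = (t, min(p * n_tests, 1.0))          # Bonferroni correction
    return results

# Example with simulated parameter estimates for 12 participants.
rng = np.random.default_rng(0)
est = {c: rng.normal(size=12) for c in
       ["emotion_attended", "no_emotion", "emotion_unattended"]}
print(posthoc_paired_ttests(est, [("emotion_attended", "no_emotion"),
                                  ("emotion_unattended", "no_emotion"),
                                  ("emotion_attended", "emotion_unattended")]))
```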

Behavioral data

Performance was measured in terms of reaction time and accuracy and was analyzed for the three main conditions: emotion attended, no emotion, and emotion unattended. Statistical inference was performed using one-way repeated-measures ANOVA followed by planned post-hoc t-tests with Bonferroni correction for 3 comparisons.
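For reference, a compact sketch of a one-way repeated-measures ANOVA computed directly from a participants-by-conditions matrix; this illustrates the general procedure only and is not the authors' analysis script:

```python
import numpy as np
from scipy import stats

def one_way_rm_anova(data):
    """data: (n_subjects, n_conditions) array with one value per cell.
    Returns F, df_effect, df_error, p for the within-subject factor."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ss_error = ss_total - ss_cond - ss_subj
    df_cond, df_error = k - 1, (k - 1) * (n - 1)
    f = (ss_cond / df_cond) / (ss_error / df_error)
    p = stats.f.sf(f, df_cond, df_error)
    return f, df_cond, df_error, p

# Example: 12 participants x 3 conditions (simulated accuracies).
rng = np.random.default_rng(1)
acc = rng.normal(loc=[0.81, 0.86, 0.82], scale=0.05, size=(12, 3))
print(one_way_rm_anova(acc))
```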

Results

Behavioral performance

Overall mean accuracy (expressed as percentage of correct responses) on the gender-discrimination task was 83% ± 3 SEM. Overall mean reaction time was 1071 ms ± 75 SEM (Fig. 3). Trials were sorted according to the three main conditions emotion attended, no emotion, and emotion unattended. A one-way repeated-measures ANOVA revealed differences between these three main conditions in accuracy (F(1,11)=7.58, p < .005, η²=0.41) but not in reaction time (F(1,11) < 1). Planned post-hoc t-tests verified that compared to the no emotion condition (86% ± 2 SEM), accuracy was significantly lower in the emotion attended (81% ± 3 SEM; t(11)=3.95, p < .01, Bonferroni-corrected) and trendwise reduced in the emotion unattended condition (82% ± 3 SEM; t(11)=2.59, p=.075, Bonferroni-corrected).

Fusiform face area

fMRI responses in FFA were analyzed using a 2×3 repeated-measures ANOVA with the factors hemisphere (right and left FFA) and emotion (according to the three main conditions emotion attended, no emotion, and emotion unattended; see Figs. 4A and B). There was a significant main effect of emotion (F(2,22)=3.4, p < .05, η²=0.25) but no main effect of hemisphere (F(1,11) < 1) and no significant hemisphere-by-emotion interaction (F(2,22)=1.2, p > .1). As emotion-related responses were previously reported for the right fusiform gyrus only (Vuilleumier et al., 2001), we performed separate one-way ANOVAs for right and left FFA and indeed found that responses in the right FFA were robustly modulated by emotion (F(1,11)=7.1, p < .005, η²=0.39). No significant effect was found in the left FFA alone (F(1,11) < 1). Post-hoc t-tests revealed a significant difference in evoked responses for emotion attended versus no emotion (t(11)=3.27, p < .05, Bonferroni-corrected) but no significant effect for emotion attended versus emotion unattended (t(11)=2.13, p > .1, Bonferroni-corrected) and emotion unattended versus no emotion (t(11)=1.98, p > .1, Bonferroni-corrected).

Amygdalae

fMRI signals in the amygdalae were generally weak and insufficient (i.e., no significant voxels within the amygdala ROI at 0.05, uncorrected, using the contrast all trials vs. baseline) for our ROI analyses in three out of 12 participants in the left amygdala and another three participants in the right amygdala. Analysis of the fMRI responses in the remaining nine participants for left and right amygdala, respectively, revealed no significant differences between the three main conditions emotion attended, no emotion, and emotion unattended (F(1,8) < 1, one-way repeated-measures ANOVAs).

Fig. 4. Average parameter estimates of brain activity during the main experiment extracted from the right FFA, left FFA and the retinotopic cortex. fMRI signals were analyzed according to the three main conditions: emotion attended, no emotion, and emotion unattended, according to whether they contained an emotional face or not, and whether the face was in an attended location of the visual field or not. There were significant differences between the three main conditions in the right FFA (p < 0.01, one-way repeated-measures ANOVA) but not in the left FFA. There were no significant general response differences between these three conditions in retinotopic visual cortex (V1–V3) when analyzed in analogy to the FFA. *p < .05, planned post-hoc t-tests, Bonferroni-corrected. Error bars denote standard errors corrected for between-subject variability (Cousineau, 2007).

Retinotopic visual cortex

The stimulus geometry with one stimulus in each quadrant of the visual field allowed us to separately analyze the fMRI responses to each of the four stimuli in their respective retinotopic cortical representations according to the six conditions shown in Fig. 2. The retinotopic stimulus representations in visual areas V1, V2, and V3 were determined using standard retinotopic meridian mapping in combination with a functional localizer scan mapping the four stimulus locations. All results for retinotopic visual cortex reported henceforth refer to these mapped stimulus representations.

General responses in areas V1 to V3

In a first step, we aimed to investigate any general effects of attention and emotion on retinotopic visual processing as, for example, caused by arousal. For this aim, we assessed the general responses in retinotopic stimulus representations to the presence of attended and unattended emotional stimuli in analogy with the FFA analysis (see above). Thus, average responses collapsed across all four stimulus representations in areas V1 to V3 were analyzed according to the three main conditions emotion attended, no emotion and emotion unattended (Fig. 4C). A 3×3 repeated-measures ANOVA with the factors region (V1, V2, V3) and emotion (emotion attended, no emotion and emotion unattended) showed a main effect of region (F(2,22)=4.2, p < .05, η²=0.28) but no main effect of emotion (F(2,22)=2.2, p > 0.1) and no region-by-emotion interaction (F(2,22)=1.5, p > 0.1). This indicates that the presence of an emotional stimulus did not have a general effect on early visual processing, e.g., due to general arousal. It is noteworthy that this response pattern is clearly different from that in the FFA, where significant differences between the three main conditions were observed.

Retinotopically specific responses in areas V1 to V3

Next, we determined the responses to intact faces in each retinotopic stimulus representation separately as a function of whether the face was attended or not, and whether it was fearful or neutral. In addition, responses to neutral faces were analyzed separately for trials that contained a fearful face in another location and those that contained only neutral faces. In other words, our design allowed us to analyze how the effect of directing endogenous spatial attention to a stimulus was affected by the emotional valence of that stimulus itself, and by the emotional valence of the other face stimulus appearing in the same display. This resulted in the six possible conditions shown in Fig. 2. Of note, these six conditions could occur in all four stimulus locations, and fMRI responses were pooled accordingly across the four corresponding retinotopic stimulus representations in each visual area V1, V2, and V3 for statistical analysis.

Statistical analysis was performed using a 3×3×2 factorial repeated-measures ANOVA with the factors region (V1, V2, V3), emotion (fearful face, neutral face, and neutral face with a fearful face in the display), and attention (attended vs. unattended), and planned post-hoc t-tests to assess the effect of attention in the three emotion conditions separately. The results are summarized in Fig. 5. As expected, we found a significant main effect of attention (F(1,11)=12.1, p < .005, η²=0.52). There was also a significant main effect of region (F(2,22)=4.2, p < .05, η²=0.28), but no main effect of emotion (F(2,22) < 1). Importantly, however, there was a significant attention-by-emotion interaction (F(2,22)=4.6, p < .05, η²=0.29). We also found significant region-by-attention (F(2,22)=6.3, p < .01, η²=0.36) and region-by-emotion interactions (F(4,44)=3.2, p < .05, η²=0.23), but no significant three-way region-by-attention-by-emotion interaction (F(4,44)=2.0, p > .1).

As the observed attention-by-emotion interaction was central to our research question, we further explored this effect by planned post-hoc t-tests for attended vs. unattended faces in all three emotion conditions, separately for areas V1, V2, and V3. There were no significant differences between attended and unattended emotional faces in any of the visual areas V1 to V3: V1 (t(11)=0.08, p > .1), V2 (t(11)=−0.27, p > .1), and V3 (t(11)=0.89, p > .1). For neutral faces in the presence of another neutral face, there was no significant effect of attention in V1 (t(11)=−0.37, p > .1). While the analogous comparison revealed only a trend towards an attention effect in V2 that did not survive correction for multiple comparisons (t(11)=2.79, p=.15, Bonferroni-corrected), a significant attention effect was observed in V3 (t(11)=3.79, p < .05, Bonferroni-corrected). Finally, and in stark contrast with the complete absence of an attention effect on the processing of emotional faces, there was a robust effect of attention on retinotopic processing of neutral faces when an emotional face was present in the display. This effect was significant throughout all three retinotopic visual areas analyzed (V1: t(11)=3.98, p < .05; V2: t(11)=3.75, p < .05; V3: t(11)=6.95, p < .01; all Bonferroni-corrected).

Taken together, the pattern of responses was similar in all visual areas V1 to V3 (Figs. 5A–C), with the smallest or no effect of attention for emotional faces and the largest effect of attention for neutral faces in the presence of an emotional face in the display. In other words, the enhancing effect of endogenous spatial attention, which was task-relevant in our paradigm, was abolished in retinotopic representations of fearful emotional faces. In contrast, the mere presence of a fearful face in the display augmented the effect of attention in representations of neutral faces. It is particularly noteworthy that even in V1, where responses to attended and unattended stimuli in the absence of emotional stimuli were statistically indistinguishable, a robust attentional enhancement of neutral face processing emerged in the presence of an emotional face.

Processing of scrambled face stimuli

Having established an interaction of task-irrelevant emotional information with the effects of attention on retinotopic visual processing of face stimuli, we next asked whether this effect was confined to processing of intact face stimuli. In our paradigm, each intact face was paired with a scrambled face on the attended and unattended diagonals, respectively. We therefore assessed the effect of emotion in attended and unattended visual field locations on the processing of scrambled faces, analogous to the analysis of intact face stimuli. For example, corresponding to the analysis of retinotopic responses to intact emotional faces we analyzed fMRI responses in retinotopic representations of scrambled faces that appeared either in the same attended diagonal as an emotional face, or in the unattended diagonal together with an emotional face. In the example given in Fig. 2, the scrambled picture that corresponds to each of the six possible conditions for intact faces in the upper right quadrant would be the one in the lower left quadrant. We report the results of a 3×3×2 (region×emotion×attention) repeated-measures ANOVA (as above for responses to intact faces) for responses to scrambled face stimuli. There were clear main effects of region (F(2,22)=17.1, p < .001, η²=0.61) and attention (F(1,11)=10.8, p < .01, η²=0.49), but no main effect of emotion (F(2,22) < 1). There was a significant region-by-attention interaction (F(2,22)=7.0, p < .01, η²=0.39), but no region-by-emotion interaction (F(4,44) < 1) and, crucially, no emotion-by-attention interaction (F(2,22) < 1). That is, while the significant main effect for attention confirmed that our manipulation of endogenous spatial attention had a general effect on retinotopic visual processing, the interaction of emotion with the effect of endogenous spatial attention in retinotopic visual cortex was limited to the processing of intact face stimuli. As for intact faces, there was no significant region-by-attention-by-emotion interaction for responses to scrambled faces (F(4,44) < 1).

Fig. 5. Parameter estimates of brain activity for each condition during the main experiment extracted from each stimulus representation, as determined in separate localizer scans, were averaged across 12 participants in V1 (A), V2 (B), and V3 (C). Here responses to intact faces in each retinotopic stimulus representation were analyzed separately as a function of the six possible conditions depicted in Fig. 2. The six bars plotted in each panel represent these six possible conditions. (D) Parameter estimates pooled across areas V1–V3 for intact faces and (E) for scrambled faces. Responses to scrambled faces were qualitatively similar in areas V1, V2, and V3, and are therefore not shown in separate plots. *p < .05, **p < .01, planned post-hoc t-tests, Bonferroni-corrected. Error bars denote standard errors corrected for between-subject variability (Cousineau, 2007).

Intact vs. scrambled faces

While our research question was not primarily concerned with differences in processing of face and non-face stimuli in retinotopic visual cortex, we nevertheless performed a tentative 2×3×3×2 factorial repeated-measures ANOVA with the factors stimulus type (intact vs. scrambled), region (V1, V2, V3), emotion (fearful face, neutral face, and neutral face with a fearful face in the display), and attention (attended vs. unattended) to directly compare responses to intact and scrambled faces. There was a significant main effect of stimulus type (F(1,11)=8.8, p < .013, η²=0.44), which was due to overall stronger responses to face compared to scrambled stimuli (see Figs. 5D and E). There were again main effects of region (F(2,22)=11.0, p < .001, η²=0.50) and attention (F(1,11)=12.1, p < .01, η²=0.53), but no main effect of emotion (F(2,22) < 1). There was again a significant region-by-attention interaction (F(2,22)=9.8, p < .001, η²=0.47), and trends towards significant interaction effects for region×emotion (F(4,44)=2.0, p=.11, η²=0.16), stimulus type×attention (F(1,11)=3.8, p=.08, η²=0.26), emotion×attention (F(2,22)=3.3, p=.06, η²=0.23), stimulus type×emotion×attention (F(2,22)=2.5, p=.11, η²=0.18), and stimulus type×region×emotion×attention (F(4,44)=2.4, p=.07, η²=0.18). All other interactions remained insignificant (F < 1). We should emphasize, however, that our experiment was primarily designed to assess the effects and interactions of attended task-relevant and unattended task-irrelevant intact face stimuli. The results of this latter analysis should therefore be interpreted with caution, as the direct comparison of intact and scrambled faces is hampered by the fact that scrambled face stimuli were either attended or unattended but were never task-relevant.

Attentional modulation in areas V1 to V3

It is well established that the degree of attentional modulation is rather small in V1 but increases at higher cortical levels of retinotopic processing (Kastner et al., 1999; Luck et al., 1997; O'Connor et al., 2002). To formally assess attentional modulation in areas V1, V2 and V3 and to what extent it was influenced by emotion, we calculated the attentional modulation index (AMI = (attended − unattended) / (attended + unattended); see Kastner et al., 1999) for each visual area and for each emotion condition separately (Fig. 6). Greater AMI values denote stronger modulation by attention. A 3×3 repeated-measures ANOVA with the factors region (V1, V2, and V3) and emotion (fearful face, neutral face, and neutral face with a fearful face in the display) showed significant main effects of region (F(2,22)=9.75, p < .005, η²=0.47) and emotion (F(2,22)=4.57, p < .05, η²=0.29) on the AMI, but no significant interaction (F(4,44)=2.10, p > .1). Thus, the degree of attentional modulation increased from V1 to V3 in line with previous findings (Kastner et al., 1999; Luck et al., 1997; O'Connor et al., 2002), while the effect of emotion on endogenous spatial attentional modulation was similar at the three levels of cortical processing analyzed.
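As a simple worked illustration of this index, a sketch computed on hypothetical parameter estimates (not the authors' data):

```python
import numpy as np

def attentional_modulation_index(attended, unattended):
    """AMI = (attended - unattended) / (attended + unattended), computed
    element-wise, e.g. per participant, region, and emotion condition."""
    attended = np.asarray(attended, dtype=float)
    unattended = np.asarray(unattended, dtype=float)
    return (attended - unattended) / (attended + unattended)

# Example: mean parameter estimates (arbitrary units) for one emotion condition
# in V1, V2, and V3.
attended_v1_to_v3 = np.array([0.52, 0.60, 0.71])
unattended_v1_to_v3 = np.array([0.50, 0.52, 0.55])
print(attentional_modulation_index(attended_v1_to_v3, unattended_v1_to_v3))
# The AMI increases from V1 to V3, i.e. stronger attentional modulation.
```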

Discussion

We found differential effects of emotional information on attentional modulation in low-level retinotopic visual cortex and functionally specialized higher-level visual cortex. The latter, exemplified by the FFA, showed additive effects of spatial attention and fearful emotional expression. In contrast, fearful expression did not further enhance the effect of attention in retinotopic cortex, while responses to unattended fearful stimuli were at the level of those to attended stimuli. Thus, there was no difference between local retinotopic responses to attended and unattended fearful faces. Strikingly however, fearful faces exerted a strong remote effect on retinotopic processing of neutral faces: The presence of a fearful stimulus in the display augmented the effect of spatial attention on processing of neutral face stimuli. Even in V1, where responses to attended and unattended stimuli in the absence of emotional stimuli were statistically indistinguishable, a robust attentional enhancement of neutral face processing emerged in the presence of an emotional face elsewhere in the display.

Fig. 6. Attentional modulation index (AMI = (attended − unattended) / (attended + unattended); greater AMI values denote stronger modulation by attention, see Kastner et al., 1999) for each visual area and for each emotion condition separately. The degree of attentional modulation increased from V1 to V3, while the effect of emotion on endogenous spatial attentional modulation was similar at the three levels of cortical processing analyzed.

The additive emotion–attention effect observed in the right FFA replicates previous findings (Vuilleumier et al., 2001) that suggested independent mechanisms underlying these phenomena in high-level visual cortex. The lack of an overall effect of fearful expression on fMRI responses in retinotopic cortex is in accord with previous work that failed to find an enhancing effect of fearful expression on early visual cortex activity (Pourtois et al., 2006), which was, however, not analyzed in retinotopic detail in this earlier study. A recent study used retinotopic mapping to show that responses to fear-conditioned gratings were increased in V1–V4 (Padmala and Pessoa, 2008). Similar effects are also observed with conditioned faces (Damaraju et al., 2009). This latter study also reported greater retinotopic cortical responses to task-irrelevant fearful faces than to neutral faces independent of conditioning, in line with our result that activity in retinotopic representations of unattended fearful faces was elevated to the level of attended stimuli. Our current study goes beyond these previous reports in at least two respects: First, by explicitly modulating endogenous spatial attention, we could show that enhancement of local retinotopic processing by fearful expression is limited to task-irrelevant, unattended fearful faces, but is not detectable for attended fearful faces. Second, our stimulus design allowed us to assess remote effects of fearful faces on the retinotopic processing of neutral faces, showing that the modulatory effect of spatial attention to neutral faces is augmented by the mere presence of a fearful face at an unattended location elsewhere in the display.

Several previous studies focused on the effects of emotional information as an exogenous attention cue (i.e., attentional capture), contrasting with our investigation of its interactions with endogenous spatial attention. Behaviorally, reaction times and contrast sensitivity are improved for target stimuli that are preceded by threat-signaling stimuli serving as exogenous cues in corresponding visual field locations (Bradley and Lang, 2000; Mogg et al., 1994; Phelps et al., 2006). Accordingly, fMRI and electroencephalography show that fear cues enhance processing of subsequently presented neutral targets (Pourtois et al., 2004, 2006); and that such emotional cueing of spatial attention involves parietal regions previously implicated in endogenous spatial attention (Pourtois et al., 2006). Moreover, a recent magnetoencephalographic study showed that task-irrelevant fearful faces elicit an N2pc component, a signal that is known to reflect attentional focusing in visual search (Fenker et al., 2010). Thus, the ability of threat-signaling stimuli to capture and direct spatial attention is a possible mechanism underlying the local enhancement of retinotopic cortex activity by unattended fearful faces observed in our study and in previous work (Damaraju et al., 2009).

In our study, attention to a specific location had no direct local effect on early visual processing of emotional information at that location. However, local processing of unattended emotional information provoked remote effects on the processing of attended neutral stimuli in different regions of the visual field. This finding that the enhancing effect of attention on the processing of neutral stimuli was augmented by the presence of remote emotional information can be conceptualized as a compensatory mechanism. Exogenous attention through attentional capture by salient stimuli interferes with endogenous attention (Jonides and Irwin, 1981). Such interference is even more pronounced in high-anxiety individuals (Moriya and Tanno, 2009), suggesting that reactivity to task-unrelated salient stimuli may support an individual's readiness to react to potential threat. In our study, reduced accuracy on gender-identification of neutral faces in the presence of a fearful face suggests such interference through attentional capture. The augmented attention effect in the retinotopic representation of the task-relevant neutral face could thus reflect the compensatory allocation of processing resources in order to maintain task performance despite interference from the attention-capturing threat signal. In addition, a decrease in the allocation of processing resources to unattended neutral faces in the presence of emotional faces could also have contributed to the observed effect. Importantly, our finding that remote enhancement of retinotopic processing was only detectable for intact but not for scrambled stimuli in corresponding attended locations supports this interpretation. Although a scrambled face stimulus was always presented along with an intact face in the attended diagonal, only the intact face was relevant for the gender-identification task. The notion that retinotopic processing of task-relevant stimuli is enhanced in situations of additional perceptual challenge is also in line with the previous finding that responses to task-relevant stimuli in retinotopic cortex are increased by the concurrent performance of a demanding task (Pinsk et al., 2004).

Several mechanisms could underlie the remote enhancing effect of emotional stimuli on attentional modulation of responses to neutral stimuli. It could be directly mediated by feedback signals from structures involved in fearful face processing (Amaral et al., 2003; Vuilleumier, 2005). Obvious candidate regions to mediate this effect are the amygdalae and the FFA, but it is difficult to explain how feedback signals from non-retinotopically organized structures could selectively target processing of attended neutral stimuli. Furthermore, contrary to retinotopic cortex, the highest activity level in FFA was evoked by attended emotional stimuli. Thus, more likely, processes that mediate the effects of endogenous attention on visual processing and that involve higher-order topographic regions in frontal and parietal cortex (Silver and Kastner, 2009) also mediate the effects of emotion on attentional modulation in retinotopic cortex. Nevertheless, while our analyses of attentional modulation replicated the well-established progressive increase of attentional effects from lower- to higher-level retinotopic areas (Kastner et al., 1999; Luck et al., 1997; O'Connor et al., 2002), the effects of emotion on attentional modulation were similar in V1 through V3. The mechanisms underlying the emotion effects on attentional modulation in retinotopic visual cortex are thus probably not identical to the mechanisms of endogenous spatial attention. Rather, the latter may be under the modulatory influence of brain regions involved in threat-signal detection such as the amygdalae (Phelps and LeDoux, 2005), and regulate the redistribution of processing resources across the visual field according to the challenge posed by task-irrelevant emotional information.

Because we did not find significant activity differences in the amygdalae we are not able to decide whether the observed retinotopic effects of emotional information might originate from the amygdalae, either directly or by modulation of attentional mechanisms. Most likely, this relates to suboptimal signal in medial temporal lobe as our scanning sequence focused on visual cortex, and to the generally poor signal-to-noise ratio in the amygdalae (LaBar et al., 2001). Moreover, the relatively attention-demanding task used (mean correct responses ~85%) and peripheral stimulus presentation may have reduced amygdala activation (Eimer et al., 2003; Holmes et al., 2003; Wright and Liu, 2006). However, the significant effects of emotional expression observed in visual cortex argue against the lack of significant activity modulation in the amygdalae being task- or stimulus-related in this case.

While emotional faces are useful for the investigation of emotion processing because of their ecological validity, one might be concerned that they could not optimally drive responses in retinotopic cortex. However, we found robust effects using these stimuli, with regard to both attentional and emotional modulation, which speaks against the concern that our stimulus paradigm may lack sensitivity for effects in retinotopic cortex. Another concern could be related to possible systematic low-level stimulus differences between fearful and neutral faces (Whalen et al., 2004). However, the absence of any effects of fearful vs. neutral faces in retinotopic areas, but also inspection of the effect sizes, e.g., in V1 (Fig. 5A), render the possibility that any of the effects observed may be related to low-level stimulus differences highly unlikely.

In conclusion, we propose that the differential effects of emotional information on attentional modulation in low-level retinotopic and high-level functionally specialized visual cortex subserve different adaptive functions. Additive effects of emotion and attention in FFA may be adaptive in that emotional expression could signal the requirement of additional face-related processing resources over and above those allocated by endogenous spatial attention. In contrast to such a feature- or object-related tuning of information processing in higher-order visual cortex, the distribution of processing resources in retinotopic cortex seems to be governed primarily by the spatial relationship of stimuli potentially relevant for, or interfering with, the current task. The redistribution processes observed in our study may be adaptive in that they serve the purpose of allocating processing resources to locations of task-irrelevant threat-signaling stimuli while at the same time increasing resources for task-relevant stimuli as required for the maintenance of goal-directed behavior.

Acknowledgments

AG, MR, and PS are supported by the Deutsche Forschungsgemeinschaft (Emmy-Noether-Program STE1430/2-1). GR is supported by the Wellcome Trust. CK holds a Feodor Lynen Fellowship from the Alexander von Humboldt Foundation.

References

Amaral, D.G., Behniea, H., Kelly, J.L., 2003. Topographic organization of projections from the amygdala to the visual cortex in the macaque monkey. Neuroscience 118 (4), 1099–1120.

Ashburner, J., Friston, K., 1997. Multimodal image coregistration and partitioning—a unified framework. Neuroimage 6 (3), 209–217.

Bradley, M.M., Lang, P.J., 2000. Measuring emotion: behavior, feeling, and physiology. Cogn. Neurosci. Emotion 25, 49–59.

Bukach, C.M., Gauthier, I., Tarr, M.J., 2006. Beyond faces and modularity: the power of an expertise framework. Trends Cogn. Sci. 10, 159–166.

Carrasco, M., 2006. Covert attention increases contrast sensitivity: psychophysical, neurophysiological and neuroimaging studies. Fundam. Vis. 33.

Corbetta, M., Shulman, G.L., 2002. Control of goal-directed and stimulus-driven attention in the brain. Nat. Rev. Neurosci. 3 (3), 201–215.

Cousineau, D., 2007. Confidence intervals in within-subject designs: a simpler solution to Loftus and Masson's method. Tutorials in Quantitative Methods for Psychology 1, 42–45.

Damaraju, E., Huang, Y.M., Barrett, L.F., Pessoa, L., 2009. Affective learning enhances activity and functional connectivity in early visual cortex. Neuropsychologia 47 (12), 2480–2487.

Eimer, M., Holmes, A., McGlone, F.P., 2003. The role of spatial attention in the processing of facial expression: an ERP study of rapid brain responses to six basic emotions. Cogn. Affect. Behav. Neurosci. 3 (2), 97–110.

Fenker, D.B., Heipertz, D., Boehler, C.N., Schoenfeld, M.A., Noesselt, T., Heinze, H.J., et al., 2010. Mandatory processing of irrelevant fearful face features in visual search. J. Cogn. Neurosci. 22 (12), 2926–2938.

Frischen, A., Eastwood, J.D., Smilek, D., 2008. Visual search for faces with emotional expressions. Psychol. Bull. 134 (5), 662–676.

Holmes, A., Vuilleumier, P., Eimer, M., 2003. The processing of emotional facial expression is gated by spatial attention: evidence from event-related brain potentials. Brain Res. Cogn. Brain Res. 16 (2), 174–184.

Jonides, J., Irwin, D.E., 1981. Capturing attention. Cognition 10 (1–3), 145–150.

Kanwisher, N., Yovel, G., 2006. The fusiform face area: a cortical region specialized for the perception of faces. Philos. Trans. R. Soc. Lond. B Biol. Sci. 361, 2109–2128.

Kastner, S., Pinsk, M.A., De Weerd, P., Desimone, R., Ungerleider, L.G., 1999. Increased activity in human visual cortex during directed attention in the absence of visual stimulation. Neuron 22 (4), 751–761.

Kastner, S., McMains, S.A., Beck, D.M., 2009. Mechanisms of selective attention in the human visual system: evidence from neuroimaging. In: Gazzaniga, M.S. (Ed.), The Cognitive Neurosciences. MIT Press.

LaBar, K.S., Gitelman, D.R., Mesulam, M.M., Parrish, T.B., 2001. Impact of signal-to-noise on functional MRI of the human amygdala. Neuroreport 12 (16), 3461–3464.

Luck, S.J., Chelazzi, L., Hillyard, S.A., Desimone, R., 1997. Neural mechanisms of spatial selective attention in areas V1, V2, and V4 of macaque visual cortex. J. Neurophysiol. 77 (1), 24–42.

Mogg, K., Bradley, B.P., Hallowell, N., 1994. Attentional bias to threat: roles of trait anxiety, stressful events, and awareness. Q. J. Exp. Psychol. A 47 (4), 841–864.

Moriya, J., Tanno, Y., 2009. Competition between endogenous and exogenous attention to nonemotional stimuli in social anxiety. Emotion 9 (5), 739–743.


O'Connor, D.H., Fukui, M.M., Pinsk, M.A., Kastner, S., 2002. Attention modulates responses in the human lateral geniculate nucleus. Nat. Neurosci. 5 (11), 1203–1209.

Padmala, S., Pessoa, L., 2008. Affective learning enhances visual detection and responses in primary visual cortex. J. Neurosci. 28 (24), 6202–6210.

Peelen, M.V., Atkinson, A.P., Andersson, F., Vuilleumier, P., 2007. Emotional modulation of body-selective visual areas. Soc. Cogn. Affect. Neurosci. 2 (4), 274.

Pessoa, L., 2005. To what extent are emotional visual stimuli processed without attention and awareness? Curr. Opin. Neurobiol. 15 (2), 188–196.

Pessoa, L., McKenna, M., Gutierrez, E., Ungerleider, L.G., 2002. Neural processing of emotional faces requires attention. Proc. Natl. Acad. Sci. 99 (17), 11458.

Phelps, E.A., LeDoux, J.E., 2005. Contributions of the amygdala to emotion processing: from animal models to human behavior. Neuron 48 (2), 175–187.

Phelps, E.A., Ling, S., Carrasco, M., 2006. Emotion facilitates perception and potentiates the perceptual benefits of attention. Psychol. Sci. 17 (4), 292–299.

Pinsk, M.A., Doniger, G.M., Kastner, S., 2004. Push–pull mechanism of selective attention in human extrastriate cortex. J. Neurophysiol. 92 (1), 622–629.

Pourtois, G., Grandjean, D., Sander, D., Vuilleumier, P., 2004. Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cereb. Cortex 14 (6), 619–633.

Pourtois, G., Schwartz, S., Seghier, M.L., Lazeyras, F., Vuilleumier, P., 2006. Neural systems for orienting attention to the location of threat signals: an event-related fMRI study. Neuroimage 31 (2), 920–933.

Reynolds, J.H., Chelazzi, L., 2004. Attentional modulation of visual processing. Annu. Rev. Neurosci. 27, 611–647.

Sabatinelli, D., Bradley, M.M., Fitzsimmons, J.R., Lang, P.J., 2005. Parallel amygdala and inferotemporal activation reflect emotional intensity and fear relevance. Neuroimage 24 (4), 1265–1270.

Silver, M.A., Kastner, S., 2009. Topographic maps in human frontal and parietal cortex. Trends Cogn. Sci. 13 (11), 488–495.

Vuilleumier, P., 2005. How brains beware: neural mechanisms of emotional attention. Trends Cogn. Sci. 9 (12), 585–594.

Vuilleumier, P., Armony, J.L., Driver, J., Dolan, R.J., 2001. Effects of attention and emotion on face processing in the human brain: an event-related fMRI study. Neuron 30 (3), 829–841.

Whalen, P.J., Kagan, J., Cook, R.G., Davis, F.C., Kim, H., Polis, S., et al., 2004. Human amygdala responsivity to masked fearful eye whites. Science 306 (5704), 2061.

Wright, P., Liu, Y., 2006. Neutral faces activate the amygdala during identity matching. Neuroimage 29 (2), 628–636.

