Cognition 111 (2009) 72–83

Contents lists available at ScienceDirect

Cognition

journal homepage: www.elsevier.com/locate/COGNIT

Short-term action intentions overrule long-term semantic knowledge

M. van Elk a,*, H.T. van Schie b, H. Bekkering a

a Donders Institute for Brain, Cognition and Behaviour, Radboud University, P.O. Box 9104, 6500 HE Nijmegen, The Netherlands
b Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands


Article history:
Received 24 August 2008
Revised 8 December 2008
Accepted 12 December 2008

Keywords:
Action intentions
Semantics
Selection-for-action

0010-0277/$ - see front matter © 2008 Elsevier B.V. All rights reserved.
doi:10.1016/j.cognition.2008.12.002

* Corresponding author. Tel.: +31 24 3615593; fax: +31 24 3616066. E-mail address: [email protected] (M. van Elk).

In the present study, we investigated whether the preparation of an unusual action with an object (e.g. bringing a cup towards the eye) could selectively overrule long-term semantic representations. In the first experiment it was found that unusual action intentions activated short-term semantic goal representations, rather than long-term conceptual associations. In a second experiment the reversal of long-term priming effects was replicated, while reducing the need for internal verbalization as a possible strategy to accomplish the task. Priming effects in the first two experiments were found to involve the selection of object knowledge at a semantic level, rather than reflecting a general effect of action preparation on word processing (Experiment 3). Finally, in a fourth experiment short-term priming effects were shown to extend beyond a lexical level by showing faster responses to pictures representing the short-term action goal. Together, the present findings extend the 'selection-for-action' principle previously used in visual attention to a semantic level, by showing that semantic information is selectively activated in line with the short-term goal of the actor.

© 2008 Elsevier B.V. All rights reserved.

1. Introduction

As soon as children are able to grasp objects, they spend a lot of time closely investigating whatever comes within their reach. This eagerness to manipulate and acquire conceptual knowledge about objects is a remarkable human feature that differentiates us from other primates (Johnson-Frey, 2003; Lewis, 2006). Although chimpanzees may occasionally learn to use tools (e.g. using stones to open nuts; Biro et al., 2003; Hayashi, Mizuno, & Matsuzawa, 2005; Tomasello, 1990), this ability turns out to be very limited and rigid when it comes to solving tool use problems (Povinelli, Reaux, & Theall, 2000). In contrast, humans apply conceptual knowledge in a wide variety of actions and display generativity in their behavior, combining old elements into new actions (Corballis, 1989). For example, although cups are typically used for drinking, on a hot summer day one may grasp a cup to catch a wasp. Despite its importance for coping with continuously changing environmental demands, the ability to use objects in a flexible fashion has often been overlooked in both empirical and theoretical investigations of conceptual knowledge (van Elk, van Schie, Lindemann, & Bekkering, 2007).

During the last decade a growing number of studies have investigated the functional and neural mechanisms underlying our long-term conceptual knowledge about objects. At a behavioral level it has been shown that the mere observation of an object facilitates parts of the action that is associated with the object (Tucker & Ellis, 1998; Yoon & Humphreys, 2005). When subjects observed a small object, for instance (e.g. a peanut), they responded faster with a small compared to a large handgrip, and vice versa for observation of a large object (e.g. an orange; Tucker & Ellis, 2001). In a recent study it was found that the gestures that are facilitated upon object observation allow the object's volumetrical (grasping to lift) or functional use (grasping to use in a functional manner; Bub, Masson, & Cree, 2008). These behavioral effects are likely mediated by inferior parietal and premotor areas in the brain, which are consistently activated in response to object observation (Chao & Martin, 2000; Grezes, Tucker, Armony, Ellis, & Passingham, 2003; Kellenbach, Brett, & Patterson, 2003). Inferior parietal and premotor areas probably code the size, shape and orientation of an object in order to generate a motor program for grasping the object (Murata, Gallese, Luppino, Kaseda, & Sakata, 2000). Together these findings support the idea that deriving the object's affordances allows the appropriate grasping for using the object (Gibson, 1979).

In addition to selecting the appropriate handgrip, in order to use an object in a meaningful way one needs to activate semantic knowledge about the function of an object (Patterson, Nestor, & Rogers, 2007). Studies of patients with semantic dementia show that a loss of semantic knowledge about objects impairs the ability to use objects in a functional way, resulting in senseless actions (e.g. using matches as a cigarette or pencil; Bozeat, Lambon Ralph, Patterson, & Hodges, 2002; Hodges, Bozeat, Lambon Ralph, Patterson, & Spatt, 2000). The involvement of semantic processing in the appropriate use of objects has been shown in healthy subjects as well (Creem & Proffitt, 2001). Subjects made more errors in grasping objects in an appropriate fashion when performing a concurrent semantic task, but not when performing a visuospatial task, suggesting a close interaction between semantic processing and the execution of meaningful actions. In addition, several studies have shown that grasping kinematics are influenced by the distracting semantic properties of words that are presented during the action (Boulenger et al., 2006; Gentilucci, 2003; Glover & Dixon, 2002). For example, reading the word 'large' printed on top of an object resulted in a larger maximum grip aperture than reading the word 'small' (Glover & Dixon, 2002). Together, these studies suggest close links between language and action and support the idea that semantics are involved in action preparation and execution.

Whereas most studies discussed thus far have focused on cross-talk between language and overt motor behavior, two recent studies have begun to investigate the semantics that are activated during the preparation of meaningful actions with objects (Lindemann, Stenneken, van Schie, & Bekkering, 2006; van Elk, van Schie, & Bekkering, 2008b). Lindemann et al. (2006) asked subjects to prepare a meaningful action with an object, such as bringing a cup towards the mouth. When subjects prepared to grasp the object, a facilitation in response times was found for words that corresponded to the goal location of the action (e.g. faster responding to the word 'mouth' when preparing to bring a cup to the mouth). In contrast, when subjects prepared a finger-lifting response, no priming effects of words referring to the object's prototypical action goal were found, suggesting that action semantics are selectively activated only when subjects prepare a meaningful action with an object.

In a follow-up ERP study, the semantic nature of the reaction time effects was further demonstrated, by showing an N400-priming effect for words that were congruent with the object's long-term goal association only when subjects prepared a meaningful action with the object (e.g. preparing to grasp a cup and bring it to the mouth; van Elk et al., 2008b). When subjects prepared a meaningless action with an object (e.g. preparing to grasp a cup to bring it towards the eye) no modulation of the N400-component was found, indicating that long-term semantic knowledge about the goal of the object was selectively activated only during well-known actions. Together, these studies suggest that the preparation of meaningful actions involves the activation of semantic knowledge about the object, specifying the goal- or end-location of the action.

However, as suggested before, everyday objects can be used in a flexible fashion in order to accomplish short-term behavioral goals. For example, we can use a hairbrush to comb our hair or to scratch our back, depending on our current goals and desires. An intriguing question is how we select behavioral goals that deviate from our default action repertoire. It is unclear, for instance, whether the selection of alternative action goals involves the inhibition of long-term semantic associations. In order to address these questions, in the present study we set out to investigate whether short-term behavioral goals might overrule long-term semantic representations of an object and its associated goal.

2. Experiment 1

The first experiment was designed to investigate whether short-term behavioral goals (e.g. bringing a cup to the eye) overrule long-term semantic representations (e.g. a cup is typically associated with the mouth). If so, shorter reaction times are expected for actions probed by the short-term goal (in this case: 'eye') than for the prototypical goal (e.g. 'mouth'). However, if long-term associations cannot be overruled by a short-term action goal, the long-term associated word (e.g. 'mouth') should result in shorter reaction times.

More specifically, in the first experiment subjects were instructed in separate blocks either to perform usual actions with objects (e.g. bring a cup towards the mouth) or to perform unusual actions with objects (e.g. bring a cup towards the eye). A picture on the screen indicated which object to grasp (either a magnifying glass or a cup). Semantic processing was investigated by measuring reaction times towards words that could be congruent or incongruent with respect to the intended goal of the action (the words 'eye' and 'mouth').

2.1. Methods

2.1.1. Subjects
Twenty-four subjects participated in Experiment 1 (19 females, mean age 22.1 years, all students at the Radboud University Nijmegen). All participants declared themselves to be right-handed, to be Dutch native speakers and to have normal or corrected-to-normal vision. Participants were offered 6 euros or course credits for participation.

2.1.2. Experimental setup and procedure
The experimental setup and procedure are represented in Fig. 1. Subjects were seated behind a table, on which a cylindrical cup without a handle (object 1, diameter 7.5 cm) and a magnifying glass (object 2, diameter 7.5 cm) with a handgrip (length 9.0 cm) were placed within line drawings of the object contours. Object location (left/right) was counterbalanced between participants. In front of the subject a button-box was placed, for recording reaction times and initiating the next trial.

Fig. 1. Experimental setup and procedure. Subjects were seated behind a table on which a cup (object 1) and a magnifying glass (object 2) were placed. A picture on the screen indicated which object should be used. Subjects were instructed in separate blocks to perform usual or unusual actions with the objects (Experiment 1) or to bring the object always to either the mouth or the eye (Experiments 2 and 4). Reaction times were measured in response to words appearing on the screen that referred to either a body part or an animal (go/no-go semantic categorization task).

In two subsequent blocks, the experimenter instructed the subject to either perform usual actions with the objects (e.g. bring the cup towards your mouth) or unusual actions (e.g. bring the cup towards your eye). Block order (usual or unusual) was counterbalanced between participants and at the beginning of each block the experimenter demonstrated the required actions with both objects. Each trial started with a picture of one of both objects (500 ms), indicating which object the subject should grasp. After a variable interval of 500–1000 ms a word appeared on the screen, which referred either to a body part or to an animal. Subjects performed a go/no-go semantic categorization task and responded by releasing the starting button and performing an action with the indicated object only if the word represented a body part. The word remained on the screen during execution of the action until the subject returned to the starting button. Next, a fixation cross appeared on the screen for 1000–1500 ms before the next trial was presented. If the word represented an animal, no subsequent action was required (no-go trials) and subjects waited until the next picture appeared on the screen. No-go trials, in which no subsequent action had to be performed, were randomly presented in 23% of all trials.

Word stimuli consisted of 12 Dutch words that either represented an animal or a body part (see Table 1). The words 'eye' and 'mouth' were used as target words that represented the goal locations of the intended actions. Four action-unrelated body parts were chosen ('nose', 'belly', 'knee', 'toe'), all part of the natural category of the human body. In addition, six filler-words were chosen that represented animals ('fish', 'hare', 'duck', 'ant', 'dove', 'goat'). Words were matched for word-length, written frequency and word-category (CELEX lexical database, Burnage, 1990). Each block consisted of a total number of 104 trials, in which 40 target words (20 object-congruent words and 20 object-incongruent words), 40 action-unrelated body parts and 24 filler-words representing animals were presented. Stimuli were presented on a 17 inch computer display with a refresh rate of 100 Hz. Viewing distance was approximately 80 cm, resulting in a visual angle of about 3.5° for the picture stimuli indicating which object to grasp.

Table 1
Word-stimuli and categories used in Experiments 1, 2 and 4, classified relative to each object.

                                 Cup      Magnifying glass
Target words
  Object-congruent word          Mouth    Eye
  Object-incongruent word        Eye      Mouth
Action-unrelated body parts      Belly, Knee, Nose, Toe (same for both objects)
Animal words                     Ant, Dove, Duck, Fish, Goat, Hare (same for both objects)

The experiment was programmed and controlled with Presentation 9.17 software (Neurobehavioral Systems, Albany, USA). Two Minibird movement sensors were attached to the subject's index finger and thumb, by which the subject's movements could be tracked in real time (miniBIRD 800, Ascension Technology Corporation). Spherical areas of 5 cm around each object and 10 cm around the mouth and the eye were defined in order to control task performance online. Movements were recorded and stored for offline analysis. Analysis of movement kinematics focused on reach time to object, peak velocity, percentage of time to maximum grip aperture (time of maximum grip aperture divided by reach time), smoothness of movements (total number of accelerations and decelerations between grasping of object and end-posture) and maximum object height (maximum relative height of object near face). For analysis of reaction times, incorrect trials and reaction times that exceeded the subject's mean by more than two standard deviations were excluded from analysis. Reaction times were analyzed using a 2 (usual vs. unusual action) × 2 (object-congruent vs. object-incongruent word) repeated measures ANOVA.
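The reaction-time analysis described above (per-subject outlier exclusion followed by a 2 × 2 repeated-measures ANOVA) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it exploits the fact that in a 2 × 2 within-subject design the interaction F statistic equals the squared paired t statistic computed on each subject's difference-of-differences.

```python
import statistics

def exclude_outliers(rts):
    """Drop reaction times more than 2 SD above the subject's mean.

    The Methods say RTs 'that exceeded the subject's mean by more than two
    standard deviations' were excluded; the published analysis may in fact
    have trimmed both tails, so this upper-tail rule is an assumption."""
    m = statistics.mean(rts)
    sd = statistics.stdev(rts)
    return [rt for rt in rts if rt <= m + 2 * sd]

def interaction_F(cells):
    """Action x Word interaction F for a 2x2 within-subject design.

    cells: per-subject tuples of condition means
    (usual_congruent, usual_incongruent, unusual_congruent, unusual_incongruent).
    Returns (F, df1, df2); F is the squared paired t on the double difference."""
    diffs = [(ui - uc) - (xi - xc) for uc, ui, xc, xi in cells]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / n ** 0.5)
    return t * t, 1, n - 1
```

A full repeated-measures ANOVA package (e.g. statsmodels' `AnovaRM`) gives the same interaction F for the 2 × 2 case; the hand-rolled version just makes the arithmetic explicit.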

Fig. 2. Reaction times in Experiment 1 for usual actions (left bars) and unusual actions (right bars). Light bars correspond to object-congruent words and dark bars correspond to object-incongruent words. Error bars represent within-subjects confidence intervals (Loftus & Masson, 1994).1
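The within-subject error bars cited in the caption follow Loftus and Masson (1994). A minimal sketch of that computation, assuming the half-width is taken from the subject × condition interaction mean square, might look like:

```python
import statistics

def loftus_masson_halfwidth(data, t_crit):
    """Half-width of a within-subject CI (Loftus & Masson, 1994).

    data: list of per-subject tuples of condition means (all conditions).
    t_crit: critical t for df = (n - 1) * (k - 1), supplied by the caller.
    The error term is the subject-by-condition interaction mean square,
    obtained by removing subject and condition main effects from each cell."""
    n = len(data)
    k = len(data[0])
    grand = statistics.mean(v for row in data for v in row)
    subj_means = [statistics.mean(row) for row in data]
    cond_means = [statistics.mean(row[j] for row in data) for j in range(k)]
    # residual of each cell after subtracting subject and condition effects
    ss_err = sum((data[i][j] - subj_means[i] - cond_means[j] + grand) ** 2
                 for i in range(n) for j in range(k))
    ms_err = ss_err / ((n - 1) * (k - 1))
    return t_crit * (ms_err / n) ** 0.5
```

This removes between-subject variability (overall fast vs. slow responders) from the error bars, which is why such intervals are appropriate for the within-subject comparisons plotted in Figs. 2 and 3.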


2.2. Results

In the first experiment, subjects incorrectly responded to words referring to animals (false alarm rate) in 2.2% of all trials and subjects grasped the wrong object in 0.5% of all trials. Average reaction times from the first experiment are represented in Fig. 2. Analysis of reaction times to target words revealed a marginally significant main effect of Action, F(1,23) = 4.8, p = .06, η² = .15, indicating that subjects responded slower when preparing unusual actions (561 ms) than when preparing usual actions (530 ms) with objects. Importantly, a significant interaction was found between Action (usual vs. unusual) and Word (object-congruent vs. object-incongruent), F(1,23) = 9.5, p < .01, η² = .29, indicating that priming effects differed between usual and unusual actions. For usual actions, post-hoc t-tests revealed a statistical trend between object-congruent and object-incongruent words, t(23) = −2.0, p = .06, reflecting faster reaction times to object-congruent words (525 ms) than to object-incongruent words (535 ms). For unusual actions, a significant difference was found between object-congruent (567 ms) and object-incongruent words (554 ms), t(23) = 2.2, p < .05, reflecting faster responses to words congruent with the short-term rather than with the long-term action goal. For object-congruent words subjects responded slower when they prepared an unusual action with the object (567 ms) compared to when they prepared a usual action (525 ms), t(23) = −2.6, p < .05. In addition, a significant difference was found between target words (545 ms) and action-unrelated body parts (571 ms), F(1,23) = 39.1, p < .001, η² = .63, indicating that subjects responded slower to words representing body parts that were presented less frequently than target words.
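Assuming the η² values reported here denote partial eta squared, they can be recovered from each F statistic and its error degrees of freedom; a quick sanity check on the reported effect sizes:

```python
# For a single-df effect, partial eta squared relates to F as
# eta_p^2 = F / (F + df_error). This is a reader's sanity check,
# not a computation taken from the paper itself.
def partial_eta_sq(F, df_error):
    return F / (F + df_error)

# e.g. the Action x Word interaction, F(1,23) = 9.5,
# and the target-word vs. body-part contrast, F(1,23) = 39.1:
print(round(partial_eta_sq(9.5, 23), 2))    # 0.29, matching the reported value
print(round(partial_eta_sq(39.1, 23), 2))   # 0.63, matching the reported value
```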

Kinematic parameters were analyzed using 2 (object) × 2 (action) × 2 (word congruency) repeated measures ANOVAs (see Table 2). To correct for multiple comparisons a Bonferroni correction was used, lowering the significance criterion α to 0.01. Only significant differences in kinematic variables will be reported. For peak velocity, a marginally significant main effect of Object, F(1,23) = 6.4, p = .02, η² = .22 was found, reflecting slower peak velocity for grasping the cup (117 cm/s) compared to grasping the magnifying glass (132 cm/s). As expected, a main effect of Object, F(1,23) = 13.3, p < .001, η² = .26 indicated that overall the magnifying glass had a higher relative end position (27.2 cm) than the cup (25.1 cm). In addition, analysis of the maximum height revealed a significant interaction between Object and Action, F(1,23) = 41.7, p < .001, η² = .47, indicating that the end position per object differed between usual and unusual action conditions.

1 Note: The large error bars are caused mainly by deviant data from one subject. Exclusion of this subject did not affect the reaction time pattern observed or the significance of the effects reported.
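The Bonferroni step in the kinematic analysis divides the family-wise α across the five kinematic measures; a trivial sketch of the arithmetic:

```python
# Bonferroni correction across the five kinematic measures
# (MT, PV, PTMGA, NAD, MOH): each test is evaluated at alpha / 5.
def bonferroni_alpha(family_alpha, n_tests):
    return family_alpha / n_tests

alpha = bonferroni_alpha(0.05, 5)  # 0.01
```

Under this criterion the peak-velocity effect (p = .02) indeed fails the corrected threshold, consistent with its description as only marginally significant.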

2.3. Discussion

To sum up, when subjects prepared a usual action with an object (e.g. bringing a cup to the mouth) reaction times were faster to object-congruent words, thereby replicating previous findings (Lindemann et al., 2006). Interestingly, when subjects prepared an unusual action with an object (e.g. bringing a cup to the eye) priming effects reversed, with faster reaction times to words that were congruent with the short-term action goal (object-incongruent words) rather than with the long-term action goal (object-congruent words). Thereby, the first experiment suggests that long-term semantic associations between an object and its goal can selectively be overruled when objects are used in an unusual fashion.

3. Experiment 2

In the first experiment, it was found that long-term object-goal associations are temporarily overruled when subjects prepare to use a well-known object in an unusual fashion. However, given the unfamiliar task instruction, it could well be that subjects verbally represented the upcoming action goal in short-term memory in order to correctly select the required action (cf. Gruber & Goschke, 2004). After conducting the experiment, informal inquiry about the subjects' strategy revealed that most subjects directed their attention to the spatial location of the object they had to grasp, rather than to the goal to which the object should be transported. Although these informal observations partly rule out internal verbalization of the action goal as a possible strategy, in a second experiment the need to rely on internal verbalization was further minimized by keeping the goal location of the action constant from trial to trial.

3.1. Method

3.1.1. Subjects
In Experiment 2, 24 subjects were tested (18 females, mean age = 20.9 years) who had not participated in Experiment 1. All participants declared themselves to be right-handed, to be Dutch native speakers and to have normal or corrected-to-normal vision. Participants were offered 6 euros or course credits for participation.

3.1.2. Experimental setup and procedure
The same experimental setup was used as in the first experiment. However, rather than instructing subjects per block either to perform a usual or an unusual action with the objects, subjects were required to always bring the object to the same goal location within a block. More precisely, in half of the experimental blocks subjects were instructed to bring one of the two objects towards their mouth on every trial. In the other half of the experiment subjects were instructed to bring one of the two objects towards their eye on every trial. As a consequence, usual and unusual actions were combined within the same block, while the goal location was kept constant from trial to trial. Block order (mouth vs. eye block) was counterbalanced across participants.

Table 2
Kinematic variables for grasping the cup (left) or the magnifying glass (right). MT, movement time towards object; PV, peak velocity; PTMGA, percentage of time to maximum grip aperture; NAD, number of accelerations and decelerations; MOH, maximum object height. Standard errors are represented in grey.

3.2. Results

Subjects incorrectly responded to words referring to animals (false alarms) in 2.8% of all trials and subjects grasped the wrong object in 1.1% of all trials. Reaction times are represented in Fig. 3. The difference between reaction times to target words for usual actions (524 ms) and unusual actions (533 ms) did not reach statistical significance, F(1,23) = 3.1, p = .09, η² = .12. In line with the first experiment a significant interaction between Action (usual vs. unusual) and Word (object-congruent vs. object-incongruent) was found, F(1,23) = 10.3, p < .005, η² = .31, indicating that priming effects differed between usual and unusual actions. For usual actions post-hoc t-tests revealed a significant difference between object-congruent (517 ms) and object-incongruent words (531 ms), t(23) = −2.3, p < .05, reflecting faster responses to words that are congruent with the long-term goal association of the object. For unusual actions a significant difference was found between object-congruent (542 ms) and object-incongruent words (524 ms), t(23) = 3.1, p < .01, reflecting faster responses to words congruent with the short-term rather than the long-term action goal. Reaction times were slower in response to object-congruent words when subjects prepared an unusual action (542 ms) compared to when they prepared a usual action (517 ms) with the object, t(23) = 3.0, p < .01. In general, subjects responded slower to words representing body parts (553 ms) that were presented less frequently compared to target words (528 ms), F(1,23) = 67.2, p < .001, η² = .75.

Fig. 3. Reaction times in Experiment 2 for usual actions (left bars) and unusual actions (right bars). Light bars correspond to object-congruent words and dark bars correspond to object-incongruent words. Error bars represent within-subjects confidence intervals (Loftus and Masson, 1994).

Analysis of kinematic variables revealed a main effect of Object for maximum object height, F(1,23) = 52.0, p < .001, η² = .70, indicating that the average end position of the magnifying glass (relative height = 30 cm) was higher than the end position of the cup (relative height = 26 cm, Table 2). In addition, an interaction effect between Object and Action, F(1,23) = 215.7, p < .001, η² = .90, indicated that the relative end position of the objects differed between usual and unusual actions.

3.3. Discussion

Findings from Experiment 2 extend findings from the first experiment, by showing that subjects respond faster to words that are congruent with the short-term action goal rather than with the object's long-term goal association, when required to perform an unusual action with the object. Moreover, because the goal location was kept constant from trial to trial it seems unlikely that semantic priming effects are partly caused by internal verbalization of the task by the subjects. Rather, the present findings suggest that the preparation of an unusual action overrules long-term semantic associations between an object and its prototypical goal location.

4. Experiment 3

Data from the first two experiments suggest that preparing a usual action with an object activates long-term semantics, whereas preparing an unusual action with an object activates representations that are relevant to the short-term behavioral goal. These priming effects of action intention on word processing may be considered an instance of biased competition between different semantic representations (Desimone & Duncan, 1995; Kan & Thompson-Schill, 2004). According to this interpretation, preparing a well-known action with an object facilitates the retrieval of long-term object semantics. In contrast, preparing an unusual action requires the inhibition of long-term semantics and facilitates the processing of semantic information that is relevant to the short-term action goal. This interpretation fits nicely with Allport's principle of 'selection-for-action', according to which our intentions to perform a specific action determine the selection of action-relevant perceptual information (Allport, 1987). The findings from the first two experiments extend the principle of 'selection-for-action' to a semantic level by showing that object semantics are selectively facilitated or inhibited depending on our action intentions.

However, one strong alternative explanation for the present findings is the possibility that the priming effects do not reflect the selection of semantics, but merely reflect the congruence or incongruence between the signal to act (the words 'eye' or 'mouth' appearing on the screen) and the goal location to which the action is directed (eye or mouth). More precisely, the pattern of reaction time data (facilitation of goal-congruent words) could hold irrespective of the actual object being grasped, suggesting that the effects may reflect a general influence of the activated goal representation on word processing. To investigate whether the priming effects from the first two studies reflect effects of action preparation on the selection of semantics or a general priming effect of action goals, a third experiment was conducted which involved novel meaningless objects.

4.1. Method

4.1.1. Subjects
In Experiment 3, 24 subjects participated (15 females, mean age 21.5 years) and none of the subjects had participated in the previous experiments. All participants declared themselves to be right-handed, to be Dutch native speakers and to have normal or corrected-to-normal vision. Participants were offered 6 euros or course credits for participation.

4.1.2. Experimental setup and procedure
Two nonsense objects were constructed that allowed respectively a precision grip and a full grip: a blue object, consisting of a small round pin (d = 14 mm, h = 20 mm) mounted on a round base (d = 70 mm, h = 10 mm), and a yellow object, consisting of a cylinder (d = 50 mm, h = 50 mm) mounted on a similar round base (d = 70 mm, h = 10 mm). Half of all subjects were instructed to move the blue object always towards their mouth and the yellow object always towards their right eye. The other half of all subjects received opposite instructions. Similar to the previous experiment, a picture on the screen indicated which object subjects should prepare to grasp and subsequently a word on the screen indicated whether or not subjects should respond (go/no-go semantic categorization task). Word stimuli and the experimental procedure were comparable to the first experiment.

78 M. van Elk et al. / Cognition 111 (2009) 72–83

If the effects in the previous studies reflect a general effect of an activated goal representation on word processing, then in the third experiment with meaningless objects a priming effect is expected for words that are congruent with the action goal (e.g. faster responding to the word ‘mouth’ when preparing an action towards the mouth). In contrast, if no priming effects of action on word processing are observed with meaningless objects, this would provide further support for the suggestion that the effects in the previous studies reflect the selection of information at a semantic level.

4.2. Results

In the third experiment, subjects incorrectly responded to words referring to animals (false alarm rate) in 2.7% of all trials and subjects grasped the wrong object in less than 1% of all trials. Reaction time data from the third experiment are represented in Fig. 4. Analysis of reaction times to target words using a 2 (action: mouth vs. eye) × 2 (word: mouth vs. eye) repeated measures ANOVA revealed a significant main effect of Action, F(1,25) = 6.1, p < .05, η² = .23, indicating that subjects were faster in initiating actions directed towards the eye (583 ms) than towards the mouth (596 ms). Interestingly, and in contrast to the first and second experiments, no interaction was found between Action and Word (F < 1). As can be clearly seen in Fig. 4, priming effects from the short-term action goal to target words are absent. In line with the previous experiments, a significant difference was found between target words (589 ms) and action-unrelated body parts (609 ms), F(1,23) = 5.8, p < .05, η² = .19, indicating that subjects responded slower to words representing body parts that were presented less frequently than target words.
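As an aside for readers checking the statistics: for a within-subject factor with only two levels, such as the Action main effect above, the repeated-measures F statistic equals the square of a paired-samples t statistic, so the analysis can be verified from per-subject condition means alone. A minimal sketch in Python with invented reaction-time data (the numbers are illustrative, not the study’s data; in practice a statistics package would be used):

```python
import math

def paired_t(x, y):
    """Paired-samples t statistic for two within-subject conditions."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical per-subject mean RTs (ms) for eye- vs. mouth-directed actions
rt_eye = [570, 585, 560, 600, 590, 575]
rt_mouth = [590, 600, 570, 615, 605, 580]

t = paired_t(rt_mouth, rt_eye)
F = t ** 2  # F(1, n-1) for a two-level within-subject factor is t squared
print(round(F, 2))  # → 40.0
```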

4.3. Discussion

In the third experiment, we investigated whether the priming effects observed in the first two studies can be attributed to a general effect of action preparation on word processing. Interestingly, when subjects prepared goal-directed actions with meaningless objects, no priming effects of action on word processing were observed. Apparently, the priming effects in the first two experiments do not merely reflect the congruence or incongruence between an intended goal location and a word appearing on the screen but appear to be specific to the use of well-known objects, supporting the semantic nature of the effects reported in the two previous experiments. The reversal of long-term priming effects in Experiments 1 and 2 therefore likely reflects competition at a semantic level in which long-term semantics are inhibited, while short-term semantic information is facilitated. The results of the present experiment suggest that no facilitation or inhibition of semantics is required for objects for which no semantic representation is available.

Fig. 4. Reaction times in Experiment 3 for actions with meaningless objects directed towards the mouth (left bars) and directed towards the eye (right bars). Light bars correspond to target words representing ‘mouth’ and dark bars correspond to target words representing ‘eye’. Error bars represent within-subjects confidence intervals (Loftus & Masson, 1994).

5. Experiment 4

In Experiments 1 and 2 a reversal of long-term priming effects was found when subjects were required to select unusual actions with objects. In order to further rule out the possibility of verbally mediated priming and to further strengthen the semantic nature of the action priming effects reported here, a fourth experiment was conducted using pictures as stimuli instead of words.

5.1. Method

5.1.1. Subjects
For the fourth experiment, 32 subjects were tested (22 females, mean age 21.2 years) who had not participated in the other experiments. All participants declared themselves to be right-handed, to be Dutch native speakers and to have normal or corrected-to-normal vision. Participants were offered 6 euros or course credits for participation.

5.1.2. Experimental setup and procedure
The experimental design was similar to Experiment 2, except that words were replaced by line drawings representing either body parts or animals (Snodgrass & Vanderwart, 1980). In Experiment 4, subjects were instructed in separate blocks to bring the indicated object either towards their mouth or towards their eye. The first picture on the screen indicated which object should be grasped and subjects performed a semantic categorization task on the second picture, responding only if the picture represented a part of the human body.

5.2. Results

Subjects incorrectly categorized pictures depicting an animal in less than 1% of all trials (false alarms) and subjects grasped the incorrect object in less than 1% of all trials. Reaction times are represented in Fig. 5 and show a comparable pattern to Experiments 1 and 2. Analysis of reaction times revealed a marginally significant main effect of Action, F(1,31) = 3.4, p = .07, η² = .10, indicating a trend towards slower reaction times for unusual actions (591 ms) compared to usual actions (583 ms). An interaction was found between Action (usual vs. unusual) and Picture (object-congruent vs. object-incongruent), F(1,31) = 9.8, p < .005, η² = .24, reflecting the reversal of priming effects between usual and unusual action conditions. Post-hoc t-tests revealed a significant difference for usual actions between object-congruent (577 ms) and object-incongruent pictures (590 ms), t(31) = −2.8, p < .01. For unusual actions, a significant difference was also found between object-congruent (598 ms) and object-incongruent pictures (585 ms), t(31) = 2.1, p < .05. The inhibition of long-term semantic information is likely reflected in slower reaction times to object-congruent pictures when subjects prepared an unusual action (598 ms) compared to when they prepared a usual action with the object (577 ms), t(31) = −3.4, p < .01. Comparison of target and filler pictures showed that subjects responded faster to target pictures (587 ms) than to pictures representing body parts that were unrelated to the action (597 ms), F(1,31) = 11.5, p < .005, η² = .27.

Fig. 5. Reaction times in Experiment 4 for usual actions (left bars) and unusual actions (right bars). Light bars correspond to object-congruent pictures and dark bars correspond to object-incongruent pictures. Error bars represent within-subjects confidence intervals (Loftus & Masson, 1994).
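The within-subjects confidence intervals reported in the figures follow Loftus and Masson (1994), whose method first removes between-subject variability by re-centering every subject’s scores on the grand mean; the interval half-width is then the usual t-based confidence interval computed on the normalized scores. A sketch of that normalization step, with invented data:

```python
def loftus_masson_normalize(data):
    """Re-center each subject's condition scores on the grand mean.

    data: list of per-subject lists, one score per condition.
    Removes between-subject offsets while preserving each subject's
    condition differences; the per-condition SEM of the returned
    scores is the basis of a within-subjects confidence interval.
    """
    n_cells = sum(len(row) for row in data)
    grand_mean = sum(sum(row) for row in data) / n_cells
    normalized = []
    for row in data:
        subj_mean = sum(row) / len(row)
        normalized.append([x - subj_mean + grand_mean for x in row])
    return normalized

# Hypothetical RTs (ms): rows = subjects, columns = congruent / incongruent
rts = [[560, 590], [600, 620], [575, 610]]
norm = loftus_masson_normalize(rts)
# After normalization every subject has the same mean (the grand mean),
# so the error bars reflect only condition-related variability.
```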

A significant difference was found in peak velocity between grasping the cup (108 cm/s) and grasping the magnifying glass (125 cm/s), F(1,31) = 16.5, p < .001, η² = .35. For peak velocity a significant interaction effect between Object and Picture was found, F(1,31) = 6.1, p < .05, η² = .17, indicating that when grasping the cup, peak velocity was slower when the picture on the screen represented an eye (107 cm/s) than when it represented a mouth (108 cm/s), whereas for grasping the magnifying glass peak velocity was comparable across pictures (125 cm/s). In general, the relative end position of the cup was lower (24 cm) than the end position of the magnifying glass (27 cm), F(1,31) = 89.3, p < .001, η² = .74. In addition, for maximum object height a main effect of Action indicated that usual actions had a higher end position (26 cm) than unusual actions (25 cm), F(1,31) = 44.8, p < .001, η² = .59. Finally, for maximum object height an interaction between Object and Action indicated that the goal locations of the objects differed between usual and unusual actions, F(1,31) = 440.2, p < .001, η² = .93.

5.3. Discussion

To conclude, the fourth experiment extends the findings from Experiments 1 and 2 by showing that action priming effects extend beyond a lexical level into the visual domain. In line with the previous experiments, it was found that the selection of unusual actions overrules long-term semantic priming. Theoretical implications of the present series of studies will be discussed in the final section.

6. General discussion

In the present study, we investigated whether the intention to use an object in an unusual fashion might overrule long-term semantic associations between the object and its typical goal location. In line with previous findings (Lindemann et al., 2006), it was found that when subjects prepared a well-known action with an object (e.g. bringing a cup towards the mouth), actions were initiated faster in response to words that were congruent with the long-term goal association of the object (e.g. faster responding to the word ‘mouth’). In contrast, when subjects prepared an unusual action with an object (e.g. bringing a cup towards the eye), reaction times were faster for words that described the short-term action goal rather than the long-term goal association of the object (e.g. faster responding to the word ‘eye’). In the second experiment the goal of the action was kept constant across trials, thereby reducing the need for internal verbalization as a possible strategy. Again, unusual actions were initiated faster in response to words describing the short-term goal of the action rather than the long-term goal associated with the object. Importantly, in the third experiment, when subjects prepared goal-directed actions with meaningless objects, no priming effects of action on word processing were found, suggesting that the priming effects in the first studies do not merely reflect the congruence between a prepared action and the word appearing on the screen. Rather, the reversal of the priming effects in unusual action conditions likely reflects the inhibition of long-term object semantics and the facilitation of semantic information that is relevant to the short-term action goal. Finally, in the fourth experiment action priming effects were shown to extend beyond a lexical level into the visual domain, thereby further supporting the semantic nature of the effects.

The main difference between usual and unusual actions in the present study consists of a reversal of priming effects, indicating that unusual actions overrule long-term semantic representations. The present findings thereby indicate that semantic representations are flexible and context-dependent, which is in line with a perceptual symbols account of semantic memory (Barsalou, 1993; Pecher & Raaijmakers, 1999). According to Barsalou’s (1999) perceptual symbols theory, semantic knowledge is represented across modality-specific systems in the brain. Thinking about a particular concept activates sensory-motor areas that constitute a simulation of the actual encounter with the concept (Barsalou, Kyle Simmons, Barbey, & Wilson, 2003). Semantic representations are flexible and different features may become activated in different contexts (e.g. ‘to move a piano’ activates a different representation than ‘to play a piano’). For example, Glenberg and Robertson (2001) showed that subjects can easily make sense of objects being used in an odd, though sensible, fashion (e.g. using a newspaper to protect one’s face against the rain) compared to objects being used in a nonsense fashion (e.g. using a matchbook to protect one’s face against the rain). These findings suggest that in language processing object representations can be flexibly used and integrated into novel action contexts. Three different factors affect the availability of semantic features, namely (1) current context, (2) frequency of activation and (3) recent experiences (Barsalou, 1993). The present study extends this view by showing an influence of action context on the activation of semantic representations. More importantly, our experiments suggest that action intentions can overrule semantic associations that are expected to be dominant on the basis of frequency of usage. For example, although cups are typically brought towards the mouth, this long-term association is overruled when the object is brought towards the eye and thereby used in an unusual way.

The dominant role of action intentions in the activation of semantic information suggests that ‘selection-for-action’ (Allport, 1987) may be an organizing principle determining the availability of semantic information. Originally, the principle of ‘selection-for-action’ captured the idea that our visual system evolved in order to allow us to interact with the surrounding world (Allport, 1989). Several studies have supported the suggestion that visual information is selected in order to perform an action (Bekkering & Neggers, 2002; Craighero, Fadiga, Rizzolatti, & Umilta, 1999; Hannus, Cornelissen, Lindemann, & Bekkering, 2005). Selection-for-action at a semantic level is supported by recent studies showing that long-term semantic knowledge about objects is only activated if subjects prepare a usual action with an object (Lindemann et al., 2006; van Elk et al., 2008b). In addition, the present study shows that the preparation of unusual actions is accompanied by the selection of semantic information that is relevant to the short-term goal of the action. These findings suggest that semantic information is selectively activated in line with the intention of the actor. However, the functional and neural mechanisms whereby the selection of relevant information is accomplished are still not well understood.

At a functional level, a possible mechanism underlying the selection of action-relevant information dates back to William James’ ideomotor principle, according to which actions are selected on the basis of the effects they produce (see also Hommel, Musseler, Aschersleben, & Prinz, 2001). The formation of action-effect representations depends on a process of associative learning during which a given response is repeatedly followed by a specific effect (Hommel, 1996). Closely related to the ideomotor principle is the theory of associative sequence learning, according to which learning sensory-motor associations depends on the concurrent activation of sensory and motor representations (contiguity) and on the extent to which activation of one component predicts activation of the other (contingency; Heyes, Bird, Johnson, & Haggard, 2005). The major strength of these associative learning approaches to perception-action coupling is that they explain a wide range of phenomena by showing that sensory-motor representations depend critically on action experience (see for example Heyes & Bird, 2007). With regard to the present study, the long-term priming effects found for well-known actions fit nicely within an associative learning approach, according to which, through repeated experience, cups for instance have become strongly associated with the concept ‘mouth’. However, when subjects without prior training performed unusual actions with objects, a reversal of long-term priming effects was observed (Experiments 1, 2 and 4). These short-term priming effects are contrary to what one would expect on the basis of statistical associations and thereby pose an interesting challenge to associative learning approaches.
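The associative-learning idea sketched above, that an association strengthens when an action is repeatedly followed by a specific effect, can be illustrated with a toy delta-rule update. This is a hypothetical illustration, not a model proposed by the authors; the learning rate and the number of pairings are invented:

```python
def update(strength, outcome, rate=0.2):
    """Delta-rule update: move association strength toward the outcome.

    outcome is 1.0 when the action is followed by the effect
    (e.g. cup -> mouth) and 0.0 otherwise.
    """
    return strength + rate * (outcome - strength)

# Repeatedly pairing 'cup' with the goal 'mouth' strengthens the association
cup_mouth = 0.0
for _ in range(20):
    cup_mouth = update(cup_mouth, 1.0)

print(round(cup_mouth, 2))  # → 0.99, near asymptote after repeated pairings
```

On such an account, a short-term intention like ‘bring the cup to the eye’ must override a near-asymptotic cup-mouth association, which is why the observed reversal of priming effects is challenging for purely associative explanations.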

The reversal of long-term priming effects during the preparation of unusual actions likely requires both the inhibition of long-term object-goal associations and the activation of a motor program guiding the action towards the short-term action goal. The inhibition of long-term object semantics is reflected in slower reaction times to object-congruent words when subjects prepared unusual actions with objects. Therefore, in our view the present findings argue for a process of top-down cognitive control during the preparation of unusual or novel actions. At a neural level, the lateral prefrontal cortex, due to its wide range of connections to sensory, motor and subcortical structures, plays an important role in cognitive control (Miller & Cohen, 2001), in representing forthcoming actions (Pochon et al., 2001) and in guiding behavior towards goals (Duncan, Emslie, Williams, Johnson, & Freer, 1996). Interestingly, in a recent study it was found that neurons in the lateral prefrontal cortex primarily represent the consequence of an intended action (the end-goal) rather than the limb movement itself (the means by which the end-goal is accomplished; Saito, Mushiake, Sakamoto, Itoyama, & Tanji, 2005). Furthermore, the behavior of patients with frontal lesions is strongly guided by sensory cues that automatically elicit associated actions, such as spontaneously starting to use an object in an appropriate fashion even when explicitly required not to do so (e.g. utilization behavior; Archibald, Mateer, & Kerns, 2001). Apparently, these patients have difficulty suppressing task-irrelevant information and maintaining a representation of current behavioral goals. In sum, the lateral prefrontal cortex plays a key role in the executive control of behavior and in representing upcoming actions in terms of their final goals (Tanji & Hoshi, 2008).

In addition, preparing an action with an object involves the selection of relevant semantic information from semantic memory (Patterson et al., 2007). It has been suggested that the left inferior frontal cortex is involved in the selection of relevant semantic information from competing alternatives (Kan & Thompson-Schill, 2004) and in integrating information about the meaning of a word from temporal and sensory-motor areas (Gennari, MacDonald, Postle, & Seidenberg, 2007). For example, increased activation in the inferior frontal cortex has been found in association with the processing of ambiguous compared to unambiguous words, likely reflecting an increase in controlled semantic retrieval (Ihara, Hayakawa, Wei, Munetsuna, & Fujimaki, 2007). Given the strong links between semantics in action and language, an intriguing possibility is that the brain areas supporting the selection of semantics in language also play a role in the selection of semantics for action (Nazir, 2008). In support of this view, lesions in the left inferior frontal cortex have been found to result in deficits in the pantomime of tool use, suggesting that this area is involved in the retrieval and selection of conceptual knowledge about objects as well (Goldenberg, Hermsdorfer, Glindemann, Rorden, & Karnath, 2007). The priming effects of action intention on word processing found in the present study may be considered an instance of biased competition between long-term and short-term semantic representations (Desimone & Duncan, 1995; Kan & Thompson-Schill, 2004). The preparation of a well-known action with an object involves the selection and activation of long-term object semantics, whereas the preparation of an unusual action with an object involves the inhibition of long-term semantics and the selective activation of semantic information that is relevant to the short-term goal. The semantic nature of the short-term priming effects is further confirmed by the third experiment, in which the preparation of actions with meaningless objects, for which no conceptual associations were available, did not result in word priming effects. The absence of priming effects for novel objects is in line with recent findings showing that it takes considerable time and training to acquire a semantic representation of a novel object (e.g. Desmarais, Dixon, & Roy, 2007; Kiefer, Sim, Liebich, Hauk, & Tanaka, 2007). An interesting question to be addressed in future studies is how much training with novel objects would be required to obtain priming effects of usual and unusual action intentions comparable to those observed with well-known objects in the present study.

Interestingly, priming effects of both usual and unusual action intentions were reliably obtained with only a limited class of objects. Apparently semantic information was consistently activated during action preparation, despite the large number of repetitions, thereby replicating previous findings (Lindemann et al., 2006; van Elk et al., 2008b). In addition, recent studies from our lab extend the notion that object semantics are organized primarily around action goals to a wide class of different objects (e.g. van Elk, van Schie, & Bekkering, 2008a). Accordingly, we suggest that the present findings generalize to object use in general and thereby provide new insight into how we select actions with objects that deviate from our default action repertoire in daily life.

Although several studies have reported effects of semantically distracting words on grasping kinematics (Boulenger et al., 2006; Glover & Dixon, 2002), in the present study no significant interactions between action execution and words were found. A plausible explanation for the absence of kinematic effects is that, because subjects were required to prepare the action before word onset and to execute the action after semantic categorization, word processing and action execution did not take place at the same time. The absence of effects of word processing on movements is in line with previous findings, in which the action was also found to be unaffected by the words presented (Lindemann et al., 2006; van Elk et al., 2008b). Interestingly, the present study did not show reliable kinematic differences between usual and unusual actions in, for instance, movement time, percentage of time to maximum grip aperture or peak velocity. Although it is typically assumed that usual actions are performed in a relatively automatic fashion, whereas unusual actions require more attentional control (Cooper, 2002), this distinction did not become apparent at a behavioral level. Only a trend was found for slower reaction times when initiating an unusual action, suggesting that differences between usual and unusual actions occur mainly in the preparatory phase of the action (Rosenbaum, 1980). This suggestion receives further support from the finding that reaction time differences between usual and unusual actions almost disappear if the goal of the action is known beforehand (Experiments 2 and 4).

In the fourth experiment action priming effects were found to extend beyond a lexical level, by showing faster responses to pictures representing the correct goal location of the action. As semantic representations are more easily accessed in response to pictures than to words (Carr, McCauley, Sperber, & Parmelee, 1982), the findings from the fourth experiment provide further support for the semantic nature of the action priming effects. Although it is difficult to completely rule out the possibility that subjects internally named the picture, thereby leading to a similar effect as when presenting words, these findings suggest a possible link between the semantics that are activated during action preparation and the visual semantics associated with the processing of concrete words (Kellenbach, Wijers, & Mulder, 2000; van Schie, Wijers, Mars, Benjamins, & Stowe, 2005). This suggestion is in line with previous findings (van Elk et al., 2008b), in which the frontal N400 effect for words that were incongruent with the action goal was comparable to the frontal concreteness effect that is typically obtained in response to words referring to concrete objects (Holcomb, Kounios, Anderson, & West, 1999). Thus, action preparation might involve selecting a visual representation of the intended action goal, which can be considered an instance of the ideomotor principle.

An intriguing question is why action preparation activates visual representations of body parts that are not directly visible to the actor (i.e. you never see your own mouth when drinking from a cup). Several studies have shown that the observation of others’ body parts is closely related to the perception of one’s own body (Reed & Farah, 1995; Tipper et al., 1998). For example, viewing others’ body parts increases sensitivity at the same body site in the observer, even when the body parts cannot be viewed directly from a first-person perspective (e.g. one’s face or neck; Tipper et al., 2001). Furthermore, accumulating evidence suggests that comparable representations are involved in self-produced and observed actions (Buccino et al., 2001; Rizzolatti & Craighero, 2004). Together these studies suggest that brain areas that are involved in the identification of other people’s body parts (such as the extrastriate body area) could support the planning of one’s own actions as well (Astafiev, Stanley, Shulman, & Corbetta, 2004). According to this interpretation, the preparation of an action facilitates the perception of others’ body parts that are involved in the same action, because the body representations involved in perception and action share similar neural substrates. Although still speculative, the notion of shared body representations in action and perception calls for further investigation.

7. Conclusions

In the present study, we investigated the semantic representations that support the flexible use of objects. It was found that the preparation of unusual actions overruled long-term semantic knowledge by activating short-term semantic goal representations. These findings extend the ‘selection-for-action’ principle to a semantic level and call for further investigation into the neural networks involved.

Acknowledgements

The present study was supported by VICI Grant 453-05-001 from the Dutch Organization for Scientific Research (NWO). We thank Boris Waterschoot, Maurice Rijnaard and Fred Noten for their assistance with the data collection. We would also like to thank three anonymous reviewers for their constructive comments on a previous version of this manuscript.

References

Allport, A. (1987). Selection for action: Some behavioral and neurophysiological considerations of attention and action. In H. Heuer & A. F. Sanders (Eds.), Perspectives on perception and action (pp. 395–419). Hillsdale, NJ: Lawrence Erlbaum Associates.

Allport, A. (1989). Visual attention. In M. I. Posner (Ed.), Foundations of cognitive science (pp. 631–682). Cambridge, MA: The MIT Press.

Archibald, S. J., Mateer, C. A., & Kerns, K. A. (2001). Utilization behavior: Clinical manifestations and neurological mechanisms. Neuropsychology Review, 11(3), 117–130.

Astafiev, S. V., Stanley, C. M., Shulman, G. L., & Corbetta, M. (2004). Extrastriate body area in human occipital cortex responds to the performance of motor actions. Nature Neuroscience, 7(5), 542–548.

Barsalou, L. W. (1993). Flexibility, structure and linguistic vagary in concepts: Manifestations of a compositional system of perceptual symbols. In A. F. Collins, S. E. Gathercole, M. A. Conway, & P. E. Morris (Eds.), Theories of memory (pp. 29–101). Hillsdale, NJ: Erlbaum.

Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22(4), 577–609 (discussion 610–660).

Barsalou, L. W., Kyle Simmons, W., Barbey, A. K., & Wilson, C. D. (2003). Grounding conceptual knowledge in modality-specific systems. Trends in Cognitive Sciences, 7(2), 84–91.

Bekkering, H., & Neggers, S. F. (2002). Visual search is modulated by action intentions. Psychological Science, 13(4), 370–374.

Biro, D., Inoue-Nakamura, N., Tonooka, R., Yamakoshi, G., Sousa, C., & Matsuzawa, T. (2003). Cultural innovation and transmission of tool use in wild chimpanzees: Evidence from field experiments. Animal Cognition, 6(4), 213–223.

Boulenger, V., Roy, A. C., Paulignan, Y., Deprez, V., Jeannerod, M., & Nazir, T. A. (2006). Cross-talk between language processes and overt motor behavior in the first 200 msec of processing. Journal of Cognitive Neuroscience, 18(10), 1607–1615.

Bozeat, S., Lambon Ralph, M. A., Patterson, K., & Hodges, J. R. (2002). When objects lose their meaning: What happens to their use? Cognitive, Affective, & Behavioral Neuroscience, 2(3), 236–251.

Bub, D. N., Masson, M. E. J., & Cree, G. S. (2008). Evocation of functional and volumetric gestural knowledge by objects and words. Cognition, 106(1), 27–58.

Buccino, G., Binkofski, F., Fink, G. R., Fadiga, L., Fogassi, L., Gallese, V., et al. (2001). Action observation activates premotor and parietal areas in a somatotopic manner: An fMRI study. European Journal of Neuroscience, 13(2), 400–404.

Carr, T. H., McCauley, C., Sperber, R. D., & Parmelee, C. M. (1982). Words, pictures, and priming: On semantic activation, conscious identification, and the automaticity of information processing. Journal of Experimental Psychology: Human Perception and Performance, 8(6), 757–777.

Chao, L. L., & Martin, A. (2000). Representation of manipulable man-made objects in the dorsal stream. Neuroimage, 12(4), 478–484.

Cooper, R. (2002). Order and disorder in everyday action: The roles of contention scheduling and supervisory attention. Neurocase, 8(1–2), 61–79.

Corballis, M. C. (1989). Laterality and human evolution. Psychological Review, 96(3), 492–505.

Craighero, L., Fadiga, L., Rizzolatti, G., & Umilta, C. (1999). Action for perception: A motor-visual attentional effect. Journal of Experimental Psychology: Human Perception and Performance, 25(6), 1673–1692.

Creem, S. H., & Proffitt, D. R. (2001). Grasping objects by their handles: A necessary interaction between cognition and action. Journal of Experimental Psychology: Human Perception and Performance, 27(1), 218–228.

Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18, 193–222.

Desmarais, G., Dixon, M. J., & Roy, E. A. (2007). A role for action knowledge in visual object identification. Memory and Cognition, 35(7), 1712–1723.

Duncan, J., Emslie, H., Williams, P., Johnson, R., & Freer, C. (1996). Intelligence and the frontal lobe: The organization of goal-directed behavior. Cognitive Psychology, 30(3), 257–303.

Gennari, S. P., MacDonald, M. C., Postle, B. R., & Seidenberg, M. S. (2007). Context-dependent interpretation of words: Evidence for interactive neural processes. Neuroimage, 35(3), 1278–1286.

Gentilucci, M. (2003). Object motor representation and language. Experimental Brain Research, 153(2), 260–265.

Gibson, J. J. (1979). The ecological approach to visual perception. Boston, MA: Houghton Mifflin.

Glenberg, A. M., & Robertson, D. A. (2001). Symbol grounding and meaning: A comparison of high-dimensional and embodied theories of meaning. Journal of Memory and Language, 43, 379–401.

Glover, S., & Dixon, P. (2002). Semantics affect the planning but not control of grasping. Experimental Brain Research, 146(3), 383–387.

Goldenberg, G., Hermsdorfer, J., Glindemann, R., Rorden, C., & Karnath, H. O. (2007). Pantomime of tool use depends on integrity of left inferior frontal cortex. Cerebral Cortex, 17(12), 2769–2776.

Grezes, J., Tucker, M., Armony, J., Ellis, R., & Passingham, R. E. (2003). Objects automatically potentiate action: An fMRI study of implicit processing. European Journal of Neuroscience, 17(12), 2735–2740.

Gruber, O., & Goschke, T. (2004). Executive control emerging from dynamic interactions between brain systems mediating language, working memory and attentional processes. Acta Psychologica, 115(2–3), 105–121.

Hannus, A., Cornelissen, F. W., Lindemann, O., & Bekkering, H. (2005). Selection-for-action in visual search. Acta Psychologica, 118(1–2), 171–191.

Hayashi, M., Mizuno, Y., & Matsuzawa, T. (2005). How does stone-tool use emerge? Introduction of stones and nuts to naive chimpanzees in captivity. Primates, 46(2), 91–102.

Heyes, C., & Bird, G. (2007). Mirroring, association, and the correspondence problem. In P. Haggard, Y. Rossetti, & M. Kawato (Eds.), Attention and performance XXII: Sensorimotor foundations of higher cognition (pp. 461–479). Oxford: Oxford University Press.

Heyes, C., Bird, G., Johnson, H., & Haggard, P. (2005). Experiencemodulates automatic imitation. Brain Research and Cognitive BrainResearch, 22(2), 233–240.

Hodges, J. R., Bozeat, S., Lambon Ralph, M. A., Patterson, K., & Spatt, J.(2000). The role of conceptual knowledge in object use evidence fromsemantic dementia. Brain, 123(9), 1913–1925.

Holcomb, P. J., Kounios, J., Anderson, J. E., & West, W. C. (1999). Dual-coding, context-availability, and concreteness effects in sentencecomprehension: An electrophysiological investigation. Journal ofExperimental Psychology: Learning, Memory, and Cognition, 25(3),721–742.

Hommel, B. (1996). The cognitive representation of action: Automaticintegration of perceived action effects. Psychological Research, 59(3),176–186.

Hommel, B., Musseler, J., Aschersleben, G., & Prinz, W. (2001). The theoryof event coding (TEC): A framework for perception and actionplanning. Behavioral and Brain Sciences, 24(5), 849–878 (discussion878-937).

M. van Elk et al. / Cognition 111 (2009) 72–83

Ihara, A., Hayakawa, T., Wei, Q., Munetsuna, S., & Fujimaki, N. (2007). Lexical access and selection of contextually appropriate meaning for ambiguous words. Neuroimage, 38(3), 576–588.

Johnson-Frey, S. H. (2003). What’s so special about human tool use? Neuron, 39(2), 201–204.

Kan, I. P., & Thompson-Schill, S. L. (2004). Selection from perceptual and conceptual representations. Cognitive, Affective, & Behavioral Neuroscience, 4(4), 466–482.

Kellenbach, M. L., Brett, M., & Patterson, K. (2003). Actions speak louder than functions: The importance of manipulability and action in tool representation. Journal of Cognitive Neuroscience, 15(1), 30–46.

Kellenbach, M. L., Wijers, A. A., & Mulder, G. (2000). Visual semantic features are activated during the processing of concrete words: Event-related potential evidence for perceptual semantic priming. Cognitive Brain Research, 10(1–2), 67–75.

Kiefer, M., Sim, E. J., Liebich, S., Hauk, O., & Tanaka, J. (2007). Experience-dependent plasticity of conceptual representations in human sensory-motor areas. Journal of Cognitive Neuroscience, 19(3), 525–542.

Lewis, J. W. (2006). Cortical networks related to human use of tools. Neuroscientist, 12(3), 211–231.

Lindemann, O., Stenneken, P., van Schie, H. T., & Bekkering, H. (2006). Semantic activation in action planning. Journal of Experimental Psychology: Human Perception and Performance, 32(3), 633–643.

Loftus, G. R., & Masson, M. E. J. (1994). Using confidence intervals in within-subjects designs. Psychonomic Bulletin and Review, 1, 476–490.

Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24, 167–202.

Murata, A., Gallese, V., Luppino, G., Kaseda, M., & Sakata, H. (2000). Selectivity for the shape, size and orientation of objects for grasping in neurons of monkey parietal area AIP. Journal of Neurophysiology, 83(5), 2580–2601.

Nazir, T. A. (2008). Links and interactions between language and motor systems in the brain. Journal of Physiology Paris, 102(1–3), 1–152 (special issue).

Patterson, K., Nestor, P. J., & Rogers, T. T. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience, 8(12), 976–987.

Pecher, D., & Raaijmakers, J. G. (1999). Automatic priming effects for new associations in lexical decision and perceptual identification. Quarterly Journal of Experimental Psychology, 52(3), 593–614.

Pochon, J. B., Levy, R., Poline, J. B., Crozier, S., Lehericy, S., Pillon, B., et al. (2001). The role of dorsolateral prefrontal cortex in the preparation of forthcoming actions: An fMRI study. Cerebral Cortex, 11(3), 260–266.

Povinelli, D. J., Reaux, J. E., & Theall, L. A. (2000). Folk physics for apes: The chimpanzee’s theory of how the world works. Biology and Philosophy, 17(5), 695–702.

Reed, C. L., & Farah, M. J. (1995). The psychological reality of the body schema: A test with normal participants. Journal of Experimental Psychology: Human Perception and Performance, 21(2), 334–343.

Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.

Rosenbaum, D. A. (1980). Human movement initiation: Specification of arm, direction, and extent. Journal of Experimental Psychology: General, 109(4), 444–474.

Saito, N., Mushiake, H., Sakamoto, K., Itoyama, Y., & Tanji, J. (2005). Representation of immediate and final behavioral goals in the monkey prefrontal cortex during an instructed delay period. Cerebral Cortex, 15(10), 1535–1546.

Snodgrass, J. G., & Vanderwart, M. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6(2), 174–215.

Tanji, J., & Hoshi, E. (2008). Role of the lateral prefrontal cortex in executive behavioral control. Physiological Reviews, 88(1), 37–57.

Tipper, S. P., Lloyd, D., Shorland, B., Dancer, C., Howard, L. A., & McGlone, F. (1998). Vision influences tactile perception without proprioceptive orienting. Neuroreport, 9(8), 1741–1744.

Tipper, S. P., Phillips, N., Dancer, C., Lloyd, D., Howard, L. A., & McGlone, F. (2001). Vision influences tactile perception at body sites that cannot be viewed directly. Experimental Brain Research, 139(2), 160–167.

Tomasello, M. (1990). Cultural transmission in the tool use and communicatory signaling of chimpanzees? In K. R. Gibson & S. T. Parker (Eds.), “Language” and intelligence in monkeys and apes: Comparative developmental perspectives (pp. 274–311). New York: Cambridge University Press.

Tucker, M., & Ellis, R. (1998). On the relations between seen objects and components of potential actions. Journal of Experimental Psychology: Human Perception and Performance, 24(3), 830–846.

Tucker, M., & Ellis, R. (2001). The potentiation of grasp types during visual object categorization. Visual Cognition, 8(6), 769–800.

van Elk, M., van Schie, H. T., & Bekkering, H. (2008a). Conceptual knowledge for understanding other’s actions is organized primarily around action goals. Experimental Brain Research, 189(1), 99–107.

van Elk, M., van Schie, H. T., & Bekkering, H. (2008b). Semantics in action: An electrophysiological study on the use of semantic knowledge for action. Journal of Physiology Paris, 102(1–3), 95–100.

van Elk, M., van Schie, H. T., Lindemann, O., & Bekkering, H. (2007). Using conceptual knowledge in language and action. In P. Haggard, Y. Rossetti, & M. Kawato (Eds.), Attention and performance XXII: Sensorimotor foundations of higher cognition (pp. 575–599). Oxford: Oxford University Press.

van Schie, H. T., Wijers, A. A., Mars, R. B., Benjamins, J. S., & Stowe, L. A. (2005). Processing of visual semantic information to concrete words: Temporal dynamics and neural mechanisms indicated by event-related brain potentials. Cognitive Neuropsychology, 22(3–4), 364–386.

Yoon, E. Y., & Humphreys, G. W. (2005). Direct and indirect effects of action on object classification. Memory and Cognition, 33(7), 1131–1146.

