
HAL Id: hal-00360753, https://hal.archives-ouvertes.fr/hal-00360753

Submitted on 11 Feb 2009



From gestural pointing to vocal pointing in the brain

Hélène Loevenbruck, Marion Dohen, Coriandre Vilain

To cite this version: Hélène Loevenbruck, Marion Dohen, Coriandre Vilain. From gestural pointing to vocal pointing in the brain. Revue Française de Linguistique Appliquée, Paris: Publications linguistiques, 2008, XIII (2), pp. 23-33. hal-00360753


FROM GESTURAL POINTING TO VOCAL POINTING IN THE BRAIN

Hélène Lœvenbruck 1, Marion Dohen 1,2, Coriandre Vilain 1

1 Département Parole et Cognition, GIPSA-lab, UMR CNRS 5216, Universités de Grenoble
2 ATR, Cognitive Information Science Laboratories, Kyoto, Japan

Abstract

Deixis, or pointing, is the ability to draw the viewer/listener’s attention to an object, a person, a direction or an

event. Pointing is involved at different stages of human communication development, in multiple modalities:

first with the eyes, then with the finger, then with intonation and finally with syntax. It is ubiquitous and

probably universal in human interactions. The role of index-finger pointing in language acquisition suggests that

it may be a precursor of vocal pointing or that vocal pointing may be grounded in the same cerebral network as

gestural pointing.

Résumé

La deixis, ou le pointage, est la capacité d’attirer l’attention du spectateur ou de l’auditeur vers un objet, une

personne, une direction ou un événement. Le pointage est impliqué à différents stades du développement de la

communication chez l’être humain, via diverses modalités : d’abord avec les yeux, puis avec le doigt (l’index),

puis l’intonation et enfin la syntaxe. Il est ubiquitaire lors des interactions humaines et est probablement

universel. Le rôle du pointage avec l’index dans l’acquisition du langage suggère qu’il pourrait être un

précurseur du pointage vocal et que pointages gestuel et vocal pourraient être ancrés dans un même réseau

cérébral.

INTRODUCTION

Pointing, or deixis, is a universal ability which orients the attention of another person so that an

object/person/direction/event becomes the shared focus of attention. It serves to single out, to individuate what

will become the referent. A pointing gesture is most often performed with the index finger and arm extended in

the direction of the interesting object and with the other fingers curled inside the hand (Butterworth, 2003). But

pointing can be expressed in other modalities. It can for instance be vocal or linguistic. Linguists define deixis as

the way language expresses reference to points in time, space or events (Fillmore, 1997). In its narrow sense,

deixis refers to the contextual meaning of deictic words, which include pronouns, place deictics, time deictics

and demonstratives. In its broad sense, it provides a means to highlight relevant elements in the discourse, to

designate, identify or select an element. In French, as in many languages, broad sense deixis can be conveyed by

syntactic extraction or prosodic focus (Berthoud, 1990). Syntactic extraction involves the use of a cleft

presentation form such as in the example below:


C’est Madeleine qui m’amena (It’s Madeleine who brought me along.)

Deixis can also be conveyed by contrastive prosodic focus, i.e. by using a specific intonational contour on the

pointed item, such as in the example below:

MADELEINE_F m’amena. (MADELEINE_F brought me along.)

The effect of this intonational contour is to highlight the pointed item (“Madeleine”), the rest of the utterance

bearing a flat, post-focal contour. Compared with a broad focus or non-deictic rendition of an utterance, prosodic

focus involves precise acoustic and articulatory modifications. It has been shown that speakers use reliable

proprioceptive strategies to organize phonation and articulation adequately in order to convey prosodic focus

(see e.g. Dohen et al., 2006). On the perception side, visual only and auditory-visual perception tests carried out

in our department (Dohen & Lœvenbruck, 2005; in press) have shown that these phonatory and articulatory

patterns are recovered by listeners/viewers and used for prosodic focus detection.
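
As a rough illustration of the kind of acoustic comparison involved (a sketch only, not the measurement procedure of the studies cited above; the file names, word boundaries and the use of the parselmouth interface to Praat are assumptions for illustration), one could compare the mean F0 and duration of the focused word with those of its broad-focus counterpart:

import numpy as np
import parselmouth  # Python interface to Praat (assumed available)

def word_f0_and_duration(wav_path, t_start, t_end):
    """Mean F0 (Hz, voiced frames only) and duration (s) of one word."""
    word = parselmouth.Sound(wav_path).extract_part(t_start, t_end)
    f0 = word.to_pitch().selected_array['frequency']
    voiced = f0[f0 > 0]  # Praat codes unvoiced frames as 0 Hz
    mean_f0 = float(np.mean(voiced)) if voiced.size else float('nan')
    return mean_f0, t_end - t_start

# Hypothetical word boundaries (in seconds) for "Madeleine" in two renditions
focus_f0, focus_dur = word_f0_and_duration("madeleine_focus.wav", 0.10, 0.75)
broad_f0, broad_dur = word_f0_and_duration("madeleine_broad.wav", 0.10, 0.62)
print(f"focused rendition:     mean F0 = {focus_f0:.0f} Hz, duration = {focus_dur:.2f} s")
print(f"broad-focus rendition: mean F0 = {broad_f0:.0f} Hz, duration = {broad_dur:.2f} s")

In a focused rendition one would typically expect a higher F0 prominence and a longer duration on the focused word, with reduced values over the post-focal region.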

It should be mentioned that syntactic extraction may or may not be accompanied by prosodic focus. The use of a

specific (stored) linguistic construction can convey the pointing meaning, without the need for specific

somatosensory (acoustic and articulatory) change, with respect to a non-deictic utterance.

Interestingly, pointing is involved at several stages of human communication development. Ocular pointing (or

deictic gazes, at 6 - 9 months) and, later, index finger-pointing (deictic gestures, at 9 - 11 months) have been

shown to be two key stages in infant cognitive development that are correlated with stages in oral speech

development.

At 9 to 11 months, when infants start to be able to understand a few words, they produce pointing gestures. The

emergence of pointing is a good predictor of first-word onset, and gesture production is related to gains in

language development between 9 and 13 months (Bates et al., 1979, Butcher & Goldin-Meadow, 2000; Caselli,

1990). As explained in Butterworth (2003), pointing not only serves to single out the object but also to build a

connection between the object and the speech sound. Finger pointing therefore seems clearly associated with

lexicon construction. Then at 16 to 20 months, during the transition from the one-word stage to the two-word

stage, combinations of words and deictic gestures can be observed (such as a pointing gesture to a location and

pronouncing “dog”, to indicate that a dog is there). Furthermore, the number of gestures and gesture-word-

combinations produced at 16 months is predictive of total vocal production at 20 months (Morford & Goldin-

Meadow, 1992; Capirci et al., 1996; Goldin-Meadow & Butcher, 2003; Volterra et al., 2005). Finger pointing

therefore clearly seems associated with morphosyntax emergence.

Little is known about the stages of development of vocal pointing, i.e. prosodic focus and syntactic extraction in

children.

Concerning the acoustic realization of prosody, pre-school children have been shown to master contrastive focus

in English, in the absence of any formal teaching (Hornby & Hass, 1970). As concerns articulatory production of

prosodic pointing, Ménard et al. (2006) showed that French-speaking children (aged 4 and 8) do not differentiate

articulatory and acoustic patterns across focused and unfocused syllables as much as adult speakers do. To

summarize the findings, although the production of prosodic focus, or vocal pointing, seems to be mastered quite

early (as early as 3 years of age) in the acoustic domain, and without formal teaching, its articulatory correlates


seem to be acquired much later. But what seems lacking in children is not the capacity to hyper-articulate

focused phrases, but to hypo-articulate surrounding phrases. The delay in articulatory performance could thus be

related to general articulatory proficiency rather than to specific linguistic mastery. Since manual pointing

emerges spontaneously before the end of the first year, it could be that acoustic correlates of prosodic focus are

in fact mastered much earlier than 3 years of age.

The development of syntax in young children has been extensively studied by Tomasello and colleagues, among

others. Diessel & Tomasello (2000) showed that the earliest and most frequent relative clauses that children

(aged 2 and younger: 1;11) learn occur in presentational constructions that are propositionally simple, such as:

“That’s the sugar that goes in there”. Interestingly, the main clause contains a deictic pronoun. They cited a

study by Jisa and Kern (1998) on French children, which also reported extensive use of presentational

constructions in young children.

To sum up, pointing seems to be one of the first communicative tools used by babies (e.g. Bates et al., 1975;

Bruner, 1975; Hewes, 1981; Kita, 2003; Tomasello et al., 2007). It is a key part of the shared attention

mechanism in child-adult interaction. It seems to emerge spontaneously, in stages, and in association with oral

productions. Some researchers even claim that gestural pointing is at the origin of human communication (see

e.g. Corballis, 1991). The crucial role of index-finger pointing in language development suggests that vocal

pointing may share features with manual pointing and may be grounded in neighbouring cerebral

regions. The aim of this article is to explore the cerebral basis of gestural and vocal pointing.

1. CEREBRAL REGIONS INVOLVED IN GESTURAL POINTING

As explained above, the crucial role of pointing in human communication, be it with the finger, with intonation

or syntax, suggests that pointing with the finger and pointing with the voice may be grounded in a common

cerebral network. The first question we want to address is: what are the cerebral correlates of manual pointing?

Several observations provide preliminary answers to this question.

The first set of observations deals with the role of the posterior parietal regions of both hemispheres in manual

pointing tasks. It has been suggested that the spatial representations formed in the posterior parietal and premotor

frontal regions could provide “perceptual – premotor interfaces for the organization of movements (e.g. pointing,

locomotion) directed towards targets in personal and extrapersonal space” (Vallar, 1997, our underlining).

Patients with left unilateral neglect have been shown to present deficits in pointing tasks (e.g. Edwards &

Humphreys, 1999) while PET studies on normal subjects show activation within the left and/or right inferior

parietal lobule (IPL) during pointing tasks (e.g. Lacquaniti et al., 1997; Kertzman et al., 1997). The role of the

right and left posterior/inferior parietal regions in pointing tasks may further be related to data on brain-damaged

deaf signers. Bellugi and colleagues presented a study of deaf signers of American Sign Language (ASL), two of

whom presented lateralized parietal lesions, one in the right hemisphere and the other in the left (Bellugi et al.,

1989). Space in ASL is handled in two ways. The first is topographic: in the description of the layout of objects

in space, spatial relations among signs reproduce the actual spatial relations among the objects. The second is

deictic: space is used for referential indexing. The right-lesioned signer had difficulty in the use of space for


topography: room description was distorted spatially, with the left side of the signing space neglected. In the use

of space for syntax, however, the entire signing space was covered and consistent reference to spatial loci was

preserved. By contrast, the left-lesioned signer produced room descriptions without spatial distortions but made

errors in the deictic use of space.

More recently, a study on finger pointing suggested that pointing with the finger recruits a left-lateralized

network including the frontal eye field and the posterior parietal cortex (Astafiev et al., 2003). In this study, the

manual pointing task included a preparation phase and an execution phase. During the preparation phase, the

subjects had to prepare to point at a cued location with the right index finger. During the execution phase, the

subjects were asked to point towards the target as soon as the target flashed. In pointing preparation, bilateral

frontal eye field (FEF) and intraparietal sulcus (IPS) activations were observed. Additional left hemisphere

activation was observed in the angular gyrus, the supramarginal gyrus, the superior parietal lobule, the dorsal

precentral gyrus, and the superior temporal sulcus. These results suggest that the posterior parietal cortex and the

superior frontal cortex contain regions that code preparatory signals for finger pointing, with left hemisphere

dominance.

The question we want to address now is whether a similar network is involved in vocal pointing. Our conjecture

is that it could well be the case for prosodic pointing. We mentioned in the introduction that prosodic focus

involves precise acoustic and articulatory control. It may therefore need integrated (acoustic, articulatory,

proprioceptive) representations in order to produce acoustic contours and articulatory patterns adequately. These

representations may be formed via the activation of associative cerebral areas, such as temporal and/or parietal

regions. The fact that, as detailed above, manual pointing recruits the (associative) posterior parietal cortex is a

strong motivation to explore the cerebral correlates of prosodic pointing. As concerns syntactic pointing, the fact

that a stored linguistic construction may be used, without the additional requirement of any sophisticated

articulatory and phonatory strategy, suggests that associative cerebral regions may not necessarily be recruited.

2. CEREBRAL REGIONS INVOLVED IN VOCAL POINTING: PRODUCTION AND PERCEPTION

2.1. fMRI study of the silent production of vocal (prosodic and syntactic) pointing

The first fMRI production study presented below is described in more detail in Lœvenbruck et al. (2005). The

aim was to examine the cerebral correlates of the production of vocal pointing in French, conveyed by prosodic

focus and syntactic extraction. Sixteen healthy, male, right-handed native speakers of French were examined.

The stimuli consisted of visually presented sentences in French. Three isosyllabic sentences were presented, one

for each condition:

- baseline condition: “Madeleine m’amena” (Madeleine brought me around).

- prosodic deixis condition: “MADELEINE m’amena” (MADELEINE brought me around), to elicit

contrastive focus on the agent.

- syntactic deixis condition: “C’est Mad’leine qui m’am’na” (It’s Mad’leine who brought me ’round).

The number of syllables in the sentence was maintained equal to 6, using schwa deletion. Each sentence was

presented for 3 seconds at the beginning of the corresponding condition. Then a fixation mark, alternating every


3 seconds between a ‘+’ and a ‘x’ sign, appeared in the middle of the screen. This alternation aimed at triggering

the silent (covert speech) repetition (14 times per condition) of the sentence presented.

Functional MR imaging was performed on a 1.5T imager (Philips NT) with echo-planar (EPI) acquisition. Three

functional scans were performed. A block paradigm was used. The order of presentation of the conditions was

varied across scans and across subjects.
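
The timing of one condition block can be sketched from the figures given above (3 s of sentence display, then a fixation mark alternating every 3 s to cue 14 covert repetitions). The sketch below is only an illustration in Python, not the presentation software actually used, and it omits rest periods and scanner synchronization:

# Minimal sketch of one condition block as described in the text:
# 3 s sentence display, then a '+'/'x' fixation mark alternating every 3 s,
# each alternation cueing one covert (silent) repetition, 14 per condition.
def block_schedule(sentence, n_repetitions=14, display_s=3.0, cue_s=3.0):
    """Return a list of (onset_s, duration_s, event) tuples for one block."""
    events = [(0.0, display_s, f"display: {sentence}")]
    t = display_s
    for i in range(n_repetitions):
        mark = '+' if i % 2 == 0 else 'x'
        events.append((t, cue_s, f"fixation '{mark}' -> covert repetition {i + 1}"))
        t += cue_s
    return events

for onset, dur, event in block_schedule("MADELEINE m'amena"):
    print(f"{onset:5.1f} s  ({dur:.0f} s)  {event}")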

The results of the fixed effects group analysis and of the random effects analysis are reported here. For the same

contrasts, the random effects analysis did not yield significant activations at the same statistical significance

threshold (p < .001, corrected). With a less stringent threshold, however (p < .05, uncorrected), the contrasts

yielded a pattern of activations similar to the one obtained with the fixed effects analysis.
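
The difference between the two group analyses can be illustrated with a toy computation on simulated values (a sketch only; the numbers and the simulation are invented and this is not the analysis pipeline used in the study): a fixed effects analysis pools all scans across subjects, which yields many degrees of freedom but restricts inference to the sample, whereas a random effects analysis tests one contrast value per subject, so inference generalizes to the population at the cost of sensitivity with 16 subjects.

# Toy illustration of fixed effects vs random effects group inference
# (simulated contrast values; not the actual GLM analysis used in the study).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_scans = 16, 30
true_effect = 0.4
# Simulated per-scan contrast values: subject-specific means plus scan noise
subject_means = true_effect + rng.normal(0.0, 0.5, n_subjects)
scans = subject_means[:, None] + rng.normal(0.0, 1.0, (n_subjects, n_scans))

# Fixed effects: pool every scan of every subject into one large t-test
t_fixed, p_fixed = stats.ttest_1samp(scans.ravel(), 0.0)
# Random effects: one value per subject (their mean), then a t-test across subjects
t_random, p_random = stats.ttest_1samp(scans.mean(axis=1), 0.0)

print(f"fixed effects : t = {t_fixed:6.2f}, p = {p_fixed:.2e} (df = {scans.size - 1})")
print(f"random effects: t = {t_random:6.2f}, p = {p_random:.2e} (df = {n_subjects - 1})")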

Figure 1 provides the functional activations obtained for the main effect of prosodic pointing with the fixed

effect analysis. Compared to the baseline, prosodic pointing recruited Broca’s region (BA 45, 47), the left insula

and the premotor cortex (BA 6) bilaterally. In addition, it activated the left anterior cingulate gyrus (BA 24, 32),

the left supramarginal gyrus (LSMG, BA 40) and the left postero-superior temporal gyrus (Wernicke’s area, BA

22).

Figure 1: Axial views of activations in the (prosodic pointing – baseline) contrast. The following regions are observed: a. Broca’s region; b. Wernicke’s area; c.-e. left supramarginal gyrus and bilateral premotor cortex.

Broca’s region or the Left Inferior Frontal Gyrus (LIFG) was activated during prosodic pointing. This finding is

consistent with Dogil and colleagues’ fMRI study on the production of prosodic features at the syllable and

phrase levels which also revealed LIFG activation (Mayer et al., 2002; Dogil et al., 2002). The left insula was

also found to be activated in the (prosodic pointing – baseline) contrast. The involvement of the left precentral

gyrus of the insula in articulatory planning during speech has already been shown (Dronkers, 1996). As

described above, prosody has acoustic and articulatory correlates. The production of prosodic focus may require

more accurate planning of the movements of the larynx, the tongue and the jaw, which could explain why the

prosodic pointing condition yields significant activation of the left insula when compared with the baseline

(same words to articulate, but a more stringent prosody).

The activation of the LSMG and of Wernicke’s area in the prosodic pointing condition is in accordance with our

conjecture. As mentioned earlier about gestural pointing, and in line with Hickok & Poeppel’s (2000) view of

speech perception, the inferior parietal regions in both hemispheres can be considered as interfaces which form

representations necessary in the organization of motor actions, such as pointing at targets. In speech, a left

temporo-parieto-frontal network might be recruited in the organization of verbal motor actions from auditory


representations. Our results, with the activations of the LSMG, the LIFG and Wernicke’s area in prosodic

pointing, are in line with this hypothesis. Like visually-guided manual pointing, prosodic pointing may need

multisensory representations to be formed via superior temporal and inferior parietal regions to organize

articulation and phonation adequately.

The functional activations obtained for the main effect of syntactic pointing with the fixed effect analysis are

shown in Figure 2. Compared to the baseline, syntactic pointing recruited Broca’s region (BA 45, 47), the left

insula and the premotor cortex (BA 6) bilaterally, with a left dominance.

Figure 2: Axial views of activations in the (syntactic pointing – baseline) contrast. The following regions are observed: a. Broca’s region (BA 47); b. Broca’s region (BA 45); c.-e. premotor cortex (left dominance). Although Broca’s region is activated (just as in prosodic pointing), no left parietal activation is observed.

The LIFG activation during the production of syntactic pointing is consistent with functional neuroimaging

studies on complex syntactic processing, which we will further describe below (e.g. Caplan et al., 2000,

Friederici, 2002; Just et al., 1996). Taken together with the results for prosodic pointing, these observations

support the claim that the role of the LIFG is that of an action-structure parser, which, in morphosyntactic

encoding and decoding, handles the parsing of the predicate and its arguments, or the attentional monitoring of

“who does what to whom”.

Compared to the baseline condition, the syntactic utterances required more accurate articulatory planning, given

the larger number of consonant clusters involved (due to schwa deletion). This could explain the left insula

activation observed in the syntactic deixis condition (compared to the baseline).

Contrary to prosodic pointing, syntactic pointing did not involve any parietal activation. In the instructions, we

had explicitly asked the speakers to refrain from using a focused intonation when using the syntactic extraction

form. One interpretation for the lack of parietal activation may be that the use of a (stored) formalized

presentational construction, without specific on-line articulatory and phonatory control, may not require the

activation of associative cerebral regions.

We therefore suggest that non-grammaticalized linguistic pointing (prosodic focus) recruits the temporo-parieto-

frontal network and that grammaticalized pointing (syntactic pointing) is handled solely by the LIFG.

2.2. fMRI study of the perception of vocal pointing

We first present the preliminary results of an fMRI study of the processing of prosodic pointing. A review of the


literature provides data on the processing of syntactic pointing.

Perception of prosodic pointing

The production study described above suggests that associative brain areas are recruited for the production of

prosodic vocal pointing. It can be hypothesized that the cerebral network recruited for the production of prosodic

pointing is, at least partly, also recruited for its perception. This is what the study presented hereafter tested.

Although the acoustic and articulatory correlates of prosodic focus have been quite extensively studied, it

remains unclear what neural processes underlie its perception. Meanwhile studies have shown that prosodic

processing in general cannot be restricted to the right hemisphere (see Baum & Pell, 1999 for a review). Two

studies have analysed the processing of prosodic focus (or closely related prosodic phenomena). The first one

(Wildgruber et al., 2004) aimed at contrasting affective vs. linguistic prosody. The linguistic prosodic task was

an indirect informational focus detection task (find the most suitable answer to a specific question). For the

linguistic prosodic task, the authors found bilateral activations of the primary and secondary auditory cortices, of

the anterior insular cortex and of the frontal operculum (BA 6/44/47), right hemisphere activation of the dorso-

lateral frontal regions and left hemisphere activations of the inferior frontal cortex.

examined the processing of prosodic focus (Tong et al., 2005) aimed at differentiating the processing of

“intonation” (question/affirmation discrimination) and that of contrastive stress. It additionally compared English

and Chinese. For the processing of contrastive stress, the authors reported bilateral activation of the intra-

parietal sulcus (BA 40/7), right hemisphere activation of the middle frontal gyrus (BA 9/46) and left hemisphere

activations of the supramarginal gyrus and the posterior middle temporal gyrus (BA 21/20/37).

Moreover, even though the perception of prosodic focus is often considered as uniquely auditory, it is possible to

perceive prosodic focus visually and the visual modality can enhance perception when prosodic auditory cues

are degraded (Dohen & Lœvenbruck, in press). This finding emphasizes the necessity to consider the perception

of prosodic contrastive focus and speech prosody in general as multimodal. The perception fMRI study

presented here aims at analyzing the neural processing of prosodic focus from a multimodal point of view.

fMRI recordings were conducted for 12 native speakers of French at the ATR Brain Activity Imaging Center

(Japan). Subjects were scanned while they were performing a prosodic focus detection task for three modalities

(audio only A, visual only V and audiovisual AV). The stimuli were subject-verb-object (SVO) structured

sentences uttered in both normal and whispered speech. In some cases, S was under prosodic contrastive focus.

The speaker was a female native speaker of French. After seeing, hearing, or both seeing and hearing each

stimulus (depending on the modality), subjects were asked to indicate whether they had perceived a correction (i.e. contrastive focus) or not.
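
The factorial structure of the detection task (3 modalities × 2 speech modes × focus present or absent) can be sketched as follows; the number of items per cell and the randomization scheme are assumptions for illustration, not the actual stimulus lists or run structure:

# Minimal sketch of the trial structure of the focus-detection task:
# 3 presentation modalities x 2 speech modes x focus present/absent.
# The number of sentences per cell and the shuffling are assumptions.
import itertools
import random

modalities = ["audio only (A)", "visual only (V)", "audiovisual (AV)"]
speech_modes = ["normal", "whispered"]
focus_on_subject = [True, False]
sentences_per_cell = 4  # arbitrary choice for illustration

trials = [
    {"modality": m, "speech": s, "focus": f, "item": i}
    for m, s, f in itertools.product(modalities, speech_modes, focus_on_subject)
    for i in range(sentences_per_cell)
]
random.seed(0)
random.shuffle(trials)

# Each trial: present the SVO sentence, then record a yes/no focus judgment.
print(f"{len(trials)} trials, e.g.: {trials[0]}")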

Preliminary fMRI results

The analysis of the fMRI data is still underway and the results presented here are only preliminary. A preliminary

analysis was conducted for the focus vs. no focus contrast for the auditory and auditory-visual modalities. It

appeared that auditory alone (A) detection of prosodic focus involved the right associative auditory cortex and

fusiform gyrus (BA 19) as well as the left middle frontal gyrus (BA 6/46) and inferior temporal gyrus (BA 37)

and the cerebellum bilaterally. For the auditory-visual modality, we found bilateral activations of the middle and


inferior frontal gyri (BA 40), the middle temporal gyrus (BA 21), the inferior parietal lobule (BA 40) and the

fusiform gyrus (BA 37) as well as left activation of the supramarginal gyrus (BA 40).

It appears that, for all modalities, prosodic focus detection or processing involves bilateral activations of

associative brain areas. Auditory perception of prosodic focus (vs. no focus) appears to be essentially processed

in associative areas: right superior temporal gyrus and left inferior temporal gyrus (BA 37). Multimodal (AV)

perception of prosodic focus involves bilateral activations of temporal and parietal associative areas as well as

inferior and middle frontal regions. This illustrates the underlying necessity of associating various types of

information to detect focus (especially auditory and articulatory) and supports the assumption that the

articulatory and phonatory patterns produced by the speaker are integrated in perception.

Similar activations of a complex neural network in the processing of focus have been observed in two recent

ERP studies (Bornkessel et al., 2003 and Magne et al., 2005). The implication of the left parietal lobe in the

auditory-visual perception of prosodic pointing is interesting, in the light of our fMRI study of the production of

prosodic pointing, and of the studies on manual pointing described above.

Figure 3: Sagittal views of activations in the (prosodic pointing – baseline) contrast in the auditory (top) and audiovisual (bottom) modalities.

Perception of syntactic pointing

Syntax processing has been the object of interest of many research teams, since the 1960s, when it was noticed

that Broca’s aphasics, in addition to having problems with speech production, also present difficulties in speech

comprehension and specifically with syntactic processing (Caramazza & Zurif, 1976; Zurif, 1980). Since then,

an extensive body of lesion studies and neuroimaging experiments has been carried out on the brain

networks involved in syntactic processing. Although syntactic pointing (or syntactic extraction) itself has rarely

been the specific focus of these research works, interesting facts can be drawn from them.

It has been reported by many researchers that Broca’s aphasic patients show random performance on the

comprehension of cleft-object sentences such as “It was the girl who the boy pushed” (Caplan, 1985; Berndt &

Caramazza, 1980; Grodzinsky, 1995, 2000; Mauner, Fromkin & Cornell, 1993; Rigalleau et al. 2004). This

could mean that Broca’s region is involved in syntactic pointing. The fact that Broca’s aphasics show normal


performance on cleft-subject sentences such as “It was the boy who pushed the girl” has been the subject of

many debates (see e.g. Grodzinsky, 2000; Rigalleau et al., 2004). It could either mean that this type of syntactic

extraction does not require Broca’s region or that Broca’s aphasics use contextual information and linguistic

canonicity (typically, order rules such as Subject Verb Object) to recover the meaning of these sentences.

Neuroimaging studies provide further data on the cerebral regions recruited during syntactic pointing. Caplan et

al. (1999) show that plausibility judgments regarding auditorily presented cleft object sentences (e.g. “It was the

juice that the child enjoyed”) recruit Broca’s area, pars triangularis (BA 45), when contrasted to the processing

of cleft subject sentences (“It was the child who enjoyed the juice”). They interpret their findings in the line of

those of Stromswold et al. (1996) who reported an increase in Broca’s area activation (BA 45) when PET

activity associated with the processing of Subject Object sentences (“The child spilled the juice that stained the

rug”) was subtracted from that associated with more complex sentences (such as Object Subject sentences: “The

juice that the child spilled stained the rug”). Similar results were obtained by Just et al. (1996) who showed

increased activation in both Broca’s area (BA 44, 45) and Wernicke’s area (BA 22, 42, 21) as well as in the

homologous regions of the right hemisphere when subjects were presented with complex relative clauses.

Overall, these lesion studies and neuroimaging findings suggest that Broca’s region (BA 44, 45) can be

considered to be involved in complex syntactic processing when thematic-role monitoring is required, i.e. the

processing of ‘who-does-what-to-whom’. None of these studies show significant activation in the parietal lobe.

Syntactic pointing typically involves thematic role monitoring, or the tracking of ‘who-did-what-to-whom’, since

it operates a contrastive pointing at one specific agent or patient of the action. It can be assumed that it should

involve activation of the left inferior frontal region, with possible involvement of the left superior and middle temporal gyri.

Activation of the parietal lobe during syntactic pointing seems less likely. Further experiments should be carried

out to verify these assumptions.

CONCLUSION

We have shown that pointing is a critical device to explore the potential common neural networks at play in

gesture and language. The role of manual (or digital) pointing in language acquisition strongly suggests that

vocal pointing and pointing in other modalities may well be grounded in a common cerebral network.

We have argued from several neuroimaging studies on manual pointing that this modality seems to recruit a

network including the left posterior parietal and frontal cortices.

The results of a first study on covert vocal pointing, including prosodic pointing (i.e. focus) and syntactic

pointing (i.e. syntactic extraction) have been presented. It was shown that covert prosodic pointing does recruit

left parietal regions in addition to frontal regions, whereas syntactic pointing mainly involves frontal regions. The

involvement of the left parietal lobe was also evident in a second fMRI study of the perception of prosodic

pointing. Finally, a review of the literature suggests that the perception of syntactic pointing solely recruits

Broca’s region without additional parietal lobe involvement, just as the production of syntactic pointing does.

Altogether these findings are in line with our conjecture that linguistic on-line pointing (prosodic focus) is

grounded in the same cerebral network as gestural (manual) pointing. The more formalized pointing (syntactic

extraction) would be an evolved form of vocal pointing which would not necessitate parietal recruitment. More


analyses are underway to consolidate these results.

ACKNOWLEDGEMENTS

Many researchers were involved in the different works presented here. In alphabetical order, they are C. Abry,

M. Baciu, A. Callan, D. E. Callan, F. Carota, M.-A. Cathiard, H. Hill, L. Lamalle, P. Perrier, C. Pichat, C.

Savariaux, J.-L. Schwartz, C. Segebarth. We thank them all sincerely. The fMRI study on the perception of

prosodic focus (section 2.2) was supported by a research grant (Kagakukenkyu-hi Hojokin) from the Japan

Society for the Promotion of Science awarded to Marion Dohen.

REFERENCES

Astafiev S. V., Shulman G. L., Stanley C. M., Snyder A. Z., Van Essen D. C. & Corbetta M. (2003). Functional

Organization of Human Intraparietal and Frontal Cortex for Attending, Looking, and Pointing. Journal of

Neuroscience, 23, 4689-4699.

Bates, E., Camaioni, L. & Volterra V. (1975). The acquisition of performatives prior to speech. Merrill-Palmer

Quarterly. 21: 205-226.

Bates E., Benigni L., Bretherton I., Camaioni L., Volterra V. (1979). The emergence of symbols: cognition and

communication in infancy. New York: Academic Press.

Baum, S. R. & Pell M. D. (1999). The neural bases of prosody: Insights from lesion studies and neuroimaging.

Aphasiology, 13(8), 581-608.

Bellugi U., Poizner H. & Klima E. S. (1989). Language, modality and the brain. Trends in Neurosciences, 12

(10), 380-388.

Berthoud A.-C. (1990). Deixis, thématisation et détermination. La deixis. Colloque en Sorbonne. M.-A. Morel &

L. Danon-Boileau (dir.), PUF, 527-542.

Bornkessel I., Schlesewsky M., & Friederici A. D. (2003). Contextual information modulates initial processes of

syntactic integration: The role of inter- versus intrasentential predictions. Journal of Experimental Psychology:

Learning Memory and Cognition, 29, 269-298.

Bruner, J. (1975). The ontogenesis of speech acts. Journal of Child Language, 2, 1– 19.

Butcher C. & Goldin-Meadow S. (2000). Gesture and the transition from one- to two-word speech: when hand

and mouth come together. In D. McNeill (Ed.), Language and gesture. Cambridge: Cambridge University Press:

235-257.

Butterworth G. (2003). Pointing is the royal road to language. Pointing, where language, culture, and cognition

meet. Kita S. (ed.). Lawrence Erlbaum Associates Publishers, Mahwah, NJ, 9-34.

Capirci O., Iverson J. M., Pizzuto E., & Volterra V. (1996). Gestures and words during the transition to two-word

speech. Journal of Child Language, 23, 645-673.

Caplan D., Alpert N., Waters G. & Olivieri A. (2000). Activation of Broca’s area by syntactic processing under

conditions of concurrent articulation. Hum. Brain Mapp., 9, 65-71.

Caselli M.C. (1990). Communicative gestures and first words. In V. Volterra & C.J. Erting (Eds.), From gesture

to language in hearing and deaf children. New York: Springer-Verlag, pp. 56-67.


Corballis, M. C. (1991). The lopsided ape: Evolution of the generative mind. New York: Oxford University

Press.

Diessel H. & Tomasello M. (2000). The development of relative clauses in spontaneous child speech. Cognitive

Linguistics, 11, 131-151.

Dogil G., Ackermann H., Grodd W., Haider H., Kamp H., Mayer J., Riecker A., Wildgruber D. (2002). The

Speaking Brain. Journal of Neurolinguistics, 15 (1), 59-90.

Dohen, M. & Lœvenbruck, H. (2005). Audiovisual Production and Perception of Contrastive Focus in French: a

multispeaker study. Proceedings of Interspeech 2005, 2413-2416.

Dohen M., Lœvenbruck H. & Hill H. (2006). Visual correlates of prosodic contrastive focus in French:

description and inter-speaker variability. Proceedings of Speech Prosody 2006, Dresden, Germany, May 2-5

2006, Vol. I, 221-224.

Dohen M. & Lœvenbruck H. (in press). Interaction of audition and vision for the perception of prosodic

contrastive focus. Language and Speech.

Dronkers N. F. (1996). A new brain region for coordinating speech articulation. Nature, 384, 159-161.

Edwards M. G. & Humphreys G. W. (1999). Pointing and grasping in unilateral visual neglect: effect of on-line

visual feedback in grasping. Neuropsychologia, 37 (8), 959-973.

Fillmore C. J. (1997). Lectures on deixis. Stanford: Center for the Study of Language and Information.

Friederici A. D. (2002). Towards a neural basis of auditory sentence processing. Trends in Cognitive Sciences, 6

(2), 78-84.

Goldin-Meadow S. & Butcher C. (2003). Pointing toward two-word speech in young children. In Pointing:

Where language, culture, and cognition meet, S. Kita (Ed.), 85-107. Mahwah, NJ: Erlbaum Associates.

Hewes G. W. (1981). Pointing and language. In T. Myers, J. Laver & J. Anderson (eds.). The cognitive

representation of speech, 263-269. Amsterdam: North-Holland.

Hornby P. A. & Hass W. A. (1970). Use of contrastive stress by preschool children. Journal of Speech and

Hearing Research, 13, 395-399.

Jisa H. & Kern S. (1998). Relative clauses in French children's narrative texts. Journal of Child Language, 25,

623-652.

Just M. A., Carpenter P. A., Keller T. A., Eddy W. F. & Thulborn K. R. (1996). Brain activation modulated by

sentence comprehension. Science, 274, 114-116.

Kertzman C., Schwarz U., Zeffiro T. A. & Hallett M. (1997). The role of posterior parietal cortex in visually

guided reaching movements in humans. Exp. Brain Res., 114 (1), 170-183.

Lacquaniti F., Perani D., Guigon E., Bettinardi V., Carrozzo M., Grassi F., Rossetti Y. & Fazio F. (1997).

Visuomotor transformations for reaching to memorized targets: a PET study. Neuroimage, 5 (2), 129-146.

Lœvenbruck H., Baciu M., Segebarth C. & Abry C. (2005). The left inferior frontal gyrus under focus: an fMRI

study of the production of deixis via syntactic extraction and prosodic focus. Journal of Neurolinguistics, 18,

237-258.

Magne C., Astésano C., Lacheret-Dujour A., Morel M., Alter K. & Besson M. (2005). On-line processing of "pop-out" words in spoken French dialogues. Journal of Cognitive Neuroscience, 17(5), 740-756.

Mayer J., Wildgruber D., Riecker A., Dogil G., Ackermann H. & Grodd W. (2002). Prosody production and

comprehension: converging evidence from fMRI studies. Proceedings of Speech Prosody 2002, Aix-en-

Provence, France, 11-13 April 2002, 487-490.

Ménard L., Lœvenbruck H. & Savariaux C. (2006). Articulatory and acoustic correlates of contrastive focus in

French: a developmental study, in Harrington, J. & Tabain, M. (eds), Speech Production: Models, Phonetic

Processes and Techniques, Psychology Press : New York, 227-251.

Morford M. & Goldin-Meadow S. (1992). Comprehension and production of gesture in combination with speech

in one-word speakers. Journal of Child Language, 19 (3), 559-580.

Tomasello, M., Carpenter, M., & Liszkowski, U. (2007). A new look at infant pointing. Journal of Child

Language, 34, 1-20.

Tong Y., Gandour J., Talavage T., Wong D., Dzemidzic M., Xu Y., Li X. & Lowe M. (2005). Neural circuitry

underlying sentence-level linguistic prosody. NeuroImage, 28, 417-428.

Volterra V., Caselli M. C., Capirci O. & Pizzuto E. (2005). Gesture and the emergence and development of

language. In Beyond Nature-Nurture. Essays in Honor of Elizabeth Bates. M. Tomasello & D. Slobin (eds.),

Mahwah, NJ: Lawrence Erlbaum Associates. 3-40.

Wechsler, S. & Wayan, A. (1998). Syntactic ergativity in Balinese: an argument structure based theory. Natural

Language and Linguistic Theory 16, 387-441.

Wildgruber D., Hertrich I., Riecker A., Erb M., Anders S., Grodd W. & Ackermann H. (2004). Distinct Frontal

Regions Subserve Evaluation of Linguistic and Emotional Aspects of Speech Intonation. Cerebral Cortex,

14(12), 1384-1389.

Contact

Hélène Lœvenbruck

Marion Dohen

Coriandre Vilain

Département Parole et Cognition, GIPSA-lab, UMR CNRS 5216, Universités de Grenoble

961 rue de la Houille Blanche

Domaine universitaire - BP 46

F - 38402 Saint Martin d'Hères cedex

{Helene.Loevenbruck; Marion.Dohen; Coriandre.Vilain}@gipsa-lab.inpg.fr

Tel: 04 76 57 47 14

Fax: 04 76 82 63 84

