UNITARY vs MULTIPLE SEMANTICS: PET STUDIES OF WORD AND PICTURE PROCESSING P. Bright, H. Moss & L.K. Tyler ISRAEL DE LA FUENTE April 5, 2010
Transcript
Page 1

UNITARY vs MULTIPLE SEMANTICS: PET STUDIES OF WORD AND PICTURE

PROCESSING

P. Bright, H. Moss & L.K. Tyler

ISRAEL DE LA FUENTE

April 5, 2010

Page 2

Papanicolaou (1998)

Positron Emission Tomography (PET)

Page 3

Positron Emission Tomography (PET)

Page 4

Positron Emission Tomography (PET)

Page 5

PET: the nature imaged

Neurons utilize a variety of organic molecules and compounds to subsist and to function (i.e. to receive and transmit messages).

Since the overall activity of cells throughout the brain is not uniform, the distribution of molecules is not uniform either.

PET images capture the distribution of particular organic molecules and compounds throughout the brain, reflecting local variations in either metabolic or blood flow rates.

Page 6

PET: the electromagnetic signal

The constituent elements of these molecules and compounds (e.g. oxygen, carbon, nitrogen) are not radioactive and therefore emit no electromagnetic signals.

It is possible to introduce into the brain, through the blood (via intravenous injection), equivalent organic molecules that contain atoms that are isotopes of the natural ones and that emit positively charged particles (positrons).

Positrons interact with electrons and produce photons that can be detected over the head surface.

Page 7

PET: the electromagnetic signal (cont’d)

These compounds, charged with radioactive atoms (manufactured in particle accelerators), are called tracers or probes.

They allow us to “trace” the processes of neural signaling and metabolism, revealing their position and relative concentration as they shed their excess positrons.

The time required for all positrons to be emitted differs from one type of isotope to another and is characterized by the isotope’s half-life.
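The half-life law mentioned here is the standard exponential decay relation; a minimal sketch (the 2-minute figure for ¹⁵O comes from later in the talk, everything else is illustrative):

```python
def remaining_fraction(t, half_life):
    """Fraction of positron-emitting atoms still undecayed after time t
    (t and half_life in the same units)."""
    return 0.5 ** (t / half_life)

# 15O has a half-life of roughly 2 minutes, so after one half-life
# half of the tracer's positrons are still waiting to be emitted:
print(remaining_fraction(2.0, 2.0))  # 0.5
```

This is why isotope choice constrains how long a useful recording interval can be.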

Page 8

PET: formation of surface distribution

Positrons collide with one of the electrons in their environment and both are annihilated, i.e. they are converted into a pair of high-frequency photons.

Photons fly with equal speed in diametrically opposite directions and constitute the electromagnetic signals that form the surface distribution imaged.

Page 9

PET: recording apparatus

It consists of an array of scintillation detectors arranged around the head.

When a photon hits the crystal, visible light is emitted. This light interacts with a cathode plate and with a series of

dynodes resulting in a sufficiently amplified electrical pulse.

Page 10

PET: recording apparatus (cont’d)

A photon pair is likely to interact with a pair of detectors simultaneously if the origin of photons was mid-way between the two detectors.

The duration of the time-of-flight of each pair can be used to estimate the position of the tracer molecule inside the brain.

The relative degree of activation of the different areas can be inferred from the relative number of photons originating in each.

Page 11

PET: developing the functional image

Errors in estimating the true origin of photon pairs:

More photons of superficial origin are likely to be detected

Though coincident, the two recorded photons do not belong to the same pair

Photons originate in the same collision point but their course is deflected

Page 12

PET: developing the functional image (cont’d)

Solution: back projection

Given a coincident detection of two photons, their common origin is assumed to be, with equal probability, anywhere along the line between the detectors.
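The back-projection rule just described can be sketched in a few lines: every pixel on the line between the two detectors receives an equal increment, and summing over many coincidence events makes heavily crossed pixels stand out. This is a toy illustration on a square pixel grid (grid size and detector coordinates are assumptions for the example, not scanner parameters):

```python
def back_project(events, grid_size=64):
    """Accumulate back-projected coincidence lines on a grid_size x grid_size
    image. Each event is a pair of detector positions ((x0, y0), (x1, y1))
    in pixel coordinates; every pixel on the line between them gets an
    equal share of that event's probability."""
    image = [[0.0] * grid_size for _ in range(grid_size)]
    for (x0, y0), (x1, y1) in events:
        samples = grid_size * 2  # sample densely enough to hit every pixel
        pixels = set()
        for i in range(samples):
            t = i / (samples - 1)
            x = min(max(round(x0 + t * (x1 - x0)), 0), grid_size - 1)
            y = min(max(round(y0 + t * (y1 - y0)), 0), grid_size - 1)
            pixels.add((x, y))
        for x, y in pixels:
            image[y][x] += 1.0
    return image

# Two perpendicular coincidence lines: only their crossing pixel sums to 2.
image = back_project([((0, 32), (63, 32)), ((32, 0), (32, 63))])
print(image[32][32])  # 2.0
```

Pixels crossed by many lines accumulate larger sums, which is exactly the probability summation the next slide describes.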

Page 13

PET: developing the functional image (cont’d)

We add the probabilities of a photon-emission origin at each pixel – some pixels have a much higher probability of containing the origin of the emissions than others

Applying the same procedure to all trajectories of coincident photons, we transform the sums of probabilities in each pixel into colors, each representing a different range of values of these sums

Page 14

PET: developing the functional image (cont’d)

Each different shade of color represents a different degree of activation of the underlying brain structures

To identify which structures are more or less activated, it is necessary to superimpose these functional images on structural ones
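The mapping from probability sums to color shades amounts to quantizing pixel values into a small number of ranges. A toy version (the number of bins is an arbitrary choice for illustration, not a property of real PET software):

```python
def to_color_bins(image, n_bins=8):
    """Quantize an image of probability sums into n_bins discrete levels;
    each level would be rendered as one color in the functional image."""
    flat = [v for row in image for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat image
    return [[min(int((v - lo) / span * n_bins), n_bins - 1) for v in row]
            for row in image]

# Four pixels quantized into four levels:
print(to_color_bins([[0.0, 4.0], [8.0, 8.0]], n_bins=4))  # [[0, 2], [3, 3]]
```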

Page 15

PET: fidelity of the image

It is easier to represent the type of activation intended when the radioisotopes used are part of particular neurotransmitter molecules that bind to receptors of specific types of cells.

Page 16

PET: fidelity of the image (cont’d)

Spatial resolution depends on:

Size of the detectors (the wider the detector, the poorer the resolution)

Energy of the emitted positrons (depending on the radioisotope, the average distance between the emission point and the annihilation point can vary from a fraction of a mm to more than one mm)

Number of detectors (the greater the number, the more adjacent activated areas that can be resolved and the greater the volume of the brain that can be monitored)

Page 17

PET: fidelity of the image (cont’d)

Temporal resolution depends on:

The requisite number of photon-pair counts for establishing a surface distribution

Each isotope has a specific half-life that determines the length of the recording interval and, therefore, the temporal resolution of the image

For example, ¹⁵O allows construction of images with the greatest temporal resolution, since its half-life is on the order of 2 minutes

Pawel: type of radioisotope and temporal resolution Albert: PET temporal resolution and relation to stimuli presentation

Page 18

Bright, Moss & Tyler (2004)

Unitary vs multiple semantics: PET studies of word and picture

processing

Page 19

Motivation for the study

Conceptual knowledge underpins language comprehension and production, reasoning, and object recognition

Are all these processes served by a unitary system of conceptual representations?

Unitary semantics account: all processing routes converge on a single set of conceptual representations common to all modalities

Are there separate representations of the same concept for different modalities of input or output?

Modality-specific account: there are distinct conceptual representations for the verbal and visual input modalities

Page 20

Motivation for the study

Investigate whether the conceptual knowledge accessed by pictures and words form two neurologically distinct components of the semantic system (modality-specific semantics) or whether both stimulus types converge on the same set of representations (unitary semantics)

Page 21

Modality-specific semantics account

Paivio’s Dual Coding Theory (Paivio, 1971): “One (the image system) is specialised for dealing with perceptual information concerning non-verbal objects and events. The other (the verbal system) is specialised for dealing with linguistic events”

The two systems are assumed to be functionally and structurally distinct, although interconnected by referential relations between representations in the two.

Is this dissociation located at the level of conceptual representation, or within the pre-semantic representations or processes necessary for access to the conceptual system?

Page 22

Unitary semantics account

Caramazza et al.’s (1990) Organized Unitary Conceptual Hypothesis (OUCH): a common conceptual system is recruited during the processing of an item, irrespective of modality.

While the semantic representation of a visually presented object and of its verbal description is the same, the procedure for accessing that representation differs:

A visually (or aurally) presented word will activate the lexicon, which will in turn activate the semantic properties that define its meaning

A visually presented object will directly access those same semantic properties

Page 23

Previous neuroimaging studies

Previous studies found no differences between word and picture processing, favoring a unitary conceptual system (e.g. Wise et al., 1996)

The inferior frontal gyrus (IFG) has been found to be consistently activated, irrespective of the modality of input (e.g. Demb et al., 1995)

Distinction between modality-specific activation of conceptual knowledge and modality-specific activation associated with earlier stages of input processing. Two posterior regions are generally associated with the latter:

The lateral occipital complex (pictures of objects with clear shape interpretations)

The middle portion of the left fusiform gyrus (BA 37) (visually presented words)

Page 24

Research questions & predictions

Are there distinct (separable) neural regions that underlie the semantic representation of objects and words?

Two competing predictions:

If conceptual representations for words and pictures are separable and non-overlapping, we would expect them to involve distinct semantic processing regions

If the unitary conceptual system position is correct, there should be extensive co-activation for word and picture categorization in the more anterior, semantic regions, although there may be differential activations in posterior areas related to modality-specific pre-semantic effects

Page 25

Materials and methods

Meta-analysis of four PET studies (three semantic categorization tasks and one lexical decision task), two using pictures and two using words

Marianna: why not visual vs auditory stimuli?

Methodologies and procedures, stimulus sets, and scanner settings were held constant across all tasks

Subjects: 38 in total (mean age=30; range=21-48; 37M, 1F)

Lynn: age range and language experience

All were right-handed, native English speakers, without any known history of neurological or psychiatric illness

No subject participated in more than one task

Lucy: session length and individual variation

Page 26

Words 1: Lexical decision

12 participants

Lexical decision task on visually presented words

10 scans acquired for each subject (2 for each semantic condition – animals, fruits, vehicles & tools – and 2 baseline scans)

Baseline condition: find the x in orthographically illegal letter strings

Marianna: baseline and main task unrelated

Words were matched on familiarity, concreteness, neighborhood size, written word frequency, number of letters, and number of syllables

Baseline stimuli matched the non-lexical and non-semantic properties of the test stimuli

Page 27

Words 1: Lexical decision

In the first 45s, 10 words from a single category appeared in a pseudorandom order, intermixed with 5 non-words

Each item was presented for 500ms, with 2500ms between successive items

The same words were repeated in a different order for the remaining 45s, intermixed with 5 different non-words

Subjects responded with a right-button (word/x) or a left-button (non-word/no x) press

Albert: does pressing a button affect the PET scan?

Page 28

Words 2: Semantic categorization

8 participants

Semantic categorization task on visually presented words

Same 4 semantic categories – 2 semantic conditions (living and non-living things)

12 scans acquired (4 for each of the 2 conditions and 4 baseline)

A trial comprised 3 lower-case cue-words (200ms each), followed by a target in capital letters (200ms)

Words were matched for frequency, familiarity and letter length

Baseline condition: 3 variable-length strings of the same letter and a target string of the same capitalized letter or a different capitalized letter

Page 29

Words 2: Semantic categorization

Inter-stimulus duration was 400ms, with 1500ms between successive trials

12 trials (cue triplets plus target) were presented for the initial 45s of the scan followed by a blank screen for 45s

Participants had to indicate whether the target was a member of the set defined by the three cue items or not by pressing the right (SAME) or left (DIFFERENT) button

96 semantic categorization trials and 48 baseline trials

Page 30

Pictures 1: Semantic categorization

9 participants

Same semantic categorization task with pictures as stimuli

10 scans acquired for each subject (2 for each of the four conditions and 2 baseline scans)

Pictures were matched for familiarity, visual complexity and semantic relatedness

Baseline condition: simple 2D shapes varying in form and color

Page 31

Pictures 1: Semantic categorization

Trials consisted of 3 pictures presented sequentially for 400ms each, followed by a framed target picture

Inter-stimulus duration was 200ms, with 2217ms between trials

Participants had to indicate if the target was a member of the set defined by the three cue items or not by pressing the left (SAME) or right (DIFFERENT) button

In each condition, 12 trials were presented during the initial 53s of the scan, followed by a blank screen for 37s

96 test trials and 48 baseline trials

Page 32

Pictures 2: Semantic categorization

9 participants

Semantic categorization task on pictures

Experimental design, test stimuli, and timings were identical to those employed in the Pictures 1 task

New baseline condition: meaningless simple shapes made up of combinations of small squares which varied in number and color

Page 33

Picture tasks 1 & 2: items

Page 34

Data collection & analysis

GE Advance PET Scanner: 18 rings of crystals – 35 image planes (4.25mm thick)

Axial field-of-view is 15.3cm – whole-brain acquisition

Participants received a bolus of 300 MBq of H2¹⁵O before each scan (total radiation exposure of 4.2mSv)

Emission data were acquired with the septa retracted (3D mode) and reconstructed using the PROMIS algorithm

Voxel sizes were 2.34, 2.34 and 4.25mm

Stimulus presentation and behavioral data collection used DMDX software

Structural and functional images were created according to the MNI mean-brain parameters

Page 35

Data collection & analysis (cont’d)

Conjunction analysis calculates main effects by summing simple main effects and excluding regions where there are significant differences between the main effects

Masking procedures were conducted to distinguish between common and specific activated clusters when comparing the conditions of interest (words and pictures)

Page 36

Results: semantic activations

Page 37

Results: regional specificity of word and picture processing

Page 38

Results: regional specificity of word and picture processing (cont’d)

Page 39

Results: regional specificity of word and picture processing (cont’d)

Page 40

Discussion

Both words and pictures robustly activated a common region of the left fusiform gyrus (BA 36/37), left inferior frontal gyrus (BA 47), the most anterior aspect of the left temporal pole (BA 38) and the right inferior frontal gyrus.

There was regionally extensive recruitment of anterior temporal lobes during semantic judgments of words but not pictures.

Picture-specific activations were primarily restricted to occipital and posterior temporal areas, bilaterally, including inferior occipital gyrus (BA 19), fusiform gyrus (BA 19/37) and lingual gyrus (BA 18).

Page 41

Discussion: common effects for words and pictures

Anterior fusiform (BA 37) involvement in semantic processing is not differentiated by form of input; both verbal and visual input routes seem to converge on these anterior temporal sites

Activation in posterior regions (BA 19) seems not to differentiate meaningful from non-meaningful stimuli, whereas anterior and medial regions (BA 37/20/36) become significantly more active during the processing of meaningful stimuli

Common activation of the inferior frontal gyrus (BA 45/47)

Page 42

Discussion: word-specific effects

More extensive anterior temporal activation for words than for pictures, involving a broad region of the bilateral temporal poles (BA 38)

Anterior temporal regions may be involved in processing detailed aspects of object attributes (i.e. when fine-grained discrimination among similar objects is required but not when discriminating among semantically meaningless stimuli)

Activation in the right temporal pole (semantic system bilaterally represented)

These regions (temporal poles) may be involved in semantic representation in both modalities but their engagement might be determined by the level of processing required for the task

Page 43

Discussion: picture-specific effects

Picture-specific recruitment was restricted to more posterior regions, including inferior occipital gyrus (BA 19), lingual gyrus (BA 18) and fusiform gyrus (BA 37)

Activation of more posterior aspects of the anterior fusiform gyrus

Functional differences throughout the posterior-anterior extent of this region, with modality-specific (pictures) recruitment of posterior BA 37

Anterior regions of the lateral occipital cortex (posterior and mid fusiform gyrus) are activated more strongly by whole, intact objects than by scrambled objects – no distinction between familiar and novel shapes

Picture-specific activations in this area reflect an intermediate or pre-semantic stage of visual processing

Page 44

Conclusion

Critical role of the anterior extent of the fusiform gyrus in the representation of conceptual knowledge, irrespective of the modality of visual input

This suggests that it holds unitary semantic representations formed via converging inputs from more posterior areas

Left parahippocampal and perirhinal cortex are recruited when a semantic level of representation is required

Integration of sensory information into semantically meaningful polymodal feature combinations

Common recruitment of the left inferior frontal gyrus

Executive or working-memory role

Page 45

Conclusions (cont’d)

Word-specific activations in anterior temporal cortex

Picture-specific activations in occipitotemporal cortex

This activation might relate to intermediate, non-semantic levels of representation

Pawel: couldn’t this be because the semantic information is input-specific?

Temporal poles are an important part of a distributed system subserving semantic representations, but their involvement may depend on the level of specificity of object-presentation

Page 46

THANK YOU!

