
MEG Investigations of Spectral and Temporal Resolution Properties of Human Auditory Cortex

CNL

MEG

Behavioral

Psychophysics

Genetics

Nicole M. Gage, PhD, Department of Cognitive Sciences

University of California, Irvine

•Neurobiology of language dysfunction in developmental disorders

Cognitive Neuroscience of Language Laboratory

CNL

•Speech perception and hemispheric asymmetries in speech processing

•Cortical language function and mapping in healthy adults and pre-surgical patients

MEG · Behavioral · Psychophysics · Genetics

•Auditory perception and cortical sound processing

•History of neuroscience: Carl Wernicke’s model for language and his theory of conceptual representation in cortex

Cognitive Neuroscience of Language Laboratory

CNL

Research Goals:

To understand neural mechanisms that underlie speech and language processing in healthy adults and typically developing children

To elucidate neural processes that underlie language dysfunction in developmental disorders, such as autism

To understand the correspondence between genetics, brain, and behavior in the language domain


Plan of the Talk

Studies of cortical sound processing in adults:
• Spectral resolution properties of auditory cortex
• Integrative processes underlying cortical evoked components
• Temporal resolution properties of auditory cortex

Cortical sound processing in typically developing children and children with autism

• Spectral resolution for speech and non-speech sounds
• Maturational changes in cortical evoked components
• Temporal resolution properties of auditory cortex

A case study: Child with autism and language impairment, a rare chromosome deletion on a region implicated in language, and extreme sensory reactivity

Temporal Resolution of Auditory Cortical Systems

The temporal resolution of the auditory system is exquisite, with neural systems that decode features in the acoustic signal capable of submillisecond resolution.

The high level of resolution in auditory cortical systems provides the capability for decoding fine-grained fluctuations in sounds, critical to the accurate perception of speech.

Magnetoencephalography (MEG)

• Millisecond temporal resolution
• Post-synaptic, dendritic current flow
• Synchronized response of populations of neurons
• Time-locked to a stimulus event
• Modeled by a single equivalent current dipole

Neuromagnetic Auditory Evoked Field

[Figure: iso-field contours showing weak and strong field regions over the recording surface; field sources and orientation of neurons; left and right M100 dipoles; SQUID detection device with superconducting coils in liquid helium.]

Basic Principles of MEG

[Figure: magnetic field pattern and equivalent current dipole model.]

MEG recording of neuromagnetic evoked fields is entirely non-invasive … and silent.

[Figure: 148-channel sensor array with sensor coil layout (nose at top, left and right marked); magnetic field contour map showing left and right hemisphere auditory cortical dipolar activity; M100 peak on a 0-200 ms time axis.]

A prototypical auditory evoked neuromagnetic field detected by MEG; 37 channels, with the y-axis giving evoked response magnitude in femtotesla (fT), are shown collapsed onto the same time axis.

Neuromagnetic Auditory Evoked Field

[Figure: evoked field waveform showing the M50 and M100 components; M100 dipole localization on a head schematic (nose at top, left and right hemispheres).]

M100 localizes to auditory cortex

MEG Investigations of Spectrotemporal Resolution Properties of Auditory Cortex in Adults

Frequency Dependence of the M100: In healthy adults, M100 latency is modulated by tone frequency, with longer latencies for low (100-200 Hz) as compared to high (1000-3000 Hz) frequency tones.

For sinusoidal tones, M100 latency is modulated as a function of tone frequency, with a 'fixed cost' of ~100 ms plus a period-dependent term roughly equal to 3 periods of the sinusoid (~30 ms for a 100 Hz tone, ~3 ms for a 1 kHz tone). The dynamic range of frequency-driven latency modulation in adults is ~25 ms.
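This latency relationship can be written as a simple function. A minimal sketch, assuming the fixed-cost-plus-3-periods description above (the function name is my own):

```python
def m100_latency_ms(freq_hz, fixed_cost_ms=100.0, n_periods=3.0):
    """Approximate M100 latency: a fixed cost plus ~3 stimulus periods.

    One period of a freq_hz sinusoid lasts 1000 / freq_hz milliseconds.
    """
    return fixed_cost_ms + n_periods * (1000.0 / freq_hz)

# 100 Hz tone: 100 + 3 * 10 ms -> 130 ms
# 1 kHz tone:  100 + 3 * 1 ms  -> 103 ms (a ~27 ms dynamic range)
```

The ~27 ms spread between the lowest and highest frequencies is consistent with the ~25 ms dynamic range reported for adults.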

M100 latency is modulated by tone frequency: sinusoidal tones 100-1000 Hz

[Figure: Tone Continuum Response Latency; M100 latency (~95-125 ms) as a function of tone frequency (100-1000 Hz); latency decreases as frequency increases, with a delta of 15-30 ms.]

Vowel Continuum varying in values for F1 but otherwise matched.

      /u/                       /a/
F0    100 Hz                    100 Hz
F1    250 Hz   (50 Hz steps)    750 Hz
F2    1000 Hz                   1000 Hz
F3    2500 Hz                   2500 Hz

Frequency of F1 is inversely related to vowel height, with lower F1 associated with high vowels (/u/) and higher F1 with low vowels (/a/).
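The continuum can be enumerated directly from the table above; a sketch (the dictionary field names are illustrative, not taken from the study's materials):

```python
# F1 steps from 250 Hz (/u/) to 750 Hz (/a/) in 50 Hz increments;
# F0, F2, and F3 are held constant across the continuum.
continuum = [
    {"F0": 100, "F1": f1, "F2": 1000, "F3": 2500}
    for f1 in range(250, 751, 50)
]
# 11 stimuli in total, spanning /u/-like through ambiguous to /a/-like tokens
```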

Investigative Question: Will M100 latency reflect the spectral center of gravity of the 3-formant vowels (a curvilinear function) or vowel identity (a stepped function)?

M100 Role in Speech Perception

Does the M100 reflect sensory (acoustic) or perceptual (representational) processes?

[Figure: M100 latency predictions across the vowel continuum (/u/ through ambiguous tokens to /a/; F1 200-750 Hz): a curvilinear function tracking F1 frequency vs. a stepped function tracking vowel identity.]

Vowel Continuum Response Latency

[Figure: observed group mean M100 latency (ms) by stimulus class.]

       /u/    /u/    /u/    amb    amb    amb    amb    amb    /a/    /a/    /a/
Mean  214.2  214.2  211.6  201.3  199.9  201.3  196.3  196.0  188.1  188.2  186.0

M100 latency reflects vowel identity as well as secondary spectral features in speech sounds

M100 amplitude reflects experience with speech sounds, with lower response amplitudes to novel speech-like tokens.

[Figure: Vowel Continuum Response Amplitude; normalized M100 amplitude (~0.5-1.3) by stimulus class.]

Neural mechanisms underlying the M100 component reflect phonetically-relevant features in speech


Roberts, Flagg, & Gage, 2004

[Figure: waveform of the stimulus "Boon" with time axes (0, 40, 100 ms) marking stimulus onset.]

The M100 component has a brief (~35 ms) and finite integrative window during which stimulus attributes are accumulated in the processes leading to the formation of the M100 peak.

Within this integrative window, it is stimulus presence -- and not peak or integrated energy -- that dominates the processes underlying the M100.

A Temporal Window of Integration for the M100

Gage & Roberts, 2000

The M100 is highly sensitive (within a brief integrative window) to transient features in consonants that cue distinctive feature contrasts in speech, such as manner of articulation, place of articulation, and voicing.

The selective activation of the M100 for some stimulus features (periodicity, formant transitions) and not others (absolute sound level) has led to its description as an intermediate processing stage between sensory (acoustic) and perceptual (representational) processing.

M100 Latency for Within-Speech Contrasts

[Figure: M100 latency (~90-120 ms) for stop vs. no-stop consonant contrasts, left and right hemispheres.]

Gage et al., 1998, Gage et al., 2002

M100 Latency for Place Contrasts

[Figure: normalized M100 latency (~0.95-1.03) for place-of-articulation contrasts /ba/, /da/, /ga/, left and right hemispheres.]


What is the Temporal Resolution for Resolving Brief Discontinuities in Sounds within the M100 Integrative Window?

Temporal Resolution of the Auditory M100: Gap Detection Experiments

Psychophysical investigations of auditory perceptual acuity frequently employ gap detection paradigms, in which a silent gap is inserted in a tone or noise burst and the minimum detectable gap is measured.

Gap detection thresholds correspond to speech perception acuity, indicating that similar or overlapping neural processes are employed both in detecting brief silent gaps and in resolving the fine structure of the speech signal.
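A common way to estimate the minimum detectable gap behaviorally is an adaptive staircase. The sketch below implements a generic 2-down/1-up procedure, which converges on the ~70.7% correct point; it is a standard psychophysical method, not necessarily the procedure used in the studies discussed here:

```python
def staircase(respond, start_gap_ms=20.0, step_ms=2.0, n_reversals=8):
    """2-down/1-up adaptive staircase: halve the gap trend after two
    consecutive detections, raise it after one miss; the threshold is
    estimated as the mean gap duration at the reversal points."""
    gap, streak, direction, reversals = start_gap_ms, 0, -1, []
    while len(reversals) < n_reversals:
        if respond(gap):              # listener reports hearing the gap
            streak += 1
            if streak == 2:           # two correct in a row -> make it harder
                streak = 0
                if direction == +1:   # trend flipped downward: a reversal
                    reversals.append(gap)
                direction = -1
                gap = max(gap - step_ms, 0.0)
        else:                         # miss -> make it easier
            streak = 0
            if direction == -1:       # trend flipped upward: a reversal
                reversals.append(gap)
            direction = +1
            gap += step_ms
    return sum(reversals) / len(reversals)
```

For example, a simulated listener who detects any gap of 5 ms or longer (`lambda g: g >= 5`) drives the staircase to oscillate around 5 ms.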

The investigation: we know that the M100 is sensitive to the presence of a stimulus within a brief and finite integrative window.

What are the lower limits of the resolution for brief discontinuities – or the absence of a stimulus – within the M100 window of integration?

Gage, Roberts, & Hickok, In Press 2005

[Figure: stimulus schematic; silent gaps of 0, 2, 5, 7, 10, 15, 20, 30, and 50 ms duration inserted in a tone; time axis in ms.]

How sensitive is the M100 to fine-grained temporal discontinuities in sounds?

We address this question by inserting brief gaps of silence at +10 ms post stimulus onset and measuring M100 modulation as a function of gap duration.

In a second condition, we inserted gaps at +40 ms post onset. Here we predicted that M100 would not be modulated by gaps of silence because the gaps were inserted outside the integrative window.

Temporal Resolution of the Auditory M100: Gap Detection Experiments

Temporal Window of Integration (~25-40 ms)
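The gap stimuli can be constructed by zeroing a segment of the waveform. A minimal sketch, assuming illustrative carrier frequency, duration, and sampling rate (not the study's actual parameters):

```python
import numpy as np

def tone_with_gap(gap_dur_ms, gap_onset_ms=10.0, freq_hz=1000.0,
                  dur_ms=400.0, fs=44100):
    """Sinusoidal tone with a silent gap inserted at gap_onset_ms post onset."""
    t = np.arange(int(fs * dur_ms / 1000.0)) / fs
    x = np.sin(2 * np.pi * freq_hz * t)
    start = int(fs * gap_onset_ms / 1000.0)
    stop = start + int(fs * gap_dur_ms / 1000.0)
    x[start:stop] = 0.0          # the silent gap
    return x

# one stimulus per gap duration used in the +10 ms condition
stimuli = {g: tone_with_gap(g) for g in (0, 2, 5, 7, 10, 15, 20)}
```

Shifting `gap_onset_ms` to 40.0 reproduces the second condition, where the gap falls outside the predicted integrative window.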

[Figure: M100 latency (~80-140 ms) as a function of gap duration (0-20 ms), left and right hemispheres; linear fits LH R² = 0.93, RH R² = 0.99.]

Results: M100 Latency is modulated by Gap Duration
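The R² values above summarize how well a straight line fits latency as a function of gap duration. The computation is standard least-squares regression; the sketch below uses hypothetical placeholder latencies for illustration, not the study's measured data:

```python
import numpy as np

gaps = np.array([0, 2, 5, 7, 10, 15, 20], dtype=float)          # ms
# hypothetical latencies for illustration only, not measured values
latency = np.array([95.0, 99.0, 104.0, 108.0, 114.0, 124.0, 132.0])

slope, intercept = np.polyfit(gaps, latency, 1)                  # linear fit
pred = slope * gaps + intercept
ss_res = np.sum((latency - pred) ** 2)                           # residual SS
ss_tot = np.sum((latency - np.mean(latency)) ** 2)               # total SS
r2 = 1.0 - ss_res / ss_tot                                       # goodness of fit
```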

[Figure: M100 amplitude (y-axis ~70-130) as a function of gap duration (0-20 ms), left and right hemispheres; linear fits LH R² = 0.93, RH R² = 0.93.]

Results: M100 Amplitude is modulated by Gap Duration

[Figure: M100 response (y-axis ~80-140) flat across gap durations of 0-20 ms when gaps are inserted at +40 ms post onset, left and right hemispheres.]

Results: M100 is not affected when gaps are inserted at +40 ms post onset

[Figure: evoked waveforms (0-200 ms) for the +10 ms and +40 ms gap-insertion conditions, with 0 ms, 10 ms, and 20 ms gaps overlaid.]

Conclusions

The integrative processes underlying M100 formation are highly sensitive to fine-grained discontinuities in sounds.

M100 sensitivity to the shortest gap (2 ms) corresponds to clinical and behavioral measures of auditory acuity, where detection thresholds have been reported for gaps of <5 ms.

These data provide further evidence for a short (~35 ms) and finite window of integration in the accumulation processes leading to the M100 peak.

Fine-grained Temporal Resolution of the M100

A Finite Temporal Window of Integration for the M100

The Time Course of Auditory Cortical Processing: Integrative Windows for the M50 and M100 Components Reflect Underlying Sensory and Perceptual Mechanisms

M50: ~10 ms TWI, primary auditory cortex; detection and habituation mechanisms
M100: ~35 ms TWI, secondary auditory cortex; feature discrimination processes

Gage, Hickok, & Roberts, 2005

The CNL Team …

