Speech Recognition Models of the Interdependence Among Prosody, Syntax, and Segmental Acoustics

Mark Hasegawa-Johnson ([email protected])
Jennifer Cole, Chilin Shih, Ken Chen, Aaron Cohen, Sandra Chavarria, Heejin Kim, Taejin Yoon, Sarah Borys, and Jeung-Yoon Choi
Transcript
Page 1: Speech Recognition Models of the Interdependence Among Prosody, Syntax, and Segmental Acoustics

Mark Hasegawa-Johnson
[email protected]

Jennifer Cole, Chilin Shih, Ken Chen, Aaron Cohen, Sandra Chavarria, Heejin Kim, Taejin Yoon, Sarah Borys, and Jeung-Yoon Choi

Page 2: Outline

• Prosodic tags as "hidden mode" variables
• Acoustic models
  – Factored prosody-dependent allophones
  – Knowledge-based factoring: pitch & duration
  – Allophone clustering: spectral envelope
• Language models
  – Factored syntactic-prosodic N-gram
  – Syntactic correlates of prosody

Page 3: A Bayesian network view of a speech utterance

X: acoustic-phonetic observations
Y: acoustic-prosodic observations
Q: phonemes
H: phone-level prosodic tags
W: words
P: word-level prosodic tags
S: syntax
M: message

[Diagram: M and S generate (W, P) at the word level; (W, P) generate (Q, H) at the segmental level; (Q, H) generate (Y, X) at the frame level.]

Page 4: Prosody modeled in our system

• Two binary tag variables (Toneless ToBI):
  – the pitch accent (*)
  – the intonational phrase boundary (%)
• Both are highly correlated with acoustics and syntax.
  – Pitch accents: pitch excursions (H*, L*); encode syntactic information (e.g., the content/function word distinction).
  – IP boundaries: preboundary lengthening, boundary tones, pauses, etc.; highly correlated with syntactic phrase boundaries.

Page 5: Prosody dependent speech recognition framework

• Advantages:
  – A natural extension of prosody-independent ASR (PI-ASR)
  – Allows convenient integration of useful linguistic knowledge at different levels
  – Flexible

[Diagram: M → S → (W, P) → (Q, H) → (X, Y).]

$$\hat{W} = \arg\max_{W,P}\; p(O \mid Q, H)\; p(Q, H \mid W, P)\; p(W, P)$$

Page 6: Prosodic tags as "hidden speaking mode" variables

(inspired by Ostendorf et al., 1996, and Stolcke et al., 1999)

$$\hat{W} = \arg\max_{W} \max_{Q,A,B,S,P}\; p(X, Y \mid Q, A, B)\; p(Q, A, B \mid W, S, P)\; p(W, S, P)$$

A decoding sketch follows the table below.

Level               Standard variable   Hidden speaking mode              Gloss
Word                W = [w1, …, wM]     P = [p1, …, pM], S = [s1, …, sM]  Prosodic tags, syntactic tags
Allophone           Q = [q1, …, qL]     A = [a1, …, aL], B = [b1, …, bL]  Accented phone, boundary phone
Acoustic features   X = [x1, …, xT]     Y = [y1, …, yT]                   F0 observations
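
A minimal log-domain sketch of this objective. It enumerates hypotheses rather than searching them, and the three scoring callables (acoustic_lp, pron_lp, lm_lp) are hypothetical stand-ins for the acoustic, pronunciation, and language models on these slides, not the system's actual decoder.

```python
import math

def decode(X, Y, hypotheses, acoustic_lp, pron_lp, lm_lp):
    """Pick the word string W maximizing, over candidate expansions
    (Q, A, B, S, P) of each hypothesis:
    log p(X,Y|Q,A,B) + log p(Q,A,B|W,S,P) + log p(W,S,P)."""
    best_W, best_score = None, -math.inf
    for W, expansions in hypotheses:          # expansions: (Q, A, B, S, P) tuples
        for Q, A, B, S, P in expansions:
            score = (acoustic_lp(X, Y, Q, A, B)   # log p(X,Y | Q,A,B)
                     + pron_lp(Q, A, B, W, S, P)  # log p(Q,A,B | W,S,P)
                     + lm_lp(W, S, P))            # log p(W,S,P)
            if score > best_score:
                best_W, best_score = W, score
    return best_W, best_score
```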

Page 7: Prosody dependent language modeling

p(w_i | w_{i-1})  =>  p(w_i, p_i | w_{i-1}, p_{i-1})

Prosodically tagged words: cats* climb trees*%

Prosody and the word string are jointly modeled (see the toy sketch below):

p( trees*% | cats* climb )

[Diagram: the joint model chains tagged-word pairs (w_{i-1}, p_{i-1}) → (w_i, p_i).]
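
To make the tagged-token idea concrete, the toy sketch below folds the prosodic tags into the word string and estimates a plain maximum-likelihood bigram over tagged tokens; the counts and lack of smoothing are illustrative, not the estimator used in the slides.

```python
from collections import Counter

def train_bigram(tagged_sentences):
    """MLE bigram over prosodically tagged tokens like 'trees*%'."""
    unigrams, bigrams = Counter(), Counter()
    for sent in tagged_sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens[:-1])                    # context counts
        bigrams.update(zip(tokens[:-1], tokens[1:]))    # (prev, next) pairs
    def prob(w, prev):                                  # p(w_i, p_i | w_{i-1}, p_{i-1})
        return bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0
    return prob

p = train_bigram([["cats*", "climb", "trees*%"]])
print(p("trees*%", "climb"))    # 1.0 in this one-sentence toy corpus
```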

Page 8: Prosody dependent pronunciation modeling

p(Q_i | w_i)  =>  p(Q_i, H_i | w_i, p_i)

1. A phrasal pitch accent affects phones in the lexically stressed syllable:
   above    ax b ah v
   above*   ax b* ah* v*

2. An IP boundary affects phones in the phrase-final rhyme:
   above%   ax b ah% v%
   above*%  ax b* ah*% v*%

Page 9: Prosody dependent acoustic modeling

• Prosody-dependent allophone models, Λ(q) => Λ(q, h):
  – Acoustic-phonetic observation PDF: b(X|q) => b(X|q, h)
  – Duration PMF: d(q) => d(q, h)
  – Acoustic-prosodic observation PDF: f(Y|q, h)

Page 10: How Prosody Improves Word Recognition

• Discriminant function, prosody-independent:
  – W_T = true word sequence
  – W_i = competing false word sequence
  – O = sequence of acoustic spectra

$$\Phi(W_T; O) = E_{W_T,O}\{\log p(W_T \mid O)\} = -E_{W_T,O}\Big\{\log \sum_i \epsilon_i\Big\}, \qquad \epsilon_i = \frac{p(O \mid W_i)\, p(W_i)}{p(O \mid W_T)\, p(W_T)}$$

Page 11: How Prosody Improves Word Recognition

• Discriminant function, prosody-dependent:
  – P_T = true prosody
  – P_i = optimum prosody for false word sequence W_i

$$\Phi_P(W_T; O) = E_{W_T,O}\{\log p'(W_T \mid O)\} = -E_{W_T,O}\Big\{\log \sum_i \epsilon_i'\Big\}, \qquad \epsilon_i' = \frac{p(O \mid W_i, P_i)\, p(W_i, P_i)}{p(O \mid W_T, P_T)\, p(W_T, P_T)}$$

Page 12: How Prosody Improves Word Recognition

• Acoustically likely prosody must be…
• unlikely to co-occur with…
• an acoustically likely incorrect word string…
• most of the time.

$$\Phi_P(W_T; O) > \Phi(W_T; O) \quad \text{iff} \quad \sum_i \frac{p(O \mid W_i, P_i)\, p(W_i, P_i)}{p(O \mid W_T, P_T)\, p(W_T, P_T)} \;<\; \sum_i \frac{p(O \mid W_i)\, p(W_i)}{p(O \mid W_T)\, p(W_T)}$$

Page 13: The Corpus

• The Boston University Radio News Corpus:
  – Stories read by 7 professional radio announcers
  – 5k-word vocabulary
  – 25k word tokens
  – 3 hours of clean speech
  – No disfluencies
  – Expressive yet well-behaved prosody
• 85% of utterances were selected randomly for training, 5% for development testing, and the remaining 10% for testing.
• Small by ASR standards, but the largest ToBI-transcribed English corpus.

Page 14: "Toneless ToBI" Prosodic Transcription

• Tagged transcription: Wanted*% chief* justice* of the Massachusetts* supreme court*%
  – % denotes an intonational phrase boundary
  – * denotes a pitch-accented word
• Lexicon (see the sketch after this list):
  – Each word has four entries: wanted, wanted*, wanted%, wanted*%
  – The IP boundary applies to phones in the rhyme of the final syllable: wanted%  w aa n t ax% d%
  – Accent applies to phones in the lexically stressed syllable: wanted*  w* aa* n* t ax d
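
A minimal sketch of this four-way lexicon expansion. It assumes the stressed-syllable and final-rhyme phone spans are already known from the dictionary's stress and syllabification marks; this toy function takes them as indices rather than computing syllabification itself.

```python
def expand(word, phones, stress_span, rhyme_span):
    """Generate the four lexicon entries: word, word*, word%, word*%."""
    s0, s1 = stress_span    # phones in the lexically stressed syllable
    r0, r1 = rhyme_span     # phones in the rhyme of the final syllable
    star = [p + "*" if s0 <= i < s1 else p for i, p in enumerate(phones)]
    pct  = [p + "%" if r0 <= i < r1 else p for i, p in enumerate(phones)]
    both = [p + ("*" if s0 <= i < s1 else "") + ("%" if r0 <= i < r1 else "")
            for i, p in enumerate(phones)]
    return {word: phones, word + "*": star, word + "%": pct, word + "*%": both}

# wanted -> w aa n t ax d; stress on "w aa n", final rhyme "ax d"
for entry, pron in expand("wanted", "w aa n t ax d".split(), (0, 3), (4, 6)).items():
    print(entry, " ".join(pron))   # e.g. wanted* -> w* aa* n* t ax d
```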

Page 15: The problem: Data sparsity

• Boston Radio News corpus:
  – 7 talkers; professional radio announcers
  – 24,944 words prosodically transcribed
  – Insufficient data to train triphones:
    • Hierarchically clustered states: HERest fails to converge (insufficient data).
    • Fixed number of triphones (3 per monophone): WER increases (monophone: 25.1%, triphone: 36.2%).
• Switchboard:
  – Many talkers; conversational telephone speech
  – About 1700 words with full prosodic transcription
  – Insufficient to train an HMM, but sufficient for testing

Page 16: Proposed solution: Factored models

1. Factored acoustic model (see the sketch after this list):

$$p(X, Y \mid Q, A, B) = \prod_i p(d_i \mid q_i, b_i) \prod_t p(x_t \mid q_i)\, p(y_t \mid q_i, a_i)$$

   – prosody-dependent allophone q_i
   – pitch accent type a_i ∈ {Accented, Unaccented}
   – intonational phrase position b_i ∈ {Final, Nonfinal}

2. Factored language model:

$$p(W, P, S) = p(W)\, p(S \mid W)\, p(P \mid S)$$
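
A minimal log-domain sketch of the factored acoustic likelihood; dur_lp, spec_lp, and pitch_lp are hypothetical stand-ins for the duration PMF p(d|q,b), the spectral PDF p(x|q), and the pitch PDF p(y|q,a).

```python
def factored_acoustic_lp(X, Y, segments, dur_lp, spec_lp, pitch_lp):
    """segments: list of (q, a, b, start, end), each allophone spanning
    frames [start, end). Returns log p(X, Y | Q, A, B) under the
    factored model above."""
    total = 0.0
    for q, a, b, start, end in segments:
        total += dur_lp(end - start, q, b)     # log p(d_i | q_i, b_i)
        for t in range(start, end):
            total += spec_lp(X[t], q)          # log p(x_t | q_i): prosody-independent spectrum
            total += pitch_lp(Y[t], q, a)      # log p(y_t | q_i, a_i): accent-dependent pitch
    return total
```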

Page 17: Acoustic factor #1: Are the MFCCs Prosody-Dependent?

[Diagram: two allophone decision trees for /N/. Clustered triphones split on "right context a vowel?" and "left context a stop?", yielding N-VOW, STOP+N, and N, with WER 36.2%. Prosody-dependent allophones split on "right context a vowel?" and "pitch accent?", yielding N-VOW, N*, and N, with WER 25.4%.]

BUT: the WER of the baseline monophone system is 25.1%.

Page 18: Prosody-dependent allophones: ASR clustering matches EPG

ASR-clustered consonant allophones by phrase position:

              Phrase initial   Phrase medial   Phrase final
  Accented    Class 1                          Class 3
  Unaccented  Class 2

Fougeron & Keating (1997) EPG classes: 1 = strengthened, 2 = lengthened, 3 = neutral.

Page 19: Acoustic factor #2: Pitch

[Diagram: a two-stream dynamic Bayesian network. Phoneme states Q(t-1), Q(t), Q(t+1) generate the MFCC stream; accent variables A(t-1), A(t), A(t+1), together with the phoneme states, generate a transformed pitch stream G(F0) computed from the raw pitch values F0(t-2), …, F0(t+2).]

Page 20: Acoustic-prosodic observations

Y(t) = ANN(log f0(t-5), …, log f0(t+5))
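
For illustration only, a tiny MLP over the 11-frame log-f0 window named above; the layer sizes and random weights are placeholders, since the slides specify only the input window, not the trained network.

```python
import numpy as np

# Illustrative placeholder weights, not the trained network.
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((16, 11)), np.zeros(16)   # hidden layer
W2, b2 = 0.1 * rng.standard_normal((1, 16)), np.zeros(1)     # output layer

def ann_pitch_feature(log_f0, t):
    """Y(t) = ANN(log f0(t-5), ..., log f0(t+5)).
    Assumes t has 5 frames of context on each side."""
    window = np.asarray(log_f0[t - 5 : t + 6])   # 11-frame window
    hidden = np.tanh(W1 @ window + b1)
    return (W2 @ hidden + b2).item()
```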

Page 21: Acoustic Factor #3: Duration

• Normalized phoneme duration is highly correlated with phrase position.
• Solution: a semi-Markov model (an HMM with explicit duration distributions, EDHMM), as in the sketch below:

$$p(x(1), \ldots, x(T) \mid q_1, \ldots, q_N) = \sum_{d} p(d_1 \mid q_1) \cdots p(d_N \mid q_N)\; p(x(1) \ldots x(d_1) \mid q_1)\; p(x(d_1{+}1) \ldots x(d_1{+}d_2) \mid q_2) \cdots$$

where the sum runs over duration sequences (d_1, …, d_N) with d_1 + ⋯ + d_N = T.
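
A minimal sketch of this semi-Markov likelihood for a fixed state sequence. The exhaustive recursion over duration sequences is exponential and is shown only to make the formula concrete; real EDHMM decoders use dynamic programming. dur_p and seg_p are hypothetical stand-ins for p(d|q) and the per-segment observation model.

```python
def edhmm_likelihood(x, states, dur_p, seg_p, max_dur):
    """x: frame sequence; states: q_1..q_N; dur_p(d, q) = p(d | q);
    seg_p(frames, q) = p(x-segment | q). Sums over all duration
    sequences (d_1, ..., d_N) with d_1 + ... + d_N = len(x)."""
    T = len(x)
    def rec(t, n):                       # frames [0, t) already explained by states[:n]
        if n == len(states):
            return 1.0 if t == T else 0.0
        q = states[n]
        return sum(dur_p(d, q) * seg_p(x[t:t + d], q) * rec(t + d, n + 1)
                   for d in range(1, min(max_dur, T - t) + 1))
    return rec(0, 0)
```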

Page 22: Phrase-final vs. Non-final Durations learned by the EDHMM

[Plots: duration distributions for /AA/ phrase-medial and phrase-final, and for /CH/ phrase-medial and phrase-final.]

Page 23: A factored language model

Prosodically tagged words: cats* climb trees*%

1. Unfactored: prosody and the word string are jointly modeled:
   p( trees*% | cats* climb )

2. Factored (see the sketch below):
   • Prosody depends on syntax: p( w*% | N V N, w* w )
   • Syntax depends on words: p( N V N | cats climb trees )

[Diagram: the unfactored model chains tagged-word pairs (p_{i-1}, w_{i-1}) → (p_i, w_i); the factored model chains words w_{i-1} → w_i, syntactic tags s_{i-1} → s_i, and prosodic tags p_{i-1} → p_i, with prosody conditioned on syntax.]
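
A minimal log-domain sketch of the factored combination p(W) p(S|W) p(P|S). The three callables are hypothetical stand-ins for the slides' trained models, and scoring each prosodic tag from its own syntactic tag is a simplifying assumption; the slides condition on richer context (e.g., neighboring tags).

```python
def factored_lm_lp(words, tags, prosody, word_lp, tag_lp, prosody_lp):
    """log p(W, P, S) = log p(W) + log p(S | W) + log p(P | S)."""
    total = word_lp(words) + tag_lp(tags, words)   # log p(W) + log p(S | W)
    for i in range(len(prosody)):
        total += prosody_lp(prosody[i], tags[i])   # log p(p_i | s_i)
    return total
```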

Page 24: Result: Syntactic mediation of prosody reduces perplexity and WER

• Factored model:
  – Reduces perplexity by 35%
  – Reduces WER by 4%
• Syntactic tags:
  – For pitch accent: POS is sufficient
  – For IP boundary: parse information is useful if available

Page 25: Syntactic factors: POS, syntactic phrase boundary depth

[Bar chart: accent prediction error and boundary prediction error (%), comparing chance, POS-only, and POS + phrase-structure predictors.]

Page 26: Results: Word Error Rate (Radio News Corpus)

[Bar chart: word error rate (%), on a 20-25% scale, for four systems: baseline, prosody-dependent acoustic model, prosody-dependent language model, and both.]

Page 27: Results: Pitch Accent Error Rate

[Bar chart: recognizer pitch accent error rate (%) vs. chance, on a 0-45% scale, for four conditions: Radio News with words unknown, words recognized, and words known, and Switchboard with words known.]

Page 28: Results: Intonational Phrase Boundary Error Rate

[Bar chart: recognizer intonational phrase boundary error rate (%) vs. chance, on a 0-25% scale, for Radio News (words recognized, words known) and Switchboard (words known).]

Page 29: Conclusions

• Learn from sparse data: factor the model.
  – F0 stream: depends on pitch accent
  – Duration PDF: depends on phrase position
  – POS: predicts pitch accent
  – Syntactic phrase boundary depth: predicts intonational phrase boundaries
• Word error rate: reduced 12%, but only if both syntactic and acoustic dependencies are modeled.
• Accent detection error:
  – 17% same corpus, words known
  – 21% different corpus or words unknown
• Boundary detection error:
  – 7% same corpus, words known
  – 15% different corpus or words unknown

Page 30: Current Work: Switchboard

1. Different statistics (pa = 0.32 vs. pa = 0.55)
2. Different phenomena (disfluency)

Page 31: Current Work: Switchboard

• About 200 short utterances transcribed, plus one full conversation. Available at: http://prosody.beckman.uiuc.edu/resources.htm
• Transcribers agree as well as or better on Switchboard than on Radio News:
  – 95% agreement on whether or not a pitch accent exists
  – 90% agreement on the type of pitch accent (H vs. L)
  – 90% agreement on whether or not a phrase boundary exists
  – 88% agreement on the type of phrase boundary
• Average intonational phrase length is much longer:
  – 4-5 words in Radio News, 10-12 words in Switchboard
• Intonational phrases are broken up into many smaller "intermediate phrases":
  – intermediate phrase length is 4 words in Radio News, and the same in Switchboard
• Fewer words are pitch accented: one per 4 words in Switchboard vs. one per 2 words in Radio News
• 10% of all words are in the reparandum, edit, or alteration of a disfluency

