Page 1: Linear Prediction

Linear Prediction

Page 2: Linear Prediction

Linear Prediction (Introduction):

The object of linear prediction is to estimate the output sequence from a linear combination of input samples, past output samples, or both:

$$\hat{y}(n) = \sum_{j=0}^{q} b(j)\,x(n-j) + \sum_{i=1}^{p} a(i)\,y(n-i)$$

The factors a(i) and b(j) are called the predictor coefficients.
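As a minimal illustration (not part of the original slides), the prediction can be computed directly; the arrays `x`, `y`, `b`, `a` and the index handling are hypothetical:

```python
import numpy as np

def predict(x, y, b, a, n):
    """y_hat(n) = sum_j b[j] x[n-j] + sum_i a[i] y[n-i], valid for n >= max(p, q)."""
    ffwd = sum(b[j] * x[n - j] for j in range(len(b)))              # j = 0..q
    fback = sum(a[i - 1] * y[n - i] for i in range(1, len(a) + 1))  # i = 1..p
    return ffwd + fback
```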

Page 3: Linear Prediction

Linear Prediction (Introduction):

Many systems of interest to us are describable by a linear, constant-coefficient difference equation:

$$\sum_{i=0}^{p} a(i)\,y(n-i) = \sum_{j=0}^{q} b(j)\,x(n-j)$$

If Y(z)/X(z) = H(z), where H(z) is a ratio of polynomials N(z)/D(z), then

$$N(z) = \sum_{j=0}^{q} b(j)\,z^{-j} \quad\text{and}\quad D(z) = \sum_{i=0}^{p} a(i)\,z^{-i}$$

Thus the predictor coefficients give us immediate access to the poles and zeros of H(z).

Page 4: Linear Prediction

Linear Prediction (Types of System Model):

There are two important variants:

All-pole model (in statistics, the autoregressive (AR) model): the numerator N(z) is a constant.

All-zero model (in statistics, the moving-average (MA) model): the denominator D(z) is equal to unity.

The mixed pole-zero model is called the autoregressive moving-average (ARMA) model.

Page 5: Linear Prediction

Linear Prediction (Derivation of LP equations):

Given a zero-mean signal y(n), in the AR model:

$$\hat{y}(n) = \sum_{i=1}^{p} a(i)\,y(n-i)$$

The error is:

$$e(n) = y(n) - \hat{y}(n) = \sum_{i=0}^{p} a(i)\,y(n-i),$$

where a(0) = 1 and the sign of the remaining a(i) is absorbed into the sum.

To derive the predictor we use the orthogonality principle: the desired coefficients are those which make the error orthogonal to the samples y(n-1), y(n-2), ..., y(n-p).

Page 6: Linear Prediction

Linear Prediction (Derivation of LP equations):

Thus we require that

$$\langle y(n-j)\,e(n) \rangle = 0 \quad \text{for } j = 1, 2, \ldots, p.$$

Or,

$$\Big\langle \sum_{i=0}^{p} a(i)\,y(n-i)\,y(n-j) \Big\rangle = 0, \quad j = 1, \ldots, p.$$

Interchanging the operations of averaging and summing, and representing ⟨·⟩ by summing over n, we have

$$\sum_{i=0}^{p} a(i) \sum_{n} y(n-i)\,y(n-j) = 0, \quad j = 1, \ldots, p.$$

The required predictors are found by solving these equations.

Page 7: Linear Prediction

Linear Prediction (Derivation of LP equations):

The orthogonality principle also states that the resulting minimum error is given by

$$E = \langle e(n)\,e(n) \rangle = \langle e(n)\,y(n) \rangle.$$

Or,

$$\sum_{i=0}^{p} a(i) \sum_{n} y(n-i)\,y(n) = E.$$

We can minimize the error over all time:

$$\sum_{i=0}^{p} a(i)\,r(i-j) = 0, \quad j = 1, 2, \ldots, p,$$

where r(·) is the autocorrelation of y(n), and

$$\sum_{i=0}^{p} a(i)\,r(i) = E.$$

Page 8: Linear Prediction

Linear Prediction (Applications):

Autocorrelation matching: we have a signal y(n) with known autocorrelation r_yy(n). We model this with the AR system shown below:

$$H(z) = \frac{\sigma}{A(z)} = \frac{\sigma}{1 - \sum_{i=1}^{p} a_i z^{-i}}$$

[Block diagram: white-noise input e(n), gain σ, all-pole filter 1/A(z), output z(n).]

Page 9: Linear Prediction

Linear Prediction (Order of Linear Prediction):

The choice of predictor order depends on the analysis bandwidth. The rule of thumb is:

For a normal vocal tract, there is an average of about one formant per kilohertz of bandwidth.

One formant requires two complex-conjugate poles.

Hence for every formant we require two predictor coefficients, or two coefficients per kilohertz of bandwidth. For example, a 4 kHz analysis bandwidth suggests roughly eight predictor coefficients.

Page 10: Linear Prediction

Linear Prediction (AR Modeling of Speech Signal):

True model:

[Block diagram: a DT impulse generator driven by the pitch period, with gain V, feeds the glottal filter G(z) to produce the voiced excitation; an uncorrelated-noise generator, with gain U, produces the unvoiced excitation. A voiced/unvoiced switch selects u(n), the volume velocity, which passes through the vocal-tract filter H(z) and the radiation filter R(z) to give the speech signal s(n).]

Page 11: Linear Prediction

Linear Prediction (AR Modeling of Speech Signal):

Using LP analysis:

[Block diagram: a DT impulse generator (pitch) for voiced speech and a white-noise generator for unvoiced speech feed a V/U switch; the selected excitation, scaled by a gain estimate, drives a single all-pole (AR) filter H(z) to produce the speech signal s(n).]

Page 12: Linear Prediction

3.3 LINEAR PREDICTIVE CODING MODEL FOR SPEECH RECOGNITION

[Block diagram: excitation u(n), scaled by gain G, drives the all-pole filter 1/A(z) to produce the speech signal s(n).]

Page 13: Linear Prediction

3.3.1 The LPC Model

$$s(n) \approx a_1\,s(n-1) + a_2\,s(n-2) + \cdots + a_p\,s(n-p).$$

Convert this to an equality by including an excitation term:

$$s(n) = \sum_{i=1}^{p} a_i\,s(n-i) + G\,u(n),$$

or, in the z-domain,

$$S(z) = \sum_{i=1}^{p} a_i z^{-i} S(z) + G\,U(z),$$

so that the transfer function is

$$H(z) = \frac{S(z)}{G\,U(z)} = \frac{1}{1 - \sum_{i=1}^{p} a_i z^{-i}} = \frac{1}{A(z)}.$$
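A sketch of this synthesis model in scipy (the coefficients, gain, and excitation below are hypothetical, chosen only so the filter is stable):

```python
import numpy as np
from scipy.signal import lfilter

a = np.array([1.3, -0.6])   # hypothetical a_1..a_p (p = 2)
G = 0.1                     # gain
u = np.random.randn(16000)  # excitation u(n)

# H(z) = G / A(z) with A(z) = 1 - sum_i a_i z^-i,
# so the denominator coefficients are [1, -a_1, ..., -a_p].
s = lfilter([G], np.concatenate(([1.0], -a)), u)
```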

Page 14: Linear Prediction

3.3.2 LPC Analysis Equations

$$s(n) = \sum_{k=1}^{p} a_k\,s(n-k) + G\,u(n).$$

The prediction:

$$\tilde{s}(n) = \sum_{k=1}^{p} a_k\,s(n-k).$$

The prediction error:

$$e(n) = s(n) - \tilde{s}(n) = s(n) - \sum_{k=1}^{p} a_k\,s(n-k).$$

Error transfer function:

$$A(z) = \frac{E(z)}{S(z)} = 1 - \sum_{k=1}^{p} a_k\,z^{-k}.$$
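A minimal numpy sketch of this inverse (prediction-error) filter; the coefficients `a` are assumed given, e.g. from the analysis methods below:

```python
import numpy as np
from scipy.signal import lfilter

def prediction_error(s, a):
    """e(n) = s(n) - sum_k a[k] s(n-k): FIR filtering by A(z) = 1 - sum_k a_k z^-k."""
    return lfilter(np.concatenate(([1.0], -np.asarray(a))), [1.0], s)
```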

Page 15: Linear Prediction

3.3.2 LPC Analysis Equations

Consider the short-term speech and error segments at frame n:

$$s_n(m) = s(n+m), \qquad e_n(m) = e(n+m).$$

We seek to minimize the mean squared error signal:

$$E_n = \sum_{m} e_n^2(m) = \sum_{m} \Big( s_n(m) - \sum_{k=1}^{p} a_k\,s_n(m-k) \Big)^2.$$

Page 16: Linear Prediction

16

pikiai

kmSimSki

kmSimSamsims

pkaE

p

knkn

mnnn

m mnn

p

kknn

k

n

,...,2,1),()0,(

)()(),(

)()()()(

,...,2,1,0

1

1

Terms of short-term covariance:

(*)

With this notation, we can write (*) as:

A set of P equations, P unknowns

Page 17: Linear Prediction

3.3.2 LPC Analysis Equations

The minimum mean squared error can be expressed as:

$$E_n = \sum_{m} s_n^2(m) - \sum_{k=1}^{p} \hat{a}_k \sum_{m} s_n(m)\,s_n(m-k),$$

or, in covariance notation,

$$E_n = \phi_n(0,0) - \sum_{k=1}^{p} \hat{a}_k\,\phi_n(0,k).$$

Page 18: Linear Prediction

3.3.3 The Autocorrelation Method

Define

$$s_n(m) = s(n+m)\,w(m),$$

where w(m) is a window that is zero outside 0 ≤ m ≤ N-1. The mean squared error is:

$$E_n = \sum_{m=0}^{N+p-1} e_n^2(m).$$

And:

$$\phi_n(i,k) = \sum_{m=0}^{N+p-1} s_n(m-i)\,s_n(m-k), \quad 1 \le i \le p,\ 0 \le k \le p,$$

or, by a change of variables,

$$\phi_n(i,k) = \sum_{m=0}^{N-1-(i-k)} s_n(m)\,s_n(m+i-k), \quad 1 \le i \le p,\ 0 \le k \le p.$$

Page 19: Linear Prediction

3.3.3 The Autocorrelation Method

Since φ_n(i,k) is only a function of i - k, the covariance function reduces to the simple autocorrelation function:

$$\phi_n(i,k) = r_n(i-k) = \sum_{m=0}^{N-1-(i-k)} s_n(m)\,s_n(m+i-k).$$

Page 20: Linear Prediction

3.3.3 The Autocorrelation Method

Since the autocorrelation function is symmetric, i.e. r_n(-k) = r_n(k),

$$\sum_{k=1}^{p} r_n(|i-k|)\,\hat{a}_k = r_n(i), \quad 1 \le i \le p,$$

and this can be expressed in matrix form as:

$$\begin{bmatrix}
r_n(0) & r_n(1) & r_n(2) & \cdots & r_n(p-1) \\
r_n(1) & r_n(0) & r_n(1) & \cdots & r_n(p-2) \\
r_n(2) & r_n(1) & r_n(0) & \cdots & r_n(p-3) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
r_n(p-1) & r_n(p-2) & r_n(p-3) & \cdots & r_n(0)
\end{bmatrix}
\begin{bmatrix} \hat{a}_1 \\ \hat{a}_2 \\ \hat{a}_3 \\ \vdots \\ \hat{a}_p \end{bmatrix}
=
\begin{bmatrix} r_n(1) \\ r_n(2) \\ r_n(3) \\ \vdots \\ r_n(p) \end{bmatrix}.$$

The p × p matrix of autocorrelation values is Toeplitz (symmetric, with equal elements along each diagonal).
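A sketch of the autocorrelation method using scipy's Toeplitz solver; the AR(2) test signal and its coefficients are hypothetical:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_autocorrelation(s, p):
    """Solve the Toeplitz normal equations R a = r for the LPC coefficients."""
    N = len(s)
    r = np.array([np.dot(s[:N - k], s[k:]) for k in range(p + 1)])  # r(0)..r(p)
    # First column of R is r(0)..r(p-1); right-hand side is r(1)..r(p).
    return solve_toeplitz(r[:p], r[1:p + 1])

# Hypothetical AR(2) example: the estimates should come out near [1.3, -0.6].
rng = np.random.default_rng(0)
s = np.zeros(8000)
for n in range(2, len(s)):
    s[n] = 1.3 * s[n - 1] - 0.6 * s[n - 2] + rng.standard_normal()
print(lpc_autocorrelation(s, 2))
```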

Page 21: Linear Prediction

3.3.3 The Autocorrelation Method

Page 22: Linear Prediction

3.3.3 The Autocorrelation Method

Page 23: Linear Prediction

3.3.3 The Autocorrelation Method

Page 24: Linear Prediction

3.3.4 The Covariance Method

Change the interval over which the error is computed to 0 ≤ m ≤ N-1 and use the unweighted speech directly:

$$E_n = \sum_{m=0}^{N-1} e_n^2(m),$$

with φ_n(i,k) defined as:

$$\phi_n(i,k) = \sum_{m=0}^{N-1} s_n(m-i)\,s_n(m-k), \quad 1 \le i \le p,\ 0 \le k \le p,$$

or, by a change of variables,

$$\phi_n(i,k) = \sum_{m=-i}^{N-i-1} s_n(m)\,s_n(m+i-k), \quad 1 \le i \le p,\ 0 \le k \le p.$$

Page 25: Linear Prediction

3.3.4 The Covariance Method

In matrix form:

$$\begin{bmatrix}
\phi_n(1,1) & \phi_n(1,2) & \phi_n(1,3) & \cdots & \phi_n(1,p) \\
\phi_n(2,1) & \phi_n(2,2) & \phi_n(2,3) & \cdots & \phi_n(2,p) \\
\phi_n(3,1) & \phi_n(3,2) & \phi_n(3,3) & \cdots & \phi_n(3,p) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\phi_n(p,1) & \phi_n(p,2) & \phi_n(p,3) & \cdots & \phi_n(p,p)
\end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_p \end{bmatrix}
=
\begin{bmatrix} \phi_n(1,0) \\ \phi_n(2,0) \\ \phi_n(3,0) \\ \vdots \\ \phi_n(p,0) \end{bmatrix}.$$

The resulting covariance matrix is symmetric, but not Toeplitz, and can be solved efficiently by a set of techniques called Cholesky decomposition.
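A sketch of the covariance method with a Cholesky solve; the function name and the convention of summing over m = p..N-1 (so no samples before the frame are needed) are assumptions following the definitions above:

```python
import numpy as np

def lpc_covariance(s, p):
    """Covariance-method LPC: phi(i,k) = sum_m s(m-i) s(m-k) over m = p..N-1."""
    N = len(s)
    phi = np.array([[np.dot(s[p - i:N - i], s[p - k:N - k])
                     for k in range(p + 1)] for i in range(p + 1)])
    Phi = phi[1:, 1:]               # symmetric positive definite, not Toeplitz
    psi = phi[1:, 0]
    L = np.linalg.cholesky(Phi)     # Phi = L L^T
    y = np.linalg.solve(L, psi)     # forward substitution
    return np.linalg.solve(L.T, y)  # back substitution -> a_1..a_p
```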

Page 26: Linear Prediction

3.3.6 Examples of LPC Analysis

Page 27: Linear Prediction

3.3.6 Examples of LPC Analysis

Page 28: Linear Prediction

3.3.7 LPC Processor for Speech Recognition

Page 29: Linear Prediction

3.3.7 LPC Processor for Speech Recognition

Preemphasis: typically a first-order FIR filter, used to spectrally flatten the signal:

$$\tilde{H}(z) = 1 - \tilde{a}\,z^{-1}, \quad 0.9 \le \tilde{a} \le 1.0.$$

Most widely, the following filter is used:

$$\tilde{s}(n) = s(n) - 0.95\,s(n-1).$$
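A one-line numpy sketch of this preemphasis step, with the common 0.95 as the default:

```python
import numpy as np

def preemphasize(s, a=0.95):
    """s~(n) = s(n) - a*s(n-1); the first sample is passed through unchanged."""
    return np.append(s[0], s[1:] - a * s[:-1])
```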

Page 30: Linear Prediction

3.3.7 LPC Processor for Speech Recognition

Frame blocking:

$$x_l(n) = \tilde{s}(M\,l + n), \quad n = 0, 1, \ldots, N-1, \quad l = 0, 1, \ldots, L-1.$$
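A direct transcription of the blocking formula (a sketch; samples that would run past the end of the signal are simply dropped):

```python
import numpy as np

def frame_blocks(s, N, M):
    """x_l(n) = s(M*l + n), n = 0..N-1: L frames of length N, shifted by M."""
    L = 1 + (len(s) - N) // M
    return np.stack([s[l * M : l * M + N] for l in range(L)])
```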

Page 31: Linear Prediction

3.3.7 LPC Processor for Speech Recognition

Windowing:

$$\tilde{x}_l(n) = x_l(n)\,w(n), \quad 0 \le n \le N-1,$$

with the Hamming window

$$w(n) = 0.54 - 0.46\,\cos\!\Big(\frac{2\pi n}{N-1}\Big), \quad 0 \le n \le N-1.$$

Autocorrelation analysis:

$$r_l(m) = \sum_{n=0}^{N-1-m} \tilde{x}_l(n)\,\tilde{x}_l(n+m), \quad m = 0, 1, \ldots, p.$$
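Both steps for a single frame, as a sketch following the two formulas above:

```python
import numpy as np

def windowed_autocorrelation(x, p):
    """Apply a Hamming window to frame x, then compute r(0)..r(p)."""
    N = len(x)
    w = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(N) / (N - 1))
    xw = x * w
    return np.array([np.dot(xw[:N - m], xw[m:]) for m in range(p + 1)])
```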

Page 32: Linear Prediction

3.3.7 LPC Processor for Speech Recognition

LPC analysis finds the LPC coefficients, the reflection coefficients (PARCOR), the log area ratio coefficients, the cepstral coefficients, ... by Durbin's method:

$$E^{(0)} = r(0),$$

$$k_i = \frac{r(i) - \sum_{j=1}^{i-1} \alpha_j^{(i-1)}\,r(i-j)}{E^{(i-1)}}, \quad 1 \le i \le p, \tag{*}$$

$$\alpha_i^{(i)} = k_i,$$

$$\alpha_j^{(i)} = \alpha_j^{(i-1)} - k_i\,\alpha_{i-j}^{(i-1)}, \quad 1 \le j \le i-1,$$

$$E^{(i)} = (1 - k_i^2)\,E^{(i-1)}.$$

Note: the summation in (*) is omitted for i = 1.
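A direct implementation sketch of the recursion above:

```python
import numpy as np

def durbin(r):
    """Levinson-Durbin recursion: autocorrelations r[0..p] -> (a[1..p], k[1..p], E)."""
    p = len(r) - 1
    a = np.zeros(p + 1)   # a[0] unused; a[j] holds alpha_j^(i)
    k = np.zeros(p + 1)
    E = r[0]
    for i in range(1, p + 1):
        # k_i = (r(i) - sum_{j=1}^{i-1} alpha_j^(i-1) r(i-j)) / E^(i-1)
        k[i] = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / E
        new_a = a.copy()
        new_a[i] = k[i]                        # alpha_i^(i) = k_i
        for j in range(1, i):
            new_a[j] = a[j] - k[i] * a[i - j]  # alpha_j^(i)
        a = new_a
        E *= 1.0 - k[i] ** 2                   # E^(i)
    return a[1:], k[1:], E
```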

Page 33: Linear Prediction

3.3.7 LPC Processor for Speech Recognition

LPC coefficients: $a_m = \alpha_m^{(p)}, \quad 1 \le m \le p.$

PARCOR coefficients: $k_m, \quad 1 \le m \le p.$

Log area ratio coefficients: $g_m = \log\dfrac{1 - k_m}{1 + k_m}, \quad 1 \le m \le p.$
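For example, converting the PARCOR coefficients returned by the `durbin` sketch above into log area ratios:

```python
import numpy as np

def log_area_ratios(k):
    """g_m = log((1 - k_m) / (1 + k_m)); valid for |k_m| < 1."""
    k = np.asarray(k)
    return np.log((1.0 - k) / (1.0 + k))
```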

Page 34: Linear Prediction

3.3.7 LPC Processor for Speech Recognition

LPC parameter conversion to cepstral coefficients:

$$c_0 = \ln \sigma^2, \quad \text{where } \sigma^2 \text{ is the gain term in the LPC model},$$

$$c_m = a_m + \sum_{k=1}^{m-1} \Big(\frac{k}{m}\Big)\,c_k\,a_{m-k}, \quad 1 \le m \le p,$$

$$c_m = \sum_{k=m-p}^{m-1} \Big(\frac{k}{m}\Big)\,c_k\,a_{m-k}, \quad m > p.$$
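A sketch of this recursion (c_0 is omitted, since it depends only on the gain):

```python
import numpy as np

def lpc_to_cepstrum(a, Q):
    """Convert LPC coefficients a[1..p] to cepstral coefficients c[1..Q]."""
    p = len(a)
    c = np.zeros(Q + 1)
    for m in range(1, Q + 1):
        acc = a[m - 1] if m <= p else 0.0      # a_m term only for m <= p
        for k in range(max(1, m - p), m):      # k = m-p..m-1 (or 1..m-1)
            acc += (k / m) * c[k] * a[m - k - 1]
        c[m] = acc
    return c[1:]
```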

Page 35: Linear Prediction

3.3.7 LPC Processor for Speech Recognition

The cepstral coefficients are the Fourier series coefficients of the log spectrum:

$$\log|S(e^{j\omega})| = \sum_{m=-\infty}^{\infty} c_m\,e^{-j\omega m}.$$

Parameter weighting:

Low-order cepstral coefficients are sensitive to overall spectral slope.
High-order cepstral coefficients are sensitive to noise.
The weighting is done to minimize these sensitivities.

Page 36: Linear Prediction

3.3.7 LPC Processor for Speech Recognition

$$\log|\hat{S}(e^{j\omega})| = \sum_{m=-Q}^{Q} \hat{c}_m\,e^{-j\omega m},$$

$$\hat{c}_m = w_m\,c_m, \quad 1 \le m \le Q,$$

with the weighting

$$w_m = 1 + \frac{Q}{2}\,\sin\!\Big(\frac{\pi m}{Q}\Big), \quad 1 \le m \le Q.$$
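The weighting as a sketch:

```python
import numpy as np

def weight_cepstrum(c):
    """c_hat_m = (1 + (Q/2) sin(pi*m/Q)) c_m for m = 1..Q, where c holds c_1..c_Q."""
    Q = len(c)
    m = np.arange(1, Q + 1)
    return (1.0 + 0.5 * Q * np.sin(np.pi * m / Q)) * np.asarray(c)
```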

Page 37: Linear Prediction

3.3.7 LPC Processor for Speech Recognition

Temporal cepstral derivative:

Fourier series representation of the time derivative:

$$\frac{\partial}{\partial t}\,\log|S(e^{j\omega}, t)| = \sum_{m=-\infty}^{\infty} \frac{\partial c_m(t)}{\partial t}\,e^{-j\omega m}.$$

Approximate ∂c_m(t)/∂t by an orthogonal polynomial fit over a finite-length window:

$$\Delta c_m(t) = \frac{\partial c_m(t)}{\partial t} \approx \mu \sum_{k=-K}^{K} k\,c_m(t+k),$$

or optionally:

$$\Delta c_m(t) = c_m(t+K) - c_m(t-K).$$

Finally, the observation vector is

$$o_t = \big(c_1(t), c_2(t), \ldots, c_Q(t), \Delta c_1(t), \Delta c_2(t), \ldots, \Delta c_Q(t)\big).$$
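A sketch of the first form, taking μ as the usual least-squares normalization 1/Σk² (an assumption; the slides leave μ unspecified) and zero-padding at the frame edges:

```python
import numpy as np

def delta_cepstrum(C, K=3):
    """First-order fit over a (2K+1)-frame window: dc(t) ~ mu * sum_k k*C[t+k].
    C has shape (num_frames, Q); edge frames are zero-padded for simplicity."""
    T = C.shape[0]
    mu = 1.0 / np.sum(np.arange(-K, K + 1) ** 2)  # assumed normalization
    D = np.zeros_like(C)
    for t in range(T):
        for k in range(-K, K + 1):
            if 0 <= t + k < T:
                D[t] += mu * k * C[t + k]
    return D
```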

Page 38: Linear Prediction

3.3.9 Typical LPC Analysis Parameters

N  number of samples in the analysis frame
M  number of samples shift between frames
p  LPC analysis order
Q  dimension of the LPC-derived cepstral vector
K  number of frames over which cepstral time derivatives are computed

Page 39: Linear Prediction

Typical values of LPC analysis parameters for speech-recognition systems:

parameter   Fs = 6.67 kHz    Fs = 8 kHz       Fs = 10 kHz
N           300 (45 msec)    240 (30 msec)    300 (30 msec)
M           100 (15 msec)    80 (10 msec)     100 (10 msec)
p           8                10               10
Q           12               12               12
K           3                3                3