Signal Processing Columbia

Page 1: Signal Processing Columbia

2012-09-05Dan Ellis 1

ELEN E4810: Digital Signal ProcessingTopic 1: Introduction

1. Course overview2. Digital Signal Processing3. Basic operations & block diagrams4. Classes of sequences

Page 2: Signal Processing Columbia

2012-09-05Dan Ellis 2

1. Course overview

Digital signal processing:Modifying signals with computers

Web site: http://www.ee.columbia.edu/~dpwe/e4810/

Book:Mitra “Digital Signal Processing” (3rd ed., 2005)

Instructor: [email protected]

Page 3: Signal Processing Columbia

2012-09-05Dan Ellis 3

Grading structure Homeworks: 20%

Mainly from Mitra Wednesday-Wednesday schedule Collaborate, don’t copy

Midterm: 20% One session

Final exam: 30% Project: 30%

Page 4: Signal Processing Columbia

2012-09-05Dan Ellis 4

Course project

Goal: hands-on experience with DSP Practical implementation Work in pairs or alone Brief report, optional presentation Recommend MATLAB Ideas on website Don’t copy! Cite your sources!

Page 5: Signal Processing Columbia

2012-09-05Dan Ellis 5

Example past projects

Solo Singing Detection Guitar Chord Classifier Speech/Music Discrimination Room sonar Construction equipment monitoring

DTMF decoder Reverb algorithms Compression algorithms

(more on web site)

Page 6: Signal Processing Columbia

2012-09-05Dan Ellis 6

MATLAB Interactive system for numerical

computation Extensive signal processing library Focus on algorithm, not implementation Access:

Columbia Site License: https://portal.seas.columbia.edu/matlab/

Student Version (need Sig. Proc. toolbox) Engineering Terrace 251 computer lab

Page 7: Signal Processing Columbia

2012-09-05Dan Ellis 7

Course at a glance

Page 8: Signal Processing Columbia

2012-09-05Dan Ellis 8

2. Digital Signal Processing

Signals:Information-bearing function

E.g. sound: air pressure variation at a point as a function of time p(t)

Dimensionality:Sound: 1-DimensionGreyscale image i(x,y) : 2-DVideo: 3 x 3-D: r(x,y,t) g(x,y,t) b(x,y,t)

Page 9: Signal Processing Columbia

2012-09-05Dan Ellis 9

Example signals

Noise - all domains Spread-spectrum phone - radio ECG - biological Music Image/video - compression ….

Page 10: Signal Processing Columbia

2012-09-05Dan Ellis 10

Signal processing

Modify a signal to extract/enhance/ rearrange the information

Origin in analog electronics e.g. radar Examples…

Noise reduction Data compression Representation for recognition/

classification…

Page 11: Signal Processing Columbia

2012-09-05Dan Ellis 11

Digital Signal Processing DSP = signal processing on a computer Two effects: discrete-time, discrete level

x(t)

x[n]

Page 12: Signal Processing Columbia

2012-09-05Dan Ellis 12

DSP vs. analog SP

Conventional signal processing:

Digital SP system:

Processorp(t) q(t)

Processorp(t) q(t)A/D D/Ap[n] q[n]

Page 13: Signal Processing Columbia

2012-09-05Dan Ellis 13

Digital vs. analog

Pros: Noise performance (quantized signal); Use a general computer - flexibility, upgrades; Stability / duplicability; Novelty

Cons: Limitations of A/D & D/A; Baseline complexity / power consumption

Page 14: Signal Processing Columbia

2012-09-05Dan Ellis 14

DSP example

Speech time-scale modification:extend duration without altering pitch

M

Page 15: Signal Processing Columbia

2012-09-05Dan Ellis 15

3. Operations on signals Discrete time signal often obtained by

sampling a continuous-time signal

Sequence x[n] = xa(nT), n=…-1,0,1,2… T= samp. period; 1/T= samp. frequency

Page 16: Signal Processing Columbia

2012-09-05Dan Ellis 16

Sequences

Can write a sequence by listing values:

Arrow indicates where n=0 Thus,

x[n] = . . . ,0.2, 2.2, 1.1, 0.2,3.7, 2.9, . . .↑

Page 17: Signal Processing Columbia

2012-09-05Dan Ellis 17

Left- and right-sided

x[n] may be defined only for certain n: N1 ≤ n ≤ N2: Finite length (length = …) N1 ≤ n: Right-sided (Causal if N1 ≥ 0) n ≤ N2: Left-sided (Anticausal)

Can always extend with zero-padding

Right-sidedLeft-sided

Page 18: Signal Processing Columbia

2012-09-05Dan Ellis 18

Operations on sequences

Addition operation:

Adder

Multiplication operation

MultiplierA

x[n] y[n]

x[n] y[n]

w[n] y[n] = x[n] + w[n]

y[n] = A x[n]

Page 19: Signal Processing Columbia

2012-09-05Dan Ellis 19

More operations

Product (modulation) operation:

Modulator

E.g. Windowing: Multiplying an infinite-length sequence by a finite-length window sequence to extract a region

x[n] y[n]

w[n] y[n] = x[n] w[n]

Page 20: Signal Processing Columbia

2012-09-05Dan Ellis 20

Time shifting

Time-shifting operation: where N is an integer If N > 0, it is delaying operation

Unit delay

If N < 0, it is an advance operation Unit advance

y[n]x[n]

y[n]x[n]

Page 21: Signal Processing Columbia

2012-09-05Dan Ellis 21

Combination of basic operations

Example

y[n] = 1·x[n] + 2·x[n−1] + 3·x[n−2] + 4·x[n−3]

Page 22: Signal Processing Columbia

2012-09-05Dan Ellis 22

Up- and down-sampling

Certain operations change the effective sampling rate of sequences by adding or removing samples

Up-sampling = adding more samples = interpolation

Down-sampling = discarding samples = decimation

Page 23: Signal Processing Columbia

2012-09-05Dan Ellis 23

Down-sampling

In down-sampling by an integer factor M > 1, every M-th sample of the input

sequence is kept and M - 1 in-between samples are removed:

x_d[n] = x[nM]

Page 24: Signal Processing Columbia

2012-09-05Dan Ellis 24

Down-sampling

An example of down-sampling

y[n] = x[3n]   (down-sampling by M = 3)

Page 25: Signal Processing Columbia

2012-09-05Dan Ellis 25

Up-sampling

Up-sampling is the converse of down-sampling: L-1 zero values are inserted between each pair of original values.

x_u[n] = x[n/L] for n = 0, ±L, ±2L, …;  = 0 otherwise

Page 26: Signal Processing Columbia

2012-09-05Dan Ellis 26

Up-sampling

An example of up-sampling

3

not inverse of downsampling!
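A minimal MATLAB sketch of both operations (variable names and the example sequence are illustrative, not from the slides):

M = 3;
x = 0:11;                           % example sequence
xd = x(1:M:end);                    % down-sample: xd[n] = x[nM]
L = 3;
xu = zeros(1, L*length(x));
xu(1:L:end) = x;                    % up-sample: insert L-1 zeros between samples
isequal(xu(1:L:end), x)             % true: down-sampling by L undoes up-sampling by L
% but up-sampling xd by M does not recover x: the samples x[nM+1]..x[nM+M-1] are gone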

Page 27: Signal Processing Columbia

2012-09-05Dan Ellis 27

Complex numbers .. a mathematical convenience that leads to simple expressions

A second “imaginary” dimension (j ≡ √−1) is added to all values.

Rectangular form: x = x_re + j·x_im, where magnitude |x| = √(x_re² + x_im²) and phase θ = tan⁻¹(x_im/x_re)

Polar form: x = |x|·e^{jθ} = |x|·cos θ + j·|x|·sin θ   (since e^{jθ} = cos θ + j·sin θ)

Page 28: Signal Processing Columbia

2012-09-05Dan Ellis 28

Complex math When adding, real

and imaginary parts add: (a+jb) + (c+jd) = (a+c) + j(b+d)

When multiplying, magnitudes multiply and phases add: rejθ·sejφ = rsej(θ+φ)

Phases modulo 2π

x

Page 29: Signal Processing Columbia

2012-09-05Dan Ellis 29

Complex conjugate Flips imaginary part / negates phase:

Conjugate x* = x_re − j·x_im = |x|·e^{−jθ}

Useful in resolving to real quantities: x + x* = x_re + j·x_im + x_re − j·x_im = 2·x_re

x·x* = |x|·e^{jθ} · |x|·e^{−jθ} = |x|²

Page 30: Signal Processing Columbia

2012-09-05Dan Ellis 30

Classes of sequences

Useful to define broad categories…

Finite/infinite (extent in n)

Real/complex: x[n] = xre[n] + j·xim[n]

Page 31: Signal Processing Columbia

2012-09-05Dan Ellis 31

Classification by symmetry

Conjugate symmetric sequence: if x[n] = x_re[n] + j·x_im[n] then x_cs[n] = x_cs*[−n] = x_re[−n] − j·x_im[−n]

Conjugate antisymmetric: x_ca[n] = −x_ca*[−n] = −x_re[−n] + j·x_im[−n]

Page 32: Signal Processing Columbia

2012-09-05Dan Ellis 32

Conjugate symmetric decomposition Any sequence can be expressed as

conjugate symmetric (CS) / antisymmetric (CA) parts: x[n] = xcs[n] + xca[n]where: xcs[n] = 1/2(x[n] + x*[-n]) = xcs*[-n] xca[n] = 1/2(x[n] – x*[-n]) = -xca*[-n]

When signals are real,CS → Even (xre[n] = xre[-n]), CA → Odd

Page 33: Signal Processing Columbia

2012-09-05Dan Ellis 33

Basic sequences

Unit sample sequence: δ[n]  (= 1 at n = 0, 0 elsewhere)

Shift in time: δ[n − k]

Can express any sequence with δs: {α₀, α₁, α₂, …} = α₀·δ[n] + α₁·δ[n−1] + α₂·δ[n−2] + …

Page 34: Signal Processing Columbia

2012-09-05Dan Ellis 34

More basic sequences

Unit step sequence:

Relate to unit sample:

µ[n] = 1 for n ≥ 0;  = 0 for n < 0

δ[n] = µ[n] − µ[n−1];   µ[n] = Σ_{k=−∞}^{n} δ[k]

Page 35: Signal Processing Columbia

2012-09-05Dan Ellis 35

Exponential sequences Exponential sequences are

eigenfunctions of LTI systems General form: x[n] = A·αⁿ

If A and α are real (and positive): |α| > 1 grows, |α| < 1 decays

Page 36: Signal Processing Columbia

2012-09-05Dan Ellis 36

Complex exponentials x[n] = A·αⁿ

Constants A, α can be complex: A = |A|·e^{jφ};  α = e^{(σ + jω)}

→ x[n] = |A|·e^{σn}·e^{j(ωn + φ)}
   (scale)·(varying magnitude)·(varying phase)

Page 37: Signal Processing Columbia

2012-09-05Dan Ellis 37

Complex exponentials Complex exponential sequence can

‘project down’ onto real & imaginary axes to give sinusoidal sequences

x_re[n] = e^{−n/12}·cos(πn/6)   x_im[n] = e^{−n/12}·sin(πn/6)

x[n] = exp( (−1/12 + jπ/6)·n )

(e^{jθ} = cos θ + j·sin θ)

Page 38: Signal Processing Columbia

2012-09-05Dan Ellis 38

Periodic sequences: A sequence satisfying x[n] = x[n + kN] for all n is called a periodic sequence with period N, where N is a positive integer and k is any integer.

Smallest value of N satisfying x[n] = x[n + N] is called the fundamental period

Page 39: Signal Processing Columbia

2012-09-05Dan Ellis 39

Periodic exponentials: Sinusoidal sequence sin(ω₀n + φ) and complex exponential sequence e^{jω₀n} are periodic of period N only if ω₀N = 2πr, with N & r positive integers.

Smallest such N is the fundamental period of the sequence:
r = 1 → one sinusoid cycle per N samples
r > 1 → r cycles per N samples

Page 40: Signal Processing Columbia

2012-09-05Dan Ellis 40

Symmetry of periodic sequences An N-point finite-length sequence xf[n]

defines a periodic sequence:

Symmetry of xf [n] is not defined because xf [n] is undefined for n < 0

Define Periodic Conjugate Symmetric:

x̃[n] = x_f[⟨n⟩_N]

x̃_pcs[n] = 1/2 (x̃[n] + x̃*[⟨−n⟩_N]) = 1/2 (x_f[n] + x_f*[N − n]),  1 ≤ n < N

(“n modulo N”:  ⟨n⟩_N = n + rN  s.t. 0 ≤ ⟨n⟩_N < N, r ∈ Z)

Page 41: Signal Processing Columbia

2012-09-05Dan Ellis 41

Sampling sinusoids

Sampling a sinusoid is ambiguous:

x₁[n] = sin(ω₀n);   x₂[n] = sin((ω₀ + 2πr)·n) = sin(ω₀n) = x₁[n]

Page 42: Signal Processing Columbia

2012-09-05Dan Ellis 42

Aliasing: E.g. for cos(ωn), all ω = 2πr ± ω₀ (integer r) appear the same after sampling

We say that a larger ω appears aliased to a lower frequency

Principal value for discrete-time frequency: 0 ≤ ω₀ ≤ π

(i.e. less than 1/2 cycle per sample)
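A two-line numerical illustration of this ambiguity in MATLAB (ω₀ and r chosen arbitrarily):

n = 0:20;  w0 = 0.4*pi;  r = 1;
max(abs(cos(w0*n) - cos((w0 + 2*pi*r)*n)))   % ~1e-15: identical after sampling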

Page 43: Signal Processing Columbia

2012-09-12Dan Ellis 1

ELEN E4810: Digital Signal ProcessingTopic 2: Time domain

1. Discrete-time systems2. Convolution3. Linear Constant-Coefficient Difference

Equations (LCCDEs)4. Correlation

Page 44: Signal Processing Columbia

2012-09-12Dan Ellis 2

1. Discrete-time systems A system converts input to output:

E.g. Moving Average (MA):

x[n] DT System y[n]

(M = 3)

x[n]

y[n]+z-1

z-1

1/M

1/M

1/Mx[n-1]

x[n-2]

y[n] = (1/M) Σ_{k=0}^{M−1} x[n−k]

y[n] = f (x[n]) n

A

Page 45: Signal Processing Columbia

2012-09-12Dan Ellis 3

Moving Average (MA)

x[n]

y[n]+z-1

z-1

1/M

1/M

1/Mx[n-1]

x[n-2]

n

x[n]

-1 1 2 3 4 5 6 7 8 9

A

n

x[n-1]

-1 1 2 3 4 5 6 7 8 9

n

x[n-2]

-1 1 2 3 4 5 6 7 8 9

n

y[n]

-1 1 2 3 4 5 6 7 8 9

y[n] = (1/M) Σ_{k=0}^{M−1} x[n−k]

Page 46: Signal Processing Columbia

2012-09-12Dan Ellis 4

MA Smoother MA smoothes out rapid variations

(e.g. “12 month moving average”) e.g. signal noise

y[n] = (1/5) Σ_{k=0}^{4} x[n−k]   (5-pt moving average)

x[n] = s[n] + d[n]   (signal + noise)
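A small MATLAB sketch of this smoothing effect (the signal and noise level are arbitrary choices for illustration):

n = 0:99;
s = sin(2*pi*n/50);                  % slowly-varying signal
x = s + 0.3*randn(size(n));          % x[n] = s[n] + d[n]: signal plus noise
M = 5;
h = ones(1, M)/M;                    % 5-pt moving-average impulse response
y = conv(x, h);                      % y(1:100) is a smoothed version of x
plot(n, x, n, y(1:100))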

Page 47: Signal Processing Columbia

2012-09-12Dan Ellis 5

Accumulator Output accumulates all past inputs:

x[n] y[n]+z-1

y[n-1]

y[n] = Σ_{ℓ=−∞}^{n} x[ℓ] = Σ_{ℓ=−∞}^{n−1} x[ℓ] + x[n] = y[n−1] + x[n]

Page 48: Signal Processing Columbia

2012-09-12Dan Ellis 6

Accumulator

n

x[n]

-1 1 2 3 4 5 6 7 8 9

A

n

y[n-1]

-1 1 2 3 4 5 6 7 8 9

n

y[n]

-1 1 2 3 4 5 6 7 8 9

x[n] y[n]+z-1

y[n-1]

M

Page 49: Signal Processing Columbia

2012-09-12Dan Ellis 7

Classes of DT systems Linear systems obey superposition:

if input x1[n] → output y1[n], x2 → y2 ... given a linear combination of inputs x[n] = α·x1[n] + β·x2[n],

then output y[n] = α·y1[n] + β·y2[n]

for all α, β, x1, x2

i.e. same linear combination of outputs

x[n] DT system y[n]

Page 50: Signal Processing Columbia

2012-09-12Dan Ellis 8

Linearity: Example 1 Accumulator:

Linear

y[n] = x[]=

n

y[n] = x1[]+ x2[]( )=

n

= x1[]( ) + x2[]( )= x1[] + x2[]= y1[n]+ y2[n]

x[n] = x1[n]+ x2[n]

Page 51: Signal Processing Columbia

2012-09-12Dan Ellis 9

Linearity Example 2: “Teager Energy operator”:

Nonlinear

y[n] = x2[n] x[n 1] x[n +1]

x[n] = x1[n]+ x2[n]

y[n] = x1[n]+x2[n]( )2

x1[n 1]+x2[n 1]( )

x1[n +1]+x2[n +1]( )

y1[n]+ y2[n]

Page 52: Signal Processing Columbia

2012-09-12Dan Ellis 10

Linearity Example 3: ‘Offset’ accumulator:

but

Nonlinear

.. unless C = 0

y[n] =C + x[]=

n

y1[n] =C + x1[]=

n

y[n] =C + x1[]+x2[]( )=

n

y1[n]+y2[n]

Page 53: Signal Processing Columbia

2012-09-12Dan Ellis 11

Property: Shift (time) invariance Time-shift of input

causes same shift in output i.e. if x1[n] → y1[n]

then

i.e. process doesn’t depend on absolute value of n

x[n] = x1[n − n0]

y[n] = y1[n − n0]

Page 54: Signal Processing Columbia

2012-09-12Dan Ellis 12

Shift-invariance counterexample Upsampler: L

Not shift invariant

y[n] =

x[n/L] n = 0,±L,±2L, . . .

0 otherwise

y1[n] = x1[n/L] (n = r · L)x[n] = x1[n n0]

y[n] = x[n/L] = x1[n/L n0]

= x1

n L · n0

L

= y1[n L · n0] = y1[n n0]

x[n] y[n]

Page 55: Signal Processing Columbia

2012-09-12Dan Ellis 13

Another counterexample

Hence If

then

Not shift invariant - parameters depend on n

y[n] = n x[n] scaling by time index

y1[n n0 ] = n n0( ) x1[n n0 ]x[n] = x1[n n0 ]

y[n] = n x1[n n0 ]

Page 56: Signal Processing Columbia

2012-09-12Dan Ellis 14

Linear Shift Invariant (LSI) Systems which are both linear and

shift invariant are easily manipulated mathematically

This is still a wide and useful class of systems

If discrete index corresponds to time, called Linear Time Invariant (LTI)

Page 57: Signal Processing Columbia

2012-09-12Dan Ellis 15

Causality If output depends only on past and

current inputs (not future), system is called causal

Formally, if x1[n] → y1[n] & x2[n] → y2[n] Causal

x1[n] = x2[n] ∀ n < N  ⇒  y1[n] = y2[n] ∀ n < N

Page 58: Signal Processing Columbia

2012-09-12Dan Ellis 16

Causality example Moving average:

y[n] depends on x[n-k], k ≥ 0 → causal ‘Centered’ moving average

.. looks forward in time → noncausal .. Can make causal by delaying

y[n] = (1/M) Σ_{k=0}^{M−1} x[n−k]

y_c[n] = y[n + (M−1)/2] = (1/M) ( x[n] + Σ_{k=1}^{(M−1)/2} ( x[n−k] + x[n+k] ) )

Page 59: Signal Processing Columbia

2012-09-12Dan Ellis 17

Impulse response (IR) Impulse

(unit sample sequence) Given a system:

if x[n] = ±[n] then y[n] = h[n]

LSI system completely specified by h[n]

n

±[n]-1 1 2 3 4 5 6 7

1

-2-3

x[n] DT system y[n]

“impulse response”

Page 60: Signal Processing Columbia

2012-09-12Dan Ellis 18

Impulse response example Simple system: α1x[n]

y[n]+z-1

z-1

α2

α3

n

x[n]-1 1 2 3 4 5 6 7

1

-2-3

n

y[n]-1 1 2 3 4 5 6 7

α1

-2-3

α2α3

x[n] = ±[n] impulse

y[n] = h[n] impulse response

Page 61: Signal Processing Columbia

2012-09-12Dan Ellis 19

2. Convolution Impulse response:

Shift invariance:

+ Linearity:

Can express any sequence with ±s: x[n] = x[0]±[n] + x[1]±[n-1] + x[2]±[n-2]..

±[n] LSI h[n]

±[n-n0] LSI h[n-n0]

α·±[n-k] + β·±[n-l]

LSI α·h[n-k] + β·h[n-l]

Page 62: Signal Processing Columbia

2012-09-12Dan Ellis 20

Convolution sum Hence, since

For LSI,

written as Summation is symmetric in x and h

i.e. l = n – k →

Convolutionsum

x[n] = Σ_{k=−∞}^{∞} x[k]·δ[n−k]

y[n] = Σ_{k=−∞}^{∞} x[k]·h[n−k]   (convolution sum)

written y[n] = x[n] ∗ h[n]

x[n] ∗ h[n] = Σ_{ℓ=−∞}^{∞} x[n−ℓ]·h[ℓ] = h[n] ∗ x[n]

Page 63: Signal Processing Columbia

2012-09-12Dan Ellis 21

Convolution properties LSI System output y[n] = input x[n]

convolved with impulse response h[n] → h[n] completely describes system

Commutative:  x[n] ∗ h[n] = h[n] ∗ x[n]

Associative:  (x[n] ∗ h[n]) ∗ y[n] = x[n] ∗ (h[n] ∗ y[n])

Distributive:  h[n] ∗ (x[n] + y[n]) = h[n] ∗ x[n] + h[n] ∗ y[n]

Page 64: Signal Processing Columbia

2012-09-12Dan Ellis 22

Interpreting convolution Passing a signal through a (LSI) system

is equivalent to convolving it with the system’s impulse response

x[n] h[n] y[n] = x[n] h[n]∗

x[n]=0 3 1 2 -1 h[n] = 3 2 1

n-1 1 2 3 4 5 6 7-2-3n-1 1 2 3 5 6 7-2-3

y[n] = x[k]h[n k]k=

= h[k]x[n k]k=

→ →

Page 65: Signal Processing Columbia

2012-09-12Dan Ellis 23

Convolution interpretation 1

Time-reverse h, shift by n, take inner product against fixed x

k-1 1 2 3 5 6 7-2-3

x[k]

k-1 1 2 3 4 5 6 7-2-3

h[0-k]

k-1 1 2 3 4 5 6 7-2-3

h[1-k]

k-1 1 2 3 4 5 6 7-2-3

h[2-k]

n-1 1 2 3 5 7-2-3

y[n]0

9 9 11

2-1

y[n] = x[k]h[n k]k=

=g[k]

=g[k-1]

call h[-n] = g[n]

Page 66: Signal Processing Columbia

2012-09-12Dan Ellis 24

Convolution interpretation 2

Shifted x’sweighted bypoints in h

Conversely, weighted, delayedversions of h ... n-1 1 2 3 5 7-2-3

y[n]0

9 9 11

2-1

n-1 1 2 3 5 6 7-2-3

x[n]

n-1 1 2 3 4 6 7-2-3

x[n-1]

n-1 1 2 3 4 5 7-2-3

x[n-2]k

-11

23

45

67

-2-3

h[k]

y[n] = h[k]x[n k]k=

Page 67: Signal Processing Columbia

2012-09-12Dan Ellis 25

Matrix interpretation

Diagonals in X matrix are equal

y[n] = x[n k]h[k]k=

y[0]y[1]y[2]...

=

x[0]x[1]x[2]...

x[1]x[0]x[1]...

x[2]x[1]x[0]...

h[0]h[1]h[2]

Page 68: Signal Processing Columbia

2012-09-12Dan Ellis 26

Convolution notes Total nonzero length of convolving N

and M point sequences is N+M-1 Adding the indices of the terms within

the summation gives n :

i.e. summation indices move in opposite senses

y[n] = h[k]x[n k]k=

k + n k( ) = n

Page 69: Signal Processing Columbia

2012-09-12Dan Ellis 27

Convolution in MATLAB The M-file conv implements the

convolution sum of two finite-length sequences

If

then conv(a,b) yields

a = [0 3 1 2 -1]

b = [3 2 1]

[0 9 9 11 2 0 -1]

M
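For example, running this in MATLAB (or Octave) reproduces the result above:

a = [0 3 1 2 -1];
b = [3 2 1];
conv(a, b)       % returns [0 9 9 11 2 0 -1]  (length 5 + 3 - 1 = 7)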

Page 70: Signal Processing Columbia

2012-09-12Dan Ellis 28

Connected systems Cascade connection:

Impulse response h[n] of the cascade of two systems with impulse responses h1[n] and h2[n] is h[n] = h1[n] ∗ h2[n]

By commutativity, h1[n] ∗ h2[n] = h2[n] ∗ h1[n], so the order of the cascade does not matter

Page 71: Signal Processing Columbia

2012-09-12Dan Ellis 29

Inverse systems ±[n] is identity for convolution

i.e. Consider

h2[n] is the inverse system of h1[n]

x[n] ±[n] = x[n]*

x[n] y[n] z[n]

= x[n]z[n] = h2[n] y[n] = h2[n] h1[n] x[n]* **

if *

h2[n] h1[n] = [n]

Page 72: Signal Processing Columbia

2012-09-12Dan Ellis 30

Inverse systems Use inverse system to recover input x[n]

from output y[n] (e.g. to undo effects of transmission channel)

Only sometimes possible - e.g. cannot ‘invert’ h1[n] = 0

In general, attempt to solve *

h2[n] h1[n] = [n]

Page 73: Signal Processing Columbia

2012-09-12Dan Ellis 31

Inverse system example Accumulator:

Impulse response h1[n] = μ[n] ‘Backwards difference’

.. has desired property:

Thus, ‘backwards difference’ is inverse system of accumulator.

µ[n]µ[n 1] = [n]

n-1 1 2 3 5 6 7-2-3 4

n-1 1 2 3 5 6 7-2-3 4

Page 74: Signal Processing Columbia

2012-09-12Dan Ellis 32

Parallel connection

Impulse response of two parallel systems whose outputs are added together is: h[n] = h1[n] + h2[n]

Page 75: Signal Processing Columbia

2012-09-12Dan Ellis 33

3. Linear Constant-Coefficient Difference Equation (LCCDE) General spec. of DT, LSI, finite-dim sys:

Rearrange for y[n] in causal form:

WLOG, always have d0 = 1

defined by dk,pk

order = max(N,M)

Σ_{k=0}^{N} d_k·y[n−k] = Σ_{k=0}^{M} p_k·x[n−k]

y[n] = −Σ_{k=1}^{N} (d_k/d_0)·y[n−k] + Σ_{k=0}^{M} (p_k/d_0)·x[n−k]

Page 76: Signal Processing Columbia

2012-09-12Dan Ellis 34

Solving LCCDEs “Total solution”

Complementary Solution

satisfies

Particular Solutionfor given forcing function x[n]

y[n] = yc[n]+ yp[n]

Σ_{k=0}^{N} d_k·y[n−k] = 0

Page 77: Signal Processing Columbia

2012-09-12Dan Ellis 35

Complementary Solution General form of unforced oscillation

i.e. system’s ‘natural modes’ Assume yc has form

Characteristic polynomialof system - depends only on dk

y_c[n] = λⁿ

Σ_{k=0}^{N} d_k·λ^{n−k} = 0

λ^{n−N}·( d_0·λ^N + d_1·λ^{N−1} + … + d_{N−1}·λ + d_N ) = 0

Σ_{k=0}^{N} d_k·λ^{N−k} = 0   (characteristic polynomial)

Page 78: Signal Processing Columbia

2012-09-12Dan Ellis 36

Complementary Solution factors into roots λi , i.e.

Each/any λi satisfies eqn. Thus, complementary solution:

Any linear combination will work→ αis are free to match initial conditions

dkNk

k=0

N

= 0

( 1)( 2 )...= 0

yc[n] =11n +22

n +33n + ...

Page 79: Signal Processing Columbia

2012-09-12Dan Ellis 37

Complementary Solution Repeated roots in chr. poly:

Complex λis → sinusoidal

yc[n] =11n +2n1

n +3n21

n

+...+LnL11

n + ...

( 1)L ( 2 )...= 0

yc[n] = iin

Page 80: Signal Processing Columbia

2012-09-12Dan Ellis 38

Particular Solution Recall: Total solution Particular solution reflects input ‘Modes’ usually decay away for large n

leaving just yp[n] Assume ‘form’ of x[n], scaled by β:

e.g. x[n] constant → yp[n] = β x[n] = λ0

n → yp[n] = β · λ0n (λ0 ∉ λi)

or = β nL λ0n (λ0 ∈ λi)

y[n] = yc[n]+ yp[n]

Page 81: Signal Processing Columbia

2012-09-12Dan Ellis 39

LCCDE example

Need input: x[n] = 8μ[n] Need initial conditions:

y[-1] = 1, y[-2] = -1

x[n] y[n]+

y[n] + y[n−1] − 6·y[n−2] = x[n]

Page 82: Signal Processing Columbia

2012-09-12Dan Ellis 40

LCCDE example Complementary solution:

α1, α2 are unknown at this point

→roots λ1 = -3, λ2 = 2

y[n] + y[n−1] − 6y[n−2] = 0;  try y[n] = λⁿ

λ^{n−2}·(λ² + λ − 6) = 0  →  (λ + 3)(λ − 2) = 0

y_c[n] = α₁·(−3)ⁿ + α₂·(2)ⁿ

Page 83: Signal Processing Columbia

2012-09-12Dan Ellis 41

LCCDE example Particular solution: Input x[n] is constant = 8μ[n] assume yp[n] = β, substitute in:

(‘large’ n)

y[n] + y[n−1] − 6y[n−2] = x[n]:  β + β − 6β = 8µ[n]  →  −4β = 8  →  β = −2

Page 84: Signal Processing Columbia

2012-09-12Dan Ellis 42

Total solution

Solve for unknown αis by substituting initial conditions into DE at n = 0, 1, ...

n = 0

LCCDE example

from ICs

y[n] = y_c[n] + y_p[n] = α₁·(−3)ⁿ + α₂·(2)ⁿ + β

y[n] + y[n−1] − 6y[n−2] = x[n]

y[0] + y[−1] − 6y[−2] = x[0]:  (α₁ + α₂ + β) + 1 + 6 = 8  →  α₁ + α₂ = 3

Page 85: Signal Processing Columbia

2012-09-12Dan Ellis 43

LCCDE example n = 1

solve: α1 = -1.8, α2 = 4.8 Hence, system output:

Don’t find αis by solving with ICs at n = -1,-2 (ICs may not reflect natural modes;

Mitra example 2.37/38 is wrong)

y[1] + y[0] − 6y[−1] = x[1]

(−3α₁ + 2α₂ + β) + (α₁ + α₂ + β) − 6 = 8  →  −2α₁ + 3α₂ = 18

y[n] = −1.8·(−3)ⁿ + 4.8·(2)ⁿ − 2,   n ≥ 0

M
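A quick MATLAB check of this closed form against direct recursion (a hypothetical helper loop, not from the slides; y(1) and y(2) hold the initial conditions):

Nn = 10;
y = zeros(1, Nn+2);  y(1) = -1;  y(2) = 1;       % y[-2] = -1, y[-1] = 1
x = 8*ones(1, Nn);                               % x[n] = 8*mu[n]
for n = 1:Nn
    y(n+2) = -y(n+1) + 6*y(n) + x(n);            % y[n] = -y[n-1] + 6*y[n-2] + x[n]
end
n = 0:Nn-1;
max(abs(y(3:end) - (-1.8*(-3).^n + 4.8*2.^n - 2)))   % ~0 up to round-off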

Page 86: Signal Processing Columbia

2012-09-12Dan Ellis 44

LCCDE solving summary Difference Equation (DE):

Ay[n] + By[n-1] + ... = Cx[n] + Dx[n-1] + ...Initial Conditions (ICs): y[-1] = ...

DE RHS = 0 with y[n]=λn → roots λigives complementary soln

Particular soln: yp[n] ~ x[n]solve for βλ0

n “at large n” αis by substituting DE at n = 0, 1, ...

ICs for y[-1], y[-2]; yt=yc+yp for y[0], y[1]

yc[n] = iin

Page 87: Signal Processing Columbia

2012-09-12Dan Ellis 45

LCCDEs: zero input/zero state Alternative approach to solving

LCCDEs is to solve two subproblems: yzi[n], response with zero input (just ICs) yzs[n], response with zero state (just x[n])

Because of linearity, y[n] = yzi[n]+yzs[n] Both subproblems are ‘real’ But, have to solve for αis twice

(then sum them)

Page 88: Signal Processing Columbia

2012-09-12Dan Ellis 46

Impulse response of LCCDEs

Impulse response:

i.e. solve with x[n] = δ[n] → y[n] = h[n](zero ICs)

With x[n] = δ[n], ‘form’ of yp[n] = βδ[n]

→ solve y[n] for n = 0,1, 2... to find αis

δ[n] LCCDE h[n]

Page 89: Signal Processing Columbia

2012-09-12Dan Ellis 47

y[0]+ y[1] 6y[2] = x[0]

LCCDE IR example e.g.

(from before); x[n] = δ[n]; y[n] = 0 for n<0 n = 0:

n = 1: n = 2:

thus

1

n ≥ 0Infinite length

y[n]+ y[n 1] 6y[n 2] = x[n]

yc[n] =1 3( )n +2 2( )n

h[n] = 0.6·(−3)ⁿ + 0.4·(2)ⁿ

yp[n] = βδ[n]

⇒ α1 + α2 + β = 1

⇒ α1 = 0.6, α2 = 0.4, β = 0

α1(–3) + α2(2) + 1 = 0α1(9) + α2(4) – 1 – 6 = 0
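The same impulse response can be checked in MATLAB with filter (coefficient ordering follows the LCCDE y[n] + y[n−1] − 6y[n−2] = x[n]):

d = [1 1 -6];                           % y[n] + y[n-1] - 6 y[n-2] ...
p = 1;                                  % ... = x[n]
h = filter(p, d, [1 zeros(1, 7)]);      % impulse response samples h[0..7]
n = 0:7;
max(abs(h - (0.6*(-3).^n + 0.4*2.^n)))  % ~0: matches the closed form above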

Page 90: Signal Processing Columbia

2012-09-12Dan Ellis 48

System property: Stability Certain systems can be unstable e.g.

Output grows without limit in some conditions

x[n] y[n]+z-12

n

y[n]

-1 1 2 3 4

...

Page 91: Signal Processing Columbia

2012-09-12Dan Ellis 49

Stability Several definitions for stability; we use

Bounded-input, bounded-output (BIBO) stable

For every bounded input output is also subject to a finite bound,

x[n] < Bx n

y[n] < By n

Page 92: Signal Processing Columbia

2012-09-12Dan Ellis 50

Stability example MA filter:

→ BIBO Stable

y[n] = 1M

x[n k]k=0

M 1

y[n] = 1M

x[n k]k=0

M 1

1M

x[n k]k=0

M 1

1MM Bx By

Page 93: Signal Processing Columbia

2012-09-12Dan Ellis 51

Stability & LCCDEs LCCDE output is of form:

αs and βs depend on input & ICs,but to be bounded for any input we need |λ| < 1

y[n] =11n +22

n + ...+ 0n + ...

Page 94: Signal Processing Columbia

2012-09-12Dan Ellis 52

4. Correlation Correlation ~ identifies similarity

between sequences:

Note:

Crosscorrelation

of x against y “lag”

call m = n – ℓ

M

rxy[] =

n=x[n]y[n ]

ryx[] =

n=y[n]x[n ]

=

m=y[m + ]x[m] = rxy[]

Page 95: Signal Processing Columbia

2012-09-12Dan Ellis 53

Correlation and convolution Correlation:

Convolution:

Hence:

Correlation may be calculated by convolving with time-reversed sequence

r_xy[n] = Σ_{k=−∞}^{∞} x[k]·y[k−n]

x[n] ∗ y[n] = Σ_{k=−∞}^{∞} x[k]·y[n−k]

r_xy[n] = x[n] ∗ y[−n]
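A MATLAB sketch of this flip-and-convolve identity (sequence values are arbitrary examples; lags assume both sequences start at n = 0):

x = [1 2 3 0 -1];
y = [2 1 0 1];
rxy = conv(x, fliplr(y));               % r_xy via convolution with time-reversed y
lags = -(length(y)-1) : (length(x)-1);  % corresponding lag axis
% (xcorr(x, y) from the Signal Processing Toolbox computes the same correlation;
%  its lag indexing differs slightly when the lengths are unequal)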

Page 96: Signal Processing Columbia

2012-09-12Dan Ellis 54

Autocorrelation Autocorrelation (AC) is correlation of

signal with itself:

Note: Energy of sequence x[n]

rxx[] = x[n]x[n ]n=

= rxx[]

rxx[0] = x 2[n]n=

= x

Page 97: Signal Processing Columbia

2012-09-12Dan Ellis 55

Correlation maxima Note:

Similarly:

From geometry,

when x//y, cosθ = 1, else cosθ < 1

angle betweenx and y

rxx[] rxx[0]rxx[]rxx[0]

1

rxy[] xy rxy[]

rxx[0]ryy[0]1

xy = xi yii = x y cos

xi2

Page 98: Signal Processing Columbia

2012-09-12Dan Ellis 56

AC of a periodic sequence Sequence of period N: Calculate AC over a finite window:

if M >> N

˜ x [n] = ˜ x [n + N]

r˜ x x [] = 12M +1

˜ x [n] ˜ x [n ]n=M

M

= 1N

˜ x [n] ˜ x [n ]n=0

N1

Page 99: Signal Processing Columbia

2012-09-12Dan Ellis 57

AC of a periodic sequence

i.e AC of periodic sequence is periodic

Average energy persample or Power of x

r˜ x x [0] = 1N

˜ x 2[n]n=0

N1

= P˜ x

r˜ x x [+ N] = 1N

˜ x [n] ˜ x [n N ]n=0

N1

= r˜ x x []

Page 100: Signal Processing Columbia

2012-09-12Dan Ellis 58

What correlations look like AC of any x[n]

AC of periodic

Cross correlation

r˜ x x []

rxx[]

rxy[]

Page 101: Signal Processing Columbia

2012-09-12Dan Ellis 59

What correlation looks like

Page 102: Signal Processing Columbia

2012-09-12Dan Ellis

Correlation in action Close mic vs.

video camera mic

60

Short-timecross-correlation

Page 103: Signal Processing Columbia

2012-09-24Dan Ellis 1

ELEN E4810: Digital Signal ProcessingTopic 3: Fourier domain

1. The Fourier domain2. Discrete-Time Fourier Transform (DTFT)3. Discrete Fourier Transform (DFT)4. Convolution with the DFT

Page 104: Signal Processing Columbia

2012-09-24Dan Ellis 2

1. The Fourier Transform Basic observation (continuous time):

A periodic signal can be decomposed into sinusoids at integer multiples of the fundamental frequency

i.e. if we can approach with

Harmonicsof the

fundamental

˜ x (t) = ˜ x (t +T )

˜ x

x(t) M

k=0

ak cos

2k

Tt + k

Page 105: Signal Processing Columbia

2012-09-24Dan Ellis

1.5 1 0.5 0 0.5 1 1.5 1

0.5

0

0.5

1

3

Fourier Series For a square wave,

i.e.

M

M

k=0

ak cos

2k

Tt + k

x(t) = cos

2

Tt

1

3cos

2

T3t

+

15

cos

2

T5t

. . .

k = 0; ak =

(1)

k12 1

k k = 1, 3, 5, . . .

0 otherwise

Page 106: Signal Processing Columbia

2012-09-24Dan Ellis 4

Fourier domain x is equivalently described

by its Fourier Series parameters:

Complex form:

Negative ak is equivalent to phase of º

k1 2 3 5 6 74

ak1.0

k1 2 3 5 6 74

¡kπ

ak = (1)k12

1k

k = 1, 3, 5, . . .

x(t) M

k=M

ckej 2kT t

Page 107: Signal Processing Columbia

2012-09-24Dan Ellis 5

Fourier analysis How to find ?

Inner product with complex sinusoids:

but ej = cos + jsin

|ck|, argck

=1T

x(t) cos(

2k

Tt)dt j

x(t) sin(

2k

Tt)dt

x(t) M

k=M

ckej 2kT t

ck =1T

T/2

T/2x(t)ej 2k

T tdt

Page 108: Signal Processing Columbia

2012-09-24Dan Ellis 6

Fourier analysis Consider

.. so ck should = 0 except k = ±l

Then

ck =1T

x t( ) cos 2ktT dt j x t( ) sin 2ktT dt( )

=1T

cos 2ltT cos 2ktT dt j cos 2ltT sin 2ktT dt( )

0 even·odd

x(t) = cos

l2

Tt

x(t) M

k=M

ckej 2kT t

Page 109: Signal Processing Columbia

2012-09-24Dan Ellis 7

Fourier analysis Works if k, l are positive integers,∴

(say T=2π)

-1 -0.8 -0.6 -0.4 -0.2 0 0.2 0.4 0.6 0.8 1

-1

-0.5

0

0.5

1

cos(1•t)

cos(2•t)

t / /

= 14

cos(k + l)t + cos(k l)tdt

= 14

sin(k+l)t

k+l + sin(kl)tkl

= 12 (sinc(k + l) + sinc(k l))

12

cos(kt) · cos(lt)dt =

1 k = ±l

0 otherwise

Page 110: Signal Processing Columbia

2012-09-24Dan Ellis 8

sinc

= 1 when x = 0= 0 when x = r·º, r ≠ 0, r = ±1, ±2, ±3,...

sincx = sin xx

1.0

/ 2/ 3/ 4/<4/ <3/ <2/ </sin(x)

sin(x)/x

y=x

x

Page 111: Signal Processing Columbia

2012-09-24Dan Ellis 9

Fourier Analysis

Thus,

because real & imag sinusoids in pick out the corresponding sinusoidal components linearly combined

in

ck =1T

T/2

T/2x(t)ej 2k

T tdt

ej 2kT t

x(t) =M

k=M

ckej 2kT t

Page 112: Signal Processing Columbia

2012-09-24Dan Ellis 10

Fourier Transform Fourier series for periodic signals

extends naturally to Fourier Transform for any (CT) signal (not just periodic):

Discrete index k → continuous freq. Ω

Inverse FourierTransform (IFT)

FourierTransform (FT)

x(t) =12

X(j)ejtd

X(j) =

x(t)ejtdt

Page 113: Signal Processing Columbia

2012-09-24Dan Ellis 11

Fourier Transform Mapping between two continuous

functions:

2π ambiguity

0 0.002 0.004 0.006 0.008time / sec

level / dB

-0.01

0

0.01

0.02x(t)

0 2000 4000 6000 8000freq / Hz

-80

-60

-40

-20

0|X(1)|

0 2000 4000 6000 8000freq / Hz

0

//2

<//2

/

</

argX(1)

Page 114: Signal Processing Columbia

2012-09-24Dan Ellis 12

Fourier Transform of a sine Assume Now, since

...we know ...where ±(x) is the Dirac delta function

(continuous time) i.e.

→ ↔

f(x)xx0

δ(x-x0)

x(t) = e j0t

x x0( ) f x( )dx = f x0( )

x(t) = Ae j0t

X() = A(0 )

x(t) =1

2p

Z •

•X(W)e jWt

dW

X(W) = 2pd(WW0)

Page 115: Signal Processing Columbia

2012-09-24Dan Ellis 13

Fourier TransformsTime Frequency

Fourier Series (FS)

Continuous periodic x(t)

Discrete infinite ck

Fourier Transform (FT)

Continuous infinite x(t)

Continuous infinite X(Ω)

Discrete-Time FT (DTFT)

Discrete infinite x[n]

Continuous periodic X(ejω)

Discrete FT (DFT)

Discrete finite/pdc x[n]

Discrete finite/pdc X[k]

~

~

Page 116: Signal Processing Columbia

2012-09-24Dan Ellis 14

2. Discrete Time FT (DTFT) FT defined for discrete sequences:

Summation (not integral) Discrete (normalized)

frequency variable !

Argument is ej!, not j!

DTFT

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n]·e^{−jωn}

Page 117: Signal Processing Columbia

2012-09-24Dan Ellis 15

DTFT example e.g. x[n] = αⁿ·µ[n], |α| < 1

X(e^{jω}) = Σ_{n=−∞}^{∞} αⁿµ[n]·e^{−jωn} = Σ_{n=0}^{∞} (α·e^{−jω})ⁿ = 1 / (1 − α·e^{−jω})

(geometric series: S = Σ_{n=0}^{∞} cⁿ;  S − cS = c⁰ = 1  ⇒  S = 1/(1−c),  |c| < 1)

(figure: vector 1 − α·e^{−jω} in the complex plane; |X(e^{jω})| and arg X(e^{jω}) plotted vs ω, both periodic with period 2π)
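This DTFT can be checked numerically in MATLAB: freqz evaluates the closed form 1/(1 − α·e^{−jω}) on a frequency grid, and a directly truncated sum agrees closely for |α| < 1 (α = 0.8 is just an example value):

alpha = 0.8;
w = linspace(0, pi, 512);
X = freqz(1, [1 -alpha], w);            % 1/(1 - alpha*exp(-j*w))
n = 0:199;
Xsum = (alpha.^n) * exp(-1j * n' * w);  % truncated DTFT sum over n = 0..199
max(abs(X(:) - Xsum(:)))                % tiny: truncation error only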

Page 118: Signal Processing Columbia

2012-09-24Dan Ellis 16

Periodicity of X(ej!) X(ej!) has periodicity 2º in ! :

Phase ambiguity of ej! makes it implicit

X(e^{j(ω+2π)}) = Σ_n x[n]·e^{−j(ω+2π)n} = Σ_n x[n]·e^{−jωn}·e^{−j2πn} = X(e^{jω})

0 /

|X(ejt)|

t

2/ 3/ 4/ 5/

1

2

3 /argX(ejt)

t

</

/ 2/ 3/ 4/ 5/

Page 119: Signal Processing Columbia

2012-09-24Dan Ellis 17

Inverse DTFT (IDTFT) Same basic form as other IFTs:

Note: continuous, periodic X(ej!) discrete, infinite x[n] ...

IDTFT is actually forward Fourier Series (except for sign of !)

IDTFT

x[n] = (1/2π) ∫_{−π}^{π} X(e^{jω})·e^{jωn} dω

Page 120: Signal Processing Columbia

2012-09-24Dan Ellis 18

IDTFT Verify by substituting in DTFT:

= 0 unlessn = l

i.e. = δ[n-l]

x[n] = 12

X(e j )e jnd

= 12

x[l]e jll( )e jnd

= x[l]l

12

e j (nl )d

= x[l]sinc

l (n l)= x[n]

Page 121: Signal Processing Columbia

2012-09-24Dan Ellis 19

sinc again

Same as ∫cos imag jsin part cancels∴

12

ej(nl)dw =

12

ej(nl)

j(n l)

=12

ej(nl) ej(nl)

j(n l)

=12

2j sin(n l)

j(n l)

= sinc (n l)

Page 122: Signal Processing Columbia

2012-09-24Dan Ellis 20

x[n] = ±[n] ⇒

i.e. x[n] X(ej!)

±[n] ↔ 1

DTFTs of simple sequences

(for all !)

n-2 2 31-1-3 !-º º

x[n] X(ejω)

X(e j ) = x[n]e jnn=

= e j 0 =1

Page 123: Signal Processing Columbia

2012-09-24Dan Ellis 21

DTFTs of simple sequences :

⇒ over -º < ! < º but X(ej!) must be periodic in ! ⇒

If !0 = 0 then x[n] = 1 ∀ n so

IDTFT

x[n] = e j0n

x[n] = 12

X(e j )e jnd

X(e j ) = 2 ( 0 )

e j0n 2 ( 0 2k)k

1 2 ( 2k)k

X(ej!)

0 2º 4º

Page 124: Signal Processing Columbia

2012-09-24Dan Ellis 22

µ[n] 11 e j

+ ( + 2k)k

DTFTs of simple sequences From before:

μ[n] tricky - not finite

( |α | < 1)

DTFT of 1/2

nµ[n] 11e j

Page 125: Signal Processing Columbia

2012-09-24Dan Ellis 23

DTFT properties Linear:

Time shift:

Frequency shift:‘delay’

in frequency

g[n]+ h[n] G(e j )+ H (e j )

g[n n0 ] e jn0G(e j )

e j0ng[n] G e j 0( )( )

Page 126: Signal Processing Columbia

2012-09-24Dan Ellis 24

DTFT example x[n] = ±[n] + Æn μ[n-1] ↔ ?

= ±[n] + Æ(Æn-1 μ[n-1])

⇒ x[n] = Æn μ[n]

X(e j ) =1+ e j1 11e j

=1+ e j

1e j=1e

j +e j

1e j

= 11e j

Page 127: Signal Processing Columbia

2012-09-24Dan Ellis 25

DTFT symmetry If x[n] ↔ X(ej!) then... x[-n] ↔ X(e-j!) x*[n] ↔ X*(e-j!)

Rex[n] ↔ XCS(ej!)

jImx[n] ↔ XCA(ej!)

xcs[n] ↔ ReX(ej!) xca[n] ↔ jImX(ej!)

conjugate symmetry cancels Im parts on IDTFT

from summation

(e-j!)* = ej!

= 12X e j( ) + X * e j( )[ ]

= 12X e j( ) X * e j( )[ ]

X(e j ) = x[n]e jnn=

Page 128: Signal Processing Columbia

2012-09-24Dan Ellis

When x[n] is pure real, ⇒ X(ej!) = X*(e-j!)xcs[n] ≡ xev[n] = xev[-n] ↔ XR(ej!) = XR(e-j!)xca[n] ≡ xod[n] = -xod[-n] ↔ XI(ej!) = -XI(e-j!)

26

DTFT of real x[n]

x[n] real, even ↔ X(ej!) even, real

Real

n

Imag

xre[n]

xim[n]

XCS

Page 129: Signal Processing Columbia

2012-09-24Dan Ellis 27

DTFT and convolution Convolution:

Convolution becomes

multiplication

x[n] = g[n] h[n]

X(e j ) = g[n]h[n]( )e jnn=

= g[k]h[n k]k( )n e jn

= g[k]e jk h[n k]e j (nk )n( )k=G(e j ) H (e j )

g[n] h[n] G(e j )H (e j )

Page 130: Signal Processing Columbia

2012-09-24Dan Ellis 28

Convolution with DTFT Since we can calculate a convolution by:

finding DTFTs of g, h → G, H multiply them: G·H IDTFT of product is result,

g[n]

h[n]y[n]

DTFT

DTFTIDTFT

G(e j )

g[n] h[n] G(e j )H (e j )

H (e j )

Y (e j )

M

g[n] h[n]

Page 131: Signal Processing Columbia

2012-09-24Dan Ellis 29

x[n] = Æn·μ[n] ⇒ h[n] = ±[n] - Ʊ[n-1]

⇒ y[n] = x[n] ∗ h[n] ⇒

⇒ y[n] = ±[n] i.e. ...

DTFT convolution example

X(e j ) = 11e j

H (e j ) =1 e j1( ) 1

Y (e j ) = H (e j )X(e j )

= 11e j

1e j( ) =1

Page 132: Signal Processing Columbia

2012-09-24Dan Ellis 30

DTFT modulation Modulation:

Could solve if g[n] was just sinusoids...

⇒Dual of convolution in time

x[n] = g[n] h[n]

X(e j ) = 12

G(e j )e jnd

n h[n]e jn

= 12

G(e j ) h[n]e j ( )nn[ ]d

g[n] h[n] 12

G(e j )H (e j ( ) )d

Page 133: Signal Processing Columbia

2012-09-24Dan Ellis 31

Parseval’s relation “Energy” in time and frequency domains

are equal:

If g = h, then g·g* = |g|2 = energy...

g[n]h*[n]n = 1

2G(e j )H * (e j )d

Page 134: Signal Processing Columbia

2012-09-24Dan Ellis 32

Energy density spectrum Energy of sequence

By Parseval

Define Energy Density Spectrum (EDS)

g = g[n]n 2

g = 12

G(e j ) 2d

Sgg (ej ) = G(e j ) 2

Page 135: Signal Processing Columbia

2012-09-24Dan Ellis 33

EDS and autocorrelation Autocorrelation of g[n]:

⇒ If g[n] is real, G(e-j!) = G*(ej!), so

Mag-sq of spectrum is DTFT of autoco

no phaseinfo.

rgg[] = g[n]g[n ]n=

= g[n] g[n]

DTFT rgg[] =G(e j )G(e j )

DTFT rgg[] = G(e j ) 2 = Sgg (ej )

Page 136: Signal Processing Columbia

2012-09-24Dan Ellis 34

3. Discrete FT (DFT)

A finite or periodic sequence has only N unique values, x[n] for 0 ≤ n < N

Spectrum is completely defined by N distinct frequency samples

Divide 0..2º into N equal steps, !k = 2ºk/N

Discrete FT (DFT)

Discrete finite/pdc x[n]

Discrete finite/pdc X[k]

Page 137: Signal Processing Columbia

2012-09-24Dan Ellis 35

Uniform sampling of DTFT spectrum:

DFT:

where i.e. 1/Nth of a revolution

DFT and IDFT

X[k] = X(e^{jω})|_{ω=2πk/N} = Σ_{n=0}^{N−1} x[n]·e^{−j(2πk/N)·n}

i.e. X[k] = Σ_{n=0}^{N−1} x[n]·W_N^{kn}   with   W_N = e^{−j2π/N}

Page 138: Signal Processing Columbia

2012-09-24Dan Ellis 36

IDFT Inverse DFT IDFT Check:

Sum of complete setof rotated vectors= 0 if l ≠ n; = N if l = n

(figure: W_N, W_N², … as unit vectors stepping clockwise around the unit circle)

x[n] = (1/N) Σ_{k=0}^{N−1} X[k]·W_N^{−nk}

x[n] = 1N

x[l]WNkl

l( )WNnk

k

= 1N

x[l] WNk ( ln )

k=0

N1

l=0

N1

= x[n]

0 ≤ n < N

or finite geometric series= (1-WN

lN)/(1-WNl)

Page 139: Signal Processing Columbia

2012-09-24Dan Ellis 37

DFT examples Finite impulse

⇒ Periodic sinusoid:

x[n] = 1 n = 00 n =1..N 1

X[k] = x[n]WNkn

n=0

N1 =WN0 =1 k

X[k] = 12 WN

rn +WNrn( )n=0

N1 WNkn

= N /2 k = r,k = N r0 o.w.

(0 ≤ k < N)

x[n] = cos

2rn

N

=

12

Wrn

N + W rnN

(r Z)

Page 140: Signal Processing Columbia

2012-09-24Dan Ellis 38

DFT: Matrix form as a matrix multiply:

i.e.

X[k] = x[n] WNkn

n=0

N1

X = DN x

X[0]X[1]X[2]

...X[N 1]

=

1 1 1 · · · 11 W 1

N W 2N · · · W (N1)

N

1 W 2N W 4

N · · · W 2(N1)N

......

.... . .

...1 W (N1)

N W 2(N1)N · · · W (N1)2

N

x[0]x[1]x[2]

...x[N 1]
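A MATLAB sketch of this matrix view (D_N built explicitly here; for small N it is equivalent to the built-in dftmtx):

N = 8;
n = 0:N-1;
WN = exp(-1j*2*pi/N);
DN = WN .^ (n.' * n);        % N x N DFT matrix, entry (k,n) = WN^(k*n)
x = randn(N, 1);
X = DN * x;                  % DFT as a matrix multiply
max(abs(X - fft(x)))         % ~1e-15: agrees with the built-in FFT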

Page 141: Signal Processing Columbia

2012-09-24Dan Ellis 39

Matrix IDFT If then i.e. inverse DFT is also just a matrix,

=1/NDN*

X = DN x

x = DN1 X

D1N =

1N

1 1 1 · · · 11 W1

N W2N · · · W(N1)

N

1 W2N W4

N · · · W2(N1)N

......

.... . .

...1 W(N1)

N W2(N1)N · · · W(N1)2

N

Page 142: Signal Processing Columbia

2012-09-24Dan Ellis 40

DFT and MATLAB MATLAB is concerned with sequences

not continuous functions like X(ej!) Instead, we use the DFT to sample

X(ej!) on an (arbitrarily-fine) grid: X = freqz(x,1,w); samples the DTFT

of sequence x at angular frequencies in w X = fft(x); calculates the N-point DFT

of an N-point sequence x

M
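For instance (a small check, not from the slides), the DFT values line up with freqz samples of the DTFT at ω_k = 2πk/N:

x = [1 3 -2 4 0 1];
N = length(x);
wk = 2*pi*(0:N-1)/N;
Xdft = fft(x);
Xdtft = freqz(x, 1, wk);        % DTFT of x sampled at the DFT frequencies
max(abs(Xdft(:) - Xdtft(:)))    % ~1e-15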

Page 143: Signal Processing Columbia

2012-09-24Dan Ellis 41

DFT and DTFT

DFT ‘samples’ DTFT at discrete freqs:

DTFT

DFT

• continuous freq !• infinite x[n], -∞<n<∞

• discrete freq k=N!/2º

• finite x[n], 0≤n<N

!X(ej!)X[k]

k=1...

X(e j ) = x[n]e jnn=

X[k] = x[n]WNkn

n=0

N1

X[k] = X(e j )=2k

N

Page 144: Signal Processing Columbia

2012-09-24Dan Ellis 42

DTFT from DFT N-point DFT completely specifies the

continuous DTFT of the finite sequence

“periodicsinc”

interpolation

X(e j ) = 1N

X[k]WNkn

k=0

N1

e jn

n=0

N1

= 1N

X[k]k=0

N1

e j 2kN( )n

n=0

N1

= 1N

X[k]k=0

N1

sin Nk2

sin k2

e j(N1)2 k

k = 2kN

Page 145: Signal Processing Columbia

2012-09-24Dan Ellis 43

Periodic sinc

= N when Δ!k = 0; = (-)N when Δ!k/2 = º = 0 when Δ!k/2 = r·º/N, r = ±1, ± 2, ...other values in-between...

e jkn

n=0

N 1

=1 e jNk

1 e jk

=e jNk /2

e jk /2e jNk /2 e jNk /2

e jk /2 e jk /2

= e j(N 1)2 k sin N

k2

sin k2pure phase

pure real

Page 146: Signal Processing Columbia

2012-09-24Dan Ellis 44

Periodic sinc

DFT→ DTFT= interpolation

by periodicsinc

X[k]→X(ej!)

sin Nxsin x

(N = 8)

2//sinx

sinNx

sinNx/sinx

x</0

N

-N

-1 0 k = 1 t = 2//N

k = 3 t = 6//N

k = 4 t = 8//N

-0.5

0

0.5

1

1.5

X(ejt0) = Y X[k]· sin N 6tk/2N sin 6tk/2

X[3]· sin N 6t3/2N sin 6t3/2

X[k] = X(ej2/k/N)

t0

freq

Page 147: Signal Processing Columbia

2012-09-24Dan Ellis 45

DFT from overlength DTFT If x[n] has more than N points, can still

form

IDFT of X[k] will give N point

How does relate to ?

X[k] = X(e j )=2k

N

x[n]

x[n] x[n]

Page 148: Signal Processing Columbia

2012-09-24Dan Ellis 46

DFT from overlength DTFT

=1 for n-l = rN, r∈I= 0 otherwise

all values shifted by exact multiples of N ptsto lie in 0 ≤ n < N

DTFTx[n] sampleX(ejω)

IDFTX[k]0 ≤ n < N-A ≤ n < B

˜ x [n]

0 ≤ n < N

x[n] =1N

N1

k=0

=x[]W k

N

Wnk

N

=

=x[]

1N

N1

k=0

W k(n)N

x[n] =

r=x[n rN ]

Page 149: Signal Processing Columbia

2012-09-24Dan Ellis 47

DFT from DTFT example If x[n] = 8, 5, 4, 3, 2, 2, 1, 1 (8 point) We form X[k] for k = 0, 1, 2, 3

by sampling X(ej!) at ! = 0, º/2, º, 3º/2 IDFT of X[k] gives 4 pt Overlap only for r = -1: (N = 4)

x[n] =

8 5 4 3+ + + +2 2 2 1

= 10 7 5 4

x[n] =

r=x[n rN ]

Page 150: Signal Processing Columbia

2012-09-24Dan Ellis 48

DFT from DTFT example x[n]

x[n+N] (r = -1)

is the time aliased or ‘folded down’

version of x[n].

n-1 1 2 3 4 5 6 7 8

n-1 1 2 3 4 5-2-3-4-5

n1 2 3

˜ x [n]

˜ x [n]

Page 151: Signal Processing Columbia

2012-09-24Dan Ellis 49

Properties: Circular time shift DFT properties mirror DTFT, with twists: Time shift must stay within N-pt ‘window’

Modulo-N indexing keeps index between 0 and N-1:

0 ≤ n0 < N

g[n n0N ] W kn0N G[k]

g[n n0N ] =

g[n n0] n n0

g[N + n n0] n < n0

Page 152: Signal Processing Columbia

2012-09-24Dan Ellis 50

Circular time shift Points shifted out to the right don’t

disappear – they come in from the left

Like a ‘barrel shifter’:

5-pt sequence

‘delay’ by 2g[n]

1 2 3 4 n

g[<n-2>5]

1 2 3 4 n

origin pointer

Page 153: Signal Processing Columbia

2012-09-24Dan Ellis 51

Circular time reversal Time reversal is tricky in ‘modulo-N’

indexing - not reversing the sequence:

Zero point stays fixed; remainder flips

1 2 3 4 n5 6 7 8 9 10 11-7 -6 -5 -4 -3 -2 -1

˜ x n[ ]5-pt sequence made periodic

Time-reversedperiodic sequence

1 2 3 4 n5 6 7 8 9 10 11-7 -6 -5 -4 -3 -2 -1

˜ x n N[ ]

Page 154: Signal Processing Columbia

2012-09-24Dan Ellis 52

Duality DFT and IDFT are very similar

both map an N-pt vector to an N-pt vector

Duality: if then

i.e. if you treat DFT sequence as a time sequence, result is almost symmetric

Circulartime reversal

g n[ ] G k[ ]

G n[ ] N g k N[ ]

Page 155: Signal Processing Columbia

2012-09-24Dan Ellis 53

4. Convolution with the DFT IDTFT of product of DTFTs of two N-pt

sequences is their 2N-1 pt convolution IDFT of the product of two N-pt DFTs

can only give N points! Equivalent of 2N-1 pt result time aliased:

i.e. must be, because G[k]H[k] are exact

samples of G(ej!)H(ej!) This is known as circular convolution

yc n[ ] = yl[n+ rN ]r=

(0 ≤ n < N)

Page 156: Signal Processing Columbia

2012-09-24Dan Ellis 54

Circular convolution Can also do entire convolution with

modulo-N indexing Hence, Circular Convolution:

Written as

g[n] ⊛_N h[n] = Σ_{m=0}^{N−1} g[m]·h[⟨n−m⟩_N]   ↔   G[k]·H[k]

Page 157: Signal Processing Columbia

2012-09-24Dan Ellis 55

Circular convolution example 4 pt sequences: g[n]=1 2 0 1

n1 2 3

h[n]=2 2 1 0

n1 2 3

1

2

0

1

nh[<n - 0>4]1 2 3

nh[<n - 1>4]1 2 3

nh[<n - 2>4]1 2 3

nh[<n - 3>4]1 2 3

n

g[n] h[n]=4 7 5 4

1 2 3

4

check: g[n] h[n]=2 6 5 4 2 1 0

*

g m[ ]h n m N[ ]m=0

N1
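This example can be reproduced in MATLAB; circular convolution is the IDFT of the product of DFTs (the toolbox function cconv does the same thing):

g = [1 2 0 1];
h = [2 2 1 0];
yc = real(ifft(fft(g) .* fft(h)))   % [4 7 5 4]: circular convolution, N = 4
yl = conv(g, h)                     % [2 6 5 4 2 1 0]: linear convolution
yl(1:4) + [yl(5:7) 0]               % time-aliasing the linear result gives [4 7 5 4]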

Page 158: Signal Processing Columbia

2012-09-24Dan Ellis 56

DFT properties summary Circular convolution

Modulation

Duality

Parseval

G n[ ] N g k N[ ]

g m[ ]h n m N[ ]m=0

N1 G[k]H[k]

g n[ ] h n[ ] 1N G[m]H[ k m N ]m=0

N1

x n[ ] 2n=0

N1 = 1N X k[ ] 2

k=0

N1

Page 159: Signal Processing Columbia

2012-09-24Dan Ellis 57

Linear convolution w/ the DFT DFT → fast circular convolution .. but we need linear convolution Circular conv. is time-aliased linear

conv.; can aliasing be avoided? e.g. convolving L-pt g[n] with M-pt h[n]:

y[n] = g[n] h[n] has L+M-1 nonzero pts Set DFT size N ≥ L+M-1 → no aliasing

*

Page 160: Signal Processing Columbia

2012-09-24Dan Ellis 58

Linear convolution w/ the DFT Procedure (N = L + M - 1):

pad L-pt g[n] with (at least) M-1 zeros→ N-pt DFT G[k], k = 0..N-1

pad M-pt h[n] with (at least) L-1 zeros→ N-pt DFT H[k], k = 0..N-1

Y[k] = G[k]·H[k], k = 0..N-1 IDFTY[k]

n

g[n]

L

n

h[n]

M

n

yc[n]

N

= yL n+ rN[ ]r=

= yL n[ ] (0 ≤ n < N)
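A minimal MATLAB sketch of this zero-pad-and-multiply procedure (sequence values are arbitrary examples):

g = [1 2 0 1 3];               % L = 5 points
h = [2 -1 1];                  % M = 3 points
N = length(g) + length(h) - 1; % N = L + M - 1 = 7: no time aliasing
y = real(ifft(fft(g, N) .* fft(h, N)));   % fft(x, N) zero-pads x to length N
max(abs(y - conv(g, h)))       % ~1e-15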

Page 161: Signal Processing Columbia

2012-09-24Dan Ellis 59

Overlap-Add convolution Very long g[n] → break up into

segments, convolve piecewise, overlap → bound size of DFT, processing delay

Make

Called Overlap-Add (OLA) convolution...

g_i[n] = g[n] for i·N ≤ n < (i+1)·N;  = 0 otherwise

g[n] = Σ_i g_i[n]   ⇒   h[n] ∗ g[n] = Σ_i h[n] ∗ g_i[n]
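MATLAB's fftfilt implements FFT-based (overlap-add) FIR filtering; below is a hand-rolled sketch of the same idea, with the filter, data, and block length chosen purely for illustration (fir1 is a Signal Processing Toolbox function):

h = fir1(31, 0.2);                        % example FIR filter, 32 taps
g = randn(1, 10000);                      % long input signal
B = 256;                                  % block length
N = B + length(h) - 1;                    % DFT size per block: no aliasing
H = fft(h, N);
nblk = ceil(length(g)/B);
y = zeros(1, nblk*B + length(h) - 1);
for i = 0:nblk-1
    blk = g(i*B+1 : min((i+1)*B, length(g)));
    yi = real(ifft(fft(blk, N) .* H));    % linear convolution of one block with h
    y(i*B+1 : i*B+N) = y(i*B+1 : i*B+N) + yi;   % overlap-add into the output
end
y = y(1 : length(g) + length(h) - 1);
max(abs(y - conv(g, h)))                  % ~1e-12 (round-off only)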

Page 162: Signal Processing Columbia

2012-09-24Dan Ellis 60

Overlap-Add convolution

n

g[n]

n

g0[n]

n

g1[n]

n

g2[n]

N 2N 3N

n

g0[n] h[n]

n

n

h[n]

n

g1[n] h[n]

g2[n] h[n]

nN 2N 3Nh[n] g[n]

valid OLA sum

*

L

L

*

*

*

Page 163: Signal Processing Columbia

2012-10-03Dan Ellis 1

ELEN E4810: Digital Signal ProcessingTopic 4: The Z Transform

1. The Z Transform

2. Inverse Z Transform

Page 164: Signal Processing Columbia

2012-10-03Dan Ellis 2

1. The Z Transform Powerful tool for analyzing & designing

DT systems Generalization of the DTFT:

z is complex... z = ej! → DTFT z = r·ej! →

G(z) = Z{g[n]} = Σ_{n=−∞}^{∞} g[n]·z^{−n}   (Z Transform)

Σ_n g[n]·r^{−n}·e^{−jωn} = DTFT of r^{−n}·g[n]

Page 165: Signal Processing Columbia

2012-10-03Dan Ellis 3

Region of Convergence (ROC) Critical question:

Does summationconverge (to a finite value)?

In general, depends on the value of z → Region of Convergence:

Portion of complex z-planefor which a particular G(z)will converge

G(z) = x[n]znn=

z-plane

Rez

Imz

ROC|z| > λ

λ

Page 166: Signal Processing Columbia

2012-10-03Dan Ellis 4

ROC Example e.g. x[n] = ∏nµ[n]

ß converges only for |∏z-1| < 1i.e. ROC is |z| > |∏|

|∏| < 1 (e.g. 0.8) - finite energy sequence |∏| > 1 (e.g. 1.2) - divergent sequence,

infinite energy, DTFT does not existbut still has ZT when |z| > 1.2 (in ROC)

X(z) = Σ_{n=0}^{∞} λⁿ·z^{−n} = 1/(1 − λz⁻¹)

(see previous slide)

n-1 1 2 3 4-2

Page 167: Signal Processing Columbia

2012-10-03Dan Ellis 5

About ROCs ROCs always defined in terms of |z| → circular regions on z-plane (inside circles/outside circles/rings)

If ROC includes unit circle (|z| = 1), → g[n] has a DTFT (finite energy sequence) z-plane

Rez

Imz

Unit circlelies in ROC

→ DTFT OK

1

Page 168: Signal Processing Columbia

2012-10-03Dan Ellis 6

Another ROC example Anticausal (left-sided) sequence:

Same ZT as ∏nµ[n], different sequence?

x n[ ] = nµ n 1[ ] n-1

1 2 3 4-2-3-4-5

X z( ) = nµ n 1[ ]( )znn= nznn=

1 = mzmm=1

= 1z 11 1z

=1

1 z1

ROC:|λ| > |z|

Page 169: Signal Processing Columbia

2012-10-03Dan Ellis 7

ROC is necessary! To completely define a ZT, you must

specify the ROC:x[n] = ∏nµ[n]

ROC |z| > |∏|x[n] = -∏nµ[-n-1]

ROC |z| < |∏| A single G(z) can describe several

sequences with different ROCs

X(z) = 11 z1

X(z) = 11 z1

n-1

1 2 3 4-2-3-4 z-plane

Re

Im

|λ|

n-1 1 2 3 4-2-3-4

DTFTs?

Page 170: Signal Processing Columbia

2012-10-03Dan Ellis 8

Rational Z-transforms G(z) can be any function;

rational polynomials are important class:

By convention, expressed in terms of z-1 – matches ZT definition

(Reminiscent of LCCDE expression...)

G z( ) =P z( )D z( )

= p0 + p1z1 +…+ pM 1z

(M 1) + pM zM

d0 + d1z1 +…+ dN1z

(N1) + dNzN

Page 171: Signal Processing Columbia

2012-10-03Dan Ellis 9

Factored rational ZTs Numerator, denominator can be

factored:

≥ are roots of numerator→ G(z) = 0 → ≥ are the zeros of G(z)

∏ are roots of denominator→ G(z) = ∞ → ∏ are the poles of G(z)

G z( ) =p0=1

M 1 z1( )

d0=1N 1 z

1( )=zM p0=1

M z ( )zNd0=1

N z ( )

Page 172: Signal Processing Columbia

2012-10-03Dan Ellis 10

Pole-zero diagram Can plot poles and zeros on

complex z-plane:

z-plane

Rez

Imz

1× o

o

o

o

×

×

poles ∏ (cpx conj for real g[n])

zeros ≥

Page 173: Signal Processing Columbia

2012-10-03Dan Ellis 11

Z-plane surface G(z): cpx function of a cpx variable

Can calculate value over entire z-plane

Slice between surface and unit cylinder (|z| = 1 ⇒ z = ej!) is G(ej!), the DTFT

M

ROCnot

shown!!

Page 174: Signal Processing Columbia

2012-10-03Dan Ellis 12

Z-plane and DTFT Unwrapping the cylindrical slice gives

the DTFT:

|G(ej!)|z = ej!

0 º! / rad/samp

Page 175: Signal Processing Columbia

2012-10-03Dan Ellis 13

ZT is Linear

Thus, if

then

G(z) = Z g n[ ] = g[n]znn Z Transform

y[n] =11nµ[n]+22

nµ[n]

Y (z) = 1

1 1z1 + 2

1 2z1

y[n] = Æg[n] + Øh[n]⇒Y(z) = ß(Æg[n]+Øh[n])z-n

= ßÆg[n]z-n + ßØh[n]z-n = ÆG(z)+ØH(z)

ROC:|z|>|λ1|,|λ2|

Linear

Page 176: Signal Processing Columbia

2012-10-03Dan Ellis 14

ZT of LCCDEs LCCDEs have solutions of form:

Hence ZT

Each term ∏in in g[n] corresponds to a

pole ∏i of G(z) ... and vice versa LCCDE sol’ns are right-sided⇒ ROCs are |z| > |∏i|

yc[n] =iinµ n[ ] + ...

Yc z( ) =i

1 i z1 +

(same∏s)

outsidecircles

Page 177: Signal Processing Columbia

2012-10-03Dan Ellis 15

ROCs and sidedness Two sequences have:

Each ZT pole → region in ROC outside or inside |∏| for R/L sided term in g[n] Overall ROC is intersection of each term’s

G(z) = 11 z1

z-plane

Re

Im

|λ|

ROC |z| > |∏| → g[n] = ∏nµ[n]n

ROC |z| < |∏| → g[n] = -∏nµ[-n-1] nLEFT-SIDED

RIGHT-SIDED( |∏| < 1 )

Page 178: Signal Processing Columbia

2012-10-03Dan Ellis 16

G(z) = 11 1z

1 + 11 2z

1

ROC intersections Consider

with |∏1| < 1 , |∏2| > 1 ... Two possible sequences for ∏1 term...

Similarly for ∏2 ...

→ 4 possible g[n] seq’s and ROCs ...

no ROC specified

∏1nµ[n]

n-∏1nµ[-n-1]

n or

n-∏2nµ[-n-1]

n∏2

nµ[n]or

Page 179: Signal Processing Columbia

2012-10-03Dan Ellis 17

ROC intersections: Case 1

ROC: |z| > |∏1| and |z| > |∏2|

ImRe

|λ1| |λ2|

Im

G(z) = 11 1z

1 + 11 2z

1

n

g[n] = ∏1nµ[n] + ∏2

nµ[n]

both right-sided:

Page 180: Signal Processing Columbia

2012-10-03Dan Ellis 18

ROC intersections: Case 2

ImRe

|λ1| |λ2|

Im

G(z) = 11 1z

1 + 11 2z

1

n

g[n] = -∏1nµ[-n-1] - ∏2

nµ[-n-1]both left-sided:

ROC: |z| < |∏1| and |z| < |∏2|

Page 181: Signal Processing Columbia

2012-10-03Dan Ellis 19

ROC intersections: Case 3

ROC: |z| > |∏1| and |z| < |∏2|

ImRe

|λ1| |λ2|

Im

G(z) = 11 1z

1 + 11 2z

1

n

g[n] = ∏1nµ[n] - ∏2

nµ[-n-1] two-sided:

Page 182: Signal Processing Columbia

2012-10-03Dan Ellis 20

ROC intersections: Case 4

ROC: |z| < |∏1| and |z| > |∏2| ?

Re|λ1| |λ2|

Im

G(z) = 11 1z

1 + 11 2z

1

n

g[n] = -∏1nµ[-n-1] + ∏2

nµ[n] two-sided:

no ROC⇒ ...

Page 183: Signal Processing Columbia

2012-10-03Dan Ellis 21

ROC intersections Note: Two-sided exponential

No overlap in ROCs→ ZT does not exist(does not converge for any z)

g n[ ] = n

= nµ n[ ] + nµ n 1[ ]

< n <

ROC|z| > |Æ|

ROC|z| < |Æ|

Re|α|

Im

n

Page 184: Signal Processing Columbia

2012-10-03Dan Ellis 22

Some common Z transforms:

g[n]                G(z)                                               ROC
δ[n]                1                                                  ∀z
µ[n]                1/(1 − z⁻¹)                                        |z| > 1
αⁿ·µ[n]             1/(1 − αz⁻¹)                                       |z| > |α|
rⁿcos(ω₀n)·µ[n]     (1 − r·cos(ω₀)·z⁻¹)/(1 − 2r·cos(ω₀)·z⁻¹ + r²z⁻²)    |z| > r
rⁿsin(ω₀n)·µ[n]     (r·sin(ω₀)·z⁻¹)/(1 − 2r·cos(ω₀)·z⁻¹ + r²z⁻²)        |z| > r

(last two: sum of (r·e^{jω₀})ⁿ and (r·e^{−jω₀})ⁿ terms → poles at z = r·e^{±jω₀}, a “conjugate pole pair”)

Page 185: Signal Processing Columbia

2012-10-03Dan Ellis 23

Z Transform properties g[n] ↔ G(z) w/ROC Rg

Conjugation g*[n] G*(z*) Rg

Time reversal g[-n] G(1/z) 1/Rg

Time shift g[n-n0] z-n0G(z) Rg (0/∞?)

Exp. scaling Æng[n] G(z/Æ) |Æ|Rg

Diff. wrt z ng[n] Rg (0/∞?)

z dG(z)dz

Page 186: Signal Processing Columbia

2012-10-03Dan Ellis 24

Z Transform properties g[n] G(z) ROC

Convolution g[n] h[n] G(z)H(z)

Modulation g[n]h[n]

Parseval:

∗at least

Rg∩Rh

12j G v( )H z

v( )v1dvC

g n[ ]h* n[ ]n=

= 12j G v( )H * 1

v( )v1dvC

at least RgRh

Page 187: Signal Processing Columbia

2012-10-03Dan Ellis 25

ZT Example ; can express as

Hence,

x n[ ] = rn cos 0n( )µ n[ ]

12µ n[ ] re j0( )n + re j0( )n

= v n[ ] + v* n[ ]

v[n] = 1/2µ[n]Æn ; Æ = rej!0

→ V(z) = 1/(2(1- rej!0 z-1))ROC: |z| > r

X z( ) =V z( ) +V * z*( )

= 12

11re j0 z1

+ 11re j0 z1( )

= 1r cos 0( )z1

12r cos 0( )z1+r2z2

Page 188: Signal Processing Columbia

2012-10-03Dan Ellis 26

Another ZT example

y n[ ] = n+1( ) nµ n[ ]= x[n]+ nx[n] where x[n] = Æn µ[n]

X z( ) = 11z1

z dX z( )dz

= z ddz

11z1

=

z1

(1z1)2

Y z( ) = 11z1

+ z1

(1z1)2= 1(1z1)2

repeatedroot - IZT

( |z| > |Æ| )

ROC |z| > |Æ|

Page 189: Signal Processing Columbia

2012-10-03Dan Ellis 27

2. Inverse Z Transform (IZT) Forward z transform was defined as:

3 approaches to inverting G(z) to g[n]: Generalization of inverse DTFT Power series in z (long division) Manipulate into recognizable

pieces (partial fractions)

G(z) = Z g n[ ] = g[n]znn=

the usefulone

Page 190: Signal Processing Columbia

2012-10-03Dan Ellis 28

IZT #1: Generalize IDTFT If then

so

Any closed contour around origin will do Cauchy: g[n] = ß[residues of G(z)zn-1]

z = re j

IDTFT

z = rej! ⇒ d! = dz/jz

Counterclockwise closed contour at |z| = r

within ROC

Re

Im

g[n]rn = 12

G

rej

ejnd

= 12j

C G (z) zn1rndz

G(z) = G(rej) =

g[n]rnejn = DTFTg[n]rn

Page 191: Signal Processing Columbia

2012-10-03Dan Ellis 29

IZT #2: Long division Since if we could express G(z) as a simple

power series G(z) = a + bz-1 + cz-2 ... then can just read off g[n] = a, b, c, ... Typically G(z) is right-sided (causal)

and a rational polynomial

Can expand as power series through long division of polynomials

G(z) = g[n]znn=

G z( ) = P(z)D(z)

Page 192: Signal Processing Columbia

2012-10-03Dan Ellis 30

IZT #2: Long division Procedure:

Express numerator, denominator in descending powers of z (for a causal fn)

Find constant to cancel highest term → first term in result

Subtract & repeat → lower terms in result Just like long division for base-10

numbers

Page 193: Signal Processing Columbia

2012-10-03Dan Ellis 31

IZT #2: Long division e.g.

H z( ) = 1+ 2z1

1+ 0.4z1 0.12z2

1+ 0.4z1 0.12z2

1+ 2z1)

1+1.6z1 0.52z2 + 0.4z3...

1+ 0.4z1 0.12z2

1.6z1 + 0.12z2

1.6z1 + 0.64z2 0.192z3

0.52z2 + 0.192z3...

Result

Page 194: Signal Processing Columbia

2012-10-03Dan Ellis 32

IZT#3: Partial Fractions Basic idea: Rearrange G(z) as sum of

terms recognized as simple ZTs

especially

or sin/cos forms

i.e. given products rearrange to sums

11z1

nµ n[ ]

P(z)1z1( ) 1 z1( )

A1z1

+ B1 z1

+

Page 195: Signal Processing Columbia

2012-10-03Dan Ellis 33

Partial Fractions Note that:

Can do the reverse i.e.go from to

if order of P(z) is less than D(z)

A1z1

+ B1 z1

+ C1z1

=

A 1 z1( ) 1z1( ) + B 1z1( ) 1z1( ) +C 1z1( ) 1 z1( )1z1( ) 1 z1( ) 1z1( )order 3 polynomial

order 2 polynomialu + vz-1 + wz-2

P(z)=1

N (1 z1)

1 z

1=1

Nelse cancelw/ long div.

Page 196: Signal Processing Columbia

2012-10-03Dan Ellis 34

Partial Fractions Procedure:

where i.e. evaluate F(z) at the pole but multiplied by the pole term → dominates = residue of pole

F(z) = P(z)=1

N (1 z1)

= 1 z

1=1

N

= 1 z1( )F z( ) z=

f n[ ] = ( )nµ n[ ]=1

N

order N-1

(cancels term indenominator)

no repeatedpoles!

Page 197: Signal Processing Columbia

2012-10-03Dan Ellis 35

Partial Fractions Example Given (again)

factor:

where:

H z( ) = 1+ 2z1

1+ 0.4z1 0.12z2

= 1+ 2z1

1+ 0.6z1( ) 1 0.2z1( )= 11+ 0.6z1

+ 21 0.2z1

1 = 1+ 0.6z1( )H z( )z=0.6

= 1+ 2z1

1 0.2z1 z=0.6= 1.75

2 = 1+ 2z11+ 0.6z1 z=0.2

= 2.75

Page 198: Signal Processing Columbia

2012-10-03Dan Ellis 36

Partial Fractions Example Hence

If we know ROC |z| > |Æ| i.e. h[n] causal:

= –1.75 1 -0.6 0.36 -0.216 ... +2.75 1 0.2 0.04 0.008 ...

= 1 1.6 -0.52 0.4 ...

H z( ) = 1.751+ 0.6z1

+ 2.751 0.2z1

h n[ ] = 1.75( ) 0.6( )nµ n[ ] + 2.75( ) 0.2( )nµ n[ ]

same aslong division!
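MATLAB's residuez performs this partial-fraction expansion directly (r = residues, p = poles); a quick check against the example above:

b = [1 2];                   % numerator 1 + 2 z^-1
a = [1 0.4 -0.12];           % denominator 1 + 0.4 z^-1 - 0.12 z^-2
[r, p, k] = residuez(b, a)   % residues -1.75, 2.75 at poles -0.6, 0.2 (ordering may vary); k = []
n = 0:5;
h = r.' * (p.^n)             % 1, 1.6, -0.52, 0.4, ...: matches the long-division result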

Page 199: Signal Processing Columbia

2012-10-10Dan Ellis 1

ELEN E4810: Digital Signal ProcessingTopic 5:

Transform-Domain Systems1. Frequency Response (FR)2. Transfer Function (TF)3. Phase Delay and Group Delay

Page 200: Signal Processing Columbia

2012-10-10Dan Ellis 2

1. Frequency Response (FR) Fourier analysis expresses any

signal as the sum of sinusoids e.g. IDTFT:

Sinusoids are the eigenfunctions of LSI systems (only scaled, not ‘changed’)

Knowing the scaling for every sinusoid fully describes system behavior

→ frequency responsedescribes how a system affects eachpure frequency

x[n] =12

X(ej)ejnd

Page 201: Signal Processing Columbia

2012-10-10Dan Ellis 3

Sinusoids as Eigenfunctions IR h[n] completely describes LSI system:

Complex sinusoid input i.e.

Output is sinusoid scaled by FT at

y[n] = x[n] ∗ h[n] = Σ_m h[m]·x[n−m]

x[n] = e^{jω₀n}

y[n] = Σ_m h[m]·e^{jω₀(n−m)} = ( Σ_m h[m]·e^{−jω₀m} )·e^{jω₀n}

y[n] = H(e^{jω₀})·x[n] = |H(e^{jω₀})|·e^{j(ω₀n + θ(ω₀))},   where H(e^{jω}) = |H(e^{jω})|·e^{jθ(ω)}

Page 202: Signal Processing Columbia

2012-10-10Dan Ellis 4

System Response from H(ej!) If x[n] is a complex sinusoid at !0

then the output of a system with IR h[n]is the same sinusoid scaled by |H(ej!0)| and phase-shifted by argH(ej!0) = µ(!0) where H(ej!) = DTFTh[n]

(Any signal can be expressed as sines...) |H(ej!)| “magnitude response” → gain argH(ej!) “phase resp.” → phase shift

Page 203: Signal Processing Columbia

2012-10-10Dan Ellis 5

In practice signals are real e.g.

Real

Real Sinusoids

ω- ω0 ω0

|X(ejω)|A/2

x[n] =A cos(0n + )

=A

2

ej(0n+) + ej(0n+)

=A

2ejej0n +

A

2ejej0n

y[n] =A

2ejH(ej0)ej0n +

A

2ejH(ej0)ej0n

h[n] H(ej) = H(ej) = |H(ej)|ej()

y[n] = A|H(ej0)| cos (0n + + (0))

Page 204: Signal Processing Columbia

2012-10-10Dan Ellis 6

Real Sinusoids. A real sinusoid of frequency ω0 passed through an LSI system with a real impulse response h[n] has its gain modified by |H(e^{jω0})| and its phase shifted by θ(ω0):

$A\cos(\omega_0 n + \phi) \;\xrightarrow{\;h[n]\;}\; A\,|H(e^{j\omega_0})|\cos\big(\omega_0 n + \phi + \theta(\omega_0)\big)$
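A minimal numerical check of this statement, assuming MATLAB; the filter taps, frequency, amplitude and phase below are arbitrary illustration choices, not values from the slides.

```matlab
h   = [0.5 0.3 0.2];                    % any real FIR impulse response (example)
w0  = 0.3*pi;  A = 2;  phi = 0.4;
n   = 0:99;
x   = A*cos(w0*n + phi);
y   = filter(h, 1, x);                  % actual output

% Predicted steady-state output from H(e^{jw0})
H0    = sum(h .* exp(-1j*w0*(0:numel(h)-1)));    % DTFT of h at w0
y_hat = A*abs(H0)*cos(w0*n + phi + angle(H0));

max(abs(y(10:end) - y_hat(10:end)))     % ~1e-15 once the transient has passed
```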

Page 205: Signal Processing Columbia

2012-10-10Dan Ellis 7

Transient / Steady State. Most signals start at a finite time, e.g. $x[n] = e^{j\omega_0 n}\mu[n]$. What is the effect?

$y[n] = h[n]*x[n] = \sum_{m=-\infty}^{n} h[m]\,e^{j\omega_0(n-m)} = \Big(\sum_{m=-\infty}^{\infty} - \sum_{m=n+1}^{\infty}\Big) h[m]\,e^{j\omega_0(n-m)} = H(e^{j\omega_0})e^{j\omega_0 n} - \Big(\sum_{m=n+1}^{\infty} h[m]\,e^{-j\omega_0 m}\Big)e^{j\omega_0 n}$

Steady state: the first term — the same as with a pure sinusoid input. Transient response: the second term — a consequence of gating the input on at a finite time.

Page 206: Signal Processing Columbia

2012-10-10Dan Ellis

-40 -30 -20 -10 0 10 20 30 40 time / n

Total outputSteady State

Transient

8

Transient / Steady State

FT of IR h[n]’s tail from time n onwards zero for FIR h[n] for n ≥ N tends to zero with large n for any ‘stable’ IR

transient y[n] = H(ej0)ej0n (

m=n+1 h[m]ej0m)ej0n

x[n] = ej0nµ[n]

Page 207: Signal Processing Columbia

2012-10-10Dan Ellis 9

FR example: MA filter.

$y[n] = \frac{1}{M}\sum_{\ell=0}^{M-1} x[n-\ell] = x[n]*h[n]$,  with h[n] = 1/M for 0 ≤ n ≤ M−1 (0 otherwise).

$H(e^{j\omega}) = \mathrm{DTFT}\{h[n]\} = \sum_n h[n]e^{-j\omega n} = \frac{1}{M}\sum_{n=0}^{M-1} e^{-j\omega n} = \frac{1}{M}\,\frac{1-e^{-j\omega M}}{1-e^{-j\omega}} = \frac{1}{M}\,e^{-j\omega(M-1)/2}\,\frac{\sin(M\omega/2)}{\sin(\omega/2)}$
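The closed form can be checked against a direct evaluation of the DTFT — a sketch assuming MATLAB (M = 5, matching the plots on the next slide):

```matlab
M = 5;
h = ones(1, M) / M;                     % M-point moving average
w = linspace(0.001, pi, 1000);          % avoid w = 0 (removable 0/0 in the formula)

H_dtft   = freqz(h, 1, w);              % direct DTFT of h[n]
H_closed = (1/M) * exp(-1j*w*(M-1)/2) .* sin(M*w/2) ./ sin(w/2);

max(abs(H_dtft - H_closed))             % ~1e-15: the two forms agree
plot(w/pi, abs(H_closed)); xlabel('\omega / \pi'); ylabel('|H(e^{j\omega})|');
```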

Page 208: Signal Processing Columbia

2012-10-10Dan Ellis

[Figure: magnitude and phase response of the M = 5 moving-average filter over −π..2π.]

FR example: MA filter (M = 5):

$H(e^{j\omega}) = \frac{1}{M}\,e^{-j\omega(M-1)/2}\,\frac{\sin(M\omega/2)}{\sin(\omega/2)}$

$|H(e^{j\omega})| = \frac{1}{M}\left|\frac{\sin(M\omega/2)}{\sin(\omega/2)}\right|$,   $\theta(\omega) = -\frac{(M-1)\omega}{2} + \pi r$   (jumps of π at sign changes of the sine ratio: r = ⌊Mω/2π⌋)

Response to $x[n] = e^{j\omega_0 n} + e^{j\omega_1 n}$ ...

Page 209: Signal Processing Columbia

2012-10-10Dan Ellis

[Figure: input x[n] and output y[n] of the M-point MA filter, with the corresponding points marked on its magnitude and phase responses.]

FR example: MA filter input/output.

$x[n] = e^{j\omega_0 n} + e^{j\omega_1 n} \;\Rightarrow\; y[n] = H(e^{j\omega_0})e^{j\omega_0 n} + H(e^{j\omega_1})e^{j\omega_1 n}$

$\omega_0 = 0.1\pi:\; H(e^{j\omega_0}) \approx 0.8\,e^{-j2\omega_0}$;   $\omega_1 = 0.5\pi:\; H(e^{j\omega_1}) \approx (-)\,0.2\,e^{-j2\omega_1}$

Page 210: Signal Processing Columbia

2012-10-10Dan Ellis 12

2. Transfer Function (TF) — linking LCCDE, ZT & frequency response.

LCCDE: $\sum_{k=0}^{N} d_k\,y[n-k] = \sum_{k=0}^{M} p_k\,x[n-k]$

Take the ZT: $\sum_k d_k z^{-k}\,Y(z) = \sum_k p_k z^{-k}\,X(z)$

Hence: $Y(z) = \frac{\sum_k p_k z^{-k}}{\sum_k d_k z^{-k}}\,X(z)$,   or   $Y(z) = H(z)X(z)$ with transfer function $H(z) = \frac{\sum_k p_k z^{-k}}{\sum_k d_k z^{-k}}$.

Page 211: Signal Processing Columbia

2012-10-10Dan Ellis 13

Transfer Function (TF) Alternatively,

ZT →

Note: same H(z)

e.g. FIR filter, h[n] = h0, h1,... hM-1

⇒ pk=hk, d0=1, DE is

... if systemhas DE form

... from IR

y[n] = h[n] x[n]Y (z) = H(z)X(z)

1 · y[n] =M1

k=0

hkx[n k]

=

Ppkzk

Pdkzkn h[n]zn

Page 212: Signal Processing Columbia

2012-10-10Dan Ellis 14

Transfer Function (TF) Hence, MA filter:

z-plane

Rez

Imz

1

zM=1 i.e. M roots of 1@ z=ej2πr/M

pole @ z=1cancels

ROC?

|H(ej!)|

h[n] =

1M 0 n M

0 otherwise

(ignore polesat z=0)

y[n] =1M

M1

=0

x[n ]

H(z) = 1M

M1=0 zn

= 1zM

M(1z1)

= zM1M ·zM1(z1)

Page 213: Signal Processing Columbia

2012-10-10Dan Ellis 15

TF example:

$y[n] = x[n-1] - 1.2x[n-2] + x[n-3] + 1.3y[n-1] - 1.04y[n-2] + 0.222y[n-3]$

$H(z) = \frac{Y(z)}{X(z)} = \frac{z^{-1} - 1.2z^{-2} + z^{-3}}{1 - 1.3z^{-1} + 1.04z^{-2} - 0.222z^{-3}}$

factorize: $H(z) = \frac{z^{-1}(1-\zeta_0 z^{-1})(1-\zeta_0^{*} z^{-1})}{(1-\lambda_0 z^{-1})(1-\lambda_1 z^{-1})(1-\lambda_1^{*} z^{-1})}$  with ζ0 = 0.6 + j0.8, λ0 = 0.3, λ1 = 0.5 + j0.7 → ...

Page 214: Signal Processing Columbia

2012-10-10Dan Ellis 16

TF example

Poles ∏i → ROC causal → ROC is |z| > max|∏i| includes u.circle → stable

Rez

Imz

×

×

λ0

λ1

λ1

ζ0

ζ∗0

*

H(z) =z1(1 0z1)(1 0z1)

(1 0z1)(1 1z1)(1 1z1)

ζ0 = 0.6+j0.8λ0 = 0.3λ1 = 0.5+j0.7

Page 215: Signal Processing Columbia

2012-10-10Dan Ellis 17

TF → FR. DTFT H(e^{jω}) = ZT H(z)|_{z=e^{jω}}, i.e. the frequency response is the transfer function evaluated on the unit circle.

Factor:

$H(z) = \frac{p_0\prod_{k=1}^{M}(1-\zeta_k z^{-1})}{d_0\prod_{k=1}^{N}(1-\lambda_k z^{-1})} = \frac{p_0\,z^{-M}\prod_{k=1}^{M}(z-\zeta_k)}{d_0\,z^{-N}\prod_{k=1}^{N}(z-\lambda_k)}$

$\Rightarrow\; H(e^{j\omega}) = \frac{p_0}{d_0}\,e^{j\omega(N-M)}\,\frac{\prod_{k=1}^{M}(e^{j\omega}-\zeta_k)}{\prod_{k=1}^{N}(e^{j\omega}-\lambda_k)}$

Page 216: Signal Processing Columbia

2012-10-10Dan Ellis 18

TF → FR. ζk, λk are the TF roots (zeros, poles) on the z-plane.

Magnitude response:

$|H(e^{j\omega})| = \frac{|p_0|}{|d_0|}\,\frac{\prod_{k=1}^{M}|e^{j\omega}-\zeta_k|}{\prod_{k=1}^{N}|e^{j\omega}-\lambda_k|}$

Phase response:

$\theta(\omega) = \arg\frac{p_0}{d_0} + \omega\,(N-M) + \sum_{k=1}^{M}\arg(e^{j\omega}-\zeta_k) - \sum_{k=1}^{N}\arg(e^{j\omega}-\lambda_k)$
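These products over roots can be evaluated directly from the pole/zero locations — the geometric picture developed on the next slides. A small sketch assuming MATLAB, using the TF example from the earlier slide:

```matlab
zeta = [0.6+0.8j; 0.6-0.8j];            % zeros of the TF example
lam  = [0.3; 0.5+0.7j; 0.5-0.7j];       % poles

w  = linspace(0, pi, 512);
ej = exp(1j*w);

Hmag = ones(size(w));                   % product of |e^{jw} - zero| ...
for k = 1:numel(zeta), Hmag = Hmag .* abs(ej - zeta(k)); end
for k = 1:numel(lam),  Hmag = Hmag ./ abs(ej - lam(k));  end   % ... / |e^{jw} - pole|

% Reference: the same system from its difference-equation coefficients
b = [0 1 -1.2 1];  a = [1 -1.3 1.04 -0.222];
max(abs(Hmag - abs(freqz(b, a, w))))    % ~1e-15
```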

Page 217: Signal Processing Columbia

2012-10-10Dan Ellis 19

FR: Geometric Interpretation Have

On z-plane:

H e j( ) = p0d0e j NM( ) e j k( )k=1

Me j k( )k=1

NConstant/linear part Product/ratio of terms

related to poles/zeros

Each (ej! - ∫) term corresponds to a vector from pole/zero ∫ to point ej! on the unit circle

Overall FR is product/ratio ofall these vectors

Imz

×

×

×

ej!−∏i

ej!−≥i

Rez

ej!

Page 218: Signal Processing Columbia

2012-10-10Dan Ellis 20

FR: Geometric Interpretation Magnitude |H(ej!)| is product of

lengths of vectors from zeros divided by product of lengths of vectors from poles

Phase µ(!) is sum of angles of vectors from zeros minus sum of angles of vectors from poles

Imz

×

×

×

ej!−∏i

ej!−≥i

Rez

ej!

≥i

∏i

Page 219: Signal Processing Columbia

2012-10-10Dan Ellis

FR: Geometric Interpretation — magnitude and phase of a single zero. [Figure: z-plane location of the zero and the resulting magnitude and phase responses around the unit circle.] A pole at the same location gives the reciprocal magnitude and the negated phase.

Page 220: Signal Processing Columbia

2012-10-10Dan Ellis

FR: Geometric Interpretation — multiple poles and zeros, e.g.

$H(z) = \frac{(z-0.8e^{j0.3\pi})(z-0.8e^{-j0.3\pi})}{(z-0.9e^{j0.3\pi})(z-0.9e^{-j0.3\pi})}$

[Figure: pole-zero diagram and the resulting magnitude and phase responses.]

Page 221: Signal Processing Columbia

2012-10-10Dan Ellis 23

Geom. Interp. vs. 3D surface 3D magnitude surface for same system

Full surface Showing ROC

Page 222: Signal Processing Columbia

2012-10-10Dan Ellis 24

Geom. Interp: Observations Roots near unit circle → rapid changes in magnitude & phase zeros cause mag. minima (= 0 → on u.c.) poles cause mag. peaks (→ 1÷0=∞ at u.c.) rapid change in relative angle → phase

Pole and zero ‘near’ each other cancel out when seen from ‘afar’;affect behavior when z = ej! gets ‘close’

MPEZdemo

Page 223: Signal Processing Columbia

2012-10-10Dan Ellis 25

Filtering. Idea: separate information in frequency with a constructed H(e^{jω}), e.g.

$x[n] = A\cos(\omega_1 n) + B\cos(\omega_2 n)$ — interested in the ω1 part, don't care about the ω2 part.

Construct a filter with |H(e^{jω1})| ≈ 1 and |H(e^{jω2})| ≈ 0. Then

$y[n] = h[n]*x[n] \approx A\cos\big(\omega_1 n + \theta(\omega_1)\big)$ — the ω2 component is "filtered out"!

[Figure: spectrum X(e^{jω}) with lines at ±ω1, ±ω2 and the filter passband around ±ω1.]

Page 224: Signal Processing Columbia

2012-10-10Dan Ellis 26

Filtering example. Consider the filter 'family' of 3-point FIR filters with h[n] = {α, β, α}.

Frequency response:

$H(e^{j\omega}) = \sum_n h[n]e^{-j\omega n} = \alpha + \beta e^{-j\omega} + \alpha e^{-2j\omega} = e^{-j\omega}\big(\beta + \alpha(e^{j\omega}+e^{-j\omega})\big) = e^{-j\omega}\big(\beta + 2\alpha\cos\omega\big)$

$\Rightarrow\; |H(e^{j\omega})| = |\beta + 2\alpha\cos\omega|$

[Block diagram: x[n] through two delays; taps α, β, α summed to give y[n].]

We can set α and β to obtain a desired |H(e^{jω})| ...

Page 225: Signal Processing Columbia

2012-10-10Dan Ellis 27

Filtering example (cont'd), h[n] = {α, β, α}. Consider the input as a mix of sinusoids at ω1 = 0.1 rad/samp and ω2 = 0.4 rad/samp; we want to remove ω2, i.e. make H(e^{jω2}) = 0, while keeping unity gain at ω1. With |H(e^{jω})| = |β + 2α cos ω|, solve

β + 2α cos ω1 = 1,   β + 2α cos ω2 = 0

⇒ β = −12.46, α = 6.76 ...
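The same pair of equations solved numerically and checked by filtering the two-sinusoid mixture — a sketch assuming MATLAB:

```matlab
w1 = 0.1;  w2 = 0.4;                    % rad/sample, as in the example
% [1  2cos(w1); 1  2cos(w2)] * [beta; alpha] = [1; 0]
c = [1 2*cos(w1); 1 2*cos(w2)] \ [1; 0];
beta = c(1), alpha = c(2)               % beta ~ -12.46, alpha ~ 6.76

h = [alpha beta alpha];                 % the 3-point FIR
abs(freqz(h, 1, [w1 w2]))               % ~[1 0]: unity gain at w1, null at w2

n = 0:199;
x = cos(w1*n) + cos(w2*n);
y = filter(h, 1, x);                    % ~cos(w1*(n-1)): w2 removed, 1-sample delay
```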

Page 226: Signal Processing Columbia

2012-10-10Dan Ellis 28

Filtering example (cont'd). [Figure: the filter's impulse response, its frequency response in dB, and the input/output waveforms — the ω2 component is removed.]

Page 227: Signal Processing Columbia

2012-10-10Dan Ellis 29

3. Phase- and group-delay. For a sinusoidal input x[n] = cos(ω0 n), we saw

$y[n] = |H(e^{j\omega_0})|\cos\big(\omega_0 n + \theta(\omega_0)\big)$ — a gain and a phase shift.

The phase shift can equally be read as a time shift:

$\cos\big(\omega_0\,(n + \tfrac{\theta(\omega_0)}{\omega_0})\big) = \cos\big(\omega_0\,(n - \tau_p(\omega_0))\big)$

where $\tau_p(\omega) = -\frac{\theta(\omega)}{\omega}$ is the phase delay (the subtraction means a positive τp is a delay, as for a causal system).

Page 228: Signal Processing Columbia

2012-10-10Dan Ellis 30

Phase delay example. For our 3-point filter {α, β, α}, $H(e^{j\omega}) = e^{-j\omega}\,(\beta + 2\alpha\cos\omega)$, so θ(ω) = −ω and

$\tau_p(\omega) = -\frac{\theta(\omega)}{\omega} = +1$

i.e. a 1-sample delay at all frequencies (as observed).

Page 229: Signal Processing Columbia

2012-10-10Dan Ellis 31

Group Delay Consider a modulated carrier

e.g. x[n] = A[n]·cos(!cn) with A[n] = Acos(!mn) and !m << !c

0 50 100 150 200 250 300 350 400 1

0.5

0

0.5

1

Page 230: Signal Processing Columbia

2012-10-10Dan Ellis 32

Group Delay

So: x[n] = Acos(!mn)·cos(!cn)

Now:

Assume |H(ej!)| ∼ 1 around !c±!mbut µ(!c-!m) = µl ; µ(!c+!m) = µu ...

!!c

X(ej!)!c

+!m

!c-!m

= A2 cos c m( )n+ cos c +m( )n[ ]

y n[ ] = h n[ ] x n[ ]

= A2

H e j cm( )( )cos c m( )n+H e j c+m( )( )cos c +m( )n

Page 231: Signal Processing Columbia

2012-10-10Dan Ellis 33

Group Delay

y n[ ] = A2

cos c m( )n+ l[ ]+cos c +m( )n+u[ ]

= A cos cn+u + l2

cos mn+u l

2

phase shiftof carrier

phase shiftof envelope

Page 232: Signal Processing Columbia

2012-10-10Dan Ellis 34

Group Delay. If θ(ω) is locally linear around ωc, i.e. θ(ωc + Δω) ≈ θ(ωc) + SΔω with $S = \frac{d\theta(\omega)}{d\omega}\Big|_{\omega=\omega_c}$, then:

carrier phase shift $\frac{\theta_l+\theta_u}{2} = \theta(\omega_c)$, so the carrier delay is $-\frac{\theta(\omega_c)}{\omega_c} = \tau_p$, the phase delay;

envelope phase shift $\frac{\theta_u-\theta_l}{2} = \omega_m S$, so the envelope delay is −S: the group delay $\tau_g(\omega_c) = -\frac{d\theta(\omega)}{d\omega}\Big|_{\omega=\omega_c}$.

[Figure: θ(ω) near ωc with slope S; θl and θu marked at ωc−ωm and ωc+ωm.]

Page 233: Signal Processing Columbia

2012-10-10Dan Ellis 35

Group Delay

If µ(!) is not linear around !c, A[n] suffers “phase distortion” → correction...

!!c

øp, phasedelay

øg, groupdelay

µ(!)

n0 50 100 150 200 250 300 350 400-1

-0.5

0

0.5

1 Envelope (group) delay

Carrier (phase) delay

M
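Phase delay and group delay can be computed and compared numerically — a sketch assuming MATLAB with the Signal Processing Toolbox; the 4th-order Butterworth below is just an arbitrary example of a filter with curved phase.

```matlab
[b, a] = butter(4, 0.25);               % example IIR lowpass (non-linear phase)

w     = linspace(0.01*pi, 0.95*pi, 400);
H     = freqz(b, a, w);
theta = unwrap(angle(H));

tau_p = -theta ./ w;                    % phase delay  -theta(w)/w      (samples)
tau_g = grpdelay(b, a, w);              % group delay  -d theta / d w   (samples)

plot(w/pi, tau_p, w/pi, tau_g);
legend('\tau_p phase delay', '\tau_g group delay');
xlabel('\omega / \pi'); ylabel('delay / samples');
```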

Page 234: Signal Processing Columbia

2012-10-17Dan Ellis 1

ELEN E4810: Digital Signal ProcessingTopic 6:

Filters - Introduction1. Simple Filters2. Ideal Filters3. Linear Phase and FIR filter types

Page 235: Signal Processing Columbia

2012-10-17Dan Ellis 2

1. Simple Filters Filter = system for altering signal in

some ‘useful’ way LSI systems:

are characterized by H(z) (or h[n]) have different gains (& phase shifts)

at different frequencies can be designed systematically

for specific filtering tasks

Page 236: Signal Processing Columbia

2012-10-17Dan Ellis 3

FIR & IIR FIR = finite impulse response⇔ no feedback in block diagram

⇔ no poles (only zeros)

IIR = infinite impulse response⇔ feedback in block diagram

⇔ poles (and often zeros)

Page 237: Signal Processing Columbia

2012-10-17Dan Ellis 4

Simple FIR Lowpass: hL[n] = {1/2, 1/2} (2-point moving average).

$H_L(z) = \tfrac{1}{2}(1+z^{-1}) = \frac{z+1}{2z}$ — zero at z = −1 (pole at z = 0).

$H_L(e^{j\omega}) = e^{-j\omega/2}\cos(\omega/2)$  (from $\tfrac{1}{2}(e^{j\omega/2}+e^{-j\omega/2})$ together with a half-sample delay).

[Figure: hL[n] and |HL(e^{jω})|.]

Page 238: Signal Processing Columbia

2012-10-17Dan Ellis 5

Filters are often characterized by their cutoff frequency !c:

Cutoff frequency is most often defined as the half-power point, i.e.

If

then

H e jc( ) 2 = 12 max H e j( ) 2 H = 1

2 Hmax

Simple FIR Lowpass

!!c

|H(ej!)|

H e j( ) = cos 2( )

c = 2cos1 12 =

2

π

11/√2

≈ 0.707

π/2

Page 239: Signal Processing Columbia

2012-10-17Dan Ellis 6

deciBels Filter magnitude responses are

often described in deciBels (dB) dB is simply a scaled log value:

Half-power also known as 3dB point:

dB = 20 log10 level( ) =10 log10 power( )

H cutoff = 12 H max

dB H cutoff = dB H max + 20 log10 12( )

= dB H max 3.01

power = level2

Page 240: Signal Processing Columbia

2012-10-17Dan Ellis 7

We usually plot magnitudes in dB:

A gain of 0 corresponds to -∞ dB

deciBels

0 0.2 0.4 0.6 0.8 10

0.2

0.4

0.6

0.81/√2

1

t//

|H(e

jt)|

0 0.2 0.4 0.6 0.8 1-20

-15

-10

-5-3

0

t//

|H(e

jt)|

/ dB

Page 241: Signal Processing Columbia

2012-10-17Dan Ellis 8

Simple FIR Highpass hH[n] = 1/2 -1/2

3dB point !c = π/2 (again)

HH z( ) = 12 1 z

1( ) = z 12z 1

ZP

zero atz = 1

HH e j( ) = je j 2 sin 2( )

|HH(ej!)|

n-1 2 3 4-2

1/2

hH[n]

-1/2

!c

π/2

Page 242: Signal Processing Columbia

2012-10-17Dan Ellis 9

FIR Lowpass and Highpass Note:

hL[n] = 1/2 1/2 hH[n] = 1/2 -1/2 i.e.

i.e. 180° rotation of the z-plane, ⇒ π shift of frequency

response

hH n[ ] = (1)n hL n[ ] HH z( ) = HL z( ) 1

z

−1

j

-j

−1-z

1

-j

j

!|HL(ej!)|

!|HH(ej!)|

π

Page 243: Signal Processing Columbia

2012-10-17Dan Ellis

-1 0 1 -1

0

1

z-plane

freq t / /

|H(e

jt)|

|H(e

jt)|

freq t / /0 0.5 1

0

0.5

1

10-2 10-1 10010-2

10-1

100

_ _A1_A1

10

Simple IIR LowpassIIR → feedback, zeros and poles,

conditional stability, h[n] less useful

scale to makegain = 1 at ! = 0→ K = (1 - Æ)/2

pole-zerodiagram

frequencyresponse

FR on log-log axes

0dB

-20dB

-40dB

x[n]z-1

+z-1

+

y[n]K

HLP (z) = K1 + z1

1 z1

Page 244: Signal Processing Columbia

2012-10-17Dan Ellis 11

Simple IIR Lowpass: $H_{LP}(z) = K\,\frac{1+z^{-1}}{1-\alpha z^{-1}}$.

Cutoff frequency ωc from $|H_{LP}(e^{j\omega_c})|^2 = \tfrac{1}{2}\,\mathrm{max}^2$ (max = 1, using K = (1−α)/2):

$\frac{(1-\alpha)^2}{4}\,\frac{(1+e^{-j\omega_c})(1+e^{j\omega_c})}{(1-\alpha e^{-j\omega_c})(1-\alpha e^{j\omega_c})} = \frac{1}{2} \;\Rightarrow\; \cos\omega_c = \frac{2\alpha}{1+\alpha^2} \;\Rightarrow\; \alpha = \frac{1-\sin\omega_c}{\cos\omega_c}$  — the Design Equation.
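A numerical check of the design equation — a sketch assuming MATLAB:

```matlab
wc    = 0.3*pi;                         % desired 3 dB cutoff
alpha = (1 - sin(wc)) / cos(wc);        % design equation
K     = (1 - alpha) / 2;                % unity gain at w = 0

b = K * [1 1];                          % K (1 + z^-1)
a = [1 -alpha];                         % 1 - alpha z^-1

20*log10(abs(freqz(b, a, [1e-6 wc pi])))   % ~[0  -3.01  -Inf] dB
```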

Page 245: Signal Processing Columbia

2012-10-17Dan Ellis

-1 0 1-1

-0.5

0

0.5

1

z-plane

Freq t / /Freq t / /

|H(e

jt)|

/ dB

|H(e

jt)|

10-2 10-1 100-40

-30

-20

-10

0

0 0.5 10

0.5

1

_ A 1_ A 1

12

Simple IIR Highpass

Pass ! = π → HHP(-1) = 1 → K = (1+Æ)/2

Design Equation:

(again)

=1 sinc

cosc

HHP (z) = K1z1

1 z1

Page 246: Signal Processing Columbia

2012-10-17Dan Ellis 13

Highpass and Lowpass Consider lowpass filter:

Then:

However, (unless H(ej!) is pure real - not for IIR)

• Highpass

• c/w (-1)nh[n]

just another z poly

HLP (ej) =

1 0 0 large

1HLP (ej) =

0 0 1 large

|1HLP (z)| = 1 |HLP (z)|

Page 247: Signal Processing Columbia

2012-10-17Dan Ellis

[Figure: pole-zero diagram and frequency response of the second-order IIR bandpass.]

Simple IIR Bandpass:

$H_{BP}(z) = \frac{1-\alpha}{2}\,\frac{1-z^{-2}}{1-\beta(1+\alpha)z^{-1}+\alpha z^{-2}} = \frac{K(1+z^{-1})(1-z^{-1})}{1-2r\cos\theta\,z^{-1}+r^2 z^{-2}}$

with the complex pole pair at radius $r = \sqrt{\alpha}$ and angle θ given by $\cos\theta = \frac{\beta(1+\alpha)}{2\sqrt{\alpha}}$.

Design equations: center frequency $\omega_c = \cos^{-1}\beta$; 3 dB bandwidth $B_w = \cos^{-1}\frac{2\alpha}{1+\alpha^2}$.

Page 248: Signal Processing Columbia

2012-10-17Dan Ellis 15

Simple Filter Example. Design a second-order IIR bandpass filter with ωc = 0.4π and a 3 dB bandwidth of 0.1π.

$\omega_c = 0.4\pi \Rightarrow \beta = \cos\omega_c = 0.3090$

$B_w = 0.1\pi \Rightarrow \frac{2\alpha}{1+\alpha^2} = \cos(0.1\pi) \Rightarrow \alpha = 0.7265$

$H_{BP}(z) = \frac{1-\alpha}{2}\,\frac{1-z^{-2}}{1-\beta(1+\alpha)z^{-1}+\alpha z^{-2}} = \frac{0.1367\,(1-z^{-2})}{1-0.5335z^{-1}+0.7265z^{-2}}$

(the coefficients are sensitive...)
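Checking the example numerically — a sketch assuming MATLAB:

```matlab
wc = 0.4*pi;  Bw = 0.1*pi;              % spec: center frequency, 3 dB bandwidth
beta  = cos(wc);                        % 0.3090
alpha = (1 - sin(Bw)) / cos(Bw);        % 0.7265 (solves 2a/(1+a^2) = cos Bw)

b = (1-alpha)/2 * [1 0 -1];             % (1-alpha)/2 * (1 - z^-2)
a = [1  -beta*(1+alpha)  alpha];        % 1 - beta(1+alpha) z^-1 + alpha z^-2

w = linspace(0, pi, 4096);
H = abs(freqz(b, a, w));
[~, i] = max(H);  w(i)/pi               % peak at ~0.4
pb = w(H >= 1/sqrt(2));
(pb(end) - pb(1))/pi                    % measured 3 dB bandwidth ~0.1
```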

Page 249: Signal Processing Columbia

2012-10-17Dan Ellis

t / /

t / /

|H(e

jw)|

|H(e

jw)|

/ dB

-1 0 1-1

-0.5

0

0.5

1

0 0.5 10

0.5

1

10-2 10-1 100-40

-30

-20

-10

0

z-plane

16

Simple IIR Bandstop

Design eqns:

HBS z( ) =1+2

1 2z1 + z2

1 1+( )z1 +z2

zeros at !c (per 1 - 2r cosµ z-1 + r2z-2)

same poles as HBP

B = cos1 21+2

= 1cosB

1cos2 B

1

c = cos1 = coscB

Page 250: Signal Processing Columbia

2012-10-17Dan Ellis 17

Cascading Filters Repeating a filter (cascade connection)

makes its characteristics more abrupt:

Repeated roots in z-plane:

H(ej!)

H(ej!) H(ej!) H(ej!)

!

|H(ej!)|

!

|H(ej!)|3

1ZP

h

h3

Page 251: Signal Processing Columbia

2012-10-17Dan Ellis 18

1/81/4

Cascading Filters Cascade systems are higher order

e.g. longer (finite) impulse response:

In general, cascade filters will not be optimal (...) for a given order

n-1 2 3 4-2

1/2

h[n]

-1/2

n-1 2 3 4-2

-1/2

n-1 2 4-2-3/8

h[n]∗h[n] h[n]∗h[n]∗h[n]

Page 252: Signal Processing Columbia

2012-10-17Dan Ellis

10-2 10-1 100-60

-50

-40

-30

-20

-10

0

2nd order-6dB/oct

-12dB/oct

4th order

t//

|H(ejt)|

19

Cascading Filters Cascading filters improves rolloff slope:

But: 3dB cutoff frequency will change(gain at !c → 3N dB)

!c

Page 253: Signal Processing Columbia

2012-10-17Dan Ellis

Interlude: The Big Picture

20

IR

LCCDE

DTFT

ZT

y[n] = h[n] x[n] Y (ej) = H(ej)X(ej)

Y (z) = G

Mj=1(1 jz1)

Nk=1(1 kz1)

X(z)y[n] =M

j=0

pjx[n j]N

k=1

dky[n k]

h[k]

x[n – k]

0

1

2

3

0

0

H ej

H ej

DTFT

IDTFT

ZT

IZT

X(ej) =

n

x[n]ejn

x[n] =12

X(ej)ejnd

j

pjx[n j]

j

pjzjX(z)

X(z) =

nx[n]zn

X(ej) = X(z)|z=ejyc[n] + yp[n] =

i

ini + n

0 n 0

x[n] y[n]+z

p0

p1

p2

-d1

-d2

-1

z-1

z-1

z-1Rez

Imz

0

1

1

0

0

**

nµ[n] 1

1 z1 |z| > ||

Page 254: Signal Processing Columbia

2012-10-17Dan Ellis 21

2. Ideal filters Typical filter requirements:

gain = 1 for wanted parts (pass band) gain = 0 for unwanted parts (stop band)

“Ideal” characteristics would be like: no phase distortion etc.

What is this filter? can calculate IR h[n] as IDTFT of ideal

response...

!

|H(ej!)|“brickwallLP filter”

Page 255: Signal Processing Columbia

2012-10-17Dan Ellis 22

Ideal Lowpass Filter. Given the ideal (brickwall) H(e^{jω}) — gain 1 for |ω| < ωc, 0 for ωc < |ω| ≤ π — and assuming θ(ω) = 0:

$h[n] = \mathrm{IDTFT}\{H(e^{j\omega})\} = \frac{1}{2\pi}\int_{-\pi}^{\pi}H(e^{j\omega})e^{j\omega n}\,d\omega = \frac{1}{2\pi}\int_{-\omega_c}^{\omega_c}e^{j\omega n}\,d\omega = \frac{\sin\omega_c n}{\pi n}$

— the ideal lowpass filter impulse response.

Page 256: Signal Processing Columbia

2012-10-17Dan Ellis 23

Ideal Lowpass Filter

Problems! doubly infinite (n = -∞..∞) no rational polynomial → very long FIR excellent frequency-domain characteristics↔ poor time-domain characteristics (blurring, ringing – a general problem)

h n[ ] = sincnn

-20 -15 -10 -5 0 5 10 15-0.05

0

0.05

0.1

0.15

0.2

n

h[n]

Page 257: Signal Processing Columbia

2012-10-17Dan Ellis 24

lower-order realization (less computation) better time-domain properties (less ringing) easier to design...

Practical filter specifications

|H(e

j !)|

/ dB !

-40

-11

Pass band Stop band

Tran

sitio

n

• allow PB ripples

• allow SB ripples

• allow transition band

Page 258: Signal Processing Columbia

2012-10-17Dan Ellis 25

|H(ej!)| alone can hide phase distortion differing delays for adjacent frequencies

can mangle the signal Prefer filters with a flat phase response

e.g. µ(!) = 0 “zero phase filter” A filter with constant delay øp = D at all

freq’s has µ(!) = −D! “linear phase”

Linear phase can ‘shift’ to zero phase

H e j( ) = e jD ˜ H ( )

3. Linear-phase Filters

pure-real (zero-phase)portion

Page 259: Signal Processing Columbia

2012-10-17Dan Ellis 26

Time reversal filtering:  x[n] → H(z) → time reversal → H(z) → time reversal → y[n].

v[n] = x[n]∗h[n]  →  V(e^{jω}) = H(e^{jω})X(e^{jω})

u[n] = v[−n]  →  U(e^{jω}) = V(e^{−jω}) = V*(e^{jω})  (if v is real)

w[n] = u[n]∗h[n]  →  W(e^{jω}) = H(e^{jω})U(e^{jω})

y[n] = w[−n]  →  Y(e^{jω}) = W*(e^{jω}) = (H(e^{jω})(H(e^{jω})X(e^{jω}))*)*  →  Y(e^{jω}) = X(e^{jω})|H(e^{jω})|²

Achieves a zero-phase result, but is not causal: the whole signal is needed first.
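A minimal sketch of the forward-backward trick in MATLAB; the Signal Processing Toolbox's filtfilt packages the same idea with extra end-transient handling. The Butterworth filter and the test signal are arbitrary choices.

```matlab
[b, a] = butter(4, 0.2);                    % any causal filter (example)
n = 0:299;
x = sin(0.05*pi*n) + 0.5*randn(1, 300);     % signal + noise

v = filter(b, a, x);                        % forward pass
u = fliplr(v);                              % time reversal
w = filter(b, a, u);                        % filter again
y = fliplr(w);                              % reverse again -> zero-phase output

y_lib = filtfilt(b, a, x);                  % library version (handles the edges)
```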

Page 260: Signal Processing Columbia

2012-10-17Dan Ellis 27

Linear Phase FIR filters (Anti)Symmetric FIR filters are almost

the only way to get zero/linear phase 4 types: Odd length Even length

Symmetric

Anti-symmetric

Type 1 Type 2

Type 3 Type 4

n n

n n

0 NN/2

Page 261: Signal Processing Columbia

2012-10-17Dan Ellis

0.4 0.2 0 0.2 0.4 0.6 0.8 1-1

-0.5

0

0.5

1

t//

n=1

n=2

28

Linear Phase FIR: Type 1 Length L odd → order N = L - 1 even Symmetric → h[n] = h[N - n]

(h[N/2] unique)

H e j( ) = h n[ ]e jnn=0

N= e j

N2 h N

2[ ] + 2 h N2 n[ ]cosnn=1

N /2( )linear phaseD = -µ(!)/! = N/2 pure-real H(!) from cosine basis~

!H(!)~

Page 262: Signal Processing Columbia

2012-10-17Dan Ellis

0 2 4 6 81 3 5 7 n0

1

2

3

4

5

-10

0

10

20

-/ -0. 5/ 0 0.5/ /-/ -0. 5/ 0 0.5/ /-/

-0.5/

0

0.5/

/Impulse response Magnitude response Phase response

dB

29

Linear Phase FIR: Type 1

Where are the N zeros?

thus for a zero ≥

h n[ ] = h N n[ ] H z( ) = zNH 1z( )

H ( ) = 0 H 1( ) = 0

Reciprocal zeros(as well as cpx conj)

Reciprocalpair

Conjugatereciprocal

constellation

No reciprocal on u.circle

= -D!

-2º

Page 263: Signal Processing Columbia

2012-10-17Dan Ellis

0.4 0.2 0 0.2 0.4 0.6 0.8 1 1

0.5

0

0.5

1

t//

n - 1/2 = 1/2

n - 1/2 = 11/2

n - 1/2 = 21/2

30

Linear Phase FIR: Type 2 Length L even → order N = L - 1 odd Symmetric → h[n] = h[N - n]

(no unique point)

H e j( ) = e jN2 h N+1

2 n[ ]cos n 12( )n=1

N+1( ) /2Non-integer delay

of N/2 samples H(!) from double-length cosine basis~

alwayszeroat !=º

Page 264: Signal Processing Columbia

2012-10-17Dan Ellis

0 2 4 6 8 100

1

2

3

4

5

-/ -0.5/ 0 0.5/ /

-/ -0.5/ 0 0.5/ /

-20

0

20

-0.5/

0

0.5/

/

-/-2 -1 0 1 2

-1-0.5

00.5

1

Impulse Response Magnitude Response

Phase ResponsePole-zero diagram

31

Linear Phase FIR: Type 2

Zeros: at z = -1,

H z( ) = zNH 1z( )

H 1( ) = 1( )N H 1( ) H e j( ) = 0odd

LPF-like

dB

Page 265: Signal Processing Columbia

2012-10-17Dan Ellis

-0. 4 -0. 2 0 0.2 0.4 0.6 0.8 1-1

-0. 5

0

0.5

1

t//

n = 2

n = 1odd

functionszero att = 0, /

32

Linear Phase FIR: Type 3 Length L odd → order N = L - 1 even Antisymmetric → h[n] = –h[N - n] ⇒ h[N/2]= -h[N/2] = 0

H e j( ) = h N2 n[ ] e j

N2n( ) e j

N2+n( )( )n=1

N /2= je j

N2 2 h N

2 n[ ]n=1

N /2 sinn( )µ(!) = º/2 - !·N/2Antisymmetric ⇒º/2 phase shift in addition to linear phase

Page 266: Signal Processing Columbia

2012-10-17Dan Ellis

-/ -0.5/ 0 0.5/ /

-/ -0.5/ 0 0.5/ /-2 -1 0 1 2

Impulse Response Magnitude Response

Phase ResponsePole-zero diagram

0 2 4 6 8 10

-2

0

2

-20

0

20

-0.5/

0

0.5/

/

-/

33

Linear Phase FIR: Type 3

Zeros:

H z( ) = zNH 1z( )

H 1( ) = H 1( ) = 0 ; H 1( ) = H 1( ) = 0

Page 267: Signal Processing Columbia

2012-10-17Dan Ellis 34

Linear Phase FIR: Type 4 Length L even → order N = L - 1 odd Antisymmetric → h[n] = -h[N - n]

(no center point)

H e j( ) = je jN2 2 h N+1

2 n[ ]n=1

N /2 sin n 12( )

fractional-sampledelay

º/2 offset offset sine basis

0.4 0.2 0 0.2 0.4 0.6 0.8 1 1

0.5

0

0.5

1

t//

odd functions

n - 1/2 = 1/2

n - 1/2 = 11/2

Page 268: Signal Processing Columbia

2012-10-17Dan Ellis 35

Linear Phase FIR: Type 4

Zeros: (H(-1) OK because N is odd)

H 1( ) = H 1( ) = 0

-/ -0.5/ 0 0.5/ /

Impulse Response Magnitude Response

Phase ResponsePole-zero diagram

-20

0

20

-0.5/

0

0.5/

/

-/

0 2 4 6 8 10-4

-2

0

2

4

-2 0 2

-1

0

1

-/ -0.5/ 0 0.5/ /t

dB

t

Page 269: Signal Processing Columbia

2012-10-17Dan Ellis 36

4 Linear Phase FIR TypesOdd length Even length

Ant

isym

met

ricS

ymm

etric 1 2

3 4

h[n]

n

!H(!)~

ZP

h[n]

n

!H(!)~

ZPD D

º

h[n]

n

!H(!)~

ZPD

º h[n]

n

!H(!)~

ZPD

º

Page 270: Signal Processing Columbia

2012-10-31Dan Ellis 1

ELEN E4810: Digital Signal ProcessingTopic 7:

Filter types and structures1. More filter types2. Minimum and maximum phase3. Filter implementation structures

Page 271: Signal Processing Columbia

2012-10-31Dan Ellis 2

1. More Filter Types We have seen the basics of filters

and a range of simple examples Now look at a couple of other classes:

Comb filters - multiple pass/stop bands Allpass filters - only modify signal phase

Page 272: Signal Processing Columbia

2012-10-31Dan Ellis 3

Comb Filters Replace all system delays z-1 with

longer delays z-L

→ System that behaves ‘the same’ at a longer timescale

x[n] y[n]+z-L

z-L

z-L

z-L

Page 273: Signal Processing Columbia

2012-10-31Dan Ellis 4

Comb Filters. The 'parent' filter impulse response h[n] becomes the comb filter's impulse response with L−1 zeros inserted between samples: g[n] = {h[0], 0, ..., 0, h[1], 0, ..., 0, h[2], ...}. Thus

$G(z) = \sum_n g[n]z^{-n} = \sum_n h[n]z^{-nL} = H(z^L)$

Page 274: Signal Processing Columbia

2012-10-31Dan Ellis 5

Comb Filters. Hence the frequency response is $G(e^{j\omega}) = H(e^{j\omega L})$ — the parent frequency response compressed and repeated L times across 0..2π.

A lowpass parent → pass ω = 0, 2π/L, 4π/L, ...; cut ω = π/L, 3π/L, 5π/L, ... — useful to enhance a harmonic series.
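A small sketch (assuming MATLAB): spreading the coefficient vectors of a 'parent' filter by L inserts the L−1 zeros between taps and realizes G(z) = H(z^L).

```matlab
L  = 8;
bp = [0.5 0.5];  ap = 1;                    % parent: 2-point averaging lowpass

% Replace each z^-1 by z^-L: put L-1 zeros between successive coefficients
b = zeros(1, L*(numel(bp)-1) + 1);  b(1:L:end) = bp;
a = zeros(1, L*(numel(ap)-1) + 1);  a(1:L:end) = ap;

w  = linspace(0, pi, 1024);
G  = freqz(b,  a,  w);                      % comb response
Hp = freqz(bp, ap, w*L);                    % parent response at w*L (2pi-periodic)
max(abs(G - Hp))                            % ~0: G(e^{jw}) = H(e^{jwL})
```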

Page 275: Signal Processing Columbia

2012-10-31Dan Ellis 6

Allpass Filters Allpass filter has |A(ej!)|2 = K for all !

i.e. spectral energy is not changed Phase response is not zero (else trivial)

phase correction special effects e.g.

0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1−400

−300

−200

−100

0

Normalized Frequency (×π rad/sample)

Phas

e (d

egre

es)

0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1−20

−15

−10

−5

0

5

Normalized Frequency (×π rad/sample)

Mag

nitu

de (d

B)

Page 276: Signal Processing Columbia

2012-10-31Dan Ellis 7

Allpass Filters. An allpass has a special form of system function built from mirror-image polynomials:

$A_M(z) = \pm\frac{d_M + d_{M-1}z^{-1} + \dots + d_1 z^{-(M-1)} + z^{-M}}{1 + d_1 z^{-1} + \dots + d_{M-1}z^{-(M-1)} + d_M z^{-M}} = \pm\frac{z^{-M}D_M(z^{-1})}{D_M(z)}$

A_M(z) has poles λ where D_M(λ) = 0, and therefore zeros at ζ = 1/λ = λ^{-1}.

Page 277: Signal Processing Columbia

2012-10-31Dan Ellis 8

Allpass Filters

Any (stable) DM can be used:

Phase is always decreasing:

→ -Mº at ! = º

AM z( ) = ±zMDM z1( )DM z( )

polesfrom 1/DM(z)

reciprocalzeros

from DM(z-1)

M

peakgroupdelay

0 0.2 0.4 0.6 0.8 1!3!

!2!

!!

0

" / !

arg

H(z)

!3 !2 !1 0 1 2!1

0

1

Rez

Imz

Page 278: Signal Processing Columbia

2012-10-31Dan Ellis 9

Why do mirror-img poly’s give const gain? Conj-sym system fn can be factored as:

z = ej! → z-1 = e-j! also on u.circle...

Allpass Filters

AM z( ) =K z i

*1( )iz i( )

i

=K i

*1z i* z1( )i

z i( )i

ej!

e-j! ZP

λi

o

+ complexconjugate p/z

i*1

i*

Page 279: Signal Processing Columbia

2012-10-31Dan Ellis 10

2. Minimum/Maximum Phase In AP filters, reciprocal roots have..

same effect on magnitude (modulo const.) different effect on phase

In normal filters, can try substituting reciprocal roots reciprocal of stable pole will be unstable × reciprocals of zeros?

→ Variants of filters with same magnitude response, different phase

Page 280: Signal Processing Columbia

2012-10-31Dan Ellis 11

Minimum/Maximum Phase Hence:

reciprocalzero..

.. addedphase

lag

.. samemag..

0

1

2

3

0 0.2 0.4 0.6 0.8 t//-/

-0.5/

0

0.5/

1

2

3

0 0.2 0.4 0.6 0.8 t//-/

-0.5/

0

0.5/

a ab 1/b

0

H1(z) = z - bz - a H2(z) = b(z - 1/b)

z - a

|H(ejt)|

e(H(ejt))

Page 281: Signal Processing Columbia

2012-10-31Dan Ellis 12

Minimum/Maximum Phase For a given magnitude response

All zeros inside u.circle → minimum phase All zeros outside u.c. → maximum phase

(greatest phase dispersion for that order) Otherwise, mixed phase

i.e. for a given magnitude responseseveral filters & phase fns are possible;minimum phase is canonical, ‘best’

Page 282: Signal Processing Columbia

2012-10-31Dan Ellis 13

Minimum/Maximum Phase Note: Min. phase + Allpass = Max. phase

o

o

o

o

o

o

z ( ) z *( )z

z 1( ) z 1

*( )

z ( ) z *( )

z 1( ) z 1

*( )

z x =

pole-zerocancl’n

Page 283: Signal Processing Columbia

2012-10-31Dan Ellis 14

Inverse Systems hi[n] is called the inverse of hf[n] iff

Z-transform:

i.e. Hi(z) recovers x[n] from o/p of Hf(z)

hi n[ ] hf n[ ] = n[ ]

H f ej( ) Hi e

j( ) =1

Hf(z) Hi(z)x[n] y[n] w[n]

W z( ) = Hi z( )Y z( ) = Hi z( )H f z( )X z( ) = X z( )

w n[ ] = x n[ ]

Page 284: Signal Processing Columbia

2012-10-31Dan Ellis 15

What is Hi(z)?

Hi(z) is reciprocal polynomial of Hf(z)

Just swappoles andzeros:

Inverse Systems

Hi z( )H f z( ) =1 Hi z( ) =1/H f z( )

H f z( ) =P z( )D z( )

Hi z( ) =D z( )P z( )

poles of fwd→ zeros of bwd

zeros of fwd→ poles of bwd

o

o

o

Hf(z) Hi(z)

Page 285: Signal Processing Columbia

2012-10-31Dan Ellis 16

Inverse SystemsWhen does Hi(z) exist? Causal+stable → all Hi(z) poles inside u.c. → all zeros of Hf(z) must be inside u.c.→ Hf(z) must be minimum phase

Hf(z) zeros outside u.c. → unstable Hi(z) Hf(z) zeros on u.c. → unstable Hi(z)

→ only invert if min.phase, ⇒ Hf(ej!) ≠ 0

Hi ej( ) =1/H f e

j( ) =1/0= !lose...

Page 286: Signal Processing Columbia

2012-10-31Dan Ellis 17

System Identification

Inverse filtering = given y and H, find x System ID = given y (and ~x), find H Just run convolution backwards?

H(z)x[n] y[n]

y n[ ] = h k[ ]x n k[ ]k=0

y 0[ ] = h 0[ ]x 0[ ] h 0[ ]y 1[ ] = h 0[ ]x 1[ ] + h 1[ ]x 0[ ] h 1[ ]...

but: errorsaccumulate

deconvolution

Page 287: Signal Processing Columbia

2012-10-31Dan Ellis 18

System Identification

A better approach uses correlations: cross-correlate the input and output of the unknown system H?(z) (x[n] in, y[n] + noise out):

$r_{xy}[\ell] = y[\ell]\star x[\ell] = h_?[\ell]*\big(x[\ell]\star x[\ell]\big) = h_?[\ell]*r_{xx}[\ell]$

If r_xx is 'simple' we can recover h?[n]; e.g. for (pseudo-)white noise $r_{xx}[\ell] \approx \delta[\ell]$, so $h_?[n] \approx r_{xy}[n]$.
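A numerical sketch of this identification idea (assuming MATLAB); the 'unknown' system below is an arbitrary short FIR so the recovered values are easy to read off.

```matlab
h_true = [1 0.5 -0.3 0.2 0.1];              % unknown system (illustration only)
N = 1e5;
x = randn(1, N);                            % pseudo-white input: r_xx ~ delta
y = filter(h_true, 1, x);                   % observed output

% Estimate r_xy[l] = E{ y[n] x[n-l] } for lags l = 0..9
maxlag = 9;
r_xy = zeros(1, maxlag+1);
for l = 0:maxlag
    r_xy(l+1) = (y(l+1:end) * x(1:end-l)') / N;
end
r_xy                                        % ~[1 0.5 -0.3 0.2 0.1 0 0 ...] = h_true
```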

Page 288: Signal Processing Columbia

2012-10-31Dan Ellis 19

System Identification Can also work in frequency domain:

x[n] is not observable → Sxy unavailable, but Sxx(ej!) may still be known, so:

Use e.g. min.phase to rebuild H(ej!)...

Sxy z( ) = H? z( ) Sxx z( ) make a const.

Syy ej( ) =Y e j( )Y * e j( )

= H e j( )X e j( )H * e j( )X * e j( )= H e j( ) 2 Sxx e j( )

Page 289: Signal Processing Columbia

2012-10-31Dan Ellis 20

3. Filter Structures Many different implementations,

representations of same filter Different costs, speeds, layouts, noise

performance, ...

Page 290: Signal Processing Columbia

2012-10-31Dan Ellis 21

Block Diagrams Useful way to illustrate implementations Z-transform helps analysis:

Approach Output of summers as dummy variables Everything else is just multiplicative

Y z( ) =G1 z( ) X z( ) +G2 z( )Y z( )[ ]Y z( ) 1G1 z( )G2 z( )[ ] =G1 z( )X z( )

H z( ) =Y z( )X z( )

=G1 z( )

1G1 z( )G2 z( )

G1(z)+

G2(z)

x[n] y[n]

Page 291: Signal Processing Columbia

2012-10-31Dan Ellis 22

Block Diagrams More complex example:

x[n] +

y[n] + z-1

-Æ Ø

∞+ z-1

-± "

w1 w2

w3

+

W1 = X z1W3

W2 =W1 z1W2

W3 = z1W2 +W2

Y = z1W3 + W1

W2 = W11+z1

W3 =z1 +( )W11+z1

YX

= + z1 + ( ) + z2 ( )1+ z1 +( ) + z2 ( )

stackable2nd order section

Page 292: Signal Processing Columbia

2012-10-31Dan Ellis 23

Delay-Free Loops Can’t have them!

At time n = 0, setup inputs x and v ;need u for y, also y for u →can’t calculate

Algebra:

x +

u

v

x

y

+ u

v

+

11 BA

B1 BA

B1 BA

BA1 BA

can simplify...

y = B(v + Au) u = x + y

y = B(v + A(x + y))

y(1BA) = Bv + BAx

y =Bv + BAx

1BA

Page 293: Signal Processing Columbia

2012-10-31Dan Ellis 24

Equivalent Structures Modifications to block diagrams that do

not change the filter e.g. Commutation H = AB = BA

Factoring AB+CB = (A+C)·BA B B A≡

A(z)B(z) +

C(z)B(z)

x1

x2

y A(z) +

x1

x2

y

C(z)B(z)

fewer blocks less computation

Page 294: Signal Processing Columbia

2012-10-31Dan Ellis 25

Equivalent Structures Transpose

reverse paths adders↔nodes

input↔output

+

z-1

z-1

yx b1

b2

b3

+z-1

z-1

xy b1

b2

b3

+

Y = b1X + b2z1X + b3z

2X= b1X + z1 b2X + z1b3X( )

Page 295: Signal Processing Columbia

2012-10-31Dan Ellis 26

FIR Filter Structures Direct form “Tapped Delay Line”

Transpose

Re-use delay line if several inputs xi for single output y ?

z-1

+

z-1

+

z-1

+

z-1

+

xh0 h1 h2 h3 h4

y n[ ] = h0x n[ ] + h1x n 1[ ] + ...= hk x n k[ ]k=0

4y

z-1

+

z-1

+

z-1

+

z-1

+

xh4 h3 h2 h1 h0

y

Page 296: Signal Processing Columbia

2012-10-31Dan Ellis 27

FIR Filter Structures Cascade

factored into e.g. 2nd order sections

H z( ) = h0 + h1z1 + h2z

2 + h3z3

= h0 10z1( ) 11z1( ) 11*z1( )

= h0 10z1( ) 1 2Re 1 z1 + 1

2 z2( )

+

z-1

z-1

y

-2Re≥1

|≥1|2

z-1

x h0

≥0

+

Page 297: Signal Processing Columbia

2012-10-31Dan Ellis 28

FIR Filter Structures Linear Phase:

Symmetric filters with h[n] = (-)h[N - n]

Also Transpose form:gains first, feeding folded delay/sum line

n

y n[ ] = b0 x n[ ] + x n 4[ ]( )+b1 x n 1[ ] + x n 3[ ]( )+b2x n 2[ ]

z-1+

z-1x

b0 b1 b2

+

z-1 z-1

+ + y

...

half as many multiplies

Page 298: Signal Processing Columbia

2012-10-31Dan Ellis 29

IIR Filter Structures IIR: numerator + denominator

H z( ) = p0 + p1z1 + p2z

2 + ...1+ d1z

1 + d2z2 + ...

= P z( ) 1D z( )

+

z-1

z-1

p0

p1

p2

+

z-1

z-1

-d1

-d2

FIRall-pole

IIR

Page 299: Signal Processing Columbia

2012-10-31Dan Ellis 30

IIR Filter Structures Hence, Direct form I

Commutation → Direct form II (DF2)+

z-1

z-1

p0

p1

p2

z-1

z-1

-d1

-d2

+p0

p1

p2

+

z-1

z-1

-d1

-d2

• same signal∴ delay lines merge

• “canonical”= min. memory usage

Page 300: Signal Processing Columbia

2012-10-31Dan Ellis 31

IIR Filter Structures Use Transpose on FIR/IIR/DF2

“Direct Form II Transpose”

-d1

-d2

+z-1

z-1

x yp0

p1

p2

+

+

Page 301: Signal Processing Columbia

2012-10-31Dan Ellis 32

Factored IIR Structures Real-output filters have

conjugate-symm roots:

Can always group into 2nd order terms with real coefficients:

H z( ) = 11 ( + j)z1( ) 1 ( j)z1( )

H z( ) =p0 11z

1( ) 1 22z1 + (22 +2

2 )z2( )...11z

1( ) 1 22z1 + (22 + 2

2 )z2( )...

Ø

−Ø

Æ

real root

Page 302: Signal Processing Columbia

2012-10-31Dan Ellis 33

Cascade IIR Structure Implement as cascade of

second order sections (in DFII)

Second order sections (SOS): modular - any order from optimized block well-behaved, real coefficients (sensitive?)

+-2∞2

+z-12Æ2

++z-1

+

p0

-∞1+

z-1

Æ1

x y

∞22+±2

2−(Æ22+Ø2

2)

fwd gainfactored

out

z-1z-1

Page 303: Signal Processing Columbia

2012-10-31Dan Ellis 34

Second-Order Sections ‘Free’ choices:

grouping of pole pairs with zero pairs order of sections

Optimize numerical properties: avoid very large values (overflow) avoid very small values (quantization)

e.g. Matlab’s zp2sos attempt to put ‘close’ roots in same section intersperse gain & attenuation?
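A sketch of the cascade-of-SOS idea, assuming MATLAB with the Signal Processing Toolbox (zp2sos does the pairing and ordering; here each second-order section is then run in turn with filter):

```matlab
[z, p, k] = butter(6, 0.3);                 % example 6th-order IIR as zeros/poles/gain
sos = zp2sos(z, p, k);                      % rows: [b0 b1 b2  a0 a1 a2] per section

x = randn(1, 1000);
y = x;
for s = 1:size(sos, 1)
    y = filter(sos(s, 1:3), sos(s, 4:6), y);    % cascade the 2nd-order sections
end

[b, a] = butter(6, 0.3);                    % single 6th-order polynomial form
max(abs(y - filter(b, a, x)))               % tiny: same filter, different structure
```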

Page 304: Signal Processing Columbia

2012-10-31Dan Ellis 35

Second Order Sections Factorization affects intermediate values

Original System(2 pair poles, zeros)

Factorization 1 Factorization 2

Page 305: Signal Processing Columbia

2012-10-31Dan Ellis 36

Parallel IIR Structures Can express H(z) as sum of terms (IZT)

Or, second-order terms:

Suggests parallel realization...

H (z) = consts+ 1 z

1=1

N

= 1 z1( )F z( ) z=

H (z) = 0 + 0k + 1k z1

1+1k z1 +2k z

2k

Page 306: Signal Processing Columbia

2012-10-31Dan Ellis 37

Parallel IIR Structures Sum terms become

parallel paths Poles of each SOS

are from full TF System zeros arise

from output sum Why do this?

stability/sensitivity reuse common terms

+∞01

∞11

+

z-1

z-1

-Æ11

-Æ21

+∞0

+∞02

∞12

+

z-1

z-1

-Æ12

-Æ22

x y

Page 307: Signal Processing Columbia

2012-11-12Dan Ellis 1

ELEN E4810: Digital Signal ProcessingTopic 8:

Filter Design: IIR1. Filter Design Specifications2. Analog Filter Design3. Digital Filters from Analog Prototypes

Page 308: Signal Processing Columbia

2012-11-12Dan Ellis 2

1. Filter Design Specifications The filter design process:

Design ImplementAnalysis

Pro

blem

Solution

G(z)transferfunction

performanceconstraints

• magnitude response• phase response• cost/complexity

• FIR/IIR• subtype• order

• platform• structure• ...

Page 309: Signal Processing Columbia

2012-11-12Dan Ellis 3

Performance Constraints .. in terms of magnitude response:

0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9-40

-30

-20

-10

0

frequency

gain

/ dB

passbandedge

frequency

Passband

Stopband

passbandripple

minimumstopband

attenuation

optimalfilter

will touchhere

stopbandedge

frequency

trans

ition

ban

d

G(ej )

Page 310: Signal Processing Columbia

2012-11-12Dan Ellis 4

“Best” filter:

improving one usually worsens others But: increasing filter order (i.e. cost)

can improve all three measures

Performance Constraints

smallestPassband Ripple

greatestMinimum SB Attenuation

narrowestTransition Band

Page 311: Signal Processing Columbia

2012-11-12Dan Ellis 5

Passband Ripple

Assume peak passband gain = 1then minimum passband gain =

Or, ripple

11+2

max = 20 log10 1+2 dB

PB rippleparameter

1

G(ej )Passbanddetail

0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9-40

-30

-20

-10

0G(ej )

Page 312: Signal Processing Columbia

2012-11-12Dan Ellis 6

Stopband Ripple

Peak passband gain is A× larger than peak stopband gain

Hence, minimum stopband attenuation

SB rippleparameter

s = 20 log10 1A = 20 log10 A dB

0freq

Stopbanddetail

1A

0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9-40

-30

-20

-10

0G(ej )

Page 313: Signal Processing Columbia

2012-11-12Dan Ellis 7

Filter Type Choice: FIR vs. IIR FIR

No feedback(just zeros)

Always stable Can be

linear phase High order

(20-2000) Unrelated to

continuous-time filtering

IIR Feedback

(poles & zeros) May be unstable Difficult to control

phase Typ. < 1/10th

order of FIR (4-20) Derive from

analog prototype

BUT

Page 314: Signal Processing Columbia

2012-11-12Dan Ellis 8

FIR vs. IIR If you care about computational cost→ use low-complexity IIR

(computation no object → linear phase FIR)

If you care about phase response→ use linear-phase FIR

(phase unimportant → go with simple IIR)

Page 315: Signal Processing Columbia

2012-11-12Dan Ellis 9

IIR Filter Design IIR filters are directly related to

analog filters (continuous time) via a mapping of H(s) (CT) to H(z) (DT) that

preserves many properties Analog filter design is sophisticated

signal processing research since 1940s→ Design IIR filters via analog prototype

need to learn some CT filter design

Page 316: Signal Processing Columbia

2012-11-12Dan Ellis 10

2. Analog Filter Design Decades of analysis of transistor-based

filters – sophisticated, well understood Basic choices:

ripples vs. flatness in stop and/or passband more ripples → narrower transition band

Family PassBd StopBdButterworth flat flatChebyshev I ripples flatChebyshev II flat ripples

Elliptical ripples ripples

Page 317: Signal Processing Columbia

2012-11-12Dan Ellis 11

CT Transfer Functions Analog systems: s-transform (Laplace)

frequency response still from a polynomialContinuous-time Discrete-time

Ha s( ) = ha t( )estdt

Hd z( ) = hd n[ ]znTransform

Frequencyresponse

Pole/zerodiagram

Ha j( )

Hd ej( )

s-planeRes

Imsj≠

stablepoles

stablepoles z-plane

Rez

Imz

1

ej!

s scales freq

Page 318: Signal Processing Columbia

2012-11-12Dan Ellis 12

Butterworth Filters: maximally flat in pass and stop bands. Magnitude response (LP):

$|H_a(j\Omega)|^2 = \frac{1}{1+(\Omega/\Omega_c)^{2N}}$,  with filter order N.

Ω ≪ Ωc: |Ha(jΩ)|² → 1;   Ω = Ωc: |Ha(jΩ)|² = 1/2 (the 3 dB point).

[Figure: |Ha(jΩ)| vs Ω/Ωc, sharpening as N increases.]

Page 319: Signal Processing Columbia

2012-11-12Dan Ellis 13

Butterworth Filters ≠ ≫ ≠c, |Ha(j≠)|2 →(≠c/≠)2N

flat →

@ ≠ = 0 for n = 1 .. 2N-1

d n

dn Ha j( ) 2 = 0

Log-logmagnituderesponse

6N dB/octrolloff

10-1 100 101-70

-60

-50

-40

-30

-20

-10

0

Ha(j

)/ d

B

/ c

N

N

slope →-6N dB/oct

Page 320: Signal Processing Columbia

2012-11-12Dan Ellis

[Figure: tolerance scheme for |Ha(jΩ)| with passband edge Ωp, stopband edge Ωs and stopband level 1/A.]

Butterworth Filters: how to meet design specifications?

Passband: $\frac{1}{1+(\Omega_p/\Omega_c)^{2N}} \ge \frac{1}{1+\varepsilon^2}$;   Stopband: $\frac{1}{1+(\Omega_s/\Omega_c)^{2N}} \le \frac{1}{A^2}$

with $k_1 = \frac{\varepsilon}{\sqrt{A^2-1}}$ the "discrimination" (≪ 1) and $k = \frac{\Omega_p}{\Omega_s}$ the "selectivity" (< 1).

Design Equation:  $N \ge \frac{\tfrac{1}{2}\log_{10}\big((A^2-1)/\varepsilon^2\big)}{\log_{10}(\Omega_s/\Omega_p)}$

Page 321: Signal Processing Columbia

2012-11-12Dan Ellis 15

Butterworth Filters ... but what is Ha(s)?

Traditionally, look it up in a table calculate N → normalized filter with ΩΩc = 1 scale all coefficients for desired ΩΩc

In fact,

where

Ha j( ) 2 = 11+ ( c

)2N

Ha s( ) = 1s pi( )i

pi =cej N +2 i1

2N i =1..Ns-plane

Res

ImsΩΩc×

×

××

sc

2N

= 1

odd-indexed uniform divisions of c-radius circle

Page 322: Signal Processing Columbia

2012-11-12Dan Ellis

Butterworth Example. Design a Butterworth filter with a 1 dB cutoff at 1 kHz and a minimum attenuation of 40 dB at 5 kHz.

[Figure: spec — gain within 0..−1 dB up to Ωp (1 kHz), at most −40 dB beyond Ωs (5 kHz).]

$-1\,\mathrm{dB} = 20\log_{10}\frac{1}{\sqrt{1+\varepsilon^2}} \Rightarrow \varepsilon^2 = 0.259$

$-40\,\mathrm{dB} = 20\log_{10}\frac{1}{A} \Rightarrow A = 100$

$\frac{\Omega_s}{\Omega_p} = 5 \;\Rightarrow\; N \ge \frac{\tfrac{1}{2}\log_{10}(9999/0.259)}{\log_{10}5} \approx 3.28 \;\Rightarrow\; N = 4$

Page 323: Signal Processing Columbia

2012-11-12Dan Ellis

[Figure: resulting gain/dB vs frequency/Hz for the N = 4 analog Butterworth design.]

Butterworth Example Order N = 4 will satisfy constraints;

What are ΩΩc and filter coefficients? from a table, ΩΩ-1dB = 0.845 when ΩΩc = 1⇒ ΩΩc = 1000/0.845 = 1.184 kHz

from a table, get normalized coefficients for N = 4, scale by 1184·2º

Or, use Matlab: [B,A] = butter(N,Wc,’s’);

M
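The same analog design in a couple of lines — a sketch assuming MATLAB with the Signal Processing Toolbox (buttord chooses the order and a suitable Ωc directly from the specs, so its Ωc may sit slightly differently from the hand-computed value above):

```matlab
Wp = 2*pi*1000;  Ws = 2*pi*5000;            % rad/s
[N, Wc] = buttord(Wp, Ws, 1, 40, 's');      % N = 4, Wc around 2*pi*1.2e3 rad/s
[B, A]  = butter(N, Wc, 's');               % analog Butterworth coefficients

f = linspace(1, 6000, 600);
H = freqs(B, A, 2*pi*f);
plot(f, 20*log10(abs(H))); xlabel('freq / Hz'); ylabel('gain / dB');
```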

Page 324: Signal Processing Columbia

2012-11-12Dan Ellis 18

Chebyshev I Filter Equiripple in passband (flat in stopband)→ minimize maximum error

Chebyshevpolynomialof order N

0 0.5 1 1.5 2-40

-30

-20

-10

0

gain

/ dB

N

N

rippledepth

|H(j)|2 =1

1 + 2T 2N (

p)

TN () =

cos(N cos1 ) || 1cosh(N cosh1 ) || > 1

Page 325: Signal Processing Columbia

2012-11-12Dan Ellis 19

Chebyshev I Filter

-2

0

-1

1

2T5( )

-1

0

1

2

3T5

2( )

-2 -1.5 -1 -0.5 0 0.5 1 1.5 2

0

0.5

1

1/(1+0.1 T52( ))

TN () =

cos(N cos1 ) || 1cosh(N cosh1 ) || > 1

|H(j)|2 =1

1 + 2T 2N (

p)

Page 326: Signal Processing Columbia

2012-11-12Dan Ellis 20

Chebyshev I Filter Design procedure:

desired passband ripple → " min. stopband atten., ΩΩp, ΩΩs → N :

1A2 = 1

1+ 2TN2 (s

p)

= 1

1+ 2 cosh N cosh1 s p( )[ ]2

N cosh1 A 21

( )cosh1 s

p( )1/k1, discrimination

1/k, selectivity

Page 327: Signal Processing Columbia

2012-11-12Dan Ellis 21

Chebyshev I Filter What is Ha(s)?

complicated, get from a table .. or from Matlab cheby1(N,r,Wp,’s’) all-pole; can inspect them:

..like squashed-in Butterworth (why?)-1 -0.5 0 0.5 1

-1

-0.5

0

0.5

1

Res

Ims

Page 328: Signal Processing Columbia

2012-11-12Dan Ellis 22

Chebyshev II Filter Flat in passband, equiripple in stopband

Filter has poles and zeros (some ) Complicated pole/zero pattern

Ha j( ) 2 = 1

1+2TN (

sp)

TN (s )

2

zeros on imaginary axis

constant

~1/TN(1/ΩΩ) 0 0.5 1 1.5 2-60

-50

-40

-30

-20

-10

0

gain

/ dB

N

N

peak ofstopband

ripples

Page 329: Signal Processing Columbia

2012-11-12Dan Ellis

0 0.5 1 1.5 2-60

-50

-40

-30

-20

-10

0

gain

/ dB

N N peakSB

ripples

PBrippledepth

23

Elliptical (Cauer) Filters Ripples in both passband and stopband

Complicated; not even closed form for Ν

very narrow transition band

Ha j( ) 2 = 11+2RN

2 ( p)

function; satisfiesRN(ΩΩ-1) = RN(ΩΩ)-1

zeros for ΩΩ<1 → poles for ΩΩ>1

Page 330: Signal Processing Columbia

2012-11-12Dan Ellis 24

Analog Filter Types Summary

N = 6

r = 3 dB

A = 40 dB

0 0.5 1 1.5 2-60

-50

-40

-30

-20

-10

0

0 0.5 1 1.5 2-60

-50

-40

-30

-20

-10

0

0 0.5 1 1.5 2-60

-50

-40

-30

-20

-10

0

0 0.5 1 1.5 2-60

-50

-40

-30

-20

-10

0

11c

gain

/ dB

1

gain

/ dB

1

gain

/ dB

1

gain

/ dB

Butterworth Chebyshev I

Chebyshev II Elliptical

1s

1p

r

r

A A

1p

Page 331: Signal Processing Columbia

2012-11-12Dan Ellis 25

Analog Filter Transformations All filters types shown as lowpass;

other types (highpass, bandpass..)derived via transformations

i.e.

General mapping of s-planeBUT keep LHHP & jΩΩ → jΩΩ; poles OK, frequency response ‘shuffled’

HLP s( )ˆ s = F1 s( )

HD ˆ s ( )

Desired alternateresponse; still arational polynomiallowpass

prototype

^

Page 332: Signal Processing Columbia

2012-11-12Dan Ellis 26

Lowpass-to-Highpass Example transformation:

take prototype HLP(s) polynomial replace s with

simplify and rearrange → new polynomial HHP(s)

H HP ˆ s ( ) = HLP s( ) s=p ˆ p

ˆ s

p ˆ pˆ s

^

Page 333: Signal Processing Columbia

2012-11-12Dan Ellis 27

Lowpass-to-Highpass What happens to frequency response?

Frequency axes inverted

s = j ˆ s = p ˆ pj = j p ˆ p

( )

ˆ = p ˆ p

= p ˆ = ˆ p

< p ˆ < ˆ pLP passband HP passband

> p ˆ > ˆ pLP stopband HP stopband

imaginary axisstays on self...

...freq.→freq.

p

^

p^

LPF

HPF

Page 334: Signal Processing Columbia

2012-11-12Dan Ellis 28

Transformation Example Design a Butterworth highpass filter

with PB edge -0.1dB @ 4 kHz (ΩΩp)and SB edge -40 dB @ 1 kHz (ΩΩs)

Lowpass prototype: make ΩΩp = 1

Butterworth -0.1dB @ ΩΩp=1, -40dB @ ΩΩs=4

^

^

s = ( )p ˆ pˆ s

= ( )4

N 12log10 A 21

2( )log10

sp( )

p@ 0.1dB1

1+ (p

c)10

=100.110

c = p /0.6866 =1.4564 N=5, 2N=10

Page 335: Signal Processing Columbia

2012-11-12Dan Ellis 29

Transformation Example LPF proto has

Map to HPF:

p =cej N +21

2N Res

ImsΩΩc×

×

××

HLP s( ) = c

N

s p( )=1N

H HP ˆ s ( ) = HLP s( ) s=p ˆ p

ˆ s

H HP ˆ s ( ) = cN

p ˆ pˆ s p( )=1

N= c

N ˆ s N

p ˆ p p ˆ s ( )=1

N

N zeros@ s = 0^

new poles @ s = ΩΩpΩΩp/pl^ ^

Page 336: Signal Processing Columbia

2012-11-12Dan Ellis 30

Transformation Example In Matlab:[N,Wc]=buttord(1,4,0.1,40,'s');[B,A] = butter(N, Wc, 's');[U,D] = lp2hp(B,A,2*pi*4000);

ΩΩp ΩΩs Rp Rs

-2 -1 0 1x 104

-1.5

-1

-0.5

0

0.5

1

1.5x 104

Res

Ims

0 1000 2000 3000 4000 5000-60

-50

-40

-30

-20

-10

0

/ Hz

gain

/ dB

Page 337: Signal Processing Columbia

2012-11-12Dan Ellis 31

3. Analog Protos → IIR Filters Can we map high-performance CT

filters to DT domain? Approach: transformation Ha(s)→G(z)

i.e. where s = F(z) maps s-plane ↔ z-plane:

G z( ) = Ha s( ) s=F z( )

s-plane

Res

Ims

z-plane

Rez

Imz

1

Ha(s0) G(z0)Every value of G(z)is a value of Ha(s)somewhere on the s-plane & vice-versa

s = F(z)z = F-1(s)

Page 338: Signal Processing Columbia

2012-11-12Dan Ellis 32

CT to DT Transformation Desired properties for s = F(z):

s-plane jΩΩ axis ↔ z-plane unit circle → preserves frequency response values

s-plane LHHP ↔ z-plane unit circle interior→ preserves stability of poles

s-plane

Res

Ims

jΩΩ

z-plane

Rez

Imz

1

ej!

LHHP↔UCI

Im↔u.c.

Page 339: Signal Processing Columbia

2012-11-12Dan Ellis 33

Bilinear Transformation. Solution:

$s = \frac{1-z^{-1}}{1+z^{-1}} = \frac{z-1}{z+1}$  — the Bilinear Transform; hence the inverse $z = \frac{1+s}{1-s}$ (a unique, 1:1 mapping).

Frequency axis:  $s = j\Omega \Rightarrow z = \frac{1+j\Omega}{1-j\Omega}$, so |z| = 1, i.e. on the unit circle.

Poles:  $s = \sigma + j\Omega \Rightarrow |z|^2 = \frac{(1+\sigma)^2+\Omega^2}{(1-\sigma)^2+\Omega^2}$, so σ < 0 ↔ |z| < 1.

Page 340: Signal Processing Columbia

2012-11-12Dan Ellis


34

Bilinear Transformation How can entire half-plane fit inside u.c.?

Highly nonuniform warping!

s-plane z-plane

Page 341: Signal Processing Columbia

2012-11-12Dan Ellis 35

Bilinear Transformation. What is the CT↔DT frequency relation Ω ↔ ω?

$z = e^{j\omega} \Rightarrow s = \frac{1-e^{-j\omega}}{1+e^{-j\omega}} = \frac{2j\sin(\omega/2)}{2\cos(\omega/2)} = j\tan(\omega/2)$  (unit circle ↔ imaginary axis)

i.e. $\Omega = \tan(\omega/2)$, $\omega = 2\tan^{-1}\Omega$: the infinite range of CT frequency (−∞ < Ω < ∞) maps onto the finite DT range −π < ω < π — "pack it all in!" — nonlinear, increasingly compressed as ω → π.

[Figure: ω (DT) vs Ω (CT) warping curve.]

Page 342: Signal Processing Columbia

2012-11-12Dan Ellis 36

Frequency Warping Bilinear transform makes

for all !, ΩΩ

G e j( ) = Ha j( )=2 tan1

Same gain & phase (", A...), in same ‘order’, but with warped frequency axis

0 1 2 3 4 5 6-60

-40

-20

0

-60 -40 -20 00

0.2

0.4

0.6

0.8

/

1

1

|Ha(j1)|

|G(ejt)|

t = 2tan<1(1)

tt

Page 343: Signal Processing Columbia

2012-11-12Dan Ellis 37

Design Procedure Obtain DT filter specs:

general form (LP, HP...), ‘Warp’ frequencies to CT:

Design analog filter for

→ Ha(s), CT filter polynomial Convert to DT domain:

→ G(z), rational polynomial in z Implement digital filter!

p , s, 11+ 2, 1A

p = tan p

2

s = tan s2

p ,s, 11+ 2, 1A

G z( ) = Ha s( ) s=1z11+z1

Old-style

Page 344: Signal Processing Columbia

2012-11-12Dan Ellis 38

Bilinear Transform Example DT domain requirements:

Lowpass, 1 dB ripple in PB, !p = 0.4º, SB attenuation ≥ 40 dB @ !s = 0.5º,attenuation increases with frequency

PB ripples,SB monotonic→ Chebyshev I

0.4 0. 15

-40

-10

|G(e

j)|

/ dB

Page 345: Signal Processing Columbia

2012-11-12Dan Ellis 39

Bilinear Transform Example Warp to CT domain:

Magnitude specs:1 dB PB ripple

40 dB SB atten.

p = tan p

2 = tan 0.2 = 0.7265 rad/sec

s = tan s2 = tan 0.25 =1.0 rad/sec

11+ 2

=101/20 = 0.8913 = 0.5087

1A =1040 /20 = 0.01 A =100

Page 346: Signal Processing Columbia

2012-11-12Dan Ellis

0 0.5 1 1.5 2-60

-50

-40

-30

-20

-10

0

0 0.2 0.4 0.6 0.8 1-60

-50

-40

-30

-20

-10

0

Ha(j)

/ dB

/

G(ej)

/ dB

CT DT

40

Bilinear Transform Example Chebyshev I design criteria:

Design analog filter, map to DT, check:

N cosh1 A 21

( )cosh1 s

p( )= 7.09 i.e. need N = 8

>> N=8;>> wp=0.7265;>> pbripple = 1.0;>> [B,A] = cheby1(N,pbripple,wp,'s');>> [b,a] = bilinear(B,A,.5);

M

freqs(B,A) freqz(b,a)

Page 347: Signal Processing Columbia

2012-11-12Dan Ellis 41

Other Filter Shapes Example was IIR LPF from LP prototype For other shapes (HPF, bandpass,...):

Transform LP→X in CT or DT domain...

DTspecs

CTspecs

HLP(s)

HD(s)

GLP(z)

GD(z)

Bilinearwarp

Analogdesign

CTtrans

DTtrans

Bilineartransform

Bilineartransform

Page 348: Signal Processing Columbia

2012-11-12Dan Ellis 42

DT Spectral Transformations Same idea as CT LPF→HPF mapping,

but in z-domain:

To behave well, should: map u.c. → u.c. (preserve G(ej!) values) map u.c. interior → u.c. interior (stability)

i.e. in fact, matches the definition of an

allpass filter ... replace delays with

z = F ˆ z ( )

GD ˆ z ( ) = GL z( ) z=F ˆ z ( ) = GL F ˆ z ( )( )

F ˆ z ( ) =1 ˆ z =1

F ˆ z ( ) < 1 ˆ z < 1

F ˆ z ( )

F ˆ z ( )1

Page 349: Signal Processing Columbia

2012-11-12Dan Ellis 43

DT Frequency Warping Simplest mapping

has effect of warping frequency axis:

z = F ˆ z ( ) = ˆ z 1ˆ z

tan 2( ) = 1+

1 tan ˆ 2( )

Æ > 0 :expand HF

Æ < 0 :expand LF

ˆ z e j ˆ z e j e j ˆ

1 e j ˆ

0 0.2 0.4 0.6 0.8 ^0

0.2

0.4

0.6

0.8 = 0.6

= -0.6

Page 350: Signal Processing Columbia

2012-11-12Dan Ellis 44

Another Design Example Spec:

Bandpass, from 800-1600 Hz (SR = 8kHz) Ripple = 1dB, min. stopband atten. = 60 dB 8th order, best transition band→ Use elliptical for best performance

Full design path: prewarp spec frequencies design analog LPF prototype analog LPF → BPF CT BPF → DT BPF (Bilinear)

Page 351: Signal Processing Columbia

2012-11-12Dan Ellis

Another Design Example Frequency prewarping

800 Hz @ SR = 8kHz= 800/8000 x 2 rad/sample→ L =0.2 , U =1600/8000 x 2 =0.4

L = tan L/2 = 0.3249, U = 0.7265 Note distinction between:

application’s CT domain (800-1600 Hz)and

CT prototype domain (0.3249-0.7265 rad/s)

45

Page 352: Signal Processing Columbia

2012-11-12Dan Ellis 46

Another Design Example Or, do it all in one step in Matlab:[b,a] = ellip(8,1,60, [800 1600]/(8000/2));

[Figure: magnitude (dB) and phase (degrees) responses of the resulting 8th-order elliptic bandpass vs normalized frequency.]

Page 353: Signal Processing Columbia

2012-11-19Dan Ellis 1

ELEN E4810: Digital Signal ProcessingTopic 9:

Filter Design: FIR1. Windowed Impulse Response2. Window Shapes3. Design by Iterative Optimization

Page 354: Signal Processing Columbia

2012-11-19Dan Ellis 2

1. FIR Filter Design FIR filters

no poles (just zeros) no precedent in analog filter design

Approaches windowing ideal impulse response iterative (computer-aided) design

Page 355: Signal Processing Columbia

2012-11-19Dan Ellis 3

Least Integral-Squared Error Given desired FR Hd(ej!), what is the

best finite ht[n] to approximate it?

Can try to minimize Integral Squared Error (ISE) of frequency responses:

best in what sense?

= 12 Hd e

j( ) Ht ej( ) 2 d

= DTFTht[n]

Page 356: Signal Processing Columbia

2012-11-19Dan Ellis 4

Least Integral-Squared Error. The ideal IR is hd[n] = IDTFT{Hd(e^{jω})} (usually infinite-extent). By Parseval, the ISE is

$\varepsilon = \sum_{n=-\infty}^{\infty}\big|h_d[n]-h_t[n]\big|^2$

But ht[n] only exists for n = −M..M, so

$\varepsilon = \sum_{n=-M}^{M}\big|h_d[n]-h_t[n]\big|^2 + \sum_{n<-M,\;n>M}\big|h_d[n]\big|^2$

The first term is minimized by making ht[n] = hd[n] for −M ≤ n ≤ M; the second is not altered by ht[n].

Page 357: Signal Processing Columbia

2012-11-19Dan Ellis 5

Least Integral-Squared Error Thus, minimum mean-squared error

approximation in 2M+1 point FIR is truncated IDTFT:

Make causal by delaying by M points→ h′t[n] = 0 for n < 0

2M0 M–M Mn n

ht[n] h't[n]zero

ht[n] =

12

Hd(ej)ejnd M n M

0 otherwise

Page 358: Signal Processing Columbia

2012-11-19Dan Ellis

-20 -15 -10 -5 0 5 10 15-0.05

0

0.05

0.1

0.15

0.2

n

h[n]

6

Approximating Ideal Filters From topic 6,

ideal lowpass has:

and:

!−º º!c−!c

(doubly infinite)

HLP (ej) =

1 || < c

0 c < || <

hLP [n] =sincn

n

Page 359: Signal Processing Columbia

2012-11-19Dan Ellis 7

Approximating Ideal Filters Thus, minimum ISE causal

approximation to an ideal lowpass

2M+1 points

Causal shift

0 5 10 15 20 25-0.2

0

0.2

0.4

0.6

0.8

1

1.2h LP[n]

n

hLP [n] =

sin c(nM)

(nM) 0 n 2M

0 otherwise

Page 360: Signal Processing Columbia

2012-11-19Dan Ellis

0 0.2 0.4 0.6 0.8-0.2

0

0.2

0.4

0.6

0.8

1

1.2

Ht(ej)

129 point

25point

Gibbs ‘ears’

8

Gibbs Phenomenon Truncated ideal filters have Gibbs’ Ears:

Increasing filter length→ narrower ears(reduces ISE)but height the same

→ not optimal by minimax criterion

(~9% overshoot)

Page 361: Signal Processing Columbia

2012-11-19Dan Ellis

-20 -15 -10 -5 0 5 10 15-0.05

0

0.05

0.1

0.15

0.2

n

h[n]

9

Where Gibbs comes from Truncation of hd[n] to 2M+1 points is

multiplication by a rectangular window:

Multiplication in time domain is convolution in frequency domain:

wR[n]ht[n] = hd[n] · wR[n]

wR[n] =

1 M n M

0 otherwise

g[n] · h[n] 12

G(ej)H(ej())d

Page 362: Signal Processing Columbia

2012-11-19Dan Ellis 10

Where Gibbs comes from Thus, FR of truncated response

is convolution of ideal FR and FR of rectangular window (pd.sinc):

Hd(ej!) DTFTwR[n]periodic sinc...

Ht(ej!)

sin Nxsin x

convanim.mp4

Page 363: Signal Processing Columbia

2012-11-19Dan Ellis 11

Where Gibbs comes from Rectangular window:

Mainlobe width ( 1/L) determines transition band

Sidelobe heightdetermines ripples

“periodic sinc”

≈ invariantwith length

- -0.5 0 0.50

WR(ej )

2M + 1

Mainlobe

Sidelobes

/

wR[n] =

1 M n M

0 otherwise

WR(ej) =M

n=M ejn

= sin(2M+1) 2

sin 2

Page 364: Signal Processing Columbia

2012-11-19Dan Ellis 12

2. Window Shapes for Filters Windowing (infinite) ideal response→ FIR filter:

Rectangular window has best ISE error Other “tapered windows” vary in:

mainlobe → transition band width sidelobes → size of ripples near transition

Variety of ‘classic’ windows...

ht n[ ] = hd n[ ] w n[ ]

Page 365: Signal Processing Columbia

2012-11-19Dan Ellis

-10 -5 0 5 100

0.5

1

0 0.2 0.4 0.6 0.80

5

10

15

20

-10 -5 0 5 100

0.5

1

0 0.2 0.4 0.6 0.80

2

4

6

8

-10 -5 0 5 100

0.5

1

0 0.2 0.4 0.6 0.80

2

4

6

8

-10 -5 0 5 100

0.5

1

0 0.2 0.4 0.6 0.80

2

4

6

8

n

n

n

n

13

0.42 + 0.5cos(2 n2M +1)

+0.08cos(2 2n2M +1)

Window Shapes for FIR Filters Rectangular:

Hann:

Hamming:

Blackman:

w n[ ] =1 M n M

0.5 + 0.5cos(2 n2M +1)

0.54 + 0.46cos(2 n2M +1)

double widthmainlobe

reduced 1stsidelobe

triple widthmainlobe

bigsidelobes!

Page 366: Signal Processing Columbia

2012-11-19Dan Ellis

0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9-80

-70

-60

-50

-40

-30

-20

-10

0Blackman

Hamming

Hann

RectangularW(e

j)

/ dB

14

Window Shapes for FIR Filters Comparison on dB scale:

2º 2M+1

Page 367: Signal Processing Columbia

2012-11-19Dan Ellis 15

Adjustable Windows. So far, discrete mainlobe-sidelobe tradeoffs. The Kaiser window is a parametric, continuous tradeoff:

$w[n] = \frac{I_0\big(\beta\sqrt{1-(n/M)^2}\big)}{I_0(\beta)}$,  −M ≤ n ≤ M,  where I0 is the modified zero-order Bessel function.

Empirically, for a minimum stopband attenuation of α dB:

β = 0.11(α − 8.7) for α > 50;  0.58(α − 21)^{0.4} + 0.08(α − 21) for 21 ≤ α ≤ 50;  0 for α < 21

and the required order is N ≅ (α − 8)/(2.3 Δω), where Δω is the transition width.

Page 368: Signal Processing Columbia

2012-11-19Dan Ellis 16

Windowed Filter Example. Design a 25-point FIR low-pass filter with a cutoff of 600 Hz (SR = 8 kHz). No specific transition/ripple requirements → compromise: use a Hamming window.

Convert the cutoff frequency to radians/sample:

$\omega_c = \frac{600}{8000}\cdot 2\pi = 0.15\pi$

[Figure: desired |H(e^{jω})| with 600 Hz ↔ 0.15π on an axis running to 4 kHz ↔ π.]

Page 369: Signal Processing Columbia

2012-11-19Dan Ellis 17

Windowed Filter Example.

1. Get the ideal filter impulse response (ωc = 0.15π):  $h_d[n] = \frac{\sin 0.15\pi n}{\pi n}$

2. Get the window: Hamming at N = 25 → M = 12 (N = 2M+1):  $w[n] = 0.54 + 0.46\cos\big(\tfrac{2\pi n}{25}\big)$,  −12 ≤ n ≤ 12

3. Apply the window:

$h[n] = h_d[n]\,w[n] = \frac{\sin 0.15\pi n}{\pi n}\Big(0.54 + 0.46\cos\tfrac{2\pi n}{25}\Big)$,  −12 ≤ n ≤ 12

[Figure: the resulting 25-point h[n].]
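The three steps in code, plus the one-line library equivalent — a sketch assuming MATLAB with the Signal Processing Toolbox. Note fir1 uses MATLAB's own Hamming definition and normalizes the passband gain, so its taps differ very slightly from the hand-built ones.

```matlab
wc = 0.15*pi;  M = 12;  n = -M:M;           % 25-point design

hd = sin(wc*n) ./ (pi*n);                   % 1. ideal lowpass IR ...
hd(n == 0) = wc/pi;                         %    ... fixing the 0/0 sample at n = 0
w  = 0.54 + 0.46*cos(2*pi*n/25);            % 2. Hamming window (slide's form)
h  = hd .* w;                               % 3. windowed IR (shift by M to make causal)

h_lib = fir1(24, 0.15);                     % order 24 = 25 taps, Hamming by default

freqz(h, 1, 1024);                          % inspect magnitude/phase response
```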

Page 370: Signal Processing Columbia

2012-11-19Dan Ellis 18

Freq. Resp. (FR) Arithmetic Ideal LPF has pure-real FR i.e.

µ(!) = 0, H(ej!) = |H(ej!)|

→ Can build piecewise-constant FRs by combining ideal responses, e.g. HPF:

!−º º!c−!c

!

!1

-1

1

±[n]+

-hLP[n]=

hHP[n]

i.e. H(ej!) = 1

HLP(ej!) = 1 for |!| < !c

= ±[n] - (sin!cn)/ºnwou

ldn’

t wor

k if

phas

es w

ere

nonz

ero!

+

=

0

Page 371: Signal Processing Columbia

2012-11-19Dan Ellis 19

3. Iterative FIR Filter Design Can derive filter coefficients by

iterative optimization:

Gradient descent / nonlinear optimiz’n

Filter coefsh[n]

Goodness of fitcriterion→ error "

Estimate derivatives∂"/∂h[n]

Update filterto reduce "

Desiredresponse

H(ej!)

Page 372: Signal Processing Columbia

2012-11-19Dan Ellis 20

Error Criteria

= W ( ) D e j( ) H e j( )[ ]Rpd

errormeasurement

region errorweighting

desiredresponse

actualresponse

exponent:2 → least sq∞ → minimax

= W(!)·[D(ej!) – H(ej!)]

1 2 3 4

H(ej )

W( )

R

D(ej )

( )

Page 373: Signal Processing Columbia

2012-11-19Dan Ellis 21

Minimax FIR Filters Iterative design of FIR filters with:

equiripple (minimax criterion) linear-phase → symmetric IR h[n] = (–)h[-n]

Recall: symmetric FIR filters have FR with pure-real

i.e. combo of cosines of multiples of !

H e j( ) = e jM ˜ H ( )

˜ H ( ) = a k[ ]cos k( )k=0

M a[0] = h[M]a[k] = 2h[M - k]

nM

(type Iorder 2M)

Page 374: Signal Processing Columbia

2012-11-19Dan Ellis 22

Minimax FIR Filters Now, cos(k!) can be expressed as a

polynomial in cos(!)k and lower powers e.g. cos(2!) = 2(cos!)2 - 1

Thus, we can find Æs such that

Æ[k]s easily lead to a[k]s

Mth order polynomial in cos!

H() =M

k=0

a[k]cos(k) =M

k=0

[k](cos)k

˜ H ( ) = a k[ ]cos k( )k=0

M

Page 375: Signal Processing Columbia

2012-11-19Dan Ellis 23

Minimax FIR Filters

An Mth order polynomial has at most M - 1 maxima and minima:

has at most M-1 min/max (ripples)

˜ H ( ) = k[ ] cos( )kk=0

M Mth order polynomial in cos!

˜ H ( )

5th orderpolynomial

in cos!

1

2

3

4

-1 -0.5 0 0.5 1

-0.2

-0.1

0

0.1

0.2

0 0.2 0.4 0.6 0.8

-0.2

-0.1

0

0.1

0.2

cos

cos1

2

3

4

M

k=0

[k](cos )k H()

Page 376: Signal Processing Columbia

2012-11-19Dan Ellis 24

Alternation Theorem
 Key ingredient to Parks-McClellan: H̃(ω) is the unique, best, weighted-minimax
 order-2M approximation to D(e^jω) ⇔ it has at least M + 2 “extremal” frequencies
   ω₀ < ω₁ < ... < ω_{M+1}     over the subset R of ω, where
 the error magnitude is equal at each extremal:   |ε(ω_i)| = ε
 and the peak error alternates in sign:   ε(ω_i) = −ε(ω_{i+1})

Page 377: Signal Processing Columbia

2012-11-19Dan Ellis 25

Alternation Theorem
 Hence, for a frequency response:
   if ε(ω) reaches a peak error magnitude ε at some set of extremal frequencies ω_i,
   and the sign of the peak error alternates,
   and we have at least M + 2 of them,
 then the filter is optimal minimax.
 [Figure: 10th-order filter, M = 5 – desired response D(e^jω), response magnitude H(e^jω),
  error ε(ω) and scaled error bound ε/W(ω); extrema at band edges and at local min/max]

Page 378: Signal Processing Columbia

2012-11-19Dan Ellis 26

Alternation Theorem
 By the Alternation Theorem, M + 2 extrema of alternating signs ⇒ optimal minimax filter.
 But H̃(ω) has at most M − 1 interior extrema
   ⇒ need at least 3 more from band edges;
   2 bands give 4 band edges ⇒ can afford to “miss” only one.
 Alternation rules out the transition-band edges, thus have 1 or 2 outer edges.

Page 379: Signal Processing Columbia

2012-11-19Dan Ellis 27

Alternation Theorem
 For M = 5 (10th order):
   8 extrema (M + 3, 4 band edges) – great!
   7 extrema (M + 2, 3 band edges) – OK!
   6 extrema (M + 1, only 2 transition-band edges) → NOT OPTIMAL
 [Figure: three example responses H̃₁, H̃₂, H̃₃ illustrating these three cases]

Page 380: Signal Processing Columbia

2012-11-19Dan Ellis 28

Parks-McClellan Algorithm
 To recap:
   FIR CAD constraints: D(e^jω), W(ω) → ε(ω)
   Zero-phase FIR: H̃(ω) = Σ_k α[k]·(cos ω)^k → at most M − 1 interior min/max
   Alternation theorem: optimal ⇔ ≥ M + 2 peak errors of alternating sign
 Hence, we can spot the “best” filter when we see it – but how to find it?

Page 381: Signal Processing Columbia

2012-11-19Dan Ellis 29

Parks-McClellan Algorithm
 Alternation → [H̃(ω) − D(ω)]/W(ω) must equal ±ε at M + 2 (unknown) frequencies ω_i
 Iteratively update h[n] with the Remez exchange algorithm:
   estimate/guess M + 2 extremals ω_i
   solve for α[n], ε  ( → h[n] )
   find the actual min/max in ε(ω) → new ω_i
   repeat until |ε(ω_i)| is constant
 Converges rapidly!

Page 382: Signal Processing Columbia

2012-11-19Dan Ellis 30

Parks-McClellan Algorithm
 In Matlab,
 >> h=firpm(10, [0 0.4 0.6 1], [1 1 0 0], [1 2]);
      firpm( filter order (2M), band edges ÷ π, desired magnitude at band edges, error weights per band )
 [Figure: the resulting h[n], equiripple H̃(ω), and z-plane zeros for this 10th-order design]

Page 383: Signal Processing Columbia

2012-11-28Dan Ellis 1

ELEN E4810: Digital Signal Processing
Topic 10: The Fast Fourier Transform
1. Calculation of the DFT
2. The Fast Fourier Transform algorithm
3. Short-Time Fourier Transform

Page 384: Signal Processing Columbia

2012-11-28Dan Ellis 2

1. Calculation of the DFT
 Filter design so far has been oriented to time-domain processing – cheaper!
 But: frequency-domain processing makes some problems very simple:
   x[n] → DFT → X[k] → Fourier-domain processing → Y[k] → IDFT → y[n]
   use all of x[n], or use short-time windows
 Need an efficient way to calculate the DFT

Page 385: Signal Processing Columbia

2012-11-28Dan Ellis 3

The DFT
 Recall the DFT – a discrete transform of a discrete sequence:
   X[k] = Σ_{n=0..N−1} x[n]·W_N^{kn},     W_N = e^{−j(2π/N)}
   ⇒ W_N^r has only N distinct values
 Matrix form:
   [ X[0]          [ 1   1         1         ···  1             ]   [ x[0]
     X[1]            1   W_N^1     W_N^2     ···  W_N^{N−1}         x[1]
     X[2]      =     1   W_N^2     W_N^4     ···  W_N^{2(N−1)}   ·   x[2]
     ...             ...                          ...                ...
     X[N−1] ]        1   W_N^{N−1} W_N^{2(N−1)} · W_N^{(N−1)²}   ]   x[N−1] ]
 Structure ⇒ opportunities for efficiency
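A direct MATLAB sketch of the matrix product above, useful as an O(N²) reference for the FFT that follows (N and x are example values):

% Direct N^2 DFT via the W matrix
N = 8;  x = randn(N,1);
n = 0:N-1;  k = n';
W = exp(-1j*2*pi/N).^(k*n);          % W(k+1,n+1) = WN^(k*n)
X = W*x;                             % agrees with fft(x) to rounding error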

Page 386: Signal Processing Columbia

2012-11-28Dan Ellis 4

Computational Complexity
   X[k] = Σ_{n=0..N−1} x[n]·W_N^{kn}
 N complex multiplies + N−1 complex adds per point (k), × N points (k = 0..N−1)
   cpx mult: (a+jb)(c+jd) = ac − bd + j(ad + bc) = 4 real mults + 2 real adds
   cpx add = 2 real adds
 N points: 4N² real mults, 4N² − 2N real adds

Page 387: Signal Processing Columbia

2012-11-28Dan Ellis 5

Goertzel’s Algorithm
 Now:
   X[k] = Σ_{ℓ=0..N−1} x[ℓ]·W_N^{kℓ} = W_N^{−kN} Σ_{ℓ=0..N−1} x[ℓ]·W_N^{−k(N−ℓ)}
 which looks like a convolution, i.e.
   X[k] = y_k[N]     where     y_k[n] = x_e[n] ⊛ h_k[n]
   x_e[n] = x[n] for 0 ≤ n < N,  0 for n = N
   h_k[n] = W_N^{−kn} for n ≥ 0,  0 for n < 0
 [Flowgraph: accumulator with feedback W_N^{−k}·z^{−1}; input x_e[n] (x_e[N] = 0),
  state y_k[n] with y_k[−1] = 0 and y_k[N] = X[k]]

Page 388: Signal Processing Columbia

2012-11-28Dan Ellis 6

Goertzel’s Algorithm
 Separate ‘filters’ for each X[k]:
   can calculate for just a few values of k
   no large buffer, no coefficient table
 Same complexity for the full X[k] (4N² mults, 4N² − 2N adds),
 but: can halve the multiplies by making the denominator real:
   H(z) = 1 / (1 − W_N^{−k}·z^{−1})
        = (1 − W_N^{k}·z^{−1}) / (1 − 2·cos(2πk/N)·z^{−1} + z^{−2})
 → 2 real mults per step; the numerator is evaluated only for the last step
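A minimal MATLAB sketch of this real-denominator recursion for a single bin k (N, k, x are example values; the Signal Processing Toolbox also provides goertzel() for this):

% Goertzel recursion for one bin k
N = 64;  k = 5;  x = randn(1,N);
c = 2*cos(2*pi*k/N);
s1 = 0;  s2 = 0;
for n = 1:N
  s0 = x(n) + c*s1 - s2;             % only the real multiply by c inside the loop
  s2 = s1;  s1 = s0;
end
Xk = exp(1j*2*pi*k/N)*s1 - s2;       % single complex multiply at the end
% check: Xf = fft(x); Xf(k+1) - Xk is ~1e-13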

Page 389: Signal Processing Columbia

2012-11-28Dan Ellis 7

2. Fast Fourier Transform (FFT)
 Reduce the complexity of the DFT from O(N²) to O(N·log N)
   → grows more slowly with larger N
 Works by decomposing a large DFT into several stages of smaller DFTs
 Often provided as a highly optimized library

Page 390: Signal Processing Columbia

2012-11-28Dan Ellis 8

Decimation in Time (DIT) FFT
 Can rearrange the DFT formula into 2 halves (k = 0..N−1):
   X[k] = Σ_{n=0..N−1} x[n]·W_N^{nk}
        = Σ_{m=0..N/2−1} ( x[2m]·W_N^{2mk} + x[2m+1]·W_N^{(2m+1)k} )        (arrange terms in pairs)
        = Σ_{m=0..N/2−1} x[2m]·W_{N/2}^{mk}  +  W_N^k·Σ_{m=0..N/2−1} x[2m+1]·W_{N/2}^{mk}        (group terms from each pair)
        = X₀[⟨k⟩_{N/2}]  +  W_N^k·X₁[⟨k⟩_{N/2}]
   where X₀ = N/2-pt DFT of x for even n, and X₁ = N/2-pt DFT of x for odd n

Page 391: Signal Processing Columbia

2012-11-28Dan Ellis 9

Decimation in Time (DIT) FFT
 We can evaluate an N-pt DFT as two N/2-pt DFTs (plus a few mults/adds):
   DFT_N{x[n]} = DFT_{N/2}{x₀[n]} + W_N^k·DFT_{N/2}{x₁[n]}
   (x₀[n] = x[n] for even n, x₁[n] = x[n] for odd n)
 But if DFT_N ~ O(N²), then DFT_{N/2} ~ O((N/2)²) = ¼·O(N²)
 ⇒ total computation ~ 2·¼·O(N²) = ½ the computation (+ε) of the direct DFT
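The same decomposition applied recursively gives the whole FFT. A minimal MATLAB sketch, assuming length(x) is a power of 2 (in practice you would just call fft, and real implementations use an iterative in-place form):

function X = ditfft(x)
% Recursive radix-2 decimation-in-time FFT (sketch)
x = x(:);  N = length(x);
if N == 1
  X = x;                                 % 1-point DFT is the sample itself
else
  X0 = ditfft(x(1:2:end));               % N/2-pt DFT of even-index samples
  X1 = ditfft(x(2:2:end));               % N/2-pt DFT of odd-index samples
  W  = exp(-1j*2*pi*(0:N/2-1)'/N);       % twiddle factors WN^k
  X  = [X0 + W.*X1; X0 - W.*X1];         % butterflies: X[k] and X[k+N/2]
end
end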

Page 392: Signal Processing Columbia

2012-11-28Dan Ellis 10

One-Stage DIT Flowgraph
   X[k] = X₀[⟨k⟩_{N/2}] + W_N^k·X₁[⟨k⟩_{N/2}]
 Classic FFT structure (N = 8 shown):
 [Flowgraph: even points x[0], x[2], x[4], x[6] → DFT_{N/2} → X₀[0..3];
  odd points x[1], x[3], x[5], x[7] → DFT_{N/2} → X₁[0..3];
  outputs X[0..7] formed by adding X₀ terms to X₁ terms scaled by twiddle factors W_N^0..W_N^7]
 X[4..7] are the same as X[0..3] except for the factors on the X₁[·] terms.
 “Twiddle factors” always apply to the odd-terms output – NOT mirror-image.

Page 393: Signal Processing Columbia

2012-11-28Dan Ellis 11

Multiple DIT Stages
 If decomposing one DFT_N into two smaller DFT_{N/2}s speeds things up,
 why not further divide into DFT_{N/4}s?  i.e. given
   X[k] = X₀[⟨k⟩_{N/2}] + W_N^k·X₁[⟨k⟩_{N/2}],   0 ≤ k < N
 make:
   X₀[k] = X₀₀[⟨k⟩_{N/4}] + W_{N/2}^k·X₀₁[⟨k⟩_{N/4}],   0 ≤ k < N/2
   (N/4-pt DFT of the even points in the even subset of x[n], plus the N/4-pt DFT
    of the odd points from the even subset)
 Similarly,
   X₁[k] = X₁₀[⟨k⟩_{N/4}] + W_{N/2}^k·X₁₁[⟨k⟩_{N/4}]

Page 394: Signal Processing Columbia

2012-11-28Dan Ellis 12

Two-Stage DIT Flowgraph
 [Flowgraph: four DFT_{N/4} blocks on x[0],x[4] / x[2],x[6] / x[1],x[5] / x[3],x[7]
  → X₀₀, X₀₁, X₁₀, X₁₁ → first combining stage with twiddles W_{N/2}^0..W_{N/2}^3 (different from before)
  → X₀[0..3], X₁[0..3] → second combining stage with twiddles W_N^0..W_N^7 (same as before) → X[0..7]]

Page 395: Signal Processing Columbia

2012-11-28Dan Ellis 13

Multi-stage DIT FFT
 Can keep doing this until we get down to 2-pt DFTs – the “butterfly” element:
   X[0] = x[0] + x[1]      (1 = W₂^0)
   X[1] = x[0] − x[1]      (−1 = W₂^1)
 → an N = 2^M-pt DFT reduces to M stages of twiddle factors & summations
   (the O(N²) part vanishes)
 → real mults < M·4N,  real adds < 2M·2N
 → complexity ~ O(N·M) = O(N·log₂N)

Page 396: Signal Processing Columbia

2012-11-28Dan Ellis 14

FFT Implementation Details
 Basic butterfly (at any stage): combines XX₀[r] and XX₁[r] into XX[r] and XX[r+N/2]
 using the factors W_N^r and W_N^{r+N/2}  →  2 cpx mults
 Can simplify:
   W_N^{r+N/2} = e^{−j2π(r+N/2)/N} = e^{−j2πr/N}·e^{−j2π(N/2)/N} = −W_N^r
 ⇒ just one cpx mult – i.e. SUBtract rather than ADD the W_N^r·XX₁[r] term

Page 397: Signal Processing Columbia

2012-11-28Dan Ellis 15

8-pt DIT FFT Flowgraph
 [Flowgraph: 3 stages of butterflies taking the bit-reversed-index inputs
  x[0], x[4], x[2], x[6], x[1], x[5], x[3], x[7] (binary 000, 100, 010, 110, 001, 101, 011, 111)
  to in-order outputs X[0]..X[7], with W₄ and W₈ twiddles in the later stages]
 −1’s absorbed into the summation nodes; W_N^0 disappears
 ‘In-place’ algorithm: sequential stages over the same buffer
 Bit-reversed indexing of the input

Page 398: Signal Processing Columbia

2012-11-28Dan Ellis 16

FFT for Other Values of N
 Having N = 2^M meant we could divide each stage into 2 halves = “radix-2 FFT”
 The same approach works for:
   N = 3^M → radix-3
   N = 4^M → radix-4 – a more optimized radix-2
   etc...
 Composite N = a·b·c·d → mixed radix (different N/r-point FFTs at each stage)
 .. or just zero-pad to make N = 2^M

Page 399: Signal Processing Columbia

2012-11-28Dan Ellis 17

Inverse FFT
 Recall the IDFT:
   x[n] = (1/N)·Σ_{k=0..N−1} X[k]·W_N^{−nk}
 Thus:
   N·x*[n] = Σ_{k=0..N−1} ( X[k]·W_N^{−nk} )* = Σ_{k=0..N−1} X*[k]·W_N^{nk}
 i.e. the forward DFT of x′[n] = X*[k]|_{k=n} – a time sequence made from the spectrum –
 with the only differences from the forward DFT being the conjugations and the 1/N.
 Hence, use the FFT to calculate the IFFT:
   x[n] = (1/N)·[ Σ_{k=0..N−1} X*[k]·W_N^{nk} ]*
 [Flowgraph: Re{X[k]}, Im{X[k]} → forward DFT → scale by 1/N and −1/N → Re{x[n]}, Im{x[n]}
  (a pure-real flowgraph)]
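In MATLAB this trick is one line (a sketch with example data; ifft() of course does this for you):

% IFFT via a forward FFT
N = 16;  X = fft(randn(1,N));
x = conj(fft(conj(X)))/N;            % equals ifft(X) to rounding error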

Page 400: Signal Processing Columbia

2012-11-28Dan Ellis 18

DFT of Real Sequences
 If x[n] is pure-real, the DFT wastes multiplies:
   real x[n] → conjugate-symmetric X[k] = X*[−k]
 Given two real sequences x[n] and w[n], call y[n] = j·w[n],  v[n] = x[n] + y[n]
   N-pt DFT:  V[k] = X[k] + Y[k]
   but:  V[k] + V*[−k] = X[k] + X*[−k] + Y[k] + Y*[−k] = 2X[k]
         (since X*[−k] = X[k] and Y*[−k] = −Y[k] for real x, w)
   ⇒ X[k] = ½·(V[k] + V*[−k]),   W[k] = −j/2·(V[k] − V*[−k])
 i.e. compute the DFTs of two N-pt real sequences with a single N-pt DFT
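A minimal MATLAB sketch of packing two real sequences into one complex FFT (N, x, w are example values):

% Two N-pt real DFTs from one complex N-pt FFT
N = 16;  x = randn(1,N);  w = randn(1,N);
V  = fft(x + 1j*w);
Vr = conj(V([1 end:-1:2]));          % V*[-k], i.e. V*[(N-k) mod N]
Xk = (V + Vr)/2;                     % = fft(x)
Wk = (V - Vr)/(2*1j);                % = fft(w)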

Page 401: Signal Processing Columbia

2012-11-28Dan Ellis 19

3. Short-Time Fourier Transform (STFT)
 The Fourier Transform (e.g. DTFT) gives the spectrum of an entire sequence.
 How to see a time-varying spectrum? e.g. slow AM of a sinusoid carrier:
   x[n] = (1 + cos(2πn/N))·cos(ω₀n)
 [Figure: x[n] for n = 0..1000, with the amplitude envelope slowly varying between 0 and 2]

Page 402: Signal Processing Columbia

2012-11-28Dan Ellis 20

Fourier Transform of AM Sine
 The spectrum of the whole sequence indicates the modulation only indirectly...
 ... as cancellation between closely-tuned sines:
   2·cos A·cos B = cos(A+B) + cos(A−B)
 [Figure: the AM signal decomposed into sinusoids at DFT bins k−1, k, k+1 with relative
  weights N/2, N, N/2; |X[k]| shows the corresponding three peaks]

Page 403: Signal Processing Columbia

2012-11-28Dan Ellis 21

Fourier Transform of AM Sine
 Sometimes we’d rather separate modulation and carrier:
   x[n] = A[n]·cos(ω₀n),   where A[n] varies on a different (slower) timescale
 One approach:
   chop x[n] into short sub-sequences ..
   .. where the slow modulator is ~ constant
   DFT spectrum of the pieces → shows the variation

Page 404: Signal Processing Columbia

2012-11-28Dan Ellis 22

FT of Short Segments
 Break up x[n] into successive, shorter chunks of length N_FT, then DFT each:
 [Figure: x[n] (N = 1024 points) split into x₀[n]..x₇[n] with N_FT = N/8;
  the DFT magnitudes X₀[k]..X₇[k] each peak at the carrier bin k₀]
 Shows the amplitude modulation of the energy at ω₀

Page 405: Signal Processing Columbia

2012-11-28Dan Ellis 23

The Spectrogram
 Plot the successive DFTs X_i[k] as columns in time-frequency:
 [Figure: X₀[k]..X₇[k] stacked as an image X[k, n] over n = 0..1024;
  time hop size (between successive frames) = 128 points]
 This image is called the Spectrogram

Page 406: Signal Processing Columbia

2012-11-28Dan Ellis 24

Short-Time Fourier Transform
 Spectrogram = STFT magnitude plotted on the time-frequency plane
   – intensity as a function of time & frequency
 STFT is (DFT form):
   X[k, n₀] = Σ_{n=0..N_FT−1} x[n₀ + n]·w[n]·e^{−j2πkn/N_FT}
 where k = frequency index, n₀ = time index, the sum runs over N_FT points of x
 starting at n₀, w[n] is the window, and the exponential is the DFT kernel
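A direct MATLAB sketch of this definition; the window length NFT, hop size H, and test signal are example values (spectrogram()/specgram() implement the same computation):

% Direct STFT: one short windowed DFT per column
NFT = 256;  H = 128;
w   = 0.54 - 0.46*cos(2*pi*(0:NFT-1)/(NFT-1));   % Hamming analysis window
x   = randn(1, 4096);                            % some signal
nframes = floor((length(x)-NFT)/H) + 1;
X = zeros(NFT, nframes);
for f = 1:nframes
  n0 = (f-1)*H;
  X(:,f) = fft(x(n0 + (1:NFT)) .* w).';          % X[k, n0]
end
S = abs(X(1:NFT/2+1, :));                        % spectrogram = |STFT|, k = 0..NFT/2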

Page 407: Signal Processing Columbia

2012-11-28Dan Ellis 25

STFT Window Shape
 w[n] provides the ‘time localization’ of the STFT
   e.g. a rectangular w[n] selects x[n], n₀ ≤ n < n₀+N_W
 But: the resulting spectrum has the same problems as windowing for FIR design.
 DTFT form of the STFT:
   X(e^jω, n₀) = DTFT{ x[n₀+n]·w[n] } = (1/2π)·∫ e^{jθn₀}·X(e^jθ)·W(e^{j(ω−θ)}) dθ
 i.e. the spectrum of the short-time window is convolved with the (twisted) parent spectrum

Page 408: Signal Processing Columbia

2012-11-28Dan Ellis 26

STFT Window Shape
 e.g. if x[n] is a pure sinusoid, X(e^jω) ⊛ W(e^jω) gives
   blurring (mainlobe) + ghosting (sidelobes)
 Hence, use a tapered window for w[n], e.g. Hamming:
   w[n] = 0.54 + 0.46·cos( 2πn / (2M+1) )
   → sidelobes < −40 dB
 [Figure: Hamming w[n] and its spectrum W(e^jω)]

Page 409: Signal Processing Columbia

2012-11-28Dan Ellis 27

STFT Window Length
 The length of w[n] sets the temporal resolution:
   a short window measures only local properties
   a longer window averages the spectral character
 Window length ∝ 1/(mainlobe width):
   shorter window → more blurred spectrum
   more time detail ↔ less frequency detail
 [Figure: x[n]·w_S[n] (short window, N₁ points) vs. x[n]·w_L[n] (long window, N₂ points),
  and the window spectra W_S(e^jω), W_L(e^jω) with first zeros at 4π/N₁ and 4π/N₂]

Page 410: Signal Processing Columbia

2012-11-28Dan Ellis 28

STFT Window Length
 Can illustrate the time-frequency tradeoff on the time-frequency plane:
 [Figure: alternate tilings of the time-frequency plane – a half-length window gives
  half as many DFT samples but twice the time resolution; disks show the ‘blurring’
  due to window length, and the area of each disk is constant]
 → Uncertainty principle:  Δf·Δt ≥ k

Page 411: Signal Processing Columbia

2012-11-28Dan Ellis 29

Spectrograms of Real Sounds
 [Figure: a time-domain waveform, the corresponding successive short DFTs, and the
  resulting spectrogram (freq / Hz 0–4000 vs. time / s, intensity / dB)]
 The individual time-frequency cells merge into a continuous image

Page 412: Signal Processing Columbia

2012-11-28Dan Ellis 30

Narrowband vs. Wideband
 Effect of varying the window length:
 [Figure: spectrograms of the same speech with a 256-pt window (“narrowband” – fine
  frequency detail, blurred in time) and a 48-pt window (“wideband” – fine time detail,
  blurred in frequency); freq / Hz 0–4000 vs. time / s, level / dB]

Page 413: Signal Processing Columbia

2012-11-28Dan Ellis 31

Spectrogram in Matlab
 >> [d,sr]=wavread('mpgr1_sx419.wav');
 >> Nw=256;                 % (hann) window length
 >> specgram(d,Nw,sr)       % actual sampling rate (to label the time axis)
 >> caxis([-80 0])
 >> colorbar
 [Figure: resulting spectrogram, Frequency 0–8000 Hz vs. Time, color scale −80..0 dB]

Page 414: Signal Processing Columbia

2012-11-28Dan Ellis 32

STFT as a Filterbank
 Consider one ‘row’ of the STFT:
   X_k[n₀] = Σ_{n=0..N−1} x[n₀ + n]·w[n]·e^{−j2πkn/N} = (h_k ⊛ x)[n₀]
   i.e. a convolution with the complex IR  h_k[n] = w[−n]·e^{j2πkn/N}  – just one frequency
 [Figure: Re and Im parts of the complex impulse response h_k[n] for one bin]
 Each STFT row is the output of a filter (subsampled by the STFT hop size)

Page 415: Signal Processing Columbia

2012-11-28Dan Ellis 33

STFT as a Filterbank
 If   h_k[n] = w[−n]·e^{j2πkn/N}
 then   H_k(e^jω) = W(e^{−j(ω − 2πk/N)})      (shift-in-ω)
 Each STFT row is the same bandpass response, defined by W(e^jω),
 frequency-shifted to a given DFT bin:
 a bank of identical, frequency-shifted bandpass filters – a “filterbank”
 [Figure: W(e^jω) at baseband and its shifted copies H₁(e^jω), H₂(e^jω), ...]

Page 416: Signal Processing Columbia

2012-11-28Dan Ellis 34

STFT Analysis-Synthesis
 The IDFT of STFT frames can reconstruct (part of) the original waveform
 e.g. if   X[k, n₀] = DFT{ x[n₀+n]·w[n] }
 then   IDFT{ X[k, n₀] } = x[n₀+n]·w[n]
 Can shift by n₀ and combine to get x̂[n]
 Could divide by w[n−n₀] to recover x[n]...
 [Figure: x[n] and the windowed segment x[n]·w[n−n₀]]

Page 417: Signal Processing Columbia

2012-11-28Dan Ellis 35

STFT Analysis-Synthesis
 Dividing by small values of w[n] is bad
 Prefer to overlap the windows:
   i.e. sample X[k, n₀] at n₀ = r·H, where the hop size H = N/2 (for example)
   and N is the window length
 Then
   x̂[n] = Σ_r x[n]·w[n − rH] = x[n]     if     Σ_r w[n − rH] = 1
 [Figure: x[n] and the overlapping windowed segments x[n]·w[n − r·H]]

Page 418: Signal Processing Columbia

2012-11-28Dan Ellis 36

STFT Analysis-Synthesis
 Hann or Hamming windows with 50% overlap sum to a constant:
   (0.54 + 0.46·cos(2πn/N)) + (0.54 + 0.46·cos(2π(n − N/2)/N)) = 1.08
 Can modify individual frames of X[k, n] and then reconstruct:
   complex, time-varying modifications
   the tapered overlap makes things OK
 [Figure: w[n], w[n − N/2], and their constant sum w[n] + w[n − N/2]]
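A quick MATLAB check of this constant-overlap-add property (a sketch with example values; it uses the periodic form of the Hamming window, which is the window above shifted by N/2):

% 50%-overlapped Hamming windows sum to 1.08 (away from the ends)
N = 64;  H = N/2;  n = 0:N-1;
w = 0.54 - 0.46*cos(2*pi*n/N);       % periodic Hamming
wsum = zeros(1, 5*H + N);
for r = 0:5
  idx = r*H + (1:N);
  wsum(idx) = wsum(idx) + w;
end
% wsum(H+1 : end-H) is the constant 1.08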

Page 419: Signal Processing Columbia

2012-11-28Dan Ellis 37

STFT Analysis-Synthesis
 e.g. Noise reduction:
 [Figure: STFT of the original speech; the speech corrupted by white noise;
  an energy-threshold mask applied to the noisy STFT before resynthesis]

Page 420: Signal Processing Columbia

2012-12-05Dan Ellis 1

ELEN E4810: Digital Signal Processing
Topic 11: Continuous Signals
1. Sampling and Reconstruction
2. Quantization

Page 421: Signal Processing Columbia

2012-12-05Dan Ellis 2

1. Sampling & Reconstruction
 DSP must interact with an analog world:
   WORLD → Sensor → x(t) → [ADC: anti-alias filter → sample and hold → A to D] → x[n]
   → DSP → y[n] → [DAC: D to A → reconstruction filter] → y(t) → Actuator → WORLD

Page 422: Signal Processing Columbia

2012-12-05Dan Ellis 3

Sampling: Frequency Domain
 Sampling: CT signal → DT signal by recording values at ‘sampling instants’:
   g[n] = g_a(nT)      (discrete ← continuous; sampling period T
                         → sampling frequency Ω_samp = 2π/T rad/sec)
 What is the relationship of the spectra? i.e. relate
   CTFT:   G_a(jΩ) = ∫ g_a(t)·e^{−jΩt} dt        (Ω in rad/second)
   DTFT:   G(e^jω) = Σ_n g[n]·e^{−jωn}           (ω in rad/sample)

Page 423: Signal Processing Columbia

2012-12-05Dan Ellis 4

Sampling
 DT signals have the same ‘content’ as CT signals gated by an impulse train:
   p(t) = Σ_{n=−∞..∞} δ(t − nT)      (CT delta function)
   g_p(t) = g_a(t)·p(t)
 is a CT signal with the same information as the DT sequence g[n]
 – a ‘sampled’ signal: still continuous, but with discrete values
 [Figure: g_a(t) × p(t) (impulses at spacing T) = g_p(t)]

Page 424: Signal Processing Columbia

2012-12-05Dan Ellis 5

Spectra of sampled signals
 Given the CT signal  g_p(t) = Σ_{n=−∞..∞} g_a(nT)·δ(t − nT),  its CTFT spectrum is
   G_p(jΩ) = F{g_p(t)} = Σ_n g_a(nT)·F{δ(t − nT)}        (by linearity)
            = Σ_n g_a(nT)·e^{−jΩnT}
 Compare to the DTFT:
   G(e^jω) = Σ_n g[n]·e^{−jωn}
 i.e.   G(e^jω) = G_p(jΩ)|_{Ω = ω/T}      (ω ~ ΩT, Ω ~ ω/T;  ω = π ↔ Ω = π/T = Ω_samp/2)

Page 425: Signal Processing Columbia

2012-12-05Dan Ellis 6

Spectra of sampled signals
 Also, note that  p(t) = Σ_n δ(t − nT)  is periodic, thus has a Fourier Series:
   p(t) = (1/T)·Σ_{k=−∞..∞} e^{j(2π/T)kt}
   since   c_k = (1/T)·∫_{−T/2..T/2} p(t)·e^{−j2πkt/T} dt = 1/T
 But   F{ e^{jΩ₀t}·x(t) } = X(j(Ω − Ω₀))      (shift in frequency),  so
   G_p(jΩ) = (1/T)·Σ_k G_a( j(Ω − k·Ω_samp) )
 – a scaled sum of replicas of G_a(jΩ), shifted by multiples of the sampling frequency Ω_samp

Page 426: Signal Processing Columbia

2012-12-05Dan Ellis 7

CT and DT Spectra
 So:
   G(e^jω) = G_p(jΩ)|_{Ω = ω/T} = (1/T)·Σ_k G_a( j(ω/T − k·2π/T) )
   G(e^{jΩT}) = (1/T)·Σ_k G_a( j(Ω − k·Ω_samp) )
 [Figure: g_a(t) bandlimited at Ω_M with CTFT G_a(jΩ); multiplying by p(t) ↔ convolving
  with P(jΩ) gives G_p(jΩ) = shifted, 1/T-scaled copies at multiples of Ω_samp; the DTFT
  G(e^jω) is the same picture with the frequency axis rescaled so that Ω_samp ↔ 2π]

Page 427: Signal Processing Columbia

2012-12-05Dan Ellis 8

Avoiding aliasing
 The sampled analog signal has spectrum:
 [Figure: G_p(jΩ) = the “baseband” copy G_a(jΩ) plus “aliases” centered at ±Ω_samp, ±2Ω_samp, ...]
 If g_a(t) is bandlimited to ±Ω_M rad/sec and the sampling frequency Ω_samp is large enough,
 there is no overlap between the aliases
 → can recover g_a(t) from g_p(t) by low-pass filtering

Page 428: Signal Processing Columbia

2012-12-05Dan Ellis 9

Aliasing & The Nyquist Limit
 If the bandlimit Ω_M is too large, or the sampling rate Ω_samp is too small,
 the aliases will overlap:
 [Figure: G_p(jΩ) with overlapping copies when Ω_samp − Ω_M < Ω_M]
 The spectral effect cannot be filtered out → cannot recover g_a(t)
 Avoid by:   Ω_samp − Ω_M ≥ Ω_M   i.e.   Ω_samp ≥ 2Ω_M      (Sampling theorem)
 i.e. bandlimit g_a(t) at ≤ Ω_samp/2      (the Nyquist frequency)
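A two-line MATLAB illustration of the overlap (a sketch with assumed example frequencies): a tone above the Nyquist frequency is indistinguishable, after sampling, from one below it.

% 5 kHz sampled at 8 kHz aliases to 3 kHz
fs = 8000;  n = 0:799;
x  = cos(2*pi*5000/fs*n);
max(abs(x - cos(2*pi*3000/fs*n)))    % ~1e-12: the two tones give the same samples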

Page 429: Signal Processing Columbia

2012-12-05Dan Ellis 10

Anti-Alias Filter
 To understand speech, need ~ 3.4 kHz → 8 kHz sampling rate (i.e. good up to 4 kHz)
 Limit of hearing ~ 20 kHz → 44.1 kHz sampling rate for CDs
   (the extra margin leaves ‘space’ for the filter rolloff)
 Must remove energy above the Nyquist frequency with a LPF before sampling:
 the “anti-alias” filter
   [ADC = anti-alias filter → sample & hold → A to D]

Page 430: Signal Processing Columbia

2012-12-05Dan Ellis 11

Sampling Bandpass Signals
 The signal is not always in ‘baseband’ around Ω = 0 ... it may be at higher Ω:
   bandwidth ΔΩ = Ω_H − Ω_L
 [Figure: bandpass spectrum occupying ±(Ω_L..Ω_H); after sampling, copies at multiples
  of Ω_samp interleave without overlapping]
 If the aliases from sampling don’t overlap, there is no aliasing distortion
 and the signal can still be recovered.
 Basic limit:   Ω_samp/2 ≥ bandwidth ΔΩ

Page 431: Signal Processing Columbia

2012-12-05Dan Ellis 12

Reconstruction
 To turn g[n] back into ĝ_a(t):
   make a continuous impulse train ĝ_p(t)
   lowpass filter it to extract the baseband → ĝ_a(t)
 [Figure: g[n] → impulse train ĝ_p(t) → lowpass (cutoff Ω_samp/2) → ĝ_a(t),
  with the spectra G(e^jω), Ĝ_p(jΩ), Ĝ_a(jΩ)]
 The ideal reconstruction filter is a brickwall, i.e. a sinc – not realizable
 (especially in analog!) – so use something with a finite transition band...

Page 432: Signal Processing Columbia

2012-12-05Dan Ellis 13

2. Quantization
 The course so far has been about discrete time, i.e. quantization of time.
 Computer representation of signals also quantizes level (e.g. 16-bit integer word).
 Level quantization introduces an error between the ideal & actual signal → noise.
 Resolution (# bits) affects data size → quantization is critical for compression:
 smallest data ↔ coarsest quantization

Page 433: Signal Processing Columbia

2012-12-05Dan Ellis 14

Quantization
 Quantization is performed in the A-to-D converter:
   quantized signal   x̂ = Q{x}
   quantization error   e = x − x̂
 Quantization has a simple transfer curve:
 [Figure: continuous x(t) → sampled, quantized x̂[n]; staircase transfer curve Q{x} vs. x,
  with the sawtooth error e = x − Q{x}]

Page 434: Signal Processing Columbia

2012-12-05Dan Ellis 15

Quantization noise
 Can usually model quantization as additive white noise:
   x̂[n] = x[n] − e[n]      (with e defined as above)
 i.e. e[n] uncorrelated with itself or with the signal x
 [Figure: bits ‘cut off’ by quantization; hard amplitude limit at full scale]

Page 435: Signal Processing Columbia

2012-12-05Dan Ellis 16

Quantization SNR
 A common measure of noise is the Signal-to-Noise Ratio (SNR) in dB:
   SNR = 10·log₁₀( σ_x² / σ_e² ) dB      (signal power / noise power)
 When |x| >> 1 LSB, the quantization noise has an ~ uniform distribution P(e[n])
 over (−Δ/2, +Δ/2), where Δ is the quantizer step, so
   σ_e² = Δ²/12

Page 436: Signal Processing Columbia

2012-12-05Dan Ellis 17

Quantization SNR
 Now, σ_x² is limited by the dynamic range of the converter (to avoid clipping)
 e.g. b+1 bit resolution (including sign): output levels run −2^b·Δ .. (2^b − 1)·Δ,
 so   Δ = R_FS·2^{−(b+1)}   where R_FS is the full-scale range (−R_FS/2 .. R_FS/2).
   SNR = 10·log₁₀[ σ_x² / (R_FS²·2^{−2(b+1)}/12) ]
       ≈ 6b + 16.8 − 20·log₁₀(R_FS/σ_x) dB      (the last term depends on the signal)
 i.e. ~ 6 dB of SNR per bit
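An empirical MATLAB check of the ~6 dB/bit rule (a sketch; the word size and signal level are assumed example values):

% Quantization SNR vs. the 6b + 16.8 - 20*log10(RFS/sigma_x) prediction
b     = 11;                          % b+1 = 12-bit quantizer (including sign)
RFS   = 2;                           % full-scale range -1 .. +1
Delta = RFS/2^(b+1);                 % quantizer step
x     = 0.25*randn(1, 1e5);          % sigma_x well inside full scale (rare clipping)
xq    = Delta*round(x/Delta);        % uniform rounding quantizer
e     = x - xq;
SNR   = 10*log10(var(x)/var(e))      % ~65 dB, close to the formula above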

Page 437: Signal Processing Columbia

2012-12-05Dan Ellis 18

Coefficient Quantization
 Quantization affects not just the signal but the filter constants too
   .. depending on the implementation
   .. they may have a different resolution
 Some coefficients are very sensitive to small changes,
 e.g. poles near the unit circle:
 [Figure: z-plane pole-zero plot – a high-Q pole pushed outside the unit circle
  by coefficient rounding becomes unstable]

Page 438: Signal Processing Columbia

2012-12-05Dan Ellis 19

Project Presentations
 10:10 Will Silberman & Anton Mayer
 10:20 Yin Cui
 10:30 Joe Ellis, Miles Sherman, Amit Sarkar
 10:40 Andrea Zampieri & Francesco Vicario
 10:50 Binyan Chen & Chen-Hsin Wang
 11:00 Michael Carapezza & Ugne Klibaite
 11:10 Kapil Wattamwar, Ansh Johri, Nida Dangra
 11:20 Nayeon Kim & Ban Wang

Page 439: Signal Processing Columbia

2012-12-12Dan Ellis 1

ELEN E4810: Digital Signal Processing
Review Session
1. Filter design
2. Allpass & Minimum phase
3. IIR filter design
4. FIR filter design
5. Implementations
6. FFT

Page 440: Signal Processing Columbia

2012-12-12Dan Ellis

Filter Design
 Filters select frequency regions
 Performance margins:
 [Figure: |G(e^jω)| tolerance scheme – passband with passband ripple and passband edge
  frequency, transition band, stopband edge frequency, minimum stopband attenuation;
  the optimal filter will touch the spec limits]

Page 441: Signal Processing Columbia

2012-12-12Dan Ellis

Allpass Filters
 Constant gain, variable phase
 Mirror-image polynomial → reciprocal poles/zeros:
   A_M(z) = ±( d_M + d_{M−1}·z^{−1} + ... + d_1·z^{−(M−1)} + z^{−M} )
            / ( 1 + d_1·z^{−1} + ... + d_{M−1}·z^{−(M−1)} + d_M·z^{−M} )
          = ±z^{−M}·D_M(z^{−1}) / D_M(z)
 [Figure: z-plane plot with poles inside the unit circle and zeros at the reciprocal positions]

Page 442: Signal Processing Columbia

2012-12-12Dan Ellis

Minimum & Maximum Phase
 Min. phase + Allpass = Max. phase
 [Figure: a minimum-phase system with zeros at ξ, ξ* inside the unit circle, cascaded with
  an allpass whose poles cancel those zeros (pole-zero cancellation) and whose zeros lie at
  the reciprocal positions 1/ξ, 1/ξ*, giving the maximum-phase system]

Page 443: Signal Processing Columbia

2012-12-12Dan Ellis

Analog Filter Types
 flat | ripple × passband | stopband:
   Butterworth – flat passband, flat stopband
   Chebyshev I – equiripple passband, flat stopband
   Chebyshev II – flat passband, equiripple stopband
   Elliptical – equiripple passband and stopband
 [Figure: the four magnitude responses (gain / dB vs. Ω) with passband edge Ω_p = 1,
  passband ripple δ_r and stopband attenuation A marked; s-plane plot of the Butterworth
  poles, equally spaced around a circle of radius Ω_c in the left half-plane]

Page 444: Signal Processing Columbia

2012-12-12Dan Ellis

Bilinear Transform
 Analog IIR to discrete-time IIR:
   s = (1 − z^{−1}) / (1 + z^{−1}) = (z − 1)/(z + 1)
 maps the left half s-plane to the inside of the unit circle,
 and the imaginary (jΩ) axis to the unit circle e^jω.
 “Pre-warp” to design:   ω = 2·tan^{−1}(Ω),   so
   G(e^jω) = H_a(jΩ)|_{Ω = tan(ω/2)}
 [Figure: s-plane → z-plane mapping; |H_a(jΩ)| vs. |G(e^jω)| showing the tan warping
  of the frequency axis, ω₁ = 2·tan^{−1}(Ω₁)]

Page 445: Signal Processing Columbia

2012-12-12Dan Ellis

FIR Filters
 Linear-phase FIR filters, e.g. h[n] = h[N−n]:
   H(e^jω) = Σ_{n=0..N} h[n]·e^{−jωn}
           = e^{−jωN/2}·( h[N/2] + 2·Σ_{n=1..N/2} h[N/2 − n]·cos(nω) )
   → linear phase, delay D = −θ(ω)/ω = N/2
 Windowed ideal responses:
   Hd(e^jω) ⊛ DTFT{w_R[n]} → H_t(e^jω)
 [Figure: ideal response, window spectrum, and the resulting truncated h[n];
  zero-phase H̃(ω) and phase term ∠(ω)]

Page 446: Signal Processing Columbia

2012-12-12Dan Ellis

Parks-McClellan FIR design
 Iterative optimization (Remez exchange) to find the best FIR filter
 At least M+2 alternating extrema (order = 2M, length = order + 1)
 >> h=firpm(10, [0 0.4 0.6 1], [1 1 0 0], [1 2]);
 [Figure: the resulting h[n], equiripple H̃(ω), and z-plane zeros]

Page 447: Signal Processing Columbia

2012-12-12Dan Ellis

Implementations
 The transfer-function polynomial indicates the implementation
 Decompose into common blocks (second-order sections)
 [Figure: flowgraph of cascaded second-order sections with coefficients α₁, α₂, β₂,
  gain p₀, delays z^{−1}, input x and output y]

Page 448: Signal Processing Columbia

2012-12-12Dan Ellis

FFT
   X[k] = Σ_{n=0..N−1} x[n]·W_N^{nk}
        = Σ_{m=0..N/2−1} x[2m]·W_{N/2}^{mk}  +  W_N^k·Σ_{m=0..N/2−1} x[2m+1]·W_{N/2}^{mk}
        = X₀[⟨k⟩_{N/2}] + W_N^k·X₁[⟨k⟩_{N/2}]
   (N/2-pt DFT of x for even n, and for odd n)
 [Figure: two-stage decimation-in-time flowgraph for N = 8, as in Topic 10 – first-stage
  twiddles W_{N/2}^0..W_{N/2}^3 (different from before), second-stage twiddles W_N^0..W_N^7 (same as before)]

