8/11/2019 Adaptive Signal Processing , ebook . Ingle
Adaptive Signal Processing (Adaptiivne signaalitöötlus)
Leon H. Sibul, Spring semester 2007
Course Outline (Õppekava)

I. Introduction. Overview of applications and basic concepts of adaptive signal processing.
1. Brief overview of applications:
   a. Linear prediction
   b. Speech coding
   c. Noise cancellation
   d. Echo cancellation
   e. Adaptive filtering
   f. System identification
   g. Equalization and deconvolution
   h. Adaptive beamforming and array processing
   i. Signal separation
2. Introduction to basic concepts of optimization and adaptive signal processing.
   a. Optimization criteria: mean square error, minimum variance, maximum signal-to-noise ratio, maximum likelihood, bit error rate.
   b. Introduction to basic adaptive algorithms: gradient search, the least mean-square (LMS) algorithm, stochastic approximation, nonlinear algorithms, linear algebra and orthogonal decomposition algorithms.
3. Matrix notation and basic linear algebra.
II. Theory of optimum and adaptive systems.
1. Review of discrete-time stochastic processes.
2. Mean-square error.
3. Finite impulse response Wiener filters.
4. Gradient descent algorithm.
5. Stability, convergence, and properties of error surfaces.
6. Examples of applications.

III. Basic adaptive algorithms and their properties.
1. The least mean-square (LMS) algorithm.
   a. Derivation of the basic LMS algorithm.
   b. Learning curve, time constants, misadjustment, and stability.
   c. Step-size control.
   d. Variations of the LMS algorithm.
2. Recursive least-squares algorithm.
3. Lattice algorithms.
4. Linear algebra and orthogonal decomposition algorithms.
5. Frequency-domain algorithms.

IV. Applications.
1. Linear prediction and speech coding.
2. Noise cancellation.
3. Echo cancellation.
4. Adaptive beamforming and array processing.
   a. Linear adaptive arrays.
   b. Constrained adaptive arrays: minimum-variance desired-look constraint, Frost beamformer.
   c. Generalized sidelobe canceller.
   d. Robust adaptive arrays.
Bibliography

1. Vary, P. and Martin, R., Digital Speech Transmission: Enhancement, Coding and Error Concealment, John Wiley & Sons, Chichester, England, 2006.
2. Schobben, D. W. E., Real Time Concepts in Acoustics, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2001.
3. Poularikas, A. D. and Ramadan, Z. M., Adaptive Filter Primer with MATLAB, CRC/Taylor & Francis, Boca Raton, FL, USA, 2006.
4. Haykin, S., Adaptive Filter Theory, Third Ed., Prentice Hall, Upper Saddle River, NJ, USA, 1996.
5. Alexander, S. T., Adaptive Signal Processing: Theory and Applications, Springer-Verlag, New York, USA, 1986.
6. Widrow, B. and Stearns, S. D., Adaptive Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, USA, 1985.
7. Adaptive Signal Processing, edited by L. H. Sibul, IEEE Press, New York, USA, 1987.
8. Monzingo, R. A. and Miller, T. W., Introduction to Adaptive Arrays, Wiley-Interscience, New York, USA, 1980.
9. Swanson, C. D., Signal Processing for Intelligent Sensors, Marcel Dekker, New York, USA, 2000.
10. Golub, G. H. and Van Loan, C. F., Matrix Computations, The Johns Hopkins University Press, Baltimore, MD, USA, 1983.
11. Tammeraid, Ivar, Lineaaralgebra rakendused (Applications of Linear Algebra), TTÜ Kirjastus, Tallinn, Estonia, 1999. Ch. 2, Computational methods of linear algebra; Sec. 2.3, Singular value decomposition.
12. Van Trees, H. L., Optimum Array Processing, Part IV of Detection, Estimation and Modulation Theory, Wiley-Interscience, New York, USA, 2002. Chapter 6, Optimum Waveform Estimation; Chapter 7, Adaptive Beamformers; Appendix A, Matrix Operations.
13. Allen, B. and Ghavami, M., Adaptive Array Systems: Fundamentals and Applications, Wiley, Chichester, England, 2005.
14. Cichocki, A. and Amari, S.-I., Adaptive Blind Signal and Image Processing, Wiley, West Sussex, England, 2002.
Course Requirements and Grading (Õppenõuded ja hindamine)

1. Semester project and report: solution of an applied problem using adaptive signal processing and MATLAB. The choice of topic depends on the student's interests and skills. 60% of the grade.
2. Homework and exercises: 20% of the grade. The homework and exercises must be completed in order to be admitted to the final exam.
3. Oral final exam: covers mainly the semester project and the basic theory. The student may use up to 20 pages of self-made notes. 20% of the grade.
Semester Project and Report Requirements

1. Introduction: definition of the problem, its application, its importance, and a brief overview of the report.
2. Theory and derivation of the algorithm.
3. The algorithm used and how it solves the applied problem at hand.
4. MATLAB program.
5. Plots and their explanation.
6. Analysis of the results, explanations, and conclusions.
7. Summary.
8. References.

Notes: Word, PowerPoint, or PDF; about 10 to 15 pages; in Estonian or English.
Basic Concepts of Adaptive Signal Processing

(Block diagram: the signal and noise environment feeds an adaptive system; an optimization criterion or performance measure drives an adaptive or learning algorithm, yielding improved system performance: noise reduction, echo cancellation, etc.)

Key ingredients:
- Applications.
- Optimization criteria or performance measures.
- Adaptive or learning algorithms.
Linear Prediction Filter of Order n

(Figure: tapped delay line with unit delays T and coefficients a_1, a_2, a_3, …, a_n forming the prediction \(\hat{x}(k)\) of the input x(k); here d(k) = x(k).)

$$\hat{x}(k) = \sum_{i=1}^{n} a_i\,x(k-i), \qquad d(k) = x(k)$$

Prediction error: \(\varepsilon(k) = d(k) - \hat{x}(k) = x(k) - \hat{x}(k)\).

Adaptive algorithms minimize the mean-square prediction error:
$$\xi = E\{\varepsilon^2(k)\} = E\{[x(k) - \hat{x}(k)]^2\}$$

Vary and Martin, 2006, Ch. 6; Haykin, 1996, Ch. 6.
Optimum Linear Prediction

Minimize the mean-square error:
$$\xi = E\{\varepsilon^2(k)\} = E\{[x(k) - \hat{x}(k)]^2\}$$
$$\frac{\partial \xi}{\partial a_i} = 2E\left\{\varepsilon(k)\frac{\partial \varepsilon(k)}{\partial a_i}\right\} = -2E\{\varepsilon(k)\,x(k-i)\} = 0, \qquad i = 1,2,\dots,n$$
$$\frac{\partial^2 \xi}{\partial a_i^2} = 2E\{x^2(k-i)\} \ge 0 \quad\Rightarrow\quad \text{minimum.}$$
The orthogonality condition
$$E\{\varepsilon(k)x(k-i)\} = E\left\{\left[x(k) - \sum_{l=1}^{n} a_l\,x(k-l)\right]x(k-i)\right\} = 0$$
yields the normal equations
$$R_{xx}(i) - \sum_{l=1}^{n} a_l\,R_{xx}(i-l) = 0, \qquad i = 1,\dots,n.$$
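The normal equations of optimum linear prediction can be checked numerically. A minimal NumPy sketch (not from the slides; the AR(1) test signal and predictor order are made up for illustration) estimates sample autocorrelations and solves the resulting linear system:

```python
import numpy as np

# Solve the normal equations R_xx(i) = sum_l a_l R_xx(i-l) for an
# order-2 predictor of a synthetic AR(1) process x(k) = 0.5 x(k-1) + u(k),
# whose optimal coefficients are a = [0.5, 0].
rng = np.random.default_rng(0)
N = 200_000
u = rng.standard_normal(N)
x = np.zeros(N)
for k in range(1, N):
    x[k] = 0.5 * x[k - 1] + u[k]

def R(lag):
    """Biased sample autocorrelation R_xx(lag)."""
    return np.dot(x[:N - lag], x[lag:]) / N

n = 2
Rxx = np.array([[R(abs(i - l)) for l in range(n)] for i in range(n)])
r = np.array([R(i) for i in range(1, n + 1)])
a = np.linalg.solve(Rxx, r)   # normal equations  Rxx a = r
print(np.round(a, 2))         # close to [0.5, 0] for this process
```

With enough data the solution approaches the coefficients of the generating AR model, which is the point of the orthogonality argument above.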
Linear Prediction and Speech Coding

Discrete-time speech production model (figure): an impulse generator (pitch period N_0) and a noise generator feed, through the voiced/unvoiced switch S and gain g, a variable filter h(k), H(z) whose filter parameters are supplied externally; the excitation v(k) produces the speech signal x(k).
- N_0: pitch period
- S: voiced/unvoiced switch
- g: gain
- h(k): impulse response
- u(k), v(k): excitation signals
- x(k): speech signal

Autoregressive (AR) model for speech:
$$H(z) = \frac{1}{1 - C(z)}, \qquad C(z) = \sum_{i=1}^{m} c_i\,z^{-i}.$$
Example of Application of a Linear Predictor to Speech Coding

(Block diagram: transmitter — channel — receiver.) At the transmitter an adaptive analysis filter forms the prediction \(\hat{x}(k)\) of the speech x(k) and transmits the residual d(k) = x(k) − \hat{x}(k) together with the linear prediction filter coefficients \(\mathbf{a}(k)\). At the receiver the speech synthesis filter reconstructs y(k) from the residual and the coefficients.

Vary and Martin, 2006, Ch. 8.
Model-Based Speech Coding

(Block diagram: speech production model — LP encoder — channel — LP decoder.)

The analysis (encoder) filter 1 − A(z) applied to the speech X(z) = V(z)/(1 − C(z)) gives
$$D(z) = \frac{1 - A(z)}{1 - C(z)}\,V(z); \qquad \text{if } A(z) = C(z), \text{ then } D(z) = V(z) \text{ (excitation).}$$
The synthesis filter is
$$H(z) = \frac{1}{1 - A(z)}, \qquad Y(z) = H(z)D(z) = \frac{1}{1 - C(z)}\,V(z) = X(z).$$
Bit rate of the encoded speech: B_w = 2 bits/sample; with sampling frequency f_s = 8 kHz the transmission rate is B = B_w f_s bits/sec.

Vary and Martin, 2006, Ch. 8.
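The analysis/synthesis inverse relationship can be sketched numerically. In this minimal NumPy example (the predictor coefficients and test signal are made up for illustration), the analysis filter 1 − A(z) produces the residual and the synthesis filter 1/(1 − A(z)) recovers the original signal:

```python
import numpy as np

# LP analysis/synthesis round trip: when the synthesis filter uses the
# same A(z) as the analysis filter, reconstruction is exact.
rng = np.random.default_rng(1)
a = np.array([1.2, -0.5])          # hypothetical predictor coefficients
x = rng.standard_normal(50)        # stand-in "speech" signal

n = len(a)
d = np.zeros_like(x)
for k in range(len(x)):            # analysis: d(k) = x(k) - sum a_i x(k-i)
    past = [x[k - i] if k - i >= 0 else 0.0 for i in range(1, n + 1)]
    d[k] = x[k] - np.dot(a, past)

y = np.zeros_like(x)
for k in range(len(x)):            # synthesis: y(k) = d(k) + sum a_i y(k-i)
    past = [y[k - i] if k - i >= 0 else 0.0 for i in range(1, n + 1)]
    y[k] = d[k] + np.dot(a, past)

print(np.allclose(y, x))           # True: Y(z) = X(z) when A(z) matches
```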
Adaptive Noise Canceller

(Block diagram: the primary input carries the signal source plus noise, s(k) + n(k). An auxiliary noise sensor obtains a signal-free noise reference; an adaptive filter driven by the LMS algorithm forms the noise estimate \(\hat{n}(k)\), which is subtracted from the primary input, giving the output \(s(k) + n(k) - \hat{n}(k)\).)

Widrow and Stearns, 1985.
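A minimal NumPy sketch of the canceller (all signals and the noise path are made up for illustration): the reference noise passes through an assumed unknown path h to the primary sensor, and the LMS update w(k+1) = w(k) + 2μ e(k) x(k) learns that path, leaving the signal at the output:

```python
import numpy as np

# Adaptive noise cancellation with a noise reference and LMS adaptation.
rng = np.random.default_rng(2)
N, M, mu = 20_000, 4, 0.005
s = np.sin(2 * np.pi * 0.05 * np.arange(N))   # desired signal
r = rng.standard_normal(N)                    # noise reference
h = np.array([0.8, -0.3, 0.2, 0.1])           # hypothetical noise path
primary = s + np.convolve(r, h)[:N]           # signal + filtered noise

w = np.zeros(M)
out = np.zeros(N)
for k in range(M, N):
    xv = r[k:k - M:-1]                # most recent M reference samples
    e = primary[k] - np.dot(w, xv)    # canceller output = signal estimate
    out[k] = e
    w += 2 * mu * e * xv              # LMS weight update

print(np.round(w, 1))                 # approaches the noise-path taps h
```

After convergence the output is close to s(k): the filter has reproduced the noise component and cancelled it.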
Echo Cancellation for Hands-Free Telephone Systems

(Figure: the signal x(k) from the distant speaker is converted (D/A) and played through the loudspeaker LS; the microphone M picks up the local speech s(t), the noise n(t), and the distant-speaker echo \(\tilde{x}(t)\); after A/D conversion:)
$$y(k) = s(k) + n(k) + \tilde{x}(k)$$
An adaptive filter driven by x(k) and the adaptive algorithm forms the echo estimate \(\hat{x}(k)\), which is subtracted from y(k):
$$\hat{s}(k) = s(k) + n(k) + [\tilde{x}(k) - \hat{x}(k)]$$

Vary and Martin, 2006, Ch. 13.
System Identification and Modeling

(Block diagram: the excitation signal x(k) drives both the plant — the unknown system, with output d(k) — and the adaptive processor, with output y(k); the error e(k) = d(k) − y(k) drives the adaptation.)

x(k) must be persistently exciting.
System Identification

(Figure from Clark, G., JASA 2007.)
Blind Equalization

(Block diagram: the unobserved data sequence x(n) passes through the channel h(n); noise v(n) is added to form the received signal u(n); the blind equalizer produces the estimate \(\hat{x}(n)\).)

The goal is to minimize intersymbol interference in unknown multipath channels.
Bussgang Algorithm for Blind Equalization

(Block diagram: the received signal u(n) is filtered by a transversal filter \(\{\hat{w}(n)\}\) to give y(n); a zero-memory nonlinear estimator g(·) produces the decision \(\hat{x}(n)\); the error e(n) = \(\hat{x}(n)\) − y(n) drives the LMS update of the transversal filter.)

(Haykin, 1996.)
Finite Impulse Response (FIR) Wiener Filters

The observation is x(k) = s(k) + n(k); the filter h(k), H(z) forms the estimate \(\hat{s}(k)\) of the desired signal d(k) = s(k):
$$\hat{s}(k) = \sum_{l=0}^{N} h(l)\,x(k-l)$$
$$\xi = E\{\varepsilon^2(k)\} = E\{[s(k) - \hat{s}(k)]^2\}$$
Finite Impulse Response (FIR) Wiener Filters

Setting \(\partial\xi/\partial h(i) = 0\):
$$\frac{\partial}{\partial h(i)}\,E\{[s(k) - \hat{s}(k)]^2\} = -2E\{\varepsilon(k)\,x(k-i)\} = 0$$
$$E\left\{\left[s(k) - \sum_{l=0}^{N} h(l)\,x(k-l)\right]x(k-i)\right\} = 0$$
By the WSS assumption this becomes
$$\sum_{l=0}^{N} h(l)\,R_{xx}(i-l) = R_{sx}(i), \qquad i = 0,\dots,N,$$
or in matrix notation, with \([\mathbf{R}_{xx}]_{il} = R_{xx}(i-l)\),
$$\mathbf{R}_{xx}\mathbf{h} = \mathbf{R}_{sx} \quad\Rightarrow\quad \mathbf{h} = \mathbf{R}_{xx}^{-1}\mathbf{R}_{sx}.$$
MATLAB Example of System Identification.

>> varx=100; x=sqrt(varx)*randn(1,20);
>> v=randn(1,20);
>> r=xcorr(x,1,'biased')
r =
   10.5420   80.7933   10.5420
>> rx=[80.7933 10.5420]; Rx=toeplitz(rx)
Rx =
   80.7933   10.5420
   10.5420   80.7933
>> y=filter([1 0.38],1,x);
>> dn=y+v;
>> pdx=xcorr(x,dn,'biased');
>> p=pdx(1,19:20)
p =
   39.0133   83.4911
>> w=inv(Rx)*p'
w =
    0.3541
    0.9872
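The same two-tap Wiener identification idea can be sketched in NumPy (illustrative values, not the MATLAB numbers above): estimate the input autocorrelation matrix and the cross-correlation with the desired signal, then solve h = Rxx⁻¹ rdx.

```python
import numpy as np

# Two-tap Wiener solution identifying an FIR system d = [1, 0.38] * x + noise.
rng = np.random.default_rng(3)
N = 100_000
x = rng.standard_normal(N)
d = np.convolve(x, [1.0, 0.38])[:N] + 0.1 * rng.standard_normal(N)

def xc(a, b, lag):
    """Biased cross-correlation estimate of E{a(k) b(k-lag)}."""
    return np.dot(a[lag:], b[:len(b) - lag]) / len(a)

Rxx = np.array([[xc(x, x, 0), xc(x, x, 1)],
                [xc(x, x, 1), xc(x, x, 0)]])
rdx = np.array([xc(d, x, 0), xc(d, x, 1)])   # E{d(k) x(k-i)}
h = np.linalg.solve(Rxx, rdx)                # Wiener solution
print(np.round(h, 2))                        # close to the true [1.0, 0.38]
```

With a long data record the estimate is close to the true system, unlike the 20-sample MATLAB demo above, where the short record dominates the error.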
Frequency Domain Wiener Filter.

With \(\hat{s}(k) = h(k) * x(k)\), i.e. \(\hat{S}(\omega) = H(\omega)X(\omega)\):
$$\xi = E\{|S(\omega) - \hat{S}(\omega)|^2\}$$
Expanding,
$$\xi = |H(\omega)|^2\,E\{|X(\omega)|^2\} - H(\omega)E\{X(\omega)S^*(\omega)\} - H^*(\omega)E\{X^*(\omega)S(\omega)\} + E\{|S(\omega)|^2\}$$
$$\;\;= |H(\omega)|^2 P_{XX}(\omega) - H(\omega)P_{XS}(\omega) - H^*(\omega)P_{SX}(\omega) + P_{SS}(\omega).$$
Minimizing with respect to H(ω):
$$H(\omega) = \frac{P_{SX}(\omega)}{P_{XX}(\omega)}.$$
For uncorrelated signal and noise, \(P_{XX} = P_{SS} + P_{NN}\) and \(P_{SX} = P_{SS}\), so
$$H(\omega) = \frac{P_{SS}(\omega)}{P_{SS}(\omega) + P_{NN}(\omega)}.$$
The Mean Square Error (MSE) Performance Criterion.

Error signal: \(\varepsilon(t) = d(t) - \mathbf{w}^T\mathbf{x}(t)\).
Squared error:
$$\varepsilon^2(t) = d^2(t) - 2\,d(t)\,\mathbf{w}^T\mathbf{x}(t) + \mathbf{w}^T\mathbf{x}(t)\mathbf{x}^T(t)\mathbf{w}$$
MSE:
$$\xi = E\{\varepsilon^2(t)\} = E\{d^2(t)\} - 2\,\mathbf{w}^T\mathbf{R}_{xd} + \mathbf{w}^T\mathbf{R}_{xx}\mathbf{w}$$
where \(\mathbf{R}_{xd} = E\{\mathbf{x}(t)\,d(t)\}\) and \(\mathbf{R}_{xx} = E\{\mathbf{x}(t)\mathbf{x}^T(t)\}\).
Setting the gradient to zero gives the Wiener-Hopf equation:
$$\nabla_{\mathbf{w}}\xi = -2\mathbf{R}_{xd} + 2\mathbf{R}_{xx}\mathbf{w} = 0 \quad\Rightarrow\quad \mathbf{R}_{xx}\mathbf{w}_{opt} = \mathbf{R}_{xd}$$
"Wiener solution":
$$\mathbf{w}_{opt} = \mathbf{R}_{xx}^{-1}\mathbf{R}_{xd}.$$
Minimum Variance (MV) Optimization

(Block diagram: sensor outputs x_1(t), …, x_N(t) pass through array steering, \(z_i(t) = \exp(-j\psi_i)\,x_i(t)\), and adaptive weights w_1, …, w_N; the adaptive array is steered to the signal direction.)

$$y(t) = \mathbf{w}^T\mathbf{z}(t)$$
Minimum Variance (MV) Optimization

Array input: \(\mathbf{x}(t) = \mathbf{s}(t) + \mathbf{n}(t) = s(t)\,\mathbf{d} + \mathbf{n}(t)\).
Signal direction vector:
$$\mathbf{d} = \big[1,\ \exp(-j\phi),\ \exp(-j2\phi),\ \dots,\ \exp(-j(N-1)\phi)\big]^T$$
where \(\phi = \dfrac{2\pi d \sin\theta}{\lambda}\), d = sensor distance between linear array elements, λ = wavelength.
Beam steering matrix:
$$\boldsymbol{\Phi}_k = \mathrm{diag}\big(1,\ e^{-j\phi_k},\ \dots,\ e^{-j(N-1)\phi_k}\big).$$
Sidelobe Cancellation (SLC) System.

(Block diagram: the main channel carries the main beam; auxiliary sensors, weighted by adaptive weights w_1, …, w_N, are summed and subtracted from the main channel. The adaptive weight adjustment minimizes the cross-correlation between the main and auxiliary channels.)
Generalized Sidelobe Canceller.

(Block diagram: the sensor data feed a fixed beamformer \(\mathbf{w}_c\) and, through the blocking matrix B, the adaptive weights \(\mathbf{w}_a\); the adaptive branch output is subtracted from the fixed beamformer output. B = blocking matrix.)
Example of Blocking Matrix for Sidelobe Canceller.

(Figure: beam patterns of the main beam and of the auxiliary beam formed by the blocking matrix.)

To block the signal from the desired look direction, the blocking matrix must be orthogonal to the "desired look" steering vector:
$$\mathbf{B} = \begin{bmatrix}\mathbf{b}_1^H \\ \vdots \\ \mathbf{b}_{N-1}^H\end{bmatrix}, \qquad \mathbf{B}\,\mathbf{d} = \mathbf{0}, \qquad \mathbf{b}_i^H\mathbf{d} = 0.$$
Example (N = 4): with \(\mathbf{d} = [1\ 1\ 1\ 1]^T\), a row such as \(\mathbf{b}_1^H = [1\ -1\ 1\ -1]\) satisfies \(\mathbf{b}_1^H\mathbf{d} = 0\).
Blind Source Separation

(Block diagram: independent sources \(s_1(t), \dots, s_r(t)\) are combined by the mixing matrix A and observed, with noise \(v_1(t), \dots, v_M(t)\), by an array of M sensors as \(\mathbf{x}(t)\); a linear preprocessor T (PCA) produces uncorrelated, normalized data \(u_1, \dots, u_r\); a source separation stage W, adapted by a nonlinear adaptive algorithm (ICA), outputs the separated sources \(\hat{s}_1(t), \dots, \hat{s}_r(t)\).)
Beamforming & Source Separation

(Block diagram: the data matrix \(\mathbf{X}_s\) is decomposed by SVD or ULVD for subspace estimation; a subspace filter \(\hat{\mathbf{U}}_r^H\) feeds eigenstructure and parameter estimation, giving estimates of A and \(\theta_1, \dots, \theta_M\); a source separation stage W produces the estimated source signals \(\hat{s}_1, \hat{s}_2, \hat{s}_3\).)
Optimization Criteria and Basic Algorithms

Minimize or maximize a scalar performance measure J(w).
Basic adaptive algorithm:
$$\mathbf{w}(k+1) = \mathbf{w}(k) + \mu(k)\,\mathbf{d}(k)$$
where \(\mathbf{d}(k)\) = search direction and \(\mu(k)\) = step size.
Examples:
- Steepest descent: \(\mathbf{d}(k) = -\nabla J(k)\).
- LMS: estimated gradient.
- Stochastic approximation.
- Newton's and quasi-Newton methods.
Common Adaptive Algorithms.

Steepest descent:
$$\mathbf{w}(k+1) = \mathbf{w}(k) - \mu\,\nabla_{\mathbf{w}}J(k)$$
Least-mean-squares (LMS) algorithm:
$$\mathbf{w}(k+1) = \mathbf{w}(k) + 2\mu\,\varepsilon(k)\,\mathbf{x}(k)$$
Estimation and Direct Matrix Inversion (DMI). Recursive Least-Squares (RLS). Affine Projection.
Error Performance Surface.

(Figure: the MSE is a quadratic bowl over the weight plane; gradient-based algorithms descend this surface toward its minimum.)
Derivation of the LMS Algorithm.

$$\varepsilon(k) = d(k) - \mathbf{x}^T(k)\,\mathbf{w}(k)$$
The LMS algorithm assumes that the performance measure is \(J(k) = \varepsilon^2(k)\). Then
$$\hat{\nabla}J(k) = \frac{\partial \varepsilon^2(k)}{\partial \mathbf{w}(k)} = 2\,\varepsilon(k)\,\frac{\partial \varepsilon(k)}{\partial \mathbf{w}(k)} = -2\,\varepsilon(k)\,\mathbf{x}(k).$$
The LMS weight adjustment algorithm is:
$$\mathbf{w}(k+1) = \mathbf{w}(k) - \mu\,\hat{\nabla}_{\mathbf{w}}J = \mathbf{w}(k) + 2\mu\,\varepsilon(k)\,\mathbf{x}(k)$$
μ = step size; \(\mathbf{w}(k) = [w_0(k), \dots, w_L(k)]^T\) = filter weights at time k; \(\mathbf{x}(k) = [x(k), x(k-1), \dots, x(k-L)]^T\) = input data.
The LMS Algorithm for M-th Order Adaptive Filter.

Inputs: M = filter length; μ = step-size factor; x(n) = input data to the adaptive filter; w(0) = initialized weight vector = 0.
Outputs: y(n) = adaptive filter output = \(\mathbf{w}^T(n)\mathbf{x}(n) \approx d(n)\); e(n) = d(n) − y(n) = error.
Algorithm: \(\mathbf{w}(n+1) = \mathbf{w}(n) + 2\mu\,e(n)\,\mathbf{x}(n)\).
LMS Function.

function [w,y,e,J] = lms(x,dn,mu,M)
% x: input data, dn: desired signal, mu: step size, M: filter length.
N = length(x);
y = zeros(1,N); e = zeros(1,N);
w = zeros(1,M);
for n = M:N
    x1 = x(n:-1:n-M+1);   % for each n, the length-M vector x1 is taken
                          % from x with the elements in reverse order
    y(n) = w*x1';
    e(n) = dn(n) - y(n);
    w = w + 2*mu*e(n)*x1; % LMS weight update
end
J = e.^2;                 % squared-error learning curve
Convergence of the Mean Weight Vector of LMS

Let \(\mathbf{h}(k) = E\{\mathbf{w}(k)\} - \mathbf{w}_{OPT}\), expressed in the coordinates of the eigenvectors of \(\mathbf{R}_{xx}\) (eigenvalues \(\lambda_p\)). Then
$$\mathbf{h}(k+1) = (\mathbf{I} - 2\mu\boldsymbol{\Lambda})\,\mathbf{h}(k)$$
$$\mathbf{h}(k) = (\mathbf{I} - 2\mu\boldsymbol{\Lambda})^k\,\mathbf{h}(0)$$
$$\lim_{k\to\infty}(\mathbf{I} - 2\mu\boldsymbol{\Lambda})^k = 0 \quad\text{if}\quad |1 - 2\mu\lambda_{MAX}| < 1, \text{ i.e. } 0 < \mu < \frac{1}{\lambda_{MAX}}.$$
Since \(\lambda_{MAX} \le \mathrm{tr}(\mathbf{R}_{xx})\), a convenient sufficient condition is \(0 < \mu < 1/\mathrm{tr}(\mathbf{R}_{xx})\). Then
$$\lim_{k\to\infty} E\{\mathbf{w}(k)\} = \mathbf{w}_{OPT}.$$
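The step-size bound can be illustrated numerically. In this minimal sketch (the FIR system and signals are made up), white unit-variance input gives eigenvalues near 1, so a small μ converges while μ well beyond 1/λ_max diverges:

```python
import numpy as np

# LMS convergence vs. divergence as a function of the step size mu.
rng = np.random.default_rng(5)
N, M = 5_000, 4
h = np.array([0.5, -0.2, 0.1, 0.05])   # hypothetical unknown system
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N]

def run_lms(mu):
    w = np.zeros(M)
    for k in range(M, N):
        xv = x[k:k - M:-1]
        e = d[k] - np.dot(w, xv)
        w = w + 2 * mu * e * xv        # w(k+1) = w(k) + 2 mu e(k) x(k)
    return w

w_good = run_lms(0.01)   # |1 - 2*mu*lambda| < 1: converges toward h
w_bad = run_lms(2.0)     # bound violated: weights blow up
print(np.round(w_good, 2))
```

Running the unstable case produces overflow: the weights grow without bound instead of settling near h.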
Convergence Rate of the LMS Algorithm.

LMS weight convergence is geometric, with geometric ratio for the p-th coordinate
$$r_p = 1 - 2\mu\lambda_p.$$
Since
$$\exp(-1/\tau_p) = 1 - \frac{1}{\tau_p} + \frac{1}{2!}\frac{1}{\tau_p^2} - \cdots \approx 1 - \frac{1}{\tau_p} \quad\text{for } \tau_p \gg 1,$$
the time constant of the p-th mode is
$$\tau_p \approx \frac{1}{2\mu\lambda_p}.$$
For a large eigenvalue spread (condition number \(\lambda_{\max}/\lambda_{\min}\) of \(\mathbf{R}_{xx}\)) convergence is slow: the step size is limited by \(\lambda_{\max}\), while the slowest mode is set by \(\lambda_{\min}\).
Misadjustment Due to Gradient Noise

Estimated gradient:
$$\hat{\nabla}(k) = -2\,\varepsilon(k)\,\mathbf{x}(k) = \nabla(k) + \mathbf{n}(k)$$
where ∇(k) = true gradient and n(k) = zero-mean gradient estimation noise.
At the minimum mean-square error (MSE) point, ∇(k) = 0 and \(\hat{\nabla}(k) = \mathbf{n}(k) = -2\,\varepsilon(k)\,\mathbf{x}(k)\).
Gradient noise covariance:
$$E\{\mathbf{n}(k)\mathbf{n}^H(k)\} = 4E\{\varepsilon^2(k)\,\mathbf{x}(k)\mathbf{x}^H(k)\} \approx 4E\{\varepsilon^2(k)\}\,E\{\mathbf{x}(k)\mathbf{x}^H(k)\} = 4\,\xi_{\min}\mathbf{R}_{xx}$$
(ε(k) and x(k) are assumed uncorrelated near the optimum).
Misadjustment Due to Gradient Noise

LMS algorithm with noisy gradient:
$$\mathbf{w}(k+1) = \mathbf{w}(k) - \mu\hat{\nabla}(k) = \mathbf{w}(k) - \mu[\nabla(k) + \mathbf{n}(k)]$$
In terms of the weight error \(\mathbf{v}(k) = \mathbf{w}(k) - \mathbf{w}_{opt}\):
$$\mathbf{v}(k+1) = (\mathbf{I} - 2\mu\mathbf{R}_{xx})\,\mathbf{v}(k) - \mu\,\mathbf{n}(k)$$
Transforming by U (the eigenvector matrix of \(\mathbf{R}_{xx}\)), with \(\mathbf{h} = \mathbf{U}^T\mathbf{v}\) and \(\mathbf{n}'(k) = \mathbf{U}^T\mathbf{n}(k)\):
$$\mathbf{h}(k+1) = (\mathbf{I} - 2\mu\boldsymbol{\Lambda})\,\mathbf{h}(k) - \mu\,\mathbf{n}'(k)$$
Close to the optimum (learning transients have died out), \(E\{\mathbf{h}(k)\} = 0\). Using the fact that \(E\{\mathbf{h}(k)\mathbf{n}'^T(k)\} = 0\), the covariance of h(k) satisfies
$$E\{\mathbf{h}(k+1)\mathbf{h}^T(k+1)\} = (\mathbf{I} - 2\mu\boldsymbol{\Lambda})\,E\{\mathbf{h}(k)\mathbf{h}^T(k)\}\,(\mathbf{I} - 2\mu\boldsymbol{\Lambda}) + \mu^2 E\{\mathbf{n}'(k)\mathbf{n}'^T(k)\}$$
Close to the optimum h(k) is wide-sense stationary, so \(E\{\mathbf{h}(k+1)\mathbf{h}^T(k+1)\} = E\{\mathbf{h}(k)\mathbf{h}^T(k)\}\); with \(E\{\mathbf{n}'\mathbf{n}'^T\} = 4\,\xi_{\min}\boldsymbol{\Lambda}\) and small \(\mu\lambda_p\) this gives
$$E\{\mathbf{h}(k)\mathbf{h}^T(k)\} \approx \mu\,\xi_{\min}\,\mathbf{I}.$$
Misadjustment Due to Gradient Noise

Excess MSE = \(\mathbf{h}^T(k)\,\boldsymbol{\Lambda}\,\mathbf{h}(k)\).
Average excess MSE:
$$E\{\mathbf{h}^T(k)\boldsymbol{\Lambda}\mathbf{h}(k)\} = \sum_{p=1}^{N}\lambda_p\,E\{h_p^2(k)\} = \mu\,\xi_{\min}\sum_{p=1}^{N}\lambda_p$$
Misadjustment:
$$M = \frac{\text{average excess MSE}}{\xi_{\min}} = \mu\sum_{p=1}^{N}\lambda_p = \mu\,\mathrm{tr}(\mathbf{R}_{xx})$$
If \(4\mu\lambda_p \ll 1\) for all p (the usual case), then with the MSE time constants \(\tau_{p,\mathrm{mse}} = 1/(4\mu\lambda_p)\),
$$M = \sum_{p=1}^{N}\frac{1}{4\,\tau_{p,\mathrm{mse}}} = \frac{N}{4}\left(\frac{1}{\tau_{\mathrm{mse}}}\right)_{\mathrm{ave}}.$$
Consequences: a long filter means large misadjustment; fast convergence means large misadjustment.
Sensitivity of Square Linear Systems.

Result from numerical analysis: let A be a nonsingular matrix with \(\mathbf{A}\mathbf{x} = \mathbf{b}\), and let \((\mathbf{A} + \Delta\mathbf{A})\hat{\mathbf{x}} = \mathbf{b} + \Delta\mathbf{b}\) be a perturbed system. The solution \(\hat{\mathbf{x}}\) approximates the solution x with error estimate
$$\frac{\|\hat{\mathbf{x}} - \mathbf{x}\|}{\|\mathbf{x}\|} \le \frac{\kappa(\mathbf{A})}{1 - \kappa(\mathbf{A})\,\|\Delta\mathbf{A}\|/\|\mathbf{A}\|}\left(\frac{\|\Delta\mathbf{A}\|}{\|\mathbf{A}\|} + \frac{\|\Delta\mathbf{b}\|}{\|\mathbf{b}\|}\right)$$
where ‖·‖ denotes norms and \(\kappa(\mathbf{A}) = \|\mathbf{A}\|\,\|\mathbf{A}^{-1}\|\) is the condition number of A.
Leaky LMS Algorithm.

The Wiener optimum weight calculation requires inverting a possibly ill-conditioned matrix:
$$\mathbf{w}_{opt} = \mathbf{R}_{xx}^{-1}\mathbf{R}_{xd}$$
This causes numerical errors and slow convergence: if \(\lambda_p = 0\), the mode \((1 - 2\mu\lambda_p)^k\) does not converge, and small \(\lambda_p\) gives slow convergence.
Leaky LMS algorithm:
$$\mathbf{w}(k+1) = (1 - 2\mu\gamma)\,\mathbf{w}(k) + 2\mu\,\varepsilon(k)\,\mathbf{x}(k), \qquad \gamma > 0.$$
Using \(\varepsilon(k) = d(k) - \mathbf{x}^T(k)\mathbf{w}(k)\), we have:
$$\mathbf{w}(k+1) = \big[\mathbf{I} - 2\mu(\mathbf{x}(k)\mathbf{x}^T(k) + \gamma\mathbf{I})\big]\,\mathbf{w}(k) + 2\mu\,d(k)\,\mathbf{x}(k).$$
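A minimal NumPy sketch of the leaky update (signals and the unknown system are made up): the leak γ keeps undriven modes from drifting, at the cost of a small bias that shrinks the weights toward zero:

```python
import numpy as np

# Leaky LMS: w(k+1) = (1 - 2*mu*gamma) w(k) + 2*mu*e(k)*x(k).
rng = np.random.default_rng(6)
N, M = 20_000, 3
mu, gamma = 0.01, 0.01
h = np.array([0.7, 0.2, -0.1])          # hypothetical unknown system
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N]

w = np.zeros(M)
for k in range(M, N):
    xv = x[k:k - M:-1]
    e = d[k] - np.dot(w, xv)
    w = (1 - 2 * mu * gamma) * w + 2 * mu * e * xv

print(np.round(w, 2))  # slightly shrunk toward 0 relative to h
```

For white input the stationary point is \((\mathbf{R}_{xx} + \gamma\mathbf{I})^{-1}\mathbf{R}_{xd} = \mathbf{h}/(1+\gamma)\), i.e. a bias of about 1% here.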
Block LMS Algorithm

Input signal vector: \(\mathbf{x}(n) = [x(n), x(n-1), \dots, x(n-M+1)]^T\). Time is divided into blocks of B samples: block k covers samples kB, …, kB+B−1 (0, B, 2B, 3B, … on the time axis).
- k = block time; n = sample time; B = number of samples in a block; M = FIR filter length.
- Sample time: n = kB + i, i = 0, 1, …, B−1; k = 0, 1, ….
- Filter weights: \(\mathbf{w}(k) = [w_0(k), w_1(k), \dots, w_{M-1}(k)]^T\), held constant over block k.
- Output: \(y(n) = y(kB+i) = \mathbf{w}^T(k)\,\mathbf{x}(kB+i) = \sum_{j=0}^{M-1} w_j(k)\,x(kB+i-j)\).
- Error: \(\varepsilon(n) = d(n) - y(n)\), i.e. \(\varepsilon(kB+i) = d(kB+i) - y(kB+i)\).
Block LMS Algorithm

In block LMS the gradient estimate is averaged over a block of B samples:
$$\mathbf{w}(k+1) = \mathbf{w}(k) + \mu_B\sum_{i=0}^{B-1}\varepsilon(kB+i)\,\mathbf{x}(kB+i)$$
or
$$\mathbf{w}(k+1) = \mathbf{w}(k) + 2\mu\,\hat{\nabla}_{AVE}(k), \qquad \hat{\nabla}_{AVE}(k) = \frac{1}{B}\sum_{i=0}^{B-1}\varepsilon(kB+i)\,\mathbf{x}(kB+i), \qquad \mu_B = \frac{2\mu}{B}.$$
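A minimal NumPy sketch of block LMS (illustrative signals and an assumed unknown system): the weights are held fixed over each block of B samples and updated once per block with the accumulated gradient estimate:

```python
import numpy as np

# Block LMS: one weight update per block of B samples.
rng = np.random.default_rng(7)
N, M, B, mu_B = 20_000, 3, 8, 0.002
h = np.array([0.6, -0.3, 0.1])          # hypothetical unknown system
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N]

w = np.zeros(M)
for kB in range(M, N - B, B):           # block start index
    grad = np.zeros(M)
    for i in range(B):                  # accumulate over the block
        xv = x[kB + i:kB + i - M:-1]
        e = d[kB + i] - np.dot(w, xv)
        grad += e * xv
    w = w + mu_B * grad                 # one update per block

print(np.round(w, 2))
```

The averaged update trades update rate for a smoother gradient estimate, and is the basis of the FFT-based fast LMS on the next slides.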
Fast FFT-Based LMS Algorithm. I

Fast linear convolution and fast linear correlation are computed with the FFT (overlap-save; block length M, FFT size N = 2M).
The M tap weights of the FIR filter are padded with zeros for the N-point FFT:
$$\mathbf{W}(k) = FFT\begin{bmatrix}\mathbf{w}(k)\\ \mathbf{0}\end{bmatrix}, \qquad \mathbf{0} = M\times 1 \text{ null vector,}$$
$$\mathbf{X}(k) = FFT\big[x(kM-M), \dots, x(kM-1), x(kM), \dots, x(kM+M-1)\big].$$
k-th block output:
$$\mathbf{y}^T(k) = [y(kM), y(kM+1), \dots, y(kM+M-1)] = \text{last } M \text{ elements of } IFFT\{\mathbf{X}(k)\odot\mathbf{W}(k)\}$$
where ⊙ denotes element-by-element multiplication.
Fast FFT-Based LMS Algorithm. II

Define:
$$\mathbf{U}(k) = \mathrm{diag}\{X_k(0), X_k(1), \dots, X_k(N-1)\}$$
$$\mathbf{y}^T(k) = [y(kM), y(kM+1), \dots, y(kM+M-1)] = \text{last } M \text{ elements of } IFFT\{\mathbf{U}(k)\mathbf{W}(k)\}$$
$$\mathbf{e}^T(k) = [\varepsilon(kM), \varepsilon(kM+1), \dots, \varepsilon(kM+M-1)], \qquad \mathbf{E}(k) = FFT\begin{bmatrix}\mathbf{0}\\ \mathbf{e}(k)\end{bmatrix}$$
Weight update (the gradient constraint keeps only the first M elements of the IFFT):
$$\mathbf{W}(k+1) = \mathbf{W}(k) + \mu\,FFT\begin{bmatrix}\text{first } M \text{ elements of } IFFT\{\mathbf{U}^H(k)\mathbf{E}(k)\}\\ \mathbf{0}\end{bmatrix}.$$
Recursive Least-Squares Algorithm.

For N samples and M+1 filter coefficients, the data matrix is
$$\mathbf{X}_n = \begin{bmatrix} x_n & x_{n-1} & \cdots & x_{n-M} \\ x_{n+1} & x_n & \cdots & x_{n-M+1} \\ \vdots & & & \vdots \\ x_{n+N} & x_{n+N-1} & \cdots & x_{n-M+N} \end{bmatrix}, \qquad \mathbf{X}_{n+1} = \begin{bmatrix}\mathbf{X}_n \\ \mathbf{z}_{n+1}^H\end{bmatrix}$$
where \(\mathbf{z}_{n+1}^H\) is the new data row, so that
$$\mathbf{X}_{n+1}^H\mathbf{X}_{n+1} = \mathbf{X}_n^H\mathbf{X}_n + \mathbf{z}_{n+1}\mathbf{z}_{n+1}^H.$$
Using the matrix inversion lemma
$$(\mathbf{A} + \mathbf{B}\mathbf{C}\mathbf{D})^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}\,(\mathbf{C}^{-1} + \mathbf{D}\mathbf{A}^{-1}\mathbf{B})^{-1}\mathbf{D}\mathbf{A}^{-1}$$
gives the recursion
$$(\mathbf{X}_{n+1}^H\mathbf{X}_{n+1})^{-1} = \big[\mathbf{I} - \mathbf{K}_{n+1}\mathbf{z}_{n+1}^H\big]\,(\mathbf{X}_n^H\mathbf{X}_n)^{-1}$$
with the gain vector
$$\mathbf{K}_{n+1} = \frac{(\mathbf{X}_n^H\mathbf{X}_n)^{-1}\,\mathbf{z}_{n+1}}{1 + \mathbf{z}_{n+1}^H(\mathbf{X}_n^H\mathbf{X}_n)^{-1}\mathbf{z}_{n+1}}.$$
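A minimal NumPy sketch of the resulting RLS filter (illustrative signals, assumed unknown system): the inverse correlation matrix P(n) is propagated with the inversion-lemma recursion instead of being recomputed each step:

```python
import numpy as np

# RLS identification of a short FIR system.
rng = np.random.default_rng(8)
N, M, lam = 2_000, 3, 1.0               # lam = forgetting factor
h = np.array([0.4, 0.25, -0.3])         # hypothetical unknown system
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N]

w = np.zeros(M)
P = 100.0 * np.eye(M)                   # large initial P ~ weak prior
for n in range(M, N):
    z = x[n:n - M:-1]                   # newest data vector
    K = P @ z / (lam + z @ P @ z)       # gain vector
    e = d[n] - w @ z                    # a priori error
    w = w + K * e
    P = (P - np.outer(K, z) @ P) / lam  # inversion-lemma update of P
```

RLS pays O(M^2) work per sample but converges in roughly 2M samples, independent of the eigenvalue spread that slows LMS.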
Approximations to RLS

Projection (normalized) algorithm:
$$\mathbf{w}(n+1) = \mathbf{w}(n) + \frac{\mu}{\delta + \mathbf{x}^H(n)\mathbf{x}(n)}\,e(n)\,\mathbf{x}(n), \qquad 0 < \mu < 2, \quad 0 < \delta < 1.$$
Stochastic approximation: a decreasing step-size sequence μ(n) (e.g. proportional to 1/n) so that the gradient-noise fluctuations die out.
LMS algorithm: fixed step size,
$$\mathbf{w}(n+1) = \mathbf{w}(n) + 2\mu\,e(n)\,\mathbf{x}(n).$$
Constraints to Maintain Look-Direction Frequency Response.

(Block diagram: a K-sensor array, each sensor followed by J tap weights w[1], …, w[JK]; the taps are summed to form the array output y(k). The column sums of the weights are constrained so that, for the look-direction signal, the array behaves as an equivalent look-direction filter with taps f[1], f[2], …, f[J]: the desired signal passes through this equivalent FIR response while the adaptation nulls interference.)

Frost, 1972.
Constrained Optimization

Minimize the output power
$$E\{y^2\} = E\{\mathbf{w}^T\mathbf{x}\mathbf{x}^T\mathbf{w}\} = \mathbf{w}^T\mathbf{R}_{xx}\mathbf{w}$$
subject to the constraint \(\mathbf{C}^T\mathbf{w} = \mathbf{f}\).
Lagrangian:
$$J(\mathbf{w}) = \tfrac{1}{2}\,\mathbf{w}^T\mathbf{R}_{xx}\mathbf{w} + \boldsymbol{\lambda}^T(\mathbf{C}^T\mathbf{w} - \mathbf{f})$$
$$\nabla_{\mathbf{w}}J = \mathbf{R}_{xx}\mathbf{w} + \mathbf{C}\boldsymbol{\lambda} = 0 \quad\Rightarrow\quad \mathbf{w}_{opt} = -\mathbf{R}_{xx}^{-1}\mathbf{C}\boldsymbol{\lambda}$$
Applying the constraint \(\mathbf{C}^T\mathbf{w}_{opt} = \mathbf{f}\):
$$\boldsymbol{\lambda} = -(\mathbf{C}^T\mathbf{R}_{xx}^{-1}\mathbf{C})^{-1}\mathbf{f}$$
$$\mathbf{w}_{opt} = \mathbf{R}_{xx}^{-1}\mathbf{C}\,(\mathbf{C}^T\mathbf{R}_{xx}^{-1}\mathbf{C})^{-1}\mathbf{f}.$$

Frost, 1972.
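The closed-form constrained optimum can be evaluated directly. In this minimal NumPy sketch (the covariance and single look-direction constraint are made up for illustration), the solution meets the constraint exactly and has no more output power than any other constrained weight vector:

```python
import numpy as np

# Constrained optimum w_opt = Rxx^{-1} C (C^T Rxx^{-1} C)^{-1} f.
rng = np.random.default_rng(9)
A = rng.standard_normal((6, 4))
Rxx = A @ A.T + np.eye(6)               # a positive-definite covariance
C = np.ones((6, 1))                     # single look-direction constraint
f = np.array([1.0])                     # distortionless response

Ri_C = np.linalg.solve(Rxx, C)          # Rxx^{-1} C without explicit inverse
w_opt = Ri_C @ np.linalg.solve(C.T @ Ri_C, f)

print(round((C.T @ w_opt).item(), 6))   # -> 1.0: constraint C^T w = f holds
```

Compare against the uniform beamformer w = C/6, which also satisfies the constraint: the constrained optimum always has output power at most as large.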
>> %Levinson-Durbin Recursion.
>> r=[1,0.5,0.25,0.0625]; %auto-correlation sequence
>> a=levinson(r,3)
a =
    1.0000   -0.5000   -0.0417    0.0833
>> h=filter(1,a,[1 zeros(1,25)]);
>> stem(h)
>> [bb,aa]=prony(h,3,3)
bb =
    1.0000    0.0000   -0.0000    0.0000
aa =
    1.0000   -0.5000   -0.0417    0.0833
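The recursion behind MATLAB's levinson can be sketched in NumPy; this standard textbook form reproduces the coefficients above for the same autocorrelation sequence:

```python
import numpy as np

# Levinson-Durbin recursion: solve the Yule-Walker equations in O(p^2)
# for the prediction polynomial A(z) = 1 + a1 z^-1 + ... + ap z^-p.
def levinson_durbin(r, p):
    a = np.zeros(p + 1)
    a[0] = 1.0
    err = r[0]                              # prediction error power
    for m in range(1, p + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a[1:m] = a[1:m] + k * a[m - 1:0:-1]
        a[m] = k
        err *= (1.0 - k * k)
    return a

r = np.array([1.0, 0.5, 0.25, 0.0625])
a = levinson_durbin(r, 3)
print(np.round(a, 4))   # 1, -0.5, -0.0417, 0.0833: matches levinson(r,3)
```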