Decision Weighted Adaptive Algorithms with Applications to Wireless Channel Estimation
Shane Martin Haas
April 12, 1999
Thesis Defense for the Degree of Master of Science in Electrical Engineering
Department of Electrical Engineering and Computer Science, University of Kansas
Introduction
) What is an Adaptive Algorithm?
) What is System Identification?

[Block diagram: the reference signal x(n) passes through an adjustable filter to produce the estimate d^(n) of the desired signal d(n), which arrives through the channel. The error signal e(n) = d(n) - d^(n) drives the adaptive algorithm, which accepts x(n), d(n), d^(n), and e(n) as input and adjusts the filter.]

) Benefits of Channel Estimation
) Training Sequence Versus Blind Estimation
Presentation Overview
) Theoretical Development
* Problem Formulation
* Multiple Phase Shift Keying
* Characterizing Wireless Communication Channels
* Bandpass to Low-Pass Conversion of Signals and Systems
* Adaptive Algorithms
- Linear and LMS Estimation Algorithms
- Properties of Decision Weighted Algorithms
) Simulation Methodology
) Simulation Results
) Conclusions
Problem Formulation

[Block diagram: a symbol source feeds a modulator, whose Tx signal passes through the channel to the demodulator. The demodulator's hard-decision Rx symbols go to the symbol sink and, through a second modulator, form a reconstructed Tx signal for the decision directed estimator; the demodulator's soft decisions also feed that estimator. A training sequence estimator operating on the known Tx symbols produces a reference channel estimate alongside the decision directed channel estimate.]
Multiple Phase Shift Keying (MPSK)
) Modulation

s_i(t) = \sqrt{2E/T} \cos(2\pi f_c t + 2\pi i/M), for 0 \le t < T
       = a_{i1} \varphi_1(t) + a_{i2} \varphi_2(t)

with

\varphi_1(t) = \sqrt{2/T} \cos(2\pi f_c t)
\varphi_2(t) = -\sqrt{2/T} \sin(2\pi f_c t)
a_{i1} = \sqrt{E} \cos(\theta_i)
a_{i2} = \sqrt{E} \sin(\theta_i)
\theta_i = 2\pi i/M
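The basis expansion above can be checked numerically: the direct cosine form of s_i(t) and the sum a_{i1} φ_1(t) + a_{i2} φ_2(t) must agree sample for sample. A minimal sketch (the parameter values M, E, T, f_c here are illustrative, not from the thesis):

```python
import numpy as np

M, E, T, fc = 4, 1.0, 1.0, 5.0               # illustrative QPSK parameters
t = np.linspace(0, T, 1000, endpoint=False)

# Orthonormal basis functions over one symbol interval.
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * fc * t)
phi2 = -np.sqrt(2 / T) * np.sin(2 * np.pi * fc * t)

i = 1
theta = 2 * np.pi * i / M                    # theta_i = 2*pi*i/M
a1 = np.sqrt(E) * np.cos(theta)              # a_i1
a2 = np.sqrt(E) * np.sin(theta)              # a_i2

# Direct bandpass form vs. basis expansion of the same waveform.
s_direct = np.sqrt(2 * E / T) * np.cos(2 * np.pi * fc * t + theta)
s_basis = a1 * phi1 + a2 * phi2
```

The two waveforms coincide because cos(ωt + θ) = cos θ cos ωt − sin θ sin ωt.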
) Demodulation

[Block diagram: the received signal r_i(t) is multiplied by the local basis functions \varphi_1(t) and \varphi_2(t) and integrated over [0, T], producing the correlations X and Y. The receiver computes \theta_i = \arctan(Y/X) and then computes |\phi_i - \theta_i| for each candidate symbol phase \phi_i, choosing the detected symbol with the smallest difference.]
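The correlator receiver above can be sketched as follows. This is an idealized, noise-free illustration (parameter values and the quadrant-safe use of atan2 in place of arctan(Y/X) are my choices, not the thesis's):

```python
import numpy as np

M, E, T, fc = 4, 1.0, 1.0, 5.0
N = 1000
t = np.linspace(0, T, N, endpoint=False)
dt = T / N
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * fc * t)
phi2 = -np.sqrt(2 / T) * np.sin(2 * np.pi * fc * t)

def demod(r):
    """Correlate r(t) with the basis over [0, T], estimate the phase
    theta = atan2(Y, X), and pick the symbol minimizing |phi_i - theta|."""
    X = np.sum(r * phi1) * dt                 # in-phase correlation
    Y = np.sum(r * phi2) * dt                 # quadrature correlation
    theta = np.arctan2(Y, X)
    phases = 2 * np.pi * np.arange(M) / M     # candidate symbol phases phi_i
    d = np.abs(np.angle(np.exp(1j * (phases - theta))))  # wrapped |phi_i - theta|
    return int(np.argmin(d))

tx = 3                                        # transmitted symbol index
s = np.sqrt(2 * E / T) * np.cos(2 * np.pi * fc * t + 2 * np.pi * tx / M)
detected = demod(s)
```

Wrapping the phase difference onto (-pi, pi] before taking the minimum avoids a spurious 2*pi penalty for symbols near the phase wrap.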
Characterizing Wireless Communication Channels
) Multipath

y(t) = \sum_n \alpha_n(t)\, s(t - \tau_n(t))

[Diagram: multiple propagation paths from transmitter to receiver -- path 0 (line-of-sight), paths 1, 2, 3, ..., n.]

) Channel Models
* Radio Relay Three-Path (Rummler) Model
* Mobile Radio Channel Model
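The multipath sum y(t) = Σ_n α_n(t) s(t − τ_n(t)) can be sketched for a static (time-invariant) channel with integer-sample delays. The gains, delays, and test signal below are illustrative, not the Rummler or mobile model parameters:

```python
import numpy as np

fs = 100.0                        # sampling rate [samples/s] (illustrative)
t = np.arange(0, 2.0, 1 / fs)
s = np.cos(2 * np.pi * 1.0 * t)   # a stand-in transmitted signal

alphas = [1.0, 0.4]               # path gains alpha_n: line-of-sight + echo
taus = [0.0, 0.25]                # path delays tau_n [s]

def multipath(s, alphas, taus, fs):
    """y(t) = sum_n alpha_n * s(t - tau_n), delays rounded to samples."""
    y = np.zeros_like(s)
    for a, tau in zip(alphas, taus):
        d = int(round(tau * fs))
        y[d:] += a * s[:len(s) - d]
    return y

y = multipath(s, alphas, taus, fs)
```

Time-varying gains α_n(t) and delays τ_n(t), as in the mobile channel, would make each term fade and drift; this static version only illustrates the superposition itself.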
Bandpass to Low-Pass Conversion
) The Complex Envelope of the MPSK Signalling Waveform

\tilde{s}_i(t) = \sqrt{2E/T} \cos(2\pi i/M) + j \sqrt{2E/T} \sin(2\pi i/M)

) Main Idea: Convolution of real bandpass signals is the same as the convolution of their complex envelope low-pass equivalents
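The complex envelope relationship can be verified directly: the real bandpass MPSK waveform is recovered as Re{ s̃_i e^{j 2π f_c t} }. A minimal sketch with illustrative parameters:

```python
import numpy as np

M, E, T, fc = 8, 1.0, 1.0, 10.0
t = np.linspace(0, T, 2000, endpoint=False)

i = 2
# Complex envelope of the MPSK waveform: a constant phasor per symbol.
s_tilde = np.sqrt(2 * E / T) * np.exp(1j * 2 * np.pi * i / M)

# Up-convert: Re{ s_tilde * e^{j 2 pi fc t} } gives the real bandpass signal.
s_bandpass = np.real(s_tilde * np.exp(1j * 2 * np.pi * fc * t))
s_direct = np.sqrt(2 * E / T) * np.cos(2 * np.pi * fc * t + 2 * np.pi * i / M)
```

Because the envelope carries all the symbol information, the channel estimation problem can be simulated entirely at low-pass, avoiding the carrier-rate sampling a bandpass simulation would need.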
Adaptive Algorithms
) System Identification Problems

[Block diagrams: in the training sequence problem, the input u(n) drives both the unknown system (parameters b, output corrupted by noise w(n) to give y(n)) and the model (parameters \beta, output \hat{y}(n)); the error is e(n) = y(n) - \hat{y}(n). In the decision directed problem, the model is driven instead by the decisions x(n) made on the unknown system's output y(n).]
) Linear Estimators

Define for the training sequence estimation problem:

y(n) = b_1 u_1(n) + \cdots + b_M u_M(n) + w(n)
\hat{y}(n) = \beta_1 u_1(n) + \cdots + \beta_M u_M(n)
e(n) = y(n) - \hat{y}(n) = y(n) - (\beta_1 u_1(n) + \cdots + \beta_M u_M(n))

Define for the decision directed estimation problem:

y(n) = b_1 u_1(n) + \cdots + b_M u_M(n) + w(n)
\hat{y}(n) = \beta_1 x_1(n) + \cdots + \beta_M x_M(n)
e(n) = y(n) - \hat{y}(n) = y(n) - (\beta_1 x_1(n) + \cdots + \beta_M x_M(n))
Observe the system for N sample periods and write

y = [y(1)\ y(2)\ \cdots\ y(N)]^T
w = [w(1)\ w(2)\ \cdots\ w(N)]^T
e = [e(1)\ e(2)\ \cdots\ e(N)]^T
b = [b_1\ b_2\ \cdots\ b_M]^T
\beta = [\beta_1\ \beta_2\ \cdots\ \beta_M]^T

U = \begin{bmatrix} u_1(1) & \cdots & u_M(1) \\ \vdots & \ddots & \vdots \\ u_1(N) & \cdots & u_M(N) \end{bmatrix}

X = \begin{bmatrix} x_1(1) & \cdots & x_M(1) \\ \vdots & \ddots & \vdots \\ x_1(N) & \cdots & x_M(N) \end{bmatrix}
The channel output is

y = Ub + w

The error for the training sequence estimation problem is

e = y - U\beta

while that for the decision directed estimation problem is

e = y - X\beta

The error, or loss, function is

J(\beta) = e^T R e

where R is an N \times N matrix of weighting coefficients.
) Linear Estimator

\hat{\beta} = (U^T R U)^{-1} U^T R y
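The batch weighted least-squares estimate is a direct linear solve. A minimal sketch on synthetic data (the dimensions, noise level, and identity weighting R are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 3
U = rng.standard_normal((N, M))              # regressor matrix U
b = np.array([0.5, -1.0, 2.0])               # true parameters b
y = U @ b + 0.01 * rng.standard_normal(N)    # channel output y = Ub + w

R = np.eye(N)                                # weighting matrix (identity here)

# Weighted least squares: beta_hat = (U^T R U)^{-1} U^T R y,
# computed via a linear solve rather than an explicit inverse.
beta_hat = np.linalg.solve(U.T @ R @ U, U.T @ R @ y)
```

With R = I this reduces to ordinary least squares; non-uniform diagonals of R de-emphasize unreliable samples, which is exactly the role the decision weights play later.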
) Recursive Weighted Least Squares Estimator

u(n) = [u_1(n)\ u_2(n)\ \cdots\ u_M(n)]^T
R = \mathrm{diag}(\lambda^{N-1} a_1, \ldots, \lambda a_{N-1}, a_N), \quad 0 < \lambda \le 1

\hat{\beta}_n = \hat{\beta}_{n-1} + a_n H_n^{-1} u(n) e(n)
H_n = \lambda H_{n-1} + a_n u(n) u^T(n)
e(n) = y(n) - u(n)^T \hat{\beta}_{n-1}
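The WRLS recursion can be sketched on synthetic data. This version initializes H with a small regularizer so it is invertible from the first sample, and uses unit weights a_n and no forgetting (λ = 1); all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 500, 3
U = rng.standard_normal((N, M))
b_true = np.array([1.0, -0.5, 0.25])
y = U @ b_true + 0.01 * rng.standard_normal(N)

lam = 1.0                     # forgetting factor lambda (no forgetting)
beta = np.zeros(M)
H = 1e-3 * np.eye(M)          # small regularizer so H_0 is invertible

for n in range(N):
    u = U[n]
    a_n = 1.0                 # per-sample weight a_n (all ones = ordinary RLS)
    e = y[n] - u @ beta                    # e(n) = y(n) - u(n)^T beta_{n-1}
    H = lam * H + a_n * np.outer(u, u)     # H_n = lam H_{n-1} + a_n u u^T
    beta = beta + a_n * e * np.linalg.solve(H, u)  # beta_n update
```

With λ = 1 and a_n = 1, the recursion reproduces the batch least-squares solution up to the small initial regularization; setting a_n from decisions gives the weighted variants.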
) Least Mean Squares (LMS) Estimator

\hat{\beta}_n = \hat{\beta}_{n-1} + \mu a_n u(n) e(n)
e(n) = y(n) - u(n)^T \hat{\beta}_{n-1}
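The LMS recursion replaces the matrix solve of WRLS with a scaled gradient step. A minimal sketch with unit weights a_n and an illustrative step size μ (the data and values are synthetic, not the thesis's simulation settings):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 2000, 3
U = rng.standard_normal((N, M))
b_true = np.array([0.8, -0.3, 0.1])
y = U @ b_true + 0.01 * rng.standard_normal(N)

mu = 0.05                     # LMS gain (step size), illustrative
beta = np.zeros(M)
for n in range(N):
    u = U[n]
    a_n = 1.0                 # weight a_n (training-sequence LMS uses a_n = 1)
    e = y[n] - u @ beta                 # e(n) = y(n) - u(n)^T beta_{n-1}
    beta = beta + mu * a_n * u * e      # beta_n = beta_{n-1} + mu a_n u(n) e(n)
```

Each iteration costs O(M), versus O(M^2) or worse per step for RLS, which is why LMS is attractive for implementation despite slower convergence.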
) Decision weighted estimators are decision directed estimators whose weights depend on the quality of the decisions.
* Ideal decision weighted estimators use knowledge of decision errors to calculate their weights. Specifically,

X^T R X = X^T R U

* Soft decision weighted estimators use receiver soft decisions to calculate their weights.
) More on Ideal Decision Weighted Linear Estimators
* Q: How does one choose R such that X^T R X = X^T R U?
* A: If X and U differ in the jth row, choose the jth column of R orthogonal to each column in X.
* Q: Are X^T R X = X^T R U and X^T R X non-singular conflicting conditions?
* A: No, let R be an identity matrix with its jth column set to zero if X and U differ in the jth row. Under slightly more restrictive assumptions placed on X than in ordinary training sequence estimators, X^T R X is non-singular.

) More on Soft Decision Weighted Estimators

For MPSK modulation define a soft decision as

p_i = 1 - |\phi_i - \theta_i| / S

A possible choice for the soft decision weight is

a_n = p_n p_{n-1} \cdots p_{n-M+1}
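The soft decision and its product weight can be sketched as follows. I assume here that S is the largest phase error still inside a correct decision region, π/M, and that the phase difference is wrapped; both the normalizer and the sample phases are illustrative assumptions:

```python
import numpy as np

M_sym = 4                        # constellation size (QPSK)
M_taps = 3                       # number of consecutive soft decisions combined
S = np.pi / M_sym                # assumed normalizer: max in-region phase error

def soft_decision(theta, phi):
    """p = 1 - |phi - theta| / S, with the phase difference wrapped."""
    d = np.abs(np.angle(np.exp(1j * (phi - theta))))
    return 1.0 - d / S

# Hypothetical received phases and the nearest QPSK symbol phases:
thetas = np.array([0.02, 1.60, 3.10, -1.55])
phis = 2 * np.pi * np.arange(4) / M_sym

p = soft_decision(thetas, phis)  # per-symbol confidences
a_n = np.prod(p[-M_taps:])       # a_n = p_n * p_{n-1} * ... * p_{n-M+1}
```

Multiplying the last M soft decisions makes the weight small whenever any symbol currently inside the filter's memory is doubtful, so one shaky decision down-weights the whole regressor.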
) Biasedness of Decision Directed Linear Estimators

If E\{w | X, U\} = 0 then

E\{\hat{\beta}\} = E\{(X^T R X)^{-1} X^T R U\}\, b

For ideal decision weighted estimators X^T R X = X^T R U, and therefore the estimator is unbiased.

) Covariance of Decision Directed Linear Estimators

If E\{w | X, U\} = 0 then

\mathrm{cov}\{\hat{\beta}\} = \mathrm{cov}\{S U b\} + E\{S V S^T\}

where S = (X^T R X)^{-1} X^T R and V = E\{w w^T | X, U\}. Notice that for ideal decision weighted estimators SU = I.
Simulation Methodology
) Algorithm Summary
* Training Sequence LMS (TLMS): Uses training sequence LMS with a_n = 1
* Blind LMS (BLMS): Uses decision directed LMS with a_n = 1
* Soft Decision Weighted LMS (SDWLMS): Uses decision directed LMS with soft decision weights
* Ideal Decision Weighted LMS (IDWLMS): Uses decision directed LMS with a_n = 1 if x(n) = u(n) and zero otherwise
* Training Sequence RLS (TRLS): Uses training sequence WRLS with a_n = 1
* Blind RLS (BRLS): Uses decision directed WRLS with a_n = 1
* Soft Decision Weighted RLS (SDWRLS): Uses decision directed WRLS with soft decision weights
* Ideal Decision Weighted RLS (IDWRLS): Uses decision directed WRLS with a_n = 1 if x(n) = u(n) and zero otherwise
* Modified Soft Decision Weighted RLS (MSDWRLS): Uses decision directed WRLS with soft decision weights; however, we modify the matrix update H_n by removing the weight a_n, resulting in H_n = \lambda H_{n-1} + x(n) x^T(n)
* Modified Ideal Decision Weighted RLS (MIDWRLS): Uses decision directed WRLS with a_n = 1 if x(n) = u(n) and zero otherwise; however, we modify the matrix update H_n by removing the weight a_n, resulting in H_n = \lambda H_{n-1} + x(n) x^T(n)
) General Methods for Delay Spread, SNR, and Doppler Frequency Tests
* LMS Gain: \mu = 0.3
* RLS Forgetting Factor: \lambda = 0.99
* Sampling Rate: 1 sample per second
* Symbol Interval: 4 samples per symbol
* Modulation: QPSK
* Number of Symbols per Individual Simulation: 300 symbols
* Number of Individual Simulations to Perform per Test Point Iteration: 20 simulations
* Maximum Symbol Error Rate (SER): 0.2
* Number of Symbols to Skip Before Calculating Estimation Error (N_0): 100 symbols
* Initial Estimate: the true response
* Performance Criteria: median average estimation error
Simulation Results

[Figure 1: Median of the average squared error of LMS algorithms as a function of delay spread (SNR = 10 dB). Curves: tlms, blms, sdwlms, idwlms. Axes: delay spread [s] (4 to 20) versus median average squared error from true impulse response (10^-3 to 10^0, log scale).]
[Figure 2: Median of the average squared error of RLS algorithms as a function of delay spread (SNR = 10 dB). Curves: trls, brls, sdwrls, idwrls, msdwrls, midwrls. Axes: delay spread [s] (4 to 20) versus median average squared error from true impulse response (10^-3 to 10^0, log scale).]
[Figure 3: Median of the average squared error of LMS algorithms as a function of SNR (delay spread = 1 symbol interval). Curves: tlms, blms, sdwlms, idwlms. Axes: SNR [dB] (0 to 30) versus median average squared error from true impulse response (10^-5 to 10^-1, log scale).]
[Figure 4: Median of the average squared error of RLS algorithms as a function of SNR (delay spread = 1 symbol interval). Curves: trls, brls, sdwrls, idwrls, msdwrls, midwrls. Axes: SNR [dB] (0 to 30) versus median average squared error from true impulse response (10^-5 to 10^-1, log scale).]
[Figure 5: Median of the average squared error of LMS algorithms as a function of Doppler frequency (SNR = 10 dB). Curves: tlms, blms, sdwlms, idwlms. Axes: Doppler frequency [Hz] (10^-10 to 10^-2, log scale) versus median average squared error from true impulse response (0.2 to 1.2).]
[Figure 6: Median of the average squared error of RLS algorithms as a function of Doppler frequency (SNR = 10 dB). Curves: trls, brls, sdwrls, idwrls, msdwrls, midwrls. Axes: Doppler frequency [Hz] (10^-10 to 10^-2, log scale) versus median average squared error from true impulse response (0.2 to 1.6).]
Conclusions
) Summary of Performance Test Results
* Soft decision weighted LMS (SDWLMS) performed better than the other LMS algorithms in delay spread (by a factor of 2 to 100, Figure 1) and SNR (by a factor of 2, Figure 3) tests
* Soft decision weighted RLS (SDWRLS) performed worse than the other RLS algorithms in delay spread (by a factor of 2, Figure 2) and SNR (by a factor of 3, Figure 4) tests
* Modified soft decision weighted RLS (MSDWRLS) performed better than the other RLS algorithms in delay spread (by a factor of 2 to 20, Figure 2) and SNR (by a factor of 2, Figure 4) tests
* SDWLMS performed better at normalized Doppler frequencies less than 10^-5 and worse at higher Doppler frequencies than the other LMS algorithms (Figure 5)
* SDWRLS and MSDWRLS performed worse over all Doppler frequencies than the other RLS algorithms (Figure 6)
* Ideal decision weighted LMS and RLS (IDWLMS and IDWRLS) performed similarly to their training sequence versions in all tests, and generally better than their ordinary decision-directed counterparts (Figures 1 through 6)
) General Conclusions
* Decision weighted estimators were defined, analyzed, and simulated
* SDWLMS shows the most promise for implementation
* SDWRLS performed poorly, but MSDWRLS performed well
* Ideal decision weighted algorithms performed similarly to training sequence algorithms