
Digital Signal Processing 23 (2013) 75–85


Digital Signal Processing

www.elsevier.com/locate/dsp

A variable step-size selective partial update LMS algorithm

Khaled Mayyas

Department of Electrical Engineering, Jordan University of Science and Technology, Irbid 22110, Jordan


Article history: Available online 11 September 2012

Keywords: Adaptive filtering; Selective partial update algorithms; Variable step-size

Selective partial update of the adaptive filter coefficients has been a popular method for reducing the computational complexity of least mean-square (LMS)-type adaptive algorithms. These algorithms use a fixed step-size that forces a performance compromise between fast convergence speed and small steady state misadjustment. This paper proposes a variable step-size (VSS) selective partial update LMS algorithm, where the VSS is an approximation of an optimal derived one. The VSS equations are controlled by only one parameter, and do not require any a priori information about the statistics of the system environment. Mean-square performance analysis will be provided for independent and identically distributed (i.i.d.) input signals, and an expression for the algorithm steady state excess mean-square error (MSE) will be presented. Simulation experiments are conducted to compare the proposed algorithm with existing full-update VSS LMS algorithms, which indicate that the proposed algorithm performs as well as these algorithms while requiring less computational complexity.

© 2012 Elsevier Inc. All rights reserved.

1. Introduction

Adaptive LMS-type algorithms with selective partial update (SPU) belong to the family of adaptive algorithms that update only a portion of the adaptive filter coefficients at each iteration. These algorithms reduce computational complexity while attempting to maintain close performance to their full-update counterparts, and therefore have been the focus of much attention [1–9]. These algorithms use a fixed step-size in their coefficient update recursion, thus inheriting the limitation of the full-update algorithm of having to compromise between fast convergence speed and a low level of steady state MSE [10]. This problem is tackled in full-update LMS-type algorithms by employing a VSS, which is adjusted based on a criterion that attempts to measure the adaptation process status, i.e., the proximity of the adaptive filter coefficients to the optimal ones. These criteria are based on ad hoc techniques or optimization methods [11–14].

The objective of this paper is to enhance the performance of the SPU-LMS algorithm by employing a VSS in the algorithm coefficient update equation, where the VSS is an approximation of a derived optimal one, thus resulting in a VSS-SPU-LMS algorithm. As opposed to existing full-update VSS algorithms that need fine tuning of more than one parameter in the VSS equations to achieve the desired performance, the proposed algorithm's VSS equations are controlled by one parameter and require no a priori information about the statistics of the system environment. Another important objective of this paper is to present mean-square convergence analysis of the proposed VSS-SPU-LMS algorithm for stationary

E-mail address: [email protected].

1051-2004/$ – see front matter © 2012 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.dsp.2012.09.004

i.i.d. input signals, where an expression for the algorithm steady state excess MSE is provided. Though it is desirable to analyze the algorithm in more realistic situations where the input signal is correlated, the order statistics employed by the algorithm render such analysis difficult to conduct to attain closed form expressions that clearly quantify the theoretical mean-square performance of the algorithm. In [5,6,8], an i.i.d. input signal is assumed in order to obtain explicit steady state expressions. It is only in [6] that order statistics are taken into account in the algorithm analysis, and a specific model for the i.i.d. input signal is assumed. Our analysis incorporates order statistics, and our approach is different from that in [6]. Moreover, the SPU algorithm in [6] uses a fixed step-size, and has a slightly different coefficient update equation than that of the proposed algorithm in this paper.

2. VSS-SPU-LMS adaptive algorithm

In the SPU-LMS algorithm, M coefficients out of the adaptive filter's N coefficients are updated at each iteration, where the selected M coefficients are those associated with the M largest |x(n − j + 1)|, j = 1, 2, …, N, at this iteration, where x(n) is the input signal. The remaining N − M coefficients are left unchanged. This selection criterion was shown to result in the closest possible performance to the full-update LMS algorithm given that both algorithms use the same step-size value [3,5]. Define an N × N diagonal matrix A(n) that has M ones at the diagonal entries indicated by s_i, i = N − M + 1, N − M + 2, …, N, corresponding to the M maxima of |x(n − s_i + 1)|, i = 1, 2, …, N, and zeros at the diagonal entries indicated by s_i, i = 1, 2, …, N − M, where |x(n − s_1 + 1)| ≤ |x(n − s_2 + 1)| ≤ ··· ≤ |x(n − s_N + 1)|. Define the


vector x̄(n) = A(n)x(n), where x(n) = [x(n) x(n − 1) ··· x(n − N + 1)]^T

is the input signal vector. The SPU-LMS weight update equation can then be written as

w(n + 1) = w(n) + μe(n)x̄(n) (1)

where

e(n) = d(n) − xT (n)w(n) (2)

is the output error signal, d(n) is the desired signal, w(n) = [w_1(n) w_2(n) ··· w_N(n)]^T is the adaptive filter coefficient vector, and μ is the step-size that controls stability, convergence speed, and the level of the steady state MSE. A constant step-size μ necessitates a compromise between fast convergence speed and tracking capabilities on the one hand, and the small steady state MSE desired in adaptive filtering applications on the other hand. Similarly to the full-update LMS algorithm, attaining both fast convergence speed and small misadjustment in the SPU-LMS algorithm can be tackled using a time-varying step-size μ(n) in the SPU-LMS weight update equation (1).
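As a concrete illustration of Eqs. (1) and (2), a single fixed-step SPU-LMS iteration can be sketched in NumPy. This is an illustrative sketch, not code from the paper; the function name and data layout are our own.

```python
import numpy as np

def spu_lms_step(w, x_vec, d, mu, M):
    """One SPU-LMS iteration (illustrative sketch).

    w     : adaptive filter coefficients, length N
    x_vec : input vector [x(n), x(n-1), ..., x(n-N+1)]
    d     : desired sample d(n)
    mu    : fixed step-size
    M     : number of coefficients updated per iteration
    """
    e = d - x_vec @ w                      # output error, Eq. (2)
    # taps associated with the M largest |x(n-j+1)| are the ones updated
    sel = np.argsort(np.abs(x_vec))[-M:]
    w_new = w.copy()
    w_new[sel] += mu * e * x_vec[sel]      # Eq. (1) restricted to selected taps
    return w_new, e
```

The remaining N − M coefficients are untouched, which is where the complexity saving over full-update LMS comes from.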

Assuming the desired signal is given by

d(n) = xT (n)w∗ + η(n) (3)

where η(n) is a zero-mean independent noise sequence, w∗ is the unknown system parameter vector, and defining the coefficient error vector

w̃(n) = w∗ − w(n) (4)

then Eq. (1) can be rewritten as

w̃(n + 1) = w̃(n) − μ(n)e(n)x̄(n) (5)

Pre-multiplying both sides of Eq. (5) by w̃^T(n + 1) and taking the expected value, the mean squared norm of the coefficient error vector is obtained as

E{‖w̃(n + 1)‖²} = E{‖w̃(n)‖²} − 2μ(n)E{e(n)w̃^T(n)x̄(n)} + μ²(n)E{e²(n)x̄^T(n)x̄(n)} (6)

The objective of the VSS-SPU-LMS algorithm is to choose the current step-size value μ(n) such that the updated coefficient vector w(n + 1) results in the largest reduction in the Euclidean distance E{‖w∗ − w(n + 1)‖²} = E{‖w̃(n + 1)‖²}. This can be accomplished by setting ∂E{‖w̃(n + 1)‖²}/∂μ(n) = 0 in Eq. (6), from which we obtain the optimal step-size [15]

μ∗(n) = E{e(n)w̃^T(n)x̄(n)} / E{e²(n)x̄^T(n)x̄(n)} (7)

Let ē(n) = x^T(n)w̃(n) be the noise-free error signal. Then, from Eqs. (2) and (3), the output error is given by

e(n) = ē(n) + η(n) (8)

Define the vector x̃(n) = x(n) − x̄(n), then

ē(n) = x̄^T(n)w̃(n) + x̃^T(n)w̃(n) (9)

We assume that M is large enough such that x̄(n) comprises the largest M elements in magnitude of x(n), then we can assume that

ē(n) ≈ x̄^T(n)w̃(n) (10)

This approximation is known to hold strongly for 0.5N ≤ M < N [8]. It will be shown in the simulations section that the proposed algorithm with M = N/2 performs as well as, and in some situations better than, full-update VSS-LMS-type algorithms while requiring less arithmetic complexity. For the case when 0 < M < 0.5N, it is difficult to determine the values of M for which the approximation in Eq. (10) is acceptably valid. This depends on the experiment conditions, and is not possible to quantify. However, the VSS-SPU-LMS algorithm is still expected to perform better than the fixed step-size SPU-LMS algorithm due to its ability to provide large step-size values at initial stages of adaptation and small ones at steady state. Using Eqs. (8) and (10) in Eq. (7), and noting that η(n) is zero-mean and independent of w̃(n) and x̄(n), the optimal step-size is approximately written as

μ∗(n) ≈ E{ē²(n)} / E{e²(n)x̄^T(n)x̄(n)} (11)

Notice that it is not possible to obtain μ∗(n) from Eq. (11) since ē(n) is practically unavailable. In the absence of noise, ē(n) = e(n) [16]. However, in real applications, e(n) is a noisy measure of ē(n). Therefore, using e(n) as an estimate for ē(n) will render the algorithm performance sensitive to the presence of noise, leading to performance deterioration. To alleviate this problem, we propose the following practical approximation to Eq. (11) [12,17]

μ∗(n) ≈ |E{e(n)e(n − 1)}| / E{e²(n)x̄^T(n)x̄(n)} (12)

To estimate the step-size in Eq. (12), the proposed algorithm uses the following two equations

q(n) = αq(n − 1) + (1 − α)e(n)e(n − 1) (13)

r(n) = αr(n − 1) + (1 − α)e²(n)x̄^T(n)x̄(n) (14)

and

μ(n) = |q(n)| / r(n) (15)

where 0 < α < 1 is the exponential weighting parameter that controls the quality of estimation. The proposed VSS-SPU-LMS algorithm is now described by Eqs. (1), (2), (13)–(15), where μ in Eq. (1) is replaced by μ(n). Eq. (13) requires 3 multiplications and one addition. Eq. (14) needs M + 4 multiplications and M additions. Therefore, to calculate the VSS in Eq. (15), M + 7 multiplications and M + 1 additions are needed per iteration. In total, the proposed VSS-SPU-LMS algorithm requires N + 2M + 8 multiplications and N + 2M + 1 additions at each iteration.

It should be mentioned that to obtain Eq. (12), we have assumed that E{ē²(n)} ≈ |E{ē(n)ē(n − 1)}|, where from Eq. (8), |E{e(n)e(n − 1)}| = |E{ē(n)ē(n − 1)}|. Accordingly, E{ē²(n)} ≈ |E{e(n)e(n − 1)}|. This result depends on the assumption that the disturbance noise η(n) is uncorrelated, since E{e(n)e(n − 1)} = E{ē(n)ē(n − 1)} + E{η(n)η(n − 1)} = E{ē(n)ē(n − 1)}. However, in some situations the noise can be correlated. For example, in a double-talk scenario in acoustic echo cancellation, the near-end speech signal is correlated. This problem can be avoided by using a double-talk detector. Also, when the unknown system impulse response is undermodeled, undermodelling noise can be correlated [18]. In this case, the quantity E{η(n)η(n − 1)} appears in E{e(n)e(n − 1)}, and depending on its level relative to E{ē(n)ē(n − 1)}, the approximation above can become weak. To alleviate such situations, we can use E{e(n)e(n − D)}, where D is chosen such that E{η(n)η(n − D)} is small compared to E{ē(n)ē(n − D)}, and D ≪ N. Notice that E{ē²(∞)} is the steady state excess MSE of the algorithm, which practically has a nonzero value. On the other hand, E{e(n)e(n − 1)} or E{ē(n)ē(n − 1)}, assuming an uncorrelated noise sequence, approaches zero at steady state since error samples are known to become uncorrelated at steady state. However, the estimation process of E{e(n)e(n − 1)} in Eq. (13) is not ideal, and E{μ(∞)} is shown in the next section to have a nonzero value that is a function of α.
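Putting Eqs. (1), (2), and (13)–(15) together, the complete VSS-SPU-LMS recursion can be sketched as below. This is our own illustrative NumPy implementation, not the paper's code; the small regularizer `delta` guarding r(n) against very low excitation anticipates the safeguard the paper itself introduces in Eq. (45).

```python
import numpy as np

def vss_spu_lms(x, d, N, M, alpha, delta=1e-8):
    """Sketch of the VSS-SPU-LMS recursion, Eqs. (1), (2), (13)-(15)."""
    w = np.zeros(N)                           # adaptive filter coefficients
    q = r = e_prev = 0.0
    for n in range(N - 1, len(x)):
        x_vec = x[n - N + 1:n + 1][::-1]      # [x(n), x(n-1), ..., x(n-N+1)]
        sel = np.argsort(np.abs(x_vec))[-M:]  # taps with the M largest |x|
        x_bar = np.zeros(N)
        x_bar[sel] = x_vec[sel]               # x_bar(n) = A(n) x(n)
        e = d[n] - x_vec @ w                                      # Eq. (2)
        q = alpha * q + (1 - alpha) * e * e_prev                  # Eq. (13)
        r = alpha * r + (1 - alpha) * e**2 * (x_bar @ x_bar + delta)  # Eq. (14)
        mu = abs(q) / r if r > 0 else 0.0                         # Eq. (15)
        w = w + mu * e * x_bar                # Eq. (1) with mu(n)
        e_prev = e
    return w
```

Only one tuning constant, α, appears, which is the point the paper emphasizes over multi-parameter full-update VSS schemes.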


3. Mean-square analysis of the VSS-SPU-LMS algorithm

This section will provide mean-square performance analysis for the proposed VSS-SPU-LMS algorithm. The input signal is assumed to be an i.i.d. zero-mean stationary sequence. Exact modeling of the unknown system is assumed, i.e., the adaptive filter vector w(n) and the unknown system coefficient vector w∗ have the same length N. However, the algorithm updates at each iteration M coefficients of the adaptive filter, where 1 ≤ M ≤ N. In addition, the common independence assumption of x(n) and w(n) is used [19]. We also assume that x̄(n) is independent of w(n).

3.1. Mean-square equations

The MSE is given by [15]

ε(n) = E{e²(n)} (16)

Substituting Eq. (2) in Eq. (16), and using Eqs. (3) and (4), yields

ε(n) = εmin + σ²x C(n) (17)

where C(n) = E{w̃^T(n)w̃(n)} = E{‖w̃(n)‖²}, εmin = E{η²(n)}, and σ²x is the input signal variance. The excess MSE is defined by [15]

εex(n) = σ²x C(n) (18)

From Eq. (1), and using Eq. (4), the coefficient error update equations of the VSS-SPU-LMS algorithm can be put in the form

w̃_si(n + 1) = w̃_si(n), i = 1, 2, …, N − M
w̃_si(n + 1) = w̃_si(n) − μ(n)e(n)x(n − s_i + 1), i = N − M + 1, N − M + 2, …, N (19)

Substituting Eqs. (2) and (3) in Eq. (19) results in

w̃_si(n + 1) = w̃_si(n) − μ(n)x(n − s_i + 1)x^T(n)w̃(n) − μ(n)x(n − s_i + 1)η(n), i = N − M + 1, N − M + 2, …, N (20)

Squaring both sides of Eq. (20), taking the expected value, and after some manipulation yield

E{w̃²_si(n + 1)} = [1 − 2E{μ(n)}E{x²_si(n)} + E{μ²(n)}E{x⁴_si(n)}]E{w̃²_si(n)} + E{μ²(n)}σ²x E{x²_si(n)} Σ_{j=1, j≠s_i}^{N} E{w̃²_j(n)} + E{μ²(n)}E{x²_si(n)}εmin (21)

where we have denoted x(n − s_i + 1) by x_si(n). For the VSS-SPU-LMS algorithm, C(n + 1) = E{‖w̃(n + 1)‖²} has the form

C(n + 1) = Σ_{i=1}^{N−M} E{w̃²_si(n)} + Σ_{i=N−M+1}^{N} E{w̃²_si(n + 1)} (22)

In [20], it was shown that, given the assumptions used here, the sequence of indices of the updated coefficients is a Markov process with a uniform probability of selecting any coefficient for updating. Hence, we will assume that E{w̃²_si(n)} = E{w̃²_j(n)}, ∀ j = 1, 2, …, N. Now, substituting Eq. (21) in Eq. (22) results in

C(n + 1) = [1 − (2E{μ(n)}/N)ψ2,M + (E{μ²(n)}/N)(ψ4,M + σ²x(N − 1)ψ2,M)]C(n) + E{μ²(n)}ψ2,M εmin (23)

where ψp,M is defined as ψp,M = Σ_{i=N−M+1}^{N} E{x^p_si(n)}. The calculation of ψ2,M and ψ4,M in Eq. (23) is a problem of order statistics [21,22]. It should be noted that not every distribution has a closed form solution for ψ2,M or ψ4,M; however, we derive in Appendix A expressions that allow the numerical calculation of ψ2,M and ψ4,M. When the input signal is Gaussian i.i.d., explicit closed form expressions exist for moments of order statistics from an i.i.d. chi-square distribution with one degree of freedom [23]. Moreover, for a complex Gaussian input signal, the integration in the moments of order statistics from an exponential distribution can be solved [24].

To compute C(n + 1) in Eq. (23), E{μ(n)} and E{μ²(n)} need to be evaluated. From Eq. (15), we have

E{μ(n)} = E{|q(n)|/r(n)} ≅ E{|q(n)|}/E{r(n)} (24)

and

E{μ²(n)} = E{q²(n)/r²(n)} ≅ E{q²(n)}/E{r²(n)} (25)

The approximations above are necessary to obtain approximate closed form expressions for E{μ(n)} and E{μ²(n)} [25]. To calculate E{|q(n)|} in Eq. (24), we use Eq. (13) to write q(n) recursively as

q(n) = (1 − α) Σ_{i=0}^{n−1} αⁱ e(n − i)e(n − i − 1) (26)

For sufficiently large n, the central limit theorem justifies the assumption that q(n) is a zero-mean Gaussian sequence, thus

E{|q(n)|} = √(2/π) σq(n) (27)

where σq(n) = √(E{q²(n)}) is the standard deviation of q(n) and

E{q²(n)} = (1 − α)² Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} αⁱαʲ E{e(n − i)e(n − i − 1)e(n − j)e(n − j − 1)} ≅ (1 − α)² Σ_{i=0}^{n−1} α²ⁱ E{e²(n − i)}E{e²(n − i − 1)} (28)

This approximation is valid upon convergence, where error samples can be assumed uncorrelated. Taking the expected value of both sides of Eq. (14) results in

E{r(n)} = αE{r(n − 1)} + (1 − α)E{e²(n)x̄^T(n)x̄(n)} (29)

The second term on the right-hand side of Eq. (29) can be simplified as

E{e²(n)x̄^T(n)x̄(n)} = ψ2,M εmin + (C(n)/N)[ψ4,M + ψ2,M σ²x(N − 1)] (30)

We also have from Eq. (14)

E{r²(n)} = α²E{r²(n − 1)} + 2α(1 − α)E{r(n − 1)}E{e²(n)x̄^T(n)x̄(n)} + (1 − α)²E{e⁴(n)x̄^T(n)x̄(n)x̄^T(n)x̄(n)} (31)

The third term on the right-hand side of Eq. (31) can be approximated as


E{e⁴(n)x̄^T(n)x̄(n)x̄^T(n)x̄(n)} ≅ E{e⁴(n)} Σ_{i=N−M+1}^{N} Σ_{j=N−M+1}^{N} E{x²_si(n)x²_sj(n)} (32)

An approximation for E{e⁴(n)} is available in [11] for Gaussian i.i.d. input signals, which is valid upon convergence:

E{e⁴(n)} ≅ 3ε²min + 6σ²x εmin C(n) + 3σ⁴x C²(n)[1 + 2/N] (33)

The quantity Σ_{i=N−M+1}^{N} Σ_{j=N−M+1}^{N} E{x²_si(n)x²_sj(n)} can be calculated numerically, as will be shown in Appendix A. Now, combining Eqs. (17), (23)–(25), (27)–(33) establishes an approximate expression for the MSE behavior of the proposed VSS-SPU-LMS algorithm.

It can be easily shown from Eq. (23) that mean-square convergence of the proposed algorithm is guaranteed by

0 < E{μ²(n)}/E{μ(n)} < 2 / (σ²x(N − 1) + ψ4,M/ψ2,M) (34)

When μ(n) = μ and M = N, the algorithm becomes the standard LMS algorithm, where for white Gaussian input signals ψ2,N = Nσ²x and ψ4,N = 3Nσ⁴x. In this case, the step-size bound in Eq. (34) becomes 0 < μ < 2/(σ²x[N + 2]), which matches the bound obtained for the LMS algorithm in [26] for white Gaussian inputs.

3.2. Steady state behavior

We proceed to study the steady state performance of the proposed algorithm. From Eq. (17), the steady state MSE is given by

ε(∞) = εex(∞) + εmin (35)

where, from Eq. (18), the steady state excess MSE εex(∞) has the form

εex(∞) = σ 2x C(∞) (36)

Substituting Eq. (23) in Eq. (36) yields

εex(∞) = yNσ²x εmin / (2 − y[ψ4,M/ψ2,M + σ²x(N − 1)]) (37)

where

y = E{μ²(∞)}/E{μ(∞)} ≅ √(π/2) σq(∞) E{r(∞)}/E{r²(∞)} (38)

At steady state, we evaluate Eqs. (28) and (29), respectively, to obtain

σq(∞) ≅ √((1 − α)/(1 + α)) ε(∞) (39)

and

E{r(∞)} ≅ ψ2,M ε(∞) (40)

In Eq. (31), we assume that (1 − α) is small enough such that the last term involving (1 − α)² can be ignored at steady state compared with the other terms; then

E{r²(∞)} ≅ (2α/(1 + α)) ε²(∞) ψ²2,M (41)

Substituting Eqs. (39)–(41) in Eq. (38), we get

y ≅ (1/ψ2,M) √(π(1 − α²)/(8α²)) (42)

Substituting Eq. (42) in Eq. (37) establishes an expression for the steady state excess MSE εex(∞) of the proposed VSS-SPU-LMS algorithm. When M = N and μ(n) = μ, we obtain the standard LMS algorithm steady state excess MSE εex(∞) = μNσ²x εmin / (2 − μσ²x[N + 2]), where we assumed a white Gaussian input signal; this is the same expression derived in [26] for the standard LMS algorithm for white Gaussian inputs.

4. Simulations

Here, all experiments are conducted in a system identification setup. The unknown system is a measured car cabin impulse response truncated to 256 samples, and the adaptive filter has the same length N = 256. A zero-mean white Gaussian noise η(n) is added to the system output to obtain the desired signal-to-noise ratio (SNR). Three different input signals are used in the experiments: a zero-mean white Gaussian signal with unity variance, a correlated signal generated as

x(n) = 0.8x(n − 1) + a(n) (43)

where a(n) is a zero-mean uncorrelated Gaussian sequence of unity variance, or a speech signal sampled at 8 kHz. The normalized misalignment, defined in dB as

Normalized misalignment = 10 log10(E{‖w∗ − w(n)‖²} / ‖w∗‖²) (44)

will be the performance measure. All experimental results are obtained by averaging over 100 independent runs.
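Eq. (44) for a single run can be computed directly (our own helper; in the experiments the expectation is replaced by the average over the 100 runs):

```python
import numpy as np

def normalized_misalignment_db(w_true, w):
    """Normalized misalignment of Eq. (44), in dB, for one run."""
    return 10 * np.log10(np.sum((w_true - w) ** 2) / np.sum(w_true ** 2))
```

With w(n) = 0 at initialization this evaluates to 0 dB, which is why the misalignment curves in Figs. 1–4 all start at 0 dB.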

The behavior of the proposed VSS-SPU-LMS algorithm is compared with the full-update versions of three well-known VSS LMS algorithms, namely Aboulnasr [12], Peng [13], and Koike [14], the full-update NLMS algorithm, and the SPU-LMS algorithm. Moreover, we compare the proposed algorithm with the Khong algorithm [10], which is a VSS selective partial update NLMS algorithm. That algorithm extends the VSS method in [27] to the SPU-NLMS algorithm [3]. Its step-size equation requires knowledge of the disturbance noise variance, which is practically difficult to estimate. Therefore, the authors in [10] used a constant in place of the corresponding noise variance quantity, which we denote here by κ,¹ and they recommended a typical value of 0.01 for κ. Parameters of all algorithms are carefully tuned to produce the same level of steady state normalized misalignment. To avoid possible numerical problems in Eq. (15) of the proposed VSS-SPU-LMS algorithm due to very low input signal excitations, a small constant δ is added to x̄^T(n)x̄(n) in Eq. (14) such that

in Eq. (14) such that

r(n) = αr(n − 1) + (1 − α)e²(n)(x̄^T(n)x̄(n) + δ) (45)

where 0 < δ ≪ 1. Moreover, for flexible implementation, the right-hand side of Eq. (15) is multiplied by a positive constant that is less than or equal to one. Here, the constant is set to one in all experiments. The computational complexity of the algorithms, in terms of the number of multiplications and additions per sample time, is shown in Table 1. In the first example, the input signal is white, and the variance of the noise signal η(n) is selected to obtain a high SNR of 45 dB. The step-size of all VSS

¹ In [10], the constant is denoted by C; however, it is denoted in this paper by κ to avoid confusion with the quantity C(n) used in this paper.


Table 1
Computational complexity.

Algorithm      # of Multips.   # of Adds.
Proposed       N + 2M + 8      N + 2M + 1
SPU-LMS        N + M + 1       N + M
Khong^a        N + 5M + 4      N + 4M + 1
NLMS^a         2N + 2          2N + 2
Aboulnasr      2N + 8          2N + 2
Peng           5N + 4          4N
Koike          6N + 9          5N + 1

^a We assume that p(n) = x^T(n)x(n) is calculated using the recursion p(n) = p(n − 1) + x²(n) − x²(n − N). This method requires one multiplication and 2 additions.

Fig. 1. Comparison of normalized misalignment of various adaptive algorithms for the experiment of a white input signal and high SNR = 45 dB.

algorithms, except for the proposed VSS-SPU-LMS and Khong algorithms, is limited by a maximum step-size μmax = 0.006 and a minimum step-size μmin = 0.0004. For the proposed VSS-SPU-LMS algorithm, M = 128 and α = 0.988. Aboulnasr is used with α = 0.97, β = 0.99, and γ = 1 [12]. Koike parameters are set to ρα = 0.001, ρ = 0.001, ρτ = 0.001, and Ld = 128 [14]. For Peng, ρ and α are selected as 0.3 and 0.99, respectively [13]. The NLMS algorithm is used with μ = 0.1, and the step-size value of the SPU-LMS algorithm is 0.0004. For Khong, M = 128 and the recommended parameter values in [10] are μmax = 1, α = 0.95, and κ = 0.01. However, this value of κ results in a very poor performance of the algorithm. Assuming that the noise variance is known, the algorithm is used with κ(n) given in [10] as κ(n) = εmin[‖x(n)‖²/‖x̄(n)‖²]². The algorithm is also used with a carefully chosen value κ = 8 × 10⁻⁶ that we found to provide a fast initial convergence speed of the algorithm. It is noticed that the algorithm performance is very sensitive to the choice of κ. Note that all algorithm parameters are selected to achieve their best performance in terms of fast initial convergence speed for the same level of steady state normalized misalignment. The normalized misalignment curves for all algorithms are shown in Fig. 1. It is clear that the proposed algorithm exhibits the fastest convergence among all the algorithms while providing the same steady state misadjustment level and updating only 50% of the adaptive filter coefficients per iteration.

In the second example, the white input signal is employed, and the variance of the added noise η(n) is selected to obtain a low SNR of 20 dB. For all algorithms, except for the proposed and Khong algorithms, we set μmax = 0.006 and μmin = 0.0004. The proposed algorithm is used with M = 128 and with both α = 0.988 and α = 0.999. For Aboulnasr, we set α = 0.97, β = 0.99, and γ = 0.01. Parameter values for Koike are ρα = 0.001, ρτ = 0.0001, ρ = 0.001, and Ld = 128. Peng is used with ρ = 0.0009 and α = 0.99. The step-size of the NLMS algorithm is chosen as μ = 0.1, and that of the SPU-LMS algorithm is μ = 0.0004. Khong is used with M = 128, μmax = 1, α = 0.95, and κ = 0.0009. Fig. 2 shows that the proposed VSS-SPU-LMS algorithm with α = 0.988, which was used in the high SNR experiment, converges initially as well as Peng and Koike, then slows down later before steady state. However, it is still faster than Aboulnasr and the NLMS, while updating only 50% of the coefficients. When the proposed algorithm is used with α = 0.999, its performance improves under low SNR conditions. This is because the increase in the level of disturbance noise reduces the quality of estimation of the algorithm step-size μ(n). More specifically, the numerator of the optimal step-size in Eq. (7), and also that of its approximation in Eq. (12), are independent of the noise variance E{η²(n)}. However, the numerator of the proposed algorithm step-size in Eq. (15) is dependent on the noise variance, particularly during the initial stages of estimation when there are not enough samples available to make good quality estimates. This will directly affect the algorithm transient mean-square behavior, as seen from Eq. (23). Increasing the value of α to 0.999 improves the estimation quality, thus reducing the effect of the high noise level on the estimation process. At steady state, performance of the algorithm step-size is not dependent on the noise level, where E{μ(∞)} = (1/ψ2,M)√(2(1 − α)/(π(1 + α))) and E{μ²(∞)} = (1 − α)/(2αψ²2,M). The reduction in the steady state normalized misalignment level for α = 0.999 compared to α = 0.988 in Fig. 2 is explained from Eq. (37), where the steady state normalized misalignment calculated using Eqs. (37) and (44) for α = 0.988 is −32.7702 dB, and for α = 0.999 is −38.3735 dB. Notice that in the above two examples, the proposed algorithm requires fewer arithmetic operations than all algorithms except Aboulnasr, which has the same computational complexity as the proposed algorithm. However, in both cases, the proposed algorithm outperforms Aboulnasr.


Fig. 2. Comparison of normalized misalignment of various adaptive algorithms for the experiment of a white input signal and low SNR = 20 dB.

Fig. 3. Comparison of normalized misalignment of various adaptive algorithms for the experiment of a correlated input signal and high SNR = 45 dB.

In the following two examples, the performance of the algorithms is examined with the correlated input signal under high and low SNR conditions. The step-size bounds for the VSS algorithms are μmax = 0.001 and μmin = 0.0003. In the high SNR experiment, the variance of η(n) is adjusted such that SNR = 45 dB. The proposed VSS-SPU-LMS algorithm is used with α = 0.988 and M = 128. Aboulnasr parameter values are chosen as α = 0.97, β = 0.99, and γ = 0.6. For Koike, the leakage factor values are ρα = 0.001, ρτ = 0.01, ρ = 0.01, and Ld = 128. Peng is used with ρ = 0.09 and α = 0.95. The NLMS algorithm step-size is selected as μ = 0.27. The step-size value of the SPU-LMS algorithm is chosen as μ = 0.0003. Khong is used with M = 128, μmax = 1, α = 0.95, and κ = 3 × 10⁻⁶. As seen from Fig. 3, the proposed algorithm outperforms all algorithms in its convergence speed while providing the same steady state misadjustment.

In the low SNR environment, the variance of η(n) is chosen to provide a 20 dB SNR. The proposed algorithm is used with α = 0.999 and M = 128. The step-size bounds for the algorithms are μmax = 0.001 and μmin = 0.0002, except for Aboulnasr, which was used with μmax = 0.0015 to attain its best initial convergence speed. Note that the step-sizes of all algorithms in all experiments, except for the proposed and Khong algorithms, are initialized with μmax to provide fast initial convergence. Aboulnasr is used with α = 0.97, β = 0.99, and γ = 0.1. For Koike, we set ρα = 0.001, ρτ = 0.001, ρ = 0.01, and Ld = 128, whereas for Peng, ρ = 0.002 and α = 0.5. The NLMS step-size is set to μ = 0.16, and that of the SPU-LMS algorithm is μ = 0.0002. Khong is used with M = 128, μmax = 0.75, α = 0.95, and κ = 1 × 10−5. Fig. 4 shows that the proposed algorithm performs as well as Koike and Peng, while Aboulnasr converges faster than all algorithms in this case. It should be noted that the Aboulnasr algorithm needs careful tuning of its three parameters and μmax for each specific experiment to obtain its best performance, whereas the good performance of the proposed algorithm can be maintained under different experimental conditions by the simple selection of only one time-averaging constant α.

This experiment tests all algorithms with the speech input plotted in Fig. 5, which represents the far-end speech signal in the context of acoustic echo cancellation in a car cabin. The experiment


Fig. 4. Comparison of normalized misalignment of various adaptive algorithms for the experiment of a correlated input signal and low SNR = 20 dB.

Fig. 5. The far-end speech signal.

also examines the algorithms' response to a sudden change in the unknown system, where the impulse response coefficients of the car cabin are all multiplied by −1 at n = 70 000. The SNR is set to 20 dB. The step-size bounds as well as all parameter values of the algorithms are carefully selected to provide the fastest convergence speed while providing similar steady state normalized misalignment. The step-size bounds for the VSS algorithms are μmax = 0.01 and μmin = 0.0004. The proposed VSS-SPU-LMS algorithm is used with α = 0.993 and is run for M = 64 and M = 128. Aboulnasr's parameter values are set to α = 0.99, β = 0.99, and γ = 5. Koike's parameters are selected as ρα = 0.3, ρτ = 0.01, ρ = 0.01, and Ld = 128. For Peng, ρ and α are chosen, respectively, as 0.00008 and 0.99. The NLMS algorithm is used with μ = 0.1. Khong is used with M = 128, μmax = 0.95, α = 0.99, and κ = 0.001. It can be seen from Fig. 6 that before the abrupt change in the system, the proposed and the NLMS algorithms exhibit similar convergence speed that is faster than the remaining algorithms. It should be noted, as is well known, that the NLMS algorithm performs well when the input signal power is highly nonstationary. After the sudden change, the proposed algorithm sustains its improved performance by converging faster than all algorithms. For M = 64, the proposed algorithm still demonstrates performance close to the M = 128 case while updating only 25% of the adaptive filter coefficients at each iteration.
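For reference, the partial-update mechanism underlying all of these experiments can be sketched as a generic M-max selective-partial-update LMS iteration: only the M filter coefficients paired with the largest-magnitude regressor entries are updated. This is a minimal illustration with a fixed step-size μ (the proposed algorithm would replace it with the variable step-size μ(n) of Eq. (15)); the function and variable names below are ours, not the paper's.

```python
import numpy as np

def spu_lms_iteration(w, x_vec, d, mu, M):
    """One selective-partial-update LMS step.

    w     : adaptive filter coefficients, length N
    x_vec : regressor [x(n), x(n-1), ..., x(n-N+1)]
    d     : desired sample d(n)
    mu    : step-size (fixed here; variable in the VSS-SPU-LMS)
    M     : number of coefficients updated per iteration (M <= N)
    """
    e = d - x_vec @ w                      # a priori output error
    sel = np.argsort(np.abs(x_vec))[-M:]   # taps with the M largest |x|
    w[sel] += mu * e * x_vec[sel]          # update only the selected taps
    return w, e

# System identification demo: N = 16 unknown taps, update M = 8 per iteration.
rng = np.random.default_rng(0)
w_opt = rng.standard_normal(16)
w = np.zeros(16)
x = rng.standard_normal(6000)
for n in range(16, 6000):
    x_vec = x[n:n - 16:-1]
    w, _ = spu_lms_iteration(w, x_vec, x_vec @ w_opt, mu=0.02, M=8)
```

With a white input, every tap is selected often enough for the filter to converge to the unknown system while halving the number of coefficient updates per iteration.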

A realistic change to the car cabin impulse response occurs when a person moves inside the cabin. This can be represented by shifting the impulse response 12 samples to the right [28]. The above experiment with the speech input is repeated, where the impulse response is changed to the shifted one at n = 70 000. All algorithms are used with the parameter values of the experiment of Fig. 6, and results are shown in Fig. 7. We can see from Fig. 7 that all algorithms, except Aboulnasr and Khong, exhibit better tracking capabilities after the change in the echo path impulse response compared to the case of Fig. 6. The proposed VSS-SPU-LMS


Fig. 6. Comparison of normalized misalignment of various adaptive algorithms for the experiment of a speech input signal and SNR = 20 dB with an abrupt change in the car cabin impulse response.

Fig. 7. Comparison of normalized misalignment of various adaptive algorithms for the experiment of a speech input signal and SNR = 20 dB with a change in the car cabin impulse response by shifting it 12 samples to the right.

algorithm maintains an improved tracking response by converging faster than all algorithms.

Another practical test of an adaptive algorithm in an AEC setup is during double-talk situations. In this experiment, a near-end speech signal is added to the desired signal during the period from n = 70 000 to n = 104 000. The SNR remains 20 dB as in the above two experiments. The proposed VSS-SPU-LMS algorithm is used with M = 128 and α = 0.998. Peng's parameter values are ρ = 0.00008 and α = 0.995. For Koike, ρα = 0.3, ρτ = ρ = 0.005, and Ld = 128. Aboulnasr is used with α = 0.997, β = 0.997, and γ = 10. The NLMS algorithm step-size is set to μ = 0.08. For Khong, M = 128, μmax = 0.9, α = 0.995, and κ = 0.001. All algorithm parameter values were selected to achieve approximately the same steady state normalized misalignment, and results are depicted in Fig. 8. No double-talk detector was used with any algorithm. Fig. 8 illustrates that the proposed and Peng algorithms are robust during double-talk and outperform the other algorithms. Moreover, both algorithms sustain good convergence speed after double-talk.

The following experiments are intended to verify the accuracy of the derived steady state excess MSE expression of the proposed algorithm in Eq. (37). In the first experiment, the first example above, with a white input signal and high SNR, is repeated for different values of M. Fig. 9 shows the experimental εex(n) curves for the proposed algorithm for M = 32, 64, 128, and 256. The steady state excess MSE εex(∞) results calculated from Eq. (37) are plotted as horizontal lines. It is clear from Fig. 9 that Eq. (37) predicts very well the actual steady state behavior of the algorithm. It is also seen that εex(∞) decreases as M increases.

In this experiment, we test the algorithm for the same example of white input and high SNR with different values of α. Simulation results are shown in Fig. 10 along with the calculated theoretical ones. Fig. 10 shows that the theoretical results match the experimental ones very well. It is also seen that εex(∞) decreases as α increases.


Fig. 8. Comparison of normalized misalignment of various adaptive algorithms for the experiment of a speech input signal and SNR = 20 dB with a double-talk situation.

Fig. 9. Experimental excess MSE of the proposed VSS-SPU-LMS algorithm and theoretical εex(∞) for a white input signal, N = 256, α = 0.988, SNR = 45 dB, and different values of M.

Finally, we compare experimental and theoretical results for different levels of SNR. Example one is again repeated for SNR = 10 dB, 20 dB, 30 dB, 40 dB, and 50 dB. Fig. 11 indicates the good agreement between empirical and analytical results under low and high SNR conditions. We should point out that the accuracy of the theoretical results is due to the fact that the analysis takes into account the order statistics employed in the proposed VSS-SPU-LMS algorithm.

5. Conclusions

The selective partial update LMS (SPU-LMS) algorithm has gained wide popularity due to its ability to maintain performance close to the full-update algorithm while reducing its computational complexity. In this paper, the convergence characteristics of the SPU-LMS algorithm were enhanced by incorporating a variable step-size (VSS), leading to a VSS-SPU-LMS algorithm. The VSS is an efficient implementation of a derived optimal step-size that depends only on the available data and does not need any a priori information about the adaptive system. Moreover, the VSS equations are controlled by only one time-averaging parameter α, and add only M + 7 multiplications and M + 1 additions to the complexity cost of the SPU-LMS algorithm.

Mean-square analysis was presented for the proposed VSS-SPU-LMS algorithm for stationary zero-mean i.i.d. input signals, and an expression for the algorithm steady state excess MSE was derived. Simulation results verified that the expression predicts very well the actual performance of the algorithm, mainly because the presented analysis model takes into consideration the order statistics selection technique of the algorithm, thus leading to accurate results. Simulation experiments also compared the proposed algorithm with other well-known full-update VSS-LMS algorithms, indicating the ability of the proposed algorithm to maintain a performance that rivals these algorithms at a reduced computational complexity.

Appendix A

In this appendix, we derive expressions that can be calculated numerically for ψ_{2,M} = ∑_{i=N−M+1}^{N} E{x²_{s_i}(n)}, ψ_{4,M} =


Fig. 10. Experimental excess MSE of the proposed VSS-SPU-LMS algorithm and theoretical εex(∞) for a white input signal, N = 256, M = 128, SNR = 45 dB, and different values of α.

Fig. 11. Experimental excess MSE of the proposed VSS-SPU-LMS algorithm and theoretical εex(∞) for a white input signal, N = 256, M = 128, α = 0.988, and different values of SNR.

∑_{i=N−M+1}^{N} E{x⁴_{s_i}(n)}, and ∑_{i=N−M+1}^{N} ∑_{j=N−M+1}^{N} E{x²_{s_i}(n) x²_{s_j}(n)}, which arise in the mean-square analysis of the VSS-SPU-LMS algorithm. The derivations of these expressions depend on results from moments of order statistics [21,29].

Let Y(n) = |x(n)|, where the elements of Y(n) have a probability density function (PDF) f_{y(n)}(y) and a cumulative distribution function (CDF) F_{y(n)}(y). Note that

F_{y(n)}(y) = F_{x(n)}(y) − F_{x(n)}(−y), y > 0 (46)

and

f_{y(n)}(y) = 2 f_{x(n)}(y), y > 0 (47)

where F_{x(n)}(x) and f_{x(n)}(x) are the CDF and PDF of the input signal sequence, respectively. Let y_{s_1}(n) ≤ y_{s_2}(n) ≤ ··· ≤ y_{s_N}(n) be the order statistics of the elements of Y(n); then the PDF of y_{s_i}(n) is given by [29]

f_{y_{s_i}(n)}(y) = N!/((i − 1)!(N − i)!) · f_{y(n)}(y) [F_{y(n)}(y)]^{i−1} [1 − F_{y(n)}(y)]^{N−i} (48)

where i = 1, 2, . . . , N. Notice that E{x^p_{s_i}(n)} = E{y^p_{s_i}(n)} for even values of p, p > 0. Accordingly, for even values of p, we have

ψ_{p,M} = ∑_{i=N−M+1}^{N} ∫_0^∞ y^p f_{y_{s_i}(n)}(y) dy
= ∑_{i=N−M+1}^{N} ∫_0^∞ y^p · N!/((i − 1)!(N − i)!) · f_{y(n)}(y) [F_{y(n)}(y)]^{i−1} [1 − F_{y(n)}(y)]^{N−i} dy (49)

from which we can calculate numerically ψ_{2,M} and ψ_{4,M} for a given PDF and CDF of the input signal. When x(n) is Gaussian i.i.d., f_{y(n)}(y) is the PDF of a random variable from a chi-distribution with one degree of freedom [21]. Moments of order statistics of samples from this distribution have explicit closed-form expressions derived in [23]. The quantity ∑_{i=N−M+1}^{N} ∑_{j=N−M+1}^{N} E{x²_{s_i}(n) x²_{s_j}(n)} can be evaluated as [21,29]

∑_{i=N−M+1}^{N} ∑_{j=N−M+1}^{N} E{x²_{s_i}(n) x²_{s_j}(n)} = ∑_{i=N−M+1}^{N} ∑_{j=N−M+1}^{N} E{y²_{s_i}(n) y²_{s_j}(n)}

= { ∑_{i=N−M+1}^{N} ∑_{j=N−M+1}^{N} ∫∫_{0<u<v<∞} u² v² · N!/((i − 1)!(j − i − 1)!(N − j)!) · f_{y(n)}(u) f_{y(n)}(v) [F_{y(n)}(u)]^{i−1} [F_{y(n)}(v) − F_{y(n)}(u)]^{j−i−1} [1 − F_{y(n)}(v)]^{N−j} du dv, 1 ≤ i < j ≤ N
{ 0, otherwise (50)
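As a sketch of how Eq. (49) can be evaluated numerically for a zero-mean Gaussian i.i.d. input, the folded-normal PDF and CDF play the roles of f_{y(n)} and F_{y(n)}, and each order-statistic integral is computed by simple quadrature. The function name and grid parameters are ours; this is an illustration of the computation, not code from the paper.

```python
import math
import numpy as np

def psi_p_M(p, N, M, sigma=1.0, grid=20001, ymax_sigmas=10.0):
    """Numerically evaluate psi_{p,M} = sum_{i=N-M+1}^{N} E{y_{s_i}^p(n)}
    via Eq. (49), for y(n) = |x(n)| with x(n) zero-mean Gaussian.

    f_y(y) = 2*phi(y/sigma)/sigma and F_y(y) = erf(y/(sigma*sqrt(2)))
    are the folded-normal PDF and CDF for y > 0.
    """
    y = np.linspace(0.0, ymax_sigmas * sigma, grid)
    h = y[1] - y[0]
    f = np.sqrt(2.0 / np.pi) / sigma * np.exp(-y**2 / (2.0 * sigma**2))
    F = np.array([math.erf(v / (sigma * math.sqrt(2.0))) for v in y])
    total = 0.0
    for i in range(N - M + 1, N + 1):
        # order-statistic coefficient N!/((i-1)!(N-i)!), computed in log space
        c = math.exp(math.lgamma(N + 1) - math.lgamma(i) - math.lgamma(N - i + 1))
        g = y**p * c * f * F**(i - 1) * (1.0 - F)**(N - i)
        total += h * (g.sum() - 0.5 * (g[0] + g[-1]))  # trapezoidal rule
    return total

# Sanity check: with M = N the sum runs over all order statistics, so
# psi_{2,N} = N*E{x^2} = N and psi_{4,N} = N*E{x^4} = 3N for unit variance.
```

The sanity check works because summing a function of all N order statistics is the same as summing it over the unordered samples, which gives a quick way to validate the quadrature before using partial sums (M < N).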

References

[1] S.C. Douglas, Adaptive filters employing partial updates, IEEE Trans. Circuits Syst. II 44 (March 1997) 209–216.
[2] T. Aboulnasr, K. Mayyas, Selective coefficient update of gradient-based adaptive algorithms, in: Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., vol. 3, Munich, April 1997, pp. 1929–1932.
[3] T. Aboulnasr, K. Mayyas, Complexity reduction of the NLMS algorithm via selective coefficient update, IEEE Trans. Signal Process. 47 (May 1999) 1421–1427.
[4] T. Schertler, Selective block update of NLMS-type algorithms, in: Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., vol. 3, Seattle, WA, May 1998, pp. 1717–1720.
[5] K. Dogançay, O. Tanrıkulu, Adaptive filtering algorithms with selective partial update, IEEE Trans. Circuits Syst. II 48 (August 2001) 762–769.
[6] S. Werner, M.L.R. de Campos, P.S.R. Diniz, Partial-update NLMS algorithms with data-selective updating, IEEE Trans. Signal Process. 52 (4) (April 2004) 938–948.
[7] Kyu-Young Hwang, Woo-Jin Song, An affine projection adaptive filtering algorithm with selective regressors, IEEE Trans. Circuits Syst. II, Express Briefs 54 (January 2007) 43–46.
[8] A.W.H. Khong, P.A. Naylor, Selective-tap adaptive filtering with performance analysis for identification of time-varying systems, IEEE Trans. Audio, Speech, Language Process. 15 (5) (July 2007) 1681–1695.
[9] M.S.E. Abadi, J.H. Husoy, Mean-square performance analysis of the family of adaptive filters with selective partial updates, Signal Process. 88 (2008) 2008–2018.
[10] A.W.H. Khong, W.-S. Gan, P.A. Naylor, M. Brookes, A low complexity fast converging partial update adaptive algorithm employing variable step-size for acoustic echo cancellation, in: Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Las Vegas, NV, April 2008, pp. 237–240.
[11] R.H. Kwong, E.W. Johnston, A variable step size LMS algorithm, IEEE Trans. Signal Process. 40 (July 1992) 1633–1642.
[12] T. Aboulnasr, K. Mayyas, A robust variable step size LMS type algorithm: Analysis and simulations, IEEE Trans. Signal Process. 45 (3) (March 1997) 1633–1642.
[13] Wee-Peng Ang, B. Farhang-Boroujeny, A new class of gradient adaptive step-size LMS algorithms, IEEE Trans. Signal Process. 49 (4) (April 2001) 805–810.
[14] S. Koike, A class of adaptive step-size control algorithms for adaptive filters, IEEE Trans. Signal Process. 50 (6) (June 2002) 1315–1326.
[15] S. Haykin, Adaptive Filter Theory, 4th edition, Prentice Hall, 2002.
[16] S. Hyun-Chool, A.H. Sayed, S. Woo-Jin, Variable step-size NLMS and affine projection algorithms, IEEE Signal Process. Lett. 11 (2) (February 2004) 132–135.
[17] K. Mayyas, A new variable step size control method for the transform domain LMS adaptive algorithm, Circuits Syst. Signal Process. 24 (6) (December 2005) 703–721.
[18] K. Mayyas, Performance analysis of the deficient length LMS adaptive algorithm, IEEE Trans. Signal Process. 53 (8) (August 2005) 2727–2734.
[19] L.L. Horowitz, K.D. Senne, Performance advantage of complex LMS for controlling narrow-band adaptive arrays, IEEE Trans. Acoust. Speech Signal Process. ASSP-29 (June 1981) 722–736.
[20] S.C. Douglas, Analysis and implementation of the Max-NLMS adaptive filter, in: Proc. 29th Asilomar Conf. Signals Syst. Comput., vol. 1, Pacific Grove, CA, October 1995, pp. 659–663.
[21] Z. Govindarajulu, Exact lower moments of order statistics in samples from the chi-distribution (1 d.f.), Ann. Math. Stat. 33 (December 1963) 1292–1305.
[22] M.I. Haddad, K. Mayyas, M.A. Khasawneh, Analytical development of the MMAXNLMS algorithm, in: Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., vol. 4, Phoenix, AZ, March 1999, pp. 1853–1856.
[23] S. Nadarajah, A comment on "Partial-Update NLMS Algorithms With Data-Selective Updating", IEEE Trans. Signal Process. 55 (6) (June 2007) 3148–3149.
[24] B.C. Arnold, N. Balakrishnan, H.N. Nagaraja, A First Course in Order Statistics, Wiley-Interscience, 1992.
[25] S. Koike, Analysis of adaptive filters using normalized signed regressor LMS algorithm, IEEE Trans. Signal Process. 47 (10) (October 1999) 2710–2723.
[26] A. Feuer, E. Weinstein, Convergence analysis of LMS filters with uncorrelated Gaussian data, IEEE Trans. Acoust. Speech Signal Process. ASSP-33 (February 1985) 222–230.
[27] H. Shin, A. Sayed, W. Song, Variable step-size NLMS and affine projection algorithms, IEEE Signal Process. Lett. 11 (2) (February 2004) 132–135.
[28] C. Paleologu, J. Benesty, S. Ciochina, A variable step-size affine projection algorithm designed for acoustic echo cancellation, IEEE Trans. Audio, Speech, Language Process. 16 (8) (November 2008) 1466–1478.
[29] A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York, 1991.

Khaled Mayyas received the B.S.E. and M.Sc. degrees from the Jordan University of Science and Technology (JUST), Irbid, Jordan, in 1990 and 1992, and the Ph.D. degree from the University of Ottawa, Ottawa, ON, Canada, in 1995, all in electrical engineering. He is currently a Professor of Electrical Engineering at JUST. He was Dean of the Faculty of Engineering at JUST for 2007–2011 and Vice-Dean for 2005–2007. He was a visiting Research Associate at the School of Information Technology and Engineering, University of Ottawa, in the summers of 1996–1999 and 2002. His research interests are in various aspects of adaptive digital signal processing and their applications, including the development of efficient algorithms.

