Hindawi Publishing Corporation
Mathematical Problems in Engineering
Volume 2013, Article ID 601623, 7 pages
http://dx.doi.org/10.1155/2013/601623

Research Article
Subband Adaptive Filtering with $l_1$-Norm Constraint for Sparse System Identification

    Young-Seok Choi

    Department of Electronic Engineering, Gangneung-Wonju National University, Gangneung 210-702, Republic of Korea

    Correspondence should be addressed to Young-Seok Choi; [email protected]

    Received 27 September 2013; Revised 26 November 2013; Accepted 26 November 2013

    Academic Editor: Yue Wu

Copyright Β© 2013 Young-Seok Choi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents a new approach to the normalized subband adaptive filter (NSAF) which directly exploits the sparsity condition of an underlying system for sparse system identification. The proposed NSAF integrates a weighted $l_1$-norm constraint into the cost function of the NSAF algorithm. To obtain the optimum solution of the weighted $l_1$-norm regularized cost function, a subgradient calculus is employed, resulting in a stochastic-gradient-based update recursion of the weighted $l_1$-norm regularized NSAF. The choice of distinct weighted $l_1$-norm regularizations leads to two versions of the $l_1$-norm regularized NSAF. Numerical results clearly indicate the superior convergence of the $l_1$-norm regularized NSAFs over the classical NSAF, especially when identifying a sparse system.

    1. Introduction

Over the past few decades, the relative simplicity and good performance of the normalized least mean square (NLMS) algorithm have made it a popular tool for adaptive filtering applications. However, its convergence performance deteriorates significantly for correlated input signals [1, 2]. As a popular solution, adaptive filtering in subbands has been developed, referred to as the subband adaptive filter (SAF) [3–7]. Its distinctive feature rests on the property that LMS-type adaptive filters converge faster for white input signals than for colored ones [1, 2]. Thus, by carrying out a prewhitening of colored input signals, the SAF achieves accelerated convergence compared to LMS-type adaptive filters. Recently, the use of multiple-constraint optimization criteria in the formulation of a cost function has resulted in the normalized SAF (NSAF), whose computational complexity is close to that of the NLMS algorithm [6, 7].

In the context of system identification, the unknown system to be identified is sparse in common scenarios such as echo paths [8] and digital TV transmission channels [9]; that is, the unknown system consists of many near-zero coefficients and a small number of large ones. However, adaptive filtering algorithms suffer from poor convergence performance when identifying a sparse system [8]. Indeed, the capability of the NSAF fades in a sparse system identification scenario. To deal with this issue, a variety of proportionate adaptive algorithms have been presented for the NSAF, which apply proportionate step sizes to distinct filter taps [10–12]. However, these algorithms do not exploit the sparsity condition of the underlying system.

Recently, motivated by the compressive sensing framework [13, 14] and the least absolute shrinkage and selection operator (LASSO) [15], a number of adaptive filtering algorithms which make use of the sparsity condition of an underlying system have been developed [16–20]. The core idea behind this approach is to incorporate the sparsity condition of the underlying system by imposing a sparsity-inducing constraint term. Adding an $l_0$- or $l_1$-norm sparsity constraint to the cost function makes the least relevant filter weights shrink toward zero. However, to the best of the author's knowledge, adaptive filtering in subbands which exploits the sparsity condition has not been studied yet.

In this regard, this paper presents a novel approach, the sparsity-regularized NSAFs, which incorporates the sparsity condition of the system directly into the cost function via a sparsity-inducing constraint term. This is carried out by regularizing the cost function with a weighted $l_1$-norm of the filter weight estimate.

Figure 1: Subband structure with the analysis and synthesis filters, and the subband desired signals, subband filter outputs, and subband error signals.

Considering two choices of the weighted $l_1$-norm regularization, two stochastic-gradient-based $l_1$-norm regularized NSAF algorithms are derived. First, the $l_1$-norm NSAF ($l_1$-NSAF) is obtained by using the identity matrix as the weighting matrix. Second, the reweighted $l_1$-norm NSAF ($l_1$-RNSAF), which uses the current estimate of the system in the weighted $l_1$-norm, is developed. Through numerical simulations, the resulting sparsity-regularized NSAFs prove their superiority over the classical NSAF, especially when the sparsity of the underlying system becomes severe.

The remainder of the paper is organized as follows. Section 2 introduces the classical NSAF, followed by the derivation of the proposed sparsity-regularized NSAFs in Section 3. Section 4 illustrates the computer simulation results, and Section 5 concludes this study.

2. Conventional NSAF

Consider a desired signal $d(n)$ that arises from the system identification model

$$d(n) = \mathbf{u}(n)\mathbf{w}^\circ + v(n), \qquad (1)$$

where $\mathbf{w}^\circ$ is a column vector for the impulse response of an unknown system that we wish to estimate, $v(n)$ accounts for measurement noise with zero mean and variance $\sigma_v^2$, and $\mathbf{u}(n)$ denotes the $1 \times M$ input vector,

$$\mathbf{u}(n) = [u(n)\;\; u(n-1)\;\; \cdots\;\; u(n-M+1)]. \qquad (2)$$

Figure 1 shows the structure of the NSAF, where the desired signal $d(n)$ and input signal $u(n)$ are partitioned into $N$ subbands by the analysis filters $H_0(z), H_1(z), \ldots, H_{N-1}(z)$. The resulting subband signals, $d_i(n)$ and $y_i(n)$ for $i = 0, 1, \ldots, N-1$, are critically decimated to a lower sampling rate commensurate with their bandwidth. Here, the variable $n$ indexes the original sequences and $k$ indexes the decimated sequences for all signals. Then, the decimated filter output signal at each subband is defined as $y_{i,D}(k) = \mathbf{u}_i(k)\mathbf{w}(k)$, where $\mathbf{u}_i(k)$ is a $1 \times M$ row vector, such that

$$\mathbf{u}_i(k) = [u_i(kN),\; u_i(kN-1),\; \ldots,\; u_i(kN-M+1)], \qquad (3)$$

and $\mathbf{w}(k) = [w_0(k), w_1(k), \ldots, w_{M-1}(k)]^T$ denotes an estimate of $\mathbf{w}^\circ$ with length $M$. Thus the decimated subband error signal is given by

$$e_{i,D}(k) = d_{i,D}(k) - y_{i,D}(k) = d_{i,D}(k) - \mathbf{u}_i(k)\mathbf{w}(k), \qquad (4)$$

where $d_{i,D}(k) = d_i(kN)$ is the decimated desired signal at each subband.

In [6], the authors formulated the Lagrangian-based multiple-constraint optimization problem

$$J_{\mathrm{NSAF}}(k) = \|\mathbf{w}(k+1) - \mathbf{w}(k)\|^2 + \sum_{i=0}^{N-1} \lambda_i \left[d_{i,D}(k) - \mathbf{u}_i(k)\mathbf{w}(k+1)\right], \qquad (5)$$

where $\lambda_i$ for $i = 0, 1, \ldots, N-1$ denote the Lagrange multipliers. Solving the cost function (5), the update recursion of the NSAF algorithm is given by [6, 7]

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \mu \sum_{i=0}^{N-1} \frac{\mathbf{u}_i^T(k)}{\|\mathbf{u}_i(k)\|^2}\, e_{i,D}(k), \qquad (6)$$

where $\mu$ is the step-size parameter.

3. Weighted $l_1$-Norm Regularized NSAF

3.1. Derivation of the Proposed Algorithm. To reflect the sparsity condition of the true system $\mathbf{w}^\circ$, a weighted $l_1$-norm of the filter weight estimate is used to regularize the cost function of the NSAF, giving

$$J_{l_1\text{-}\mathrm{NSAF}}(k) = \|\mathbf{w}(k+1) - \mathbf{w}(k)\|^2 + \sum_{i=0}^{N-1} \lambda_i\left[d_{i,D}(k) - \mathbf{u}_i(k)\mathbf{w}(k+1)\right] + \gamma \|\Pi \mathbf{w}(k+1)\|_1, \qquad (7)$$

where $\|\Pi\mathbf{w}(k+1)\|_1$ accounts for the weighted $l_1$-norm of the filter weight vector $\mathbf{w}(k+1)$ and is written as

$$\|\Pi\mathbf{w}(k+1)\|_1 = \sum_{m=0}^{M-1} \pi_m \left|w_m(k+1)\right|, \qquad (8)$$

where $\Pi$ is an $M \times M$ weighting matrix whose diagonal elements are $\pi_m$ and whose other elements are zero, and $w_m(k+1)$ denotes the $m$th tap weight of $\mathbf{w}(k+1)$, for $m = 0, 1, \ldots, M-1$. In addition, $\gamma$ is a positive parameter that balances the error-related term against the weighted $l_1$-norm regularization on the right-hand side of (7).

To find the optimal weight vector $\mathbf{w}(k+1)$ that minimizes the cost function (7), the derivative of (7) with respect to $\mathbf{w}(k+1)$ is taken and set to zero. Note that the weighted $l_1$-norm regularization term, $\|\Pi\mathbf{w}(k+1)\|_1$, is not differentiable at any point where $w_m(k+1) = 0$. To address this issue, a subgradient calculus [21] is employed.

Thus, taking the derivative of (7) with respect to the weight vector $\mathbf{w}(k+1)$ and setting it to zero leads to

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \frac{1}{2}\sum_{i=0}^{N-1} \lambda_i \mathbf{u}_i^T(k) - \frac{\gamma}{2} \nabla^s_{\mathbf{w}} \|\Pi\mathbf{w}(k+1)\|_1, \qquad (9)$$

where $\nabla^s_{\mathbf{w}} f(\cdot)$ denotes a subgradient vector of a function $f(\cdot)$ with respect to $\mathbf{w}(k+1)$. An available subgradient vector $\nabla^s_{\mathbf{w}}\|\Pi\mathbf{w}(k+1)\|_1$ is obtained as [21]

$$\nabla^s_{\mathbf{w}}\|\Pi\mathbf{w}(k+1)\|_1 = \Pi^T \mathrm{sgn}\left(\Pi\mathbf{w}(k+1)\right) = \Pi\, \mathrm{sgn}\left(\mathbf{w}(k+1)\right), \qquad (10)$$

since $\Pi$ is assumed to be a diagonal matrix with positive-valued elements, where $\mathrm{sgn}(\cdot)$ is a componentwise sign function defined by

$$\mathrm{sgn}(x) = \begin{cases} \dfrac{x}{|x|}, & x \neq 0 \\ 0, & \text{elsewhere.} \end{cases} \qquad (11)$$

Substituting (10) into (9) and assuming $\mathrm{sgn}[\mathbf{w}(k+1)] \approx \mathrm{sgn}[\mathbf{w}(k)]$ gives

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \frac{1}{2}\sum_{i=0}^{N-1}\lambda_i\mathbf{u}_i^T(k) - \frac{\gamma}{2}\,\Pi\,\mathrm{sgn}\left(\mathbf{w}(k)\right). \qquad (12)$$

Substituting (12) into the multiple constraints of the NSAF, that is, $d_{i,D}(k) = \mathbf{u}_i(k)\mathbf{w}(k+1)$ for $i = 0, 1, \ldots, N-1$, and rewriting in matrix form leads to

$$\Lambda = 2\left[\mathbf{U}(k)\mathbf{U}^T(k)\right]^{-1}\mathbf{e}_D(k) + \gamma\left[\mathbf{U}(k)\mathbf{U}^T(k)\right]^{-1}\mathbf{U}(k)\,\Pi\,\mathrm{sgn}\left(\mathbf{w}(k)\right), \qquad (13)$$

where $\Lambda = [\lambda_0, \lambda_1, \ldots, \lambda_{N-1}]^T$ is the $N \times 1$ Lagrange vector and

$$\mathbf{U}(k) = \begin{bmatrix}\mathbf{u}_0(k)\\ \vdots \\ \mathbf{u}_{N-1}(k)\end{bmatrix}, \qquad \mathbf{e}_D(k) = \begin{bmatrix} e_{0,D}(k)\\ \vdots\\ e_{N-1,D}(k)\end{bmatrix}. \qquad (14)$$

By neglecting the off-diagonal elements of $\mathbf{U}(k)\mathbf{U}^T(k)$ [6], the components of $\Lambda$ in (13) can be simplified to

$$\lambda_i = 2\,\frac{e_{i,D}(k)}{\|\mathbf{u}_i(k)\|^2} + \gamma\,\frac{\mathbf{u}_i(k)}{\|\mathbf{u}_i(k)\|^2}\,\Pi\,\mathrm{sgn}\left(\mathbf{w}(k)\right), \qquad (15)$$

for $i = 0, 1, \ldots, N-1$.

Consequently, combining (12) and (15), the update recursion of the sparsity-regularized NSAF is given by

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \mu\sum_{i=0}^{N-1}\left[\frac{\mathbf{u}_i^T(k)}{\|\mathbf{u}_i(k)\|^2}\, e_{i,D}(k) + \frac{\gamma}{2}\,\frac{\mathbf{u}_i(k)\,\Pi\,\mathrm{sgn}\left(\mathbf{w}(k)\right)}{\|\mathbf{u}_i(k)\|^2}\,\mathbf{u}_i^T(k)\right] - \frac{\mu\gamma}{2}\,\Pi\,\mathrm{sgn}\left(\mathbf{w}(k)\right), \qquad (16)$$

where $\mu$ is the step-size parameter.

3.2. Determination of the Weighted $l_1$-Norm Regularization. Here, by choosing the weighting matrix $\Pi$, two versions of the sparsity-regularized NSAF are developed. First, using the identity matrix as the weighting matrix, that is, $\Pi = \mathbf{I}_M$, results in the following update recursion:

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \mu\sum_{i=0}^{N-1}\left[\frac{\mathbf{u}_i^T(k)}{\|\mathbf{u}_i(k)\|^2}\, e_{i,D}(k) + \frac{\gamma}{2}\,\frac{\mathbf{u}_i(k)\,\mathrm{sgn}\left(\mathbf{w}(k)\right)}{\|\mathbf{u}_i(k)\|^2}\,\mathbf{u}_i^T(k)\right] - \frac{\mu\gamma}{2}\,\mathrm{sgn}\left(\mathbf{w}(k)\right), \qquad (17)$$

which is referred to as the $l_1$-norm NSAF ($l_1$-NSAF), the unweighted case. The $l_1$-NSAF uniformly attracts the tap coefficients of $\mathbf{w}(k)$ toward zero. This zero-attraction process improves the convergence of the $l_1$-NSAF when the majority of the entries of a system are zero, that is, when the system is sparse.
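As a usage example, the $l_1$-NSAF of (17) is simply the sketch above with the identity weighting; `w`, `U`, and `d_D` are the placeholder variables of the earlier sketches:

```python
# l1-NSAF, eq. (17): Pi = I_M, so its diagonal is a vector of ones
w = weighted_l1_nsaf_update(w, U, d_D, pi=np.ones(w.shape[0]), gamma=3e-5)
```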

Second, to approximate the actual sparsity condition of the underlying system, that is, the $l_0$-norm of the system, the weights of $\Pi$ are chosen inversely proportional to the magnitudes of the actual coefficients of the system:

$$\pi_m = \begin{cases}\dfrac{1}{\left|w^\circ_m\right|}, & w^\circ_m \neq 0\\[4pt] \infty, & w^\circ_m = 0,\end{cases} \qquad (18)$$

where $w^\circ_m$ denotes the $m$th coefficient of the system $\mathbf{w}^\circ$. However, since the actual coefficients of the system are unavailable, the current filter weight estimates are used instead of the actual weights, which is referred to as the reweighting scheme [22]:

$$\pi_m(k) = \frac{1}{\left|w_m(k)\right| + \epsilon} \quad \text{for } m = 0, 1, \ldots, M-1, \qquad (19)$$

where $w_m(k)$ denotes the $m$th tap weight of $\mathbf{w}(k)$ and $\epsilon$ is a small positive value that avoids singularity when $|w_m(k)| = 0$. The weighting matrix $\Pi$ then has $\pi_m(k)$ as its $m$th diagonal element and is time-varying. Finally, the update recursion is given by

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \mu\sum_{i=0}^{N-1}\left[\frac{\mathbf{u}_i^T(k)}{\|\mathbf{u}_i(k)\|^2}\, e_{i,D}(k) + \frac{\gamma}{2}\,\frac{\bar{\mathbf{u}}_i(k)\,\mathrm{sgn}\left(\mathbf{w}(k)\right)}{\|\mathbf{u}_i(k)\|^2}\,\mathbf{u}_i^T(k)\right] - \frac{\mu\gamma}{2}\,\frac{\mathrm{sgn}\left(\mathbf{w}(k)\right)}{\left|\mathbf{w}(k)\right| + \epsilon}, \qquad (20)$$

where $\bar{\mathbf{u}}_i(k) = \mathbf{u}_i(k)\Pi$ and the vector division in the last term is componentwise. This recursion is called the reweighted $l_1$-norm NSAF ($l_1$-RNSAF).
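The $l_1$-RNSAF differs from (16) only in that the diagonal of $\Pi$ is refreshed from the current estimate via (19) before every update. A minimal sketch, reusing the illustrative `weighted_l1_nsaf_update` from Section 3.1:

```python
import numpy as np

def l1_rnsaf_update(w, U, d_D, mu=0.5, gamma=5e-5, eps=0.01):
    """One l1-RNSAF iteration: reweighting (19) followed by the update (20)."""
    pi = 1.0 / (np.abs(w) + eps)  # time-varying diagonal of Pi, eq. (19)
    return weighted_l1_nsaf_update(w, U, d_D, pi, mu=mu, gamma=gamma)
```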

Table 1 lists the number of multiplications and divisions per iteration for the NSAF [6], the $l_1$-NSAF, and the $l_1$-RNSAF. As shown in the table, the $l_1$-norm constraint leads to an acceptable increase in computation.

Table 1: Computational complexity.

                    NSAF       l1-NSAF    l1-RNSAF
   Multiplications  3M + 3NL   6M + 3NL   7M + 3NL
   Divisions        1          2          2 + M/N

(M: filter length, N: number of subbands, and L: length of the analysis and synthesis filters.)

4. Numerical Results

The performance of the proposed sparsity-regularized NSAFs is validated through computer simulations in a system identification scenario in which the unknown channel is randomly generated. The lengths of the unknown system are $M = 128$ and $512$ in the experiments, of which $S$ taps are nonzero. The nonzero filter weights are positioned randomly, and their values are drawn from a Gaussian distribution $\mathcal{N}(0, 1/S)$. Here, $S = 4$ is used in all simulations except Figure 5, in which various $S$ values are considered. The adaptive filter and the unknown system are assumed to have the same number of taps. The input signals are obtained by filtering a white, zero-mean Gaussian random sequence through the first-order system $G(z) = 1/(1 - 0.9z^{-1})$. The signal-to-noise ratio (SNR) is calculated as

$$\mathrm{SNR} = 10\log_{10}\!\left(\frac{E\left[y^2(n)\right]}{E\left[v^2(n)\right]}\right), \qquad (21)$$

where $y(n) = \mathbf{u}(n)\mathbf{w}^\circ$. The measurement noise $v(n)$ is added to $y(n)$ such that SNR $= 10$, $20$, and $30$ dB. To compare convergence performance, the normalized mean square deviation (MSD),

$$\text{Normalized MSD} = E\!\left[\frac{\left\|\mathbf{w}^\circ - \mathbf{w}(k)\right\|^2}{\left\|\mathbf{w}^\circ\right\|^2}\right], \qquad (22)$$

is computed and averaged over 50 independent trials. The cosine-modulated filter banks [23] with $N = 4$ subbands are used in the simulations, with a prototype filter of length $L = 32$. For comparison purposes, the proportionate NSAF (PNSAF) [12], which was developed for sparse system identification, is also considered. The step size is set to $\mu = 0.5$ for the SAF algorithms, except for the PNSAF, where the step sizes $\mu = 0.6$ (Figure 2) and $\mu = 0.65$ (Figure 6) are chosen to achieve steady-state MSD similar to that of the $l_1$-RNSAF. For the $l_1$-RNSAF, $\epsilon = 0.01$ is chosen. In addition, $\rho = 0.05$ is used for the PNSAF. The $\gamma$ values are obtained by repeated trials so as to minimize the steady-state MSD.

Figure 2 shows the normalized MSD curves of the NLMS, NSAF, $l_1$-NSAF, and $l_1$-RNSAF for $N = 4$ and SNR $= 30$ dB. For the $l_1$-NSAF and $l_1$-RNSAF, $\gamma = 3\times 10^{-5}$ is chosen. As shown in Figure 2, not only does the $l_1$-RNSAF outperform the conventional NLMS, NSAF, PNSAF, and $l_1$-NSAF, but the $l_1$-NSAF also performs better than the other conventional algorithms, in terms of both convergence rate and steady-state misalignment.

In Figure 3, to examine the effect of $\gamma$ on convergence performance, the normalized MSD curves of the $l_1$-RNSAF for different $\gamma$ values are shown, for $N = 4$ and SNR $= 30$ dB. Across the different $\gamma$ values ($\gamma = 1\times 10^{-4}$, $1\times 10^{-5}$, $5\times 10^{-5}$, and $1\times 10^{-6}$), the $l_1$-RNSAF is not excessively sensitive to $\gamma$. The analysis of an optimal $\gamma$ value remains future work.

Next, the performance of the proposed $l_1$-norm regularized NSAFs is compared to that of the original NSAF under different SNR conditions. Figure 4 depicts the normalized MSD curves of the NSAF, $l_1$-NSAF, and $l_1$-RNSAF under SNR $= 10$ and $20$ dB. The $\gamma$ value for the $l_1$-NSAF and $l_1$-RNSAF is set to $5\times 10^{-5}$. It is clear that both the $l_1$-NSAF and $l_1$-RNSAF are superior to the NSAF in all the SNR cases considered. Furthermore, the $l_1$-RNSAF performs well compared to the $l_1$-NSAF.

Figure 2: Normalized MSD curves of the NLMS, NSAF, PNSAF, $l_1$-NSAF, and $l_1$-RNSAF ($N = 4$).

Figure 3: Normalized MSD curves of the $l_1$-RNSAF for various $\gamma$ values ($N = 4$).

In Figure 5, the convergence properties of the NSAF and $l_1$-RNSAF are compared under various sparsity conditions of the underlying system. With the same system length, $M = 128$, different sparsity conditions ($S = 4$, $8$, $16$, and $32$) are considered under SNR $= 30$ dB. The value of $\gamma$ is set to $3\times 10^{-5}$ for the $l_1$-RNSAF. Figure 5 shows that the NSAF is insensitive to the sparsity condition. The results also indicate that the sparser the underlying system, the better the $l_1$-RNSAF performs.

The performance of the NSAF, $l_1$-NSAF, and $l_1$-RNSAF for a long system, here with filter length $M = 512$, is compared in Figure 6.

Figure 4: Normalized MSD curves of the NSAF, $l_1$-NSAF, and $l_1$-RNSAF under various SNR conditions (SNR = 10, 20, and 30 dB, $N = 4$).

Figure 5: Normalized MSD curves of the NSAF and $l_1$-RNSAF under various sparsity conditions ($S = 4$, $8$, $16$, and $32$, $N = 4$).

For the $l_1$-NSAF and $l_1$-RNSAF, $\gamma = 5\times 10^{-5}$ is chosen. A result similar to that of Figure 2 is observed in Figure 6.

Finally, the tracking capabilities of the algorithms in the face of a sudden change in the system are tested for $N = 4$ and SNR $= 30$ dB. Figure 7 shows the results when the unknown system is right-shifted by 20 taps. The same value of $\gamma$ as in Figure 2 is used. As can be seen, the $l_1$-NSAF and $l_1$-RNSAF track the weight change without sacrificing convergence rate or steady-state misalignment compared to the conventional NLMS, NSAF, and PNSAF.

Figure 6: Normalized MSD curves of the NLMS, NSAF, PNSAF, $l_1$-NSAF, and $l_1$-RNSAF for a long system of $M = 512$ ($N = 4$).

Figure 7: Normalized MSD curves of the NSAF, PNSAF, $l_1$-NSAF, and $l_1$-RNSAF in the case of a time-varying unknown system ($N = 4$). The system is right-shifted by 20 taps at 1500 iterations.

In particular, the $l_1$-RNSAF achieves better performance than the $l_1$-NSAF in terms of both convergence rate and steady-state misalignment.

5. Conclusion

A new family of NSAFs that takes into account the sparsity condition of an underlying system has been presented, obtained by incorporating a weighted $l_1$-norm constraint on the filter weights into the cost function. The update recursion is obtained by employing subgradient calculus on the weighted $l_1$-norm constraint term. Subsequently, two sparsity-regularized NSAFs, the unweighted $l_1$-NSAF and the $l_1$-RNSAF, have been developed. The numerical results indicate that the proposed $l_1$-NSAF and $l_1$-RNSAF achieve greatly improved convergence performance over the conventional algorithms for sparse system identification.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research was supported by the new faculty research program funded by Gangneung-Wonju National University (2013100162).

    References

[1] S. Haykin, Adaptive Filter Theory, Prentice Hall, Upper Saddle River, NJ, USA, 4th edition, 2002.

[2] A. H. Sayed, Fundamentals of Adaptive Filtering, Wiley, New York, NY, USA, 2003.

[3] A. Gilloire and M. Vetterli, "Adaptive filtering in subbands with critical sampling: analysis, experiments, and application to acoustic echo cancellation," IEEE Transactions on Signal Processing, vol. 40, no. 8, pp. 1862–1875, 1992.

[4] M. De Courville and P. Duhamel, "Adaptive filtering in subbands using a weighted criterion," IEEE Transactions on Signal Processing, vol. 46, no. 9, pp. 2359–2371, 1998.

[5] S. S. Pradhan and V. U. Reddy, "A new approach to subband adaptive filtering," IEEE Transactions on Signal Processing, vol. 47, no. 3, pp. 655–664, 1999.

[6] K. A. Lee and W. S. Gan, "Improving convergence of the NLMS algorithm using constrained subband updates," IEEE Signal Processing Letters, vol. 11, no. 9, pp. 736–739, 2004.

[7] K. A. Lee and W. S. Gan, "Inherent decorrelating and least perturbation properties of the normalized subband adaptive filter," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4475–4480, 2006.

[8] D. L. Duttweiler, "Proportionate normalized least-mean-squares adaptation in echo cancelers," IEEE Transactions on Speech and Audio Processing, vol. 8, no. 5, pp. 508–518, 2000.

[9] W. F. Schreiber, "Advanced television systems for terrestrial broadcasting: some problems and some proposed solutions," Proceedings of the IEEE, vol. 83, no. 6, pp. 958–981, 1995.

[10] S. L. Gay, "Efficient, fast converging adaptive filter for network echo cancellation," in Proceedings of the 32nd Asilomar Conference on Signals, Systems & Computers, pp. 394–398, November 1998.

[11] H. Deng and M. Doroslovački, "Improving convergence of the PNLMS algorithm for sparse impulse response identification," IEEE Signal Processing Letters, vol. 12, no. 3, pp. 181–184, 2005.

[12] M. S. E. Abadi, "Proportionate normalized subband adaptive filter algorithms for sparse system identification," Signal Processing, vol. 89, no. 7, pp. 1467–1474, 2009.

[13] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.

[14] E. J. CandΓ¨s, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.

[15] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society B, vol. 58, pp. 267–288, 1996.

[16] Y. Chen, Y. Gu, and A. O. Hero, "Sparse LMS for system identification," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP '09), pp. 3125–3128, April 2009.

[17] Y. Gu, J. Jin, and S. Mei, "$l_0$ norm constraint LMS algorithm for sparse system identification," IEEE Signal Processing Letters, vol. 16, no. 9, pp. 774–777, 2009.

[18] J. Jin, Y. Gu, and S. Mei, "A stochastic gradient approach on compressive sensing signal reconstruction based on adaptive filtering framework," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 409–420, 2010.

[19] E. M. Eksioglu and A. K. Tanc, "RLS algorithm with convex regularization," IEEE Signal Processing Letters, vol. 18, no. 8, pp. 470–473, 2011.

[20] N. Kalouptsidis, G. Mileounis, B. Babadi, and V. Tarokh, "Adaptive algorithms for sparse system identification," Signal Processing, vol. 91, no. 8, pp. 1910–1919, 2011.

[21] D. P. Bertsekas, Convex Analysis and Optimization, Athena Scientific, Cambridge, Mass, USA, 2003.

[22] E. J. CandΓ¨s, M. B. Wakin, and S. P. Boyd, "Enhancing sparsity by reweighted $l_1$ minimization," The Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 877–905, 2008.

[23] P. P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice-Hall, Englewood Cliffs, NJ, USA, 1993.
