Abstract—The distributed estimation problem arises in many sensor network-based applications. Recently, adaptive networks have been proposed in the literature to solve the problem of linear estimation in a cooperative fashion. Among adaptive networks, the incremental-based algorithms (networks) offer excellent estimation performance, especially in small-size networks. The goal of this paper is to design an incremental least-mean-squares (LMS) adaptive network with predefined performance. Specifically, under small step sizes and some conditions on the data, we assign the step-size parameter at each node in an incremental LMS adaptive network so that the steady-state value of the mean-square deviation (MSD) at each individual node becomes smaller than a desired value. In the proposed algorithm, the step size is adjusted for each node according to its measurement quality, which is stated in terms of the observation noise variance. Simulation results demonstrate the performance advantages of the proposed algorithm.
I. INTRODUCTION
In many wireless sensor network applications, the ultimate goal is to obtain an accurate estimate of an
unknown parameter, based on observations acquired by spatially distributed sensors. Precision agriculture, environment monitoring, target localization, and tracking are examples of such applications [1]. Recently, distributed adaptive estimation algorithms (also known as adaptive networks) have been proposed in the literature that solve the distributed estimation problem in a cooperative and adaptive manner [2-8]. Indeed, an adaptive network is a collection of N individual adaptive nodes that observe space-time data and collaborate, according to some cooperation protocol, in order to estimate parameters related to some events of interest [2]. These solutions are appealing choices when the statistical information of the underlying processes of interest is not available or changes over time.
The performance of adaptive networks depends on the mode of cooperation between nodes, e.g., incremental [2-5] and diffusion [6-8]. In the incremental adaptive networks, a cyclic path through the network is required, and nodes communicate with neighbors within this path [3-5]. In the diffusion algorithms, nodes communicate with all of their neighbors, and no cyclic path is required [6-8].
A. Rastegarnia and A. Khalili are with the Department of Electrical Engineering, Malayer University, Malayer 65719-95863, Iran (e-mail: {a_rastegar, a.khalili}@ieee.org). W. M. Bazzi is with the Electrical Engineering Department, American University in Dubai, P.O. Box 28282, Dubai, UAE (e-mail: [email protected]).
TABLE I SYMBOLS AND THEIR DESCRIPTIONS
In comparison, the incremental-based algorithms (networks) offer excellent estimation performance, especially in small-size networks, while diffusion-based networks are more robust to link and node failures.
The goal of this paper is to design an incremental LMS adaptive network (by step-size assignment) so that the steady-state value of the mean-square deviation at each individual node becomes smaller than a desired value. To be more specific, let us define the following quantities (these quantities will be introduced in Section II in more detail).
• $\eta_k$: steady-state value of the MSD at node $k$
• $\eta_p$: predefined (desired) MSD at node $k$
• $\mu_k$: step size of node $k$
Now, we can pose our problem as follows: find the appropriate value of $\mu_k$ at each individual node so that for any node in the network we have $\eta_k \le \eta_p$. In the
proposed algorithm, the step size is adjusted for each node according to its measurement quality, which is given by its observation noise variance. The important feature of the proposed algorithm is that it uses only local data to assign the step size at each node. Simulation results show that, by using the proposed algorithm, each individual node achieves (in steady state) at most the desired MSD.
Notation: Throughout the paper, we use boldface letters for random quantities. The * symbol is used for both complex conjugation of scalars and Hermitian transposition of matrices. The other symbols are listed in Table I.
II. BACKGROUND
A. The Incremental LMS Adaptive Network
Consider a distributed network (e.g., a WSN) with $N$ nodes $\mathcal{N} = \{1, \dots, N\}$, which communicate according to the incremental protocol. At time $i$, each node $k \in \mathcal{N}$ has
Design of an Incremental LMS Adaptive Network with Desired Mean-Square Deviation
Amir Rastegarnia, Wael M. Bazzi, and Azam Khalili
2011 2nd International Conference on Control, Instrumentation and Automation (ICCIA)
access to the scalar measurement $d_k(i)$ and the $1 \times M$ regression vector $u_{k,i}$ that are related via
$$d_k(i) = u_{k,i} w^o + v_k(i) \qquad (1)$$
where the $M \times 1$ vector $w^o \in \mathbb{C}^{M}$ is an unknown parameter and $v_k(i)$ is the observation noise term with variance $\sigma_{v,k}^2$. The measurements $\{d_k(i), u_{k,i}\}$ are assumed
to be realizations of zero-mean jointly wide-sense stationary random processes $\{\mathbf{d}_k, \mathbf{u}_k\}$. The objective of the network is to estimate $w^o$ from the measurements collected at the $N$ nodes. Note that $w^o$ is the solution of the following optimization problem
$$w^o = \arg\min_{w} J(w) \quad \text{where} \quad J(w) = E\,\|\mathbf{d} - \mathbf{U} w\|^2 \qquad (2)$$
where
1 1
2 2
1
,
N NN M N´ ´
é ù é ùê ú ê úê ú ê úê ú ê úê ú ê úê ú ê úê ú ê úê ú ê úë û ë û
u du d
U d
u d
(3)
The optimal solution of (2) (i.e., $w^o$) is given by [2], [9]
$$w^o = R_u^{-1} R_{du} \qquad (4)$$
where
$$R_{du} = E\{\mathbf{U}^* \mathbf{d}\} \quad \text{and} \quad R_u = E\{\mathbf{U}^* \mathbf{U}\} \qquad (5)$$
In order to use (4), each node must have access to the global statistical information $\{R_u, R_{du}\}$, which is not available in many applications or changes over time. To address this issue, and moreover to enable the network to respond to changes in the statistical properties of the data in real time, the incremental LMS (I-LMS) adaptive network was proposed in [3]. The update equation of I-LMS is given by
$$\psi_{k,i} = \psi_{k-1,i} + \mu_k u_{k,i}^* \left( d_k(i) - u_{k,i}\, \psi_{k-1,i} \right) \qquad (6)$$
where $\psi_{k,i}$ denotes the local estimate of $w^o$ at node $k$ at time $i$ and $\mu_k$ is the step size. In the I-LMS algorithm, the calculated estimates (i.e., $\psi_{k,i}$) are sequentially circulated from node to node, as shown in Fig. 1.
Fig. 1. The block diagram of the I-LMS adaptive network: the local estimates $\psi_{1,i}, \psi_{2,i}, \dots, \psi_{N,i}$ are circulated sequentially along the cyclic path through nodes $1, 2, \dots, N$.
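For reference in the sequel, the recursion (6) can be sketched in a few lines of Python. This is a simplified, real-valued setting that we assume here (so $u^*$ reduces to $u$); the node count, dimensions, and step sizes are illustrative, not taken from the paper.

```python
import numpy as np

def ilms_cycle(psi, U, d, mu):
    """One incremental cycle of the I-LMS recursion (6): the estimate
    psi is passed along the cyclic path, and each node k refines it
    with its own data (u_{k,i}, d_k(i)) and its own step size mu_k."""
    for k in range(len(d)):              # visit nodes in cyclic order
        err = d[k] - U[k] @ psi          # local estimation error at node k
        psi = psi + mu[k] * err * U[k]   # LMS update (real data: u* = u)
    return psi                           # psi_{N,i}, handed back to node 1

# toy run: N = 5 nodes estimating an M = 3 parameter vector w_o
rng = np.random.default_rng(0)
M, N = 3, 5
w_o = rng.standard_normal(M)
psi = np.zeros(M)
mu = np.full(N, 0.05)
for i in range(2000):                            # time index i
    U = rng.standard_normal((N, M))              # regressors u_{k,i}
    d = U @ w_o + 0.01 * rng.standard_normal(N)  # measurements, Eq. (1)
    psi = ilms_cycle(psi, U, d, mu)
```

After the loop, psi is a close approximation of w_o; note that only the current estimate, not the raw data, travels between nodes.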
B. Steady-State of I-LMS Adaptive Network
A good performance measure for the adaptive network is the MSD, which for each node $k$ is defined as follows
$$\eta_k = E\, \| \tilde{\boldsymbol{\psi}}_{k-1,\infty} \|^2 \qquad (7)$$
where
$$\tilde{\boldsymbol{\psi}}_{k-1,i} \triangleq w^o - \boldsymbol{\psi}_{k-1,i} \qquad (8)$$
In [2], the mean-square performance of the I-LMS algorithm is studied using energy conservation arguments. The analysis relies on the data model (1) and also on the following assumptions:
1. The regressors $\mathbf{u}_{k,i}$ are spatially and temporally independent.
2. The regressors $\mathbf{u}_{k,i}$ arise from a circular Gaussian distribution with covariance matrix $R_{u,k}$.
In [2], a complex closed-form expression for the MSD has been derived. However, in the case of small step sizes, a simplified expression for the MSD can be obtained as follows: for each node $k$, introduce the eigendecomposition $R_{u,k} = U_k \Lambda_k U_k^*$, where $U_k$ is unitary and $\Lambda_k$ is a diagonal matrix with the eigenvalues of $R_{u,k}$:
$$\Lambda_k = \mathrm{diag}\{\lambda_{k,1}, \lambda_{k,2}, \dots, \lambda_{k,M}\} \quad (\text{node } k) \qquad (9)$$
Then, according to the results from [2]:
$$\eta_k \approx \sum_{j=1}^{M} \left( \frac{\sum_{l=1}^{N} \mu_l^2\, \sigma_{v,l}^2\, \lambda_{l,j}}{2 \sum_{l=1}^{N} \mu_l\, \lambda_{l,j}} \right) \qquad (10)$$
In the next section, we use (10) to derive our proposed algorithm.
III. PROPOSED SCHEME
To derive our proposed algorithm, we consider the following assumptions:
[A.1] The correlation matrix $R_{u,k}$ can be expressed as
$$R_{u,k} = \rho I_M \qquad (11)$$
where $\rho \in \mathbb{R}^{+}$ is unknown.
[A.2] The observation noise variance at node $k$ satisfies $\sigma_{v,k}^2 \in [a, b]$.
Considering (A.1) in (10) yields
$$\eta_k = \Psi(\mu_1, \mu_2, \dots, \mu_N) = \frac{M \sum_{k=1}^{N} \mu_k^2\, \sigma_{v,k}^2}{2 \sum_{k=1}^{N} \mu_k} \qquad (12)$$
Note that (12) reveals an equalization effect on the MSD throughout the network, i.e., for all $k \in \mathcal{N}$ we have $\eta_k = \eta$.
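The prediction (12) depends only on the step sizes and noise variances (the value of $\rho$ cancels out), so it is easy to evaluate numerically. The following helper, with illustrative values that are not from the paper, makes the equalized MSD explicit:

```python
import numpy as np

def theoretical_msd(mu, var_v, M):
    """Steady-state MSD predicted by Eq. (12) under assumption (A.1),
    R_{u,k} = rho * I_M. The same value holds at every node, which is
    the equalization effect noted in the text."""
    mu = np.asarray(mu, dtype=float)
    var_v = np.asarray(var_v, dtype=float)
    return M * np.sum(mu**2 * var_v) / (2.0 * np.sum(mu))

# example: N = 4 nodes, M = 3, noise variances in [0.01, 0.1]
mu = [0.05, 0.05, 0.02, 0.01]
var_v = [0.01, 0.10, 0.05, 0.02]
eta = theoretical_msd(mu, var_v, M=3)
```

Note how the noisiest node (variance 0.10) dominates the numerator unless its step size is reduced, which is precisely what the design below exploits.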
Now we pose our design problem as follows: find the appropriate value of $\mu_k$ at each individual node so that for any $k \in \mathcal{N}$ we have $\eta_k \le \eta_p$, i.e.,
$$\eta_k = \frac{M \sum_{k=1}^{N} \mu_k^2\, \sigma_{v,k}^2}{2 \sum_{k=1}^{N} \mu_k} \le \eta_p, \qquad \forall k \in \mathcal{N} \qquad (13)$$
To find the appropriate $\mu_k$'s, we use (12) and rewrite (13) as
$$M \left( \mu_1^2 \sigma_{v,1}^2 + \cdots + \mu_N^2 \sigma_{v,N}^2 \right) \le 2 \eta_p \left( \mu_1 + \cdots + \mu_N \right) \qquad (14)$$
After some computations, we obtain the following condition
2,
2,p
kv k
kM
hm
s£ " Î (15)
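The computation behind (15) can be made explicit: bounding each term of the left-hand sum in (14) by the matching term of the right-hand sum gives a per-node condition, and summing over the nodes recovers (14).

```latex
% Term-by-term sufficiency of (15):
\mu_k \le \frac{2\eta_p}{M\sigma_{v,k}^2}
\;\Longrightarrow\;
M\mu_k^2\sigma_{v,k}^2 \le 2\eta_p\,\mu_k
\;\Longrightarrow\;
M\sum_{k=1}^{N}\mu_k^2\sigma_{v,k}^2 \le 2\eta_p \sum_{k=1}^{N}\mu_k
```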
It is straightforward to show that using (15) in (12) yields $\eta_k \le \eta_p$. However, a problem arises when $\sigma_{v,k}^2 \to 0$, since
$$\lim_{\sigma_{v,k}^2 \to 0} \mu_k = \infty \qquad (16)$$
where (16) in turn makes the I-LMS algorithm diverge. Thus, to ensure the convergence of the I-LMS algorithm and to have $\eta_k \le \eta_p$, we select the step size at each node $k$ as follows
$$\mu_k = \min \left( \frac{2 \eta_p}{M \sigma_{v,k}^2},\; \mu_{\max} \right) \qquad (17)$$
where $\mu_{\max}$ is a positive constant that prevents the divergence of the I-LMS algorithm. A suitable choice for $\mu_{\max}$ can be found as follows: we select $\mu_{\max}$ so that for $\mu_k = \mu_{\max}$ we still have $\eta_k \le \eta_p$. To this aim, we consider again equation (13) and let $\mu_k = \mu_{\max}$, which results in
$$\eta = \frac{M \mu_{\max} \sum_{k=1}^{N} \sigma_{v,k}^2}{2N} \le \eta_p \qquad (18)$$
From (18) we obtain
$$\mu_{\max} = \frac{2 \eta_p}{M} \left( \frac{1}{N} \sum_{k=1}^{N} \sigma_{v,k}^2 \right)^{-1} \qquad (19)$$
The term in parentheses above is the average of the observation noise variances $\sigma_{v,k}^2$, which, using (A.2), can be approximated as
$$\frac{1}{N} \sum_{k=1}^{N} \sigma_{v,k}^2 \approx \frac{a + b}{2} \qquad (20)$$
Replacing (20) in (19) gives
$$\mu_{\max} = \frac{4 \eta_p}{M (a + b)} \qquad (21)$$
Finally, using (21) in (17) we obtain
$$\mu_k = \min \left( \frac{2 \eta_p}{M \sigma_{v,k}^2},\; \frac{4 \eta_p}{M (a + b)} \right) \qquad (22)$$
Note that from (22) we can easily conclude that the proposed algorithm assigns small step sizes to nodes with poor measurement quality, so that, in the limit, such nodes become simply relay nodes. Moreover, the proposed algorithm uses only local data to assign the step size at each node; thus, no communication cost is added in comparison with the I-LMS algorithm. It is also notable that, to implement the proposed algorithm, each node needs to know its observation noise variance $\sigma_{v,k}^2$. To obtain an accurate estimate of $\sigma_{v,k}^2$, in [10] we have proposed a two-phase method in which, in the first phase, the standard I-LMS algorithm is run with a common step size at all nodes; once the steady state has been reached, each node uses the latest weight vector to estimate its noise variance. We use the method in [10] to estimate the required $\sigma_{v,k}^2$ in our proposed algorithm. The pseudo code of the proposed algorithm is shown in the sequel.
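A minimal sketch of the step-size assignment of (15), (21), and (22) follows. The noise-variance estimates are passed in directly here; in practice they would come from the two-phase method of [10], and the parameter values are illustrative assumptions.

```python
import numpy as np

def assign_step_sizes(var_v_hat, eta_p, M, a, b):
    """Step-size assignment of Eqs. (21)-(22): mu_max caps the step
    size of very low-noise nodes, so the I-LMS recursion cannot
    diverge even as a variance estimate approaches zero."""
    var_v_hat = np.asarray(var_v_hat, dtype=float)
    mu_max = 4.0 * eta_p / (M * (a + b))         # Eq. (21)
    with np.errstate(divide="ignore"):           # sigma^2 -> 0 gives inf
        mu = 2.0 * eta_p / (M * var_v_hat)       # Eq. (15)
    return np.minimum(mu, mu_max)                # Eq. (22)

# example: desired MSD eta_p = 1e-3, M = 4, variances drawn from [a, b]
a, b = 0.01, 0.1
var_v_hat = np.array([0.02, 0.08, 0.0, 0.05])    # node 3 is noiseless
mu = assign_step_sizes(var_v_hat, eta_p=1e-3, M=4, a=a, b=b)
```

Here only the node with variance 0.08 receives its uncapped value from (15); the cleaner nodes, including the noiseless one, are all clipped to $\mu_{\max}$.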
IV. SIMULATION RESULTS
In this section, we present simulation results to evaluate the performance of the proposed scheme. The parameters and values used in the simulation setup are listed in Table II. In the simulations, we have used the method given in [10] to estimate $\sigma_{v,k}^2$ at each node. These estimates are then used in the proposed algorithm to compute $\mu_{\max}$ in (21) and $\mu_k$ in (22).
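An end-to-end version of such an experiment can be sketched as follows. This is a cross-check under assumptions we state explicitly: real-valued data, white regressors, a single run instead of 100, and illustrative parameter values rather than the exact Table II settings.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, eta_p = 4, 8, 1e-3                  # dimension, nodes, desired MSD
a, b = 0.01, 0.1                          # noise-variance range, (A.2)
var_v = rng.uniform(a, b, size=N)         # per-node noise variances

# step-size assignment of Eqs. (21)-(22)
mu_max = 4 * eta_p / (M * (a + b))
mu = np.minimum(2 * eta_p / (M * var_v), mu_max)

w_o = rng.standard_normal(M)
psi = np.zeros(M)
msd_curve = []                            # ||w_o - psi||^2 per iteration
for i in range(20000):
    for k in range(N):                    # one incremental cycle, Eq. (6)
        u = rng.standard_normal(M)        # white regressors: R_u = I_M, (A.1)
        d = u @ w_o + np.sqrt(var_v[k]) * rng.standard_normal()
        psi = psi + mu[k] * (d - u @ psi) * u
    msd_curve.append(np.sum((w_o - psi) ** 2))

steady_msd = float(np.mean(msd_curve[-50:]))  # average of last 50 samples
```

By construction, the value predicted by (12) for this choice of step sizes does not exceed eta_p, so steady_msd should settle in the vicinity of, and not appreciably above, the desired MSD.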
TABLE II PARAMETERS AND THEIR VALUES
The quantity of interest, namely the MSD, is obtained by averaging the last 50 samples of the corresponding learning curves. The curves are generated by averaging over 100 independent runs. In Fig. 2 we show the $\sigma_{v,k}^2$'s and the $\mu_k$'s computed by our proposed algorithm. Comparing the measurement quality at each node (i.e., $\sigma_{v,k}^2$) with the computed $\mu_k$, we can see that $\mu_k$ becomes smaller as $\sigma_{v,k}^2$ becomes larger. Thus, the proposed algorithm assigns small step sizes to low-quality nodes. In Fig. 3, we show the steady-state MSD at each node $k$ (i.e., $\eta_k$) and the desired (predefined) MSD (i.e., $\eta_p$).
Fig. 2. The $\sigma_{v,k}^2$'s and the corresponding computed $\mu_k$'s for the different nodes.
We can conclude from Fig. 3 that the proposed algorithm is able to guarantee the achievement of the desired MSD at every node. In other words, using the proposed algorithm, we have $\eta_k \le \eta_p$ for all nodes in the network. In addition, we can see that for any $k, j \in \mathcal{N}$, we have $\eta_k \approx \eta_j$. The performance of our proposed algorithm can also be observed in Fig. 4, in which the MSD learning curve for node 1 is plotted.
Fig. 3. The steady-state MSD ($\eta_k$) and the desired MSD ($\eta_p$).
Fig. 4. The MSD learning curve for node $k = 1$.
V. CONCLUSION
In this paper, we considered the design of an incremental LMS adaptive network in which, by properly tuning the step size at each node, we obtain a desired MSD performance. Our proposed algorithm is based on measurement quality and assigns each node a step size according to its observation noise variance. The proposed algorithm uses only local data to assign the step size at each node; thus, no communication cost is added in comparison with the I-LMS algorithm. As our simulation results show, the proposed algorithm considerably improves the performance of the I-LMS algorithm under the same conditions.
REFERENCES
[1] D. Estrin, G. Pottie, and M. Srivastava, “Instrumenting the world with wireless sensor networks,” in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing (ICASSP), Salt Lake City, UT, May 2001, pp. 2033-2036.
[2] C. G. Lopes and A. H. Sayed, “Distributed processing over adaptive networks,” in Proc. Adaptive Sensor Array Processing Workshop, MIT Lincoln Lab., Lexington, MA, Jun. 2006.
[3] C. G. Lopes and A. H. Sayed, “Incremental adaptive strategies over distributed networks,” IEEE Trans. Signal Process., vol. 55, no. 8, pp. 4064-4077, Aug. 2007.
[4] A. H. Sayed and C. G. Lopes, “Distributed recursive least-squares strategies over adaptive networks,” in Proc. Asilomar Conf. Signals, Systems, Computers, Monterey, CA, pp. 233-237, Oct. 2006.
[5] L. Li, J. A. Chambers, C. G. Lopes, and A. H. Sayed, “Distributed estimation over an adaptive incremental network based on the affine projection algorithm,” IEEE Trans. Signal Process., vol. 58, no. 1, pp. 151-164, Jan. 2010.
[6] C. G. Lopes and A. H. Sayed, “Diffusion least-mean squares over adaptive networks: Formulation and performance analysis,” IEEE Trans. on Signal Processing, vol. 56, no. 7, pp.3122–3136, July 2008.
[7] F. S. Cattivelli, C. G. Lopes, and A. H. Sayed, “Diffusion recursive least-squares for distributed estimation over adaptive networks,” IEEE Trans. on Signal Process., vol. 56, no. 5, pp. 1865–1877, May 2008.
[8] F. S. Cattivelli and A. H. Sayed, “Multilevel diffusion adaptive networks,” in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing (ICASSP), Taipei, Taiwan, April 2009.
[9] A. H. Sayed, Fundamentals of Adaptive Filtering, John Wiley and Sons, Hoboken, NJ, USA, 2003.
[10] A. Rastegarnia, M. A. Tinati, and A. Khalili, “A distributed incremental LMS algorithm with reliability of observation consideration,” in Proc. IEEE Int. Conf. on Communication Systems (ICCS), Singapore, Nov. 2010, pp. 67-70.