
Adaptive equalization

ADAPTIVE CHANNEL EQUALIZATION
Kamal Bhatt
M.Tech-Electronics & Communication Engg., ID-44036
College of Technology, Pantnagar
G.B.Pant University of Agriculture and Technology, Pantnagar
Page 1: Adaptive equalization




Page 2: Adaptive equalization


Neural networks are simplified models of biological neuron systems.

Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes', each of which contains an 'activation function'.

Patterns are presented to the network via the 'input layer', which communicates with one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'.

The hidden layers then link to an 'output layer', where the answer is produced.

Page 3: Adaptive equalization

MODEL OF ARTIFICIAL NEURON

An appropriate model/simulation of the nervous system should be able to produce similar responses and behaviours in artificial systems.

The nervous system is built from relatively simple units, the neurons, so copying their behaviour and functionality should be the solution.

Page 4: Adaptive equalization


Perceptron Learning Algorithm:

1. Initialize weights
2. Present a pattern and target output
3. Compute output: y = f[ Σ_i w_i x_i ], where f[u] = 1 if u ≥ 0 and 0 otherwise
4. Update weights: w_i(t+1) = w_i(t) + Δw_i(t), with Δw_i(t) = η (d − y) x_i
5. Repeat from step 2 until the error reaches an acceptable level
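A minimal Python sketch of these steps, assuming a step activation with threshold at zero, an explicit bias term, a learning rate eta and an AND-gate training set (none of which are specified in the slides):

```python
def step(u):
    """Threshold activation: f[u] = 1 if u >= 0, else 0."""
    return 1 if u >= 0 else 0

def train_perceptron(patterns, targets, eta=0.1, epochs=50):
    n = len(patterns[0])
    w = [0.0] * n                                  # 1. initialize weights
    b = 0.0                                        # bias (threshold) term
    for _ in range(epochs):
        errors = 0
        for x, d in zip(patterns, targets):        # 2. present pattern and target
            y = step(sum(wi * xi for wi, xi in zip(w, x)) + b)  # 3. compute output
            if y != d:                             # 4. update weights on error
                w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]
                b += eta * (d - y)
                errors += 1
        if errors == 0:                            # 5. repeat until no errors remain
            break
    return w, b

patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                             # logical AND, linearly separable
w, b = train_perceptron(patterns, targets)
outputs = [step(sum(wi * xi for wi, xi in zip(w, x)) + b) for x in patterns]
```

Because the AND patterns are linearly separable, the perceptron convergence theorem guarantees the loop terminates with all patterns classified correctly.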




Page 5: Adaptive equalization


An artificial Neural Network is defined as a data processing system consisting of a large number of interconnected processing elements or artificial neurons.

There are three fundamentally different classes of neural networks:

Single layer feedforward Networks.

Multilayer feedforward Networks.

Recurrent Networks.

Page 6: Adaptive equalization

The tasks to which artificial neural networks are applied tend to fall within the following broad categories:

•Function approximation, or regression analysis, including time series prediction and modeling.

•Classification, including pattern and sequence recognition, novelty detection and sequential decision making.

•Data processing, including filtering, clustering, blind signal separation and compression.


Page 7: Adaptive equalization

Equalization History

The LMS algorithm by Widrow and Hoff in 1960 paved the way for the development of adaptive filters used for equalisation.

Lucky used this algorithm in 1965 to design adaptive channel equalisers. The maximum likelihood sequence estimator (MLSE) equaliser and its Viterbi implementation followed in the 1970s.

Multilayer perceptron (MLP) based symbol-by-symbol equalisers were developed in 1990.

Page 8: Adaptive equalization

During 1989 to 1995, several efficient nonlinear artificial neural network equalizer structures for channel equalization were proposed, including the Chebyshev neural network and the functional link ANN (FLANN).

In 2002 Kevin M. Passino described optimization foraging theory in the article "Biomimicry of Bacterial Foraging".

More recently, in 2008, a rank-based statistics approach known as the Wilcoxon learning method was proposed for signal processing applications, to mitigate linear and nonlinear learning problems.

Page 9: Adaptive equalization

Digital Communication Systems

Page 10: Adaptive equalization

Adaptive channel equalizers have played an important role in digital communication systems.

An equalizer works like an inverse filter placed at the front end of the receiver. Its transfer function is the inverse of the transfer function of the associated channel, so it is able to reduce the error between the desired and estimated signals.

This is achieved through a process of training. During this period the transmitter transmits a fixed data sequence and the receiver holds a copy of the same.


Page 11: Adaptive equalization

We use equalizers to compensate received signals that are corrupted by the noise, interference and signal power attenuation introduced by communication channels during transmission.

Linear transversal filters (LTF) are commonly used in the design of channel equalizers. Linear equalizers fail to work well when the transmitted signals have encountered severe nonlinear distortion.

A neural network (NN) is capable of forming complex nonlinear mappings from input to output signals, which makes NN-based equalizers a potentially suitable solution for nonlinear channel distortion.

Page 12: Adaptive equalization
Page 13: Adaptive equalization

The problem of equalization may be treated as a problem of signal classification, so neural networks (NN) are quite promising candidates because they can produce arbitrarily complex decision regions.

Studies performed during the last decade have established the superiority of neural equalizers over traditional equalizers under high nonlinear distortion and rapidly varying signals.

Several different neural equalizer architectures have been developed, mostly combinations of a conventional linear transversal equalizer (LTE) and a neural network.

The LTE eliminates the linear distortions, such as ISI, so the NN can focus on compensating the nonlinearities. Studies exist on the following structures: an LTE and a multilayer perceptron (MLP); an LTE and a radial basis function network (RBF); an LTE and a recurrent neural network.

Page 14: Adaptive equalization

MLP networks are sometimes plagued by long training times and may be trapped in bad local minima.

RBF networks often provide a faster and more robust solution to the equalization problem. In addition, the RBF neural network has a structure similar to the optimal Bayesian symbol decision; therefore, the RBF is an ideal processing structure for implementing the optimal Bayesian equalizer.

RBF performance is better than that of the LTE and MLP equalizers. Several learning algorithms have been proposed to update the RBF parameters. The most popular consists of an unsupervised learning rule for the centers of the hidden neurons and a supervised learning rule for the weights of the output neurons.

Page 15: Adaptive equalization

The centers are generally updated using the k-means clustering algorithm, which consists of computing the squared distance between the input vector and the centers, choosing the minimum squared distance, and moving the corresponding center closer to the input vector.

The k-means algorithm has some potential problems: classification depends on the initial values of the centers, on the type of distance chosen, and on the number of classes. If a center is inappropriately chosen it may never be updated, so it may never represent a class.

Here a new competitive method to update the RBF centers is proposed, which rewards the winning neuron and penalizes the second winner, named the rival.
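One competitive update step of this kind can be sketched as follows; the winning center moves toward the input while the rival is pushed slightly away. The learning rates alpha_w and alpha_r and the 1-D example values are illustrative assumptions, not values from the slides:

```python
def euclid2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def update_centers(centers, x, alpha_w=0.05, alpha_r=0.01):
    """One rival-penalized competitive update of RBF centers for input x."""
    order = sorted(range(len(centers)), key=lambda i: euclid2(centers[i], x))
    win, rival = order[0], order[1]
    # reward the winner: move it toward the input vector
    centers[win] = [c + alpha_w * (xi - c) for c, xi in zip(centers[win], x)]
    # penalize the rival: move it away from the input vector
    centers[rival] = [c - alpha_r * (xi - c) for c, xi in zip(centers[rival], x)]
    return centers

centers = [[0.0], [1.0], [4.0]]
centers = update_centers(centers, [0.2])
# the center at 0.0 (winner) moves toward 0.2; the center at 1.0 (rival) moves away
```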

Page 16: Adaptive equalization

Gradient Based Adaptive Algorithm

An adaptive algorithm is a procedure for adjusting the parameters of an adaptive filter to minimize a cost function chosen for the task at hand.

Page 17: Adaptive equalization

The general form of an adaptive FIR filtering algorithm is

W(t+1) = W(t) + μ(t) G(e(t), s(t), Φ(t))

where G(·) is a particular vector-valued nonlinear function (which depends on the cost function chosen), μ(t) is a step size parameter, e(t) and s(t) are the error signal and input signal vector, respectively, and Φ(t) is a vector of states that stores pertinent information about the characteristics of the input and error signals.

In this case, the parameters in W(t) correspond to the impulse response values of the filter at time t. We can write the output signal y(t) as

y(t) = W^T(t) s(t)

Page 18: Adaptive equalization

The mean-squared error (MSE) cost function can be defined as

J(t) = E[ e²(t) ] = E[ (d(t) − y(t))² ]

W_MSE(t) can be found from the solution to the system of equations

∂J(t)/∂w_k(t) = 0,  k = 0, 1, …, N−1

The method of steepest descent is an optimization procedure for minimizing the cost function J(t) with respect to a set of adjustable parameters W(t). This procedure adjusts each parameter of the system according to the relationship

w_k(t+1) = w_k(t) − (μ/2) ∂J(t)/∂w_k(t)
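For a quadratic MSE surface the steepest-descent recursion reduces to W(t+1) = W(t) + μ(p − R·W(t)), where R is the input autocorrelation matrix and p the cross-correlation vector. A small sketch, with R, p and μ chosen purely for illustration:

```python
R = [[1.0, 0.5], [0.5, 1.0]]   # assumed input autocorrelation matrix
p = [0.7, 0.3]                 # assumed cross-correlation with the desired signal
mu = 0.3                       # step size, must satisfy mu < 2 / lambda_max

def steepest_descent_step(w):
    """One update W <- W + mu * (p - R W) on the quadratic MSE surface."""
    Rw = [sum(R[i][j] * w[j] for j in range(2)) for i in range(2)]
    return [w[i] + mu * (p[i] - Rw[i]) for i in range(2)]

w = [0.0, 0.0]
for _ in range(200):
    w = steepest_descent_step(w)
# w converges to the Wiener solution R^{-1} p = [11/15, -1/15]
```

Since the eigenvalues of R are 0.5 and 1.5, any μ below 2/1.5 guarantees convergence here.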

Page 19: Adaptive equalization

Linear Equalization Algorithms

Page 20: Adaptive equalization


Least Mean Square (LMS) Algorithm

• In the family of stochastic gradient algorithms

• An approximation of the steepest-descent method

• Based on the minimum mean-square error (MMSE) criterion

• Adaptive process containing two input signals:

• 1) Filtering process, producing the output signal

• 2) Desired signal (training sequence)

• Adaptive process: recursive adjustment of the filter tap weights

Page 21: Adaptive equalization


Filter output:

y(n) = Σ_k w_k*(n) u(n − k)

Estimation error:

e(n) = d(n) − y(n)

Tap-weight adaptation:

w_k(n + 1) = w_k(n) + μ u(n − k) e*(n)

(updated value of tap weight = old value of tap weight + correction term; the asterisk denotes complex conjugation, which drops out for real-valued signals)
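The three update equations can be sketched for real-valued signals as below. The ISI channel model (1 + 0.4 z⁻¹), filter length and step size μ are illustrative assumptions:

```python
import random

def lms_equalize(u, d, n_taps=4, mu=0.05):
    """LMS adaptation: y(n) = sum_k w_k(n) u(n-k); e(n) = d(n) - y(n);
    w_k(n+1) = w_k(n) + mu * u(n-k) * e(n)  (real-valued signals)."""
    w = [0.0] * n_taps
    errors = []
    for n in range(len(u)):
        x = [u[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))        # filter output
        e = d[n] - y                                     # estimation error
        w = [wk + mu * xk * e for wk, xk in zip(w, x)]   # tap-weight adaptation
        errors.append(e * e)
    return w, errors

random.seed(0)
s = [random.choice((-1.0, 1.0)) for _ in range(2000)]    # training symbols
# minimum-phase ISI channel: u(n) = s(n) + 0.4 s(n-1)
u = [s[n] + 0.4 * (s[n - 1] if n else 0.0) for n in range(len(s))]
w, errors = lms_equalize(u, s)
# after convergence the residual squared error becomes small
```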

Page 22: Adaptive equalization

Recursive Least Squares Algorithm

The recursive least squares (RLS) algorithm is another algorithm for determining the coefficients of an adaptive filter. In contrast to the LMS algorithm, the RLS algorithm uses information from all past input samples (and not only from the current tap-input samples) to estimate the (inverse of the) autocorrelation matrix of the input vector.

To decrease the influence of input samples from the far past, a weighting factor for the influence of each sample is used. The cost function can be represented as

J(n) = Σ_{i=1}^{n} λ^{n−i} |e(i)|²

where λ (0 < λ ≤ 1) is the exponential weighting (forgetting) factor.
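A compact RLS sketch for a 2-tap filter identifying a known channel; the channel taps, forgetting factor lam and initialization delta are illustrative assumptions:

```python
import random

def rls_identify(u, d, lam=0.99, delta=100.0):
    """RLS recursion: gain k = P x / (lam + x^T P x); w <- w + k e;
    P <- (P - k x^T P) / lam, with P an estimate of the inverse autocorrelation."""
    n = 2
    w = [0.0] * n
    P = [[delta if i == j else 0.0 for j in range(n)] for i in range(n)]
    for t in range(1, len(u)):
        x = [u[t], u[t - 1]]                              # current tap-input vector
        Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
        denom = lam + sum(x[i] * Px[i] for i in range(n))
        k = [Px[i] / denom for i in range(n)]             # gain vector
        e = d[t] - sum(w[i] * x[i] for i in range(n))     # a-priori error
        w = [w[i] + k[i] * e for i in range(n)]           # coefficient update
        # P is symmetric, so x^T P equals (P x)^T and Px can be reused
        P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return w

random.seed(1)
u = [random.choice((-1.0, 1.0)) for _ in range(500)]
d = [(u[t] + 0.4 * u[t - 1]) if t else u[0] for t in range(len(u))]
w = rls_identify(u, d)
# w converges rapidly to the channel taps [1.0, 0.4]
```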

Page 23: Adaptive equalization
Page 24: Adaptive equalization

Nonlinear Equalizers

Page 25: Adaptive equalization

Multilayer Perceptron Network

In 1958, Rosenblatt demonstrated some practical applications of the perceptron. The perceptron is a single-level connection of McCulloch-Pitts neurons, also called a single-layer feedforward network.

The network is capable of linearly separating input vectors into pattern classes by a hyperplane. Many perceptrons can be connected in layers to form an MLP network, in which the input signal propagates through the network in a forward direction, on a layer-by-layer basis. This network has been applied successfully to solve diverse problems.

Page 26: Adaptive equalization

MLP Neural Network Using BP Algorithm

Page 27: Adaptive equalization

Generally the MLP is trained using the popular error back-propagation (BP) algorithm. s_1, s_2, …, s_n represent the inputs to the network, and y_k represents the output of the final layer of the neural network. The connecting weights between the input and the first hidden layer, between the first and second hidden layers, and between the second hidden layer and the output layer are represented by the corresponding weight matrices.

The final output layer of the MLP may be expressed as a nonlinear function of the hidden-layer outputs and the output-layer weights.

Page 28: Adaptive equalization

The final output y_k(t) at the output of neuron k is compared with the desired output d(t), and the resulting error signal e(t) is obtained as

e(t) = d(t) − y_k(t)

The instantaneous value of the total error energy is obtained by summing the squared error signals over all neurons in the output layer, that is

ξ(t) = (1/2) Σ_k e_k²(t)

This error signal is used to update the weights and thresholds of the hidden layers as well as the output layer.
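One BP weight update can be sketched on a tiny 2-2-1 sigmoid MLP; the architecture, the initial weights and the learning rate eta are illustrative assumptions, not values from the slides:

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def forward(x, W1, W2):
    """Forward pass: hidden activations h and final output y."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)))
    return h, y

def bp_step(x, d, W1, W2, eta=0.5):
    """One back-propagation step minimizing E = 1/2 * e^2, e = d - y."""
    h, y = forward(x, W1, W2)
    e = d - y                              # error signal e(t) = d(t) - y(t)
    delta_out = e * y * (1.0 - y)          # local gradient at the output neuron
    W2_new = [w + eta * delta_out * hi for w, hi in zip(W2, h)]
    W1_new = []
    for j, row in enumerate(W1):           # propagate error to the hidden layer
        delta_h = delta_out * W2[j] * h[j] * (1.0 - h[j])
        W1_new.append([w + eta * delta_h * xi for w, xi in zip(row, x)])
    return W1_new, W2_new, e

x, d = [1.0, 0.0], 1.0
W1 = [[0.2, -0.1], [0.4, 0.3]]
W2 = [0.1, -0.2]
W1, W2, e0 = bp_step(x, d, W1, W2)
_, y1 = forward(x, W1, W2)
e1 = d - y1
# the error magnitude decreases after one gradient step
```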

Page 29: Adaptive equalization
Page 30: Adaptive equalization

Functional Link Artificial Neural Network

FLANN is a novel single-layer ANN in which the original input pattern is expanded to a higher-dimensional space using nonlinear functions, which provides arbitrarily complex decision regions by generating nonlinear decision boundaries.

The functional expansion block enhances the input pattern for the channel equalization process.

Each element undergoes nonlinear expansion to form M elements, such that the resultant matrix has the dimension N×M. The functional expansion of the element x_k by power series expansion is carried out as

x_k → [ x_k, x_k², …, x_k^M ]
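The power-series expansion of an N-element pattern into an N×M matrix can be sketched as below; the expansion order M = 3 is an assumed example value:

```python
def power_series_expand(x, M=3):
    """Expand each element x_k of an N-element pattern into M powers
    [x_k, x_k^2, ..., x_k^M], giving an N x M matrix."""
    return [[xk ** p for p in range(1, M + 1)] for xk in x]

expanded = power_series_expand([0.5, -1.0, 2.0])
# [[0.5, 0.25, 0.125], [-1.0, 1.0, -1.0], [2.0, 4.0, 8.0]]
```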

Page 31: Adaptive equalization
Page 32: Adaptive equalization

At the t-th iteration the error signal e(t) can be computed as

e(t) = d(t) − y(t)

The weight vector can then be updated by the least mean square (LMS) algorithm as

W(t+1) = W(t) + μ e(t) X(t)

where X(t) is the functionally expanded input vector and μ is the step size.

Page 33: Adaptive equalization

BER performance of the FLANN equalizer compared with LMS and RLS based equalizers

Page 34: Adaptive equalization

Chebyshev Artificial Neural Network

The Chebyshev artificial neural network (ChNN) is similar to the FLANN. The difference is that in a FLANN the input signal is expanded to a higher dimension using functional expansion, whereas in the ChNN the input is expanded using Chebyshev polynomials. As in the FLANN, the ChNN weights are updated by the LMS algorithm. The Chebyshev polynomials are generated using the recursive formula

T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x),  with T_0(x) = 1 and T_1(x) = x
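The recurrence can be sketched directly; the expansion order used below is an assumed example value:

```python
def chebyshev_expand(x, order=4):
    """Return [T1(x), T2(x), ..., T_order(x)] for a scalar input x,
    using T0(x) = 1, T1(x) = x, T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x)."""
    t_prev, t_curr = 1.0, x            # T0 and T1
    terms = [t_curr]
    for _ in range(order - 1):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
        terms.append(t_curr)
    return terms

terms = chebyshev_expand(0.5)
# T1(0.5) = 0.5, T2(0.5) = -0.5, T3(0.5) = -1.0, T4(0.5) = -0.5
```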

Page 35: Adaptive equalization
Page 36: Adaptive equalization

BER performance of the ChNN equalizer compared with FLANN, LMS and RLS based equalizers

Page 37: Adaptive equalization

Radial Basis Function Equalizer

Page 38: Adaptive equalization

The centres of the RBF network are updated using the k-means clustering algorithm. The RBF structure can be extended to multidimensional outputs as well. The Gaussian kernel is the most popular kernel function for equalization applications; it can be represented as

φ(‖x − C_i‖) = exp( −‖x − C_i‖² / (2σ_r²) )

This network can implement a mapping F_rbf : R^m → R by the function

F_rbf(x) = Σ_i ω_i φ(‖x − C_i‖)

Training of the RBF network involves setting the parameters for the centres C_i, the spread σ_r and the linear weights ω_i. The RBF spread parameter σ_r² is set to the channel noise variance σ_n².

This provides the optimum RBF network as an equaliser.
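The forward mapping can be sketched as below; the centres, weights and spread are illustrative assumptions (in practice the centres correspond to channel states and the spread to the noise variance):

```python
import math

def rbf_output(x, centres, weights, sigma2):
    """F_rbf(x) = sum_i w_i * exp(-||x - C_i||^2 / (2 * sigma2))."""
    out = 0.0
    for c, w in zip(centres, weights):
        d2 = sum((xj - cj) ** 2 for xj, cj in zip(x, c))  # squared distance
        out += w * math.exp(-d2 / (2.0 * sigma2))          # Gaussian kernel
    return out

centres = [[1.0, 0.4], [-1.0, -0.4]]   # assumed two-state channel centres
weights = [1.0, -1.0]                  # assumed linear output weights
y = rbf_output([1.0, 0.4], centres, weights, sigma2=0.2)
# y is close to +1 because the input coincides with the first centre
```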

Page 39: Adaptive equalization

BER performance of the RBF equalizer compared with ChNN, FLANN, LMS and RLS equalizers

Page 40: Adaptive equalization


We observed that RLS provides a faster convergence rate than the LMS equalizer.

The MLP equalizer is a feed-forward network trained using the BP algorithm. It performed better than the linear equalizers, but it has the drawback of a slow convergence rate, which depends on the number of nodes and layers.

An optimal equalizer based on the maximum a-posteriori probability (MAP) criterion can be implemented using a radial basis function (RBF) network. The RBF equalizer mitigates ISI, CCI and BN interference and provides the minimum BER plot. However, it has one drawback: as the input order increases, the number of centres of the network increases, making the network more complicated.

Page 41: Adaptive equalization


References

• Haykin, S., "Adaptive Filter Theory", Prentice Hall, 2005
• Haykin, S., "Neural Networks", PHI, 2003
• Kavita Burse, R. N. Yadav, and S. C. Shrivastava, "Channel Equalization Using Neural Networks: A Review", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 40, No. 3, May 2010
• Jagdish C. Patra, Ranendra N. Pal, Rameswar Baliarsingh, and Ganapati Panda, "Nonlinear Channel Equalization for QAM Constellation Using Artificial Neural Network", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 29, No. 2, April 1999
• Amalendu Patnaik, Dimitrios E. Anagnostou, Rabindra K. Mishra, Christos G. Christodoulou, and J. C. Lyke, "Applications of Neural Networks in Wireless Communications", IEEE Antennas and Propagation Magazine, Vol. 46, No. 3, June 2004
• R. Rojas, "Neural Networks", Springer-Verlag, Berlin, 1996
• http://www.geocities.com/SiliconValley/Lakes/6007/Neural.htm