Functional Brain Signal Processing: EEG & fMRI
Lesson 8
Kaushik Majumdar
Indian Statistical Institute Bangalore Center
M.Tech. (CS), Semester III, Course B50
Artificial Neural Network (ANN)
What does a single node in an ANN do?
[Figure: a single node receiving inputs x1, ..., x5 through weights w12, ..., w52 and producing the output y2.]

$$y_2 = \frac{1}{1 + \exp\left(-\left(\sum_{i=1}^{5} w_{i2}\, x_i + b_2\right)\right)}$$
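The node's computation is easy to state in code. Below is a minimal sketch in Python/NumPy (not from the slides; the input, weight, and bias values are made up for illustration):

```python
import numpy as np

def node_output(x, w, b):
    """Sigmoid output of a single node: y = 1 / (1 + exp(-(w.x + b)))."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# Hypothetical inputs x1..x5, weights w12..w52, and bias b2 for node 2.
x = np.array([0.5, -1.2, 0.3, 0.8, -0.4])
w = np.array([0.1, 0.4, -0.2, 0.7, 0.05])
b = 0.1

y2 = node_output(x, w, b)  # a value in (0, 1)
```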
More Nodes

[Figure: a three-layer network with input layer x1, ..., x6, hidden layer y1, ..., y4, and one output node. The output is 1 if the input lies inside the closed region and 0 if it lies outside.]

$$y_j = \frac{1}{1 + \exp\left(-\left(\sum_{i=1}^{6} w_{ij}\, x_i + b_j\right)\right)}, \qquad j = 1, \ldots, 4$$

where $\sum_{i=1}^{6} w_{ij}\, x_i + b_j$ is the net input to hidden node j.
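A sketch of the full forward pass through such a network follows, assuming random (hypothetical) weights; the output node thresholds its sigmoid value at 0.5 to decide inside vs. outside:

```python
import numpy as np

def layer(x, W, b):
    """Sigmoid activations of one layer: W is (n_out, n_in), b is (n_out,)."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

rng = np.random.default_rng(0)
x = rng.standard_normal(6)                                     # inputs x1..x6
W_h, b_h = rng.standard_normal((4, 6)), rng.standard_normal(4) # hidden layer
W_o, b_o = rng.standard_normal((1, 4)), rng.standard_normal(1) # output layer

y_hidden = layer(x, W_h, b_h)        # hidden activations y1..y4
output = layer(y_hidden, W_o, b_o)   # value in (0, 1)
inside = int(output[0] > 0.5)        # 1 if inside, 0 if outside the region
```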
Number of Hidden Layers
There must be two hidden layers to identify the following annulus.
A neural network is basically a function approximator: it can approximate continuous functions by piecewise linear functions (interpolation). Neural networks are therefore also known as universal approximators.
Separation or Classification
Separation or classification is nothing but approximation of the surface separating the (mixed) data. In other words, the network approximates a continuous function whose graph is the separating surface.
A classifier will have to approximate the function whose graph is this curve.
Classification by ANN
Most classification tasks are accomplished by separating the data with a boundary consisting of only a single curve. Therefore, for most classification tasks an ANN with a single hidden layer is sufficient.
However, the number of nodes in the hidden layer has to be determined by trial and error for optimal classification.
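As an illustration of this trial-and-error search (scikit-learn is an assumption here, not a tool named in the slides), one might compare cross-validated accuracy across several hidden-layer sizes:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix X (n_samples x n_features) and labels y.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)

# Try several hidden-layer sizes; keep the one with the best CV accuracy.
for n_hidden in (2, 4, 8, 16, 32):
    clf = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                        max_iter=2000, random_state=0)
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(n_hidden, round(score, 3))
```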
Universal Approximation
For any continuous mapping $f : [0,1]^n \to \mathbb{R}^m$ there must exist a three-layer neural network (having an input or 'fanout' layer with n processing elements, a hidden layer with 2n + 1 processing elements, and an output layer with m processing elements) that implements $f$ exactly (Hecht-Nielsen, 1988).
Backpropagation Neural Network
By far the most widely used type of neural network. It is a simple yet powerful network, even for
complex models having hundreds of thousands of parameters.
Its conceptual simplicity and high success rate make it a mainstay of adaptive pattern recognition.
It offers a means to calculate the input-to-hidden-layer weights.
Duda et al., Chapter 6, p. 283 & 289
Regularization
Regularization is a deep issue concerning the complexity of the network. The numbers of input and output nodes are fixed, but the number of hidden nodes and connection weights are not: these are free parameters. If there are too few of them, the training set cannot be adequately learned. If there are too many of them, generalization of the network will be poor
Regularization (cont.)
(apart from the enhanced computational complexity). That is, its performance on the test data set will fall (while on the training data set its performance may remain very high).
[Figure: training seizure pattern and testing seizure pattern.]
Backpropagation Architecture
Hecht-Nielsen, 1988
[Figure: general three-layer backpropagation architecture, with input nodes x1, ..., x4 and output nodes y1, y2.]
Backpropagation Architecture (cont.)
Hecht-Nielsen, 1988
Backpropagation Algorithm
$$J(\mathbf{w}) = \frac{1}{2} \sum_{k=1}^{c} (t_k - z_k)^2 = \frac{1}{2} \lVert \mathbf{t} - \mathbf{z} \rVert^2$$

has to be minimized, where t and z are the target and network output vectors respectively, and c is the number of output nodes.

$$\Delta \mathbf{w} = -\eta \frac{\partial J}{\partial \mathbf{w}},$$

where η is the learning rate.

$$\mathbf{w}(m+1) = \mathbf{w}(m) + \Delta \mathbf{w}(m),$$

where m stands for the m'th iteration.
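The update rule can be sketched in Python/NumPy for a small sigmoid network; the network size, data, and learning rate below are hypothetical, not taken from the slides:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Hypothetical tiny network: 6 inputs -> 4 hidden -> 2 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 6)) * 0.5, np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)) * 0.5, np.zeros(2)
eta = 0.1                          # learning rate

x = rng.standard_normal(6)         # one training input
t = np.array([1.0, 0.0])           # target vector t

for m in range(1000):              # m'th iteration
    # Forward pass.
    y = sigmoid(W1 @ x + b1)       # hidden activations
    z = sigmoid(W2 @ y + b2)       # network output vector z
    # Backward pass: gradients of J(w) = 0.5 * ||t - z||^2.
    delta2 = (z - t) * z * (1 - z)            # output-layer error
    delta1 = (W2.T @ delta2) * y * (1 - y)    # hidden-layer error
    # Gradient-descent update: w(m+1) = w(m) - eta * dJ/dw.
    W2 -= eta * np.outer(delta2, y); b2 -= eta * delta2
    W1 -= eta * np.outer(delta1, x); b1 -= eta * delta1
```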
Epileptic EEG Signal
Subasi and Ercelebi, Comp. Meth. Progr. Biomed., 78: 87–99, 2005
DB4 Wavelet
DB wavelets do not have a closed-form representation (they cannot be expressed by an elegant mathematical formula, unlike the Morlet wavelet).
http://en.wikipedia.org/wiki/Daubechies_wavelet
DB4 Wavelet Generation: Cascade Algorithm
$$\phi_{k+1}(t) = \sum_{n=0}^{N-1} h(n)\, \sqrt{2}\, \phi_k(2t - n)$$

$$\psi(t) = \sum_{n} g(n)\, \sqrt{2}\, \phi_k(2t - n)$$

g(n) and h(n) are impulse response functions; ψ(t) is the wavelet. DB4 contains only 4 taps or coefficients:

$$h(0) = \frac{1 + \sqrt{3}}{4\sqrt{2}}, \quad h(1) = \frac{3 + \sqrt{3}}{4\sqrt{2}}, \quad h(2) = \frac{3 - \sqrt{3}}{4\sqrt{2}}, \quad h(3) = \frac{1 - \sqrt{3}}{4\sqrt{2}}$$

$$g(0) = h(3), \quad g(1) = -h(2), \quad g(2) = h(1), \quad g(3) = -h(0)$$
http://www.bearcave.com/misl/misl_tech/wavelets/daubechies/index.html
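A short Python sketch (an illustration, not code from the referenced pages) that builds the four D4 taps and approximates the scaling function by iterating the cascade relation on a dyadic grid:

```python
import numpy as np

# D4 scaling (low-pass) coefficients h(n) and wavelet (high-pass) g(n).
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
g = np.array([h[3], -h[2], h[1], -h[0]])

def cascade(h, iterations=8):
    """Iterate phi_{k+1}(t) = sum_n h(n) sqrt(2) phi_k(2t - n),
    starting from a delta, refining the dyadic grid each step."""
    phi = np.ones(1)
    for _ in range(iterations):
        up = np.zeros(2 * len(phi))              # samples of phi_k(2t)
        up[::2] = phi
        phi = np.sqrt(2.0) * np.convolve(h, up)  # combine shifted copies
    return phi

phi = cascade(h)   # approximates the DB4 scaling function
```

Replacing h by g in the final iteration would produce the wavelet ψ(t) instead of the scaling function.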
EEG Data
Electrode placement was according to the 10–20 system.
Four signals were selected: F7–C3, F8–C4, T5–O1 and T6–O2.
Sampling frequency was 200 Hz. Signals were band-pass filtered in the 1–70 Hz range upon acquisition.
EEG was segmented into 1000-time-point windows (5 s).
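A minimal sketch of the segmentation step, assuming a hypothetical 4-channel recording array:

```python
import numpy as np

fs = 200    # sampling frequency (Hz)
win = 1000  # window length: 1000 samples = 5 s at 200 Hz

# Hypothetical multichannel recording: 4 channels x n samples.
eeg = np.random.default_rng(0).standard_normal((4, 12000))

n_seg = eeg.shape[1] // win
segments = eeg[:, :n_seg * win].reshape(4, n_seg, win)  # (channel, segment, time)
```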
Feature Extraction by DB4 Wavelets
EEG signals are decomposed by high-pass FIR filtering (producing the 'detail' signal) and low-pass FIR filtering (producing the 'approximation' signal).
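In practice this decomposition can be done with a DWT library; the sketch below uses PyWavelets (an assumption, the paper does not prescribe a toolbox) to obtain the approximation and detail coefficients and a couple of simple subband features:

```python
import numpy as np
import pywt

# One hypothetical 5 s EEG segment (1000 samples at 200 Hz).
segment = np.random.default_rng(0).standard_normal(1000)

# 4-level DB4 decomposition: cA4 is the approximation,
# cD4..cD1 are the detail coefficients at successive scales.
cA4, cD4, cD3, cD2, cD1 = pywt.wavedec(segment, 'db4', level=4)

# Simple per-subband features, e.g. mean absolute value and standard deviation.
features = np.concatenate([[np.mean(np.abs(c)), np.std(c)]
                           for c in (cA4, cD4, cD3, cD2, cD1)])
```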
Assignment
Preprocess depth EEG signals (to be given) by wavelet transforms (the DB4 wavelet is seen to be more efficient than other wavelets; see Subasi & Ercelebi, 2005 and Vardhan & Majumdar, 2011). This will extract features from the signals.
Use a three-layer (that is, with only one hidden layer) perceptron neural network to
Assignment (cont.)
classify the features, so as to separate out the seizure portion from the non-seizure portion in the signals.
References
A. Subasi and E. Ercelebi, Classification of EEG signals using neural networks and logistic regression, Comp. Meth. Progr. Biomed., 78: 87–99, 2005.
I. Kaplan, Daubechies D4 wavelet transform, http://www.bearcave.com/misl/misl_tech/wavelets/daubechies/index.html
References (cont.)
R. Hecht-Nielsen, Theory of the backpropagation neural network, INNS 1988, pp. I-593–I-605. Freely available at http://s112088960.onlinehome.us/annProjects/Research%20Paper%20Library/backPropTheory.pdf
I. Daubechies, Ten lectures on wavelets, SIAM, 1992, pp. 115, 132, 194, 242.
THANK YOU
This lecture is available at http://www.isibang.ac.in/~kaushik