
Slide 1

Pattern Recognition: Statistical and Neural

Lonnie C. Ludeman

Lecture 21

Oct 28, 2005

Nanjing University of Science & Technology

Slide 2

Lecture 21 Topics

1. Example – Analysis of a simple Neural Network

2. Example – Synthesis of special forms of Artificial Neural Networks

3. General concepts of training an Artificial Neural Network – supervised and unsupervised, training sets

4. Neural Network Nomenclature and Notation

5. Derivation and Description of the Backpropagation Algorithm for Feedforward Neural Networks

Slide 3

Example: Analyze the following Neural Network

[Figure: two-layer threshold-logic network to be analyzed, with weight and bias labels -1, 1, -1, 1 1 0, 0 0, 1.]

Slide 4

Solution: Outputs of the layer 1 ANEs (artificial neural elements)

Slide 5

Thus, combining the layer 1 outputs y_1 and y_2, the output of the layer 2 ANE is

$$
y = \mu\big(y_1 + y_2 - 2\big) =
\begin{cases}
1, & y_1 + y_2 - 2 \ge 0 \\
0, & y_1 + y_2 - 2 < 0
\end{cases}
$$

Slide 6

Slide 7

Final Solution: Output Function for Given Neural Network
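Since the diagram's labels did not survive extraction, here is a minimal sketch of the computation such an analysis performs, assuming unit-step nonlinearities μ(·) and hypothetical weight values:

```python
import numpy as np

def mu(v):
    """Unit-step nonlinearity: 1 if v >= 0, else 0."""
    return np.where(np.asarray(v) >= 0.0, 1.0, 0.0)

def forward(x, W1, b1, W2, b2):
    """Forward pass through a two-layer network of threshold ANEs."""
    y1 = mu(W1 @ x + b1)       # layer 1 outputs
    return mu(W2 @ y1 + b2)    # layer 2 output

# Hypothetical weights and biases; the original diagram's labels were lost.
W1 = np.array([[1.0, -1.0],
               [-1.0, 1.0]])
b1 = np.array([0.0, 0.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([-2.0])          # bias -2 makes layer 2 an AND of layer 1

print(forward(np.array([1.0, 0.5]), W1, b1, W2, b2))
```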

Slide 8

Example: Synthesize a Neural Network

Given the following decision regions, build a neural network to perform the classification process.

Solution: Use the Hyperplane-AND-OR structure.

Slide 9

Each g_k(x) specifies a hyperplane boundary.

Slide 10

Solution: Hyperplane layer → AND layer → OR layer, with all nonlinearities f(·) = μ(·), the unit step.
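A sketch of the Hyperplane-AND-OR construction; the hyperplanes g_k and region definitions below are hypothetical stand-ins for the decision regions of the lost figure:

```python
import numpy as np

def mu(v):
    """Unit-step nonlinearity."""
    return np.where(np.asarray(v) >= 0.0, 1.0, 0.0)

def and_unit(inputs):
    # Unit weights, bias -n: fires only when every input is 1.
    return mu(np.sum(inputs) - len(inputs))

def or_unit(inputs):
    # Unit weights, bias -1: fires when at least one input is 1.
    return mu(np.sum(inputs) - 1)

def classify(x, hyperplanes, regions):
    # Hyperplane layer: side of each boundary, h_k = mu(g_k(x)).
    h = [mu(np.dot(w, x) + b) for (w, b) in hyperplanes]
    # AND layer: one unit per convex region (negative side uses 1 - h_k).
    r = [and_unit([h[k] if s > 0 else 1 - h[k] for (k, s) in region])
         for region in regions]
    # OR layer: the class fires if any of its convex regions contains x.
    return or_unit(r)

hyperplanes = [(np.array([1.0, 0.0]), -1.0),   # g1(x) = x1 - 1
               (np.array([0.0, 1.0]), -1.0)]   # g2(x) = x2 - 1
regions = [[(0, +1), (1, -1)]]                 # x1 >= 1 AND x2 < 1
print(classify(np.array([2.0, 0.5]), hyperplanes, regions))  # -> 1.0
```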

Slide 11

Training a Neural Network

“With a teacher” (supervised) or “without a teacher” (unsupervised)

Slide 12

Slide 13

Training Set

x_j are the training samples

d_j is the class assigned to training sample x_j

Slide 14

Example of a training set:

{ ( x1 = [0, 1, 2]^T, d1 = C1 ),
( x2 = [0, 1, 0]^T, d2 = C1 ),
( x3 = [0, 1, 1]^T, d3 = C1 ),
( x4 = [1, 0, 2]^T, d4 = C2 ),
( x5 = [1, 0, 3]^T, d5 = C2 ),
( x6 = [0, 0, 1]^T, d6 = C3 ),
( x7 = [0, 0, 2]^T, d7 = C3 ),
( x8 = [0, 0, 3]^T, d8 = C3 ),
( x9 = [0, 0, 3]^T, d9 = C3 ),
( x10 = [1, 1, 0]^T, d10 = C4 ),
( x11 = [2, 2, 0]^T, d11 = C4 ),
( x12 = [2, 2, 2]^T, d12 = C5 ),
( x13 = [3, 2, 2]^T, d13 = C6 ) }
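In code, such a training set is just an ordered list of (sample, class label) pairs; a minimal sketch using the values above:

```python
import numpy as np

# The training set as an ordered list of (sample, class label) pairs.
training_set = [
    (np.array([0, 1, 2]), "C1"),
    (np.array([0, 1, 0]), "C1"),
    (np.array([0, 1, 1]), "C1"),
    (np.array([1, 0, 2]), "C2"),
    (np.array([1, 0, 3]), "C2"),
    (np.array([0, 0, 1]), "C3"),
    (np.array([0, 0, 2]), "C3"),
    (np.array([0, 0, 3]), "C3"),
    (np.array([0, 0, 3]), "C3"),
    (np.array([1, 1, 0]), "C4"),
    (np.array([2, 2, 0]), "C4"),
    (np.array([2, 2, 2]), "C5"),
    (np.array([3, 2, 2]), "C6"),
]

for x_j, d_j in training_set:
    print(x_j, "->", d_j)
```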

Slide 15

General Weight Update Algorithm

x(k) is the training sample for the k-th iteration

d(k) is the class assigned to training sample x(k)

y(k) is the output vector for the k-th training sample

Slide 16

Training with a Teacher (Supervised)

1. Given a set of N ordered samples with their known class assignments.

2. Randomly initialize all weights in the neural network.

3. For each successive sample in the total set of samples, evaluate the output.

4. Use these outputs and the input sample to update the weights.

5. Stop at some predetermined number of iterations or when a given performance measure is satisfied; otherwise go to step 3.
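A skeleton of this supervised loop; forward() and update_weights() are hypothetical helpers, and the desired output d is assumed to be a numeric target vector:

```python
import numpy as np

def train_supervised(samples, targets, weights, forward, update_weights,
                     max_iters=1000, target_error=1e-3):
    """Steps 2-5 above. forward(weights, x) -> output y and
    update_weights(weights, x, d, y) -> new weights are hypothetical helpers."""
    for _ in range(max_iters):                     # step 5: iteration cap
        total_error = 0.0
        for x, d in zip(samples, targets):         # step 3: each sample in order
            y = forward(weights, x)                #         evaluate the output
            total_error += 0.5 * np.sum((d - y) ** 2)
            weights = update_weights(weights, x, d, y)   # step 4
        if total_error < target_error:             # step 5: performance test
            break
    return weights
```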

Slide 17

Training without a Teacher (Unsupervised)

1. Given a set of N ordered samples with unknown class assignments.

2. Randomly initialize all weights in the neural network.

3. For each successive sample in the total set of samples, evaluate the outputs.

4. Using these outputs and the inputs, update the weights.

5. If the weights do not change significantly, stop with that result; otherwise return to step 3.
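The only structural change from the supervised loop is the stopping rule in step 5; a sketch of a weight-change test:

```python
import numpy as np

def weights_converged(old_weights, new_weights, tol=1e-6):
    """Step 5: stop when no weight moves by more than tol between passes."""
    return all(np.max(np.abs(w_new - w_old)) < tol
               for w_old, w_new in zip(old_weights, new_weights))
```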

Slide 18

Supervised Training of a Feedforward Neural Network

Nomenclature

Slide 19

y^(m) – output vector of layer m

y^(L) – output vector of the last layer L

Nodes are numbered 1, …, N_m within layer m and 1, …, N_L within layer L.

Slide 20

Weight Matrix for layer m

$$
W^{(m)} =
\begin{bmatrix}
w_{11}^{(m)} & w_{12}^{(m)} & \cdots & w_{1 N_{m-1}}^{(m)} \\
\vdots & & & \vdots \\
w_{N_m 1}^{(m)} & w_{N_m 2}^{(m)} & \cdots & w_{N_m N_{m-1}}^{(m)}
\end{bmatrix}
$$

with one row per node (Node 1, Node 2, …, Node N_m) of layer m.
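In code these weight matrices are arrays whose shapes link adjacent layer widths; a small sketch with assumed layer sizes:

```python
import numpy as np

layer_sizes = [3, 4, 2]        # assumed widths: N_0 inputs, N_1, N_2 = N_L

# One weight matrix per layer m = 1..L with shape (N_m, N_{m-1}).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

for m, W in enumerate(weights, start=1):
    print(f"W({m}) shape: {W.shape}")   # (4, 3) then (2, 4)
```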

Slide 21


Layers, Nets, Outputs, Nonlinearities

Slide 22

Define the performance E_p for sample x(p) as the single-sample squared error (Slide 35 gives the form E_p = ½ (d[x(p)] − f(w^T(p) x(p)))²).

We wish to select the weights so that E_p is minimized – use the gradient algorithm.

Slide 23

Gradient Algorithm for Updating the Weights

$$
w(p+1) = w(p) - \eta \, \nabla_{w} E_p \big|_{w = w(p)}
$$

evaluated at the training sample x(p).
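A toy illustration of one gradient step on a single-unit network, assuming a sigmoid nonlinearity f (the slide's own equations were lost):

```python
import numpy as np

def f(v):
    """Assumed sigmoid nonlinearity."""
    return 1.0 / (1.0 + np.exp(-v))

eta = 0.5                        # learning rate
w = np.array([0.1, -0.2])        # current weights w(p)
x = np.array([1.0, 2.0])         # training sample x(p)
d = 1.0                          # desired output d[x(p)]

y = f(w @ x)                         # network output
grad = -(d - y) * y * (1.0 - y) * x  # gradient of E_p = 1/2 (d - y)^2 w.r.t. w
w = w - eta * grad                   # one gradient step: w(p+1)
print(w)
```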

Slide 24

Derivation of the weight update equation for the Last Layer (Rule #1), Backpropagation Algorithm

The partial derivative of y_m^(L) with respect to w_kj^(L) is
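Presumably the slide states the standard chain-rule result for a differentiable nonlinearity f:

$$
\frac{\partial y_m^{(L)}}{\partial w_{kj}^{(L)}} =
\begin{cases}
f'\big(net_k^{(L)}\big)\, y_j^{(L-1)}, & m = k \\
0, & m \neq k
\end{cases}
$$

since w_kj^(L) enters only node k's activation net_k^(L) = Σ_j w_kj^(L) y_j^(L-1).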

Slide 25

General Rule #1 for Weight Update

Therefore
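The standard last-layer update this rule names is

$$
\Delta w_{kj}^{(L)} = \eta\, \delta_k^{(L)}\, y_j^{(L-1)},
\qquad
\delta_k^{(L)} = \big(d_k - y_k^{(L)}\big)\, f'\big(net_k^{(L)}\big)
$$

with η the learning-rate (step-size) parameter.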

Slide 26

Derivation of the weight update equation for the Next-to-Last Layer (L-1), Backpropagation Algorithm

Slide 27

Slide 28

General Rule #2 for Weight Update – Layer L-1, Backpropagation Algorithm

Therefore

and the weight correction is as follows

Slide 29

where the weight correction Δw^(L-1) (general Rule #2) is
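The standard Rule #2 correction for layer L-1 is

$$
\Delta w_{ji}^{(L-1)} = \eta\, \delta_j^{(L-1)}\, y_i^{(L-2)},
\qquad
\delta_j^{(L-1)} = f'\big(net_j^{(L-1)}\big) \sum_{k=1}^{N_L} \delta_k^{(L)}\, w_{kj}^{(L)}
$$

so each hidden node's delta is its own f' times the last-layer deltas backpropagated through the weights leaving that node.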

Slide 30

Backpropagation Training Algorithm for Feedforward Neural Networks

Slide 31

Input pattern sample x_k

Slide 32

Calculate Outputs First Layer

Slide 33

Calculate Outputs Second Layer

Slide 34

Calculate Outputs Last Layer
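Slides 32-34 describe the forward sweep; a compact sketch that keeps every layer's output, since the later weight corrections need them:

```python
import numpy as np

def forward_pass(x, weights, biases, f):
    """Propagate x through every layer, keeping each layer's output vector
    y^(m) = f(net^(m)); the stored outputs are reused by the weight updates."""
    ys = [x]
    for W, b in zip(weights, biases):
        ys.append(f(W @ ys[-1] + b))
    return ys          # [y^(0) = x, y^(1), ..., y^(L)]
```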

Slide 35

Check Performance

Single-sample error:

$$
E_p = \tfrac{1}{2} \big( d[x(p)] - f\big(w^T(p)\, x(p)\big) \big)^2
$$

Error over all N_s samples:

$$
E_{TOTAL}(p) = \sum_{i=0}^{N_s - 1} \tfrac{1}{2} \big( d[x(p-i)] - f\big(w^T(p-i)\, x(p-i)\big) \big)^2
$$

which can be computed recursively:

$$
E_{TOTAL}(p+1) = E_{TOTAL}(p) + E_{p+1}(p+1) - E_{p-N_s}(p - N_s)
$$
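A sketch of the recursive bookkeeping, keeping the last N_s single-sample errors in a sliding window:

```python
from collections import deque

class RunningError:
    """E_TOTAL over the last Ns samples, updated recursively:
    add the newest single-sample error, drop the oldest one."""
    def __init__(self, ns):
        self.window = deque(maxlen=ns)
        self.total = 0.0

    def update(self, e_p):                    # e_p = 1/2 (d - y)^2, new sample
        if len(self.window) == self.window.maxlen:
            self.total -= self.window[0]      # subtract E_{p-Ns}(p-Ns)
        self.window.append(e_p)
        self.total += e_p                     # add E_{p+1}(p+1)
        return self.total
```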

Slide 36

Change Weights Last Layer using Rule #1

Slide 37

Change Weights previous Layer using Rule #2

Slide 38

Change Weights previous Layer using Modified Rule #2
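A sketch of one weight update pass applying Rule #1 at the last layer and Rule #2 at the layers before it, assuming sigmoid nonlinearities (the "modified" variant's equations did not survive extraction):

```python
import numpy as np

def backprop_step(ys, weights, d, eta):
    """One update pass; ys are the stored layer outputs, biases omitted.
    Assumes sigmoid f, so f'(net) = y (1 - y)."""
    y_L = ys[-1]
    delta = (d - y_L) * y_L * (1.0 - y_L)            # Rule #1: delta^(L)
    for m in range(len(weights) - 1, -1, -1):
        grad = np.outer(delta, ys[m])                # delta_k * y_j of layer below
        if m > 0:                                    # Rule #2: backpropagate delta
            delta = (weights[m].T @ delta) * ys[m] * (1.0 - ys[m])
        weights[m] += eta * grad                     # Delta w = eta * delta * y
    return weights
```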

Slide 39

Input pattern sample x_(k+1)

Continue iterations until:

Slide 40

Repeat the process until the performance measure is satisfied or the maximum number of iterations is reached.

If the performance is not satisfied at the maximum number of iterations, the algorithm stops and NO design is obtained.

If the performance is satisfied, the current weights and structure provide the required design.

Slide 41

Freeze the Weights to Get an Acceptable Neural Net Design

Slide 42

Backpropagation Algorithm for Training Feedforward Artificial Neural Networks

Slide 43

Summary Lecture 21

1. Example – Analysis of a simple Neural Network

2. Example – Synthesis of special forms of Artificial Neural Networks

3. General concepts of training an Artificial Neural Network – supervised and unsupervised, and description of training sets

4. Neural Network Nomenclature and Notation

5. Derivation and Description of the Backpropagation Algorithm for Feedforward Neural Networks

Slide 44

End of Lecture 21