Other Network Models


Deterministic weight updates

• Until now, weight updates have been deterministic.

• The state of the network is its current weight values and unit activations.

• A probability distribution can instead be used to determine whether or not a unit should change to the newly calculated state.

• So for example, in the discrete Hopfield network, even if a unit is selected for update, it might not be updated.


Simulated Annealing

Figure. Finding a global minimum using simulated annealing; the plot shows the points tried at medium temperature and the points tried at low temperature.


S.A.

• A deterministic algorithm like backpropagation, which uses gradient descent, often gets caught in local minima.

• Once caught, the network can no longer move along the error surface to a more optimal solution.

• Metropolis algorithm: select at random a part of the system to change. The change is always accepted if the global system energy falls, but if there is an increase in energy then the change is accepted only with probability p.


When the energy increases, the change is accepted with probability

$p = \exp\!\left(-\frac{\Delta E}{T}\right)$

where $\Delta E$ is the change in energy and $T$ is the temperature. For example, with $\Delta E = 0.5$ and $T = 1$, $p = \exp(-0.5) \approx 0.61$.


Example algorithm for function minimization (Geman and Hwang, 1986)

1. Select at random an initial vector x and an initial value of T.

2. Create a copy of x called xnew and randomly select a component of xnew to change. Flip the bit of the selected component.

3. Calculate the change in energy.

4. If the change in energy is less than 0, then x = xnew. Otherwise select a random number between 0 and 1 from a uniform distribution; if the random number is less than $p = \exp(-\Delta E / T)$, then x = xnew.


Continued

5. If there have been a specified number (M) of changes in x for which the value of f has dropped, or there have been N changes in x since the last change in temperature, then set T = αT.

6. If the minimum value of f has not decreased more than some specified constant in the last L iterations then stop, otherwise go back and repeat from step 2.
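To make the steps concrete, here is a minimal MATLAB sketch of the loop above. The toy energy function f, the vector length, and the schedule constants (initial T, α, N, and the iteration budget) are illustrative assumptions, and the stopping test of step 6 is simplified to a fixed number of iterations.

% Minimal simulated-annealing sketch (illustrative values throughout).
f = @(x) sum((x - [1 0 1 1 0 1 0 0]).^2);  % toy energy, minimized by the target bit pattern
n = 8;                                     % number of components in x
x = double(rand(1, n) > 0.5);              % step 1: random initial vector
T = 1.0; alpha = 0.9; N = 20;              % initial temperature and schedule constants
for iter = 1:2000
    xnew = x;
    k = randi(n);
    xnew(k) = 1 - xnew(k);                 % step 2: flip one randomly chosen bit
    dE = f(xnew) - f(x);                   % step 3: change in energy
    if dE < 0 || rand < exp(-dE / T)       % step 4: accept downhill, or uphill with probability p
        x = xnew;
    end
    if mod(iter, N) == 0                   % step 5 (simplified): cool after every N iterations
        T = alpha * T;
    end
end
disp(x)                                    % step 6 is replaced by the fixed iteration budget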


Boltzmann machine

• A Boltzmann machine is a neural network that uses the idea of simulated annealing to update the network's state.

• It is a Hopfield network that uses a stochastic process for updating the state of a unit.

• Assume +1 and -1 activation values. The network energy E and the energy gap $\Delta E_i$ for unit i are

$E = -\frac{1}{2} \sum_i \sum_j s_i s_j w_{ij}$

$\Delta E_i = s_i \sum_j s_j w_{ij}$


Probability function for state change

A unit selected for update changes state with probability

$p = \frac{1}{1 + \exp(-\Delta E_i / T)}$
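As a sketch, one stochastic update that takes the two preceding formulas literally might look as follows in MATLAB; the weights, states, and temperature are illustrative values, and sign conventions for the energy gap vary between texts.

W = [0 1 -1; 1 0 1; -1 1 0];   % symmetric weights with zero diagonal (illustrative)
s = [1; -1; 1];                % current +1/-1 unit states
T = 0.5;                       % current temperature
i = randi(length(s));          % select a unit at random
dE = s(i) * (W(i, :) * s);     % energy gap for unit i, as defined above
p = 1 / (1 + exp(-dE / T));    % probability of a state change
if rand < p
    s(i) = -s(i);              % the selected unit flips state
end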


Weight update

$\Delta w_{ij} = \eta \left( \rho^{+}_{ij} - \rho^{-}_{ij} \right)$

where $\eta$ is the learning rate, $\rho^{+}_{ij}$ is the correlation between units i and j during the clamped phase, and $\rho^{-}_{ij}$ is the correlation between the units during the free-running phase.
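In code the rule is a single line; the learning rate and the two correlation matrices below are illustrative placeholders (in practice both correlations are averaged over many annealed runs).

eta = 0.1;                            % learning rate (assumed value)
rho_plus  = [0 0.8; 0.8 0];           % correlations measured in the clamped phase
rho_minus = [0 0.3; 0.3 0];           % correlations measured in the free-running phase
W = zeros(2, 2);                      % current weights
W = W + eta * (rho_plus - rho_minus); % update every weight w_ij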


An example Boltzmann machine (can be used for autoassociation)

Figure. A Boltzmann machine with an input layer and an output layer.


Probabilistic Neural Networks

• In a PNN, a pattern is classified based on its proximity to neighbouring patterns.

• The manner in which neighbouring patterns are distributed is important.

• A simple way to decide the class of a new sample is to calculate the centroid of each class and assign the sample to the class with the nearest centroid.

• The PNN is based on Bayes' technique of classification: make a decision as to the most likely class that a sample is drawn from. The decision requires an estimate of the probability density function for each class.

• The estimate is constructed from the training data.


Class estimation methods


Gaussian dist.

Figure. A Gaussian function for two variables.


PDF (Probability density function)

Figure. The estimated PDF is the sum of the individual Gaussians centered at each sample point; here σ = 0.1.


Figure. The same estimate as in the previous figure but with σ = 0.3. If the width is too large, there is a danger that classes will become blurred (a high chance of misclassification).


Figure. The same estimate as in the previous figure but with σ = 0.05. If the width becomes too small, there is a danger of poor generalization: the fit around the training samples becomes too close.


PNN

• The class with a highly dense population in the region of an unknown sample will be preferred over other classes.

• The probability density function (PDF) needs to be estimated.

• The estimate can be found using Parzen's PDF estimator, which uses a weight function centered at each training point. The weight function is called a potential function or kernel.

• A commonly used function is a Gaussian function.


PNN

• The Gaussian functions are then summed to give the PDF.

• The form of the Gaussian function is as follows:

$g(x) = \sum_{i=1}^{n} \exp\!\left(-\frac{(x - x_i)^2}{\sigma^2}\right)$

(The square in the exponent will be cancelled by the square root in the normalization formula.)
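As an illustration, the estimate can be computed directly in MATLAB; the sample points below are the class-A points from the worked example that follows, and σ = 0.1 as in the earlier figure.

xi = [-0.2 -0.5 -0.6 -0.7 -0.8 0.1];        % training points of one class
sigma = 0.1;
g = @(x) sum(exp(-(x - xi).^2 / sigma^2));  % sum of Gaussian kernels centered at each point
x = linspace(-1, 1, 201);                   % grid over the input range
pdf = arrayfun(g, x);                       % estimated PDF over the grid
plot(x, pdf)                                % a curve like the sigma = 0.1 figure above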


Example

• There are two classes of single-variable data in the following figure. A sample positioned at 0.2 is from an unknown class. Using a PDF estimate with a Gaussian kernel, determine which class the sample is from.


Figure. The unknown sample (at 0.2) to be classified using a PDF.


SOLUTION

• The value of σ is 0.1. The results of the density estimation are shown in the table on the following slide.

• Although the unknown sample is closest to a point in class A, the calculation favors class B. The reason B is preferred is the high density of points around 0.35.


The calculation of the density estimation

Class   Training point   Squared distance from unknown   PDF term
A       -0.2             (-0.2 - 0.2)^2 = 0.16           exp(-0.16 / 0.01) ≈ 0
A       -0.5             0.49                            0
A       -0.6             0.64                            0
A       -0.7             0.81                            0
A       -0.8             1.00                            0
A        0.1             0.01                            0.3679
                                           Sum for A:    0.3679
B        0.35            0.0225                          exp(-0.0225 / 0.01) = 0.1054
B        0.36            0.0256                          0.0773
B        0.38            0.0324                          0.0392
B        0.365           0.0272                          0.0657
B        0.355           0.0240                          0.0905
B        0.4             0.0400                          0.0183
B        0.5             0.0900                          0.0001
B        0.6             0.1600                          0
B        0.7             0.2500                          0
                                           Sum for B:    0.3965
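The two class scores in the table can be checked with a few lines of MATLAB (σ = 0.1, so σ² = 0.01):

xA = [-0.2 -0.5 -0.6 -0.7 -0.8 0.1];                % class A training points
xB = [0.35 0.36 0.38 0.365 0.355 0.4 0.5 0.6 0.7];  % class B training points
x  = 0.2;                                           % the unknown sample
fA = sum(exp(-(x - xA).^2 / 0.01))                  % gives 0.3679
fB = sum(exp(-(x - xB).^2 / 0.01))                  % gives 0.3965, so class B is chosen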


The neural network architecture for a PNN

• The input and pattern layers are fully connected.

• The weights feeding into a pattern unit are set to the elements of the corresponding pattern vector.

• The activation of pattern unit j is

$z_j = \exp\!\left(-\frac{\sum_i (w_{ij} - x_i)^2}{2\sigma^2}\right)$

where x is an unknown input pattern.


PNN

If the input vectors are all of unit length, then the following form of the activation function can be used:

$z_j = \exp\!\left(\frac{\sum_i x_i w_{ij} - 1}{\sigma^2}\right)$

Number of input units = number of features

Number of pattern units = number of training samples

Number of summation units = number of classes

The weights from the pattern units to the summation units are fixed at 1, as in the sketch below.
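A sketch of the whole forward pass, assuming unit-length inputs; the function name pnn_classify and the labels argument are hypothetical, not part of the original slides.

function c = pnn_classify(x, W, labels, sigma)
% PNN forward pass: W holds one unit-length training vector per row,
% labels(k) gives the class index of row k, x is a unit-length column vector.
    z = exp((W * x - 1) / sigma^2);   % pattern-layer activations
    f = zeros(max(labels), 1);
    for k = 1:max(labels)
        f(k) = sum(z(labels == k));   % summation units: weights fixed at 1
    end
    [~, c] = max(f);                  % output: the class with the largest sum
end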


An Example PNN Architecture

Figure. An example PNN architecture: input layer, pattern layer, summation layer, and output layer; the summation units compute $f_A(x)$ and $f_B(x)$.


Example

• The following figure shows a set of training points from three classes and an unknown sample. Normalize the inputs to unit length and, using a PNN, find the class to which the unknown sample is assigned.


Figure. The unknown sample to be classified using a PNN; the training points fall into classes A, B, and C.


Solution

Figure. The vectors from the previous figure, normalized to unit length.


Training data normalized to unit length

Class   Unnormalized (x1, x2)   Normalized (x1, x2)
A       (3, 5)                  (0.5145, 0.8575)
A       (4, 4)                  (0.7071, 0.7071)
A       (3, 4)                  (0.6000, 0.8000)
A       (5, 6)                  (0.6402, 0.7682)
A       (4, 6)                  (0.5547, 0.8321)
A       (4, 5)                  (0.6247, 0.7809)
B       (7, 2)                  (0.9615, 0.2747)
B       (7, 3)                  (0.9191, 0.3939)
B       (8, 2)                  (0.9701, 0.2425)
B       (8, 3)                  (0.9363, 0.3511)
B       (9, 4)                  (0.9138, 0.4061)
C       (1, -1)                 (0.7071, -0.7071)
C       (1, -2)                 (0.4472, -0.8944)
C       (2, -2)                 (0.7071, -0.7071)
C       (3, -2)                 (0.8321, -0.5547)
C       (3, -3)                 (0.7071, -0.7071)


Unknown sample

Unnormalized (x1, x2)   Normalized (x1, x2)
(5.8, 4.4)              (0.7967, 0.6044)


Computation of the PNN for classifying the unknown sample

Class   w1       w2        Activation   Summed activation
A       0.5145   0.8575    0.0008
A       0.7071   0.7071    0.3950
A       0.6000   0.8000    0.0213
A       0.6402   0.7682    0.0768
A       0.5547   0.8321    0.0040
A       0.6247   0.7809    0.0480       0.5459
B       0.9615   0.2747    0.0011
B       0.9191   0.3939    0.0516
B       0.9701   0.2425    0.0003
B       0.9363   0.3511    0.0153
B       0.9138   0.4061    0.0706       0.1389
C       0.7071   -0.7071   0
C       0.4472   -0.8944   0
C       0.7071   -0.7071   0
C       0.8321   -0.5547   0
C       0.7071   -0.7071   0            0

Class A has the largest summed activation, so the unknown sample is assigned to class A.


Calculations of activations

Two of the pattern-unit activations from the table, computed in MATLAB with σ² = 0.01:

>> exp(((0.6247*0.7967)+(0.7809*0.6044)-1)/0.01)

ans =

0.0482

>> exp(((0.9138*0.7967)+(0.4061*0.6044)-1)/0.01)

ans =

0.0704
0.0704