
CHAPTER 16
Adaptive Resonance Theory

Ming-Feng Yeh


Objectives

There is no guarantee that, as more inputs are applied to the competitive network, the weight matrix will eventually converge.

Present a modified type of competitive learning, called adaptive resonance theory (ART), which is designed to overcome the problem of learning stability.


Theory & Examples

A key problem of the Grossberg network and the competitive network is that they do NOT always form stable clusters (or categories).

The learning instability occurs because of the network’s adaptability (or plasticity), which causes prior learning to be eroded by more recent learning.


Stability / Plasticity

How can a system be receptive to significant new patterns and yet remain stable in response to irrelevant patterns?

Grossberg and Carpenter developed ART to address this stability/plasticity dilemma. The ART networks are based on the Grossberg network of Chapter 15.


Key Innovation

The key innovation of ART is the use of "expectations." As each input is presented to the network, it is compared with the prototype vector that it most closely matches (the expectation).

If the match between the prototype and the input vector is NOT adequate, a new prototype is selected. In this way, previously learned memories (prototypes) are not eroded by new learning.


Overview

[Figure: Basic ART architecture, built on the Grossberg competitive network. Input -> Layer 1 (Retina, normalization) -> Layer 2 (Visual Cortex, contrast enhancement); the adaptive weights between the layers are long-term memory (LTM), while the layer activities are short-term memory (STM).]


Grossberg Network

The L1-L2 connections are instars, which perform a clustering (or categorization) operation. When an input pattern is presented, it is multiplied (after normalization) by the L1-L2 weight matrix.

A competition is performed at Layer 2 to determine which row of the weight matrix is closest to the input vector. That row is then moved toward the input vector.

After learning is complete, each row of the L1-L2 weight matrix is a prototype pattern, which represents a cluster (or a category) of input vectors.


ART Networks -- 1

Learning in ART networks also occurs in a set of feedback connections from Layer 2 to Layer 1. These connections are outstars, which perform pattern recall.

When a node in Layer 2 is activated, this reproduces a prototype pattern (the expectation) at Layer 1.

Layer 1 then performs a comparison between the expectation and the input pattern.

When the expectation and the input pattern are NOT closely matched, the orienting subsystem causes a reset in Layer 2.


ART Networks -- 2

The reset disables the current winning neuron, and the current expectation is removed.

A new competition is then performed in Layer 2, while the previous winning neuron is disabled.

The new winning neuron in Layer 2 projects a new expectation to Layer 1, through the L2-L1 connections.

This process continues until the L2-L1 expectation provides a close enough match to the input pattern.


ART Subsystems

Layer 1: Comparison of the input pattern and the expectation.

L1-L2 Connections (Instars): Perform the clustering operation. Each row of W1:2 is a prototype pattern.

Layer 2: Competition (contrast enhancement).

L2-L1 Connections (Outstars): Perform pattern recall (expectation). Each column of W2:1 is a prototype pattern.

Orienting Subsystem: Causes a reset when the expectation does not match the input pattern; disables the current winning neuron.


Layer 1


Layer 1 Operation

Equation of operation of Layer 1:

$$\varepsilon\frac{d\mathbf{n}^1(t)}{dt} = -\mathbf{n}^1(t) + \big({}^{+}\mathbf{b}^1 - \mathbf{n}^1(t)\big)\{\mathbf{p} + \mathbf{W}^{2:1}\mathbf{a}^2(t)\} - \big(\mathbf{n}^1(t) + {}^{-}\mathbf{b}^1\big)[{}^{-}\mathbf{W}^1]\mathbf{a}^2(t)$$

Excitatory input: input pattern + L2-L1 expectation.

Inhibitory input: gain control from Layer 2.

Output of Layer 1:

$$\mathbf{a}^1 = \mathrm{hardlim}^{+}(\mathbf{n}^1), \qquad \mathrm{hardlim}^{+}(n) = \begin{cases}1, & n > 0\\ 0, & n \le 0\end{cases}$$


Excitatory Input to L1

The excitatory input is $\mathbf{p} + \mathbf{W}^{2:1}\mathbf{a}^2(t)$.

Assume that the $j$th neuron in Layer 2 has won the competition, i.e., $a_j^2 = 1$ and $a_k^2 = 0$ for $k \ne j$. Then

$$\mathbf{W}^{2:1}\mathbf{a}^2 = \begin{bmatrix}\mathbf{w}_1^{2:1} & \mathbf{w}_2^{2:1} & \cdots & \mathbf{w}_{S^2}^{2:1}\end{bmatrix}\begin{bmatrix}0\\ \vdots\\ 1\\ \vdots\\ 0\end{bmatrix} = \mathbf{w}_j^{2:1}$$

$$\mathbf{p} + \mathbf{W}^{2:1}\mathbf{a}^2 = \mathbf{p} + \mathbf{w}_j^{2:1}$$

The excitatory input to Layer 1 is the sum of the input pattern and the L2-L1 expectation.


Inhibitory Input to L1

The inhibitory input is the gain control $[{}^{-}\mathbf{W}^1]\mathbf{a}^2$, where ${}^{-}\mathbf{W}^1$ is a matrix of all 1's:

$${}^{-}\mathbf{W}^1 = \begin{bmatrix}1 & 1 & \cdots & 1\\ \vdots & \vdots & & \vdots\\ 1 & 1 & \cdots & 1\end{bmatrix}$$

The inhibitory input to each neuron in Layer 1 is the sum of all of the outputs of Layer 2.

The gain control to Layer 1 will be one when Layer 2 is active (one neuron has won the competition), and zero when Layer 2 is inactive (all neurons have zero output).


Steady State Analysis -- 1

The response of neuron $i$ in Layer 1:

$$\varepsilon\frac{dn_i^1}{dt} = -n_i^1 + \big({}^{+}b_i^1 - n_i^1\big)\Big\{p_i + \sum_{j=1}^{S^2} w_{i,j}^{2:1}a_j^2\Big\} - \big(n_i^1 + {}^{-}b_i^1\big)\sum_{j=1}^{S^2} a_j^2$$

Case 1: Layer 2 is inactive -- each $a_j^2 = 0$:

$$\varepsilon\frac{dn_i^1}{dt} = -n_i^1 + \big({}^{+}b_i^1 - n_i^1\big)p_i$$

In steady state ($dn_i^1/dt = 0$):

$$n_i^1 = \frac{{}^{+}b_i^1\,p_i}{1 + p_i}$$

If $p_i = 0$ then $n_i^1 = 0$. If $p_i = 1$ then $n_i^1 = {}^{+}b_i^1/2 > 0$, so $a_i^1 = \mathrm{hardlim}^{+}(n_i^1) = p_i$.

The output of Layer 1 is the same as the input pattern: $\mathbf{a}^1 = \mathbf{p}$.


Steady State Analysis -- 2

Case 2: Layer 2 is active -- $a_j^2 = 1$ and $a_k^2 = 0$ for $k \ne j$:

$$\varepsilon\frac{dn_i^1}{dt} = -n_i^1 + \big({}^{+}b_i^1 - n_i^1\big)\{p_i + w_{i,j}^{2:1}\} - \big(n_i^1 + {}^{-}b_i^1\big)$$

In steady state:

$$n_i^1 = \frac{{}^{+}b_i^1\,(p_i + w_{i,j}^{2:1}) - {}^{-}b_i^1}{2 + p_i + w_{i,j}^{2:1}}$$

Layer 1 is to combine the input vector with the expectation from Layer 2. Since both the input and the expectation are binary patterns, we use a logical AND operation to combine the two vectors:

$n_i^1 < 0$ if either $p_i$ or $w_{i,j}^{2:1}$ is equal to 0 (which requires ${}^{+}b^1 - {}^{-}b^1 < 0$);

$n_i^1 > 0$ if both $p_i$ and $w_{i,j}^{2:1}$ are equal to 1 (which requires $2\,{}^{+}b^1 - {}^{-}b^1 > 0$).

With the biases chosen so that ${}^{+}b^1 < {}^{-}b^1 < 2\,{}^{+}b^1$, the Layer 1 output is therefore

$$\mathbf{a}^1 = \mathbf{p} \cap \mathbf{w}_j^{2:1}$$
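As a quick numerical check of Case 2 (a minimal sketch, assuming the bias values ${}^{+}b^1 = 1$ and ${}^{-}b^1 = 1.5$ used in the example on the next slide), the sign of the steady-state $n_i^1$ reproduces the logical AND of $p_i$ and $w_{i,j}^{2:1}$:

```python
# Steady-state Layer 1 response when Layer 2 is active (Case 2).
# Assumed bias values: +b1 = 1, -b1 = 1.5 (they satisfy +b1 < -b1 < 2*+b1).
b_plus, b_minus = 1.0, 1.5

def n1_steady(p_i, w_ij):
    """Steady-state net input of Layer 1 neuron i when Layer 2 neuron j is active."""
    return (b_plus * (p_i + w_ij) - b_minus) / (2.0 + p_i + w_ij)

for p_i in (0, 1):
    for w_ij in (0, 1):
        n = n1_steady(p_i, w_ij)
        a = 1 if n > 0 else 0              # hardlim+
        print(p_i, w_ij, round(n, 3), a)   # last column equals p_i AND w_ij
```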


Layer 1 Example

Let $\varepsilon = 0.1$, ${}^{+}b^1 = 1$, ${}^{-}b^1 = 1.5$,

$$\mathbf{W}^{2:1} = \begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}, \qquad \mathbf{p} = \begin{bmatrix}0\\ 1\end{bmatrix}$$

Assume that Layer 2 is active and neuron 2 of Layer 2 wins the competition. The equations of operation are

$$0.1\frac{dn_1^1}{dt} = -n_1^1 + (1 - n_1^1)(0 + 1) - (n_1^1 + 1.5)(1) = -3n_1^1 - 0.5$$

$$0.1\frac{dn_2^1}{dt} = -n_2^1 + (1 - n_2^1)(1 + 1) - (n_2^1 + 1.5)(1) = -4n_2^1 + 0.5$$


Response of Layer 1

$$n_1^1(t) = -\tfrac{1}{6}\big(1 - e^{-30t}\big), \qquad n_2^1(t) = \tfrac{1}{8}\big(1 - e^{-40t}\big)$$

$$\mathbf{a}^1 = \mathrm{hardlim}^{+}(\mathbf{n}^1) = \begin{bmatrix}0\\ 1\end{bmatrix} = \mathbf{p} \cap \mathbf{w}_2^{2:1}$$

[Figure: responses n_1^1(t) and n_2^1(t) over 0 <= t <= 0.2.]
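A minimal simulation sketch of this example (forward-Euler integration of the two Layer 1 equations; the step size and horizon are arbitrary choices):

```python
import numpy as np

# Forward-Euler integration of the Layer 1 example:
# 0.1 dn1/dt = -3*n1 - 0.5,   0.1 dn2/dt = -4*n2 + 0.5
eps, dt, T = 0.1, 1e-4, 0.2
n = np.zeros(2)
for _ in range(int(T / dt)):
    dn = np.array([-3.0 * n[0] - 0.5, -4.0 * n[1] + 0.5]) / eps
    n += dt * dn

print(n)                        # approx [-1/6, 1/8]
print((n > 0).astype(int))      # hardlim+ -> [0 1] = p AND w_2
```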


Layer 2

[Figure: ART1 Layer 2; one of its inputs is the reset signal from the orienting subsystem.]


Layer 2 Operation

Equation of operation of Layer 2:

$$\varepsilon\frac{d\mathbf{n}^2(t)}{dt} = -\mathbf{n}^2(t) + \big({}^{+}\mathbf{b}^2 - \mathbf{n}^2(t)\big)\{[{}^{+}\mathbf{W}^2]\mathbf{f}^2(\mathbf{n}^2(t)) + \mathbf{W}^{1:2}\mathbf{a}^1\} - \big(\mathbf{n}^2(t) + {}^{-}\mathbf{b}^2\big)[{}^{-}\mathbf{W}^2]\mathbf{f}^2(\mathbf{n}^2(t))$$

The excitatory input consists of the on-center feedback $[{}^{+}\mathbf{W}^2]\mathbf{f}^2(\mathbf{n}^2)$ and the adaptive instar term $\mathbf{W}^{1:2}\mathbf{a}^1$; the inhibitory input is the off-surround feedback $[{}^{-}\mathbf{W}^2]\mathbf{f}^2(\mathbf{n}^2)$.

The rows of the adaptive weights $\mathbf{W}^{1:2}$, after training, will represent the prototype patterns.

Output of Layer 2:

$$a_i^2 = \begin{cases}1, & \text{if } (\mathbf{w}_i^{1:2})^T\mathbf{a}^1 = \max_j\big[(\mathbf{w}_j^{1:2})^T\mathbf{a}^1\big]\\ 0, & \text{otherwise}\end{cases}$$


Layer 2 Example

Let $\varepsilon = 0.1$, ${}^{+}\mathbf{b}^2 = {}^{-}\mathbf{b}^2 = \begin{bmatrix}1\\ 1\end{bmatrix}$,

$$\mathbf{W}^{1:2} = \begin{bmatrix}(\mathbf{w}_1^{1:2})^T\\ (\mathbf{w}_2^{1:2})^T\end{bmatrix} = \begin{bmatrix}0.5 & 0.5\\ 1 & 0\end{bmatrix}, \qquad f^2(n) = \begin{cases}10\,n^2/(1 + n^2), & n \ge 0\\ 0, & n < 0\end{cases}$$

The equations of operation are

$$0.1\frac{dn_1^2(t)}{dt} = -n_1^2(t) + \big(1 - n_1^2(t)\big)\{f^2(n_1^2(t)) + (\mathbf{w}_1^{1:2})^T\mathbf{a}^1\} - \big(n_1^2(t) + 1\big)f^2(n_2^2(t))$$

$$0.1\frac{dn_2^2(t)}{dt} = -n_2^2(t) + \big(1 - n_2^2(t)\big)\{f^2(n_2^2(t)) + (\mathbf{w}_2^{1:2})^T\mathbf{a}^1\} - \big(n_2^2(t) + 1\big)f^2(n_1^2(t))$$


Response of Layer 2

With $\mathbf{a}^1 = \begin{bmatrix}1\\ 0\end{bmatrix}$, the instar inputs are $(\mathbf{w}_1^{1:2})^T\mathbf{a}^1 = 0.5$ and $(\mathbf{w}_2^{1:2})^T\mathbf{a}^1 = 1$. Neuron 2 receives the larger input and wins the competition:

$$\mathbf{a}^2 = \begin{bmatrix}0\\ 1\end{bmatrix}$$

[Figure: responses n_1^2(t) and n_2^2(t) over 0 <= t <= 0.2; n_2^2(t) is driven toward 1 while n_1^2(t) is suppressed.]
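A minimal simulation sketch of this Layer 2 example (forward-Euler integration with the transfer function above; the step size is an arbitrary choice):

```python
import numpy as np

def f2(n):
    """Layer 2 transfer function: 10*n^2 / (1 + n^2) for n >= 0, else 0."""
    return 10.0 * n ** 2 / (1.0 + n ** 2) if n >= 0 else 0.0

W12 = np.array([[0.5, 0.5],
                [1.0, 0.0]])        # rows are the prototype patterns
a1 = np.array([1.0, 0.0])           # output of Layer 1
inp = W12 @ a1                      # instar inputs: [0.5, 1.0]

eps, dt, T = 0.1, 1e-4, 0.2
n = np.zeros(2)
for _ in range(int(T / dt)):
    f = [f2(n[0]), f2(n[1])]
    dn1 = -n[0] + (1 - n[0]) * (f[0] + inp[0]) - (n[0] + 1) * f[1]
    dn2 = -n[1] + (1 - n[1]) * (f[1] + inp[1]) - (n[1] + 1) * f[0]
    n += dt * np.array([dn1, dn2]) / eps

print(n)   # neuron 2 (the larger instar input) is driven positive, neuron 1 is suppressed
```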


Orienting Subsystem

Determine if there is a sufficient match between the L2-L1 expectation (a1) and the input pattern (p)


Orienting Subsystem Operation

Equation of operation of the Orienting Subsystem:

$$\varepsilon\frac{dn^0(t)}{dt} = -n^0(t) + \big({}^{+}b^0 - n^0(t)\big)\{{}^{+}\mathbf{W}^0\mathbf{p}\} - \big(n^0(t) + {}^{-}b^0\big)\{{}^{-}\mathbf{W}^0\mathbf{a}^1(t)\}$$

Excitatory input:

$${}^{+}\mathbf{W}^0\mathbf{p} = \begin{bmatrix}\alpha & \alpha & \cdots & \alpha\end{bmatrix}\mathbf{p} = \alpha\sum_{j=1}^{S^1}p_j = \alpha\,\|\mathbf{p}\|^2$$

Inhibitory input:

$${}^{-}\mathbf{W}^0\mathbf{a}^1(t) = \begin{bmatrix}\beta & \beta & \cdots & \beta\end{bmatrix}\mathbf{a}^1(t) = \beta\sum_{j=1}^{S^1}a_j^1(t) = \beta\,\|\mathbf{a}^1\|^2$$

Whenever the excitatory input is larger than the inhibitory input, the Orienting Subsystem will be driven on.


Steady State Operation

In steady state ($dn^0/dt = 0$):

$$0 = -n^0 + \big({}^{+}b^0 - n^0\big)\{\alpha\|\mathbf{p}\|^2\} - \big(n^0 + {}^{-}b^0\big)\{\beta\|\mathbf{a}^1\|^2\}$$

$$n^0 = \frac{{}^{+}b^0\,\alpha\|\mathbf{p}\|^2 - {}^{-}b^0\,\beta\|\mathbf{a}^1\|^2}{1 + \alpha\|\mathbf{p}\|^2 + \beta\|\mathbf{a}^1\|^2}$$

Let ${}^{+}b^0 = {}^{-}b^0 = 1$. Then $n^0 > 0$ if $\alpha\|\mathbf{p}\|^2 - \beta\|\mathbf{a}^1\|^2 > 0$, or

$$\frac{\|\mathbf{a}^1\|^2}{\|\mathbf{p}\|^2} < \frac{\alpha}{\beta} = \rho \quad \text{(vigilance)}$$

This is the condition that will cause a reset of Layer 2.


Vigilance Parameter

The term $\rho = \alpha/\beta$ is called the vigilance parameter and must fall in the range $0 < \rho < 1$.

If $\rho$ is close to 1, a reset will occur unless $\mathbf{a}^1$ is close to $\mathbf{p}$.

If $\rho$ is close to 0, $\mathbf{a}^1$ need not be close to $\mathbf{p}$ to prevent a reset.

Since $\mathbf{a}^1 = \mathbf{p} \cap \mathbf{w}_j^{2:1}$ whenever Layer 2 is active, the orienting subsystem causes a reset when there is enough of a mismatch between $\mathbf{p}$ and $\mathbf{w}_j^{2:1}$.

Output of the Orienting Subsystem:

$$a^0 = \begin{cases}1, & \text{if } \|\mathbf{a}^1\|^2/\|\mathbf{p}\|^2 < \rho\\ 0, & \text{otherwise}\end{cases}$$
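A minimal sketch of this reset test as code (the function name `orienting_reset` is an illustrative choice; binary vectors are assumed):

```python
import numpy as np

def orienting_reset(p, a1, rho):
    """Return 1 (reset Layer 2) when ||a1||^2 / ||p||^2 < rho, else 0.

    For binary vectors, ||a1||^2 is simply the number of 1's in a1.
    """
    return 1 if np.sum(a1) / np.sum(p) < rho else 0

# Values from the example on the next slide: p = [1,1], a1 = [1,0], rho = 0.75
print(orienting_reset(np.array([1, 1]), np.array([1, 0]), 0.75))   # 1 -> reset
```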


Orienting Subsystem Example

Suppose that $\varepsilon = 0.1$, $\alpha = 3$, $\beta = 4$ (so $\rho = \alpha/\beta = 0.75$),

$$\mathbf{p} = \begin{bmatrix}1\\ 1\end{bmatrix}, \qquad \mathbf{a}^1 = \begin{bmatrix}1\\ 0\end{bmatrix}$$

The equation of operation becomes

$$0.1\frac{dn^0(t)}{dt} = -n^0(t) + \big(1 - n^0(t)\big)\{3(p_1 + p_2)\} - \big(n^0(t) + 1\big)\{4(a_1^1 + a_2^1)\}$$

$$\frac{dn^0(t)}{dt} = -110\,n^0(t) + 20$$

In this case a reset signal will be sent to Layer 2, since $n^0(t)$ settles at a positive value.

[Figure: response n^0(t) over 0 <= t <= 0.2, rising to its positive steady-state value.]
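A minimal sketch integrating this equation numerically (forward Euler; the step size is an arbitrary choice), confirming that $n^0(t)$ settles near $20/110 \approx 0.18 > 0$:

```python
# Forward-Euler integration of dn0/dt = -110*n0 + 20
dt, T, n0 = 1e-4, 0.2, 0.0
for _ in range(int(T / dt)):
    n0 += dt * (-110.0 * n0 + 20.0)
print(round(n0, 4))   # about 0.1818 > 0, so a reset signal is sent to Layer 2
```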


Learning Law

There are two separate learning laws: one for the L1-L2 connections $\mathbf{W}^{1:2}$ (instar) and another for the L2-L1 connections $\mathbf{W}^{2:1}$ (outstar).

Both sets of connections are updated at the same time, whenever the input and the expectation have an adequate match.

The process of matching, and subsequent adaptation, is referred to as resonance.


Subset / Superset Dilemma

Suppose that

$$\mathbf{W}^{1:2} = \begin{bmatrix}1 & 1 & 0\\ 1 & 1 & 1\end{bmatrix}$$

so that the prototype patterns are

$$\mathbf{w}_1^{1:2} = \begin{bmatrix}1\\ 1\\ 0\end{bmatrix}, \qquad \mathbf{w}_2^{1:2} = \begin{bmatrix}1\\ 1\\ 1\end{bmatrix}$$

(the first prototype is a subset of the second). If the output of Layer 1 is

$$\mathbf{a}^1 = \begin{bmatrix}1\\ 1\\ 0\end{bmatrix}$$

then the input to Layer 2 will be

$$\mathbf{W}^{1:2}\mathbf{a}^1 = \begin{bmatrix}2\\ 2\end{bmatrix}$$

Both prototype vectors have the same inner product with $\mathbf{a}^1$, even though the first prototype is identical to $\mathbf{a}^1$ and the second prototype is not. This is called the subset/superset dilemma.


Subset / Superset Solution

One solution to the subset/superset dilemma is to normalize the prototype patterns (divide each row by the number of 1's it contains):

$$\mathbf{W}^{1:2} = \begin{bmatrix}\tfrac{1}{2} & \tfrac{1}{2} & 0\\ \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3}\end{bmatrix}$$

The input to Layer 2 will then be

$$\mathbf{W}^{1:2}\mathbf{a}^1 = \begin{bmatrix}1\\ \tfrac{2}{3}\end{bmatrix}$$

The first prototype now has the largest inner product with $\mathbf{a}^1$, so the first neuron in Layer 2 will be active.
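A minimal sketch of the two cases in code:

```python
import numpy as np

a1 = np.array([1, 1, 0])

W_raw = np.array([[1, 1, 0],
                  [1, 1, 1]])
print(W_raw @ a1)                                     # [2 2] -- a tie: the dilemma

W_norm = W_raw / W_raw.sum(axis=1, keepdims=True)     # normalize each prototype row
print(W_norm @ a1)                                    # [1.0  0.667] -- prototype 1 wins
```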


Learning Law: L1-L2

Instar learning with competition (row $i$ of $\mathbf{W}^{1:2}$):

$$\frac{d[\mathbf{w}_i^{1:2}(t)]}{dt} = a_i^2(t)\Big\{\big({}^{+}\mathbf{b}^{1:2} - \mathbf{w}_i^{1:2}(t)\big)[{}^{+}\mathbf{W}^{1:2}]\,\mathbf{a}^1(t) - \big(\mathbf{w}_i^{1:2}(t) + {}^{-}\mathbf{b}^{1:2}\big)[{}^{-}\mathbf{W}^{1:2}]\,\mathbf{a}^1(t)\Big\}$$

where (for $S^1 = 3$)

$${}^{+}\mathbf{b}^{1:2} = \begin{bmatrix}1\\ 1\\ 1\end{bmatrix}, \quad {}^{-}\mathbf{b}^{1:2} = \begin{bmatrix}0\\ 0\\ 0\end{bmatrix}, \quad {}^{+}\mathbf{W}^{1:2} = \zeta\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}\ \text{(on-center)}, \quad {}^{-}\mathbf{W}^{1:2} = \begin{bmatrix}0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0\end{bmatrix}\ \text{(off-surround)}$$

When neuron $i$ of Layer 2 is active, the $i$th row of $\mathbf{W}^{1:2}$, $\mathbf{w}_i^{1:2}$, is moved in the direction of $\mathbf{a}^1$. The elements of $\mathbf{w}_i^{1:2}$ compete, and therefore $\mathbf{w}_i^{1:2}$ is normalized.


Fast Learning

For fast learning, we assume that the outputs of Layer 1 and Layer 2 remain constant until the weights reach steady state.

Written element by element, the learning law is

$$\frac{dw_{i,j}^{1:2}(t)}{dt} = a_i^2(t)\Big\{\big(1 - w_{i,j}^{1:2}(t)\big)\,\zeta\,a_j^1(t) - w_{i,j}^{1:2}(t)\sum_{k \ne j}a_k^1(t)\Big\}$$

Assume that $a_i^2(t) = 1$ and set $dw_{i,j}^{1:2}/dt = 0$.

Case 1: $a_j^1(t) = 1$:

$$0 = \big(1 - w_{i,j}^{1:2}\big)\zeta - w_{i,j}^{1:2}\big(\|\mathbf{a}^1\|^2 - 1\big) \;\Rightarrow\; w_{i,j}^{1:2} = \frac{\zeta}{\zeta + \|\mathbf{a}^1\|^2 - 1}$$

Case 2: $a_j^1(t) = 0$:

$$0 = -\,w_{i,j}^{1:2}\,\|\mathbf{a}^1\|^2 \;\Rightarrow\; w_{i,j}^{1:2} = 0$$

Summary:

$$\mathbf{w}_i^{1:2} = \frac{\zeta\,\mathbf{a}^1}{\zeta + \|\mathbf{a}^1\|^2 - 1}$$


Learning Law: L2-L1

Typical outstar learning (column $j$ of $\mathbf{W}^{2:1}$):

$$\frac{d[\mathbf{w}_j^{2:1}(t)]}{dt} = a_j^2(t)\big[-\mathbf{w}_j^{2:1}(t) + \mathbf{a}^1(t)\big]$$

If neuron $j$ in Layer 2 is active (has won the competition), then column $j$ of $\mathbf{W}^{2:1}$ is moved toward $\mathbf{a}^1$.

Fast learning: assume that $a_j^2 = 1$ and set $d[\mathbf{w}_j^{2:1}]/dt = \mathbf{0}$:

$$\mathbf{w}_j^{2:1} = \mathbf{a}^1$$

Column $j$ of $\mathbf{W}^{2:1}$ converges to the output of Layer 1, $\mathbf{a}^1$, which is a combination of the input pattern and the appropriate prototype pattern. The prototype pattern is thus modified to incorporate the current input pattern.
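A minimal sketch of the two fast-learning updates as code (the function names are illustrative; vectors are NumPy arrays of 0's and 1's):

```python
import numpy as np

def update_W12_row(W12, i, a1, zeta=2.0):
    """Fast instar learning: row i of W1:2 becomes zeta*a1 / (zeta + ||a1||^2 - 1)."""
    W12[i, :] = zeta * a1 / (zeta + a1.sum() - 1.0)

def update_W21_col(W21, j, a1):
    """Fast outstar learning: column j of W2:1 becomes a1."""
    W21[:, j] = a1
```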


ART1 Algorithm Summary

0. Initialization: The initial $\mathbf{W}^{2:1}$ is set to all 1's. Every element of the initial $\mathbf{W}^{1:2}$ is set to $\zeta/(\zeta + S^1 - 1)$.

1. Present an input pattern to the network. Since Layer 2 is not active on initialization, the output of Layer 1 is $\mathbf{a}^1 = \mathbf{p}$.

2. Compute the input to Layer 2, $\mathbf{W}^{1:2}\mathbf{a}^1$, and activate the neuron in Layer 2 with the largest input:

$$a_i^2 = \begin{cases}1, & \text{if } (\mathbf{w}_i^{1:2})^T\mathbf{a}^1 = \max_j\big[(\mathbf{w}_j^{1:2})^T\mathbf{a}^1\big]\\ 0, & \text{otherwise}\end{cases}$$

In case of a tie, the neuron with the smallest index is declared the winner.


Algorithm Summary Cont.

3. Compute the L2-L1 expectation (assume that neuron $j$ of Layer 2 is activated): $\mathbf{W}^{2:1}\mathbf{a}^2 = \mathbf{w}_j^{2:1}$

4. Layer 2 is active. Adjust the Layer 1 output to include the L2-L1 expectation: $\mathbf{a}^1 = \mathbf{p} \cap \mathbf{w}_j^{2:1}$

5. Determine the degree of match between the input pattern and the expectation (Orienting Subsystem):

$$a^0 = \begin{cases}1, & \text{if } \|\mathbf{a}^1\|^2/\|\mathbf{p}\|^2 < \rho\\ 0, & \text{otherwise}\end{cases}$$

6. If $a^0 = 1$, then set $a_j^2 = 0$, inhibit it until an adequate match occurs (resonance), and return to step 1. If $a^0 = 0$, then continue with step 7.


Algorithm Summary Cont.

7. Update row $j$ of $\mathbf{W}^{1:2}$ when resonance has occurred:

$$\mathbf{w}_j^{1:2} = \frac{\zeta\,\mathbf{a}^1}{\zeta + \|\mathbf{a}^1\|^2 - 1}$$

8. Update column $j$ of $\mathbf{W}^{2:1}$: $\mathbf{w}_j^{2:1} = \mathbf{a}^1$

9. Remove the input pattern, restore all inhibited neurons in Layer 2, and return to step 1.

The input patterns continue to be applied to the network until the weights stabilize (do not change).

The ART1 network can only be used for binary input patterns.
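A minimal, self-contained sketch of this fast-learning ART1 procedure in Python (the function name `art1` and its signature are illustrative choices; it follows steps 0-9 above for binary input vectors that contain at least one 1):

```python
import numpy as np

def art1(patterns, S2, rho, zeta=2.0, max_epochs=100):
    """Fast-learning ART1 sketch for binary (0/1) NumPy input vectors."""
    S1 = len(patterns[0])
    W21 = np.ones((S1, S2))                           # step 0: expectations in columns
    W12 = np.full((S2, S1), zeta / (zeta + S1 - 1))   # step 0: instar rows (prototypes)

    for _ in range(max_epochs):
        W12_old, W21_old = W12.copy(), W21.copy()
        for p in patterns:
            inhibited = np.zeros(S2, dtype=bool)      # reset flags for this pattern
            while not inhibited.all():                # give up if every neuron is reset
                a1 = p.copy()                         # step 1: Layer 2 initially inactive
                n2 = W12 @ a1                         # step 2: input to Layer 2
                n2[inhibited] = -np.inf
                j = int(np.argmax(n2))                # ties -> smallest index wins
                a1 = np.minimum(p, W21[:, j])         # steps 3-4: a1 = p AND expectation
                if a1.sum() / p.sum() < rho:          # step 5: vigilance test
                    inhibited[j] = True               # step 6: reset, inhibit neuron j
                    continue
                W12[j, :] = zeta * a1 / (zeta + a1.sum() - 1.0)   # step 7: instar update
                W21[:, j] = a1                        # step 8: outstar update
                break                                 # step 9: go on to the next pattern
        if np.allclose(W12, W12_old) and np.allclose(W21, W21_old):
            break                                     # weights have stabilized
    return W12, W21
```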


Solved Problem: P16.5

Train an ART1 network using the parameters $\zeta = 2$ and $\rho = 0.4$, choosing $S^2 = 3$ (three categories), and using the following three input vectors:

$$\mathbf{p}_1 = \begin{bmatrix}0\\ 1\\ 0\end{bmatrix}, \qquad \mathbf{p}_2 = \begin{bmatrix}1\\ 0\\ 0\end{bmatrix}, \qquad \mathbf{p}_3 = \begin{bmatrix}1\\ 1\\ 0\end{bmatrix}$$

Initial weights:

$$\mathbf{W}^{2:1} = \begin{bmatrix}1 & 1 & 1\\ 1 & 1 & 1\\ 1 & 1 & 1\end{bmatrix}, \qquad \mathbf{W}^{1:2} = \begin{bmatrix}0.5 & 0.5 & 0.5\\ 0.5 & 0.5 & 0.5\\ 0.5 & 0.5 & 0.5\end{bmatrix}$$

1-1: Compute the Layer 1 response: $\mathbf{a}^1 = \mathbf{p}_1 = \begin{bmatrix}0 & 1 & 0\end{bmatrix}^T$


P16.5 Continued

1-2: Compute the input to Layer 2:

$$\mathbf{W}^{1:2}\mathbf{a}^1 = \begin{bmatrix}0.5 & 0.5 & 0.5\\ 0.5 & 0.5 & 0.5\\ 0.5 & 0.5 & 0.5\end{bmatrix}\begin{bmatrix}0\\ 1\\ 0\end{bmatrix} = \begin{bmatrix}0.5\\ 0.5\\ 0.5\end{bmatrix}$$

Since all neurons have the same input, pick the first neuron as the winner: $\mathbf{a}^2 = \begin{bmatrix}1 & 0 & 0\end{bmatrix}^T$

1-3: Compute the L2-L1 expectation:

$$\mathbf{W}^{2:1}\mathbf{a}^2 = \begin{bmatrix}1 & 1 & 1\\ 1 & 1 & 1\\ 1 & 1 & 1\end{bmatrix}\begin{bmatrix}1\\ 0\\ 0\end{bmatrix} = \mathbf{w}_1^{2:1} = \begin{bmatrix}1\\ 1\\ 1\end{bmatrix}$$


P16.5 Continued

1-4: Adjust the Layer 1 output to include the expectation:

$$\mathbf{a}^1 = \mathbf{p}_1 \cap \mathbf{w}_1^{2:1} = \begin{bmatrix}0\\ 1\\ 0\end{bmatrix} \cap \begin{bmatrix}1\\ 1\\ 1\end{bmatrix} = \begin{bmatrix}0\\ 1\\ 0\end{bmatrix}$$

1-5: Determine the degree of match: $\|\mathbf{a}^1\|^2/\|\mathbf{p}_1\|^2 = 1/1 = 1 > 0.4$. Therefore $a^0 = 0$ (no reset).

1-6: Since $a^0 = 0$, continue with step 7.

1-7: Resonance has occurred, so update row 1 of $\mathbf{W}^{1:2}$:

$$\mathbf{w}_1^{1:2} = \frac{2\,\mathbf{a}^1}{2 + \|\mathbf{a}^1\|^2 - 1} = \begin{bmatrix}0\\ 1\\ 0\end{bmatrix}, \qquad \mathbf{W}^{1:2} = \begin{bmatrix}0 & 1 & 0\\ 0.5 & 0.5 & 0.5\\ 0.5 & 0.5 & 0.5\end{bmatrix}$$


P16.5 Continued

1-8: Update column 1 of $\mathbf{W}^{2:1}$:

$$\mathbf{w}_1^{2:1} = \mathbf{a}^1 = \begin{bmatrix}0\\ 1\\ 0\end{bmatrix}, \qquad \mathbf{W}^{2:1} = \begin{bmatrix}0 & 1 & 1\\ 1 & 1 & 1\\ 0 & 1 & 1\end{bmatrix}$$

2-1: Compute the new Layer 1 response (Layer 2 inactive): $\mathbf{a}^1 = \mathbf{p}_2 = \begin{bmatrix}1 & 0 & 0\end{bmatrix}^T$

2-2: Compute the input to Layer 2:

$$\mathbf{W}^{1:2}\mathbf{a}^1 = \begin{bmatrix}0 & 1 & 0\\ 0.5 & 0.5 & 0.5\\ 0.5 & 0.5 & 0.5\end{bmatrix}\begin{bmatrix}1\\ 0\\ 0\end{bmatrix} = \begin{bmatrix}0\\ 0.5\\ 0.5\end{bmatrix}$$

Since neurons 2 and 3 have the same input, pick the second neuron as the winner: $\mathbf{a}^2 = \begin{bmatrix}0 & 1 & 0\end{bmatrix}^T$


P16.5 Continued

2-3: Compute the L2-L1 expectation:

$$\mathbf{W}^{2:1}\mathbf{a}^2 = \mathbf{w}_2^{2:1} = \begin{bmatrix}1\\ 1\\ 1\end{bmatrix}$$

2-4: Adjust the Layer 1 output to include the expectation:

$$\mathbf{a}^1 = \mathbf{p}_2 \cap \mathbf{w}_2^{2:1} = \begin{bmatrix}1\\ 0\\ 0\end{bmatrix} \cap \begin{bmatrix}1\\ 1\\ 1\end{bmatrix} = \begin{bmatrix}1\\ 0\\ 0\end{bmatrix}$$

2-5: Determine the degree of match: $\|\mathbf{a}^1\|^2/\|\mathbf{p}_2\|^2 = 1/1 = 1 > 0.4$. Therefore $a^0 = 0$ (no reset).

2-6: Since $a^0 = 0$, continue with step 7.


P16.5 Continued

2-7: Resonance has occurred, so update row 2 of $\mathbf{W}^{1:2}$:

$$\mathbf{w}_2^{1:2} = \frac{2\,\mathbf{a}^1}{2 + \|\mathbf{a}^1\|^2 - 1} = \begin{bmatrix}1\\ 0\\ 0\end{bmatrix}, \qquad \mathbf{W}^{1:2} = \begin{bmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0.5 & 0.5 & 0.5\end{bmatrix}$$

2-8: Update column 2 of $\mathbf{W}^{2:1}$:

$$\mathbf{w}_2^{2:1} = \mathbf{a}^1 = \begin{bmatrix}1\\ 0\\ 0\end{bmatrix}, \qquad \mathbf{W}^{2:1} = \begin{bmatrix}0 & 1 & 1\\ 1 & 0 & 1\\ 0 & 0 & 1\end{bmatrix}$$

3-1: Compute the new Layer 1 response: $\mathbf{a}^1 = \mathbf{p}_3 = \begin{bmatrix}1 & 1 & 0\end{bmatrix}^T$

3-2: Compute the input to Layer 2:

$$\mathbf{W}^{1:2}\mathbf{a}^1 = \begin{bmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0.5 & 0.5 & 0.5\end{bmatrix}\begin{bmatrix}1\\ 1\\ 0\end{bmatrix} = \begin{bmatrix}1\\ 1\\ 1\end{bmatrix}$$

Since all three neurons have the same input, pick the first neuron as the winner: $\mathbf{a}^2 = \begin{bmatrix}1 & 0 & 0\end{bmatrix}^T$


P16.5 Continued

3-3: Compute the L2-L1 expectation:

$$\mathbf{W}^{2:1}\mathbf{a}^2 = \mathbf{w}_1^{2:1} = \begin{bmatrix}0\\ 1\\ 0\end{bmatrix}$$

3-4: Adjust the Layer 1 output to include the expectation:

$$\mathbf{a}^1 = \mathbf{p}_3 \cap \mathbf{w}_1^{2:1} = \begin{bmatrix}1\\ 1\\ 0\end{bmatrix} \cap \begin{bmatrix}0\\ 1\\ 0\end{bmatrix} = \begin{bmatrix}0\\ 1\\ 0\end{bmatrix}$$

3-5: Determine the degree of match: $\|\mathbf{a}^1\|^2/\|\mathbf{p}_3\|^2 = 1/2 = 0.5 > 0.4$. Therefore $a^0 = 0$ (no reset).

3-6: Since $a^0 = 0$, continue with step 7.


P16.5 Continued

3-7: Resonance has occurred, so update row 1 of $\mathbf{W}^{1:2}$:

$$\mathbf{w}_1^{1:2} = \frac{2\,\mathbf{a}^1}{2 + \|\mathbf{a}^1\|^2 - 1} = \begin{bmatrix}0\\ 1\\ 0\end{bmatrix}, \qquad \mathbf{W}^{1:2} = \begin{bmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0.5 & 0.5 & 0.5\end{bmatrix}$$

3-8: Update column 1 of $\mathbf{W}^{2:1}$:

$$\mathbf{w}_1^{2:1} = \mathbf{a}^1 = \begin{bmatrix}0\\ 1\\ 0\end{bmatrix}, \qquad \mathbf{W}^{2:1} = \begin{bmatrix}0 & 1 & 1\\ 1 & 0 & 1\\ 0 & 0 & 1\end{bmatrix}$$

This completes the training, since applying any of the three patterns again will not change the weights. The patterns have been successfully clustered.
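Assuming the `art1` sketch given after the algorithm summary, this clustering can be checked directly:

```python
import numpy as np

p1, p2, p3 = np.array([0, 1, 0]), np.array([1, 0, 0]), np.array([1, 1, 0])
W12, W21 = art1([p1, p2, p3], S2=3, rho=0.4, zeta=2.0)
print(W12)   # rows:    [0 1 0], [1 0 0], [0.5 0.5 0.5]
print(W21)   # columns: [0 1 0], [1 0 0], [1 1 1]
```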


Solved Problem: P16.6

Repeat Problem P16.5, but change the vigilance parameter to $\rho = 0.6$.

The training will proceed exactly as in Problem P16.5, until pattern $\mathbf{p}_3$ is presented.

3-1: Compute the Layer 1 response: $\mathbf{a}^1 = \mathbf{p}_3 = \begin{bmatrix}1 & 1 & 0\end{bmatrix}^T$

3-2: Compute the input to Layer 2:

$$\mathbf{W}^{1:2}\mathbf{a}^1 = \begin{bmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0.5 & 0.5 & 0.5\end{bmatrix}\begin{bmatrix}1\\ 1\\ 0\end{bmatrix} = \begin{bmatrix}1\\ 1\\ 1\end{bmatrix}$$

Since all three neurons have the same input, pick the first neuron as the winner: $\mathbf{a}^2 = \begin{bmatrix}1 & 0 & 0\end{bmatrix}^T$


P16.6 Continued

3-3: Compute the L2-L1 expectation:

$$\mathbf{W}^{2:1}\mathbf{a}^2 = \mathbf{w}_1^{2:1} = \begin{bmatrix}0\\ 1\\ 0\end{bmatrix}$$

3-4: Adjust the Layer 1 output to include the expectation:

$$\mathbf{a}^1 = \mathbf{p}_3 \cap \mathbf{w}_1^{2:1} = \begin{bmatrix}1\\ 1\\ 0\end{bmatrix} \cap \begin{bmatrix}0\\ 1\\ 0\end{bmatrix} = \begin{bmatrix}0\\ 1\\ 0\end{bmatrix}$$

3-5: Determine the degree of match: $\|\mathbf{a}^1\|^2/\|\mathbf{p}_3\|^2 = 1/2 = 0.5 < 0.6$. Therefore $a^0 = 1$ (reset).

3-6: Since $a^0 = 1$, set $a_1^2 = 0$, inhibit neuron 1 until an adequate match occurs (resonance), and return to step 1.


P16.6 Continued

4-1: Recompute the Layer 1 response (Layer 2 inactive): $\mathbf{a}^1 = \mathbf{p}_3 = \begin{bmatrix}1 & 1 & 0\end{bmatrix}^T$

4-2: Compute the input to Layer 2:

$$\mathbf{W}^{1:2}\mathbf{a}^1 = \begin{bmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0.5 & 0.5 & 0.5\end{bmatrix}\begin{bmatrix}1\\ 1\\ 0\end{bmatrix} = \begin{bmatrix}1\\ 1\\ 1\end{bmatrix}$$

Since neuron 1 is inhibited, neuron 2 is the winner: $\mathbf{a}^2 = \begin{bmatrix}0 & 1 & 0\end{bmatrix}^T$

4-3: Compute the L2-L1 expectation:

$$\mathbf{W}^{2:1}\mathbf{a}^2 = \mathbf{w}_2^{2:1} = \begin{bmatrix}1\\ 0\\ 0\end{bmatrix}$$

4-4: Adjust the Layer 1 output to include the expectation:

$$\mathbf{a}^1 = \mathbf{p}_3 \cap \mathbf{w}_2^{2:1} = \begin{bmatrix}1\\ 1\\ 0\end{bmatrix} \cap \begin{bmatrix}1\\ 0\\ 0\end{bmatrix} = \begin{bmatrix}1\\ 0\\ 0\end{bmatrix}$$


P16.6 Continued

4-5: Determine the degree of match: $\|\mathbf{a}^1\|^2/\|\mathbf{p}_3\|^2 = 1/2 = 0.5 < 0.6$. Therefore $a^0 = 1$ (reset).

4-6: Since $a^0 = 1$, set $a_2^2 = 0$, inhibit neuron 2 until an adequate match occurs (resonance), and return to step 1.

5-1: Recompute the Layer 1 response: $\mathbf{a}^1 = \mathbf{p}_3 = \begin{bmatrix}1 & 1 & 0\end{bmatrix}^T$

5-2: Compute the input to Layer 2:

$$\mathbf{W}^{1:2}\mathbf{a}^1 = \begin{bmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0.5 & 0.5 & 0.5\end{bmatrix}\begin{bmatrix}1\\ 1\\ 0\end{bmatrix} = \begin{bmatrix}1\\ 1\\ 1\end{bmatrix}$$

Since neurons 1 and 2 are inhibited, neuron 3 is the winner: $\mathbf{a}^2 = \begin{bmatrix}0 & 0 & 1\end{bmatrix}^T$


P16.6 Continued

5-3: Compute the L2-L1 expectation:

$$\mathbf{W}^{2:1}\mathbf{a}^2 = \mathbf{w}_3^{2:1} = \begin{bmatrix}1\\ 1\\ 1\end{bmatrix}$$

5-4: Adjust the Layer 1 output to include the expectation:

$$\mathbf{a}^1 = \mathbf{p}_3 \cap \mathbf{w}_3^{2:1} = \begin{bmatrix}1\\ 1\\ 0\end{bmatrix} \cap \begin{bmatrix}1\\ 1\\ 1\end{bmatrix} = \begin{bmatrix}1\\ 1\\ 0\end{bmatrix}$$

5-5: Determine the degree of match: $\|\mathbf{a}^1\|^2/\|\mathbf{p}_3\|^2 = 2/2 = 1 > 0.6$. Therefore $a^0 = 0$ (no reset).

5-6: Since $a^0 = 0$, continue with step 7.


P16.6 Continued

5-7: Resonance has occurred, so update row 3 of $\mathbf{W}^{1:2}$:

$$\mathbf{w}_3^{1:2} = \frac{2\,\mathbf{a}^1}{2 + \|\mathbf{a}^1\|^2 - 1} = \begin{bmatrix}2/3\\ 2/3\\ 0\end{bmatrix}, \qquad \mathbf{W}^{1:2} = \begin{bmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 2/3 & 2/3 & 0\end{bmatrix}$$

5-8: Update column 3 of $\mathbf{W}^{2:1}$:

$$\mathbf{w}_3^{2:1} = \mathbf{a}^1 = \begin{bmatrix}1\\ 1\\ 0\end{bmatrix}, \qquad \mathbf{W}^{2:1} = \begin{bmatrix}0 & 1 & 1\\ 1 & 0 & 1\\ 0 & 0 & 0\end{bmatrix}$$

This completes the training, since applying any of the three patterns again will not change the weights. The patterns have been successfully clustered: with the higher vigilance, p3 is assigned its own category instead of sharing one with p1.
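Again assuming the `art1` sketch from the algorithm summary, the effect of the higher vigilance can be checked directly:

```python
import numpy as np

p1, p2, p3 = np.array([0, 1, 0]), np.array([1, 0, 0]), np.array([1, 1, 0])
W12, W21 = art1([p1, p2, p3], S2=3, rho=0.6, zeta=2.0)
print(W12)   # rows:    [0 1 0], [1 0 0], [2/3 2/3 0]
print(W21)   # columns: [0 1 0], [1 0 0], [1 1 0]
```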


Solved Problem: P16.7

Train an ART1 network using the following input vectors. Present the vectors in the order p1-p2-p3-p1-p4. Use the parameters $\zeta = 2$ and $\rho = 0.6$, and choose $S^2 = 3$ (three categories). Train the network until the weights have converged.

[Figure: the four input patterns p1-p4 are 5x5 grids (blue square = 1, white square = 0), i.e., 25-dimensional binary vectors.]

The initial $\mathbf{W}^{2:1}$ is an $S^1 \times S^2 = 25 \times 3$ matrix of 1's. The initial $\mathbf{W}^{1:2}$ is an $S^2 \times S^1 = 3 \times 25$ matrix, with every element equal to

$$\frac{\zeta}{\zeta + S^1 - 1} = \frac{2}{2 + 25 - 1} = 0.0769$$


P16.7 Continued

Training sequence: p1-p2-p3-p1-p4

[Figure: table of the five presentations, marking for each one which Layer 2 neurons are reset (√) and which neuron finally reaches resonance (○).]