
Slide 1: Title

Department of Electrical-Electronics Eng., Cukurova University

Introduction to Neural Networks: Self-Organizing Map

Slide 2: Simple Models

• The network has inputs and outputs.
• There is no feedback from the environment, i.e. no supervision.
• The network updates its weights following some learning rule, and finds patterns, features, or categories within the inputs presented to it.

Slide 3: Unsupervised Learning

In unsupervised competitive learning the neurons take part in a competition for each input. The winner of the competition, and sometimes some other neurons, are allowed to change their weights.

• In simple competitive learning only the winner is allowed to learn (change its weights).
• In self-organizing maps, other neurons in the neighbourhood of the winner may also learn.

Slide 4: Simple Competitive Learning

[Figure: N input units x1, x2, …, xN fully connected through weights W11, W12, …, WPN to P output neurons Y1, Y2, …, YP.]

N input units, P output neurons, P x N weights. Each output neuron computes its field

  h_i = Σ_{j=1}^{N} W_ij X_j,   i = 1, 2, …, P

and the outputs are binary: Y_i = 1 or 0.

Slide 5: Network Activation

• The unit with the highest field h_i fires: i* is the winner unit.
• Geometrically, the winner's weight vector W_i* is the one closest to the current input vector.
• The winning unit's weight vector W_i* is updated to be even closer to the current input vector.

Slide 6: Learning

Starting with small random weights, at each step:

1. a new input vector is presented to the network;
2. all fields are calculated to find a winner;
3. W_i* is updated to be closer to the input, using the standard competitive learning equation:

  ΔW_i*j = η (X_j − W_i*j)

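The winner-take-all rule of the last few slides can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the lecture; the function name `competitive_step`, the network size, and the learning rate value are assumptions.

```python
import numpy as np

def competitive_step(W, x, eta=0.1):
    """One step of simple competitive learning: only the winner learns.

    W : (P, N) weight matrix, one row per output neuron.
    x : (N,) input vector.
    """
    h = W @ x                           # fields h_i = sum_j W_ij X_j
    winner = int(np.argmax(h))          # the unit with the highest field fires
    W[winner] += eta * (x - W[winner])  # Delta W_i*j = eta (X_j - W_i*j)
    return winner

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 4))  # small random initial weights
x = np.array([1.0, 0.0, 0.0, 1.0])
before = W.copy()
w = competitive_step(W, x)              # only row w of W has moved toward x
```

After the step, the winner's weight vector is strictly closer to x than before, while the losing units are untouched.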


Slide 7: Result

• Each output unit moves to the center of mass of a cluster of input vectors, i.e. the network performs clustering.

Slide 8: Competitive Learning, Cntd

• It is important to break the symmetry in the initial random weights.
• The final configuration depends on the initialization:
  – A winning unit has more chance of winning the next time a similar input is seen.
  – Some outputs may never fire.
  – This can be compensated by updating the non-winning units with a smaller update.

Slide 9: Self Organized Map (SOM) Neural Network

Slide 10: Self Organized Map (SOM)

• The self-organizing map (SOM) is a method for unsupervised learning, based on a grid of artificial neurons whose weights are adapted to match input vectors in a training set.
• It was first described by the Finnish professor Teuvo Kohonen and is thus sometimes referred to as a Kohonen map.
• SOM is one of the most popular neural computation methods in use, and several thousand scientific articles have been written about it. SOM is especially good at producing visualizations of high-dimensional data.

Slide 11: [figure only]

Slide 12: Self Organizing Networks

• Discover significant patterns or features in the input data.
• Discovery is done without a teacher.
• Synaptic weights are changed according to local rules.
• The changes affect a neuron's immediate environment until a final configuration develops.
• The models are produced by a learning algorithm that automatically orders them on the two-dimensional grid along with their mutual similarity.



Slide 13: Self-Organizing Networks

• Kohonen maps (SOM)
• Learning Vector Quantization (LVQ)
• Principal Components Networks (PCA)
• Adaptive Resonance Theory (ART)

Slide 14: Why SOM?

• Unsupervised learning
• Clustering
• Classification
• Monitoring
• Data visualization
• Potential for combining SOM with other neural networks (MLP, RBF)

Slide 15: The Goal

• We have to find values for the weight vectors of the links from the input layer to the nodes of the lattice, in such a way that adjacent neurons will have similar weight vectors.
• For an input, the output of the neural network is the neuron whose weight vector is most similar (with respect to Euclidean distance) to that input.
• In this way, each neuron (more precisely, its weight vector) is the center of a cluster containing all the input examples which are mapped to that neuron.

Slide 16: Network Architecture

• Two layers of units:
  – Input: n units (length of training vectors)
  – Output: m units (number of categories)
• Input units fully connected with weights to output units.
• Intralayer (lateral) connections:
  – Within the output layer
  – Defined according to some topology
  – Not weights, but used in the algorithm for updating weights

Slide 17: SOM Architecture

• A lattice of neurons ('nodes') accepts and responds to a set of input signals.
• Responses are compared; a 'winning' neuron is selected from the lattice.
• The selected neuron is activated together with its 'neighbourhood' neurons.
• An adaptive process changes the weights to more closely match the inputs.

[Figure: a 2-d array of neurons; a set of input signals x1, x2, x3, …, xn connected to each neuron j through weights wj1, wj2, wj3, …, wjn.]

    18

    Introduction to Neural Networks

    Cukurova University

    Architecture• The input is connected with each neuron of a lattice.

    • Lattice Topology: It determines a neighbourhood structure of the neurons.

    1-dimensional topology

    2-dimensional topology

    Two possible neighbourhoods

    A small neighbourhood


Slide 19: Concept of the SOM

The input space (input layer) is mapped onto a reduced feature space (map layer).

[Figure: clustering and ordering of the cluster centers in a two-dimensional grid; cluster centers (code vectors) and the place of these code vectors in the reduced space.]

Slide 20: Measuring Distances Between Nodes

• Distances between output neurons will be used in the learning process.
• The distance may be based upon:
  a) a rectangular lattice
  b) a hexagonal lattice
• Let d(i,j) be the distance between the output nodes i and j:
  – d(i,j) = 1 if node j is in the first outer rectangle/hexagon of node i
  – d(i,j) = 2 if node j is in the second outer rectangle/hexagon of node i
  – and so on.
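For the rectangular lattice, this ring distance can be sketched directly: on a square grid, the k-th outer rectangle around node i is exactly the set of nodes at Chebyshev distance k. The helper name `ring_distance` is chosen here for illustration; a hexagonal lattice would need a different formula.

```python
def ring_distance(i, j):
    """d(i, j) on a rectangular lattice: i, j are (row, col) coordinates.

    Returns 1 for the first outer rectangle, 2 for the second, and so on;
    this is the Chebyshev (max-coordinate) distance.
    """
    return max(abs(i[0] - j[0]), abs(i[1] - j[1]))

d1 = ring_distance((2, 2), (2, 3))  # immediate neighbour: first ring
d2 = ring_distance((2, 2), (0, 4))  # two rows and two columns away: second ring
```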

Slide 21

• Each neuron is a node containing a template against which input patterns are matched.
• All nodes are presented with the same input pattern in parallel, and all compute the distance between their template and the input in parallel.
• Only the node with the closest match between the input and its template produces an active output.
• Each node therefore acts like a separate decoder (or pattern detector, feature detector) for the same input, and the interpretation of the input derives from the presence or absence of an active response at each location (rather than from the magnitude of response or an input-output transformation, as in feedforward or feedback networks).

Slide 22: SOM Interpretation

• Each SOM neuron can be seen as representing a cluster containing all the input examples which are mapped to that neuron.
• For a given input, the output of the SOM is the neuron with the weight vector most similar (with respect to Euclidean distance) to that input.

Slide 23: Types of Mapping

• Familiarity: the net learns how similar a given new input is to the typical (average) pattern it has seen before.
• Principal components: the net finds principal components in the data.
• Clustering: the net finds the appropriate categories based on correlations in the data.
• Encoding: the output represents the input, using a smaller number of bits.
• Feature mapping: the net forms a topographic map of the input.

Slide 24: More About SOM Learning

• Upon repeated presentations of the training examples, the weight vectors of the neurons tend to follow the distribution of the examples.
• This results in a topological ordering of the neurons, where neurons adjacent to each other tend to have similar weight vectors.
• The input space of patterns is mapped onto a discrete output space of neurons.


Slide 25: SOM Learning Algorithm

1. Randomly initialise all weights.
2. Select an input vector x = [x1, x2, x3, …, xn] from the training set.
3. Compare x with the weights wj of each neuron j, using the squared distance

  d_j = Σ_i (w_ij − x_i)²

4. Determine the winner: the unit j with the minimum distance d_j.
5. Update the winner so that it becomes more like x, together with the winner's neighbours (units within the neighbourhood radius), according to

  w_ij(n+1) = w_ij(n) + η(n) [x_i − w_ij(n)]

6. Adjust the parameters: the learning rate and the 'neighbourhood function'.
7. Repeat from (2) until … ?

Note that the learning rate generally decreases with time: 0 < η(n) ≤ η(n−1) ≤ 1.
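Steps 2-5 of the algorithm above can be sketched as a single NumPy update function. This is an illustrative sketch, not lecture code; the square-grid layout, the function name `som_step`, and the parameter values are assumptions.

```python
import numpy as np

def som_step(W, grid, x, eta, radius):
    """One SOM update: find the winner, update it and its neighbours.

    W    : (M, N) weight matrix, one row per map neuron.
    grid : (M, 2) integer lattice coordinates of each neuron.
    """
    d = ((W - x) ** 2).sum(axis=1)       # d_j = sum_i (w_ij - x_i)^2
    win = int(np.argmin(d))              # winner: minimum distance
    # lattice distance to the winner (rings on the grid), not input-space distance
    ring = np.abs(grid - grid[win]).max(axis=1)
    nb = ring <= radius                  # winner plus neighbours within the radius
    W[nb] += eta * (x - W[nb])           # w(n+1) = w(n) + eta [x - w(n)]
    return win

# toy 3x3 map over 2-d inputs
rng = np.random.default_rng(1)
grid = np.array([(r, c) for r in range(3) for c in range(3)])
W = rng.random((9, 2))
before = W.copy()
x = np.array([0.9, 0.1])
win = som_step(W, grid, x, eta=0.5, radius=1)
```

Repeating this step over the training set, while shrinking `eta` and `radius`, gives the full algorithm.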

Slide 26: SOM Architecture

[Figure: a Kohonen layer of neurons above an input vector X = [x1, x2, …, xn] ∈ R^n; each neuron i carries a weight vector wi = [wi1, wi2, …, win] ∈ R^n, with the winning neuron highlighted.]

Slide 27: The Learning Process (1)

An informal description:

• Given: an input pattern x.
• Find: the neuron i which has the closest weight vector, by competition (wi^T x will be the highest).
• For each neuron j in the neighbourhood N(i) of the winning neuron i: update the weight vector of j.

Slide 28: The Learning Process (2)

• Neurons which are not in the neighbourhood are left unchanged.
• The SOM algorithm:
  – starts with a large neighbourhood size and gradually reduces it;
  – gradually reduces the learning rate η.

Slide 29: The Learning Process (3)

• There are basically three essential processes:
  – competition
  – cooperation
  – weight adaptation

Slide 30: The Learning Process (3): Competition

• Competitive process: find the best match of the input vector x with the weight vectors:

  i(x) = arg min_j ||x − w_j||,   j = 1, 2, …, ℓ

  where i(x) is the winning neuron and ℓ is the total number of neurons.
• The input space of patterns is mapped onto a discrete output space of neurons by a process of competition among the neurons of the network.


Slide 31: The Learning Process (3): Cooperation

• Cooperative process: the winning neuron locates the center of a topological neighbourhood of cooperating neurons.
• The topological neighbourhood depends on the lateral distance d_ji between the winner neuron i and neuron j.

Slide 32: Learning Process: Neighbourhood Function (4)

• Gaussian neighbourhood function:

  h_ji(d_ji) = exp( − d_ji² / (2σ²) )

[Figure: h_ji plotted against the lateral distance d_ji; it equals 1.0 at d_ji = 0 and decays toward 0 with increasing distance.]
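The Gaussian neighbourhood function above translates directly into code; this is a sketch of the formula, with the function name chosen here for illustration.

```python
import math

def neighbourhood(d_ji, sigma):
    """Gaussian neighbourhood h_ji = exp(-d_ji^2 / (2 sigma^2)).

    d_ji  : lateral (lattice) distance between neuron j and the winner i.
    sigma : width of the neighbourhood.
    """
    return math.exp(-d_ji ** 2 / (2 * sigma ** 2))

h_winner = neighbourhood(0, 2.0)   # the winner itself always gets weight 1.0
h_near = neighbourhood(1, 2.0)     # nearby neurons get a smaller update
h_far = neighbourhood(4, 2.0)      # distant neurons get almost none
```

A larger σ widens the neighbourhood, so early in training (large σ) even distant neurons learn, while late in training (small σ) only the winner's closest neighbours do.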

Slide 33: [Figure: neighbourhoods N13(1) and N13(2) on the lattice.]

Slide 34: Neighbourhood Function (4)

[Figure: two plots of the degree of neighbourhood versus distance from the winner (range −10 to 10); over time, the neighbourhood function narrows.]

Slide 35: Learning Process (5)

• The update is applied to all neurons inside the neighbourhood of the winning neuron i:

  Δw_j = η y_j x − g(y_j) w_j

  (a Hebbian term and a forgetting term, where g(y_j) is a scalar function of the response y_j).

• With g(y_j) = η y_j and y_j = h_j,i(x), this gives the update

  w_j(n+1) = w_j(n) + η(n) h_j,i(x)(n) [x − w_j(n)]

• Exponential decay update of the learning rate:

  η(n) = η₀ exp(−n / T₂)

Slide 36: Two Phases of Weight Adaptation

• Self-organising or ordering phase:
  – Topological ordering of the weight vectors.
  – May take 1000 or more iterations of the SOM algorithm.
• Important choice of parameter values:
  – η(n): η₀ = 0.1, T₂ = 1000; decrease gradually to η(n) ≈ 0.01.
  – h_ji(x)(n): σ₀ big enough, T₁ = 1000 / log σ₀.
  – Initially the neighbourhood of the winning neuron includes almost all neurons in the network; it then shrinks slowly with time.
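Under these ordering-phase constants (η₀ = 0.1, T₂ = 1000, T₁ = 1000 / log σ₀; the value σ₀ = 5.0 is an assumed "big enough" example, not from the slides), the exponential-decay schedules look like:

```python
import math

eta0, T2 = 0.1, 1000          # initial learning rate and its time constant
sigma0 = 5.0                  # assumed initial neighbourhood width
T1 = 1000 / math.log(sigma0)  # T1 = 1000 / log(sigma0)

def eta(n):                   # eta(n) = eta0 exp(-n / T2)
    return eta0 * math.exp(-n / T2)

def sigma(n):                 # sigma(n) = sigma0 exp(-n / T1)
    return sigma0 * math.exp(-n / T1)
```

The choice of T₁ is deliberate: with T₁ = 1000 / log σ₀, the width satisfies σ(1000) = σ₀ exp(−log σ₀) = 1, so after about 1000 ordering-phase iterations the neighbourhood has shrunk to roughly one lattice spacing, while η has decayed from 0.1 toward its small convergence-phase value.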


Slide 37: Two Phases of Weight Adaptation

• Convergence phase:
  – Fine-tunes the feature map.
  – Must run for at least 500 times the number of neurons in the network, i.e. thousands or tens of thousands of iterations.
• Choice of parameter values:
  – η(n) maintained on the order of 0.01.
  – h_ji(x)(n) contains only the nearest neighbours of the winning neuron; it eventually reduces to one or zero neighbouring neurons.

Slide 38: A Summary of SOM

• Initialization: choose random small values for the weight vectors such that wj(0) is different for all neurons j.
• Sampling: draw a sample example x from the input space.
• Similarity matching: find the best-matching (winning) neuron i(x) at step n:

  i(x) = arg min_j ||x(n) − w_j||,   j = 1, 2, …, ℓ

• Updating: adjust the synaptic weight vectors:

  w_j(n+1) = w_j(n) + η(n) h_j,i(x)(n) [x(n) − w_j(n)]

• Continuation: go to the Sampling step until no noticeable changes in the feature map are observed.

Slide 39: Example

An SOFM network with three inputs and two cluster units is to be trained using the four training vectors:

  [0.8 0.7 0.4], [0.6 0.9 0.9], [0.3 0.4 0.1], [0.1 0.1 0.2]

and initial weights

  W = | 0.5  0.4 |
      | 0.6  0.2 |
      | 0.8  0.5 |

where the first column (0.5, 0.6, 0.8) holds the weights to the first cluster unit. The initial radius is 0 and the learning rate is 0.5. Calculate the weight changes during the first cycle through the data, taking the training vectors in the given order.

Slide 40: Solution

The squared Euclidean distance of input vector 1 to cluster unit 1 is:

  d₁² = (0.5 − 0.8)² + (0.6 − 0.7)² + (0.8 − 0.4)² = 0.26

The squared Euclidean distance of input vector 1 to cluster unit 2 is:

  d₂² = (0.4 − 0.8)² + (0.2 − 0.7)² + (0.5 − 0.4)² = 0.42

Input vector 1 is closest to cluster unit 1, so update the weights to cluster unit 1 using w_ij(n+1) = w_ij(n) + 0.5 [x_i − w_ij(n)]:

  0.5 + 0.5 (0.8 − 0.5) = 0.65
  0.6 + 0.5 (0.7 − 0.6) = 0.65
  0.8 + 0.5 (0.4 − 0.8) = 0.60

giving the updated weight matrix

  W = | 0.65  0.4 |
      | 0.65  0.2 |
      | 0.60  0.5 |

Slide 41: Solution

The squared Euclidean distance of input vector 2 to cluster unit 1 is:

  d₁² = (0.65 − 0.6)² + (0.65 − 0.9)² + (0.60 − 0.9)² = 0.155

The squared Euclidean distance of input vector 2 to cluster unit 2 is:

  d₂² = (0.4 − 0.6)² + (0.2 − 0.9)² + (0.5 − 0.9)² = 0.69

Input vector 2 is closest to cluster unit 1, so update the weights to cluster unit 1 again:

  0.65 + 0.5 (0.6 − 0.65) = 0.625
  0.65 + 0.5 (0.9 − 0.65) = 0.775
  0.60 + 0.5 (0.9 − 0.60) = 0.750

giving

  W = | 0.625  0.4 |
      | 0.775  0.2 |
      | 0.750  0.5 |

Repeat the same update procedure for input vectors 3 and 4.
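The first two updates of this cycle can be checked numerically; the sketch below (not from the slides) reproduces the squared distances 0.26, 0.42, 0.155, and 0.69, and the updated weights of cluster unit 1.

```python
import numpy as np

eta = 0.5
x1, x2 = np.array([0.8, 0.7, 0.4]), np.array([0.6, 0.9, 0.9])
w1, w2 = np.array([0.5, 0.6, 0.8]), np.array([0.4, 0.2, 0.5])  # initial columns of W

# input vector 1: squared distances 0.26 vs 0.42, so unit 1 wins
d11, d12 = ((x1 - w1) ** 2).sum(), ((x1 - w2) ** 2).sum()
w1 = w1 + eta * (x1 - w1)          # -> [0.65, 0.65, 0.60]

# input vector 2: squared distances 0.155 vs 0.69, so unit 1 wins again
d21, d22 = ((x2 - w1) ** 2).sum(), ((x2 - w2) ** 2).sum()
w1 = w1 + eta * (x2 - w1)          # -> [0.625, 0.775, 0.750]
```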

Slide 42: Illustration of Learning for Kohonen Maps

Inputs: coordinates (x, y) of points drawn from a square. Display neuron j at the position (xj, yj) where its response sj is maximum.

[Figure: random initial positions of the neurons in the (x, y) plane.]


Slides 43-46: Self-Organizing Feature Map Example

[Figures: a sequence of four snapshots of the feature map over seven input points, each marked X.]

Slide 47: Two-Phase Learning Approach

• Self-organizing or ordering phase: the learning rate and the spread of the Gaussian neighborhood function are adapted during the execution of SOM, using for instance the exponential decay update rule.
• Convergence phase: the learning rate and Gaussian spread have small fixed values during the execution of SOM.

Slide 48: Convergence Phase

• Convergence phase:
  – Fine-tunes the weight vectors.
  – Must run for at least 500 times the number of neurons in the network, i.e. thousands or tens of thousands of iterations.
• Choice of parameter values:
  – η(n) maintained on the order of 0.01.
  – The neighborhood function is chosen such that the neighborhood of the winning neuron contains only the nearest neighbors; it eventually reduces to one or zero neighboring neurons.


Slide 49: [figure only]

Slide 50: Another Self-Organizing Map (SOM) Example

• From Fausett (1994)
• n = 4, m = 2
  – More typical of SOM applications
  – Smaller number of units in the output than in the input: dimensionality reduction
• Training samples:
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)

[Figure: network architecture with 4 input units fully connected to output units 1 and 2.]

What should we expect as outputs?

Slide 51: What Are the Euclidean Distances Between the Data Samples?

• Training samples:
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)

      i1   i2   i3   i4
  i1   0
  i2        0
  i3             0
  i4                  0

Slide 52: Euclidean Distances Between Data Samples

• Training samples:
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)

Squared Euclidean distances:

      i1   i2   i3   i4
  i1   0
  i2   3    0
  i3   1    2    0
  i4   4    1    3    0

[Figure: 4 input units connected to output units 1 and 2.]

What might we expect from the SOM?
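The table can be reproduced with a small NumPy broadcast; as on the slide, the entries are squared Euclidean distances between the training samples.

```python
import numpy as np

samples = np.array([[1, 1, 0, 0],   # i1
                    [0, 0, 0, 1],   # i2
                    [1, 0, 0, 0],   # i3
                    [0, 0, 1, 1]])  # i4

# squared Euclidean distance between every pair of samples:
# broadcasting (4,1,4) - (1,4,4) gives all pairwise differences at once
D2 = ((samples[:, None, :] - samples[None, :, :]) ** 2).sum(axis=2)
```

The matrix is symmetric with a zero diagonal; the closest pairs are (i1, i3) and (i2, i4), which is exactly the two-cluster structure the SOM is expected to find.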

Slide 53: Example Details

• Training samples:
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)
• With only 2 outputs, the neighborhood radius = 0:
  – only the weights associated with the winning output unit (cluster) are updated at each iteration.
• Learning rate: η(t) = 0.6


Slide 55: Second Weight Update

• Training sample: i2 = (0, 0, 0, 1)
  – Unit 1 weights: d² = (.2−0)² + (.6−0)² + (.5−0)² + (.9−1)² = .66
  – Unit 2 weights: d² = (.92−0)² + (.76−0)² + (.28−0)² + (.12−1)² = 2.28
  – Unit 1 wins, so the weights on the winning unit are updated:

    new unit 1 weights = [.2 .6 .5 .9] + 0.6 ([0 0 0 1] − [.2 .6 .5 .9]) = [.08 .24 .20 .96]

  – Giving an updated weight matrix:

    Unit 1: [.08 .24 .20 .96]
    Unit 2: [.92 .76 .28 .12]

Slide 56: Third Weight Update

• Training sample: i3 = (1, 0, 0, 0)
  – Unit 1 weights: d² = (.08−1)² + (.24−0)² + (.20−0)² + (.96−0)² = 1.87
  – Unit 2 weights: d² = (.92−1)² + (.76−0)² + (.28−0)² + (.12−0)² = 0.68
  – Unit 2 wins, so the weights on the winning unit are updated:

    new unit 2 weights = [.92 .76 .28 .12] + 0.6 ([1 0 0 0] − [.92 .76 .28 .12]) = [.97 .30 .11 .05]

  – Giving an updated weight matrix:

    Unit 1: [.08 .24 .20 .96]
    Unit 2: [.97 .30 .11 .05]

Slide 57: Fourth Weight Update

• Training sample: i4 = (0, 0, 1, 1)
  – Unit 1 weights: d² = (.08−0)² + (.24−0)² + (.20−1)² + (.96−1)² = .71
  – Unit 2 weights: d² = (.97−0)² + (.30−0)² + (.11−1)² + (.05−1)² = 2.74
  – Unit 1 wins, so the weights on the winning unit are updated:

    new unit 1 weights = [.08 .24 .20 .96] + 0.6 ([0 0 1 1] − [.08 .24 .20 .96]) = [.03 .10 .68 .98]

  – Giving an updated weight matrix:

    Unit 1: [.03 .10 .68 .98]
    Unit 2: [.97 .30 .11 .05]
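A quick NumPy check (not from the slides) reproduces weight updates two through four, starting from the weights left after the first update; it keeps full precision where the slides round to two decimals.

```python
import numpy as np

alpha = 0.6                              # learning rate used in the example
W = np.array([[0.20, 0.60, 0.50, 0.90],  # unit 1 (after the first update)
              [0.92, 0.76, 0.28, 0.12]]) # unit 2

# neighborhood radius 0: only the winning (minimum-distance) unit is updated
for x in [np.array([0., 0., 0., 1.]),    # i2
          np.array([1., 0., 0., 0.]),    # i3
          np.array([0., 0., 1., 1.])]:   # i4
    win = int(np.argmin(((W - x) ** 2).sum(axis=1)))
    W[win] += alpha * (x - W[win])
```

The final unrounded weights are unit 1 = [.032 .096 .68 .984] and unit 2 = [.968 .304 .112 .048], matching the rounded [.03 .10 .68 .98] and [.97 .30 .11 .05] on the slides.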

Slide 58: Applying the SOM Algorithm

  time (t) | data sample | 'winning' output unit | D(t) | η(t)
     1     |     i1      |        Unit 2         |  0   | 0.6
     2     |     i2      |        Unit 1         |  0   | 0.6
     3     |     i3      |        Unit 2         |  0   | 0.6
     4     |     i4      |        Unit 1         |  0   | 0.6

After many iterations (epochs) through the data set, the weights converge to:

  Unit 1: (0, 0, .5, 1.0)
  Unit 2: (1.0, .5, 0, 0)

Did we get the clustering that we expected?

Slide 59: What Clusters Do the Data Samples Fall Into?

Weights:

  Unit 1: (0, 0, .5, 1.0)
  Unit 2: (1.0, .5, 0, 0)

Training samples:

  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)

Slide 60: Solution

d² = (Euclidean distance)² = Σ_{k=1}^{n} (w_k,j(t) − i_l,k)²

• Sample i1:
  – Distance from unit 1 weights: (1−0)² + (1−0)² + (0−.5)² + (0−1.0)² = 1 + 1 + .25 + 1 = 3.25
  – Distance from unit 2 weights: (1−1)² + (1−.5)² + (0−0)² + (0−0)² = 0 + .25 + 0 + 0 = .25 (winner)
• Sample i2:
  – Distance from unit 1 weights: (0−0)² + (0−0)² + (0−.5)² + (1−1.0)² = 0 + 0 + .25 + 0 = .25 (winner)
  – Distance from unit 2 weights: (0−1)² + (0−.5)² + (0−0)² + (1−0)² = 1 + .25 + 0 + 1 = 2.25

Weights:

  Unit 1: (0, 0, .5, 1.0)
  Unit 2: (1.0, .5, 0, 0)


Slide 61: Solution

d² = (Euclidean distance)² = Σ_{k=1}^{n} (w_k,j(t) − i_l,k)²

• Sample i3:
  – Distance from unit 1 weights: (1−0)² + (0−0)² + (0−.5)² + (0−1.0)² = 1 + 0 + .25 + 1 = 2.25
  – Distance from unit 2 weights: (1−1)² + (0−.5)² + (0−0)² + (0−0)² = 0 + .25 + 0 + 0 = .25 (winner)
• Sample i4:
  – Distance from unit 1 weights: (0−0)² + (0−0)² + (1−.5)² + (1−1.0)² = 0 + 0 + .25 + 0 = .25 (winner)
  – Distance from unit 2 weights: (0−1)² + (0−.5)² + (1−0)² + (1−0)² = 1 + .25 + 1 + 1 = 3.25

Weights:

  Unit 1: (0, 0, .5, 1.0)
  Unit 2: (1.0, .5, 0, 0)

Slide 62: Word Categories

[Figure: a word-category map.]

Slide 63: Summary

• Unsupervised learning is very common.
• Unsupervised learning requires redundancy in the stimuli.
• Self-organization is a basic property of the brain's computational structure.
• SOMs are based on:
  – competition (winner-take-all units)
  – cooperation
  – synaptic adaptation
• SOMs conserve topological relationships between the stimuli.
• Artificial SOMs have many applications in computational neuroscience.

