Institute for Advanced Studies in Basic Sciences – Zanjan

Page 1: Institute for Advanced Studies  in Basic Sciences – Zanjan

Institute for Advanced Studies in Basic Sciences – Zanjan

Mahdi Vasighi

Supervised Kohonen Artificial Neural Networks

Page 2: Institute for Advanced Studies  in Basic Sciences – Zanjan

Data Analysis:

Linear transformations and data reduction techniques like Principal Component Analysis (PCA)

Advantages:
1. Projection from a high-dimensional onto a low-dimensional coordinate system.
2. Does not require a high level of modelling expertise.

Disadvantages:
1. Assumes the topology of the input data can be reduced in a linear fashion.
2. Outliers disturb the quality of the projection.
3. Visualization power deteriorates considerably if the number of relevant dimensions in the multivariate space remains high after a PCA analysis.

Introduction
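The linear projection that PCA performs can be sketched in a few lines of NumPy. This is an illustrative SVD-based implementation, not tied to any particular chemometrics package; the function name and data are made up for the example.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project the rows of X onto the first principal axes."""
    Xc = X - X.mean(axis=0)                        # center each variable
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                # scores in the low-dim space

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                      # 100 objects, 8 variables
scores = pca_project(X, n_components=2)
print(scores.shape)                                # (100, 2)
```

The scores are ordered by explained variance, which is why the first two columns carry the dominant linear structure of the data.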

Page 3: Institute for Advanced Studies  in Basic Sciences – Zanjan

An alternative is nonlinear mapping:

This explicitly aims to map objects in a low-dimensional space (usually two-dimensional) in such a way that the distances between objects are preserved in the mapping.

The Kohonen map (self-organizing map, SOM) incorporates, in an unsupervised way, the topology present in the data.

"Unsupervised" means that one deals with a set of experimental data that have no specific answers (or supplemental information) attached.

Page 4: Institute for Advanced Studies  in Basic Sciences – Zanjan

A supervised modelling technique is able to capture the relationship between the input data (measurements, observations) and output data (properties of interest).

MLR and PLS for regression problems; LDA for classification problems.

These modelling techniques fail if:

there is nonlinearity or a topologically incoherent relationship between input and output, or a considerable number of outliers.

ANNs and SVMs can tackle such nonlinear relationships in a convincing way. However, visualization and interpretation of these models is difficult, because they are more or less 'black-box' techniques.

[Figure: model mapping X (input) to Y (output).]

Page 5: Institute for Advanced Studies  in Basic Sciences – Zanjan

Kohonen Artificial Neural Networks

The Kohonen network is probably the closest of all artificial neural network architectures and learning schemes to the biological neural network.

As a rule, the Kohonen type of net is based on a single layer of neurons arranged in a two-dimensional plane having a well-defined topology.

A defined topology means that each neuron has a defined number of neurons as nearest neighbors, second-nearest neighbors, etc.

Page 6: Institute for Advanced Studies  in Basic Sciences – Zanjan

[Figure: a Kohonen layer; each neuron holds a weight vector W that is compared with the input vector to produce an output.]

Similarity is the basis of selection of the winner neuron.

In other words, there is a competition between neurons for winning (competitive learning).

Page 7: Institute for Advanced Studies  in Basic Sciences – Zanjan

The Kohonen learning concept tries to map the input so that similar signals excite neurons that are very close together.

Typical winner-selection criteria:

out_c = max_j(out_j) = max_j Σ_{i=1..m} (w_ji · x_si)   (maximum net output)

c : min_j Σ_{i=1..m} (x_si − w_ji)²   (minimum Euclidean distance)
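Both selection criteria can be sketched directly in NumPy. The weight matrix below holds one weight vector per row; the values are illustrative, not from any real trained network.

```python
import numpy as np

def winner_max_output(W, x):
    """Winner by maximum net output: out_j = sum_i w_ji * x_i."""
    out = W @ x
    return int(np.argmax(out))

def winner_min_distance(W, x):
    """Winner by minimum squared Euclidean distance sum_i (x_i - w_ji)^2."""
    d2 = ((W - x) ** 2).sum(axis=1)
    return int(np.argmin(d2))

W = np.array([[0.2, 0.4, 0.1],      # one weight vector per neuron
              [0.7, 0.2, 0.9],
              [1.0, 0.0, 0.1]])
x = np.array([1.0, 0.2, 0.6])       # input vector
print(winner_max_output(W, x), winner_min_distance(W, x))
```

For this input both criteria agree on neuron 1, but in general the maximum-output and minimum-distance rules can pick different winners unless the weights and inputs are normalized.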

Page 8: Institute for Advanced Studies  in Basic Sciences – Zanjan

Top Map

After the training process is accomplished, the complete set of training vectors is run once more through the KANN. In this last run, the labeling of the neurons excited by each input vector is recorded in a table called the top map.

[Figure: objects a–e and X_s presented to the trained KANN label their winning neurons, forming the top map.]
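The labeling pass can be sketched as follows; a minimal toy map with illustrative weights, using the minimum-distance criterion to find each object's winning neuron.

```python
import numpy as np

def top_map(W, X, labels, rows, cols):
    """Run labeled training vectors through a trained map once more and
    record, per neuron cell (row, col), which objects excite it."""
    tmap = {}
    for x, lab in zip(X, labels):
        j = int(np.argmin(((W - x) ** 2).sum(axis=1)))   # winning neuron index
        tmap.setdefault(divmod(j, cols), []).append(lab)
    return tmap

# toy 2x2 map with 2 weights per neuron (illustrative values)
W = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
X = np.array([[0.1, 0.9], [0.9, 0.1]])
tmap = top_map(W, X, ["a", "b"], rows=2, cols=2)
print(tmap)                          # {(0, 1): ['a'], (1, 0): ['b']}
```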

Page 9: Institute for Advanced Studies  in Basic Sciences – Zanjan

Weight Map

The number of weights in each neuron is equal to the dimension m of the input vector. Hence, in each level of weights only data of one specific variable are handled.

[Figure: one weight level of a trained 5×5 KANN displayed as a map of values; labeling neurons with low (L) and high (H) values of that variable yields coherent regions in the top map.]
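If the trained weights are stored as a (rows, cols, m) array, a weight map is simply one slice of that array. The array below is filled with random illustrative values.

```python
import numpy as np

# Weights of a trained 5x5 map with m = 3 weight levels, stored as
# an array of shape (rows, cols, m); values are illustrative only.
rng = np.random.default_rng(1)
W = rng.random((5, 5, 3))

weight_map_var0 = W[:, :, 0]   # one weight level = map of variable 0
print(weight_map_var0.shape)   # (5, 5)
```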

Page 10: Institute for Advanced Studies  in Basic Sciences – Zanjan

Applications of Kohonen ANNs

- Sampling from large amounts of data
- Disease diagnosis
- QSAR & QSPR
- Analysis of genomics data
- Model building with corrupt objects (missing values)

Typical similarity criterion:

c : min_j Σ_{i=1..m} (x_si − w_ji)²

Specific similarity criterion for corrupt objects: the sum runs only over the non-missing variables and is normalized by their number, so objects with missing values can still be mapped.
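A sketch of such a missing-value-aware criterion, with NaN marking missing entries; the normalization by the number of available variables is the interpretation assumed here, and the weights are illustrative.

```python
import numpy as np

def winner_with_missing(W, x):
    """Winner for a corrupt object: squared distance computed only over
    the non-missing (non-NaN) variables, divided by their number."""
    ok = ~np.isnan(x)                           # mask of available variables
    d2 = ((W[:, ok] - x[ok]) ** 2).sum(axis=1) / ok.sum()
    return int(np.argmin(d2))

W = np.array([[0.2, 0.4, 0.1],
              [0.7, 0.2, 0.9],
              [1.0, 0.0, 0.1]])
x = np.array([1.0, np.nan, 0.6])                # second variable missing
print(winner_with_missing(W, x))                # 1
```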

Page 11: Institute for Advanced Studies  in Basic Sciences – Zanjan

Counter-Propagation Network (CPN)

The CPN has the same structure as the Kohonen network, with an additional output layer having the same layout as the input (Kohonen) layer.

Page 12: Institute for Advanced Studies  in Basic Sciences – Zanjan

[Figure: CPN architecture: input (Kohonen) layer and output layer; the winner is found in the input similarity map from the presented input and target.]

Based on the location of the winning unit in the input map (i.e., the unit which is most similar or closest to the presented object X), the input map and the output map are updated simultaneously at the same spatial locations.

Once the CPN network is trained, it can be used for prediction: an unknown input object is presented to the network, and the position of the winning unit in the input map is used to look up the predicted value in the corresponding unit of the output map.
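This look-up step can be sketched as follows; the two weight matrices stand for a trained CPN (same number of units in both layers), with illustrative values.

```python
import numpy as np

def cpn_predict(W_in, W_out, x):
    """CPN prediction: find the winner in the input (Kohonen) layer,
    then look up the output-layer weights at the same position."""
    j = int(np.argmin(((W_in - x) ** 2).sum(axis=1)))
    return W_out[j]

# toy trained CPN: 4 units, 3 input weights and 2 output weights each
W_in = np.array([[0.2, 0.4, 0.1],
                 [0.7, 0.2, 0.9],
                 [1.0, 0.0, 0.1],
                 [0.1, 0.2, 0.3]])
W_out = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 0.0],
                  [0.0, 1.0]])
x = np.array([1.0, 0.2, 0.6])
pred = cpn_predict(W_in, W_out, x)
print(pred)                          # output weights of the winning unit
```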

Page 13: Institute for Advanced Studies  in Basic Sciences – Zanjan

Applications of CP-ANN

Building predictive calibration and classification models

Spectrum → Class membership
Molecular descriptor → Medicinal activity
Process descriptor → Process condition
Reaction descriptor → Reaction mechanism

Page 14: Institute for Advanced Studies  in Basic Sciences – Zanjan

Detection of faulty condition in a process

Page 15: Institute for Advanced Studies  in Basic Sciences – Zanjan

The CPN is not able to model the inverse relationship between the output and the input.

The output properties are not involved in the formation of the directing Kohonen input map. Hence, the CPN model cannot be considered as being a true supervised method.

One can, however, easily look inside the driving input layer and its relationship with the output.

In a CPN network the flow of information is directed from the input layer units towards the output layer. For this reason, we prefer to denote the CPN as a pseudo-supervised strategy.

Page 16: Institute for Advanced Studies  in Basic Sciences – Zanjan

Supervised Kohonen Network (SKN)

In a SKN network, the input layer and the output layer are 'glued' together, thereby forming a combined input-output layer.

Because in a SKN information present in the objects X and Y is used explicitly during the update of the units in the map, the topological formation of the concatenated map is driven by X and Y in a truly supervised way.

Page 17: Institute for Advanced Studies  in Basic Sciences – Zanjan

After training, the input and output maps are decoupled. Then, for a new input object its class membership is estimated according to the procedure outlined for the CPN network.

The variables of the objects X and Y in the training set must be scaled properly, but it is not trivial how to deal with the relative weight of the number of variables in X and the number of variables in Y during the training of a SKN network.

The user must determine beforehand the proper balance between the influence of the input and output objects: in general, correct scaling of the input and output variables is of utmost importance.
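The 'gluing' and the scaling problem can be sketched in one line of NumPy: the combined layer trains on the concatenated blocks, and a user-chosen factor (here called `balance`, a name invented for this sketch) sets the relative influence of the output part.

```python
import numpy as np

def skn_concat(X, Y, balance=1.0):
    """Glue the input block and the (scaled) output block into one
    combined training matrix, as used by a SKN layer."""
    return np.hstack([X, balance * Y])

X = np.random.default_rng(2).random((400, 8))   # 400 objects, 8 input variables
Y = np.ones((400, 1))                           # a single output variable
XY = skn_concat(X, Y, balance=8.0)              # upweight the lone output column
print(XY.shape)                                 # (400, 9)
```

With `balance=1.0` the single Y column would be swamped by the eight X columns during distance computations, which is exactly the imbalance discussed above.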

Page 18: Institute for Advanced Studies  in Basic Sciences – Zanjan

[Figure: XY-fused architecture: the input similarity map and the output similarity map are combined into a fused similarity map that determines the winner.]

By using a 'fused' similarity measure based on a weighted combination of the similarities between an object X and all units in the input layer, and the similarities between the corresponding target and the units in the output layer, the common winning unit for both maps is determined.

The relative weight α(t) decreases linearly over the epochs.

XY-Fused Networks (XYF)
Chemom. Intell. Lab. Syst. 83 (2006) 99–113
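A minimal sketch of the fused winner selection, using squared-distance similarity maps; the exact normalization of the two maps in the cited paper may differ, so treat this as an illustration of the weighting idea only.

```python
import numpy as np

def xyf_winner(W_x, W_y, x, y, alpha):
    """XY-fused winner: weighted combination of the input-layer and
    output-layer (squared-distance) similarity maps; alpha decreases
    linearly over the epochs during training."""
    s_x = ((W_x - x) ** 2).sum(axis=1)      # input similarity map
    s_y = ((W_y - y) ** 2).sum(axis=1)      # output similarity map
    fused = alpha * s_x + (1.0 - alpha) * s_y
    return int(np.argmin(fused))

# toy layers: 2 units, 2 input weights and 1 output weight each
W_x = np.array([[0.2, 0.4], [0.9, 0.9]])
W_y = np.array([[1.0], [0.0]])
w = xyf_winner(W_x, W_y, np.array([0.8, 0.8]), np.array([1.0]), alpha=0.5)
print(w)
```

Early in training (α near 1) the input map dominates; later (α small) the class information increasingly steers the winner, which is what makes the map formation truly supervised.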

Page 19: Institute for Advanced Studies  in Basic Sciences – Zanjan

Simulated data sets

The first data set, referred to as Odd–Even, contains 400 rows (input objects) with 8 variables; each integer-valued variable was varied randomly between 1 and 100.

Odd–Even data set

Data matrix (400 × 8); class vector (400 × 1).

If, per row, the total number of even values was greater than the total number of odd values, the class membership for that row was assigned to be 1; otherwise the class membership was set to −1.
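The Odd–Even set as described can be generated in a few lines; the random seed is arbitrary, so the exact values differ from the original study.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.integers(1, 101, size=(400, 8))   # 400 objects, 8 integers in 1..100
n_even = (X % 2 == 0).sum(axis=1)         # even values per row
y = np.where(n_even > 8 - n_even, 1, -1)  # more evens than odds -> class 1
print(X.shape, y.shape)                   # (400, 8) (400,)
```

Note the class depends only on the parity pattern, not on the magnitudes, which is why a purely topology-driven input map cannot recover it.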

Page 20: Institute for Advanced Studies  in Basic Sciences – Zanjan

Output map by CPNN for the Odd–Even data

[Figure: CPNN input and output layers for the Odd–Even data.]

Scattered pattern of the output map: there is no real relationship between the multivariate topological structure present in the input space and the associated class membership of the objects. The CPNN does not take the class information into account during the formation of the input and output maps.

Page 21: Institute for Advanced Studies  in Basic Sciences – Zanjan

Output map by SKN for the Odd–Even data

[Figure: SKN with the input and class blocks concatenated into one Kohonen layer.]

Scattered pattern of the output map: the imbalance between the number of input (8) and output (1) variables means the class information has hardly any influence during the formation of the map.

Page 22: Institute for Advanced Studies  in Basic Sciences – Zanjan

[Figure: XY-Fused network for the Odd–Even data: input Kohonen layer and output layer linked through the input, output and fused similarity maps.]

Output map by XYF for the Odd–Even data

Nice coherent output maps, indicating a certain 'hidden' relationship present between the input and output space.

Page 23: Institute for Advanced Studies  in Basic Sciences – Zanjan

[Figure: three clouds of data points (sets 1–3) in X, Y, Z coordinates, one per class.]

Data matrix (450 × 3); class membership encoded as (1 0 0), (0 1 0) and (0 0 1) for classes 1, 2 and 3.

Three normally distributed clouds of data points in three dimensions: the first 150 objects belong to a multivariate normal distribution around the origin (class 3), whereas the other two classes 1 and 2, each also consisting of 150 objects, are normally distributed around the centroids (5, 3, 4) and (5.5, 3.5, 4.5).

Overlap data set

We now compare the quality of the 8×8 input and output maps for the CPN and XYF networks on the Overlap data set.
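The Overlap set as described can be generated like this; the seed is arbitrary, so the point cloud is a reproduction of the recipe, not of the original data.

```python
import numpy as np

rng = np.random.default_rng(4)
c3 = rng.normal(loc=(0.0, 0.0, 0.0), size=(150, 3))   # class 3, around origin
c1 = rng.normal(loc=(5.0, 3.0, 4.0), size=(150, 3))   # class 1
c2 = rng.normal(loc=(5.5, 3.5, 4.5), size=(150, 3))   # class 2, overlapping c1
X = np.vstack([c3, c1, c2])                           # data matrix (450 x 3)
Y = np.repeat(np.eye(3)[[2, 0, 1]], 150, axis=0)      # one-hot class labels
print(X.shape, Y.shape)                               # (450, 3) (450, 3)
```

Classes 1 and 2 sit only half a unit apart in each coordinate, so their clouds overlap heavily, which is the point of the test.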

Page 24: Institute for Advanced Studies  in Basic Sciences – Zanjan

[Figure: CPNN X map, Y map and class map for the Overlap data set.]

Page 25: Institute for Advanced Studies  in Basic Sciences – Zanjan

XY-Fused Network for the Overlap data set

[Figure: X map, Y map and class map.]

Page 26: Institute for Advanced Studies  in Basic Sciences – Zanjan

References

1. Neural Networks for Chemists: An Introduction. VCH, Weinheim.
2. Chemom. Intell. Lab. Syst. 83 (2006) 99–113.
3. Chemom. Intell. Lab. Syst. 90 (2008) 84–91.
4. Chemom. Intell. Lab. Syst. 38 (1997) 1–23.
5. Current Computer-Aided Drug Design 1 (2005) 73–78.

Page 27: Institute for Advanced Studies  in Basic Sciences – Zanjan

Thanks

Page 28: Institute for Advanced Studies  in Basic Sciences – Zanjan

A worked example: a 4×4 Kohonen network with 3 weights per neuron (4×4×3). The weight vectors (w1, w2, w3) of the 16 neurons are:

(0.2 0.4 0.1) (0.4 0.5 0.5) (0.1 0.3 0.6) (0.6 0.8 0.0)
(0.7 0.2 0.9) (0.2 0.4 0.3) (0.3 0.1 0.8) (0.9 0.2 0.4)
(0.5 0.1 0.5) (0.0 0.6 0.3) (0.7 0.0 0.1) (0.2 0.9 0.1)
(1.0 0.0 0.1) (0.1 0.2 0.3) (0.8 0.7 0.4) (0.7 0.2 0.7)

Input vector: (1.0 0.2 0.6)

Output of each neuron, out_j = Σ_{i=1..m} w_ji · x_si:

0.34 0.80 0.52 0.76
1.28 0.46 0.80 1.18
0.82 0.30 0.76 0.44
1.06 0.32 1.18 1.16

Winner: out_c = max_j(out_j) = 1.28 (row 2, column 1).
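The output map of this example can be checked mechanically; the weights and input are exactly those listed on the slide.

```python
import numpy as np

# Weight vectors of the 4x4x3 example network, listed row by row.
W = np.array([
    [0.2, 0.4, 0.1], [0.4, 0.5, 0.5], [0.1, 0.3, 0.6], [0.6, 0.8, 0.0],
    [0.7, 0.2, 0.9], [0.2, 0.4, 0.3], [0.3, 0.1, 0.8], [0.9, 0.2, 0.4],
    [0.5, 0.1, 0.5], [0.0, 0.6, 0.3], [0.7, 0.0, 0.1], [0.2, 0.9, 0.1],
    [1.0, 0.0, 0.1], [0.1, 0.2, 0.3], [0.8, 0.7, 0.4], [0.7, 0.2, 0.7],
])
x = np.array([1.0, 0.2, 0.6])
out = (W @ x).reshape(4, 4)        # output map, out_j = sum_i w_ji * x_i
print(np.round(out, 2))            # winner is the maximum, 1.28, at (row 2, col 1)
```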

Page 29: Institute for Advanced Studies  in Basic Sciences – Zanjan

The same 4×4×3 network and input vector (1.0 0.2 0.6) illustrate the weight update. The correction term (x_si − w_ji_old) is computed for every neuron; for example, for the first neuron (0.2 0.4 0.1) the correction is (0.8 −0.2 0.5).

Update rule:

w_ji_new = w_ji_old + a(t) · f(d_cj) · (x_si − w_ji_old)

where a(t) decreases linearly with the epoch t from a_max = 0.9 to a_min = 0.1, e.g. a(t) = (a_max − a_min)(1 − t/t_max) + a_min, so that a ≈ a_max = 0.9 in the first epoch (t = 1); f(d_cj) is a linear (triangular) neighbor function of the topological distance d between neuron j and the winner c, here taking the values 1, 0.8, 0.6, 0.4, … (each multiplied by a(t) = 0.9) for increasing d.
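One update step can be sketched as follows. For simplicity the topological distance is computed on a 1-D index (a real Kohonen map would use the 2-D grid distance); the weights and input are those of the winner-selection example, truncated to three neurons.

```python
import numpy as np

def linear_neigh(d):
    """Linear (triangular) neighbor function: 1, 0.8, 0.6, ... with distance."""
    return np.clip(1.0 - 0.2 * d, 0.0, None)

def kohonen_update(W, x, c, a_t):
    """One Kohonen update step:
    w_ji_new = w_ji_old + a(t) * f(d_cj) * (x_si - w_ji_old)."""
    d = np.abs(np.arange(len(W)) - c)      # topological distance (1-D sketch)
    f = linear_neigh(d)[:, None]           # neighbor factor per neuron
    return W + a_t * f * (x - W)

W = np.array([[0.2, 0.4, 0.1],
              [0.7, 0.2, 0.9],             # winner from the previous example
              [1.0, 0.0, 0.1]])
x = np.array([1.0, 0.2, 0.6])
W_new = kohonen_update(W, x, c=1, a_t=0.9)
print(np.round(W_new[1], 2))               # winner moves toward x
```

The winner's weights move 90% of the way toward the input (factor a(t)·f = 0.9·1), its neighbors less so, which is how the map's topology forms.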
