Biological inspiration
Animals are able to react adaptively to changes in their external and internal environment, and they use their nervous system to perform these behaviours.
An appropriate model/simulation of the nervous system should be able to produce similar responses and behaviours in artificial systems.
The nervous system is built from relatively simple units, the neurons, so reproducing their behaviour and functionality is a natural starting point.
Biological inspiration

[Figure: a biological neuron — dendrites, soma (cell body), and axon; synapses connect the axon of one neuron to the dendrites of the next]
Information transmission happens at the synapses.
Artificial neurons
Neurons work by processing information: they receive and transmit information in the form of spikes.
The McCulloch–Pitts model
[Figure: inputs $x_1, x_2, \dots, x_n$ enter the neuron through weights $w_1, w_2, \dots, w_n$, producing output $y$]

$$z = \sum_{i=1}^{n} w_i x_i, \qquad y = H(z)$$

where $H$ is a threshold (step) function.
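The thresholded weighted sum above can be sketched in a few lines of Python; the function name, the weights, and the threshold value 0.5 are illustrative choices, not part of the original model description.

```python
# Minimal sketch of a McCulloch-Pitts unit (names and values are illustrative).
def mcculloch_pitts(x, w, theta=0.5):
    """Weighted sum of the inputs followed by a hard threshold H."""
    z = sum(wi * xi for wi, xi in zip(w, x))  # z = sum_i w_i * x_i
    return 1 if z >= theta else 0             # y = H(z)

# A unit with weights (0.6, 0.6) and threshold 0.5 realizes logical OR:
print(mcculloch_pitts([1, 0], [0.6, 0.6]))  # 1
print(mcculloch_pitts([0, 0], [0.6, 0.6]))  # 0
```

With a suitable choice of weights and threshold, such units realize simple logic gates, which is why networks of them can compute more complex functions.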
Artificial neural networks
[Figure: a network of interconnected artificial neurons transforming inputs into an output]
An artificial neural network is composed of many artificial neurons that are linked together according to a specific network architecture. The objective of the neural network is to transform the inputs into meaningful outputs.
Learning in biological systems
Learning = learning by adaptation
The young animal learns that the green fruits are sour, while the yellowish/reddish ones are sweet. The learning happens by adapting the fruit picking behavior.
At the neural level, learning happens by changing the synaptic strengths, eliminating some synapses, and building new ones.
Neural network mathematics
[Figure: a network with four first-layer neurons, three second-layer neurons, and one output neuron]

First layer:

$$y_{1i} = f(x, w_{1i}), \qquad i = 1, \dots, 4$$

Second layer, with $y_1 = (y_{11}, y_{12}, y_{13}, y_{14})^T$:

$$y_{2i} = f(y_1, w_{2i}), \qquad i = 1, 2, 3$$

Output, with $y_2 = (y_{21}, y_{22}, y_{23})^T$:

$$y_{out} = f(y_2, w_{31})$$
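The layered composition above can be sketched as a forward pass in Python. The choice of $f$ as a sigmoid of the weighted sum and all weight values are assumptions for illustration; the slide only specifies the layer structure.

```python
import math

def f(x, w):
    """One neuron: sigmoid of the weighted sum (an assumed choice of f)."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, W2, w3):
    # First layer: y1_i = f(x, w1_i), i = 1..4
    y1 = [f(x, w) for w in W1]
    # Second layer: y2_i = f(y1, w2_i), i = 1..3
    y2 = [f(y1, w) for w in W2]
    # Output: y_out = f(y2, w31)
    return f(y2, w3)

# Example with 2 inputs, 4 and 3 hidden units (weights are illustrative):
W1 = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.4], [0.5, 0.1]]
W2 = [[0.1] * 4, [0.2] * 4, [0.3] * 4]
w3 = [0.4, 0.5, 0.6]
print(forward([1.0, 0.0], W1, W2, w3))
```

The whole network is thus a nested function of the input and of all the weights, which is what the notation $F(x, W)$ in the next section refers to.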
Neural network approximation
Task specification:
Data: a set of value pairs $(x_t, y_t)$, where $y_t = g(x_t) + z_t$ and $z_t$ is random measurement noise.
Objective: find a neural network representing the input/output transformation (a function) $F(x, W)$ such that $F(x, W)$ approximates $g(x)$ for every $x$.
Learning with MLP neural networks
MLP neural network with $p$ layers:

[Figure: input $x$ passes through layers $1, 2, \dots, p-1, p$ to produce $y_{out}$]

Data: $(x_1, y_1), (x_2, y_2), \dots, (x_N, y_N)$

Error: $E(t) = (y_t - y_{out}(t))^2 = (y_t - F(x_t; W))^2$

It is very complicated to calculate the weight changes.
Learning with backpropagation
Solution to the complicated learning problem:
• calculate first the changes for the synaptic weights of the output neuron;
• calculate the changes backward, starting from layer p-1, and propagate the local error terms backward.
The method is still relatively complicated, but it is much simpler than the original optimisation problem.
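The two steps above can be sketched for a tiny 2-2-1 sigmoid network trained on XOR with stochastic gradient descent. The network size, learning rate, sigmoid activation, and XOR task are all illustrative assumptions, and whether training converges depends on the random initialization.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_xor(epochs=5000, lr=0.5, seed=0):
    """Backpropagation sketch: 2 inputs, 2 hidden sigmoid units, 1 output."""
    rng = random.Random(seed)
    W1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden weights + bias
    w2 = [rng.uniform(-1, 1) for _ in range(3)]                      # output weights + bias
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    for _ in range(epochs):
        x, t = data[rng.randrange(4)]
        xb = x + [1.0]                                   # append bias input
        h = [sigmoid(sum(w * xi for w, xi in zip(row, xb))) for row in W1]
        hb = h + [1.0]
        y = sigmoid(sum(w * hi for w, hi in zip(w2, hb)))
        # 1) local error term of the output neuron
        delta_out = (y - t) * y * (1 - y)
        # 2) propagate the error backward to the hidden layer
        delta_h = [delta_out * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        # 3) gradient-descent weight updates
        for j in range(3):
            w2[j] -= lr * delta_out * hb[j]
        for j in range(2):
            for i in range(3):
                W1[j][i] -= lr * delta_h[j] * xb[i]

    def predict(x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x + [1.0]))) for row in W1]
        return sigmoid(sum(w * hi for w, hi in zip(w2, h + [1.0])))
    return predict

p = train_xor()
print([round(p(x), 2) for x in ([0, 0], [0, 1], [1, 0], [1, 1])])
```

The key point is that the hidden-layer error terms are computed from the output error term, so the gradients for all layers are obtained in one backward sweep instead of by solving the full optimisation problem directly.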
Worked example 1 (inputs 1 and 0, target 1; hidden weights (.2, .8) and (.3, .4), output weights (.1, .6)):
Hidden neuron outputs: .2*1 + .8*0 = .2 and .3*1 + .4*0 = .3
Output: .1*.3 + .6*.2 = .15
Error = 1 - .15 = .85

Worked example 2 (inputs 0 and 1, target 1; hidden weights (.6, .8) and (.4, .4), output weights (.2, .8)):
Hidden neuron outputs: .6*0 + .8*1 = .8 and .4*0 + .4*1 = .4
Output: .2*.4 + .8*.8 = .72
Error = 1 - .72 = .28
Error allocation for example 1 (inputs 1 and 0, target 1):
Output neuron: value 0.15, error 0.85. The output weights 0.1 and 0.6 contribute 20% and 80% of the output value, so they receive errors 0.17 and 0.68.
Hidden neurons: values 0.3 and 0.2, with input weights (0.3, 0.4) and (0.2, 0.8). With inputs (1, 0) the contribution shares are 100%/0% and 100%/0%, giving errors 0.17, 0, 0.68, 0.

Error allocation for example 2 (inputs 0 and 1, target 1):
Output neuron: value 0.72, error 0.28. The output weights 0.2 and 0.8 contribute 11% and 89% of the output value, so they receive errors 0.03 and 0.25.
Hidden neurons: values 0.4 and 0.8, with input weights (0.4, 0.4) and (0.6, 0.8). With inputs (0, 1) the contribution shares are 0%/100% and 0%/100%, giving errors 0, 0.03, 0, 0.25.
Worked example 3 (inputs 1 and 1, target 0; hidden weights (.6, .9) and (.4, .45), output weights (.25, .9)):
Hidden neuron outputs: .6*1 + .9*1 = 1.5 and .4*1 + .45*1 = .85
Output: .25*.85 + .9*1.5 = 1.56
Error = 0 - 1.56 = -1.56

Error allocation:
Output neuron: value 1.56, error -1.56. The output weights 0.25 and 0.9 contribute 14% and 86% of the output value, so they receive errors -0.21 and -1.35.
Hidden neurons: values 0.85 and 1.5, with input weights (0.4, 0.45) and (0.6, 0.9). The contribution shares are 47%/53% and 40%/60%, giving errors -0.10, -0.11, -0.54, -0.81.
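The first worked example's proportional error allocation can be reproduced directly in code; the variable names are illustrative, and the "share of the output" rule is the one the tables above use.

```python
# Reproduces worked example 1: inputs (1, 0), target 1.
# Hidden weights: unit A = (0.3, 0.4), unit B = (0.2, 0.8); output weights (0.1, 0.6).
x = [1, 0]
a = 0.3 * x[0] + 0.4 * x[1]   # hidden value 0.3
b = 0.2 * x[0] + 0.8 * x[1]   # hidden value 0.2
out = 0.1 * a + 0.6 * b       # output 0.15
error = 1 - out               # 0.85

# Each connection receives error in proportion to its share of the output:
share_a = 0.1 * a / out       # 0.2  (20%)
share_b = 0.6 * b / out       # 0.8  (80%)
err_a = error * share_a       # 0.17
err_b = error * share_b       # 0.68
print(out, error, err_a, err_b)
```

The same rule applied one layer further down, with the input contributions (1, 0), sends all of each hidden neuron's error to its first input weight, matching the 100%/0% shares in the table.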
Artificial Neural Network
Predicts structure at this point
Danger
You may train the network on your training set, but it may not generalize to other data
Perhaps we should train several ANNs and then let them vote on the structure
Profile network from HeiDelberg (an alignment is used as input instead of just the new sequence):
On the first level, a window of length 13 around the residue is used. The window slides down the sequence, making a prediction for each residue.
The input includes the frequency of amino acids occurring in each position of the multiple alignment (in the example, there are 5 sequences in the multiple alignment).
The second level takes these predictions from neural networks centered on neighboring residues.
The third level does a jury selection.
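The first-level sliding-window encoding can be sketched as follows. For simplicity this sketch one-hot encodes single residues rather than alignment frequencies, and pads window positions beyond the sequence ends with zeros; the function name and the 20-letter alphabet ordering are illustrative.

```python
# Sketch of a length-13 sliding-window input encoding for one residue.
AMINO = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def window_features(seq, center, size=13):
    """One-hot features for the window of `size` residues around `center`."""
    half = size // 2
    feats = []
    for pos in range(center - half, center + half + 1):
        one_hot = [0] * len(AMINO)
        if 0 <= pos < len(seq):           # positions off the ends stay all-zero
            one_hot[AMINO.index(seq[pos])] = 1
        feats.extend(one_hot)
    return feats                           # length 13 * 20 = 260

f = window_features("ACDEFGHIK", 0)
print(len(f))  # 260
```

In the real PHD input, each window position would instead carry the amino-acid frequencies of that alignment column, but the sliding-window shape of the feature vector is the same.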
PHD

[Figure: jury decision — the individual networks predict 4, 6, 6, 5, and 5]