Multiple-Layer Networks and
Backpropagation Algorithms
Backpropagation is the generalization of the Widrow-Hoff learning rule to
multiple-layer networks and nonlinear differentiable transfer functions.
Input vectors and the corresponding target vectors are used to train a
network until it can approximate a function, associate input vectors with
specific output vectors, or classify input vectors in an appropriate way as
defined by you.
Architecture: Neuron Model
An elementary neuron with R inputs is shown below. Each input is
weighted with an appropriate w. The sum of the weighted inputs and the
bias forms the input to the transfer function f. Neurons can use any
differentiable transfer function f to generate their output.
Architecture: Neuron Model
Transfer Functions (Activation Functions)
Multilayer networks often use the log-sigmoid transfer function logsig.
The function logsig generates outputs between 0 and 1 as the neuron's
net input goes from negative to positive infinity.
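Written out, logsig maps the net input n to 1/(1 + exp(-n)); a plain-MATLAB sketch (not the toolbox function itself):
n = -5:0.1:5;            % sample net-input values
a = 1 ./ (1 + exp(-n));  % logsig output rises from 0 towards 1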
Architecture: Neuron Model
Transfer Functions (Activation Functions)
Alternatively, multilayer networks can use the tan-sigmoid transfer function tansig.
The function tansig generates outputs between -1 and +1 as the neuron's
net input goes from negative to positive infinity.
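tansig is equivalent to the hyperbolic tangent of the net input; a plain-MATLAB sketch:
n = -5:0.1:5;   % sample net-input values
a = tanh(n);    % tansig output rises from -1 towards +1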
Architecture: Feedforward Network
A single-layer network of S logsig neurons having R inputs is shown
below in full detail on the left and with a layer diagram on the right.
Architecture: Feedforward Network
Feedforward networks often have one or more hidden layers of sigmoid neurons followed
by an output layer of linear neurons.
Multiple layers of neurons with nonlinear transfer functions allow the network to learn
nonlinear and linear relationships between input and output vectors.
The linear output layer lets the network produce values outside the range -1 to +1. On the
other hand, if you want to constrain the outputs of a network (such as between 0 and 1), then the output layer should use a sigmoid transfer function (such as logsig), as in the sketch below.
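As a small sketch (using the newff syntax described later in these slides, with assumed input ranges and layer sizes), the two choices look like this:
net1 = newff([-1 2; 0 5], [3 1], {'tansig','purelin'});  % linear output layer: outputs may fall outside [-1, +1]
net2 = newff([-1 2; 0 5], [3 1], {'tansig','logsig'});   % logsig output layer: outputs constrained to (0, 1)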
Learning Algorithm: Backpropagation
The following slides describe the teaching process of a multi-layer neural network
employing the backpropagation algorithm. To illustrate this process, a three-layer neural
network with two inputs and one output, shown in the picture below, is used:
Learning Algorithm: Backpropagation
Each neuron is composed of two units. The first unit adds the products of the weight
coefficients and the input signals. The second unit realises the nonlinear function, called
the neuron transfer (activation) function. Signal e is the adder output signal, and y = f(e)
is the output signal of the nonlinear element. Signal y is also the output signal of the neuron.
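A minimal MATLAB sketch of one such neuron, with purely illustrative weights:
x = [0.5; 1.3];        % input signals x1, x2 (assumed values)
w = [0.4 -0.7];        % weight coefficients (assumed values)
b = 0.1;               % bias
e = w*x + b;           % first unit: weighted sum (adder output)
y = 1/(1 + exp(-e));   % second unit: y = f(e), here a log-sigmoid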
Learning Algorithm: Backpropagation
To teach the neural network we need a training data set. The training data set consists of
input signals (x1 and x2) assigned to a corresponding target (desired output) z.
Network training is an iterative process. In each iteration the weight coefficients of the nodes
are modified using new data from the training data set. The modification is calculated using
the algorithm described below:
Each teaching step starts with forcing both input signals from the training set onto the network.
After this stage we can determine the output signal values for each neuron in each network layer.
Learning Algorithm: Backpropagation
The pictures below illustrate how the signal propagates through the network.
Symbols w(xm)n represent the weights of the connections between network input xm and
neuron n in the input layer. Symbols yn represent the output signal of neuron n.
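With this notation, the output of, for example, neuron 1 of the first layer is
y1 = f1( w(x1)1*x1 + w(x2)1*x2 )
and the remaining neurons of that layer are computed in the same way.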
Learning Algorithm: Backpropagation
Propagation of signals through the hidden layer. Symbols wmn represent the weights
of the connections between the output of neuron m and the input of neuron n in the next
layer.
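In general, each neuron n of a hidden layer therefore computes
yn = fn( sum over m of wmn*ym )
where the sum runs over all neurons m of the previous layer that feed neuron n.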
Learning Algorithm: Backpropagation
Propagation of signals through the output layer.
Learning Algorithm: Backpropagation
In the next algorithm step the output signal of the network y is
compared with the desired output value (the target) z, which is found in the
training data set. The difference is called the error signal d of the output layer
neuron.
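With the target z taken from the training data set and the network output y, the error signal of the output neuron is simply
d = z - y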
Learning Algorithm: Backpropagation
The idea is to propagate the error signal d (computed in a single teaching step)
back to all neurons whose output signals were inputs to the neuron in question.
Learning Algorithm: Backpropagation
The weight coefficients wmn used to propagate the errors back are equal to
those used when computing the output value. Only the direction of data flow
is changed (signals are propagated from outputs to inputs, one layer after the
other). This technique is used for all network layers. If the propagated errors
come from several neurons, they are added, as summarized below:
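In symbols, the error signal of a hidden neuron m is the weighted sum of the error signals of the neurons n that it feeds:
dm = sum over n of wmn*dn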
Learning Algorithm: Backpropagation
When the error signal for each neuron has been computed, the weight
coefficients of each neuron's input connections can be modified. In the formulas
below, df(e)/de represents the derivative of the activation function of the neuron
whose weights are being modified.
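Written out, the modification has the form below, where eta denotes the learning-rate coefficient (a training parameter; compare net.trainParam.lr later in these slides):
w(xm)n' = w(xm)n + eta * dn * (dfn(e)/de) * xm    (connections fed by network input xm)
wmn'    = wmn    + eta * dn * (dfn(e)/de) * ym    (connections fed by the output ym of neuron m)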
Initializing a Backpropagation Network
net = newff(PR, [S1 S2 ... SN], {TF1 TF2 ... TFN}, BTF, BLF, PF)
Where:
net = backpropagation network consisting of N layers
PR = R x 2 matrix containing the minimum and maximum value of each input
Si (i = 1, 2, ..., N) = number of units in layer i
TFi (i = 1, 2, ..., N) = activation function, default = tansig (bipolar sigmoid)
BTF = network training function, default = traingdx
BLF = weight/bias learning function, default = learngdm
PF = error (performance) function, default = mse
Exercise 1
Create the initialization of a backpropagation network
consisting of 2 inputs, one hidden layer
of 3 units, and a single
output (2-3-1), with the following data:
x1   x2    t
-1    0   -1
-1    5   -1
 2    0    1
 2    5    1
p = [-1 -1 2 2; 0 5 0 5]    % input vectors (each column is one pattern)
t = [-1 -1 1 1]             % corresponding targets
net = newff([-1 2; 0 5], [3 1], {'tansig','purelin'})
Weight Initialization
MATLAB assigns the initial weights and biases
small random values.
The weights and biases change every time we
create the network.
Example 15.2
Hidden layer 1 (weights from the input units x1, x2 and the bias):
        x1     x2    bias
z1    -1.3    0.7    0.3
z2     0.5    0.0   -0.1
z3     1.3   -0.4   -0.9
z4    -0.1    1.2    0.5
>> net=newff([-1 2;-1 2], [4,3,1])
>> net.IW{1,1}
ans =
   -1.4034    1.2308
    1.4855   -1.1304
    1.8399   -0.3149
   -0.0656    1.8655
>> net.b{1}
ans =
2.8863
-1.1109
0.1708
-3.6999
>> net.LW{2,1}
ans =
0.9173 -1.4575 0.5611 -0.3383
1.3008 0.9883 0.7296 0.4401
0.4656 1.2975 0.7269 -0.9830
>> net.b{2}
ans =
   -1.8425
         0
    1.8425
Output layer (weights from the hidden layer 2 outputs v1, v2, v3 and the bias):
        v1     v2     v3    bias
y1     0.4    0.9   -0.1     -1
>> net.LW{3,2}
ans =
0.5169 -1.1745 -0.5597
Try typing:
net.IW{2,1}
net.LW{3,1}
net.LW{1,2}
net.IW{2,2}
What happens? Give your reasons!
To change the weight and bias values to match the tables above, do the following:
>> net.IW{1,1}=[-1.3 0.7; 0.5 0; 1.3 -0.4; -0.1 1.2];
>> net.b{1}=[0.3; -0.1; -0.9 ; 0.5];
>> net.LW{2,1}= [0.4 0.3 -1 -0.3; 0.6 0 -0.6 -1.2; 0.4 -0.3 0.2 0.9];
>> net.b{2}=[0.5 ; -1.3 ; -0.3];
>> net.LW{3,2}=[0.4 0.9 -0.1];
>> net.b{3}= [-1];
Network Simulation
Compute the output of the network from Example 15.2
given the inputs x1 = 0.5 and x2 = 1.3.
p=[0.5 ;1.3];
net=newff([-1 2;-1 2], [4,3,1])
net.IW{1,1}=[-1.3 0.7; 0.5 0; 1.3 -0.4; -0.1 1.2];
net.b{1}=[0.3; -0.1; -0.9 ; 0.5];
net.LW{2,1}= [0.4 0.3 -1 -0.3; 0.6 0 -0.6 -1.2; 0.4 -0.3 0.2 0.9];
net.b{2}=[0.5 ; -1.3 ; -0.3];
net.LW{3,2}=[0.4 0.9 -0.1];
net.b{3}= [-1];
y=sim(net,p)
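As a rough cross-check of what sim computes here, the same output can be obtained layer by layer; the transfer functions below are an assumption (confirm them with net.layers{i}.transferFcn):
a1 = tansig(net.IW{1,1}*p + net.b{1});    % hidden layer 1 (4 neurons)
a2 = tansig(net.LW{2,1}*a1 + net.b{2});   % hidden layer 2 (3 neurons)
y  = tansig(net.LW{3,2}*a2 + net.b{3})    % output layer (1 neuron); compare with sim(net,p)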
If you want the target to be 1, add the following:
t=[1];
[y, Pf,Af,e,perf]=sim (net,p,[],[],t)
Backpropagation Training
Given the following data:
x1   x2    t
-1    0   -1
-1    5   -1
 2    0    1
 2    5    1
p=[-1 -1 2 2 ; 0 5 0 5];
t=[-1 -1 1 1];
net=newff (minmax (p), [3,1], {'tansig','purelin'}, 'traingd');
net.IW{1,1}    % view the initial weights
net.b{1}       % view the initial biases
net.LW{2,1}    % view the layer-2 weights
net.b{2}       % view the layer-2 biases
[y, Pf,Af,e,perf]=sim (net,p,[],[],t)    % simulate the network
net=train(net,p,t)    % train the network
Viewing the weights after training
net.IW{1,1}    % view the input weights
net.b{1}       % view the layer-1 biases
net.LW{2,1}    % view the layer-2 weights
net.b{2}       % view the layer-2 biases
[y, Pf,Af,e,perf]=sim (net,p,[],[],t)    % simulate the network
Adding training parameters
p=[-1 -1 2 2 ; 0 5 0 5];
t=[-1 -1 1 1];
net=newff (minmax (p), [3,1], {'tansig','purelin'}, 'traingd');
net=init(net);                    % re-initialize the weights and biases
net.trainParam.show=100;          % show training progress every 100 epochs
net.trainParam.mc=0.5;            % momentum constant
net.trainParam.lr=0.1;            % learning rate
net.trainParam.epochs=100;        % maximum number of epochs
net.trainParam.goal=0.0001;       % performance (error) goal
net.trainParam.time;              % maximum training time (no value assigned here)
net=train(net,p,t)                % train the network
Change the value of each parameter. What do you conclude?
Training function     Number of epochs at tolerance 10e-5
traingd
traingdm
traingda
traingdx
trainrp
traincgf
traincgp
traincgb