Perceptrons
Page 1: Perceptrons - utcluj.ro

Perceptrons

Page 2:

A Perceptron is a binary classifier that maps its input x

(a real-valued vector) to an output value y (y single

binary value, 0 or 1; -1 or 1)

• Rosenblatt [Rose61] created many variations of the perceptron.

• One of the simplest: single-layer network whose weights and biases could be trained to produce a correct target vector when presented with the corresponding input vector.

• The training technique is called the perceptron learning rule.

• The perceptron generated great interest due to its ability to generalize from its training vectors and learn from initially randomly distributed connections.

• The perceptron is especially suited for simple problems in pattern classification.

• They are fast and reliable networks for the problems they can solve.

Page 3:

Perceptron

s = w·x + b

x = [x1, x2, …, xN]^T – input vector

w = [w1, w2, …, wN] – weight vector
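As a concrete illustration, the forward computation can be sketched in Python (the hard-threshold convention y = 1 for s ≥ 0 follows the slides; the function name and NumPy usage are my own choices):

```python
import numpy as np

def perceptron_output(x, w, b):
    """Hard-limit perceptron: y = 1 if s = w.x + b >= 0, else 0."""
    s = np.dot(w, x) + b  # weighted sum of the inputs plus the bias
    return 1 if s >= 0 else 0

# Example choice of parameters: w = [1, 1], b = -1
print(perceptron_output(np.array([1.0, 1.0]), np.array([1.0, 1.0]), -1.0))  # 1
print(perceptron_output(np.array([0.0, 0.0]), np.array([1.0, 1.0]), -1.0))  # 0
```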

Page 4:

s = w1·x1 + w2·x2 + b

Example: w1 = 1, w2 = 1, b = -1, giving s = x1 + x2 - 1

y = 1 if s ≥ 0; y = 0 if s < 0

[Figure: the (x1, x2) input plane, split by the decision boundary x1 + x2 - 1 = 0 into a region where y = 1 and a region where y = 0]

The input space of a two-input perceptron - illustration

Page 5:

For the same example (w1 = 1, w2 = 1, b = -1):

w1·x1 + w2·x2 + b = 0, i.e. x1 + x2 - 1 = 0, so L: x2 = -x1 + 1

L - decision boundary; the weight vector W is perpendicular to L

For b = 0, L passes through the origin

❖ pick weight and bias values to orient and move the decision boundary to classify the input space as desired

[Figure: the same input plane, with y = 1 on one side of L and y = 0 on the other]

The input space of a two-input perceptron – illustration – cont.
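The effect of the bias on the boundary can be checked numerically. This short sketch (the function name is illustrative) solves w1·x1 + w2·x2 + b = 0 for x2:

```python
import numpy as np

def boundary_x2(x1, w, b):
    """x2 coordinate of the decision boundary at a given x1 (assumes w[1] != 0)."""
    return -(w[0] * x1 + b) / w[1]

w = np.array([1.0, 1.0])
# With b = 0 the boundary passes through the origin:
print(boundary_x2(0.0, w, 0.0))   # 0.0
# With b = -1 it is shifted to the line x2 = -x1 + 1:
print(boundary_x2(0.0, w, -1.0))  # 1.0
```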

Page 6:

Learning Rule (training algorithm)

A learning rule (training algorithm) is defined as a procedure for modifying the weights and biases of a network.

• Supervised learning

• Unsupervised learning

In supervised learning, the learning rule is provided with a set of examples (training set) of proper network behavior, where

xq - is an input to the network (a vector)

tq - is the corresponding correct (target) output

• As the inputs are applied to the network, the network outputs are compared to the targets.

• The learning rule is then used to adjust the weights and biases of the network in order to move the network outputs closer to the targets.

• The perceptron learning rule falls in supervised learning category.

{x1, t1}, {x2, t2}, …, {xq, tq}, …, {xQ, tQ}

Page 7:

Learning Rules (Training algorithm) - cont.

The objective is to reduce the error e, which is the difference between the neuron response y and the target vector t.

e = t – y

CASE 1. If an input vector is presented and the output of the neuron is correct

(y = t and e = t – y = 0), then the weight vector w is not altered.

CASE 2. If the neuron output is 0 and should have been 1 (y = 0 and t = 1, and

e = t – y = 1), the input vector x is added to the weight vector w.

This makes the weight vector point closer to the input vector, increasing the

chance that the input vector will be classified as a 1 in the future.

CASE 3. If the neuron output is 1 and should have been 0 (y = 1 and t = 0, and

e = t – y = –1), the input vector x is subtracted from the weight vector w. This

makes the weight vector point farther away from the input vector, increasing

the chance that the input vector will be classified as a 0 in the future.
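Why adding x helps in CASE 2 can be checked directly: after w ← w + x, the weighted sum on that same input grows by x·x = ||x||² ≥ 0 (the numbers below are illustrative):

```python
import numpy as np

w = np.array([0.5, -1.0])
x = np.array([1.0, 2.0])

s_before = np.dot(w, x)      # 0.5 - 2.0 = -1.5
s_after = np.dot(w + x, x)   # grows by ||x||^2 = 1 + 4 = 5
print(s_before, s_after)     # -1.5 3.5
```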

Page 8:

CASE 1. If e = 0, then make a change Δw equal to 0.

CASE 2. If e = 1, then make a change Δw equal to xT.

CASE 3. If e = –1, then make a change Δw equal to –xT.

Δw = (t – y)·xT = e·xT

You can get the expression for changes in a neuron's bias by noting that the bias is simply a weight that always has an input of 1:

Δb = (t – y) · 1 = e

The perceptron learning rule can be summarized as follows:

wnew = wold + e·xT

bnew = bold + e

where e = t – y

Learning Rules (Training algorithm) - cont.
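One step of the summarized rule can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def perceptron_step(w, b, x, t):
    """Apply the perceptron rule once: e = t - y, w_new = w + e*x, b_new = b + e."""
    y = 1 if np.dot(w, x) + b >= 0 else 0
    e = t - y
    return w + e * x, b + e

# CASE 3 example: the output is 1 but the target is 0, so x is subtracted from w
w, b = perceptron_step(np.array([1.0, 1.0]), 0.0, np.array([2.0, 0.0]), 0)
print(w, b)  # now w == [-1., 1.] and b == -1.0
```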

Page 9:

The process of finding new weights (and biases) can be repeated until there are no errors.

The perceptron learning rule is guaranteed to converge in a finite number of steps for all problems that can be solved by a perceptron.

These include all classification problems that are linearly separable.

In such cases, the objects to be classified can be separated by a single straight line (more generally, by a hyperplane).
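The repeat-until-no-errors procedure can be sketched as a loop over the training set; this is a minimal version, and the AND data set and epoch cap are my own illustrative choices (AND is linearly separable, so the rule is guaranteed to converge on it):

```python
import numpy as np

def train_perceptron(X, T, w, b, max_epochs=100):
    """Repeat the perceptron rule over the training set until an error-free pass."""
    for _ in range(max_epochs):
        errors = 0
        for x, t in zip(X, T):
            y = 1 if np.dot(w, x) + b >= 0 else 0
            e = t - y
            if e != 0:          # CASES 2 and 3: update weights and bias
                w = w + e * x
                b = b + e
                errors += 1
        if errors == 0:         # CASE 1 held for every example: done
            break
    return w, b

# Logical AND: a linearly separable classification problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
T = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, T, np.zeros(2), 0.0)
print([1 if np.dot(w, x) + b >= 0 else 0 for x in X])  # [0, 0, 0, 1]
```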

nnd4pr – MATLAB demo:

- decision boundaries

- perceptron rules

Learning Rules (Training algorithm) - cont.

Page 10:

Learning using the perceptron rule – worked example

[Numerical worked example, garbled in this transcript: three input vectors x1, x2, x3 with binary targets are presented in turn; for each presentation the slide tabulates the weighted sum s, the output y, and the error e, together with the updated weights and bias after every step.]

Page 11:

Consider a perceptron with two inputs x1 and x2, used in a binary classification application. To train the perceptron, a training set of size 2 is used, as follows:

At the initial moment, the weight vector w = [1 -0.5] and the bias b = -2 were randomly generated. The perceptron is trained using the supervised perceptron learning rule.

a) Plot the two input vectors and the separation line (decision boundary) defined by the initial (untrained) perceptron.

b) How are the two input vectors classified?

c) Determine the new values of the perceptron's weights and bias after training with the first vector of the training set. Plot the new decision boundary. How are the two input vectors classified now?

d) Determine the new values of the weights and bias after training the perceptron obtained at the previous point, using the second vector of the training set.

e) Plot the new decision boundary and determine whether the two vectors of the training set are classified correctly.

Problem

Page 12:

Problem


Write the Python code for the perceptron learning rule. The initial weights and bias will be randomly generated in the [-1, 1] range.

Demonstrate the correct operation of your implementation using an example for a perceptron with at least two inputs and at least three objects to be classified.
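One possible sketch of the requested implementation; the data set of three objects, the function name, and the random seed are my own illustrative choices, not part of the assignment:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded only so the run is reproducible

def fit_perceptron(X, T, max_epochs=1000):
    """Perceptron learning rule; weights and bias start uniform in [-1, 1]."""
    w = rng.uniform(-1.0, 1.0, size=X.shape[1])
    b = rng.uniform(-1.0, 1.0)
    for _ in range(max_epochs):
        errors = 0
        for x, t in zip(X, T):
            y = 1 if np.dot(w, x) + b >= 0 else 0
            e = t - y
            if e != 0:
                w, b = w + e * x, b + e
                errors += 1
        if errors == 0:  # every object classified correctly
            break
    return w, b

# Three linearly separable objects in the plane (illustrative data)
X = np.array([[2.0, 1.0], [-1.0, 2.0], [1.0, -2.0]])
T = np.array([1, 0, 1])
w, b = fit_perceptron(X, T)
print([1 if np.dot(w, x) + b >= 0 else 0 for x in X])  # [1, 0, 1], same as T
```

Because the three objects are linearly separable, the perceptron convergence theorem guarantees the loop reaches an error-free pass in a finite number of updates, regardless of the random initialization.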

