
Chapter 9 Perceptrons and their generalizations

Date post: 12-Jan-2016
Upload: jeri
Transcript
Page 1: Chapter 9 Perceptrons and their generalizations

Chapter 9 Perceptrons and their generalizations

Page 2: Chapter 9 Perceptrons and their generalizations

Rosenblatt’s perceptron
Proofs of the theorems
Method of stochastic approximation and sigmoid approximation of indicator functions
Method of potential functions and Radial basis functions
Three theorems of optimization theory
Neural Networks

Page 3: Chapter 9 Perceptrons and their generalizations

Perceptrons (Rosenblatt, 1950s)

Page 4: Chapter 9 Perceptrons and their generalizations

Recurrent Procedure
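The recurrent procedure itself is not spelled out in this transcript; as a minimal sketch (the function name and toy data are illustrative, not from the slides), Rosenblatt's rule updates the weight vector only when an example is misclassified:

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Rosenblatt's recurrent procedure: update w only when an
    example is misclassified (labels y in {-1, +1})."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        updated = False
        for x_i, y_i in zip(X, y):
            if y_i * (w @ x_i) <= 0:   # misclassified (or on the boundary)
                w = w + y_i * x_i      # recurrent update step
                updated = True
        if not updated:                # a separating hyperplane was found
            break
    return w

# Linearly separable toy data (bias folded in as a constant 1 feature).
X = np.array([[1.0, 2.0, 1.0], [2.0, 3.0, 1.0],
              [-1.0, -2.0, 1.0], [-2.0, -1.0, 1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron(X, y)
```

On separable data such as this, the loop terminates after finitely many corrections, which is the content of the convergence theorem proved in the following slides.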

Page 21: Chapter 9 Perceptrons and their generalizations

Proofs of the theorems

Page 22: Chapter 9 Perceptrons and their generalizations

Method of stochastic approximation and sigmoid approximation of indicator functions

Page 24: Chapter 9 Perceptrons and their generalizations

Method of Stochastic Approximation
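A minimal illustration of the stochastic-approximation idea (the target functional and step schedule here are toy choices of mine, not from the slides): each step uses a single random observation, with step sizes gamma_t = 1/t satisfying the usual conditions that their sum diverges while the sum of their squares converges:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_approximation(sample, steps=5000):
    """Each step sees one random observation z and moves w toward it
    with step size gamma_t = 1/t (sum gamma_t = inf, sum gamma_t^2 < inf)."""
    w = 0.0
    for t in range(1, steps + 1):
        z = sample()
        w += (z - w) / t   # stochastic step toward the minimizer of E[(w - z)^2]
    return w

# The minimizer of E[(w - z)^2] is E[z]; with z ~ N(3, 1), w converges to 3.
w = stochastic_approximation(lambda: rng.normal(3.0, 1.0))
```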

Page 28: Chapter 9 Perceptrons and their generalizations

Sigmoid Approximation of Indicator Functions

Page 29: Chapter 9 Perceptrons and their generalizations

Basic Frame for learning process

Use the sigmoid approximation at the stage of estimating the coefficients

Use the indicator functions at the stage of recognition.

Page 30: Chapter 9 Perceptrons and their generalizations

Method of potential functions and Radial Basis Functions

Page 31: Chapter 9 Perceptrons and their generalizations

Potential functions: on-line, using only one element of the training data at each step.

RBFs (mid-1980s): off-line.
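As a sketch of the on-line character (the Gaussian potential and the toy data are my illustrative choices): each step evaluates and updates using a single training element, accumulating potentials centered at misclassified points:

```python
import numpy as np

def potential(x, x_i, gamma=1.0):
    # Gaussian potential function K(x, x_i)
    return np.exp(-gamma * np.sum((x - x_i) ** 2))

def online_potential_method(X, y, passes=10, gamma=1.0):
    """On-line method of potential functions: each step touches only
    one training element; errors add a potential centered there."""
    coef = np.zeros(len(X))
    for _ in range(passes):
        for i, (x_i, y_i) in enumerate(zip(X, y)):
            f = sum(c * potential(x_i, x_j, gamma)
                    for c, x_j in zip(coef, X) if c != 0.0)
            if y_i * f <= 0:       # error: add a potential at x_i
                coef[i] += y_i
    return coef

X = np.array([[0.0, 0.0], [0.2, 0.1], [2.0, 2.0], [2.1, 1.9]])
y = np.array([1, 1, -1, -1])
coef = online_potential_method(X, y)
preds = [np.sign(sum(c * potential(x, x_j) for c, x_j in zip(coef, X)))
         for x in X]
```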

Page 32: Chapter 9 Perceptrons and their generalizations

Method of potential functions in asymptotic learning theory

Separable condition: deterministic setting of the pattern recognition (PR) problem.

Non-separable condition: stochastic setting of the PR problem.

Page 33: Chapter 9 Perceptrons and their generalizations

Deterministic Setting

Page 34: Chapter 9 Perceptrons and their generalizations

Stochastic Setting

Page 35: Chapter 9 Perceptrons and their generalizations

RBF Method
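By contrast, a minimal off-line RBF sketch (the Gaussian basis and toy data are illustrative): a basis function is placed at every training point and all coefficients are solved for at once from the full training set:

```python
import numpy as np

def rbf_fit(X, y, gamma=1.0):
    """Off-line RBF method: place a Gaussian basis function at every
    training point and solve for all coefficients simultaneously."""
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    return np.linalg.solve(K, y)

def rbf_predict(X_train, coef, x, gamma=1.0):
    k = np.exp(-gamma * ((X_train - x) ** 2).sum(-1))
    return k @ coef

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 0.0, -1.0])
coef = rbf_fit(X, y)
```

Because the Gaussian Gram matrix is positive definite for distinct points, the fitted expansion interpolates the training data exactly.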

Page 37: Chapter 9 Perceptrons and their generalizations

Three Theorems of optimization theory

Fermat’s theorem (1629): optimization over the entire space, without constraints.

Lagrange multipliers rule (1788): the conditional optimization problem.

Kuhn-Tucker theorem (1951): convex optimization.

Page 42: Chapter 9 Perceptrons and their generalizations

To find the stationary points of a function of n variables, it is necessary to solve a system of n equations in n unknowns.
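For a quadratic function the stationarity system is linear, so it can be solved directly; a small illustrative example (the matrix A and vector b are arbitrary choices of mine):

```python
import numpy as np

# Fermat's theorem for f(w) = 1/2 w^T A w - b^T w:
# the stationarity condition grad f(w) = A w - b = 0 is a system
# of n equations in n unknowns.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])
w_star = np.linalg.solve(A, b)

grad = A @ w_star - b                    # vanishes at the stationary point
```

Since A is positive definite, the stationary point is the global minimum.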

Page 43: Chapter 9 Perceptrons and their generalizations

Lagrange Multipliers Rule (1788)
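A standard toy example of the rule (the objective and constraint are illustrative, not from the slides): stationarity of the Lagrangian turns the conditional problem into a plain system of equations:

```python
import numpy as np

# Lagrange multipliers for: minimize f(x, y) = x^2 + y^2
# subject to g(x, y) = x + y - 1 = 0.
# Stationarity of L = f - lam * g gives the linear system:
#   2x - lam = 0
#   2y - lam = 0
#   x + y    = 1
M = np.array([[2.0, 0.0, -1.0],
              [0.0, 2.0, -1.0],
              [1.0, 1.0,  0.0]])
rhs = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(M, rhs)
```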

Page 51: Chapter 9 Perceptrons and their generalizations

Kuhn-Tucker Theorem (1951)

Convex optimization: minimize a certain type of (convex) objective function under certain (convex) constraints of inequality type.
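A one-variable sketch of the Kuhn-Tucker conditions (the problem is an illustrative choice of mine): when the unconstrained minimum violates an inequality constraint, the constraint becomes active and its multiplier is positive:

```python
# Kuhn-Tucker conditions for the convex problem
#   minimize (x - 2)^2   subject to   x - 1 <= 0.
# The unconstrained minimum x = 2 violates the constraint, so the
# constraint is active at the solution x* = 1.
x_star = 1.0
mu = 2.0 * (2.0 - x_star)                 # from stationarity 2(x - 2) + mu = 0

stationarity = 2.0 * (x_star - 2.0) + mu  # must equal 0
feasibility = x_star - 1.0                # must be <= 0
complementary = mu * (x_star - 1.0)       # must equal 0, with mu >= 0
```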

Page 59: Chapter 9 Perceptrons and their generalizations

Remark

Page 60: Chapter 9 Perceptrons and their generalizations

Neural Networks

A learning machine that nonlinearly maps the input vector x into a feature space U and constructs a linear function in that space.

Page 61: Chapter 9 Perceptrons and their generalizations

Neural Networks

The Back-Propagation method
The BP algorithm
Neural networks for the regression estimation problem
Remarks on the BP method

Page 62: Chapter 9 Perceptrons and their generalizations

The Back-Propagation method

Page 63: Chapter 9 Perceptrons and their generalizations

The BP algorithm
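A minimal sketch of one backward pass for a one-hidden-layer sigmoid network (the architecture, data, and squared loss are illustrative choices, not the slides' exact formulation): the output error is propagated back through the layers by the chain rule to give the gradient of the empirical risk:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, W1, W2):
    h = sigmoid(W1 @ x)          # hidden layer activations
    return W2 @ h, h             # linear output (regression case)

def backprop(x, t, W1, W2):
    """One backward pass: propagate the output error through the
    layers to get the gradient of L = 0.5 * ||out - t||^2."""
    out, h = forward(x, W1, W2)
    delta_out = out - t                         # dL/d(out)
    gW2 = np.outer(delta_out, h)
    delta_h = (W2.T @ delta_out) * h * (1 - h)  # chain rule through sigmoid
    gW1 = np.outer(delta_h, x)
    return gW1, gW2

rng = np.random.default_rng(0)
x = rng.normal(size=3)
t = np.array([0.5])
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))
gW1, gW2 = backprop(x, t, W1, W2)
```

Repeating this pass over the training data and stepping the weights against the gradient gives the gradient-based training whose slow convergence the closing remarks mention.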

Page 66: Chapter 9 Perceptrons and their generalizations

For the regression estimation problem

Page 67: Chapter 9 Perceptrons and their generalizations

Remark

The empirical risk functional has many local minima.

The convergence of the gradient-based method is rather slow.

The sigmoid function has a scaling factor that affects the quality of the approximation.

Page 68: Chapter 9 Perceptrons and their generalizations

Neural networks are not well-controlled learning machines.

In many practical applications, however, they demonstrate good results.

