  • COSC 522 – Machine Learning

    Lecture 10 – Backpropagation and MLP

    Hairong Qi, Gonzalez Family Professor
    Electrical Engineering and Computer Science
    University of Tennessee, Knoxville
    http://www.eecs.utk.edu/faculty/qi
    Email: [email protected]

    Course Website: http://web.eecs.utk.edu/~hqi/cosc522/


  • Recap – Decision Rules

    • Supervised learning
      – Bayesian based – Maximum Posterior Probability (MPP): for a given x, if P(ω1|x) > P(ω2|x), then x belongs to class 1, otherwise class 2.
      – Parametric learning
        – Case 1: Minimum Euclidean distance (linear machine), Σi = σ²I
        – Case 2: Minimum Mahalanobis distance (linear machine), Σi = Σ
        – Case 3: Quadratic classifier, Σi arbitrary
        – Estimate Gaussian parameters using MLE
      – Nonparametric learning
        – k-Nearest Neighbor
      – Neural network
        – Perceptron
        – BPNN
      – Least-squares based

    • Unsupervised learning
      – k-means
      – Winner-takes-all

    • Supporting preprocessing techniques
      – Normalization
      – Dimensionality reduction (FLD, PCA)
      – Performance evaluation (metrics, confusion matrices, ROC, cross validation)

    2

    $P(\omega_j \mid x) = \dfrac{p(x \mid \omega_j)\,P(\omega_j)}{p(x)}$

  • Syllabus

    3

    COSC 522 - Machine Learning (Fall 2020) Syllabus

    Lecture | Date | Content | Tests / Assignments | Due Date
    1  | 8/20  | Introduction | HW0 | 8/25
    2  | 8/25  | Bayesian Decision Theory - MPP | HW1 | 9/1
    3  | 8/27  | Discriminant Function - MD | Proj1 - Supervised Learning | 9/10
    4  | 9/1   | Parametric Learning - MLE | |
    5  | 9/3   | Non-parametric Learning - kNN | |
    6  | 9/8   | Unsupervised Learning - kmeans | HW2 | 9/15
    7  | 9/10  | Dimensionality Reduction - FLD | Proj2 - Unsupervised Learning and DR | 9/24
    8  | 9/15  | Dimensionality Reduction - PCA | |
    9  | 9/17  | Linear Regression | |
    10 | 9/22  | Performance Evaluation | HW3 | 9/29
    11 | 9/24  | Fusion | Proj3 - Regression | 10/8
    12 | 9/29  | Midterm Exam | Final Project - Milestone 1: Forming Team | 9/29
    13 | 10/1  | Gradient Descent | |
    14 | 10/6  | Neural Network - Perceptron | HW4 | 10/13
    15 | 10/8  | Neural Network - BPNN | Proj4 - BPNN | 10/22
    16 | 10/13 | Neural Network - Practices | Final Project - Milestone 2: Choosing Topic | 10/13
    17 | 10/15 | Kernel Methods - SVM | |
    18 | 10/20 | Kernel Methods - SVM | HW5 | 10/27
    19 | 10/22 | Kernel Methods - SVM | Proj5 - SVM & DT | 11/5
    20 | 10/27 | Decision Tree | Final Project - Milestone 3: Literature Survey | 10/27
    21 | 10/29 | Random Forest | |
    22 | 11/3  | | HW6 | 11/10
    23 | 11/5  | From PCA to t-SNE | |
    24 | 11/10 | From Gaussian to Mixture and EM | |
    25 | 11/12 | From Supervised/Unsupervised to RL | |
    26 | 11/17 | From Classification/Regression to Generation | Final Project - Milestone 4: Prototype | 11/17
    27 | 11/19 | From NN to CNN | |
    28 | 11/24 | Final Exam | |
       | 12/3 8:00-10:15 | Final Presentation | Final Project - Report | 12/4

  • Questions

    • Differences between feedback and feedforward neural networks
    • Limitations of the perceptron
    • Why go deeper?
    • BPNN structure
    • BPNN cost function and optimization method
    • The importance of the threshold function
    • Relationship between BPNN and MPP
    • Various aspects of practical improvements of BPNN

    4

  • Types of NN

    Recurrent (feedback during operation)
      – Hopfield
      – Kohonen
      – Associative memory

    Feedforward
      – No feedback during operation or testing (only during determination of weights, i.e., training)
      – Perceptron
      – Multilayer perceptron and backpropagation

    5

  • Limitations of Perceptron

    • The output only has two values (1 or 0)
    • Can only classify samples that are linearly separable (by a straight line or plane)
    • Single layer: can only train AND, OR, NOT
    • Cannot train the network to realize functions like XOR

    6

  • Why deeper?

    7

    A Tutorial on Deep Learning
    Part 1: Nonlinear Classifiers and The Backpropagation Algorithm

    Quoc V. Le
    [email protected]

    Google Brain, Google Inc.
    1600 Amphitheatre Pkwy, Mountain View, CA 94043

    December 13, 2015

    1 Introduction

    In the past few years, Deep Learning has generated much excitement in Machine Learning and industry thanks to many breakthrough results in speech recognition, computer vision and text processing. So, what is Deep Learning?

    For many researchers, Deep Learning is another name for a set of algorithms that use a neural network as an architecture. Even though neural networks have a long history, they became more successful in recent years due to the availability of inexpensive, parallel hardware (GPUs, computer clusters) and massive amounts of data.

    In this tutorial, we will start with the concept of a linear classifier and use that to develop the concept of neural networks. I will present two key algorithms in learning with neural networks: the stochastic gradient descent algorithm and the backpropagation algorithm. Towards the end of the tutorial, I will explain some simple tricks and recent advances that improve neural networks and their training. For that, let's start with a simple example.

    2 An example of movie recommendations

    It's Friday night, and I am trying to decide whether I should watch the movie "Gravity" or not. I ask my close friends Mary and John, who watched the movie last night, to hear their opinions about it. Both of them give the movie a rating of 3 on a scale from 1 to 5. Not outstanding but perhaps worth watching?

    Given these ratings, it is difficult for me to decide if it is worth watching the movie, but thankfully, I have kept a table of their ratings for some movies in the past. For each movie, I also noted whether I liked the movie or not. Maybe I can use this data to decide if I should watch the movie. The data look like this:

    Movie name           | Mary's rating | John's rating | I like?
    Lord of the Rings II | 1             | 5             | No
    ...                  | ...           | ...           | ...
    Star Wars I          | 4.5           | 4             | Yes
    Gravity              | 3             | 3             | ?

    Let’s visualize the data to see if there is any trend:


    In the above figure, I represent each movie as a red "O" or a blue "X", which correspond to "I like the movie" and "I dislike the movie", respectively. The question is: with the rating of (3, 3), will I like Gravity? Can I use the past data to come up with a sensible decision?

    3 A bounded decision function

    Let's write a computer program to answer this question. For every movie, we construct an example x which has two dimensions: the first dimension x1 is Mary's rating and the second dimension x2 is John's rating. Every past movie is also associated with a label y to indicate whether I like the movie or not. For now, let's say y is a scalar that should have one of the two values, 0 to mean "I do not like" or 1 to mean "I do like" the movie. Our goal is to come up with a decision function h(x) to approximate y.

    Our decision function can be as simple as a weighted linear combination of Mary’s and John’s ratings:

    $h(x; \theta, b) = \theta_1 x_1 + \theta_2 x_2 + b$, which can also be written as $h(x; \theta, b) = \theta^T x + b$   (1)

    In the equation above, the value of function $h(x)$ depends on $\theta_1$, $\theta_2$ and $b$, hence I rewrite it as $h(x; (\theta_1, \theta_2), b)$ or in vector form $h(x; \theta, b)$.

    The decision function h unfortunately has a problem: its values can be arbitrarily large or small. We wish its values to fall between 0 and 1 because those are the two extremes of y that we want to approximate.

    A simple way to force h to have values between 0 and 1 is to map it through another function called the sigmoid function, which is bounded between 0 and 1:

    $h(x; \theta, b) = g(\theta^T x + b)$, where $g(z) = \dfrac{1}{1 + \exp(-z)}$   (2)

    which graphically should look like this:

    The value of function h is now bounded between 0 and 1.

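    To make this concrete, here is a minimal Python sketch of the bounded decision function (my illustration, not the tutorial's code); the values of theta and b are made up, not learned:

```python
import numpy as np

def g(z):
    """Sigmoid: squashes any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def h(x, theta, b):
    """Bounded decision function h(x; theta, b) = g(theta^T x + b)."""
    return g(np.dot(theta, x) + b)

theta = np.array([0.8, 0.6])   # made-up, not learned
b = -4.0

print(h(np.array([1.0, 5.0]), theta, b))   # ~0.45: near the boundary
print(h(np.array([4.5, 4.0]), theta, b))   # ~0.88: bounded, never exceeds 1
```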

    Apply the chain rule, and note that $\frac{\partial g}{\partial z} = [1 - g(z)]\,g(z)$; we have:

    $\dfrac{\partial}{\partial \theta_1} g(\theta^T x^{(i)} + b) = \dfrac{\partial g(\theta^T x^{(i)} + b)}{\partial (\theta^T x^{(i)} + b)} \cdot \dfrac{\partial (\theta^T x^{(i)} + b)}{\partial \theta_1}$

    $= \left[1 - g(\theta^T x^{(i)} + b)\right] g(\theta^T x^{(i)} + b) \cdot \dfrac{\partial (\theta_1 x_1^{(i)} + \theta_2 x_2^{(i)} + b)}{\partial \theta_1}$

    $= \left[1 - g(\theta^T x^{(i)} + b)\right] g(\theta^T x^{(i)} + b)\, x_1^{(i)}$

    Plug this into Equation 6, and we have:

    $\Delta\theta_1 = 2\left[g(\theta^T x^{(i)} + b) - y^{(i)}\right]\left[1 - g(\theta^T x^{(i)} + b)\right] g(\theta^T x^{(i)} + b)\, x_1^{(i)}$   (7)

    where

    $g(\theta^T x^{(i)} + b) = \dfrac{1}{1 + \exp(-\theta^T x^{(i)} - b)}$   (8)

    Similar derivations should lead us to:

    $\Delta\theta_2 = 2\left[g(\theta^T x^{(i)} + b) - y^{(i)}\right]\left[1 - g(\theta^T x^{(i)} + b)\right] g(\theta^T x^{(i)} + b)\, x_2^{(i)}$   (9)

    $\Delta b = 2\left[g(\theta^T x^{(i)} + b) - y^{(i)}\right]\left[1 - g(\theta^T x^{(i)} + b)\right] g(\theta^T x^{(i)} + b)$   (10)

    Now, we have the stochastic gradient descent algorithm to learn the decision function $h(x; \theta, b)$:

    1. Initialize the parameters $\theta$, $b$ at random,

    2. Pick a random example $\{x^{(i)}, y^{(i)}\}$,

    3. Compute the partial derivatives for $\theta_1$, $\theta_2$ and $b$ by Equations 7, 9 and 10,

    4. Update the parameters using Equations 3, 4 and 5, then go back to step 2.

    We can stop stochastic gradient descent when the parameters do not change or the number of iterations exceeds a certain upper bound. At convergence, we will obtain a function $h(x; \theta, b)$ which can be used to predict whether I like a new movie x or not: h > 0.5 means I will like the movie, otherwise I do not. The values of x that cause $h(x; \theta, b)$ to be 0.5 form the "decision boundary." We can plot this "decision boundary" to have:
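    Here is a minimal sketch of this loop in Python/NumPy (my illustration, not the tutorial's code). The toy ratings data, learning rate alpha, and iteration count are made-up choices, and the subtraction update stands in for Equations 3, 4 and 5, which are not reproduced in this excerpt:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: columns are (Mary's rating, John's rating); labels are 0/1.
X = np.array([[1.0, 5.0], [4.5, 4.0], [4.0, 5.0], [2.0, 1.0]])
y = np.array([0, 1, 1, 0])

def g(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = rng.uniform(-1, 1, size=2)   # step 1: random initialization
b = rng.uniform(-1, 1)
alpha = 0.5                          # made-up learning rate

for step in range(5000):
    i = rng.integers(len(X))         # step 2: pick a random example
    hi = g(theta @ X[i] + b)
    common = 2.0 * (hi - y[i]) * (1.0 - hi) * hi   # shared factor of Eqs. 7, 9, 10
    theta -= alpha * common * X[i]   # steps 3-4: gradients and update
    b     -= alpha * common

print(g(X @ theta + b))                      # predictions for the past movies
print(g(theta @ np.array([3.0, 3.0]) + b))   # will I like "Gravity" at (3, 3)?
```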

    The green line is the "decision boundary." Any point lying above the decision boundary is a movie that I should watch, and any point lying below the decision boundary is a movie that I should not watch. With this decision boundary, it seems that "Gravity" is slightly on the negative side, which means I should not watch it.

    By the way, here is a graphical illustration of the decision function h we just built ("M" and "J" indicate the input data, which is the ratings from Mary and John respectively):

    This network means that to compute the value of the decision function, we need to multiply Mary's rating with $\theta_1$ and John's rating with $\theta_2$, then add the two values and b, then apply the sigmoid function.

    6 The limitations of linear decision functions

    In the above case, I was lucky because the examples are linearly separable: I can draw a linear decision function to separate the positive and the negative instances.

    My friend Susan has different movie tastes. If we plot her data, the graph will look rather different:

    Susan likes some of the movies that Mary and John rated poorly. The question is how we can come up with a decision function for Susan. From looking at the data, the decision function must be more complex than the decision we saw before.

    My experience tells me that one way to solve a complex problem is to decompose it into smaller problems that we can solve. We know that if we throw away the "weird" examples from the bottom left corner of the figure, the problem is simple. Similarly, if we throw away the "weird" examples in the top right corner of the figure, the problem is again simple. In the figure below, I solve for each case using our algorithm and the decision functions look like this:


    http://ai.stanford.edu/~quocle/tutorial2.pdf


  • Why deeper?

    8


    Is it possible to combine these two decision functions into one final decision function for the original data? The answer turns out to be yes, and I'll show you how.

    7 A decision function of decision functions

    Let's suppose, as stated above, the two decision functions are $h_1(x; (\theta_1, \theta_2), b_1)$ and $h_2(x; (\theta_3, \theta_4), b_2)$. For every example $x^{(i)}$, we can then compute $h_1(x^{(i)}; (\theta_1, \theta_2), b_1)$ and $h_2(x^{(i)}; (\theta_3, \theta_4), b_2)$.

    If we lay out the data in a table, it would look like the first table that we saw:

    Movie name           | Output by decision function h1 | Output by decision function h2 | Susan likes?
    Lord of the Rings II | h1(x(1))                       | h2(x(1))                       | No
    ...                  | ...                            | ...                            | ...
    Star Wars I          | h1(x(n))                       | h2(x(n))                       | Yes
    Gravity              | h1(x(n+1))                     | h2(x(n+1))                     | ?

    Now, once again, the problem becomes finding a new parameter set to weigh these two decision functions to approximate y. Let's call these parameters $\omega$, $c$, and we want to find them such that $h((h_1(x), h_2(x)); \omega, c)$ can approximate the label y. This can be done, again, by stochastic gradient descent.

    In summary, we can find the decision function for Susan by following two steps:

    1. Partition the data into two sets. Each set can be simply classified by a linear decision. Then use the previous sections to find the decision function for each set,

    2. Use the newly-found decision functions and compute the decision values for each example. Then treat these values as input to another decision function. Use stochastic gradient descent to find the final decision function.

    A graphical way to visualize the above process is the following figure:

    What you just saw is a special architecture in machine learning known as "neural networks." This instance of neural networks has one hidden layer, which has two "neurons." The first neuron computes values for function $h_1$ and the second neuron computes values for function $h_2$. The sigmoid function that maps real values to bounded values between 0 and 1 is also known as "the nonlinearity" or the "activation function." Since we are using the sigmoid, the activation function is also called the "sigmoid activation function." In the future, you may encounter other kinds of activation functions. The parameters inside the network, such as $\theta$, $\omega$, are called "weights," whereas $b$, $c$ are called "biases."

    If you have a more complex function that you want to approximate, you may want to have a deeper network, maybe one that looks like this:


    http://ai.stanford.edu/~quocle/tutorial2.pdf

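    As a sketch of the two-step recipe above (my illustration; the weights are made up and stand in for the trained h1, h2 and the learned ω, c), the final decision function is just a sigmoid unit applied to the outputs of two sigmoid units, i.e., a one-hidden-layer network:

```python
import numpy as np

def g(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step 1: the two linear decision functions (parameters assumed already learned).
theta1, b1 = np.array([ 1.0,  1.0]), -3.0   # h1: made-up values
theta2, b2 = np.array([-1.0, -1.0]),  7.0   # h2: made-up values

def h1(x): return g(theta1 @ x + b1)
def h2(x): return g(theta2 @ x + b2)

# Step 2: a decision function of the decision functions, h((h1, h2); omega, c).
omega, c = np.array([4.0, 4.0]), -6.0       # made-up values

def h_final(x):
    hidden = np.array([h1(x), h2(x)])       # the two "neurons" of the hidden layer
    return g(omega @ hidden + c)

print(h_final(np.array([3.0, 3.0])))        # Susan's prediction for "Gravity"
```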

  • Questions

    • Differences between feedback and feedforward neural networks
    • Limitations of the perceptron
    • Why go deeper?
    • BPNN structure
    • BPNN cost function and optimization method
    • The importance of the threshold function
    • Relationship between BPNN and MPP
    • Various aspects of practical improvements of BPNN

    9

  • XOR (3-layer NN)

    10

    [Figure: a 3-layer network. Inputs x1, x2 feed units S1, S2; S1, S2 feed hidden units S3, S4 through weights w13, w23, w14, w24; S3, S4 feed output unit S5 through weights w35, w45. A second figure shows a generic unit: inputs x1, ..., xd with weights w1, ..., wd and bias -b, summed and thresholded to give output y.]

    S1, S2 are identity functions
    S3, S4, S5 are sigmoid
    w13 = 1.0, w14 = -1.0
    w24 = 1.0, w23 = -1.0
    w35 = 0.11, w45 = -0.1
    The input takes on only -1 and 1
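    A quick forward pass through this network in Python (my sketch; the listed weights look like initial values, which backpropagation would then adjust until the output separates the XOR cases):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights as given on the slide (no biases are listed, so none are used here).
w13, w23 = 1.0, -1.0    # into hidden unit S3
w14, w24 = -1.0, 1.0    # into hidden unit S4
w35, w45 = 0.11, -0.1   # into output unit S5

def forward(x1, x2):
    # S1, S2 are identity, so the hidden net inputs are plain weighted sums.
    s3 = sigmoid(w13 * x1 + w23 * x2)
    s4 = sigmoid(w14 * x1 + w24 * x2)
    return sigmoid(w35 * s3 + w45 * s4)

for x1 in (-1, 1):          # inputs take on only -1 and +1
    for x2 in (-1, 1):
        print(x1, x2, forward(x1, x2))
```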

  • BP – 3-Layer Network

    11

    [Figure: input $x_i$ feeds hidden node $q$ through weight $w_{iq}$, giving net input $h_q$ and activation $S_q(h_q)$; the hidden activations feed output node $j$ through weight $w_{qj}$, giving net input $y_j$ and output $S(y_j)$.]

    $E = \dfrac{1}{2}\sum_j \left(T_j - S(y_j)\right)^2$

    The problem is essentially "how to choose the weights $\omega$ to minimize the error between the expected output and the actual output." The basic idea behind BP is gradient descent. Choose a set of initial weights $\omega_{st}$ (where $\omega_{st}$ is the weight connecting input $s$ to neuron $t$) and iterate:

    $\omega_{st}^{k+1} = \omega_{st}^{k} - c_k \dfrac{\partial E^k}{\partial \omega_{st}^k}$

  • Exercise

    12

    $y_j = \sum_q \omega_{qj} S_q(h_q) \;\Rightarrow\; \dfrac{\partial y_j}{\partial \omega_{qj}} = S_q(h_q)$   and   $h_q = \sum_i \omega_{iq} x_i \;\Rightarrow\; \dfrac{\partial h_q}{\partial \omega_{iq}} = x_i$

    [Figure: the same 3-layer network as on the previous slide.]

  • *The Derivative – Chain Rule

    13

    $\Delta\omega_{qj} = -\dfrac{\partial E}{\partial \omega_{qj}} = -\dfrac{\partial E}{\partial S_j}\dfrac{\partial S_j}{\partial y_j}\dfrac{\partial y_j}{\partial \omega_{qj}} = \left(T_j - S_j\right) S_j'\, S_q(h_q)$

    $\Delta\omega_{iq} = -\dfrac{\partial E}{\partial \omega_{iq}} = -\left[\sum_j \dfrac{\partial E}{\partial S_j}\dfrac{\partial S_j}{\partial y_j}\dfrac{\partial y_j}{\partial S_q}\right]\dfrac{\partial S_q}{\partial h_q}\dfrac{\partial h_q}{\partial \omega_{iq}} = \left[\sum_j \left(T_j - S_j\right) S_j'\, \omega_{qj}\right] S_q'\, x_i$

    [Figure: the same 3-layer network as on the previous slide.]
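    Put together, the two Δω expressions give one full backpropagation step. Below is a minimal sketch for the 3-layer network above (my code, not from the slides); it assumes sigmoid activations at both layers, and the array shapes and learning rate c are illustrative:

```python
import numpy as np

def S(z):   # sigmoid activation
    return 1.0 / (1.0 + np.exp(-z))

def bp_step(x, T, w_iq, w_qj, c=0.1):
    """One gradient step for a 3-layer net: x (d,), T (m,), w_iq (d,nH), w_qj (nH,m)."""
    # Forward pass
    h = x @ w_iq            # net input to hidden units: h_q = sum_i w_iq x_i
    Sq = S(h)               # hidden activations S_q(h_q)
    y = Sq @ w_qj           # net input to output units: y_j = sum_q w_qj S_q
    Sj = S(y)               # output activations S(y_j)
    # Backward pass (for the sigmoid, S'(z) = S(z)(1 - S(z)))
    delta_j = (T - Sj) * Sj * (1.0 - Sj)            # (T_j - S_j) S_j'
    dw_qj = np.outer(Sq, delta_j)                   # Δω_qj = (T_j - S_j) S_j' S_q
    delta_q = (w_qj @ delta_j) * Sq * (1.0 - Sq)    # [Σ_j (T_j - S_j) S_j' ω_qj] S_q'
    dw_iq = np.outer(x, delta_q)                    # Δω_iq = [...] S_q' x_i
    # Δω already points downhill, so the update adds it.
    return w_iq + c * dw_iq, w_qj + c * dw_qj

# Usage: a 2-2-1 network with random initial weights.
rng = np.random.default_rng(1)
w1, w2 = rng.uniform(-1, 1, (2, 2)), rng.uniform(-1, 1, (2, 1))
w1, w2 = bp_step(np.array([1.0, -1.0]), np.array([1.0]), w1, w2)
```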

  • Threshold Function

    The traditional threshold function, as proposed by McCulloch-Pitts, is a binary function.
    The importance of being differentiable:
    a threshold-like but differentiable form for S (which took some 25 years to arrive at) is the sigmoid:

    14

    $S(x) = \dfrac{1}{1 + \exp(-x)}$
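    The property that makes the sigmoid so convenient for BP is that its derivative comes for free: S'(x) = S(x)(1 - S(x)). A quick numeric check (my sketch):

```python
import numpy as np

def S(x):
    return 1.0 / (1.0 + np.exp(-x))

x = 0.7
analytic = S(x) * (1.0 - S(x))                 # S'(x) = S(x)(1 - S(x))
numeric = (S(x + 1e-6) - S(x - 1e-6)) / 2e-6   # central finite difference
print(analytic, numeric)                       # agree to ~6 decimal places
```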

  • BP vs. MPP

    15

    $E(w) = \sum_x \sum_k \left[g_k(x;w) - T_k\right]^2 = \sum_k \left\{ \sum_{x \in \omega_k} \left[g_k(x;w) - 1\right]^2 + \sum_{x \notin \omega_k} \left[g_k(x;w) - 0\right]^2 \right\}$

    $= n \left\{ \dfrac{n_k}{n}\dfrac{1}{n_k}\sum_{x \in \omega_k} \left[g_k(x;w) - 1\right]^2 + \dfrac{n - n_k}{n}\dfrac{1}{n - n_k}\sum_{x \notin \omega_k} \left[g_k(x;w)\right]^2 \right\}$

    $\lim_{n\to\infty} \dfrac{1}{n} E(w) = P(\omega_k) \int \left[g_k(x;w) - 1\right]^2 p(x \mid \omega_k)\,dx + P(\omega_{i\neq k}) \int g_k^2(x;w)\, p(x \mid \omega_{i\neq k})\,dx$

    $= \int \left[g_k^2(x;w) - 2 g_k(x;w) + 1\right] p(x, \omega_k)\,dx + \int g_k^2(x;w)\, p(x, \omega_{i\neq k})\,dx$

    $= \int g_k^2(x;w)\, p(x)\,dx - 2 \int g_k(x;w)\, p(x, \omega_k)\,dx + \int p(x, \omega_k)\,dx$

    $= \int \left[g_k(x;w) - P(\omega_k \mid x)\right]^2 p(x)\,dx + C$

    That is, in the limit of infinite data, minimizing the squared error forces the network output $g_k(x;w)$ to approximate the posterior probability $P(\omega_k \mid x)$; this is the link between BPNN and MPP.

  • Questions

    • Differences between feedback and feedforward neural networks
    • Limitations of the perceptron
    • Why go deeper?
    • BPNN structure
    • BPNN cost function and optimization method
    • The importance of the threshold function
    • Relationship between BPNN and MPP
    • Various aspects of practical improvements of BPNN

    16

  • Activation (Threshold) Function

    • The signum function
    • The sigmoid function
      – Nonlinear
      – Saturates
      – Continuity and smoothness
      – Monotonicity (so S'(x) > 0)
    • Improved sigmoid (a quick numeric check follows)
      – Centered at zero
      – Antisymmetric (odd), which leads to faster learning
      – a = 1.716, b = 2/3, to keep S'(0) close to 1; the linear range is –1 < x < +1
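    A quick check of these constants, assuming the improved activation has the common antisymmetric form S(x) = a·tanh(bx) (an assumption; the slide lists only a and b):

```python
import numpy as np

a, b = 1.716, 2.0 / 3.0

def S(x):
    return a * np.tanh(b * x)   # assumed antisymmetric sigmoid, centered at zero

print(S(1.0))    # ~1.0: targets of +/-1 sit at the edge of the linear range
print(S(-1.0))   # ~-1.0: odd/antisymmetric
print(a * b)     # S'(0) = a*b ~ 1.14: close to unit slope at the origin
```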


  • Data Standardization

    • Problem with the units of the inputs
      – Different units can cause magnitude differences between features
      – Even with the same units, features can differ in magnitude

    • Standardization – scaling the input (a sketch follows)
      – Shift the input patterns so that the average of each feature over the training set is zero
      – Scale the full data set so that each feature component has the same variance (around 1.0)

    19
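    A minimal sketch of this standardization in NumPy (my code; note the statistics must come from the training set only and then be reused on new data):

```python
import numpy as np

X_train = np.array([[170.0, 60.0], [180.0, 80.0], [160.0, 55.0]])  # e.g. cm, kg

mu = X_train.mean(axis=0)      # per-feature average over the training set
sigma = X_train.std(axis=0)    # per-feature standard deviation

X_std = (X_train - mu) / sigma               # zero mean, unit variance per feature
print(X_std.mean(axis=0), X_std.std(axis=0)) # ~[0, 0] and [1, 1]

# Apply the SAME shift and scale to any new/test pattern:
x_new = (np.array([175.0, 70.0]) - mu) / sigma
```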

  • Target Values (output)

    Instead of one-of-c encoding (c is the number of classes), we use +1/-1:
      – +1 indicates the target category
      – -1 indicates a non-target category

    This gives faster convergence. (A sketch follows.)

    20
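    For a c-class problem this just means building the target vector from -1's with a single +1 (a small sketch):

```python
import numpy as np

def target_vector(label, c):
    """+1 for the target category, -1 for all non-target categories."""
    T = -np.ones(c)
    T[label] = 1.0
    return T

print(target_vector(2, 4))   # [-1. -1.  1. -1.]
```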

  • Number of Hidden Layers

    • The number of hidden layers governs the expressive power of the network, and also the complexity of the decision boundary

    • More hidden layers -> higher expressive power -> better tuned to the particular training set -> poor performance on the testing set

    • Rule of thumb
      – Choose the number of weights to be roughly n/10, where n is the total number of samples in the training set
      – Start with a "large" number of hidden units, and "decay", prune, or eliminate weights

    21

  • Number of Hidden Layers

    [Figure only]

    22

  • Initializing Weights

    • Can't start with zero
    • Fast and uniform learning
      – All weights reach their final equilibrium values at about the same time
      – Choose weights randomly from a uniform distribution to help ensure uniform learning
      – Equal negative and positive weights
      – Set the weights such that the integrated value at a hidden unit is in the range of –1 to +1
      – Input-to-hidden weights: (-1/sqrt(d), 1/sqrt(d)), where d is the number of inputs
      – Hidden-to-output weights: (-1/sqrt(nH), 1/sqrt(nH)), where nH is the number of connected hidden units (a sketch of this scheme follows)

    23
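    A sketch of this initialization scheme (my code; the layer sizes d, nH, m are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, nH, m = 4, 3, 2   # illustrative sizes: inputs, hidden units, outputs

# Uniform, symmetric about zero, scaled so net inputs land roughly in (-1, +1)
w_input_hidden  = rng.uniform(-1/np.sqrt(d),  1/np.sqrt(d),  size=(d, nH))
w_hidden_output = rng.uniform(-1/np.sqrt(nH), 1/np.sqrt(nH), size=(nH, m))
```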

  • Learning Rate

    The optimal learning rate
      – Calculate the 2nd derivative of the objective function with respect to each weight
      – Set the optimal learning rate separately for each weight
      – A learning rate of 0.1 is often adequate

    24

    $c_{\mathrm{opt}} = \left(\dfrac{\partial^2 \mathrm{MSE}}{\partial \omega^2}\right)^{-1}$
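    For a quadratic error surface this rule steps straight to the minimum in one move; here is a tiny 1-D illustration (my sketch) with the made-up error E(ω) = (ω - 3)²:

```python
# Quadratic toy error: E(w) = (w - 3)^2, so dE/dw = 2(w - 3) and d2E/dw2 = 2.
w = 10.0
c_opt = 1.0 / 2.0             # c_opt = (d2E/dw2)^(-1)
w = w - c_opt * 2 * (w - 3)   # one gradient step with the optimal rate
print(w)                      # 3.0: the minimum, reached in a single step
```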

  • Plateaus or Flat Surface in S’

    Plateaus
      – Regions where the derivative $\partial E / \partial \omega_{st}$ is very small
      – Occur when the sigmoid function saturates

    Momentum
      – Allows the network to learn more quickly when plateaus in the error surface exist (a sketch follows)

    25

    Without momentum:  $\omega_{st}^{k+1} = \omega_{st}^{k} - c_k \dfrac{\partial E^k}{\partial \omega_{st}^k}$

    With momentum:  $\omega_{st}^{k+1} = \omega_{st}^{k} + (1 - c_k)\,\Delta\omega_{bp}^{k} + c_k\left(\omega_{st}^{k} - \omega_{st}^{k-1}\right)$
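    A sketch of the momentum update as written above (my code; Δω_bp stands for the plain backprop step, and the coefficient and arrays are made up for illustration):

```python
import numpy as np

def momentum_update(w, w_prev, dw_bp, ck=0.9):
    """w^{k+1} = w^k + (1 - c_k)*dw_bp^k + c_k*(w^k - w^{k-1})."""
    return w + (1.0 - ck) * dw_bp + ck * (w - w_prev)

# Usage: keep the previous weights around between iterations.
w_prev, w = np.zeros(3), np.array([0.1, -0.2, 0.05])
dw_bp = np.array([0.01, 0.03, -0.02])   # a made-up backprop step
w_prev, w = w, momentum_update(w, w_prev, dw_bp)
```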


  • Weight Decay

    Weight decay should almost always lead to improved performance:

    $\omega^{\mathrm{new}} = \omega^{\mathrm{old}}\,(1 - \varepsilon)$

    27
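    In code, weight decay is a one-line shrink applied after each weight update (my sketch, with a made-up ε):

```python
import numpy as np

eps = 1e-4                       # made-up decay constant
w = np.array([0.5, -1.2, 0.8])
w = w * (1.0 - eps)              # w_new = w_old * (1 - eps): pulls weights toward zero
```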

  • Batch Training vs. On-line Training

    Batch training
      – Add up the weight changes for all the training patterns and apply them in one go
      – True gradient descent (GD)

    On-line training
      – Update all the weights immediately after processing each training pattern
      – Not true GD, but often faster learning (a sketch of both regimes follows)

    28
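    The two regimes differ only in where the update is applied (my sketch; `grad(w, x, t)` is a stand-in for the per-pattern backprop gradient, here the squared error of a linear unit, and the data and rate are made up):

```python
import numpy as np

def grad(w, x, t):                 # stand-in for the per-pattern BP gradient
    return 2 * (w @ x - t) * x     # gradient of (w.x - t)^2 for a linear unit

X, T = np.array([[1., 2.], [2., 1.], [0., 1.]]), np.array([1., 0., 1.])
c = 0.05

# Batch training: accumulate over ALL patterns, then apply once (true GD).
w = np.zeros(2)
for epoch in range(100):
    w -= c * sum(grad(w, x, t) for x, t in zip(X, T))

# On-line training: update immediately after EACH pattern (not true GD).
w = np.zeros(2)
for epoch in range(100):
    for x, t in zip(X, T):
        w -= c * grad(w, x, t)
```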

  • Other Improvements

    Other error functions, e.g., the Minkowski error (a sketch follows)

    29
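    The Minkowski error generalizes the squared error's exponent 2 to a power R; choosing R < 2 reduces the influence of outliers (a sketch, with R as a free parameter):

```python
import numpy as np

def minkowski_error(T, S_out, R=2.0):
    """E = sum_j |T_j - S_j|^R; R = 2 recovers the usual sum-of-squares."""
    return np.sum(np.abs(T - S_out) ** R)

T, S_out = np.array([1.0, -1.0]), np.array([0.6, -0.2])
print(minkowski_error(T, S_out, R=2.0))   # 0.8
print(minkowski_error(T, S_out, R=1.0))   # 1.2
```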

  • Further Discussions

    • How to draw the decision boundary of a BPNN?
    • How to set the range of valid outputs?
      – 0-0.5 and 0.5-1?
      – 0-0.2 and 0.8-1?
      – 0.1-0.2 and 0.8-0.9?

    • The importance of having symmetric initial input

    30

