Review, Catch-up, Question & Answer
Transcript
  • Review, Catch-up, Question & Answer

  • Outline

    • Dear Prof. Lathrop, would you mind explaining more about statistical learning (Chapter 20) while you are going through the material in today's class? It is very difficult for me to understand, and a few of my classmates have the same concern. Thank you.
    – Reading assigned was Chapters 14.1, 14.2, plus:
    – 20.1-20.3.2 (3rd ed.)
    – 20.1-20.7, but not the part of 20.3 after "Learning Bayesian Networks" (2nd ed.)

    • Machine Learning

    • Probability and Uncertainty

    • Question & Answer
    – If time, Viola & Jones, 2004

  • Computing with Probabilities: Law of Total Probability

    Law of Total Probability (aka "summing out" or marginalization):
    P(a) = Σb P(a, b)
         = Σb P(a | b) P(b),   where B is any random variable

    Why is this useful? Given a joint distribution (e.g., P(a,b,c,d)) we can obtain any "marginal" probability (e.g., P(b)) by summing out the other variables, e.g.,

    P(b) = Σa Σc Σd P(a, b, c, d)

    Less obvious: we can also compute any conditional probability of interest given a joint distribution, e.g.,

    P(c | b) = Σa Σd P(a, c, d | b) = (1 / P(b)) Σa Σd P(a, c, d, b)

    where (1 / P(b)) is just a normalization constant

    Thus, the joint distribution contains the information we need to compute any probability of interest.
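To make "summing out" concrete, here is a minimal Python sketch (not from the lecture) that stores a small joint distribution P(a, b, c) as a table and computes a marginal and a conditional by marginalization; the numbers are placeholders.

```python
from itertools import product

# Hypothetical joint distribution P(A, B, C) over three binary variables,
# stored as a table keyed by (a, b, c).  Any table summing to 1 works.
joint = {key: 1.0 / 8.0 for key in product([0, 1], repeat=3)}

def marginal_b(b):
    # P(b) = sum over a, c of P(a, b, c)
    return sum(p for (a, bb, c), p in joint.items() if bb == b)

def conditional_c_given_b(c, b):
    # P(c | b) = [sum over a of P(a, c, b)] / P(b)
    numer = sum(p for (a, bb, cc), p in joint.items() if bb == b and cc == c)
    return numer / marginal_b(b)

print(marginal_b(1), conditional_c_given_b(1, 0))
```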

  • Computing with Probabilities: The Chain Rule or Factoring

    We can always write
    P(a, b, c, …, z) = P(a | b, c, …, z) P(b, c, …, z)
    (by definition of joint probability)

    Repeatedly applying this idea, we can write
    P(a, b, c, …, z) = P(a | b, c, …, z) P(b | c, …, z) P(c | …, z) … P(z)

    This factorization holds for any ordering of the variables

    This is the chain rule for probabilities.

  • Conditional Independence

    • 2 random variables A and B are conditionally independent given C iff

    P(a, b | c) = P(a | c) P(b | c) for all values a, b, c

    • More intuitive (equivalent) conditional formulation– A and B are conditionally independent given C iff

    P(a | b, c) = P(a | c)  OR  P(b | a, c) = P(b | c),  for all values a, b, c

    – Intuitive interpretation:P(a | b, c) = P(a | c) tells us that learning about b, given that we

    already know c, provides no change in our probability for a, i.e., b contains no information about a beyond what c provides

    • Can generalize to more than 2 random variables
    – E.g., K different symptom variables X1, X2, …, XK, and C = disease
    – P(X1, X2, …, XK | C) = Πi P(Xi | C)
    – Also known as the naïve Bayes assumption

  • "…probability theory is more fundamentally concerned with the structure of reasoning and causation than with numbers."

    Glenn Shafer and Judea Pearl, Introduction to Readings in Uncertain Reasoning, Morgan Kaufmann, 1990

  • Bayesian Networks

    • A Bayesian network specifies a joint distribution in a structured form

    • Represent dependence/independence via a directed graph
    – Nodes = random variables

    – Edges = direct dependence

    • Structure of the graph => conditional independence relations. In general,

    p(X1, X2, …, XN) = Πi p(Xi | parents(Xi))

    • Requires that graph is acyclic (no directed cycles)

    (the left-hand side is the full joint distribution; the right-hand side is the graph-structured approximation)


    • 2 components to a Bayesian network
    – The graph structure (conditional independence assumptions)
    – The numerical probabilities (for each variable given its parents)

  • Example of a simple Bayesian network

    (Graph: A → C ← B)

    p(A,B,C) = p(C|A,B) p(A) p(B)

    • Probability model has simple factored form

    • Directed edges => direct dependence

    • Absence of an edge => conditional independence

    • Also known as belief networks, graphical models, causal networks

    • Other formulations, e.g., undirected graphical models
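As a sanity check on the factored form p(A,B,C) = p(C|A,B) p(A) p(B), here is a minimal Python sketch for the network A → C ← B; the CPT numbers are illustrative, not from the lecture.

```python
# Illustrative CPTs for the network A -> C <- B (all variables binary).
p_A = {1: 0.3, 0: 0.7}
p_B = {1: 0.6, 0: 0.4}
# p(C=1 | A=a, B=b); p(C=0 | a, b) is the complement.
p_C1_given = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.9}

def joint(a, b, c):
    # The factored joint: p(C|A,B) * p(A) * p(B)
    pc = p_C1_given[(a, b)] if c == 1 else 1.0 - p_C1_given[(a, b)]
    return pc * p_A[a] * p_B[b]

# The 8 joint probabilities must sum to 1.
print(sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)))
```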

  • Examples of 3-way Bayesian Networks

    (Graph: A, B, C with no edges.) Marginal independence: p(A,B,C) = p(A) p(B) p(C)

  • Examples of 3-way Bayesian Networks

    (Graph: B ← A → C)

    Conditionally independent effects: p(A,B,C) = p(B|A) p(C|A) p(A)

    B and C are conditionally independent given A

    e.g., A is a disease, and we model B and C as conditionally independent symptoms given A

  • Examples of 3-way Bayesian Networks

    (Graph: A → C ← B)

    Independent causes: p(A,B,C) = p(C|A,B) p(A) p(B)

    "Explaining away" effect: given C, observing A makes B less likely, e.g., the earthquake/burglary/alarm example

    A and B are (marginally) independent but become dependent once C is known

  • Examples of 3-way Bayesian Networks

    (Graph: A → B → C) Markov dependence: p(A,B,C) = p(C|B) p(B|A) p(A)

  • Example

    • Consider the following 5 binary variables:
    – B = a burglary occurs at your house
    – E = an earthquake occurs at your house
    – A = the alarm goes off
    – J = John calls to report the alarm
    – M = Mary calls to report the alarm

    – What is P(B | M, J) ? (for example)

    – We can use the full joint distribution to answer this question
    • Requires 2^5 = 32 probabilities

    • Can we use prior domain knowledge to come up with a Bayesian network that requires fewer probabilities?

  • Constructing a Bayesian Network: Step 1

    • Order the variables in terms of causality (may be a partial order)

    e.g., {E, B} -> {A} -> {J, M}

    • P(J, M, A, E, B) = P(J, M | A, E, B) P(A | E, B) P(E, B)

    ≈ P(J, M | A) P(A| E, B) P(E) P(B)

    ≈ P(J | A) P(M | A) P(A| E, B) P(E) P(B)

    These CI assumptions are reflected in the graph structure of the Bayesian network

  • The Resulting Bayesian Network

  • Constructing this Bayesian Network: Step 2

    • P(J, M, A, E, B) = P(J | A) P(M | A) P(A | E, B) P(E) P(B)

    • There are 3 conditional probability tables (CPTs) to be determined: P(J | A), P(M | A), P(A | E, B)
    – Requiring 2 + 2 + 4 = 8 probabilities

    • And 2 marginal probabilities P(E), P(B) -> 2 more probabilities

    • Where do these probabilities come from?– Expert knowledge– From data (relative frequency estimates)– Or a combination of both - see discussion in Section 20.1 and 20.2 (optional)

  • The Bayesian network

  • Number of Probabilities in Bayesian Networks

    • Consider n binary variables

    • Unconstrained joint distribution requires O(2^n) probabilities

    • If we have a Bayesian network with a maximum of k parents for any node, then we need O(n 2^k) probabilities

    • Example
    – Full unconstrained joint distribution: n = 30 needs ~10^9 probabilities
    – Bayesian network: n = 30, k = 4 needs 480 probabilities

  • The Bayesian Network from a different Variable Ordering

  • The Bayesian Network from a different Variable Ordering

  • Given a graph, can we "read off" conditional independencies?

    The “Markov Blanket” of X(the gray area in the figure)

    X is conditionally independent of everything else, GIVEN the values of:

    * X's parents
    * X's children
    * X's children's parents

    X is conditionally independent of its non-descendants, GIVEN the values of its parents.

  • General Strategy for inference

    • Want to compute P(q | e)

    Step 1:
    P(q | e) = P(q, e) / P(e) ∝ P(q, e), since P(e) is constant w.r.t. Q

    Step 2:
    P(q, e) = Σ_{a..z} P(q, e, a, b, …, z), by the law of total probability

    Step 3:
    Σ_{a..z} P(q, e, a, b, …, z) = Σ_{a..z} Πi P(variable i | parents of variable i)   (using Bayesian network factoring)

    Step 4:Distribute summations across product terms for efficient computation
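The following Python sketch walks through steps 1-3 for the small network A → C ← B from the earlier slide, computing P(A | C = 1) by enumeration; all CPT numbers are assumed for illustration and are not from the lecture.

```python
# Enumeration-based inference of P(A | C=1) for the network A -> C <- B.
p_A = {1: 0.3, 0: 0.7}
p_B = {1: 0.6, 0: 0.4}
p_C1_given = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.9}

def p_C(c, a, b):
    return p_C1_given[(a, b)] if c == 1 else 1.0 - p_C1_given[(a, b)]

# Steps 2-3: P(a, C=1) = sum over b of p(a) p(b) p(C=1 | a, b)   (BN factoring)
unnormalized = {a: sum(p_A[a] * p_B[b] * p_C(1, a, b) for b in (0, 1))
                for a in (0, 1)}

# Step 1: normalize, since P(C=1) is constant with respect to A.
z = sum(unnormalized.values())
print({a: v / z for a, v in unnormalized.items()})  # P(A=0 | C=1), P(A=1 | C=1)
```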

  • Naïve Bayes Model

    (Graph: C → X1, X2, X3, …, Xn)

    P(C | X1, …, Xn) ∝ Πi P(Xi | C) P(C)

    Features X are conditionally independent given the class variable C

    Widely used in machine learning,
    e.g., spam email classification: X's = counts of words in emails

    Probabilities P(C) and P(Xi | C) can easily be estimated from labeled data

  • Naïve Bayes Model (2)

    P(C | X1, …, Xn) ∝ Πi P(Xi | C) P(C)

    Probabilities P(C) and P(Xi | C) can easily be estimated from labeled data

    P(C = cj) ≈ #(Examples with class label cj) / #(Examples)

    P(Xi = xik | C = cj) ≈ #(Examples with Xi value xik and class label cj) / #(Examples with class label cj)

    Usually easiest to work with logs:
    log [ P(C | X1, …, Xn) ] = Σi log P(Xi | C) + log P(C)   (up to an additive constant)

    DANGER: Suppose there are ZERO examples with Xi value xik and class label cj?
    Then an unseen example with Xi value xik will NEVER predict class label cj!

    Practical solutions: Pseudocounts, e.g., add 1 to every #() , etc.Theoretical solutions: Bayesian inference, beta distribution, etc.
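Below is a minimal Python sketch of these estimates with add-one pseudocounts and log-space prediction. It is my own illustration (tiny made-up dataset, hypothetical variable names), not the lecture's code.

```python
import math
from collections import Counter, defaultdict

# Toy labeled data: (feature tuple, class label).  Values and labels are made up.
data = [((1, 0), "spam"), ((1, 1), "spam"), ((0, 0), "ham"), ((0, 1), "ham")]
values = (0, 1)                      # possible values of each feature Xi

class_counts = Counter(c for _, c in data)
feat_counts = defaultdict(Counter)   # feat_counts[(i, c)][x] = #(Xi = x and C = c)
for x, c in data:
    for i, xi in enumerate(x):
        feat_counts[(i, c)][xi] += 1

def log_posterior(x, c):
    # log P(c) + sum_i log P(xi | c), with +1 pseudocounts to avoid zeros.
    lp = math.log(class_counts[c] / len(data))
    for i, xi in enumerate(x):
        lp += math.log((feat_counts[(i, c)][xi] + 1) /
                       (class_counts[c] + len(values)))
    return lp

def predict(x):
    return max(class_counts, key=lambda c: log_posterior(x, c))

print(predict((1, 0)))  # "spam" for this toy data
```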

  • Hidden Markov Model (HMM)

    (Graph: hidden states S1 → S2 → S3 → … → Sn, with each observed Yt depending on its hidden state St)

    Two key assumptions:
    1. The hidden state sequence is Markov
    2. Observation Yt is CI of all other variables given St

    Widely used in speech recognition, protein sequence models

    Since this is a Bayesian network polytree, inference is linear in n

  • Summary

    • Bayesian networks represent a joint distribution using a graph

    • The graph encodes a set of conditional independence assumptions

    • Answering queries (or inference or reasoning) in a Bayesian network amounts to efficient computation of appropriate conditional probabilities

    • Probabilistic inference is intractable in the general case– But can be carried out in linear time for certain classes of Bayesian

    networks

  • Outline

    • Dear Prof. Lathrop, would you mind explaining more about statistical learning (Chapter 20) while you are going through the material in today's class? It is very difficult for me to understand, and a few of my classmates have the same concern. Thank you.
    – Reading assigned was Chapters 14.1, 14.2, plus:
    – 20.1-20.3.2 (3rd ed.)
    – 20.1-20.7, but not the part of 20.3 after "Learning Bayesian Networks" (2nd ed.)

    • Machine Learning

    • Probability and Uncertainty

    • Question & Answer
    – If time, Viola & Jones, 2004

  • Terminology

    • Attributes
    – Also known as features, variables, independent variables, covariates

    • Target Variable– Also known as goal predicate, dependent variable, …

    • Classification– Also known as discrimination, supervised classification, …

    • Error function
    – Also known as objective function, loss function, …

  • Inductive learning

    • Let x represent the input vector of attributes

    • Let f(x) represent the value of the target variable for x
    – The implicit mapping from x to f(x) is unknown to us
    – We just have training data pairs, D = {x, f(x)}, available

    • We want to learn a mapping from x to f, i.e., h(x; θ) is "close" to f(x) for all training data points x,
    where θ are the parameters of our predictor h(..)

    • Examples:
    – h(x; θ) = sign(w1x1 + w2x2 + w3)
    – hk(x) = (x1 OR x2) AND (x3 OR NOT(x4))

  • Decision Tree Representations

    • Decision trees are fully expressive– can represent any Boolean function– Every path in the tree could represent 1 row in the truth table

    – Yields an exponentially large tree
    • Truth table is of size 2^d, where d is the number of attributes

  • Pseudocode for Decision tree learning

  • Information Gain

    • H(p) = entropy of class distribution at a particular node

    • H(p | A) = conditional entropy = average entropy of the conditional class distribution, after we have partitioned the data according to the values in A

    • Gain(A) = H(p) – H(p | A)

    • Simple rule in decision tree learning
    – At each internal node, split on the attribute with the largest information gain (or equivalently, with smallest H(p|A))

    • Note that by definition, conditional entropy can’t be greater than the entropy
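A short Python sketch of these quantities (my illustration; the toy data is made up):

```python
import math
from collections import Counter

def entropy(labels):
    # H(p) for the empirical class distribution of `labels`.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attr_values, labels):
    # Gain(A) = H(p) - H(p | A); `attr_values` are A's values per example.
    n = len(labels)
    h_cond = 0.0
    for v in set(attr_values):
        subset = [y for a, y in zip(attr_values, labels) if a == v]
        h_cond += (len(subset) / n) * entropy(subset)      # weighted average
    return entropy(labels) - h_cond

# Toy example: this attribute perfectly predicts the class, so the gain is 1 bit.
print(information_gain([0, 0, 1, 1], ["no", "no", "yes", "yes"]))
```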

  • How Overfitting affects Prediction

    (Figure: predictive error vs. model complexity. Error on training data decreases as model complexity grows; error on test data is U-shaped, with underfitting at low complexity, overfitting at high complexity, and an ideal range for model complexity in between.)

  • Disjoint Validation Data Sets

    (Figure: the full data set split into disjoint validation data and training data - the 1st partition.)

  • Disjoint Validation Data Sets

    (Figure: the full data set split two different ways into disjoint validation and training data - the 1st and 2nd partitions.)

  • Classification in Euclidean Space

    • A classifier is a partition of the space x into disjoint decision regions
    – Each region has a label attached
    – Regions with the same label need not be contiguous
    – For a new test point, find what decision region it is in, and predict

    the corresponding label

    • Decision boundaries = boundaries between decision regions– The “dual representation” of decision regions

    • We can characterize a classifier by the equations for its decision boundaries

    • Learning a classifier = searching for the decision boundaries that optimize our objective function

  • Decision Tree Example

    (Figure: a decision tree that tests Income > t1, Debt > t2, and Income > t3, together with the corresponding axis-parallel partition of the (Income, Debt) plane.)

    Note: tree boundaries are linear and axis-parallel

  • Another Example: Nearest Neighbor Classifier

    • The nearest-neighbor classifier
    – Given a test point x', compute the distance between x' and each input data point
    – Find the closest neighbor in the training data
    – Assign x' the class label of this neighbor
    – (sort of generalizes minimum distance classifier to exemplars)

    • If Euclidean distance is used as the distance measure (the most common choice), the nearest-neighbor classifier results in piecewise linear decision boundaries

    • Many extensions
    – e.g., kNN: vote based on the k nearest neighbors
    – k can be chosen by cross-validation
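A minimal Python sketch of a k-nearest-neighbor classifier with Euclidean distance (my illustration; the training points are made up):

```python
import math
from collections import Counter

def knn_predict(train, x_new, k=3):
    # train: list of (feature vector, label); x_new: feature vector.
    # Majority vote over the k training points closest in Euclidean distance.
    nearest = sorted(train, key=lambda ex: math.dist(ex[0], x_new))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), "A"), ((1, 0), "A"), ((5, 5), "B"), ((6, 5), "B")]
print(knn_predict(train, (0.5, 0.2), k=1))  # "A"
print(knn_predict(train, (5.5, 5.1), k=3))  # "B"
```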

  • Linear Classifiers

    • Linear classifier => a single linear decision boundary (for the 2-class case)

    • We can always represent a linear decision boundary by a linear equation:

    w1 x1 + w2 x2 + … + wd xd = Σj wj xj = wᵀx = 0

    • In d dimensions, this defines a (d-1)-dimensional hyperplane
    – d=3: we get a plane; d=2: we get a line

    • For prediction we simply see if Σj wj xj > 0

    • The wi are the weights (parameters)
    – Learning consists of searching in the d-dimensional weight space for the set of weights (the linear boundary) that minimizes an error measure
    – A threshold can be introduced by a "dummy" feature that is always one; its weight corresponds to (the negative of) the threshold

    • Note that a minimum distance classifier is a special (restricted) case of a linear classifier

  • (Figure: two classes of points plotted against FEATURE 1 and FEATURE 2, with the minimum-error linear decision boundary drawn between them.)

  • The Perceptron Classifier (pages 740-743 in text)

    • The perceptron classifier is just another name for a linear classifier for 2-class data, i.e.,

    output(x) = sign( Σj wj xj )

    • Loosely motivated by a simple model of how neurons fire

    • For mathematical convenience, class labels are +1 for one class and -1 for the other

    • Two major types of algorithms for training perceptrons– Objective function = classification accuracy (“error correcting”)– Objective function = squared error (use gradient descent)

    – Gradient descent is generally faster and more efficient – but there is a problem! No gradient!

  • Two different types of perceptron output

    x-axis below is f(x) = f = weighted sum of inputs; y-axis is the perceptron output

    (Figure: two output functions plotted against f - a thresholded output o(f) that takes values +1 or -1, and a sigmoid output σ(f) that takes real values between -1 and +1.)

    The sigmoid is in effect an approximation to the threshold function above, but has a gradient that we can use for learning

  • Gradient Descent Update Equation

    • From basic calculus, for perceptron with sigmoid, and squared error objective function, gradient for a single input x(i) is

    ∂E[w]/∂wj = − ( y(i) − σ(f(i)) ) σ′(f(i)) xj(i)

    • Gradient descent weight update rule:

    wj = wj + α ( y(i) − σ(f(i)) ) σ′(f(i)) xj(i),   where α is the learning rate

    – can rewrite as:

    wj = wj + α * error * c * xj(i)

  • Pseudo-code for Perceptron Training

    Initialize each wj (e.g., randomly)
    While (termination condition not satisfied)
        for i = 1 : N                % loop over data points (an iteration)
            for j = 1 : d            % loop over weights
                deltawj = α ( y(i) − σ(f(i)) ) σ′(f(i)) xj(i)
                wj = wj + deltawj
            end
        end
        calculate termination condition
    end

    • Inputs: N features, N targets (class labels), learning rate α
    • Outputs: a set of learned weights
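For reference, a minimal runnable Python version of this loop (my sketch, not the lecture's code). It uses tanh as the sigmoid that takes values between -1 and +1, so σ′(f) = 1 − tanh²(f); the data and learning rate are illustrative.

```python
import math
import random

def train_sigmoid_perceptron(X, y, alpha=0.1, iterations=100):
    # X: list of feature vectors, y: list of targets in {-1, +1}.
    d = len(X[0])
    w = [random.uniform(-0.1, 0.1) for _ in range(d)]
    for _ in range(iterations):                  # termination: fixed number of passes
        for xi, yi in zip(X, y):                 # loop over data points
            f = sum(wj * xj for wj, xj in zip(w, xi))
            out, grad = math.tanh(f), 1.0 - math.tanh(f) ** 2
            for j in range(d):                   # loop over weights
                w[j] += alpha * (yi - out) * grad * xi[j]
    return w

# Toy, linearly separable data; the last feature is a constant "dummy" input of 1.
X = [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0), (3.0, 3.0, 1.0), (4.0, 4.0, 1.0)]
y = [-1, -1, +1, +1]
w = train_sigmoid_perceptron(X, y)
print([1 if sum(wj * xj for wj, xj in zip(w, xi)) > 0 else -1 for xi in X])
```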

  • Multi-Layer Perceptrons (p744-747 in text)

    • What if we took K perceptrons and trained them in parallel and then took a weighted sum of their sigmoidal outputs?
    – This is a multi-layer neural network with a single "hidden" layer (the outputs of the first set of perceptrons)
    – If we train them jointly in parallel, then intuitively different perceptrons could learn different parts of the solution
    • Mathematically, they define different local decision boundaries in the input space, giving us a more powerful model

    • How would we train such a model?
    – Backpropagation algorithm = clever way to do gradient descent
    – Bad news: many local minima and many parameters
    • training is hard and slow
    – Neural networks generated much excitement in AI research in the late 1980's and 1990's
    • But now techniques like boosting and support vector machines are often preferred

  • Boosting Example

  • First classifier

  • First 2 classifiers

  • First 3 classifiers

  • Final Classifier learned by Boosting

  • Final Classifier learned by Boosting

  • Outline

    • Dear Prof. Lathrop, would you mind explaining more about statistical learning (Chapter 20) while you are going through the material in today's class? It is very difficult for me to understand, and a few of my classmates have the same concern. Thank you.
    – Reading assigned was Chapters 14.1, 14.2, plus:
    – 20.1-20.3.2 (3rd ed.)
    – 20.1-20.7, but not the part of 20.3 after "Learning Bayesian Networks" (2nd ed.)

    • Machine Learning

    • Probability and Uncertainty

    • Question & Answer
    – If time, Viola & Jones, 2004

  • Syntax

    • Basic element: random variable
    • Similar to propositional logic: possible worlds defined by assignment of values to random variables.

    • Boolean random variables, e.g., Cavity (= do I have a cavity?)

    • Discrete random variables, e.g., Weather is one of ⟨sunny, rainy, cloudy, snow⟩

    •Domain values must be exhaustive and mutually exclusive

    • Elementary proposition is an assignment of a value to a random variable, e.g., Weather = sunny; Cavity = false (abbreviated as ¬cavity)

    • Complex propositions are formed from elementary propositions and standard logical connectives, e.g., Weather = sunny ∨ Cavity = false

  • Probability

    • P(a) is the probability of proposition “a”– E.g., P(it will rain in London tomorrow)– The proposition a is actually true or false in the real-world– P(a) = “prior” or marginal or unconditional probability– Assumes no other information is available

    • Axioms:
    – 0 ≤ P(a) ≤ 1
    – P(true) = 1, P(false) = 0
    – P(a OR b) = P(a) + P(b) − P(a AND b)

  • Conditional Probability

    • P(a|b) is the conditional probability of proposition a, conditioned on knowing that b is true,– E.g., P(rain in London tomorrow | raining in London today)– P(a|b) is a “posterior” or conditional probability– The updated probability that a is true, now that we know b– P(a|b) = P(a AND b) / P(b)

    – Syntax: P(a | b) is the probability of a given that b is true
    • a and b can be any propositional sentences
    • e.g., P( John wins OR Mary wins | Bob wins AND Jack loses)

    • P(a|b) obeys the same rules as probabilities,
    – E.g., P(a | b) + P(NOT(a) | b) = 1
    – All probabilities in effect are conditional probabilities

    • E.g., P(a) = P(a | our background knowledge)

  • Random Variables

    • A is a random variable taking values a1, a2, … am– Events are A= a1, A= a2, ….– We will focus on discrete random variables

    • Mutual exclusion: P(A = ai AND A = aj) = 0 for i ≠ j

    • Exhaustive: Σi P(ai) = 1

    The MEE (Mutually Exclusive and Exhaustive) assumption is often useful

    (but not always appropriate, e.g., disease-state for a patient)

    For finite m, can represent P(A) as a table of m probabilities

    For infinite m (e.g., number of tosses before “heads”) we can represent P(A) by a function (e.g., geometric)

  • Joint Distributions

    • Consider 2 random variables: A, B
    – P(a, b) is shorthand for P(A = a AND B = b)
    – Σa Σb P(a, b) = 1
    – Can represent P(A, B) as a table of m² numbers

    • Generalize to more than 2 random variables
    – E.g., A, B, C, …, Z: Σa Σb … Σz P(a, b, …, z) = 1
    – P(A, B, …, Z) is a table of m^K numbers, K = # variables
    • This is a potential problem in practice, e.g., m = 2, K = 20

  • Linking Joint and Conditional Probabilities

    • Basic fact:P(a, b) = P(a | b) P(b)

    – Why? Probability of a and b occurring is the same as probability of a occurring given b is true, times the probability of b occurring

    • Bayes rule:P(a, b) = P(a | b) P(b)

    = P(b | a) P(a) by definition

    => P(b | a) = P(a | b) P(b) / P(a) [Bayes rule]

    Why is this useful?

    Often much more natural to express knowledge in a particular “direction”, e.g., in the causal direction

    e.g., b = disease, a = symptoms. More natural to encode knowledge as P(a|b) than as P(b|a)

  • Sequential Bayesian Reasoning

    • h = hypothesis, e1, e2, .. en = evidence

    • P(h) = prior

    • P(h | e1) proportional to P(e1 | h) P(h) = likelihood of e1 × prior(h)

    • P(h | e1, e2) proportional to P(e1, e2 | h) P(h), which in turn can be written as P(e2 | h, e1) P(e1 | h) P(h)
    ~ likelihood of e2 × "prior"(h given e1)

    • Bayes rule supports sequential reasoning
    – Start with prior P(h)
    – New belief (posterior) = P(h | e1)
    – This becomes the "new prior"
    – Can use this to update to P(h | e1, e2), and so on…
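A small Python sketch of this sequential update (my illustration; the hypotheses, likelihoods, and evidence sequence are made up, and the test results are assumed conditionally independent given the hypothesis):

```python
# Sequential Bayesian updating over two hypotheses.
prior = {"disease": 0.01, "healthy": 0.99}
# P(test result | hypothesis)
likelihood = {("pos", "disease"): 0.9, ("pos", "healthy"): 0.1,
              ("neg", "disease"): 0.1, ("neg", "healthy"): 0.9}

def update(belief, evidence):
    # posterior(h) is proportional to P(evidence | h) * belief(h); then normalize.
    unnorm = {h: likelihood[(evidence, h)] * p for h, p in belief.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

belief = prior
for e in ["pos", "pos", "neg"]:    # evidence arrives one piece at a time
    belief = update(belief, e)     # the posterior becomes the new prior
    print(e, belief)
```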

  • Computing with Probabilities: Law of Total Probability

    Law of Total Probability (aka "summing out" or marginalization):
    P(a) = Σb P(a, b)
         = Σb P(a | b) P(b),   where B is any random variable

    Why is this useful? Given a joint distribution (e.g., P(a,b,c,d)) we can obtain any "marginal" probability (e.g., P(b)) by summing out the other variables, e.g.,

    P(b) = Σa Σc Σd P(a, b, c, d)

    We can compute any conditional probability given a joint distribution, e.g.,

    P(c | b) = Σa Σd P(a, c, d | b) = Σa Σd P(a, c, d, b) / P(b)

  • Computing with Probabilities:The Chain Rule or Factoring

    We can always write
    P(a, b, c, …, z) = P(a | b, c, …, z) P(b, c, …, z)
    (by definition of joint probability)

    Repeatedly applying this idea, we can write
    P(a, b, c, …, z) = P(a | b, c, …, z) P(b | c, …, z) P(c | …, z) … P(z)

    This factorization holds for any ordering of the variables

    This is the chain rule for probabilities.

  • Independence

    • 2 random variables A and B are independent iffP(a, b) = P(a) P(b) for all values a, b

    • More intuitive (equivalent) conditional formulation– A and B are independent iff

    P(a | b) = P(a)  OR  P(b | a) = P(b),  for all values a, b

    – Intuitive interpretation:P(a | b) = P(a) tells us that knowing b provides no change in our

    probability for a, i.e., b contains no information about a

    • Can generalize to more than 2 random variables

    • In practice true independence is very rare– “butterfly in China” effect– Weather and dental example in the text– Conditional independence is much more common and useful

    • Note: independence is an assumption we impose on our model of the world - it does not follow from basic axioms

  • Conditional Independence

    • 2 random variables A and B are conditionally independent given C iff

    P(a, b | c) = P(a | c) P(b | c) for all values a, b, c

    • More intuitive (equivalent) conditional formulation– A and B are conditionally independent given C iff

    P(a | b, c) = P(a | c)  OR  P(b | a, c) = P(b | c),  for all values a, b, c

    – Intuitive interpretation:P(a | b, c) = P(a | c) tells us that learning about b, given that we

    already know c, provides no change in our probability for a, i.e., b contains no information about a beyond what c provides

    • Can generalize to more than 2 random variables
    – E.g., K different symptom variables X1, X2, …, XK, and C = disease
    – P(X1, X2, …, XK | C) = Πi P(Xi | C)
    – Also known as the naïve Bayes assumption

  • Outline

    • Dear Prof. Lathrop, would you mind explaining more about statistical learning (Chapter 20) while you are going through the material in today's class? It is very difficult for me to understand, and a few of my classmates have the same concern. Thank you.
    – Reading assigned was Chapters 14.1, 14.2, plus:
    – 20.1-20.3.2 (3rd ed.)
    – 20.1-20.7, but not the part of 20.3 after "Learning Bayesian Networks" (2nd ed.)

    • Machine Learning

    • Probability and Uncertainty

    • Question & Answer
    – If time, Viola & Jones, 2004

  • Learning to Detect Faces

    A Large-Scale Application of Machine Learning

    (This material is not in the text; for further information see the paper by P. Viola and M. Jones, International Journal of Computer Vision, 2004.)

  • Viola-Jones Face Detection Algorithm

    • Overview:
    – Viola-Jones technique overview
    – Features
    – Integral Images
    – Feature Extraction
    – Weak Classifiers
    – Boosting and classifier evaluation
    – Cascade of boosted classifiers
    – Example Results

  • Viola Jones Technique Overview

    • Three major contributions/phases of the algorithm:
    – Feature extraction
    – Learning using boosting and decision stumps
    – Multi-scale detection algorithm

    • Feature extraction and feature evaluation
    – Rectangular features are used; with a new image representation their calculation is very fast.

    • Classifier learning using a method called boosting

    • A combination of simple classifiers is very effective

  • Features

    • Four basic types.
    – They are easy to calculate.
    – The white areas are subtracted from the black ones.
    – A special representation of the sample, called the integral image, makes feature extraction faster.

  • Integral images

    • Summed area tables

    • A representation that means any rectangle's values can be calculated in four accesses of the integral image.

  • Fast Computation of Pixel Sums
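Below is a small Python sketch (mine, not the paper's code) of a summed-area table and the four-access rectangle sum that the slide's figure illustrates:

```python
def integral_image(img):
    # img: 2-D list of pixel values.  ii[y][x] holds the sum of img over all
    # rows < y and columns < x; the extra row/column of zeros avoids special cases.
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, top, left, height, width):
    # Sum over any rectangle using only four accesses: D - B - C + A.
    bottom, right = top + height, left + width
    return ii[bottom][right] - ii[top][right] - ii[bottom][left] + ii[top][left]

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28
```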

  • Feature Extraction

    • Features are extracted from sub windows of a sample image.

    – The base size for a sub-window is 24 by 24 pixels.
    – Each of the four feature types is scaled and shifted across all possible combinations.
    • In a 24 pixel by 24 pixel sub-window there are ~160,000 possible features to be calculated.

  • Learning with many features

    • We have 160,000 features – how can we learn a classifier with only a few hundred training examples without overfitting?

    • Idea:
    – Learn a single very simple classifier (a "weak classifier")
    – Classify the data
    – Look at where it makes errors
    – Reweight the data so that the inputs where we made errors get higher weight in the learning process
    – Now learn a 2nd simple classifier on the weighted data
    – Combine the 1st and 2nd classifier and weight the data according to where they make errors

    – Learn a 3rd classifier on the weighted data

    – … and so on until we learn T simple classifiers

    – Final classifier is the combination of all T classifiers

    – This procedure is called “Boosting” – works very well in practice.

  • “Decision Stumps”

    • Decision stumps = decision tree with only a single root node
    – Certainly a very weak learner!
    – Say the attributes are real-valued
    – Decision stump algorithm looks at all possible thresholds for each attribute
    – Selects the one with the max information gain
    – Resulting classifier is a simple threshold on a single feature

    • Outputs a +1 if the attribute is above a certain threshold• Outputs a -1 if the attribute is below the threshold

    – Note: can restrict the search to the n-1 "midpoint" locations between a sorted list of attribute values for each feature. So complexity is n log n per attribute.

    – Note this is exactly equivalent to learning a perceptron with a single intercept term (so we could also learn these stumps via gradient descent and mean squared error)
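Here is a minimal Python sketch of a single-feature decision stump (my own illustration; for brevity it scores thresholds by weighted classification error, a common choice in boosting, rather than the information gain described above):

```python
def train_stump(xs, ys, weights):
    # xs: values of one real-valued attribute; ys: labels in {-1, +1};
    # weights: per-example weights (uniform if not boosting).
    # Tries thresholds at midpoints between sorted attribute values and returns
    # the (error, threshold, polarity) minimizing weighted classification error.
    pts = sorted(set(xs))
    best = (float("inf"), None, None)
    for t in [(a + b) / 2.0 for a, b in zip(pts, pts[1:])]:
        for polarity in (+1, -1):
            err = sum(w for x, y, w in zip(xs, ys, weights)
                      if (polarity if x > t else -polarity) != y)
            if err < best[0]:
                best = (err, t, polarity)
    return best

# Toy data: the positive class is whatever exceeds roughly 2.5.
xs, ys = [1.0, 2.0, 3.0, 4.0], [-1, -1, +1, +1]
print(train_stump(xs, ys, [0.25] * 4))  # (0.0, 2.5, 1)
```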

  • Boosting Example

  • First classifier

  • First 2 classifiers

  • First 3 classifiers

  • Final Classifier learned by Boosting

  • Final Classifier learned by Boosting

  • Boosting with Decision Stumps

    • Viola-Jones algorithm
    – With K attributes (e.g., K = 160,000) we have 160,000 different decision stumps to choose from

    – At each stage of boosting:
    • given reweighted data from the previous stage
    • Train all K (160,000) single-feature perceptrons
    • Select the single best classifier at this stage
    • Combine it with the other previously selected classifiers
    • Reweight the data
    • Learn all K classifiers again, select the best, combine, reweight
    • Repeat until you have T classifiers selected

    – Very computationally intensive
    • Learning K decision stumps T times
    • E.g., K = 160,000 and T = 1000

  • How is classifier combining done?

    • At each stage we select the best classifier on the current iteration and combine it with the set of classifiers learned so far

    • How are the classifiers combined?
    – Take the weight*feature for each classifier, sum these up, and compare to a threshold (very simple)

    – The boosting algorithm automatically provides the appropriate weight for each classifier and the threshold

    – This version of boosting is known as the AdaBoost algorithm

    – Some nice mathematical theory shows that it is in fact a very powerful machine learning technique
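A compact, self-contained AdaBoost-style sketch of this weighting and thresholded combination (my illustration; the weight and reweighting formulas are the standard AdaBoost ones, which the lecture does not spell out, and the data is a made-up single attribute):

```python
import math

def weighted_stump(xs, ys, w):
    # Best single-threshold weak classifier under example weights w (labels in {-1, +1}).
    best = (float("inf"), None, None)
    pts = sorted(set(xs))
    for t in [(a + b) / 2.0 for a, b in zip(pts, pts[1:])]:
        for pol in (+1, -1):
            err = sum(wi for x, y, wi in zip(xs, ys, w)
                      if (pol if x > t else -pol) != y)
            if err < best[0]:
                best = (err, t, pol)
    return best

def adaboost(xs, ys, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []                                # (alpha, threshold, polarity) triples
    for _ in range(rounds):
        err, t, pol = weighted_stump(xs, ys, w)
        err = max(err, 1e-10)                    # guard against zero error
        alpha = 0.5 * math.log((1 - err) / err)  # weight of this classifier
        ensemble.append((alpha, t, pol))
        w = [wi * math.exp(-alpha * y * (pol if x > t else -pol))
             for wi, x, y in zip(w, xs, ys)]     # upweight the mistakes
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    # Weighted vote of the selected weak classifiers, compared to a threshold of 0.
    score = sum(a * (pol if x > t else -pol) for a, t, pol in ensemble)
    return 1 if score > 0 else -1

model = adaboost([1.0, 2.0, 3.0, 4.0], [-1, -1, +1, +1])
print([predict(model, x) for x in [1.5, 3.5]])   # expect [-1, +1]
```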

  • Reduction in Error as Boosting adds Classifiers

  • Useful Features Learned by Boosting

  • A Cascade of Classifiers

  • Detection in Real Images

    • Basic classifier operates on 24 x 24 subwindows

    • Scaling:
    – Scale the detector (rather than the images)
    – Features can easily be evaluated at any scale
    – Scale by factors of 1.25

    • Location:– Move detector around the image (e.g., 1 pixel increments)

    • Final Detections– A real face may result in multiple nearby detections – Postprocess detected subwindows to combine overlapping

    detections into a single detection
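A schematic Python sketch of this multi-scale scan (my illustration; classify_window stands in for the boosted cascade and is hypothetical):

```python
def detect(image_width, image_height, classify_window,
           base=24, scale_step=1.25, stride=1):
    # Slide a detector of growing size across the image; the detector (features)
    # is rescaled rather than the image.  classify_window(x, y, size) -> bool
    # is an assumed stand-in for the cascade classifier.
    detections = []
    size = base
    while size <= min(image_width, image_height):
        for y in range(0, image_height - size + 1, stride):
            for x in range(0, image_width - size + 1, stride):
                if classify_window(x, y, size):
                    detections.append((x, y, size))
        size = int(size * scale_step)
    return detections  # overlapping hits would then be merged in postprocessing

# Example with a dummy classifier that "detects" one fixed window.
print(detect(100, 100, lambda x, y, s: (x, y, s) == (10, 20, 24)))
```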

  • Training

    • Examples of 24x24 images with faces

  • Small set of 111 Training Images

  • Sample results using the Viola-Jones Detector

    • Notice detection at multiple scales

  • More Detection Examples

  • Practical implementation

    • Details discussed in Viola-Jones paper

    • Training time = weeks (with 5k faces and 9.5k non-faces)

    • Final detector has 38 layers in the cascade, 6060 features

    • 700 MHz processor:
    – Can process a 384 x 288 image in 0.067 seconds (in 2003, when the paper was written)

