Classification III
Tamara Berg, CS 590-133 Artificial Intelligence
Many slides throughout the course adapted from Svetlana Lazebnik, Dan Klein, Stuart Russell, Andrew Moore, Percy Liang, Luke Zettlemoyer, Rob Pless, Kilian Weinberger, Deva Ramanan
Announcements
• Pick up your midterm from the TAs if you haven't gotten it yet
• Assignment 4 is due today
Discriminant Function
• It can be an arbitrary function of x, such as:
– Nearest Neighbor
– Decision Tree
– Linear functions
Linear classifier
• Find a linear function to separate the classes
$f(\mathbf{x}) = \operatorname{sgn}(w_1 x_1 + w_2 x_2 + \dots + w_D x_D) = \operatorname{sgn}(\mathbf{w} \cdot \mathbf{x})$
Perceptron

[Figure: inputs x_1, x_2, x_3, ..., x_D with weights w_1, w_2, w_3, ..., w_D feeding a single unit]

Output: $\operatorname{sgn}(\mathbf{w} \cdot \mathbf{x} + b)$
The bias can be incorporated as a component of the weight vector by always including a feature with value set to 1.
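A minimal NumPy illustration of this bias trick; the array contents are hypothetical:

```python
import numpy as np

# Toy data matrix: 3 examples, 2 features each (values are made up).
X = np.array([[0.5, 1.2],
              [1.0, -0.3],
              [-0.7, 0.8]])

# Append a constant feature of 1 to every example; the last weight
# then plays the role of the bias b, so sgn(w'.x') == sgn(w.x + b).
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
```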
Loose inspiration: Human neurons
Perceptron training algorithm
• Initialize weights
• Cycle through training examples in multiple passes (epochs)
• For each training example:
– If classified correctly, do nothing
– If classified incorrectly, update weights
Perceptron update rule
• For each training instance x with label y:
– Classify with current weights: $y' = \operatorname{sgn}(\mathbf{w} \cdot \mathbf{x})$
– Update weights (only if misclassified): $\mathbf{w} \leftarrow \mathbf{w} + \alpha\, y\, \mathbf{x}$
– α is a learning rate that should decay as 1/t, e.g., 1000/(1000 + t)
– What happens if the answer is correct? Nothing: no update is needed.
– Otherwise, consider what happens to individual weights:
• If y = 1 and y' = −1, $w_i$ will be increased if $x_i$ is positive or decreased if $x_i$ is negative → $\mathbf{w} \cdot \mathbf{x}$ gets bigger
• If y = −1 and y' = 1, $w_i$ will be decreased if $x_i$ is positive or increased if $x_i$ is negative → $\mathbf{w} \cdot \mathbf{x}$ gets smaller
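Putting the last two slides together, a minimal NumPy sketch of the training loop; it assumes labels in {−1, +1}, the bias-as-feature trick from earlier, and illustrative names such as `n_epochs`:

```python
import numpy as np

def train_perceptron(X, y, n_epochs=100, alpha0=1000.0):
    """X: (N, D) data with a constant-1 feature appended; y: (N,) in {-1, +1}."""
    w = np.zeros(X.shape[1])          # initialize weights (all zeros)
    t = 0
    for epoch in range(n_epochs):     # multiple passes (epochs)
        for x_i, y_i in zip(X, y):
            t += 1
            alpha = alpha0 / (alpha0 + t)       # learning rate decaying as 1/t
            y_pred = 1 if w @ x_i > 0 else -1   # classify with current weights
            if y_pred != y_i:                   # update only on mistakes
                w += alpha * y_i * x_i          # w <- w + alpha * y * x
    return w
```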
Implementation details
• Bias (add a feature dimension with value fixed to 1) vs. no bias
• Initialization of weights: all zeros vs. random
• Number of epochs (passes through the training data)
• Order of cycling through training examples
Multi-class perceptrons
• Need to keep a weight vector $\mathbf{w}_c$ for each class c
• Decision rule: $c^* = \arg\max_c\, \mathbf{w}_c \cdot \mathbf{x}$
• Update rule: suppose an example from class c gets misclassified as c′
– Update for c: $\mathbf{w}_c \leftarrow \mathbf{w}_c + \alpha\, \mathbf{x}$
– Update for c′: $\mathbf{w}_{c'} \leftarrow \mathbf{w}_{c'} - \alpha\, \mathbf{x}$
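A sketch of this multi-class rule, assuming one weight vector per class stored as the rows of a matrix W (names illustrative):

```python
import numpy as np

def multiclass_update(W, x, c, alpha=1.0):
    """W: (C, D) weight matrix, one row w_c per class; x: (D,) example from class c."""
    c_pred = int(np.argmax(W @ x))   # decision rule: argmax_c w_c . x
    if c_pred != c:                  # example misclassified as c'
        W[c] += alpha * x            # update for true class c:  w_c  <- w_c  + alpha x
        W[c_pred] -= alpha * x       # update for wrong class c': w_c' <- w_c' - alpha x
    return W
```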
Differentiable perceptron

[Figure: inputs x_1, x_2, x_3, ..., x_d with weights w_1, w_2, w_3, ..., w_d feeding a sigmoid unit]

Sigmoid function: $\sigma(t) = \dfrac{1}{1 + e^{-t}}$

Output: $\sigma(\mathbf{w} \cdot \mathbf{x} + b)$
Update rule for differentiable perceptron
• Define total classification error or loss on the training set:

$$E(\mathbf{w}) = \sum_{j=1}^{N} \left(y_j - f(\mathbf{x}_j)\right)^2, \qquad f(\mathbf{x}_j) = \sigma(\mathbf{w} \cdot \mathbf{x}_j)$$

• Update weights by gradient descent:

$$\frac{\partial E}{\partial \mathbf{w}} = -2 \sum_{j=1}^{N} \left(y_j - f(\mathbf{x}_j)\right) \sigma'(\mathbf{w} \cdot \mathbf{x}_j)\, \mathbf{x}_j = -2 \sum_{j=1}^{N} \left(y_j - f(\mathbf{x}_j)\right) \sigma(\mathbf{w} \cdot \mathbf{x}_j)\left(1 - \sigma(\mathbf{w} \cdot \mathbf{x}_j)\right) \mathbf{x}_j$$

$$\mathbf{w} \leftarrow \mathbf{w} - \eta\, \frac{\partial E}{\partial \mathbf{w}}$$

• For a single training point, the update is:

$$\mathbf{w} \leftarrow \mathbf{w} + \alpha \left(y - f(\mathbf{x})\right) \sigma(\mathbf{w} \cdot \mathbf{x})\left(1 - \sigma(\mathbf{w} \cdot \mathbf{x})\right) \mathbf{x}$$
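A sketch of one single-point gradient step, directly implementing the update above (the constant factor 2 is absorbed into α; names illustrative):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def sgd_step(w, x, y, alpha=0.1):
    """Single-point update: w <- w + alpha * (y - f(x)) * sigma'(w.x) * x."""
    f = sigmoid(w @ x)                    # current prediction f(x) = sigma(w.x)
    grad_factor = (y - f) * f * (1 - f)   # (y - f) * sigma'(w.x), since sigma' = sigma(1 - sigma)
    return w + alpha * grad_factor * x
```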
Multi-Layer Neural Network
• Can learn nonlinear functions
• Training: find network weights to minimize the error between true and estimated labels of training examples:

$$E(f) = \sum_{i=1}^{N} \left(y_i - f(\mathbf{x}_i)\right)^2$$

• Minimization can be done by gradient descent provided f is differentiable
– This training method is called back-propagation
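A compact sketch of one back-propagation step for a one-hidden-layer network of sigmoid units under the squared-error loss above; the layer shapes and names are illustrative, not the lecture's notation:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def backprop_step(W1, W2, x, y, alpha=0.1):
    """One gradient step on E = (y - f(x))^2; W1: (H, D), W2: (1, H), x: (D,)."""
    # Forward pass.
    h = sigmoid(W1 @ x)                          # hidden activations
    f = sigmoid(W2 @ h)                          # network output, shape (1,)
    # Backward pass: apply the chain rule through each sigmoid.
    delta_out = -2 * (y - f) * f * (1 - f)       # dE/d(output pre-activation)
    grad_W2 = np.outer(delta_out, h)             # dE/dW2
    delta_hid = (W2.T @ delta_out) * h * (1 - h) # error propagated to hidden layer
    grad_W1 = np.outer(delta_hid, x)             # dE/dW1
    return W1 - alpha * grad_W1, W2 - alpha * grad_W2
```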
Deep convolutional neural networks

Zeiler, M., and Fergus, R. Visualizing and Understanding Convolutional Neural Networks, tech report, 2013.
Krizhevsky, A., Sutskever, I., and Hinton, G.E. ImageNet classification with deep convolutional neural networks. NIPS, 2012.
Linear classifier
• Find a linear function to separate the classes
$f(\mathbf{x}) = \operatorname{sgn}(w_1 x_1 + w_2 x_2 + \dots + w_D x_D) = \operatorname{sgn}(\mathbf{w} \cdot \mathbf{x})$
Linear Discriminant Function
• f(x) is a linear function: $f(\mathbf{x}) = \mathbf{w}^T \mathbf{x} + b$

[Figure: points of class +1 and −1 in the $(x_1, x_2)$ plane; the hyperplane $\mathbf{w}^T \mathbf{x} + b = 0$ separates the region where $\mathbf{w}^T \mathbf{x} + b > 0$ from the region where $\mathbf{w}^T \mathbf{x} + b < 0$]

• The decision boundary is a hyperplane in the feature space
Linear Discriminant Function
• How would you classify these points using a linear discriminant function in order to minimize the error rate?

[Figure, repeated over several slides: the same +1 and −1 points in the $(x_1, x_2)$ plane, with each slide showing a different candidate separating line]

• Infinite number of answers!
• Which one is the best?
Large Margin Linear Classifier
• The linear discriminant function (classifier) with the maximum margin is the best
• The margin is defined as the width by which the boundary could be increased before hitting a data point
• Why is it the best? Strong generalization ability

[Figure: the maximum-margin line in the $(x_1, x_2)$ plane, with the margin forming a "safe zone" on either side of the boundary; this classifier is the linear SVM]
Large Margin Linear Classifier

[Figure: separating hyperplane $\mathbf{w}^T \mathbf{x} + b = 0$ with margin boundaries $\mathbf{w}^T \mathbf{x} + b = 1$ and $\mathbf{w}^T \mathbf{x} + b = -1$; the points $\mathbf{x}^+$ and $\mathbf{x}^-$ lying on these boundaries are the support vectors]
Support vector machines
• Find the hyperplane that maximizes the margin between the positive and negative examples:

$$\mathbf{x}_i \text{ positive } (y_i = 1): \quad \mathbf{w} \cdot \mathbf{x}_i + b \ge 1$$
$$\mathbf{x}_i \text{ negative } (y_i = -1): \quad \mathbf{w} \cdot \mathbf{x}_i + b \le -1$$

• Distance between a point and the hyperplane: $\dfrac{|\mathbf{w} \cdot \mathbf{x} + b|}{\|\mathbf{w}\|}$
• For support vectors, $\mathbf{w} \cdot \mathbf{x} + b = \pm 1$
• Therefore, the margin is $2\,/\,\|\mathbf{w}\|$

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
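A quick numerical check of the distance and margin formulas, with made-up values of w and b:

```python
import numpy as np

w = np.array([3.0, 4.0])   # ||w|| = 5
b = -2.0
x = np.array([2.0, 1.0])

dist = abs(w @ x + b) / np.linalg.norm(w)   # point-to-hyperplane distance: |w.x + b| / ||w||
margin = 2.0 / np.linalg.norm(w)            # width between the +1 and -1 boundaries
print(dist, margin)                          # 1.6 0.4
```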
Finding the maximum margin hyperplane
1. Maximize the margin $2\,/\,\|\mathbf{w}\|$
2. Correctly classify all training data: $y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1$

Quadratic optimization problem:

$$\min_{\mathbf{w}, b}\; \frac{1}{2}\|\mathbf{w}\|^2 \quad \text{subject to} \quad y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1$$

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
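A sketch of solving this quadratic program directly with a general-purpose solver, on a tiny made-up separable dataset; this is only to make the formulation concrete, not how SVM packages actually solve it:

```python
import numpy as np
from scipy.optimize import minimize

# Tiny linearly separable dataset (hypothetical values).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def objective(theta):          # theta = (w1, w2, b)
    w = theta[:2]
    return 0.5 * w @ w         # (1/2) ||w||^2

# One inequality constraint per example: y_i (w . x_i + b) - 1 >= 0.
constraints = [{"type": "ineq",
                "fun": lambda th, i=i: y[i] * (th[:2] @ X[i] + th[2]) - 1.0}
               for i in range(len(X))]

res = minimize(objective, x0=np.array([1.0, 1.0, 0.0]), constraints=constraints)
w, b = res.x[:2], res.x[2]
print(w, b, 2 / np.linalg.norm(w))   # learned weights, bias, and margin 2/||w||
```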
Solving the Optimization Problem

The linear discriminant function is:

$$f(\mathbf{x}) = \sum_{i \in SV} \alpha_i\, y_i\, \mathbf{x}_i^T \mathbf{x} + b$$

Notice that it relies only on a dot product between the test point x and the support vectors x_i.
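A minimal sketch of evaluating this decision function, assuming the multipliers α_i, labels y_i, and support vectors are already available as arrays (names illustrative):

```python
import numpy as np

def linear_svm_decision(x, sv_X, sv_y, sv_alpha, b):
    """f(x) = sum over support vectors of alpha_i * y_i * (x_i . x), plus b."""
    return np.sum(sv_alpha * sv_y * (sv_X @ x)) + b
```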
Linear separability
Non-linear SVMs: Feature Space

General idea: the original input space can be mapped to some higher-dimensional feature space where the training set is separable:

$$\Phi: \mathbf{x} \rightarrow \varphi(\mathbf{x})$$

Slide courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt
Nonlinear SVMs: The Kernel Trick

With this mapping, our discriminant function becomes:

$$g(\mathbf{x}) = \mathbf{w}^T \varphi(\mathbf{x}) + b = \sum_{i \in SV} \alpha_i\, y_i\, \varphi(\mathbf{x}_i)^T \varphi(\mathbf{x}) + b$$

No need to know this mapping explicitly, because we only use the dot product of feature vectors, in both training and testing.

A kernel function is defined as a function that corresponds to a dot product of two feature vectors in some expanded feature space:

$$K(\mathbf{x}_i, \mathbf{x}_j) = \varphi(\mathbf{x}_i)^T \varphi(\mathbf{x}_j)$$
Nonlinear SVMs: The Kernel Trick

Examples of commonly used kernel functions:
• Linear kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = \mathbf{x}_i^T \mathbf{x}_j$
• Polynomial kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = (1 + \mathbf{x}_i^T \mathbf{x}_j)^p$
• Gaussian (Radial Basis Function, RBF) kernel: $K(\mathbf{x}_i, \mathbf{x}_j) = \exp\!\left(-\dfrac{\|\mathbf{x}_i - \mathbf{x}_j\|^2}{2\sigma^2}\right)$
• Sigmoid: $K(\mathbf{x}_i, \mathbf{x}_j) = \tanh(\beta_0\, \mathbf{x}_i^T \mathbf{x}_j + \beta_1)$
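The four kernels above as NumPy one-liners; the parameter defaults are illustrative:

```python
import numpy as np

def linear_kernel(xi, xj):
    return xi @ xj

def polynomial_kernel(xi, xj, p=3):
    return (1 + xi @ xj) ** p

def rbf_kernel(xi, xj, sigma=1.0):
    return np.exp(-np.sum((xi - xj) ** 2) / (2 * sigma ** 2))

def sigmoid_kernel(xi, xj, beta0=1.0, beta1=0.0):
    return np.tanh(beta0 * (xi @ xj) + beta1)
```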
Support Vector Machine: Algorithm
1. Choose a kernel function
2. Choose a value for C and any other parameters (e.g., σ)
3. Solve the quadratic programming problem (many software packages are available)
4. Classify held-out validation instances using the learned model
5. Select the best learned model based on validation accuracy
6. Classify test instances using the final selected model
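A sketch of these six steps using scikit-learn's SVC; the dataset splits (X_train, X_val, X_test, etc.) are assumed to exist, and note that SVC's `gamma` corresponds to 1/(2σ²) for the Gaussian kernel:

```python
from sklearn.svm import SVC

best_model, best_acc = None, -1.0
for C in [0.1, 1.0, 10.0]:                       # step 2: candidate values for C
    for gamma in [0.01, 0.1, 1.0]:               #         and the kernel parameter
        model = SVC(kernel="rbf", C=C, gamma=gamma)  # step 1: choose a kernel
        model.fit(X_train, y_train)              # step 3: solve the QP (done internally)
        acc = model.score(X_val, y_val)          # step 4: classify held-out validation data
        if acc > best_acc:                       # step 5: keep the best model
            best_model, best_acc = model, acc
test_acc = best_model.score(X_test, y_test)      # step 6: classify test instances
```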
Some Issues
• Choice of kernel
– The Gaussian or polynomial kernel is the default
– If these prove ineffective, more elaborate kernels are needed
– Domain experts can assist in formulating appropriate similarity measures
• Choice of kernel parameters
– e.g., σ in the Gaussian kernel
– In the absence of reliable criteria, applications rely on a validation set or cross-validation to set such parameters

This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt
Summary: Support Vector Machine
1. Large margin classifier
– Better generalization ability and less over-fitting
2. The kernel trick
– Map data points to a higher-dimensional space in order to make them linearly separable
– Since only the dot product is needed, we do not need to represent the mapping explicitly
SVMs in Computer Vision

Detection

[Figure: a candidate window x is mapped to features F(x), which are classified as y = +1 (pos) or −1 (neg)]

• We slide a window over the image
• Extract features for each window
• Classify each window into pos/neg
Sliding Window Detection
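A sketch of the sliding-window procedure described above; the feature extractor and trained classifier are assumed to exist, and all names (win, stride, extract_features) are illustrative:

```python
import numpy as np

def sliding_window_detect(image, classifier, extract_features, win=64, stride=8):
    """Slide a win x win window over the image; return top-left corners classified +1."""
    detections = []
    H, W = image.shape[:2]
    for top in range(0, H - win + 1, stride):
        for left in range(0, W - win + 1, stride):
            window = image[top:top + win, left:left + win]
            x = extract_features(window)   # features for this window
            if classifier(x) == 1:         # +1 = pos, -1 = neg
                detections.append((top, left))
    return detections
```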
Representation