Page 1: Lecture9 multi kernel_svm

Lecture 9: Multi Kernel SVM

Stéphane Canu

[email protected]

Sao Paulo 2014

April 16, 2014

Page 2: Lecture9 multi kernel_svm

Roadmap

1 Tuning the kernel: MKL
  - The multiple kernel problem
  - Sparse kernel machines for regression: SVR
  - SimpleMKL: the multiple kernel solution

Page 3: Lecture9 multi kernel_svm

Standard Learning with Kernels

[Diagram: the user feeds a kernel k and the data into the learning machine, which returns a decision function f.]

http://www.cs.nyu.edu/~mohri/icml2011-tutorial/tutorial-icml2011-2.pdf


Page 4: Lecture9 multi kernel_svm

Learning Kernel framework

[Diagram: the user feeds a kernel family {k_m} and the data into the learning machine, which returns both a decision function f and the learned kernel k(., .).]

http://www.cs.nyu.edu/~mohri/icml2011-tutorial/tutorial-icml2011-2.pdf


Page 5: Lecture9 multi kernel_svm

from SVM → to Multiple Kernel Learning (MKL)

SVM: single kernel k

MKL: set of M kernels k_1, . . . , k_m, . . . , k_M
  - learn the classifier and the combination weights
  - can be cast as a convex optimization problem

f(x) = \sum_{i=1}^{n} \alpha_i \sum_{m=1}^{M} d_m k_m(x, x_i) + b, \qquad \sum_{m=1}^{M} d_m = 1 \;\text{and}\; 0 \le d_m

\phantom{f(x)} = \sum_{i=1}^{n} \alpha_i K(x, x_i) + b \quad \text{with} \quad K(x, x_i) = \sum_{m=1}^{M} d_m k_m(x, x_i)

http://www.nowozin.net/sebastian/talks/ICCV-2009-LPbeta.pdf



Page 8: Lecture9 multi kernel_svm

Multiple Kernel

The model

f(x) = \sum_{i=1}^{n} \alpha_i \sum_{m=1}^{M} d_m k_m(x, x_i) + b, \qquad \sum_{m=1}^{M} d_m = 1 \;\text{and}\; 0 \le d_m

Given M kernel functions k_1, . . . , k_M that are potentially well suited for a given problem, find a positive linear combination of these kernels such that the resulting kernel k is "optimal":

k(x, x') = \sum_{m=1}^{M} d_m k_m(x, x'), \quad \text{with } d_m \ge 0, \; \sum_m d_m = 1

Learning together: the kernel coefficients d_m and the SVM parameters α_i, b.
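To make the combination concrete, here is a minimal numpy sketch (an illustration added to these notes, not material from the slides) that assembles the combined Gram matrix K = Σ_m d_m K_m from a family of Gaussian kernels with different bandwidths; a convex combination of valid Gram matrices is again a valid Gram matrix:

```python
import numpy as np

def gaussian_gram(X, bandwidth):
    # Gram matrix of k(x, x') = exp(-||x - x'||^2 / (2 * bandwidth^2))
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return np.exp(-sq / (2 * bandwidth**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                             # toy data: n = 50 points in R^3

grams = [gaussian_gram(X, s) for s in (0.1, 1.0, 10.0)]  # M = 3 candidate kernels
d = np.full(len(grams), 1.0 / len(grams))                # uniform weights: sum_m d_m = 1, d_m >= 0
K = sum(dm * Km for dm, Km in zip(d, grams))             # K = sum_m d_m K_m
```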


Page 9: Lecture9 multi kernel_svm

Multiple Kernel: illustration


Page 10: Lecture9 multi kernel_svm

Multiple Kernel Strategies

Wrapper method (Weston et al., 2000; Chapelle et al., 2002)
  - solve the SVM
  - gradient descent on d_m, using as criterion:
      - the margin criterion
      - the span criterion

Kernel Learning & Feature Selection
  - use kernels as a dictionary

Embedded Multiple Kernel Learning (MKL)


Page 11: Lecture9 multi kernel_svm

Multiple Kernel functional Learning

The problem (for a given C)

\min_{f \in \mathcal{H}, b, \xi, d} \; \frac{1}{2}\lVert f \rVert_{\mathcal{H}}^{2} + C \sum_i \xi_i

\text{with } y_i\big(f(x_i) + b\big) \ge 1 - \xi_i \;;\; \xi_i \ge 0 \;\;\forall i

\sum_{m=1}^{M} d_m = 1, \; d_m \ge 0 \;\;\forall m,

f = \sum_m f_m \quad \text{and} \quad k(x, x') = \sum_{m=1}^{M} d_m k_m(x, x'), \; \text{with } d_m \ge 0

The functional framework

\mathcal{H} = \bigoplus_{m=1}^{M} \mathcal{H}'_m, \qquad \langle f, g \rangle_{\mathcal{H}'_m} = \frac{1}{d_m} \langle f, g \rangle_{\mathcal{H}_m}


Page 12: Lecture9 multi kernel_svm

Multiple Kernel functional Learning

The problem (for a given C)

\min_{\{f_m\}, b, \xi, d} \; \frac{1}{2} \sum_m \frac{1}{d_m}\lVert f_m \rVert_{\mathcal{H}_m}^{2} + C \sum_i \xi_i

\text{with } y_i\Big(\sum_m f_m(x_i) + b\Big) \ge 1 - \xi_i \;;\; \xi_i \ge 0 \;\;\forall i

\sum_m d_m = 1, \; d_m \ge 0 \;\;\forall m,

Treated as a bi-level optimization task:

\min_{d \in \mathbb{R}^M} \; \min_{\{f_m\}, b, \xi} \; \frac{1}{2} \sum_m \frac{1}{d_m}\lVert f_m \rVert_{\mathcal{H}_m}^{2} + C \sum_i \xi_i

\text{with } y_i\Big(\sum_m f_m(x_i) + b\Big) \ge 1 - \xi_i \;;\; \xi_i \ge 0 \;\;\forall i

\text{s.t. } \sum_m d_m = 1, \; d_m \ge 0 \;\;\forall m,
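Writing the inner minimization as a function of the weights alone makes the bi-level structure explicit. With the notation above (this restatement is added here for clarity; it is the objective the reduced-gradient algorithm of the later slides minimizes over the simplex):

```latex
J(d) \;=\; \min_{\{f_m\},\, b,\, \xi}\;
  \frac{1}{2}\sum_{m=1}^{M}\frac{1}{d_m}\,\lVert f_m \rVert_{\mathcal{H}_m}^{2}
  \;+\; C\sum_{i}\xi_i
\quad\text{s.t. } y_i\Big(\sum_{m} f_m(x_i)+b\Big) \ge 1-\xi_i,\;\;
  \xi_i \ge 0 \;\;\forall i
```

so the outer problem is \min_d J(d) subject to \sum_m d_m = 1, d_m \ge 0.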


Page 13: Lecture9 multi kernel_svm

Multiple Kernel representer theorem and dual

The Lagrangian:

\mathcal{L} = \frac{1}{2}\sum_m \frac{1}{d_m}\lVert f_m \rVert_{\mathcal{H}_m}^{2} + C\sum_i \xi_i - \sum_i \alpha_i \Big( y_i\Big(\sum_m f_m(x_i) + b\Big) - 1 + \xi_i \Big) - \sum_i \beta_i \xi_i

Associated KKT stationarity conditions:

\nabla_m \mathcal{L} = 0 \;\Leftrightarrow\; \frac{1}{d_m} f_m(\bullet) = \sum_{i=1}^{n} \alpha_i y_i k_m(\bullet, x_i), \quad m = 1, \ldots, M

Representer theorem

f(\bullet) = \sum_m f_m(\bullet) = \sum_{i=1}^{n} \alpha_i y_i \underbrace{\sum_m d_m k_m(\bullet, x_i)}_{K(\bullet, x_i)}

We have a standard SVM problem with respect to the function f and the kernel K.
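Since, for fixed weights d, this is a standard SVM with the combined kernel K, any solver that accepts a precomputed Gram matrix can handle the inner problem. A minimal sketch with scikit-learn (the library, the two-kernel family, and the toy data are assumptions of this note, not the lecture's tooling):

```python
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = np.where(np.linalg.norm(X, axis=1) > 1.2, 1.0, -1.0)       # toy labels

d = np.array([0.3, 0.7])                                       # fixed weights, sum_m d_m = 1
K = d[0] * linear_kernel(X) + d[1] * rbf_kernel(X, gamma=0.5)  # K = sum_m d_m K_m

svm = SVC(C=1.0, kernel="precomputed").fit(K, y)   # standard SVM on the combined kernel
print(svm.support_.size, svm.intercept_[0])        # number of support vectors and offset b
```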


Page 14: Lecture9 multi kernel_svm

Multiple Kernel Algorithm

Use a Reduced Gradient Algorithm¹

\min_{d \in \mathbb{R}^M} \; J(d) \quad \text{s.t. } \sum_m d_m = 1, \; d_m \ge 0 \;\;\forall m

SimpleMKL algorithm

  set d_m = 1/M for m = 1, . . . , M
  while stopping criterion not met do
      compute J(d) using a QP solver with K = ∑_m d_m K_m
      compute ∂J/∂d_m, and the projected gradient as a descent direction D
      γ ← compute the optimal step size
      d ← d + γD
  end while

→ Improvement reported using the Hessian

¹ Rakotomamonjy et al., SimpleMKL, JMLR 2008
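A schematic Python sketch of such a loop follows; it is in the spirit of SimpleMKL, not the reference implementation (the authors distribute theirs in the SVM-KM Matlab toolbox cited in the conclusion). The toy data, the fixed step size in place of an optimal one, the simple centering projection, and the clip-and-renormalize handling of d_m ≥ 0 are simplifications assumed here, with scikit-learn's precomputed-kernel SVC standing in for the QP solver:

```python
import numpy as np
from sklearn.svm import SVC

def gaussian_gram(X, s):
    # Gram matrix of the Gaussian kernel with bandwidth s
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return np.exp(-sq / (2 * s**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)         # toy XOR-like labels
grams = np.stack([gaussian_gram(X, s) for s in (0.5, 1.0, 2.0)])
M = len(grams)

d = np.full(M, 1.0 / M)                                # d_m = 1/M
for _ in range(10):                                    # few iterations usually suffice
    K = np.tensordot(d, grams, axes=1)                 # K = sum_m d_m K_m
    svm = SVC(C=10.0, kernel="precomputed").fit(K, y)  # inner QP: evaluates J(d)
    sv = svm.support_
    coef = svm.dual_coef_.ravel()                      # alpha_i * y_i on the support vectors
    # per-kernel quantity (1/2) alpha^T G_m alpha from the next slide:
    # weight should flow toward kernels where it is large
    g = np.array([0.5 * coef @ Km[np.ix_(sv, sv)] @ coef for Km in grams])
    D = g - g.mean()                                   # center so that sum_m D_m = 0
    if np.linalg.norm(D) < 1e-8:
        break                                          # stationary point: stop
    d = np.clip(d + 0.1 * D, 0.0, None)                # small fixed step, keep d_m >= 0
    d /= d.sum()                                       # restore sum_m d_m = 1
print(d)
```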

Page 15: Lecture9 multi kernel_svm

Computing the reduced gradient

At the optimum, the primal cost equals the dual cost:

\underbrace{\frac{1}{2}\sum_m \frac{1}{d_m}\lVert f_m \rVert_{\mathcal{H}_m}^{2} + C\sum_i \xi_i}_{\text{primal cost}} \;=\; \underbrace{\frac{1}{2}\alpha^\top G\,\alpha - e^\top\alpha}_{\text{dual cost}}

\text{with } G = \sum_m d_m G_m, \quad \text{where } G_{m,ij} = k_m(x_i, x_j)

The dual cost is easier for the gradient:

\nabla_{d_m} J(d) = \frac{1}{2}\,\alpha^\top G_m\,\alpha

Reduce (or project) to respect the constraint: \sum_m d_m = 1 \;\to\; \sum_m D_m = 0

D_m = \nabla_{d_m} J(d) - \nabla_{d_1} J(d) \quad \text{and} \quad D_1 = -\sum_{m=2}^{M} D_m


Page 16: Lecture9 multi kernel_svm

Complexity

For each iteration:
  - SVM training: O(n·n_sv + n_sv³)
  - Inverting K_{sv,sv} is O(n_sv³), but it might already be available as a by-product of the SVM training
  - Computing H: O(M·n_sv²)
  - Finding d: O(M³)

The number of iterations is usually less than 10.

→ When M < n_sv, computing d is not more expensive than the QP.


Page 17: Lecture9 multi kernel_svm

MKL on the Caltech-101 dataset

http://www.robots.ox.ac.uk/~vgg/software/MKL/


Page 18: Lecture9 multi kernel_svm

Support vector regression (SVR)

The t-insensitive loss:

\min_{f \in \mathcal{H}} \; \frac{1}{2}\lVert f \rVert_{\mathcal{H}}^{2} \quad \text{with } |f(x_i) - y_i| \le t, \; i = 1, \ldots, n

Support vector regression introduces slack variables:

\text{(SVR)} \qquad \min_{f \in \mathcal{H}} \; \frac{1}{2}\lVert f \rVert_{\mathcal{H}}^{2} + C\sum_i |\xi_i| \quad \text{with } |f(x_i) - y_i| \le t + \xi_i, \; 0 \le \xi_i, \; i = 1, \ldots, n

This is a typical multi-parametric quadratic program (mpQP), with a piecewise-linear regularization path:

\alpha(C, t) = \alpha(C_0, t_0) + \Big(\frac{1}{C} - \frac{1}{C_0}\Big) u + \frac{1}{C_0}(t - t_0)\, v

a 2d Pareto front (the tube width and the regularity)
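The same t-insensitive loss is what scikit-learn's SVR implements, where the tube width t is called epsilon; a minimal sketch on toy data assumed here, contrasting a wide and a tight tube:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 8, 80))[:, None]
y = np.sin(x).ravel() + 0.1 * rng.normal(size=80)  # noisy sine, as in the illustration below

# epsilon plays the role of t: residuals with |f(x_i) - y_i| <= t are not penalized
wide_tube  = SVR(kernel="rbf", C=100.0, epsilon=0.3).fit(x, y)
tight_tube = SVR(kernel="rbf", C=100.0, epsilon=0.05).fit(x, y)

print(wide_tube.support_.size, tight_tube.support_.size)  # wider tube -> fewer support vectors
```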

Page 19: Lecture9 multi kernel_svm

Support vector regression illustration

[Figure: "Support Vector Machine Regression" on toy data, two panels with axes x and y, showing the fitted function and its tube for C large (left) and C small (right).]

Other formulations exist, such as LP SVR...

Page 20: Lecture9 multi kernel_svm

Multiple Kernel Learning for regression

The problem (for given C and t)

\min_{\{f_m\}, b, \xi, d} \; \frac{1}{2}\sum_m \frac{1}{d_m}\lVert f_m \rVert_{\mathcal{H}_m}^{2} + C\sum_i \xi_i

\text{s.t. } \Big|\sum_m f_m(x_i) + b - y_i\Big| \le t + \xi_i \;\;\forall i, \quad \xi_i \ge 0 \;\;\forall i, \quad \sum_m d_m = 1, \; d_m \ge 0 \;\;\forall m,

Regularization formulation:

\min_{\{f_m\}, b, d} \; \frac{1}{2}\sum_m \frac{1}{d_m}\lVert f_m \rVert_{\mathcal{H}_m}^{2} + C\sum_i \max\Big(\Big|\sum_m f_m(x_i) + b - y_i\Big| - t,\, 0\Big), \quad \sum_m d_m = 1, \; d_m \ge 0 \;\;\forall m,

Equivalently:

\min_{\{f_m\}, b, d} \; \sum_i \max\Big(\Big|\sum_m f_m(x_i) + b - y_i\Big| - t,\, 0\Big) + \frac{1}{2C}\sum_m \frac{1}{d_m}\lVert f_m \rVert_{\mathcal{H}_m}^{2} + \mu\sum_m |d_m|


Page 21: Lecture9 multi kernel_svm

Multiple Kernel functional Learning

The problem (for given C and t)

\min_{\{f_m\}, b, \xi, d} \; \frac{1}{2}\sum_m \frac{1}{d_m}\lVert f_m \rVert_{\mathcal{H}_m}^{2} + C\sum_i \xi_i

\text{s.t. } \Big|\sum_m f_m(x_i) + b - y_i\Big| \le t + \xi_i \;\;\forall i, \quad \xi_i \ge 0 \;\;\forall i, \quad \sum_m d_m = 1, \; d_m \ge 0 \;\;\forall m,

Treated as a bi-level optimization task:

\min_{d \in \mathbb{R}^M} \; \min_{\{f_m\}, b, \xi} \; \frac{1}{2}\sum_m \frac{1}{d_m}\lVert f_m \rVert_{\mathcal{H}_m}^{2} + C\sum_i \xi_i

\text{s.t. } \Big|\sum_m f_m(x_i) + b - y_i\Big| \le t + \xi_i \;\;\forall i, \quad \xi_i \ge 0 \;\;\forall i

\text{s.t. } \sum_m d_m = 1, \; d_m \ge 0 \;\;\forall m,


Page 22: Lecture9 multi kernel_svm

Multiple Kernel experiments

[Figure: four regression panels (LinChirp, Wave, Blocks, Spikes), each showing the target signal and its estimate over x ∈ [0, 1].]

                 Single Kernel    Kernel Dil                 Kernel Dil-Trans
Data Set         Norm. MSE (%)    #Kernel    Norm. MSE       #Kernel    Norm. MSE
LinChirp         1.46 ± 0.28      7.0        1.00 ± 0.15     21.5       0.92 ± 0.20
Wave             0.98 ± 0.06      5.5        0.73 ± 0.10     20.6       0.79 ± 0.07
Blocks           1.96 ± 0.14      6.0        2.11 ± 0.12     19.4       1.94 ± 0.13
Spike            6.85 ± 0.68      6.1        6.97 ± 0.84     12.8       5.58 ± 0.84

Table: Normalized Mean Square error averaged over 20 runs.


Page 23: Lecture9 multi kernel_svm

Conclusion on multiple kernel (MKL)

MKL: kernel tuning, variable selection...
  - extension to classification and one-class SVM

SVM-KM: an efficient Matlab toolbox (available at MLOSS)²

Multiple Kernels for Image Classification: Software and Experiments on Caltech-101³

New trend: multi kernel, multi task, and an infinite number of kernels

² http://mloss.org/software/view/33/

³ http://www.robots.ox.ac.uk/~vgg/software/MKL/

Page 24: Lecture9 multi kernel_svm

Bibliography

A. Rakotomamonjy, F. Bach, S. Canu & Y. Grandvalet. SimpleMKL. J. Mach. Learn. Res. 2008, 9:2491–2521.

M. Gönen & E. Alpaydın. Multiple kernel learning algorithms. J. Mach. Learn. Res. 2011, 12:2211–2268.

http://www.cs.nyu.edu/~mohri/icml2011-tutorial/tutorial-icml2011-2.pdf

http://www.robots.ox.ac.uk/~vgg/software/MKL/

http://www.nowozin.net/sebastian/talks/ICCV-2009-LPbeta.pdf


