Sparse Coding and Dictionary Learning for Image Analysis

Part I: Optimization for Sparse Coding

Francis Bach, Julien Mairal, Jean Ponce and Guillermo Sapiro

ICCV'09 tutorial, Kyoto, 28th September 2009

Page 2: What is a Sparse Linear Model?

Let x ∈ R^m be a signal.

Let D = [d1, . . . , dp] ∈ R^{m×p} be a set of normalized "basis vectors". We call it a dictionary.

D is "adapted" to x if it can represent it with a few basis vectors; that is, there exists a sparse vector α ∈ R^p such that x ≈ Dα. We call α the sparse code.

x ≈ Dα,   where  x ∈ R^m,   D = [d1 d2 · · · dp] ∈ R^{m×p},   α ∈ R^p sparse.
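To make the definition concrete, here is a minimal MATLAB sketch (ours, not from the tutorial; the dictionary values are made up for illustration): a toy dictionary with unit-norm columns and a 2-sparse code representing a signal in R^3.

% Toy example (made-up values): m = 3, p = 4, unit-norm atoms.
D = [1 0 0 1/sqrt(2);
     0 1 0 1/sqrt(2);
     0 0 1 0];
alpha = [0; 0; 0.5; 1.2];   % sparse code: only 2 of the p = 4 entries are nonzero
x = D * alpha;              % the signal is exactly represented by 2 atoms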


Page 3: The Sparse Decomposition Problem

min_{α∈R^p}  (1/2)||x − Dα||₂² + λψ(α)

where (1/2)||x − Dα||₂² is the data-fitting term and λψ(α) is the sparsity-inducing regularization.

ψ induces sparsity in α. It can be:

the ℓ0 "pseudo-norm": ||α||₀ ≜ #{i s.t. α[i] ≠ 0}  (NP-hard)

the ℓ1 norm: ||α||₁ ≜ Σ_{i=1}^p |α[i]|  (convex)

. . .

This is a selection problem.


Page 4: Finding your way in the sparse coding literature. . .

. . . is not easy. The literature is vast, redundant, sometimes confusing, and many papers claim victory. . .

The main classes of methods are:

greedy procedures [Mallat and Zhang, 1993], [Weisberg, 1980]

homotopy [Osborne et al., 2000], [Efron et al., 2004], [Markowitz, 1956]

soft-thresholding based methods [Fu, 1998], [Daubechies et al., 2004], [Friedman et al., 2007], [Nesterov, 2007], [Beck and Teboulle, 2009], . . .

reweighted-ℓ2 methods [Daubechies et al., 2009], . . .

active-set methods [Roth and Fischer, 2008]

. . .


Page 5: Outline

1 Greedy Algorithms

2 Homotopy and LARS

3 Soft-thresholding based optimization


Page 6: Matching Pursuit

min_{α∈R^p}  ||x − Dα||₂²  s.t.  ||α||₀ ≤ L     (r = x − Dα denotes the residual)

1: α ← 0
2: r ← x (residual)
3: while ||α||₀ < L do
4:    Select the atom with maximum correlation with the residual:
         ı ← argmax_{i=1,...,p} |d_i^T r|
5:    Update the residual and the coefficients:
         α[ı] ← α[ı] + d_ı^T r
         r ← r − (d_ı^T r) d_ı
6: end while
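As a concrete illustration, here is a minimal MATLAB transcription of this pseudo-code (our sketch, not the SPAMS implementation; it assumes the columns of D have unit norm):

function alpha = mp(x, D, L)
% Matching Pursuit: greedy L-sparse decomposition of x on dictionary D.
% Assumes unit-norm columns; the same atom may be selected several times.
p = size(D, 2);
alpha = zeros(p, 1);
r = x;                              % residual
while nnz(alpha) < L
    [~, i] = max(abs(D' * r));      % atom most correlated with the residual
    c = D(:, i)' * r;
    if c == 0, break; end           % residual fully explained: stop early
    alpha(i) = alpha(i) + c;        % update the coefficient
    r = r - c * D(:, i);            % update the residual
end
end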


Pages 7-16: Matching Pursuit, illustrated

[Figure sequence: a signal x in R^3 (axes x, y, z) and three atoms d1, d2, d3. Starting from α = (0, 0, 0) with residual r = x, MP selects d3 and subtracts ⟨r, d3⟩ d3 from r, giving α = (0, 0, 0.75); it then selects d2, giving α = (0, 0.24, 0.75); a later pass re-selects d3, giving α = (0, 0.24, 0.65). Note that the same atom can be selected several times.]

Page 17: Orthogonal Matching Pursuit

min_{α∈R^p}  ||x − Dα||₂²  s.t.  ||α||₀ ≤ L

1: Γ ← ∅
2: for iter = 1, . . . , L do
3:    Select the atom which most reduces the objective:
         ı ← argmin_{i ∈ Γᶜ} { min_{α′} ||x − D_{Γ∪{i}} α′||₂² }
4:    Update the active set: Γ ← Γ ∪ {ı}
5:    Update the residual (orthogonal projection):
         r ← (I − D_Γ (D_Γ^T D_Γ)⁻¹ D_Γ^T) x
6:    Update the coefficients:
         α_Γ ← (D_Γ^T D_Γ)⁻¹ D_Γ^T x
7: end for
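Again a minimal MATLAB sketch (ours; step 3 is implemented with the usual correlation rule |d_i^T r|, the standard proxy for the objective-reduction selection, and a plain least-squares solve stands in for the Cholesky updates discussed on page 21):

function alpha = omp(x, D, L)
% Orthogonal Matching Pursuit: each atom is selected at most once, and
% all active coefficients are refit by least squares at every step.
p = size(D, 2);
gamma = [];                          % active set Γ
r = x;
for iter = 1:L
    scores = abs(D' * r);
    scores(gamma) = -Inf;            % never re-select an active atom
    [~, i] = max(scores);
    gamma = [gamma, i];              % Γ ← Γ ∪ {ı}
    coeffs = D(:, gamma) \ x;        % least squares on the active atoms
    r = x - D(:, gamma) * coeffs;    % orthogonal residual
end
alpha = zeros(p, 1);
alpha(gamma) = coeffs;
end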


Pages 18-20: Orthogonal Matching Pursuit, illustrated

[Figure sequence: the same signal x in R^3 and atoms d1, d2, d3. With Γ = ∅, α = (0, 0, 0); after selecting d3 (Γ = {3}), x is orthogonally projected onto span{d3}, giving α = (0, 0, 0.75); after adding d2 (Γ = {3, 2}), x is projected onto span{d3, d2} and both coefficients are refit, giving α = (0, 0.29, 0.63).]

Page 21: Orthogonal Matching Pursuit

Contrary to MP, an atom can be selected only once with OMP. It is, however, more difficult to implement efficiently. The keys to a good implementation when the number of signals is large are:

Precompute the Gram matrix G = D^T D once and for all,

Maintain the computation of D^T r for each signal,

Maintain a Cholesky decomposition of (D_Γ^T D_Γ)⁻¹ for each signal.

The total complexity for decomposing n L-sparse signals of size m with a dictionary of size p is

O(p²m) [Gram matrix] + O(nL³) [Cholesky] + O(n(pm + pL²)) [D^T r] = O(np(m + L²))

It is also possible to use the matrix inversion lemma instead of a Cholesky decomposition (same complexity, but less numerical stability).


Page 22: Example with the software SPAMS

Software available at http://www.di.ens.fr/willow/SPAMS/

>> I=double(imread('data/lena.png'))/255;
>> % extract all patches of I
>> X=im2col(I,[8 8],'sliding');
>> % load a dictionary of size 64 x 256
>> D=load('dict.mat');
>>
>> % set the sparsity parameter L to 10
>> param.L=10;
>> alpha=mexOMP(X,D,param);

On an 8-core 2.83 GHz machine: 230,000 signals processed per second!


Page 23: Why does the ℓ1-norm induce sparsity? Analysis of the norms in 1D

ψ(α) = α²:    ψ′(α) = 2α
ψ(α) = |α|:   ψ′₋(α) = −1,  ψ′₊(α) = +1

The gradient of the ℓ2-norm vanishes when α gets close to 0. On its differentiable part, the norm of the gradient of the ℓ1-norm is constant.


Page 24: Why does the ℓ1-norm induce sparsity? Example: quadratic problem in 1D

min_{α∈R}  (1/2)(x − α)² + λ|α|

Piecewise quadratic function with a kink at zero.

Derivative at 0⁺: g₊ = −x + λ,  and at 0⁻: g₋ = −x − λ.

Optimality conditions. α is optimal iff:

|α| > 0 and −(x − α) + λ sign(α) = 0, or

α = 0 and g₊ ≥ 0 and g₋ ≤ 0.

The solution is a soft-thresholding:

α⋆ = sign(x)(|x| − λ)₊.
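In MATLAB, this closed-form solution is a one-liner (a sketch of ours; it applies elementwise, so it also works on vectors):

% Soft-thresholding operator
soft = @(x, lambda) sign(x) .* max(abs(x) - lambda, 0);
soft(0.9, 0.15)    % returns 0.75
soft(-0.1, 0.15)   % returns 0: small inputs are mapped exactly to zero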


Pages 25-27: Why does the ℓ1-norm induce sparsity? Physical illustration

[Figure sequence: two spring systems, shown first at rest (E1 = 0) with the free end at position x0. In the first system a second spring pulls back, so the total energy E1 + E2 = (k1/2)(x0 − x)² + (k2/2)x² is quadratic (an ℓ2-like penalty) and the equilibrium position x stays strictly away from 0. In the second system a weight pulls down, so E1 + E2 = (k1/2)(x0 − x)² + mgx has a linear (ℓ1-like) term, and for a heavy enough weight the equilibrium is exactly x = 0!]

Page 28: Why does the ℓ1-norm induce sparsity? The geometric explanation

[Figure: the ℓ1 ball in 2D (axes x, y) and in 3D (axes x, y, z); the level sets of the quadratic term tend to touch the ball at its corners and edges, which are sparse points.]

general quadratic problem: coupled soft-thresholding.


Page 29: Optimality conditions of the Lasso: nonsmooth optimization

Directional derivatives and subgradients are useful tools for studying ℓ1-decomposition problems:

min_{α∈R^p}  (1/2)||x − Dα||₂² + λ||α||₁

In this tutorial, we use directional derivatives to derive simple optimality conditions for the Lasso.

For more information on convex analysis and nonsmooth optimization, see the following books: [Boyd and Vandenberghe, 2004], [Nocedal and Wright, 2006], [Borwein and Lewis, 2006], [Bonnans et al., 2006], [Bertsekas, 1999].


Page 30: Optimality conditions of the Lasso: directional derivatives

Directional derivative in the direction u at α:

∇f(α, u) = lim_{t→0⁺} [ f(α + tu) − f(α) ] / t

Main idea: in nonsmooth situations, one may need to look at all directions u, and not simply p independent ones!

Proposition 1: if f is differentiable at α, then ∇f(α, u) = ∇f(α)^T u.

Proposition 2: α is optimal iff for all u in R^p, ∇f(α, u) ≥ 0.


Page 31: Optimality conditions of the Lasso

min_{α∈R^p}  (1/2)||x − Dα||₂² + λ||α||₁

α⋆ is optimal iff, for all u in R^p, ∇f(α⋆, u) ≥ 0; that is,

−u^T D^T (x − Dα⋆) + λ Σ_{i: α⋆[i]≠0} sign(α⋆[i]) u[i] + λ Σ_{i: α⋆[i]=0} |u[i]|  ≥  0,

which is equivalent to the following conditions:

∀i = 1, . . . , p:   |d_i^T (x − Dα⋆)| ≤ λ             if α⋆[i] = 0
                     d_i^T (x − Dα⋆) = λ sign(α⋆[i])   if α⋆[i] ≠ 0
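These conditions are easy to verify numerically for a candidate solution. A small MATLAB sketch of ours (the function name and the tolerance tol are hypothetical):

function ok = check_lasso_optimality(x, D, alpha, lambda, tol)
% Checks the optimality conditions above, up to tolerance tol.
g = D' * (x - D * alpha);          % correlations d_i^T (x - D*alpha)
zero = (alpha == 0);
ok = all(abs(g(zero)) <= lambda + tol) && ...
     all(abs(g(~zero) - lambda * sign(alpha(~zero))) <= tol);
end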


Page 32: Homotopy

A homotopy method provides a set of solutions indexed by a parameter.

The regularization path (λ, α⋆(λ)), for instance!

It can be useful when the path has some "nice" properties (piecewise linear, piecewise quadratic).

LARS [Efron et al., 2004] starts from a trivial solution and follows the regularization path of the Lasso, which is piecewise linear.


Page 33: Homotopy, LARS [Osborne et al., 2000], [Efron et al., 2004]

∀i = 1, . . . , p:   |d_i^T (x − Dα⋆)| ≤ λ             if α⋆[i] = 0
                     d_i^T (x − Dα⋆) = λ sign(α⋆[i])   if α⋆[i] ≠ 0     (1)

The regularization path is piecewise linear:

D_Γ^T (x − D_Γ α⋆_Γ) = λ sign(α⋆_Γ)

α⋆_Γ(λ) = (D_Γ^T D_Γ)⁻¹ (D_Γ^T x − λ sign(α⋆_Γ)) = A + λB

A simple interpretation of LARS:

Start from the trivial solution (λ = ||D^T x||_∞, α⋆(λ) = 0).

Maintain the computations of |d_i^T (x − Dα⋆(λ))| for all i.

Maintain the computation of the current direction B.

Follow the path by reducing λ until the next kink (see the sketch below).
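A sketch of one path segment in MATLAB (ours, not a full LARS implementation; gamma denotes the current active set Γ and s = sign(α⋆_Γ), both assumed already computed):

% On the active set, the solution is affine in lambda: alpha_G = A + lambda*B.
G = D(:, gamma)' * D(:, gamma);
A = G \ (D(:, gamma)' * x);
B = -(G \ s);
% Follow A + lambda*B while decreasing lambda, until either (i) some active
% coefficient A(j) + lambda*B(j) crosses zero, or (ii) the correlation
% |d_i'(x - D(:,gamma)*(A + lambda*B))| of an inactive atom reaches lambda.
% Each such event is a kink: update gamma and s, then recompute A and B.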


Page 34: Example with the software SPAMS

http://www.di.ens.fr/willow/SPAMS/

>> I=double(imread('data/lena.png'))/255;
>> % extract all patches of I
>> X=normalize(im2col(I,[8 8],'sliding'));
>> % load a dictionary of size 64 x 256
>> D=load('dict.mat');
>>
>> % set the sparsity parameter lambda to 0.15
>> param.lambda=0.15;
>> alpha=mexLasso(X,D,param);

On an 8-core 2.83 GHz machine: 77,000 signals processed per second!

Note that it can also solve constrained versions of the problem. The complexity is roughly the same as for OMP, and the implementation uses the same tricks (Cholesky decomposition).


Page 35: Coordinate Descent

Coordinate descent + nonsmooth objective: WARNING, not convergent in general.

Here, the problem is equivalent to a convex smooth optimization problem with separable constraints:

min_{α+, α−}  (1/2)||x − Dα+ + Dα−||₂² + λ α+^T 1 + λ α−^T 1   s.t.  α−, α+ ≥ 0.

For this specific problem, coordinate descent is convergent.

Supposing ||d_i||₂ = 1, updating the coordinate i:

α[i] ← argmin_β  (1/2)|| r − β d_i ||₂² + λ|β|,   where  r = x − Σ_{j≠i} α[j] d_j

     ← sign(d_i^T r)(|d_i^T r| − λ)₊

⇒ soft-thresholding!
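Wrapped into a loop over coordinates, this update gives the following MATLAB sketch (ours; it assumes unit-norm columns and runs a fixed number of passes rather than testing convergence):

function alpha = cd_lasso(x, D, lambda, npass)
% Coordinate descent for the Lasso with unit-norm atoms.
p = size(D, 2);
alpha = zeros(p, 1);
r = x;                                      % residual x - D*alpha
for pass = 1:npass
    for i = 1:p
        c = D(:, i)' * r + alpha(i);        % d_i^T (partial residual)
        newi = sign(c) * max(abs(c) - lambda, 0);   % soft-thresholding
        r = r + (alpha(i) - newi) * D(:, i);        % keep r consistent
        alpha(i) = newi;
    end
end
end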


Page 36: Example with the software SPAMS

http://www.di.ens.fr/willow/SPAMS/

>> I=double(imread('data/lena.png'))/255;
>> % extract all patches of I
>> X=normalize(im2col(I,[8 8],'sliding'));
>> % load a dictionary of size 64 x 256
>> D=load('dict.mat');
>>
>> % set the sparsity parameter lambda to 0.15
>> param.lambda=0.15;
>> param.tol=1e-2;
>> param.itermax=200;
>> alpha=mexCD(X,D,param);

On an 8-core 2.83 GHz machine: 93,000 signals processed per second!


Page 37: First-order/proximal methods

min_{α∈R^p}  f(α) + λψ(α)

f is strictly convex and differentiable with a Lipschitz gradient.

These methods generalize the idea of gradient descent:

α^{k+1} ← argmin_{α∈R^p}  f(α^k) + ∇f(α^k)^T (α − α^k) + (L/2)||α − α^k||₂² + λψ(α),

where L is an upper bound on the Lipschitz constant of ∇f.

There exists an accelerated scheme (gradient method with "extrapolation") [Nesterov, 2007, 1983].

Both are implemented in SPAMS and are well suited to large-scale experiments.
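As an illustration, here is a minimal MATLAB sketch of the unaccelerated scheme (ISTA) for f(α) = (1/2)||x − Dα||₂² and ψ = ℓ1; this is our sketch, not the SPAMS code, and FISTA would add the Nesterov extrapolation step:

function alpha = ista(x, D, lambda, niter)
% Proximal gradient (ISTA) for the Lasso; the proximal operator of the
% l1-norm is the soft-thresholding operator.
L = norm(D)^2;                       % Lipschitz constant of the gradient
alpha = zeros(size(D, 2), 1);
for k = 1:niter
    g = D' * (D * alpha - x);        % gradient of f at alpha
    z = alpha - g / L;               % gradient step
    alpha = sign(z) .* max(abs(z) - lambda / L, 0);  % proximal step
end
end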


Page 38: Summary of this part

Greedy methods can directly address the NP-hard ℓ0-decomposition problem.

ℓ1 can be used as a convex relaxation of ℓ0.

Homotopy methods can be extremely efficient for small or medium-sized problems, or when the solution is very sparse.

Coordinate descent generally provides a solution of small/medium precision quickly, but gets slower when there is a lot of correlation in the dictionary.

First-order methods are very attractive in the large-scale setting.

Other good alternatives exist: active-set and reweighted-ℓ2 methods, stochastic variants, variants of OMP, . . .


References

A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.

D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, Mass., 1999.

J. F. Bonnans, J. C. Gilbert, C. Lemaréchal, and C. A. Sagastizábal. Numerical Optimization: Theoretical and Practical Aspects. Springer-Verlag, 2006.

J. M. Borwein and A. S. Lewis. Convex Analysis and Nonlinear Optimization: Theory and Examples. Springer, 2006.

S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

I. Daubechies, M. Defrise, and C. De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 57:1413–1457, 2004.

I. Daubechies, R. DeVore, M. Fornasier, and S. Gunturk. Iteratively re-weighted least squares minimization for sparse recovery. Communications on Pure and Applied Mathematics, 2009.

B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004.

J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. Annals of Applied Statistics, 1(2):302–332, 2007.

W. J. Fu. Penalized regressions: the bridge versus the Lasso. Journal of Computational and Graphical Statistics, 7:397–416, 1998.

S. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, 1993.

H. M. Markowitz. The optimization of a quadratic function subject to linear constraints. Naval Research Logistics Quarterly, 3:111–133, 1956.

Y. Nesterov. Gradient methods for minimizing composite objective function. Technical report, CORE, 2007.

Y. Nesterov. A method for solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27:372–376, 1983.

J. Nocedal and S. J. Wright. Numerical Optimization. Springer, New York, 2nd edition, 2006.

M. R. Osborne, B. Presnell, and B. A. Turlach. On the Lasso and its dual. Journal of Computational and Graphical Statistics, 9(2):319–337, 2000.

V. Roth and B. Fischer. The group-lasso for generalized linear models: uniqueness of solutions and efficient algorithms. In Proceedings of the International Conference on Machine Learning (ICML), 2008.

S. Weisberg. Applied Linear Regression. Wiley, New York, 1980.
