Page 1

A SGD safari

Lorenzo Rosasco
University of Genova

Massachusetts Institute of Technology - Istituto Italiano di Tecnologia
lcsl.mit.edu

Jan. 3rd, 2019 – DALI 2019: Optimization Workshop

joint work with R. Camoriano (LCSL), J. Lin (EPFL), S. Villa (UniGE)

Page 2

Outline

Classic results

Statistical learning & least squares

Multi-pass SGD

Page 3

SGD

Problem: Solve min_w F(w),   F(w) = E_Z ℓ(Z, w)

SGD: w_{t+1} = w_t − γ_t ∇ℓ(Z_t, w_t),   t = 0, …, T

- It holds E[∇ℓ(Z_t, w)] = ∇F(w), hence the name.

- Every step requires a fresh gradient estimate (a minimal sketch follows below).
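
To make the update concrete, here is a minimal sketch in Python/NumPy (not from the slides): the squared loss, the synthetic data stream, and the γ_t = γ₀/√t schedule are illustrative assumptions.

```python
import numpy as np

def sgd(grad_loss, sample_z, w0, T, gamma0=1.0):
    """Plain SGD: w_{t+1} = w_t - gamma_t * grad_loss(Z_t, w_t)."""
    w = np.array(w0, dtype=float)
    for t in range(1, T + 1):
        z = sample_z()                                   # a fresh sample Z_t at every step
        w = w - (gamma0 / np.sqrt(t)) * grad_loss(z, w)  # gamma_t = gamma0/sqrt(t): illustrative choice
    return w

# Toy usage: F(w) = E[(Y - <w, X>)^2] on a hypothetical synthetic data stream.
rng = np.random.default_rng(0)
w_star = np.array([1.0, -2.0])

def sample_z():
    x = rng.normal(size=2)
    return x, x @ w_star + 0.1 * rng.normal()

def grad_sq_loss(z, w):
    x, y = z
    return 2.0 * (w @ x - y) * x                         # gradient of (y - <w, x>)^2 in w

w_hat = sgd(grad_sq_loss, sample_z, np.zeros(2), T=5000)
```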

Page 4

SGD: typical result

Assume F convex and smooth, with bounded gradients, and take γ_t ≲ 1/√t. Then

E[F(w_T) − min_w F(w)] ≲ 1/√T.

- Rates are optimal and cannot be improved in general.

- Better rates hold under stronger conditions: strong convexity, KL/conditioning.

Page 5

SGD for training error

Special case

F(w) = (1/n) ∑_{i=1}^n ℓ(z_i, w),

with Z a random variable uniformly distributed on {z_1, …, z_n}.

- Better rates are achievable in this case.

- Again improvable under stronger conditions: strong convexity, KL/conditioning.

- In this case SGD is also called incremental gradient (see the sketch below).
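
In this special case the only change with respect to the sketch above is how Z_t is generated: the index is drawn uniformly from a fixed training set. A minimal illustration with hypothetical variable names:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.normal(size=(50, 2))                    # hypothetical fixed training points z_1, ..., z_n
y_train = x_train @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=50)

def sample_training_point():
    i = rng.integers(len(y_train))                    # Z uniformly distributed on z_1, ..., z_n
    return x_train[i], y_train[i]

# Plugged into the generic loop of the previous sketch:
# w_hat = sgd(grad_sq_loss, sample_training_point, np.zeros(2), T=5000)
```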

Page 6

Understanding SGD: from practice to theory

- multiple passes (gradients are re-used)

- various step-size choices

- mini-batch

- averaging

- sketching

- acceleration

- preconditioning

- ...

What is the impact for learning (test error)?

Page 7

Outline

Classic results

Statistical learning & least squares

Multi-pass SGD

Page 8

Least squares learning

- X a Hilbert space

- Z = (X, Y) with values in X × R

Problem: Solve

min_{w∈X} E(w),   E(w) = E[(Y − ⟨w, X⟩)^2]

given only (x_i, y_i)_{i=1}^n i.i.d.

Minimal norm solution:

w† = argmin_{w∈O} ‖w‖,   O = argmin_{w∈X} E(w)

Page 9

Ill-posedness

Least squares optimality conditions:

Σ w† = g,   Σ = E[X ⊗ X],   g = E[XY]

and w† ∈ Null(Σ)^⊥.

Ill-posedness

- If X is infinite dimensional and Σ is compact, the problem is ill-posed.

- If X is finite dimensional, the problem is well posed, but possibly badly conditioned.

Page 11

Least squares SGD

w_{t+1} = w_t − η_t ( x_{i_t} (⟨w_t, x_{i_t}⟩ − y_{i_t}) + λ w_t ),   t = 0, …, T

Free parameters:

- regularization parameter λ

- step-size (η_t)_t

- stopping time T (T > n: multiple “passes”)

Note: (i_t)_t can be deterministic or stochastic (with/without replacement); a minimal sketch follows below.
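
A minimal sketch of this regularized update with the three free parameters exposed; the specific decreasing schedule η_t = η₀/√(t+1) and the two sampling options below are illustrative choices, not prescribed by the slide.

```python
import numpy as np

def ls_sgd(X, y, lam=0.0, eta0=0.1, T=None, stochastic=True, rng=None):
    """w_{t+1} = w_t - eta_t * ( x_{i_t} (<w_t, x_{i_t}> - y_{i_t}) + lam * w_t )."""
    n, d = X.shape
    T = n if T is None else T                         # T > n corresponds to multiple passes
    rng = np.random.default_rng(0) if rng is None else rng
    w = np.zeros(d)
    for t in range(T):
        i = rng.integers(n) if stochastic else t % n  # with-replacement vs cyclic selection
        eta_t = eta0 / np.sqrt(t + 1)                 # one possible decreasing step-size (illustrative)
        w = w - eta_t * (X[i] * (w @ X[i] - y[i]) + lam * w)
    return w
```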

Page 12

LS-SGD: Previous results

Non-asymptotic results:

- [Smale-Yao ’05] Fixed λ (some classic results hold for this case).

- [Tarres-Yao ’07] Decreasing λ.

- [Ying-Pontil ’07] λ = 0.

All one-pass, i.e. i_t = t, with decreasing step-size.

[Villa-Rosasco ’15] λ = 0, multiple passes (for the first time?), cyclic selection.

Page 13

Outline

Classic results

Statistical learning & least squares

Multi-pass SGD

Page 14

Multi-pass LS-SGD

w_{t+1} = w_t − η x_{i_t} (⟨w_t, x_{i_t}⟩ − y_{i_t}),   t = 0, …, T

Note: (i_t)_t chosen uniformly at random with replacement

Theorem (Lin, R. ’16)

Assume ‖X‖ ≤ 1 and |Y| ≤ 1 almost surely. Then, for all η and T,

E[E(w_T)] − E(w†) ≲ 1/(ηT) + (1/√n) (ηT/√n)^2 + η (1 ∨ ηT/√n)

Note

- Statistics and optimization are integrated in a single bound.

- Bias-variance: parameter choices are derived by optimizing the bound.

Page 17

Multi-pass vs one pass SGD

Corollary (Lin, R. ’16)

Assume ‖X‖ ≤ 1 and |Y| ≤ 1 a.s. and let either

- T = n (1 pass), η = 1/√n, or

- T = n^{3/2} (√n passes), η = 1/n.

Then,

E[E(w_T)] − E(w†) ≲ 1/√n

Note

- Optimal (nonparametric) rate in a minimax sense.

- With a larger step-size, one pass suffices (recovering [Dieuleveut, Bach ’14; Ying, Pontil ’06]); see the sketch below.
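
As an illustration of the corollary, the two parameter choices can be compared on synthetic data using the constant-step-size, with-replacement update from the theorem above; the routine and the data below are hypothetical, only meant to show how T and η scale with n.

```python
import numpy as np

def multipass_ls_sgd(X, y, eta, T, rng=None):
    """Constant-step-size LS-SGD, indices drawn uniformly with replacement."""
    n, d = X.shape
    rng = np.random.default_rng(0) if rng is None else rng
    w = np.zeros(d)
    for _ in range(T):
        i = rng.integers(n)
        w = w - eta * X[i] * (w @ X[i] - y[i])
    return w

# Hypothetical synthetic data, roughly satisfying ||X|| <= 1 and |Y| <= 1.
rng = np.random.default_rng(0)
n, d = 400, 5
X = rng.uniform(-1, 1, size=(n, d)) / np.sqrt(d)
y = np.clip(X @ rng.normal(size=d), -1, 1)

w_one_pass   = multipass_ls_sgd(X, y, eta=1.0 / np.sqrt(n), T=n)         # T = n,      eta = 1/sqrt(n)
w_multi_pass = multipass_ls_sgd(X, y, eta=1.0 / n, T=int(n ** 1.5))      # T = n^{3/2}, eta = 1/n
```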

Page 18

Beyond the worst case: source condition

Recall:

Σ w† = g,   Σ = E[X ⊗ X],   g = E[XY]

and w† ∈ Null(Σ)^⊥.

- S) Source condition: w† ∈ Range(Σ^α), α > 0

- C) Capacity condition: σ_i(Σ) ∼ i^{−γ}, γ ∈ (0, 1]

Page 19

Fast rates

Theorem (Lin, R. ’16)

Assume ‖X‖ ≤ 1, |Y| ≤ 1, and that S), C) hold. Then, for all η and T,

E[E(w_T)] − E(w†) ≲ (1/(ηT))^{2α+1} + (1/n^{(2α+1)/(2α+1+γ)}) (ηT/n^{1/(2α+1+γ)})^2 + η (1 ∨ ηT/n^{1/(2α+1+γ)})

Note

- Reduces to the worst case for α = 0, γ = 1.

- Different parameter choices are derived by optimizing the bound.

Page 20

Multiple passes SGD

Corollary (Lin, R. ’16)

Assume ‖X‖ ≤ 1, |Y| ≤ 1, and that S), C) hold. Let

- T = n^{1/(2α+1+γ) + 1} (that is, n^{1/(2α+1+γ)} passes),

- η = 1/n.

Then,

E[E(w_T)] − E(w†) ≲ n^{−(2α+1)/(2α+1+γ)}

Note

- Optimal (nonparametric) rate in a minimax sense.

- Same rate as Tikhonov regularization, but the bound also accounts for optimization!

- Choosing T by cross-validation (CV) achieves the same rate; a hold-out sketch follows below.
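
The cross-validation remark can be illustrated with a simple hold-out variant: run the multi-pass iteration, monitor the validation error once per pass, and keep the iterate (i.e., the stopping time T) with the smallest hold-out error. This is a hedged sketch with hypothetical names, using hold-out rather than full CV.

```python
import numpy as np

def ls_sgd_holdout_stopping(X_tr, y_tr, X_val, y_val, eta, max_passes, rng=None):
    """Constant-step LS-SGD; return the iterate and stopping time with smallest hold-out error."""
    n, d = X_tr.shape
    rng = np.random.default_rng(0) if rng is None else rng
    w = np.zeros(d)
    best_w, best_err, best_T = w.copy(), np.mean((X_val @ w - y_val) ** 2), 0
    for t in range(1, max_passes * n + 1):
        i = rng.integers(n)
        w = w - eta * X_tr[i] * (w @ X_tr[i] - y_tr[i])
        if t % n == 0:                                # evaluate once per pass (simple hold-out, not full CV)
            err = np.mean((X_val @ w - y_val) ** 2)
            if err < best_err:
                best_w, best_err, best_T = w.copy(), err, t
    return best_w, best_T
```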

Page 21

One pass SGD

Corollary (Dieuleveut, Bach ’16)

Assume ‖x‖ ≤ 1, |y| ≤ 1, and that S), C) hold with α < 1/2. Let

- T = n (1 pass),

- η = n^{−(2α+1)/(2α+1+γ)},

- w̄_n = (1/n) ∑_{t=1}^n w_t (averaged iterate).

Then,

E[E(w̄_n)] − E(w†) ≲ n^{−(2α+1)/(2α+1+γ)}

Note

- Optimal (nonparametric) rate in a minimax sense.

- The same rate is achieved using cross-validation (CV) to choose the step-size η.

Page 22

Remarks

- Step-size and number of iterations control the convergence and stability of SGD: one of the two (or both) needs to be tuned.

- The proof extends to harder or easier learning problems, with slightly different take-home messages [Pillaud et al. ’18].

- The proof strategy extends to averaging [Pillaud et al. ’18], decaying step-sizes, and mini-batches [Lin, R. ’16].

Page 23

Mini-batch, multi-pass LS-SGD

w_{t+1} = w_t − η_t (1/b) ∑_{i=b(t−1)+1}^{bt} (⟨w_t, x_{j_i}⟩ − y_{j_i}) x_{j_i}

Theorem (Lin, R. ’16)

Assume ‖X‖ ≤ 1 and |Y| ≤ 1 almost surely. Then, for all η and T,

E[E(w_T)] − E(w†) ≲ 1/(ηT) + (1/√n) (ηT/√n)^2 + (η/b) (1 + ηT/√n)

Note

- b is the mini-batch size; a minimal sketch follows below.
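
A minimal sketch of the mini-batch update above; drawing each mini-batch uniformly at random with replacement is an assumption here, chosen to match the single-sample case earlier.

```python
import numpy as np

def minibatch_ls_sgd(X, y, b, eta, T, rng=None):
    """w_{t+1} = w_t - eta * (1/b) * sum_{j in batch} (<w_t, x_j> - y_j) x_j."""
    n, d = X.shape
    rng = np.random.default_rng(0) if rng is None else rng
    w = np.zeros(d)
    for _ in range(T):
        J = rng.integers(n, size=b)                   # mini-batch of b indices (with replacement: an assumption)
        residuals = X[J] @ w - y[J]                   # <w_t, x_j> - y_j for each j in the batch
        w = w - (eta / b) * (X[J].T @ residuals)
    return w
```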

Page 24

Multi-pass vs one pass SGD

Corollary (Lin, R. ’16)

Assume ‖X‖ ≤ 1 and |Y| ≤ 1 a.s. and consider one of the following choices:

1. b = 1, η_t ≃ 1/√n, and T = n iterations (1 pass over the data);

2. b = √n, η_t ≃ 1, and T = √n iterations (1 pass over the data);

3. b = n, η_t ≃ 1, and T = √n iterations (√n passes over the data).

Then,

E[E(w_T)] − E(w†) ≲ 1/√n

Note

- Mini-batching allows larger step-sizes.

- No gain after b = √n.

- Refined results hold beyond this worst case.

Page 25

Concluding

- Tools from statistical learning to understand SGD as used in practice.

- First optimal results for multiple passes (and mini-batching).

- Sketching/random features → I brought a poster...

Page 26

Some open problems

- Combine averaging and mini-batching: [Mücke, R. ’19] on the way.

- Beyond least squares: partial results in [Hardt et al. ’16; Lin, Camoriano, R. ’16].

- Beyond the minimal ℓ_2 norm: [Matet, R., Villa, Vu ’16; Garrigos, R., Villa ’16] cover the batch case.

- Acceleration: results in [Jain et al. ’16-].

- Non-convexity.

Page 27

References

- Learning with incremental iterative regularization. L. Rosasco, S. Villa. Advances in Neural Information Processing Systems, 1630-1638.

- Optimal rates for multi-pass stochastic gradient methods. J. Lin, L. Rosasco. The Journal of Machine Learning Research 18 (1), 3375-3421.

- Generalization properties and implicit regularization for multiple passes SGM. J. Lin, R. Camoriano, L. Rosasco. International Conference on Machine Learning, 2340-2348.

- Learning with SGD and random features. L. Carratino, A. Rudi, L. Rosasco. Advances in Neural Information Processing Systems, 10213-10224.

