
ORIE 4741: Learning with Big Messy Data

Regularization

Professor Udell

Operations Research and Information Engineering, Cornell

November 19, 2019


Regularized empirical risk minimization

choose model by solving

minimize ∑_{i=1}^n ℓ(x_i, y_i; w) + r(w)

with variable w ∈ R^d

- parameter vector w ∈ R^d

- loss function ℓ : X × Y × R^d → R

- regularizer r : R^d → R

why?

- want to minimize the risk E_{(x,y)∼P} ℓ(x, y; w)

- approximate it by the empirical risk ∑_{i=1}^n ℓ(x_i, y_i; w)

- add a regularizer to help the model generalize
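To make the abstract problem concrete, here is a minimal numerical sketch (not from the slides) that fits a regularized ERM model by plain gradient descent, assuming a squared loss and a quadratic regularizer; the function name, step size, and synthetic data are illustrative.

```python
import numpy as np

def fit_regularized_erm(X, y, lam=0.1, step=1e-3, iters=5000):
    """Minimize sum_i (y_i - w^T x_i)^2 + lam * ||w||^2 by gradient descent (illustrative)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = 2 * X.T @ (X @ w - y) + 2 * lam * w   # gradient of loss plus regularizer
        w -= step * grad
    return w

# illustrative synthetic data
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + 0.1 * rng.standard_normal(100)
print(fit_regularized_erm(X, y))
```

Any differentiable loss/regularizer pair fits the same template; only the gradient changes.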


Example: regularized least squares

find best model by solving

minimize ∑_{i=1}^n ℓ(x_i, y_i; w) + r(w)

with variable w ∈ R^d

ridge regression, aka quadratically regularized least squares:

- loss function ℓ(x, y; w) = (y − w^T x)^2

- regularizer r(w) = ‖w‖^2


Outline

Maximum likelihood estimation

Regularizers

Quadratic regularization

ℓ1 regularization

Nonnegative regularizer


Probabilistic setup

- suppose you know a function p : R → [0, 1] so that

  P(y_i = y | x_i, w) ∼ p(y; x_i, w)

- for example, if y_i = w^T x_i + ε_i with ε_i ∼ N(0, σ^2), then

  P(y_i = y | x_i, w) ∼ (1/√(2πσ^2)) exp(−(y_i − w^T x_i)^2 / (2σ^2))

- likelihood of the data given parameter w is

  L(D; w) = ∏_{i=1}^n p(y_i; x_i, w) ∼ ∏_{i=1}^n P(y_i = y | x_i, w)

- for example, for a linear model with Gaussian error,

  L(D; w) = ∏_{i=1}^n (1/√(2πσ^2)) exp(−(y_i − w^T x_i)^2 / (2σ^2))


Maximum Likelihood Estimation (MLE)

MLE: choose w to maximize L(D;w)

- likelihood

  L(D; w) = ∏_{i=1}^n p(y_i; x_i, w)

- negative log likelihood

  ℓ(D; w) = − log L(D; w)

- maximize L(D; w) ⇐⇒ minimize ℓ(D; w)


Example: Maximum Likelihood Estimation (MLE)

for linear model with Gaussian error,

ℓ(D; w) = − log ( ∏_{i=1}^n (1/√(2πσ^2)) exp(−(y_i − w^T x_i)^2 / (2σ^2)) )

        = ∑_{i=1}^n − log( (1/√(2πσ^2)) exp(−(y_i − w^T x_i)^2 / (2σ^2)) )

        = ∑_{i=1}^n ( (1/2) log(2πσ^2) − log( exp(−(y_i − w^T x_i)^2 / (2σ^2)) ) )

        = (n/2) log(2πσ^2) + ∑_{i=1}^n (1/(2σ^2)) (y_i − w^T x_i)^2

        = (n/2) log(2πσ^2) + (1/(2σ^2)) ∑_{i=1}^n (y_i − w^T x_i)^2

so maximize L(D; w) ⇐⇒ minimize ∑_{i=1}^n (y_i − w^T x_i)^2 (for fixed σ)
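A quick numerical check of this equivalence (a sketch, not part of the slides): minimize the negative log likelihood directly and compare with the least squares solution, holding σ fixed. It assumes numpy and scipy are available; the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, d, sigma = 200, 3, 0.5
X = rng.standard_normal((n, d))
y = X @ np.array([1.0, -1.0, 2.0]) + sigma * rng.standard_normal(n)

def neg_log_likelihood(w):
    r = y - X @ w
    return n / 2 * np.log(2 * np.pi * sigma**2) + (r @ r) / (2 * sigma**2)

w_mle = minimize(neg_log_likelihood, np.zeros(d)).x
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(w_mle, w_ls, atol=1e-4))  # True: MLE coincides with least squares
```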


Priors

what if I have beliefs about what w should be before I begin?

- w should be small

- w should be sparse

- w should be nonnegative

idea: impose prior on w to specify

P(w)

before seeing any data


Maximum-a-posteriori estimation

after I see data, compute posterior probability

P(D;w) = P(D | w)P(w)

maximum a posteriori (MAP) estimation: choose w to maximize the posterior probability

n.b. this is not what a true Bayesian would do (see, e.g., Bishop, Pattern Recognition and Machine Learning)


Ridge regression: interpretation as MAP estimator

- prior probability of model: w ∼ N(0, I_d)
- noise: ε_i ∼ N(0, σ^2), i = 1, ..., n
- response: y_i = w^T x_i + ε_i, i = 1, ..., n

P(D; w) = P(D | w) P(w)

        ∼ ∏_{i=1}^n (1/√(2πσ^2)) exp(−(y_i − w^T x_i)^2 / (2σ^2)) · ∏_{i=1}^d (1/√(2π)) exp(−w_i^2 / 2)

        = (2πσ^2)^{−n/2} ∏_{i=1}^n exp(−(y_i − w^T x_i)^2 / (2σ^2)) · (2π)^{−d/2} ∏_{i=1}^d exp(−w_i^2 / 2)

ℓ(D; w) = − log P(D; w)

        = (n/2) log(2πσ^2) + (d/2) log(2π) + (1/(2σ^2)) ∑_{i=1}^n (y_i − w^T x_i)^2 + (1/2) ∑_{i=1}^d w_i^2

... aha! and we have ridge regression with λ = σ^2
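The same kind of numerical check works here (a sketch, not part of the slides): minimize the negative log posterior above and compare with the ridge closed form using λ = σ^2; numpy and scipy are assumed available.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, d, sigma = 200, 4, 0.7
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + sigma * rng.standard_normal(n)

def neg_log_posterior(w):                      # constants in w are dropped
    r = y - X @ w
    return (r @ r) / (2 * sigma**2) + 0.5 * (w @ w)

w_map = minimize(neg_log_posterior, np.zeros(d)).x
w_ridge = np.linalg.solve(X.T @ X + sigma**2 * np.eye(d), X.T @ y)  # lambda = sigma^2
print(np.allclose(w_map, w_ridge, atol=1e-4))  # True
```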


Outline

Maximum likelihood estimation

Regularizers

Quadratic regularization

ℓ1 regularization

Nonnegative regularizer


Regularization

why regularize?

- reduce variance of the model

- impose prior structural knowledge

- improve interpretability

why not regularize?

- Gauss-Markov theorem: least squares is the best linear unbiased estimator

- regularization increases bias


Regularizers: a tour

we might choose a regularizer so that models will be

- small

- sparse

- nonnegative

- smooth

- ...

compare with forward- and backward-stepwise selection (e.g., AIC, BIC): regularized models tend to have lower variance.

(see Elements of Statistical Learning (Hastie, Tibshirani, Friedman) for more information.)


Outline

Maximum likelihood estimation

Regularizers

Quadratic regularization

ℓ1 regularization

Nonnegative regularizer


Quadratic regularizer

quadratic regularizer

r(w) = λ ∑_{i=1}^d w_i^2

ridge regression:

minimize ∑_{i=1}^n (y_i − w^T x_i)^2 + λ ∑_{i=1}^d w_i^2

with variable w ∈ R^d

solution: w = (X^T X + λI)^{−1} X^T y
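A minimal sketch of that closed form (illustrative, not the course demo); solving the linear system is preferred to forming the inverse explicitly.

```python
import numpy as np

def ridge(X, y, lam):
    """Solve minimize ||y - Xw||^2 + lam * ||w||^2 via (X^T X + lam I) w = X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 3))
y = rng.standard_normal(50)
print(ridge(X, y, lam=1.0))
print(ridge(X, y, lam=1e-8))   # tiny lam approaches the least squares solution
```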


Quadratic regularizer

- shrinks coefficients towards 0

- shrinks more in the direction of the smallest singular values of X
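One way to see the second point (a sketch, not from the slides): write X = U diag(s) V^T. Ridge shrinks the least squares coefficient along the ith right singular vector by the factor s_i^2 / (s_i^2 + λ), which is smallest where s_i is smallest. The data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 4))
y = rng.standard_normal(100)
lam = 5.0

U, s, Vt = np.linalg.svd(X, full_matrices=False)
w_svd = Vt.T @ (s / (s**2 + lam) * (U.T @ y))          # V diag(s/(s^2+lam)) U^T y
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)
print(np.allclose(w_svd, w_ridge))                      # True: same estimator
print(s**2 / (s**2 + lam))                              # per-direction shrinkage factors
```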


Is least squares scaling invariant?

suppose Alice and Bob do the same experiment

- Alice measures distance in mm

- Bob measures distance in km

they each compute an estimator with least squares and compare their predictions

Q: Do they make the same predictions?
A: Yes!


Least squares is scaling invariant

if β ∈ R, D ∈ R^{d×d} is diagonal, and Alice's measurements (X′, y′) are related to Bob's (X, y) by

  y′ = βy,   X′ = XD,

then the resulting least squares models are

  w = (X^T X)^{−1} X^T y,   w′ = (X′^T X′)^{−1} X′^T y′

and they make the same predictions:

  X′w′ = X′(X′^T X′)^{−1} X′^T y′ = XD(D^T X^T X D)^{−1} D^T X^T βy
       = X D D^{−1} (X^T X)^{−1} (D^T)^{−1} D^T X^T βy
       = βX(X^T X)^{−1} X^T y = βXw

we say least squares is invariant under scaling
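A numerical version of this check (illustrative data; D rescales Bob's columns and β rescales his response to get Alice's data):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((80, 3))                 # Bob's data
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(80)
beta, D = 0.01, np.diag([0.01, 1.0, 100.0])      # unit conversions
Xp, yp = X @ D, beta * y                         # Alice's data

w = np.linalg.lstsq(X, y, rcond=None)[0]
wp = np.linalg.lstsq(Xp, yp, rcond=None)[0]
print(np.allclose(Xp @ wp, beta * (X @ w)))      # True: same predictions, up to units
```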


Is ridge regression scaling invariant?

suppose Alice and Bob do the same experiment

- Alice measures distance in mm

- Bob measures distance in km

they each compute an estimator with ridge regression and compare their predictions

Q: Do they make the same predictions?
A: No!


Ridge regression is not scaling invariant

if β ∈ R, D ∈ R^{d×d} is diagonal, and Alice's measurements (X′, y′) are related to Bob's (X, y) by

  y′ = βy,   X′ = XD,

then the resulting ridge regression models are

  w = (X^T X + λI)^{−1} X^T y,   w′ = (X′^T X′ + λI)^{−1} X′^T y′

and the predictions are

  Xw = X(X^T X + λI)^{−1} X^T y,   X′w′ = X′(X′^T X′ + λI)^{−1} X′^T y′

ridge regression is not invariant under coordinate transformations
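Repeating the previous check with ridge instead of least squares (illustrative; the same λ is used for both) shows the loss of invariance:

```python
import numpy as np

def ridge(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(6)
X = rng.standard_normal((80, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(80)
beta, D = 0.01, np.diag([0.01, 1.0, 100.0])
Xp, yp = X @ D, beta * y

lam = 1.0
w, wp = ridge(X, y, lam), ridge(Xp, yp, lam)
print(np.allclose(Xp @ wp, beta * (X @ w)))      # False: the predictions disagree
```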


Scaling and offsets

to get the same answer no matter the units of measurement, standardize the data: for each column of X and of y,

- demean: subtract the column mean
- standardize: divide by the column standard deviation

let

  μ_j = (1/n) ∑_{i=1}^n X_{ij},      μ = (1/n) ∑_{i=1}^n y_i

  σ_j^2 = (1/n) ∑_{i=1}^n (X_{ij} − μ_j)^2,   σ^2 = (1/n) ∑_{i=1}^n (y_i − μ)^2

solve

  minimize ∑_{i=1}^n ( (y_i − μ)/σ − ∑_{j=1}^d w_j (X_{ij} − μ_j)/σ_j )^2 + λ ∑_{j=1}^d w_j^2
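A sketch of that recipe (illustrative): standardize the columns of X and y with the population mean and standard deviation, then run ridge on the standardized data. Rescaling any column of X, or y itself, should leave the fitted coefficients unchanged.

```python
import numpy as np

def standardized_ridge(X, y, lam):
    """Ridge regression on column-standardized data (population std, as above)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    return np.linalg.solve(Xs.T @ Xs + lam * np.eye(X.shape[1]), Xs.T @ ys)

rng = np.random.default_rng(7)
X = rng.standard_normal((100, 3)) * np.array([1e-3, 1.0, 1e3])   # wildly different units
y = X @ np.array([5.0, 1.0, 0.002]) + 0.1 * rng.standard_normal(100)

w1 = standardized_ridge(X, y, lam=1.0)
w2 = standardized_ridge(X * np.array([10.0, 0.1, 2.0]), 3.0 * y, lam=1.0)
print(np.allclose(w1, w2))   # True: the units no longer matter
```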


Scale the regularizer, not the data

instead of

  minimize ∑_{i=1}^n ( (y_i − μ)/σ − ∑_{j=1}^d w_j (X_{ij} − μ_j)/σ_j )^2 + λ ∑_{j=1}^d w_j^2,

- multiply through by σ^2
- reparametrize w′_j = (σ/σ_j) w_j

to find the equivalent problem

  minimize ∑_{i=1}^n ( y_i − ∑_{j=1}^d w′_j X_{ij} + c(w′) )^2 + λ ∑_{j=1}^d σ_j^2 (w′_j)^2,

where c(w′) is some linear function of w′

finally, absorb c(w′) into the constant term in the model:

  minimize ‖y − Xw′‖^2 + λ ∑_{j=1}^d σ_j^2 (w′_j)^2
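The equivalent formulation keeps the raw data and reweights the penalty. A sketch (illustrative) of the closed form for minimize ‖y − Xw′‖^2 + λ ∑_j σ_j^2 (w′_j)^2; note it ignores the constant/offset term that the derivation above absorbs.

```python
import numpy as np

def weighted_ridge(X, y, lam):
    """Penalize coefficient j by lam * sigma_j^2, where sigma_j is the std of column j."""
    sigma_j = X.std(axis=0)
    # closed form: w' = (X^T X + lam * diag(sigma_j^2))^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.diag(sigma_j**2), X.T @ y)

rng = np.random.default_rng(8)
X = rng.standard_normal((100, 3)) * np.array([0.01, 1.0, 100.0])
y = X @ np.array([5.0, 1.0, 0.002]) + 0.1 * rng.standard_normal(100)

w = weighted_ridge(X, y, lam=1.0)
Xr = X * np.array([10.0, 0.1, 2.0])              # same features in different units
wr = weighted_ridge(Xr, y, lam=1.0)
print(np.allclose(X @ w, Xr @ wr))               # True: predictions ignore column units
```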


Scaling and offsets

a different solution to scaling and offsets: take the MAP view

- r(w) is the negative log prior on w
- with a Gaussian prior,

    r(w) = ∑_{i=1}^d σ_i^2 w_i^2

  where 1/σ_i^2 is the variance of the prior on the ith entry of w
- if you believe the noise in the ith feature is large, penalize the ith entry more (σ_i big);
- if you believe the noise in the ith feature is small, penalize the ith entry less (σ_i small);
- if you measure X or y in different units, your prior on w should change accordingly

example: don't penalize the offset w_d of the model (let the prior variance on w_d go to ∞):

  r(w) = ∑_{i=1}^{d−1} w_i^2


Outline

Maximum likelihood estimation

Regularizers

Quadratic regularization

ℓ1 regularization

Nonnegative regularizer


ℓ1 regularization

ℓ1 regularizer

r(w) = λ ∑_{i=1}^d |w_i|

lasso problem:

minimize ∑_{i=1}^n (y_i − w^T x_i)^2 + λ ∑_{i=1}^d |w_i|

with variable w ∈ R^d

- penalizes large w less than ridge regression

- no closed-form solution
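With no closed form, the lasso is usually solved iteratively. A minimal sketch (not the course demo) using proximal gradient descent (ISTA): take a gradient step on the least squares term, then soft-threshold; the step size rule, iteration count, and data are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink each entry toward 0 by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, iters=5000):
    """Minimize ||y - Xw||^2 + lam * ||w||_1 by proximal gradient descent."""
    step = 1.0 / (2 * np.linalg.norm(X, 2) ** 2)     # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = 2 * X.T @ (X @ w - y)                 # gradient of the least squares term
        w = soft_threshold(w - step * grad, step * lam)
    return w

rng = np.random.default_rng(9)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.0, 0.5]                        # sparse ground truth
y = X @ w_true + 0.1 * rng.standard_normal(100)
print(np.round(lasso_ista(X, y, lam=10.0), 2))       # most entries come out exactly 0
```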


Recall ℓp norms

the ℓp norm ‖w‖_p for p ∈ (0, ∞) is defined as

  ‖w‖_p = ( ∑_{i=1}^d |w_i|^p )^{1/p}

examples:

- ℓ1 norm: ‖w‖_1 = ∑_{i=1}^d |w_i|

- ℓ2 norm: ‖w‖_2 = √( ∑_{i=1}^d w_i^2 )

for p = 0 or p = ∞, the ℓp norm is defined by taking limits:

- ℓ∞ norm: ‖w‖_∞ = lim_{p→∞} ( ∑_{i=1}^d |w_i|^p )^{1/p} = max_i |w_i|

- ℓ0 norm: ‖w‖_0 = lim_{p→0} ∑_{i=1}^d |w_i|^p = card(w), the number of nonzeros in w

note: ℓ0 is not actually a norm (not absolutely homogeneous, since ‖αw‖_0 = ‖w‖_0 for α ≠ 0)
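These are all one-liners to compute; a small illustrative sketch:

```python
import numpy as np

w = np.array([3.0, 0.0, -4.0, 0.0, 1.0])
print(np.sum(np.abs(w)))       # l1 norm: 8.0
print(np.sqrt(np.sum(w**2)))   # l2 norm: sqrt(26)
print(np.max(np.abs(w)))       # l-infinity norm: 4.0
print(np.count_nonzero(w))     # "l0 norm": number of nonzeros, 3
```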


ℓ1 regularization

why use ℓ1?

- best convex lower bound for ℓ0 on the ℓ∞ unit ball

- tends to produce sparse solutions

example:

- suppose X_{:1} = y, X_{:2} = αy for some 0 < α < 1

- fit lasso and ridge regression models as λ → 0:

    w^ridge = lim_{λ→0} argmin_w ‖y − Xw‖^2 + λ‖w‖_2^2
    w^lasso = lim_{λ→0} argmin_w ‖y − Xw‖^2 + λ‖w‖_1

- as λ → 0, the solution solves least squares ⟹ w_1 + αw_2 = 1

- ridge regression minimizes w_1^2 + w_2^2 ⟹ w_1 = 1/(1 + α^2), w_2 = α/(1 + α^2)

- lasso minimizes |w_1| + |w_2| ⟹ w_1 = 1, w_2 = 0
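The ridge half of this example can be checked numerically (a sketch, with made-up data): the λ → 0 limit of ridge regression is the minimum-ℓ2-norm least squares solution, which the pseudoinverse computes directly.

```python
import numpy as np

alpha = 0.5
y = np.arange(1.0, 11.0)                    # any nonzero response
X = np.column_stack([y, alpha * y])         # X_{:1} = y, X_{:2} = alpha * y

w_ridge_limit = np.linalg.pinv(X) @ y       # minimum-norm least squares solution
print(w_ridge_limit)                        # [0.8, 0.4] = [1/(1+alpha^2), alpha/(1+alpha^2)]
```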


Sparsity

why would you want sparsity?

- credit card application: requires less info from applicant

- medical diagnosis: easier to explain model to doctor

- genomic study: which genes to investigate?


Outline

Maximum likelihood estimation

Regularizers

Quadratic regularization

ℓ1 regularization

Nonnegative regularizer


Convex indicator

define the convex indicator 1 : {true, false} → R ∪ {∞}

  1(z) = 0 if z is true,  ∞ if z is false

define the convex indicator of a set C:

  1_C(x) = 1(x ∈ C) = 0 if x ∈ C,  ∞ otherwise

don't confuse this with the boolean indicator 1(z) (no standard notation...)


Nonnegative regularization

nonnegative regularizer

r(w) = ∑_{i=1}^d 1(w_i ≥ 0)

nonnegative least squares problem (NNLS):

minimize ∑_{i=1}^n (y_i − w^T x_i)^2 + ∑_{i=1}^d 1(w_i ≥ 0)

with variable w ∈ R^d

- the objective value is ∞ if any w_i < 0

- so the solution is always nonnegative

- often, the solution is also sparse
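In practice one hands this to an off-the-shelf solver; a sketch using scipy's nonnegative least squares routine (assuming scipy is available; the data are illustrative).

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(10)
X = np.abs(rng.standard_normal((50, 8)))     # e.g., nonnegative usage-style features
w_true = np.array([2.0, 0.0, 0.0, 1.5, 0.0, 0.0, 0.3, 0.0])
y = X @ w_true + 0.05 * rng.standard_normal(50)

w_nnls, _ = nnls(X, y)                       # minimize ||y - Xw|| subject to w >= 0
print(np.round(w_nnls, 2))                   # nonnegative, and typically sparse
```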


Nonnegative coefficients

why would you want nonnegativity?

- electricity usage: how often is each device turned on?
  - n = times, d = electric devices
  - y = usage, X = which devices use power at which times
  - w = devices used by household

- hyperspectral imaging: which species are present?
  - n = frequencies, d = possible materials
  - y = observed spectrum, X = known spectrum of each material
  - w = material composition of location

- logistics: which routes to run?
  - n = locations, d = possible routes
  - y = demand, X = which routes visit which locations
  - w = size of truck to send on each route


Demo: Regularized Regression

https://github.com/ORIE4741/demos/RegularizedRegression.ipynb


Smooth coefficients

smooth regularizer

r(w) = ∑_{i=1}^{d−1} (w_{i+1} − w_i)^2 = ‖Dw‖^2

where D ∈ R^{(d−1)×d} is the first-order difference operator

  D_{ij} = 1 if j = i,  −1 if j = i + 1,  0 otherwise

smoothed least squares problem:

minimize ∑_{i=1}^n (y_i − w^T x_i)^2 + λ‖Dw‖^2
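Because the penalty is quadratic, this problem also has a closed form, w = (X^T X + λ D^T D)^{−1} X^T y. A sketch (illustrative):

```python
import numpy as np

def smooth_ridge(X, y, lam):
    """Minimize ||y - Xw||^2 + lam * ||Dw||^2 with D the first-order difference operator."""
    d = X.shape[1]
    D = np.diff(np.eye(d), axis=0)           # (Dw)_i = w_{i+1} - w_i; the sign is irrelevant for ||Dw||^2
    return np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)

rng = np.random.default_rng(11)
X = rng.standard_normal((200, 10))
w_true = np.sin(np.linspace(0, np.pi, 10))   # smoothly varying coefficients
y = X @ w_true + 0.1 * rng.standard_normal(200)
print(np.round(smooth_ridge(X, y, lam=50.0), 2))   # adjacent estimates are pulled together
```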


Why smooth?

- allow the model to change over space or time
  - e.g., different years in tax data

- interpolates between one model and separate models for different domains
  - e.g., counties in tax data

- can couple any pairs of model coefficients, not just (i, i + 1)
