Cross-validation and the estimation of σ² and R²
High-Dimensional Data Analysis (BIOS 7600)

Patrick Breheny

February 22



Introduction

Today we will discuss the selection of λ and the estimation of σ² (which, in turn, allows us to quantify the signal-to-noise ratio present in the data)

For lasso models, both of these tend to revolve around cross-validation, although we will discuss a few different approaches



Degrees of freedom

In our discussion of ridge regression, we used information criteria to select λ

All of the criteria we discussed required an estimate of the degrees of freedom of the model

For linear fitting methods, we saw that df = tr(S)

The lasso, however, is not a linear fitting method; there is no exact, closed-form expression for Cov(ŷ, y)
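(Recall the general definition of degrees of freedom from earlier in the course, which reduces to tr(S) for linear fits ŷ = Sy; this is the quantity with no closed form for the lasso:)

$$\mathrm{df} = \frac{1}{\sigma^2} \sum_{i=1}^{n} \mathrm{Cov}(\hat{y}_i, y_i)$$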



Degrees of freedom for the lasso

A natural proposal would be to use df(λ) = ‖β̂(λ)‖₀, the number of nonzero coefficients

From one perspective, this might seem to underestimate the true degrees of freedom, as the variables were not prespecified

For example, in our forward selection example from Jan. 20, we selected 5 features but the true df was ≈ 19

On the other hand, shrinkage reduces the degrees of freedom in an estimator, as we have seen in ridge regression; from this perspective, ‖β̂(λ)‖₀ might seem to overestimate the true degrees of freedom



Degrees of freedom for the lasso (cont’d)

Surprisingly, it turns out that these two factors exactly cancel, and df(λ) = ‖β̂(λ)‖₀ can be shown to be an unbiased estimate of the lasso degrees of freedom

Given this estimate, we can then use information criteria such as BIC for the purposes of selecting λ
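Concretely, with ℓ denoting the log-likelihood, the criterion takes its usual form, evaluated at each point along the lasso path:

$$\mathrm{BIC}(\lambda) = -2\,\ell\{\hat\beta(\lambda)\} + \log(n)\,\mathrm{df}(\lambda)$$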



ncvreg

To illustrate, we will use the ncvreg package to fit the lasso path

The primary purpose of ncvreg is to provide penalties other than the lasso, which we will discuss in our next topic

However, it provides a logLik method, unlike glmnet, so it can be used with R's AIC and BIC functions:

library(ncvreg)

# Fit the full lasso path; AIC and BIC are computed at each value of lambda
fit <- ncvreg(X, y, penalty="lasso")
AIC(fit)
BIC(fit)
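From here, selecting λ is just a matter of minimizing the criterion. A minimal sketch, assuming (as in ncvreg) that fit$lambda stores the grid of λ values and that BIC(fit) returns one value per λ:

lam.bic <- fit$lambda[which.min(BIC(fit))]  # lambda minimizing BIC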



AIC, BIC for pollution data

[Figure: AIC and BIC plotted against λ (log scale, from 40 down to 0.04) for the pollution data; BIC reaches its minimum at a larger λ than AIC.]


Remarks

As we would expect, BIC applies a stronger penalty for overfitting and chooses a smaller, more parsimonious model than does AIC

The main advantage of AIC and BIC is that they are computationally convenient: they can be calculated from the lasso model fit at very little additional cost

The primary disadvantage is that both AIC and BIC rely on a number of asymptotic approximations that can be quite inaccurate for high-dimensional data



Cross-validation: Introduction

As we have discussed, a reasonable approach to selecting λ in an objective manner is to choose the value of λ that yields the greatest predictive power

An alternative to the approximations of AIC and BIC is to assess predictive power more directly and empirically through a technique called cross-validation

Cross-validation is more reliable in general, although it comes at an added computational cost



Sample splitting

As we have discussed, using the observed agreement between fitted values and the data is too optimistic; we require independent data to test predictive accuracy

One solution, known as sample splitting, is to split the data set into two fractions, a training set and a test set, using one portion to estimate β (i.e., “train” the model) and the other to evaluate how well Xβ̂ predicts the observations in the second portion (i.e., “test” the model); a sketch is given below

The problem with this solution is that we rarely have so much data that we can freely part with half of it solely for the purpose of choosing λ
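As a concrete illustration, here is a minimal sample-splitting sketch using glmnet (assuming a numeric matrix X and response vector y; the half-and-half split and the seed are arbitrary choices):

library(glmnet)

set.seed(1)
n <- nrow(X)
train <- sample(n, floor(n/2))          # random half used for training
fit <- glmnet(X[train, ], y[train])     # lasso path fit on the training set
pred <- predict(fit, X[-train, ])       # test-set predictions, one column per lambda
mspe <- colMeans((y[-train] - pred)^2)  # test-set mean squared prediction error
lam <- fit$lambda[which.min(mspe)]      # lambda with the best predictive accuracy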



Cross-validation

To finesse this problem, cross-validation splits the data into V folds, fits the model on V − 1 of the folds, and evaluates prediction error on the fold that was left out

[Figure: schematic of the data divided into 5 folds, each fold held out in turn as the test set.]

Common choices for V are 5, 10, or n (also known as leave-one-out cross-validation)



Cross-validation: Details

(1) Specify a grid of regularization parameter values Λ = {λ1, . . . , λK}

(2) Divide the data into V roughly equal parts D1, . . . , DV

(3) For each v = 1, . . . , V, compute the lasso solution path using the observations in {Du : u ≠ v}

(4) For each λ ∈ Λ, compute the mean squared prediction error

$$\mathrm{MSPE}_v(\lambda) = \frac{1}{n_v} \sum_{i \in D_v} \{y_i - x_i^\top \hat\beta_{-v}(\lambda)\}^2,$$

where n_v is the number of observations in D_v, as well as

$$\mathrm{CV}(\lambda) = \frac{1}{V} \sum_{v=1}^{V} \mathrm{MSPE}_v(\lambda)$$
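To make the procedure concrete, here is a from-scratch sketch of V-fold cross-validation for the lasso in R (a minimal illustration, assuming X and y as before; glmnet is used only to compute each solution path over a common λ grid):

library(glmnet)

set.seed(1)
V <- 10
n <- nrow(X)
fold <- sample(rep(1:V, length.out = n))   # random fold assignments
fit0 <- glmnet(X, y)                       # full-data fit fixes the lambda grid
mspe <- matrix(NA, V, length(fit0$lambda))
for (v in 1:V) {
  fit <- glmnet(X[fold != v, ], y[fold != v], lambda = fit0$lambda)
  pred <- predict(fit, X[fold == v, , drop = FALSE])
  mspe[v, ] <- colMeans((y[fold == v] - pred)^2)  # MSPE_v(lambda)
}
cv <- colMeans(mspe)                       # CV(lambda)
lam.hat <- fit0$lambda[which.min(cv)]      # the selected lambda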



Cross-validation: Details (cont’d)

Then λ̂ is taken to be the value that minimizes CV(λ) and β̂ ≡ β̂(λ̂) the estimator of the regression coefficients

Note that

MSPE_v(λ) is the mean squared prediction error for the model based on the training data {D_u : u ≠ v} in predicting the response variables in D_v

CV(λ) is an estimate of the expected mean squared prediction error, EPE(λ), defined in the Feb. 10 lecture



Variability of CV estimates

Regardless of the number of cross-validation folds, each observation in the data appears exactly once in a test set

Letting $\hat\mu_i(\lambda) = x_i^\top \hat\beta_{-v(i)}(\lambda)$, where $v(i)$ denotes the fold containing observation $i$, the mean of the squared errors $\{y_i - \hat\mu_i(\lambda)\}^2$, $i = 1, \ldots, n$, is equal to CV(λ)

The variability of these squared errors, however, is useful for estimating the accuracy with which E{MSPE(λ)} is estimated



CV standard errors

Letting SD_CV(λ) denote the sample standard deviation of the n squared errors $\{y_i - \hat\mu_i(\lambda)\}^2$, the standard error of CV(λ) is

$$\mathrm{SE}_{\mathrm{CV}}(\lambda) = \frac{\mathrm{SD}_{\mathrm{CV}}(\lambda)}{\sqrt{n}},$$

which, in turn, can be used to construct confidence intervals

The cross-validation procedure described in this section, along with the estimates of CV(λ) and its standard error, is implemented in glmnet and can be carried out using

library(glmnet)

# 10-fold CV over the lasso path; the plot shows CV error with +/- 1 SE intervals
cvfit <- cv.glmnet(X, y)
plot(cvfit)

By default, cv.glmnet uses V = 10 folds, but this can be changed through the nfolds option.
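The selected values can then be read directly off the fitted object; cv.glmnet stores both the CV-minimizing λ and the largest λ whose CV error is within one standard error of the minimum:

cvfit$lambda.min               # lambda minimizing CV(lambda)
cvfit$lambda.1se               # largest lambda within 1 SE of the minimum
coef(cvfit, s = "lambda.min")  # coefficients at the selected lambda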



CV plot for lasso: Pollution data

[Figure: CV(λ) versus log(λ) for the pollution data, with ±1 SE intervals at each point; the top axis gives the number of nonzero coefficients (0 to 15) along the path.]


Remarks

The value λ̂ = 1.84 minimizes the cross-validation error, at which point 9 variables are selected

However, as the confidence intervals show, there is substantial uncertainty about this minimum value

A fairly wide range of λ values (λ ∈ [0.12, 9.83]) yield CV(λ) estimates falling within ±1 SE_CV of the minimum

This is almost always the case in model selection: a large number of models could reasonably be considered the “best” model, subject to random variability



Repeated cross-validation

Note that CV(λ), and hence β̂, will change somewhat depending on the random fold assignments

To avoid this, some people carry out repeated cross-validation, and select λ according to the average CV error (see the sketch below)

Another option is to carry out n-fold cross-validation, in which there is only one way to select the fold assignments

It is important to realize, however, that neither of these approaches does anything to eliminate actual uncertainty with respect to the selection of λ
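A minimal sketch of repeated cross-validation with cv.glmnet (assuming X and y as before; the number of repeats, here 10, is an arbitrary choice):

library(glmnet)

set.seed(1)
fit0 <- glmnet(X, y)  # full-data fit fixes a common lambda grid
cvm <- replicate(10, cv.glmnet(X, y, lambda = fit0$lambda)$cvm)
cv.avg <- rowMeans(cvm)                # CV error averaged over repeats
lam <- fit0$lambda[which.min(cv.avg)]  # lambda minimizing the average CV error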



σ²: Plug-in estimator

We have discussed estimation of β; let us now turn our attention to estimation of the residual variance, σ²

In ordinary least squares regression,

$$\hat\sigma^2_{\mathrm{OLS}} = \frac{\mathrm{RSS}}{n - \mathrm{df}}$$

For the lasso, an obvious plug-in alternative is

$$\hat\sigma^2_{P} = \frac{\mathrm{RSS}(\lambda)}{n - \mathrm{df}(\lambda)}$$
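In code, a sketch of the plug-in estimator along the ncvreg lasso path (assuming X and y as before; df(λ) is taken to be the number of nonzero coefficients, and the intercept row of fit$beta is dropped when counting them):

library(ncvreg)

fit <- ncvreg(X, y, penalty="lasso")
rss <- colSums((y - predict(fit, X))^2)  # RSS(lambda) at each point on the path
df  <- colSums(fit$beta[-1, ] != 0)      # nonzero coefficients, excluding intercept
sigma2.plugin <- rss / (nrow(X) - df)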



σ²: CV estimator

The plug-in estimator is based on the observed fit of the model and tends to underestimate σ², particularly for low values of λ

An alternative approach is to use an estimate of the out-of-sample prediction error in place of the observed RSS(λ)

This is the exact quantity estimated by cross-validation:

$$\hat\sigma^2_{\mathrm{CV}} = \mathrm{CV}(\hat\lambda)$$
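With the cv.glmnet fit from earlier, this is simply the minimized CV error (a one-line sketch):

sigma2.cv <- min(cvfit$cvm)  # CV(lambda-hat): the minimized cross-validation error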



Refitted CV

Other, more computationally intensive methods have also been proposed based on sample splitting

The basic idea is to randomly partition the dataset into two sets D1 and D2, use the lasso on D1 for the purposes of variable selection, then fit an OLS model to D2 (using the predictors selected on D1) for the purposes of estimating σ²; a sketch is given below

This can be repeated several times, as well as applied in the reverse direction (switching the roles of D1 and D2), to obtain a more stable estimate
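A minimal sketch of one split of this refitted cross-validation idea (assuming X and y; using cv.glmnet for the selection step on D1 is one reasonable choice, not the only one):

library(glmnet)

set.seed(1)
n <- nrow(X)
d1 <- sample(n, floor(n/2))  # random half: D1; the rest is D2
b <- as.vector(coef(cv.glmnet(X[d1, ], y[d1]), s = "lambda.min"))[-1]
sel <- which(b != 0)                            # variables selected on D1
fit2 <- lm(y[-d1] ~ X[-d1, sel, drop = FALSE])  # OLS refit on D2
sigma2.rcv <- sum(resid(fit2)^2) / fit2$df.residual
# In practice, swap the roles of D1 and D2 and average the two estimates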



Comparison of estimators

[Figure: estimates of σ along the lasso path for the plug-in, CV, and refitted CV (RCV) estimators, in two settings. n = 100, p = 1,000, σ = 1. Left: β = 0; right: βj = 1 for j = 1, . . . , 5 and βj = 0 for j = 6, . . . , 1000.]



Coefficient of determination

One reason that estimating σ² is of considerable practical interest is that it enables us to estimate the proportion of variance in the outcome that can be explained by the model

This quantity, familiar from classical regression, is known as the coefficient of determination and denoted R²

The coefficient of determination is given by

$$R^2 = 1 - \frac{\mathrm{Var}(Y \mid X)}{\mathrm{Var}(Y)};$$

we have just discussed the estimation of σ² = Var(Y | X); estimation of Var(Y) is straightforward



R²: Calculation in R

Once cross-validation has been carried out, calculation of R² is straightforward

With glmnet:

library(glmnet)

cvfit <- cv.glmnet(X, y)
rsq <- 1 - cvfit$cvm / var(y)  # R^2(lambda): the CV error estimates Var(Y|X)
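The largest value along the path is then a natural summary (a one-liner, assuming rsq from above):

max(rsq)  # maximum proportion of variance explained along the lasso path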

Also, the coefficient of determination is available as a plot type in ncvreg:

library(ncvreg)

cvfit <- cv.ncvreg(X, y, penalty="lasso")
plot(cvfit, type="rsq")  # plot R^2 along the lasso path



R² plot: Pollution data

[Figure: cross-validated R² versus λ (from 40 down to 0.04) for the pollution data; the top axis gives the number of variables selected (0 to 14) along the path.]

It is worth noting that only a small amount of the explained variability comes from the pollution variables: max R² = 0.58 with the pollution variables; max R² = 0.56 without the pollution variables
