Introduction
Regression splines (parametric)
Smoothing splines (nonparametric)
Splines and penalized regression
Patrick Breheny
November 23
Patrick Breheny STA 621: Nonparametric Statistics
Introduction
We are discussing ways to estimate the regression function f, where
E(y|x) = f(x)
One approach is of course to assume that f has a certain shape, such as linear or quadratic, that can be estimated parametrically
We have also discussed locally weighted linear/polynomial models as a way of allowing f to be more flexible
An alternative, more direct approach is penalization
Controlling smoothness with penalization
Here, we directly solve for the function f that minimizes the following objective function, a penalized version of the least squares objective:
$$\sum_{i=1}^{n} \{y_i - f(x_i)\}^2 + \lambda \int f''(u)^2 \, du$$
The first term captures the fit to the data, while the second penalizes curvature – note that for a line, f''(u) = 0 for all u
Here, λ is the smoothing parameter, and it controls the tradeoff between the two terms:
λ = 0 imposes no restrictions, and f will therefore interpolate the data
λ = ∞ renders curvature impossible, thereby returning us to ordinary linear regression
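This tradeoff can be sketched in R with smooth.spline (covered later in the lecture); the simulated data and the two λ values are purely illustrative:

```r
set.seed(1)
x <- seq(0, 1, length.out = 100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.3)

# Tiny lambda: almost no penalty, so the fit chases the data
rough <- smooth.spline(x, y, lambda = 1e-8)
# Huge lambda: curvature is heavily penalized, so the fit is nearly a straight line
flat <- smooth.spline(x, y, lambda = 1e4)

sum((y - rough$y)^2) < sum((y - flat$y)^2)  # smaller lambda always fits the data at least as well
```

The first term of the objective (the RSS) can only decrease as λ shrinks, which the comparison above confirms.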
Splines
It may sound impossible to solve for such an f over all possible functions, but the solution turns out to be surprisingly simple
This solution, it turns out, depends on a class of functions called splines
We will begin by introducing splines themselves, then move on to discuss how they represent a solution to our penalized regression problem
Basis functions
One approach for extending the linear model is to represent x using a collection of basis functions:
$$f(x) = \sum_{m=1}^{M} \beta_m h_m(x)$$
Because the basis functions hm are prespecified and the model is linear in these new variables, ordinary least squares approaches for model fitting and inference can be employed
This idea is probably not new to you, as transformations and expansions using polynomial bases are common
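For instance, with a cubic polynomial basis (h1(x) = 1, h2(x) = x, h3(x) = x², h4(x) = x³), fitting is just ordinary least squares; a quick sketch on simulated data:

```r
set.seed(1)
x <- runif(100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.2)

# Fit via an explicit basis expansion...
fit_manual <- lm(y ~ x + I(x^2) + I(x^3))
# ...or equivalently via the raw polynomial basis
fit_poly <- lm(y ~ poly(x, 3, raw = TRUE))

all.equal(fitted(fit_manual), fitted(fit_poly))  # identical fitted values
```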
Global versus local bases
However, polynomial bases with global representations have undesirable side effects: each observation affects the entire curve, even for x values far from the observation
In previous lectures, we got around this problem with local weighting
In this lecture, we will explore instead an approach based on piecewise basis functions
As we will see, splines are piecewise polynomials joined together to make a single smooth curve
The piecewise constant model
To understand splines, we will gradually build up a piecewise model, starting at the simplest one: the piecewise constant model
First, we partition the range of x into K + 1 intervals by choosing K points $\{\xi_k\}_{k=1}^{K}$ called knots
For our example involving bone mineral density, we will choose the tertiles of the observed ages
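A piecewise constant fit with knots at the tertiles can be sketched as follows (the variable names mirror the BMD example, but the data here are simulated for illustration):

```r
set.seed(1)
age <- runif(200, 10, 25)
spnbmd <- 0.1 * exp(-(age - 13)^2 / 8) + rnorm(200, sd = 0.03)

knots <- quantile(age, probs = c(1/3, 2/3))      # K = 2 knots at the tertiles
region <- cut(age, breaks = c(-Inf, knots, Inf)) # K + 1 = 3 intervals

# One constant per region: the fitted value in each region is that region's mean
fit <- lm(spnbmd ~ region)
tapply(spnbmd, region, mean)
```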
The piecewise constant model (cont’d)
[Figure: piecewise constant fits of relative change in spinal BMD (spnbmd) vs. age (10–25), in separate panels for females and males]
The piecewise linear model
[Figure: piecewise linear fits of spnbmd vs. age, panels for females and males]
The continuous piecewise linear model
[Figure: continuous piecewise linear fits of spnbmd vs. age, panels for females and males]
Basis functions for piecewise continuous models
These constraints can be incorporated directly into the basis functions:
$$h_1(x) = 1, \quad h_2(x) = x, \quad h_3(x) = (x - \xi_1)_+, \quad h_4(x) = (x - \xi_2)_+,$$
where $(\cdot)_+$ denotes the positive part of its argument:
$$r_+ = \begin{cases} r & \text{if } r \ge 0 \\ 0 & \text{if } r < 0 \end{cases}$$
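This basis translates directly into code; the knots ξ₁, ξ₂ below are arbitrary illustrative choices, and the continuity of the fit at a knot follows because every basis function is itself continuous:

```r
pos <- function(r) pmax(r, 0)  # the (.)+ operation

xi1 <- 1/3; xi2 <- 2/3  # arbitrary knots for illustration
basis <- function(x) cbind(h1 = 1, h2 = x, h3 = pos(x - xi1), h4 = pos(x - xi2))

set.seed(1)
x <- runif(100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.2)

beta <- coef(lm(y ~ basis(x) - 1))  # 4 coefficients: continuous piecewise linear fit
f <- function(x) drop(basis(x) %*% beta)

# Continuity at the knot: values just left and right of xi1 nearly agree
abs(f(xi1 - 1e-8) - f(xi1 + 1e-8))
```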
Basis functions for piecewise continuous models
It can be easily checked that these basis functions lead to a composite function f(x) that:
Is everywhere continuous
Is linear everywhere except at the knots
Has a different slope in each region
Also, note that the degrees of freedom add up: 3 regions × 2 degrees of freedom in each region − 2 constraints = 4 basis functions
Splines
The preceding is an example of a spline: a piecewise polynomial of degree m − 1 that is continuous up to its first m − 2 derivatives
By requiring continuous derivatives, we ensure that the resulting function is as smooth as possible
We can obtain more flexible curves by increasing the degree of the spline and/or by adding knots
However, there is a tradeoff:
Few knots/low degree: the resulting class of functions may be too restrictive (bias)
Many knots/high degree: we run the risk of overfitting (variance)
The truncated power basis
The set of basis functions introduced earlier is an example of what is called the truncated power basis
Its logic is easily extended to splines of order m:
$$h_j(x) = x^{j-1}, \quad j = 1, \ldots, m$$
$$h_{m+k}(x) = (x - \xi_k)_+^{m-1}, \quad k = 1, \ldots, K$$
Note that a spline of order m with K knots has m + K degrees of freedom
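The general basis is easy to write down in code; this helper function (the name is my own) returns the m + K basis columns:

```r
# Truncated power basis of order m with knots xi: m + K columns
tp_basis <- function(x, xi, m = 4) {
  K <- length(xi)
  H <- matrix(0, length(x), m + K)
  for (j in 1:m) H[, j] <- x^(j - 1)                        # h_j(x) = x^(j-1)
  for (k in 1:K) H[, m + k] <- pmax(x - xi[k], 0)^(m - 1)   # (x - xi_k)_+^(m-1)
  H
}

x <- seq(0, 1, length.out = 50)
dim(tp_basis(x, xi = c(1/3, 2/3), m = 4))  # 50 rows, m + K = 4 + 2 = 6 columns
```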
Quadratic splines
[Figure: quadratic spline fits of spnbmd vs. age, panels for females and males]
Cubic splines
[Figure: cubic spline fits of spnbmd vs. age, panels for females and males]
Additional notes
These types of fixed-knot models are referred to as regression splines
Recall that cubic splines contain 4 + K degrees of freedom: (K + 1) regions × 4 parameters per region − K knots × 3 constraints per knot
It is claimed that cubic splines are the lowest order spline for which the discontinuity at the knots cannot be noticed by the human eye
There is rarely any need to go beyond cubic splines, which are by far the most common type of splines in practice
Implementing regression splines
The truncated power basis has two principal virtues:
Conceptual simplicity
The linear model is nested inside it, leading to simple tests of the null hypothesis of linearity
Unfortunately, it has a number of computational/numerical flaws – it's inefficient and can lead to overflow and nearly singular matrix problems
The more complicated but numerically much more stable and efficient B-spline basis is often employed instead
B-splines in R
Fortunately, one can use B-splines without knowing the details behind their complicated construction
In the splines package (which by default is installed but not loaded), the bs() function will implement a B-spline basis for you
library(splines)
X <- bs(x, knots = quantile(x, p = c(1/3, 2/3)))  # knots at the tertiles
X <- bs(x, df = 5)                                # or specify df; knots chosen for you
X <- bs(x, degree = 2, df = 10)                   # quadratic instead of cubic
Xp <- predict(X, newx = xx)                       # evaluate the basis at new points xx
By default, bs uses degree=3, knots at evenly spaced quantiles, and does not return a column for the intercept
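In practice the basis is usually passed straight to lm; a sketch on simulated data, using a cubic B-spline with 5 degrees of freedom:

```r
library(splines)
set.seed(1)
x <- runif(100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.2)

# Cubic B-spline with df = 5 (so 5 - 3 = 2 interior knots), fit by OLS
fit <- lm(y ~ bs(x, df = 5))
length(coef(fit))  # 6 coefficients: intercept + 5 basis columns

# Predicting at new x values (kept inside the observed range) is safe because
# the knots are stored with the basis
xx <- seq(0.1, 0.9, length.out = 5)
yy <- predict(fit, newdata = data.frame(x = xx))
```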
Natural cubic splines
Polynomial fits tend to be erratic at the boundaries of the data
This is even worse for cubic splines
Natural cubic splines ameliorate this problem by adding the additional (4) constraints that the function is linear beyond the boundaries of the data
Natural cubic splines (cont’d)
[Figure: natural cubic spline fits of spnbmd vs. age, panels for females and males]
Natural cubic splines, 6 df
[Figure: natural cubic spline fits with 6 df, spnbmd vs. age, panels for females and males]
Natural cubic splines, 6 df (cont’d)
[Figure: natural cubic spline fits with 6 df, relative change in spinal BMD vs. age (10–25), female and male curves overlaid on a single panel]
Splines vs. Loess (6 df each)
[Figure: natural cubic spline vs. loess fits (6 df each) of spnbmd vs. age, panels for females and males]
Natural splines in R
R also provides a function to compute a basis for the natural cubic splines, ns, which works almost exactly like bs, except that there is no option to change the degree
Note that a natural spline has m + K − 4 degrees of freedom; thus, a natural cubic spline with K knots has K degrees of freedom
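A quick check of this accounting: two interior knots plus the two boundary knots give K = 4 knots in total, and the returned basis has 3 columns, i.e. 4 degrees of freedom once the intercept is counted:

```r
library(splines)
x <- seq(0, 1, length.out = 100)

# 2 interior knots + 2 boundary knots: K = 4 knots in total
N <- ns(x, knots = c(1/3, 2/3))
ncol(N)  # 3 basis columns; with the intercept, K = 4 degrees of freedom
```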
Mean and variance estimation
Because the basis functions are fixed, all standard approaches to inference for regression are valid
In particular, letting $\mathbf{L} = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'$ denote the projection matrix,
$$E(\hat{f}) = \mathbf{L}f$$
$$V(\hat{f}) = \sigma^2 \mathbf{L}\mathbf{L}'$$
$$\mathrm{CV} = \frac{1}{n} \sum_i \left( \frac{y_i - \hat{y}_i}{1 - l_{ii}} \right)^2$$
Furthermore, extensions to logistic regression, Cox proportional hazards regression, etc., are straightforward
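The leave-one-out CV shortcut above can be verified against brute-force refitting; a sketch with a fixed natural spline basis on simulated data (the identity holds exactly for any fixed design):

```r
library(splines)
set.seed(1)
x <- runif(60)
y <- sin(2 * pi * x) + rnorm(60, sd = 0.2)

X <- cbind(1, ns(x, df = 4))          # fixed basis: design matrix
L <- X %*% solve(crossprod(X), t(X))  # projection matrix X(X'X)^{-1}X'
yhat <- drop(L %*% y)

# Shortcut: leave-one-out CV from a single fit via the diagonal of L
cv_short <- mean(((y - yhat) / (1 - diag(L)))^2)

# Brute force: refit n times, each time leaving one observation out
cv_loop <- mean(sapply(seq_along(y), function(i) {
  b <- solve(crossprod(X[-i, ]), crossprod(X[-i, ], y[-i]))
  (y[i] - drop(X[i, ] %*% b))^2
}))

all.equal(cv_short, cv_loop)  # the shortcut is exact
```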
Mean and variance estimation (cont’d)
[Figure: natural cubic spline fits of spnbmd vs. age with pointwise standard error bands, panels for females and males]
Mean and variance estimation (cont’d)
Coronary heart disease study, K = 4
[Figure: estimated f(x) (left panel) and π(x) (right panel) as functions of systolic blood pressure (100–220)]
Problems with knots
Fixed-df splines are useful tools, but are not truly nonparametric
Choices regarding the number of knots and where they are located are fundamentally parametric choices and have a large effect on the fit
Furthermore, assuming that you place knots at quantiles, models will not be nested inside each other, which complicates hypothesis testing
Controlling smoothness with penalization
We can avoid the knot selection problem altogether via the nonparametric formulation introduced at the beginning of lecture: choose the function f that minimizes
$$\sum_{i=1}^{n} \{y_i - f(x_i)\}^2 + \lambda \int f''(u)^2 \, du$$
We will now see that the solution to this problem lies in the family of natural cubic splines
Terminology
First, some terminology:
The parametric splines with fixed degrees of freedom that we have talked about so far are called regression splines
A spline that passes through the points $\{x_i, y_i\}$ is called an interpolating spline, and is said to interpolate the points $\{x_i, y_i\}$
A spline that describes and smooths noisy data by passing close to $\{x_i, y_i\}$ without the requirement of passing through them is called a smoothing spline
Natural cubic splines are the smoothest interpolators
Theorem: Out of all twice-differentiable functions passing through the points $\{x_i, y_i\}$, the one that minimizes
$$\int f''(u)^2 \, du$$
is a natural cubic spline with knots at every unique value of $x_i$
Natural cubic splines solve the nonparametric formulation
Theorem: Out of all twice-differentiable functions, the one that minimizes
$$\sum_{i=1}^{n} \{y_i - f(x_i)\}^2 + \lambda \int f''(u)^2 \, du$$
is a natural cubic spline with knots at every unique value of $x_i$
Design matrix
Let $\{N_j\}_{j=1}^{n}$ denote the collection of natural cubic spline basis functions and $\mathbf{N}$ denote the n × n design matrix consisting of the basis functions evaluated at the observed values:
$$N_{ij} = N_j(x_i)$$
Then $f(x) = \sum_{j=1}^{n} N_j(x) \beta_j$, and the vector of fitted values is $\mathbf{f} = \mathbf{N}\beta$
Solution
The penalized objective function is therefore
$$(\mathbf{y} - \mathbf{N}\beta)'(\mathbf{y} - \mathbf{N}\beta) + \lambda \beta' \Omega \beta,$$
where $\Omega_{jk} = \int N_j''(t) N_k''(t) \, dt$
The solution is
$$\hat{\beta} = (\mathbf{N}'\mathbf{N} + \lambda \Omega)^{-1} \mathbf{N}'\mathbf{y}$$
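A sketch of this penalized fit in R; to keep it self-contained I approximate Ω with a squared second-difference penalty on the coefficients (a P-spline-style stand-in, not the exact $\int N_j'' N_k''$ matrix), which has the same structure:

```r
library(splines)
set.seed(1)
x <- sort(runif(80))
y <- sin(2 * pi * x) + rnorm(80, sd = 0.2)

N <- cbind(1, bs(x, df = 10))              # basis matrix (stand-in for the NCS basis)
D <- diff(diag(ncol(N)), differences = 2)  # second-difference operator
Omega <- crossprod(D)                      # penalty matrix (approximates curvature)

pen_fit <- function(lambda) {
  beta <- solve(crossprod(N) + lambda * Omega, crossprod(N, y))
  drop(N %*% beta)
}

# With lambda = 0 the penalty vanishes and we recover ordinary least squares
all.equal(pen_fit(0), unname(fitted(lm(y ~ N - 1))))
```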
Smoothing splines are linear smoothers
Note that the fitted values can be represented as
$$\hat{\mathbf{y}} = \mathbf{N}(\mathbf{N}'\mathbf{N} + \lambda \Omega)^{-1} \mathbf{N}'\mathbf{y} = \mathbf{L}_\lambda \mathbf{y}$$
Thus, smoothing splines are linear smoothers, and we can use all the results that we derived back when discussing local regression
Smoothing splines are linear smoothers (cont’d)
In particular:
$$\mathrm{CV} = \frac{1}{n} \sum_i \left( \frac{y_i - \hat{y}_i}{1 - l_{ii}} \right)^2$$
$$E\{\hat{f}(x_0)\} = \sum_i l_i(x_0) f(x_i) \qquad V\{\hat{f}(x_0)\} = \sigma^2 \sum_i l_i(x_0)^2$$
$$\hat{\sigma}^2 = \frac{\sum_i (y_i - \hat{y}_i)^2}{n - 2\nu + \tilde{\nu}} \qquad \nu = \mathrm{tr}(\mathbf{L}) \qquad \tilde{\nu} = \mathrm{tr}(\mathbf{L}'\mathbf{L})$$
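Degrees of freedom and GCV follow directly from $\mathbf{L}_\lambda$; a sketch continuing the same second-difference-penalty stand-in (the basis and penalty are illustrative, not the exact smoothing spline construction):

```r
library(splines)
set.seed(1)
x <- sort(runif(80))
y <- sin(2 * pi * x) + rnorm(80, sd = 0.2)
n <- length(y)

N <- cbind(1, bs(x, df = 10))
D <- diff(diag(ncol(N)), differences = 2)
Omega <- crossprod(D)

gcv <- function(lambda) {
  L <- N %*% solve(crossprod(N) + lambda * Omega, t(N))  # L_lambda
  nu <- sum(diag(L))                                     # effective df: tr(L)
  yhat <- drop(L %*% y)
  mean(((y - yhat) / (1 - nu / n))^2)
}

# Sanity check: at lambda = 0, L is a projection, so tr(L) = ncol(N)
L0 <- N %*% solve(crossprod(N), t(N))
sum(diag(L0))  # 11, up to rounding
```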
CV, GCV for BMD example
[Figure: CV and GCV criteria vs. log(λ) for the BMD data (criterion values roughly 0.002–0.006), panels for males and females]
Undersmoothing and oversmoothing of BMD data
[Figure: three smoothing spline fits each for males and females, illustrating undersmoothing and oversmoothing of spnbmd vs. age (10–25)]
Pointwise confidence bands
[Figure: smoothing spline fits with pointwise confidence bands, spnbmd vs. age, panels for females and males]
R implementation
Recall that local regression had a simple, standard function for basic one-dimensional smoothing (loess) and an extensive package for more comprehensive analyses (locfit)
Spline-based smoothing is similar
smooth.spline does not require any packages and implements simple one-dimensional smoothing:
fit <- smooth.spline(x,y)
plot(fit,type="l")
predict(fit,xx)
By default, the function will choose λ based on GCV, but this can be changed to CV, or you can specify λ
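The selection criterion can be controlled explicitly; a sketch on simulated data (the spar value below is an arbitrary illustration):

```r
set.seed(1)
x <- seq(0, 1, length.out = 100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.3)

fit_gcv  <- smooth.spline(x, y)              # default: lambda chosen by GCV
fit_cv   <- smooth.spline(x, y, cv = TRUE)   # ordinary leave-one-out CV instead
fit_spar <- smooth.spline(x, y, spar = 0.7)  # fix the smoothing level yourself

c(fit_gcv$lambda, fit_cv$lambda, fit_spar$lambda)  # the selected/implied lambdas
```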
The mgcv package
If you have a binary outcome variable or multiple covariates or want confidence intervals, however, smooth.spline is lacking
A very extensive package called mgcv provides those features, as well as much more
The basic function is called gam, which stands for generalized additive model (we'll discuss GAMs more in a later lecture)
The mgcv package (cont’d)
The syntax of gam is very similar to glm and locfit, with a function s() placed around any terms that you want a smooth function of:
fit <- gam(y~s(x))
fit <- gam(y~s(x),family="binomial")
plot(fit)
plot(fit,shade=TRUE)
predict(fit,newdata=data.frame(x=xx),se.fit=TRUE)
summary(fit)
Hypothesis testing
We are often interested in testing whether the smaller of two nested models provides an adequate fit to the data
In ordinary regression, this is accomplished via the F-statistic:
$$F = \frac{(\mathrm{RSS}_0 - \mathrm{RSS})/q}{\hat{\sigma}^2},$$
where RSS is the residual sum of squares (RSS₀ for the smaller model) and q is the difference in degrees of freedom between the two models
Or via the likelihood ratio test:
$$\Lambda = 2\{\ell(\hat{\beta}) - \ell(\hat{\beta}_0)\} \sim \chi^2_q$$
Hypothesis testing (cont’d)
These results do not hold exactly for the case of penalized least squares, but still provide a way to compute useful approximate p-values, using ν = tr(L) as the degrees of freedom in a model
One can do this manually, or via
anova(fit0,fit,test="F")
anova(fit0,fit,test="Chisq")
It should be noted, however, that such tests treat λ as fixed, even though in reality it is almost always estimated from the data using CV or GCV
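With regression splines (a fixed basis, so ordinary least-squares theory applies exactly) the manual F computation matches anova; a sketch with lm on simulated data:

```r
library(splines)
set.seed(1)
x <- runif(100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.3)

fit0 <- lm(y ~ x)              # null model: linear
fit  <- lm(y ~ ns(x, df = 4))  # alternative: natural cubic spline

# Manual F statistic: (RSS0 - RSS)/q over sigma^2-hat from the larger model
rss0 <- sum(resid(fit0)^2)
rss  <- sum(resid(fit)^2)
q <- df.residual(fit0) - df.residual(fit)
Fstat <- ((rss0 - rss) / q) / (rss / df.residual(fit))

tab <- anova(fit0, fit, test = "F")
all.equal(Fstat, tab$F[2])  # manual computation agrees with anova
```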