Page 1: Simulation - Lecture 2 - Inversion and transformation methods

Simulation - Lecture 2 - Inversion and transformation methods

Lecture version: Monday 27th January, 2020, 23:25

Robert Davies

Part A Simulation and Statistical Programming

Hilary Term 2020

Part A Simulation. HT 2020. R. Davies. 1 / 27

Page 2

Recap from previous lecture

- Examples of distributions from different fields we might be interested in studying
- Monte Carlo
  - Suppose X ∼ dist, and we have a method to simulate iid random variables Xi ∼ dist
  - Then θn = (1/n) ∑_{i=1}^n φ(Xi) is an unbiased estimator of E(φ(X))
  - We can form a confidence interval for θ using the sample variance S²_φ(X) and the central limit theorem
- Rest of simulation lectures: how do we generate X ∼ dist in the real world for increasingly complicated distributions?
  - Today: inversion, the simplest case, when the CDF is well behaved
  - Also today: transformation, when you can build your distribution from distributions that are well behaved
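As a concrete reminder of the recap above, here is a minimal Monte Carlo sketch in R; the target E(X²) = 2 for X ∼ Exp(1), the name phi, and the seed are illustrative choices, not from the slides:

```r
set.seed(1)
n <- 100000
x <- rexp(n, rate = 1)                ## Xi ~ Exp(1), simulated iid
phi <- function(v) v^2                ## estimate E(phi(X)) = E(X^2) = 2
theta_n <- mean(phi(x))               ## unbiased Monte Carlo estimator
s2 <- var(phi(x))                     ## sample variance of phi(Xi)
ci <- theta_n + c(-1, 1) * 1.96 * sqrt(s2 / n)  ## 95% CLT interval
```

With n this large, theta_n lands very close to 2 and the interval is narrow.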

Page 3

A quick note about pseudo-random numbers

- We seek to be able to generate complicated random variables and stochastic models.
- Henceforth, we will assume that we have access to a sequence of independent random variables (Ui, i ≥ 1) that are uniformly distributed on (0, 1); i.e. Ui ∼ U[0, 1].
- In R, the command u <- runif(100) returns 100 realizations of a uniform r.v. in (0, 1).
- Strictly speaking, we only have access to pseudo-random (deterministic) numbers.
- The behaviour of modern random number generators (constructed on number theory) resembles mathematical random numbers in many respects: standard statistical tests for uniformity, independence, etc. do not show significant deviations.
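As a small illustration of that last point, one standard uniformity check is the Kolmogorov–Smirnov test; the seed here is an arbitrary choice:

```r
set.seed(2020)
u <- runif(100)               ## 100 pseudo-random draws from U(0, 1)
range(u)                      ## all values lie strictly inside (0, 1)
ks.test(u, "punif")$p.value   ## Kolmogorov-Smirnov test against U(0, 1)
```

A large p-value is consistent with uniformity, as expected from a well-behaved generator.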

Page 4

Outline

Inversion Method

Transformation Methods

Page 5

Recap of CDF definition

- A function F : R → [0, 1] is a cumulative distribution function (cdf) if
  - F is increasing; i.e. if x ≤ y then F(x) ≤ F(y)
  - F is right continuous; i.e. F(x + ε) → F(x) as ε → 0 (ε > 0)
  - F(x) → 0 as x → −∞ and F(x) → 1 as x → +∞.
- A random variable X ∈ R has cdf F if P(X ≤ x) = F(x) for all x ∈ R.
- If F is differentiable on R, with derivative f, then X is continuously distributed with probability density function (pdf) f.

Page 6

The CDF of a random variable has a uniform distribution

- Proposition. Let X be a random variable whose cdf F is continuous and strictly increasing on R, with inverse F−1 : [0, 1] → R. Then the random variable F(X) has a uniform distribution on [0, 1].
- Proof. Let y ∈ [0, 1]. Then

  P(F(X) ≤ y) = P(X ≤ F−1(y)) = F(F−1(y)) = y

  and so F(X) ∼ U[0, 1].
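A quick numerical check of this proposition in R, using an exponential X as an illustrative choice (the rate 2 and the seed are arbitrary):

```r
set.seed(11)
x <- rexp(100000, rate = 2)    ## X ~ Exp(2)
u <- pexp(x, rate = 2)         ## F(X): apply the Exp(2) cdf to the draws
round(c(mean(u), var(u)), 3)   ## close to 1/2 and 1/12, as for U[0, 1]
```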

Page 7

The inverse of the CDF applied to uniforms generates random variables from the CDF

- Proposition. Let F be a continuous and strictly increasing cdf on R, with inverse F−1 : [0, 1] → R. Let U ∼ U[0, 1]; then X = F−1(U) has cdf F.
- Proof. Let x ∈ R. Then we have

  P(X ≤ x) = P(F−1(U) ≤ x) = P(U ≤ F(x)) = F(x).

Page 8

Inversion method

Algorithm 1 Inversion method

- Given CDF F, calculate F−1
- Simulate independent Ui ∼ U[0, 1]
- Return Xi = F−1(Ui) ∼ F
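In R the built-in quantile functions q* are exactly F−1, so the algorithm above is two lines; using the standard normal here is an illustrative choice:

```r
set.seed(5)
u <- runif(100000)              ## step 2: independent U[0, 1] draws
x <- qnorm(u)                   ## step 3: qnorm is F^{-1} for N(0, 1)
round(c(mean(x), var(x)), 2)    ## close to 0 and 1
```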

Page 9

Illustrative example of inversion method using Gaussiandistribution

[Figure: a draw u ∼ U(0,1) on the cdf's vertical axis is mapped to x = F−1(u). Top: pdf of a Gaussian r.v.; bottom: associated cdf.]

Page 10

Exponential distribution example

- Exponential distribution. Let λ > 0. Then the exponential CDF is given by

  F(x) = 1 − e^(−λx)

  We calculate

  u = F(x) = 1 − e^(−λx)
  ⟹ log(1 − u) = −λx
  ⟹ x = −log(1 − u)/λ

Page 11

Exponential rvs using the inversion method

set.seed(9119)
lambda <- 0.25
n <- 100000
u <- runif(n)
x_inversion <- -log(1 - u) / lambda
x_rexp <- rexp(n = n, rate = lambda)
wilcox.test(x_inversion, x_rexp)$p.value # 0.46

[Figure: histograms of x_inversion and x_rexp over (0, 25); the two frequency distributions agree closely.]

Page 12

Examples

- Cauchy distribution. It has pdf and cdf

  f(x) = 1/(π(1 + x²)),  F(x) = 1/2 + arctan(x)/π

  We have

  u = F(x) ⇔ u = 1/2 + arctan(x)/π ⇔ x = tan(π(u − 1/2))

- Logistic distribution. It has pdf and cdf

  f(x) = exp(−x)/(1 + exp(−x))²,  F(x) = 1/(1 + exp(−x))

  and

  u = F(x) ⇔ x = log(u/(1 − u)).

- Practice: derive an algorithm to simulate from a Weibull random variable with parameters α, λ > 0.
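A short sketch of the Cauchy case in R (seed and sample size are arbitrary choices):

```r
set.seed(17)
n <- 100000
u <- runif(n)
x <- tan(pi * (u - 1/2))   ## F^{-1}(u) for the standard Cauchy
median(x)                  ## close to 0 (the Cauchy has no mean)
mean(abs(x) > 1)           ## close to 1/2, since F(1) = 3/4
```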

Page 13

Definition of the discrete CDF inverse

- Proposition. Let F be a cdf on R and define its generalized inverse F−1 : [0, 1] → R by

  F−1(u) = inf{x ∈ R : F(x) ≥ u}.

  Let U ∼ U[0, 1]; then X = F−1(U) has cdf F.

Page 14

Discrete N-valued r.v. CDF

- If X is a discrete N-valued r.v. with P(X = n) = p(n), we get F(x) = ∑_{j=0}^{⌊x⌋} p(j), and F−1(u) is the x ∈ N such that

  ∑_{j=0}^{x−1} p(j) < u ≤ ∑_{j=0}^{x} p(j)

  with the left-hand sum equal to 0 if x = 0.
- Note: the mapping at the values F(n) is irrelevant (0 probability of hitting a single point)
- Note: the same method is applicable to any discrete-valued r.v. X with P(X = xn) = p(n).

Page 15

Example code for simple discrete rv

p <- c(0.5, 0.3, 0.2)     ## pmf
p_norm <- c(0, cumsum(p)) ## 0.0 0.5 0.8 1.0
m <- length(p)
n <- 100000
u <- runif(n)
x <- array(NA, n)
for (i in 1:n) {
  for (j in 1:m) {
    if ((p_norm[j] < u[i]) & (u[i] <= p_norm[j + 1])) {
      x[i] <- j
    }
  }
}
sum(is.na(x)) ## 0
table(x)
## 1 2 3

## 50227 30105 19668
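As an aside, the double loop can be replaced by a vectorized one-liner; findInterval is an assumed alternative here, not code from the slides:

```r
set.seed(101)
p <- c(0.5, 0.3, 0.2)
n <- 100000
u <- runif(n)
x <- findInterval(u, cumsum(p)) + 1  ## bin index of u among the cumulative sums
round(table(x) / n, 2)               ## close to 0.50 0.30 0.20
```

(Ties on a bin boundary are handled differently from the strict inequalities above, but they occur with probability 0.)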

Page 16

Example: Geometric Distribution

- If 0 < p < 1 and q = 1 − p and we want to simulate X ∼ Geom(p), then

  p(x) = p q^(x−1),  F(x) = 1 − q^x,  x = 1, 2, 3, ...

- The smallest x ∈ N giving F(x) ≥ u is the smallest x ≥ 1 satisfying

  x ≥ log(1 − u)/log(q)

  and this is given by

  x = F−1(u) = ⌈log(1 − u)/log(q)⌉

  where ⌈·⌉ rounds up, and we could replace 1 − u with u.
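The ceiling formula translates directly to R; p = 0.3 and the seed are arbitrary illustrative choices:

```r
set.seed(23)
p <- 0.3
q <- 1 - p
n <- 100000
u <- runif(n)
x <- ceiling(log(1 - u) / log(q))  ## F^{-1}(u) via the ceiling formula
c(min(x), mean(x))                 ## support starts at 1; mean close to 1/p
```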

Page 17

Illustration of the Inversion Method: Discrete case

Page 18

Outline

Inversion Method

Transformation Methods

Page 19

Transformation Methods

- Suppose we
  - Have a random variable Y ∼ Q, Y ∈ ΩQ, which we can simulate (e.g., by inversion)
  - Have a random variable X ∼ P, X ∈ ΩP, which we wish to simulate
  - Can find a function ϕ : ΩQ → ΩP with the property that if Y ∼ Q then X = ϕ(Y) ∼ P.
- Then we can simulate from X by first simulating Y ∼ Q, and then setting X = ϕ(Y).
- Inversion is a special case of this idea.
- We may generalize this idea to take functions of collections of variables with different distributions.

Page 20

Transformation method

Algorithm 2 Transformation method

- Find Y ∼ Q that you can simulate from, and a function ϕ such that X = ϕ(Y) ∼ P
- Simulate independent Yi ∼ Q
- Return Xi = ϕ(Yi) ∼ P

Page 21

Exponential to gamma example

- Example: Let Yi, i = 1, 2, ..., α, be iid variables with Yi ∼ Exp(1) and X = β^(−1) ∑_{i=1}^α Yi; then X ∼ Gamma(α, β).

  Proof: The MGF of the random variable X is

  E(e^(tX)) = ∏_{i=1}^α E(e^(β^(−1) t Yi)) = (1 − t/β)^(−α)

  which is the MGF of a Gamma(α, β) variable. Incidentally, the Gamma(α, β) density is f_X(x) = (β^α / Γ(α)) x^(α−1) e^(−βx) for x > 0.
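A sketch of this transformation in R; α = 3, β = 2 and the seed are arbitrary choices:

```r
set.seed(42)
alpha <- 3
beta <- 2
n <- 100000
y <- matrix(rexp(n * alpha, rate = 1), nrow = n)  ## n rows of alpha iid Exp(1) draws
x <- rowSums(y) / beta                            ## X = beta^{-1} * sum(Yi)
round(c(mean(x), var(x)), 2)                      ## close to alpha/beta and alpha/beta^2
```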

Page 22

Transformation Methods: Box-Muller Algorithm

- Proposition. If R² ∼ Exp(1/2) and Θ ∼ U[0, 2π] are independent, then X = R cos Θ and Y = R sin Θ are independent with X ∼ N(0, 1), Y ∼ N(0, 1).

  Proof: We have f_{R²,Θ}(r², θ) = (1/2) exp(−r²/2) · (1/(2π)), and therefore we are interested in

  f_{X,Y}(x, y) = f_{R²,Θ}(r²(x, y), θ(x, y)) |det ∂(r², θ)/∂(x, y)|

  where

  |det ∂(r², θ)/∂(x, y)| = |(∂r²/∂x)(∂θ/∂y) − (∂r²/∂y)(∂θ/∂x)| = 2

  ⟹ f_{X,Y}(x, y) = (1/2) e^(−(x² + y²)/2) · (1/(2π)) · 2 = ((1/√(2π)) e^(−x²/2)) · ((1/√(2π)) e^(−y²/2))

Page 23

Transformation Methods: Box-Muller Algorithm, applied

- Let U1 ∼ U[0, 1] and U2 ∼ U[0, 1]; then

  R² = −2 log(U1) ∼ Exp(1/2)
  Θ = 2π U2 ∼ U[0, 2π]

  and

  X = R cos Θ ∼ N(0, 1)
  Y = R sin Θ ∼ N(0, 1).

- Note this still requires evaluating log, cos and sin.

Page 24

Box Muller applied

set.seed(913)
n <- 100000
u1 <- runif(n)
u2 <- runif(n)
lambda <- 1 / 2
r2 <- -log(1 - u1) / lambda ## are now Exp(1/2)
theta <- 2 * pi * u2 ## U[0, 2*pi]
r <- sqrt(r2)
x <- r * cos(theta)
y <- r * sin(theta)
round(c(mean(x), var(x)), 3) ## -0.001 0.998
round(c(mean(y), var(y)), 3) ## -0.003 1.000
cor(x, y) ## -0.0006317268

Page 25

Simulating Multivariate Normal

- Consider X ∈ R^d, X ∼ N(µ, Σ), where µ is the mean and Σ is the (positive definite) covariance matrix:

  f_X(x) = (2π)^(−d/2) |det Σ|^(−1/2) exp(−(1/2)(x − µ)^T Σ^(−1) (x − µ)).

- Proposition. Let Z = (Z1, ..., Zd) be a collection of d independent standard normal random variables. Let L be a real d × d matrix satisfying

  L L^T = Σ,

  and

  X = LZ + µ.

  Then X ∼ N(µ, Σ).

Page 26

Simulating Multivariate Normal proof

- Proof. We have f_Z(z) = (2π)^(−d/2) exp(−(1/2) z^T z). The joint density of the new variables is

  f_X(x) = f_Z(z) |det ∂z/∂x|

  where ∂z/∂x = L^(−1). Since det(L) = det(L^T), we have det(L)² = det(Σ); and det(L^(−1)) = 1/det(L), so det(L^(−1)) = det(Σ)^(−1/2). Also

  z^T z = (x − µ)^T (L^(−1))^T L^(−1) (x − µ) = (x − µ)^T Σ^(−1) (x − µ).

- If Σ = V D V^T is the eigendecomposition of Σ, we can pick L = V D^(1/2).
- Cholesky factorization: Σ = L L^T where L is a lower triangular matrix.
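A sketch in R using the Cholesky route; µ, Σ, and the seed are illustrative choices. Note that R's chol() returns the upper-triangular factor U with U^T U = Σ, so the lower-triangular L is t(chol(Sigma)):

```r
set.seed(7)
mu <- c(1, -1)
Sigma <- matrix(c(2, 0.5, 0.5, 1), 2, 2)
L <- t(chol(Sigma))                  ## lower-triangular L with L %*% t(L) = Sigma
n <- 100000
Z <- matrix(rnorm(2 * n), nrow = 2)  ## each column: 2 independent N(0, 1) draws
X <- L %*% Z + mu                    ## column-wise X = L Z + mu (mu recycles by row)
rowMeans(X)                          ## close to mu
cov(t(X))                            ## close to Sigma
```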

Page 27

Recap

- Monte Carlo is useful but requires simulated random variables
- Assume we can always draw uniform random variables
- Inversion method: for continuous, strictly increasing CDFs we can draw Xi as F−1(Ui)
- We can do the same thing for discrete distributions using the generalized inverse
- Transformation method: if we can find ϕ and a distribution for Yi such that Xi = ϕ(Yi) has the target distribution, then we can simulate Xi in that way
