Transcript
Page 1: eaton.math.rpi.edu/faculty/Kramer/AppSDE13/appsde021513.pdf (2013. 2. 18.)

Homework 1 due Friday, February 22 at 2 PM.

No class on Tuesday, February 19.

Reading: Kloeden and Platen, Sec. 1.3

Simulating models with random variables is often called Monte Carlo simulation. But true Monte Carlo algorithms, strictly speaking, refer to using random simulations to solve deterministic problems.

Many Monte Carlo simulations are formulated so that they refer only to a stream of independent random variables. When this is done, the problem of simulating random variables is reduced to simulating a single (one-dimensional) random variable.

Our discrete-time Brownian motion model is expressed in terms of independent random variables:

If Zn is vector-valued, then we have to find a way to express it in terms of independent random variables.
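The model equation itself did not survive the transcript; the following is a minimal Python sketch under the assumed standard form W_{n+1} = W_n + Z_n, with independent Gaussian increments Z_n of variance Δt (the function name and parameters are mine):

```python
import math
import random

def brownian_path(n_steps, dt, seed=None):
    """Discrete-time Brownian motion (assumed form): start at W_0 = 0 and
    add an independent Gaussian increment Z_n ~ N(0, dt) at each step."""
    rng = random.Random(seed)  # private generator; a seed gives reproducibility
    w = [0.0]
    for _ in range(n_steps):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return w

path = brownian_path(1000, 0.01, seed=7)
```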

How does one simulate a random variable at all on a computer? Computers are deterministic. Random numbers are most typically produced by pseudorandom algorithms: deterministic algorithms that produce a stream of numbers which, when confronted with a battery of standard statistical tests, appear to behave as if they were random. These pseudorandom algorithms typically accept a seed value and produce predictable results based on it. Typical pseudorandom generators (built into software or the operating system as ran or rand) usually simulate a random variable uniformly distributed on the unit interval.

These studies are done by certain specialists, such as Pierre L'Ecuyer (Montreal), who has a website with review papers and suggested high-quality pseudorandom algorithms.

For debugging or testing code, it's often useful to use explicit seeds.
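A minimal Python illustration of explicit seeding (standard-library random; the same idea applies to any generator that accepts a seed):

```python
import random

# Seeding makes the pseudorandom stream reproducible: the same seed
# always produces the same sequence, which is invaluable for debugging.
random.seed(42)
first_run = [random.random() for _ in range(3)]

random.seed(42)  # reset to the identical seed
second_run = [random.random() for _ in range(3)]

assert first_run == second_run  # identical deterministic streams
```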


Interlude on Numerical Simulation of Random Variables
Friday, February 15, 2013, 2:00 PM

AppSDE13 Page 1


For the purposes of simulating uniformly distributed random variables, therefore, one can use (with the above caveats) built-in random number generators and/or your own pseudorandom number generator.

If you want to simulate

What about simulating other types of random variables? Sometimes this is also built into software packages like MATLAB, but how do these algorithms actually work, and how could you write your own if you need to simulate a random variable that isn't covered by the software you're using? We will describe some general procedures.

Discrete random variables
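The notes' diagram for the discrete case is not in the transcript; a standard procedure (assumed here) partitions [0, 1) into subintervals whose lengths are the probabilities, then reads off where a uniform draw lands:

```python
import bisect
import itertools
import random

def sample_discrete(values, probs, u=None):
    """Sample values[i] with probability probs[i] by partitioning [0, 1):
    a uniform draw U landing in the i-th subinterval returns values[i]."""
    if u is None:
        u = random.random()
    cumulative = list(itertools.accumulate(probs))
    i = bisect.bisect_right(cumulative, u)
    return values[min(i, len(values) - 1)]  # guard against float round-off

# Subintervals: [0, 0.2) -> "a", [0.2, 0.7) -> "b", [0.7, 1.0) -> "c".
assert sample_discrete(["a", "b", "c"], [0.2, 0.5, 0.3], u=0.65) == "b"
```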

More general (particularly continuous) random variables?

A completely general-purpose procedure is the Inverse Transform Method.

Given an arbitrary (one-dimensional) random variable Y with prescribed probability distribution, represent this information in terms of the CDF F_Y. If U is uniformly distributed on (0, 1), then Y = F_Y^{-1}(U) has the prescribed distribution.

AppSDE13 Page 2


A slight adjustment is needed when the CDF has flat regions or jumps (i.e., for discrete random variables): one interprets the inverse as the generalized inverse F_Y^{-1}(u) = inf{ y : F_Y(y) >= u }.

A good example where this works well is exponentially distributed random variables.
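For Exponential(lam) the CDF F(y) = 1 - exp(-lam*y) inverts in closed form, so the inverse transform is a one-liner. A minimal sketch (function name is mine):

```python
import math
import random

def exponential_sample(lam, u=None):
    """Inverse transform for Exponential(lam):
    F(y) = 1 - exp(-lam*y)  =>  F^{-1}(u) = -log(1 - u) / lam.
    (Since 1 - U is also Uniform(0,1), -log(U)/lam works equally well.)"""
    if u is None:
        u = random.random()
    return -math.log(1.0 - u) / lam

# Consistency check: applying the CDF to the sample recovers u.
lam, u = 2.0, 0.75
y = exponential_sample(lam, u)
assert abs((1.0 - math.exp(-lam * y)) - u) < 1e-12
```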

AppSDE13 Page 3


How does the inverse transform method work for Gaussians?

AppSDE13 Page 4


Inverting this function (the Gaussian CDF, which has no closed-form inverse) is awkward to program and slow. But there are better special-purpose methods for simulating Gaussian random variables.

Simulating Gaussian (normal) random variables

There are two widely used standard approaches that work only for Gaussians because of the following special property: a pair of independent standard Gaussian random variables, viewed in polar coordinates, has an angle that is uniformly distributed on [0, 2*pi) and, independently, a squared radius that is exponentially distributed with mean 2.

This follows from computing probability distributions for functions of random variables.

A literal implementation of this idea gives the Box-Muller method.

AppSDE13 Page 5


This produces independent, standard Gaussian random variables in pairs (throw one away if you just need an odd number).
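A minimal Python sketch of the Box-Muller transform described above (the function name is mine; the formulas are the standard ones):

```python
import math
import random

def box_muller(u1=None, u2=None):
    """Box-Muller transform: two independent Uniform(0,1) draws become a
    pair of independent standard Gaussians via polar coordinates,
    with radius R = sqrt(-2 ln U1) and angle Theta = 2*pi*U2."""
    if u1 is None:
        u1 = 1.0 - random.random()  # in (0, 1]; avoids log(0)
    if u2 is None:
        u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

z1, z2 = box_muller()  # one pair per call; discard one for odd counts
```

Each call consumes two uniforms and returns two independent standard normals, matching the "pairs" remark above.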

Once you have standard normal variables Z, you can get arbitrary Gaussian variables by rescaling: X = mu + sigma*Z has mean mu and variance sigma^2.

An alternative that avoids "expensive" evaluations of trigonometric functions is the Polar Marsaglia method.

It is built on the Box-Muller method and the idea of the rejection method.

Rejection method in basic form:

AppSDE13 Page 6


Simulate a random point uniformly distributed in an arbitrary region A by covering it with a larger rectangle B and generating uniform points in B by choosing each coordinate uniformly. If the result lies in A, accept that point; otherwise reject it and repeat the simulation until acceptance.

Polar Marsaglia method:

Simulate a point uniformly within the unit circle by rejecting from the circumscribing square.

Basic idea:
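A Python sketch of the polar Marsaglia method combining the two ideas above: rejection from the circumscribing square, followed by a trig-free transform (the variable names are mine):

```python
import math
import random

def polar_marsaglia():
    """Polar Marsaglia method: rejection-sample a point uniformly in the
    unit disk from the circumscribing square [-1,1] x [-1,1], then map it
    to a pair of independent standard Gaussians without trig calls."""
    while True:
        v1 = 2.0 * random.random() - 1.0  # uniform on [-1, 1)
        v2 = 2.0 * random.random() - 1.0
        s = v1 * v1 + v2 * v2
        if 0.0 < s < 1.0:  # accept only points strictly inside the disk
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return v1 * factor, v2 * factor
```

The acceptance probability is the area ratio pi/4, so on average about 1.27 attempts are needed per accepted pair.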

AppSDE13 Page 7


We saw that the inverse CDF approach, while generally applicable in principle, can be problematic in practice (e.g., for Gaussian random variables and many others) because the inverse CDF may be awkward or expensive to compute. However, we can combine the inverse CDF method with the rejection method to produce a general procedure that is useful for a broad range of random variables.

Suppose we are given some strange probability density pY:

Find a simple function f(y) that is never less than pY and whose shape, after normalization, gives a probability density that is easy to sample from (e.g., exponential or Gaussian).

Sample from this easy distribution; this produces a candidate value for Y. Accept the candidate y with probability pY(y)/f(y); otherwise reject it and draw a new candidate.

Returning to our calculation of the continuous-time limit for the forward Kolmogorov equation for Brownian motion:

The result we obtain will depend on how the size of the kick scales with the size of the time interval of the kick. We already saw that if this doesn't decrease, then one should use jump processes instead. But at what rate should the size of the kick be scaled with the time interval?

We will argue that the most natural scaling is for the size of the kick to be proportional to the square root of the time interval, i.e., of order sqrt(Δt):

AppSDE13 Page 8



With this scaling, the drift and diffusivity coefficients have finite limits. This is clear for the drift. For the diffusivity, note:

Also,

And the error term does go to zero.
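The limit calculation can be made concrete for the simplest kick model. This is a reconstruction under an assumed symmetric-kick form (the notes' handwritten equations are not in the transcript):

```latex
% Assumed model: symmetric kicks of size \sigma\sqrt{\Delta t}.
\[
  \Delta X = \pm\,\sigma\sqrt{\Delta t}
  \quad\text{with probability }\tfrac{1}{2}\text{ each.}
\]
% Then, per unit time,
\[
  \text{drift: } \frac{\mathbb{E}[\Delta X]}{\Delta t} = 0,
  \qquad
  \text{diffusivity: } \frac{\mathbb{E}[(\Delta X)^2]}{\Delta t} = \sigma^2,
\]
% both finite as \Delta t \to 0, while the higher-moment error term
\[
  \frac{\mathbb{E}[(\Delta X)^4]}{\Delta t} = \sigma^4\,\Delta t \;\to\; 0.
\]
```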

Next time we will show that other choices of scaling don't lead to such clean results.

AppSDE13 Page 9

