Monte Carlo Methods for Partial Differential Equations

Prof. Michael Mascagni

Department of Computer Science, Department of Mathematics

Department of Scientific Computing, Florida State University, Tallahassee, FL 32306 USA

E-mail: [email protected] or [email protected]
URL: http://www.cs.fsu.edu/∼mascagni

In collaboration with Drs. Marcia O. Fenley and Nikolai Simonov, and Messrs. Alexander Silalahi and James McClain

Research supported by ARO, DOE/ASCI, NATO, and NSF


Introduction

Early History of MCMs for PDEs

Probabilistic Representations of PDEs
  Probabilistic Representation of Elliptic PDEs via Feynman-Kac
  Probabilistic Representation of Parabolic PDEs via Feynman-Kac
  Probabilistic Approaches of Reaction-Diffusion Equations
  Monte Carlo Methods for PDEs from Fluid Mechanics
  Probabilistic Representations for Other PDEs

Monte Carlo Methods and Linear Algebra

Parallel Computing Overview
  General Principles for Constructing Parallel Algorithms
  Parallel N-body Potential Evaluation

Bibliography


Early History of MCMs for PDEs

1. Courant, Friedrichs, and Lewy: Their pivotal 1928 paper has probabilistic interpretations and MC algorithms for linear elliptic and parabolic problems

2. Fermi/Ulam/von Neumann: Atomic bomb calculations were done using Monte Carlo methods for neutron transport; their success inspired much post-War work, especially in nuclear reactor design

3. Kac and Donsker: Used large deviation calculations to estimate eigenvalues of a linear Schrödinger equation

4. Forsythe and Leibler: Derived an MCM for solving special linear systems related to discrete elliptic PDE problems


5. Curtiss: Compared Monte Carlo, direct, and iterative solution methods for Ax = b

- The general conclusion of all this work (as other methods were explored) is that random walk methods do worse than conventional methods on serial computers, except when modest precision and few solution values are required

- Much of this “conventional wisdom” needs revision due to complexity differences with parallel implementations


Probabilistic Representations of PDEs

Probabilistic Representation of Elliptic PDEs via Feynman-Kac

Elliptic PDEs as Boundary Value Problems

1. Elliptic PDEs describe equilibrium, like the electrostatic field set up by a charge distribution, or the strain in a beam due to loading

2. There is no time dependence in elliptic problems, so it is natural to have the interior configuration satisfy a PDE, with boundary conditions to choose a particular global solution

3. Elliptic PDEs are thus part of boundary value problems (BVPs), such as the famous Dirichlet problem for Laplace’s equation:

  \frac{1}{2}\Delta u(x) = 0, \quad x \in \Omega, \qquad u(x) = g(x), \quad x \in \partial\Omega \qquad (1)

4. Here Ω ⊂ R^s is an open set (domain) with a smooth boundary ∂Ω, and g(x) is the given boundary condition


- An important equivalence for the Laplace equation is the mean value property (MVP), i.e. if u(x) is a solution to (1) then:

  u(x) = \frac{1}{|\partial S_n(x,r)|} \int_{\partial S_n(x,r)} u(y)\, dy

  ∂S_n(x, r) is the surface of an n-dimensional sphere centered at x with radius r

- Another way to express u(x) is via the Green’s function: u(x) = \int_{\partial\Omega} G(x,y)\, u(y)\, dy
- Showing a function has the MVP and the right boundary values establishes it as the unique solution to (1)
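The MVP can be sampled directly. Below is a minimal sketch (not from the slides) of the classical walk-on-spheres idea, which simply iterates the mean value property, applied to the Dirichlet problem (1) on the unit disk; the boundary data g, the tolerance eps, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(p):
    """Illustrative boundary data on the unit circle: g = x^2 - y^2 (harmonic)."""
    return p[0]**2 - p[1]**2

def walk_on_spheres(x0, n_walks=20000, eps=1e-3):
    """Estimate u(x0) for the Dirichlet problem (1) on the unit disk.

    Each walk repeatedly applies the MVP: jump to a uniformly random point on
    the largest circle centered at the current point that stays inside the
    domain, until within eps of the boundary, then score the boundary value."""
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        while True:
            r = 1.0 - np.linalg.norm(x)          # distance to the unit circle
            if r < eps:                          # close enough: score g at boundary
                total += g(x / np.linalg.norm(x))
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x = x + r * np.array([np.cos(theta), np.sin(theta)])
    return total / n_walks

print(walk_on_spheres((0.5, 0.2)))   # exact harmonic solution: 0.5^2 - 0.2^2 = 0.21
```

Because each jump lands on a sphere where the MVP holds exactly, there is no time-discretization error; only the eps-shell termination and the sampling variance remain.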


Probabilistic Approaches to Elliptic PDEs

- Early this century probabilists placed measures on different sets, including sets of continuous functions
  1. Called Wiener measure
  2. Gaussian based: \frac{1}{\sqrt{2\pi t}}\, e^{-x^2/(2t)}
  3. Sample paths are Brownian motion
  4. Related to linear PDEs

- E.g. u(x) = E_x[g(β(τ_∂Ω))] is the Wiener integral representation of the solution to (1); to prove it we must check:
  1. u(x) = g(x) on ∂Ω
  2. u(x) has the MVP

- Interpretation via Brownian motion and/or a probabilistic Green’s function

- Important: τ_∂Ω = first passage (hitting) time of the path β(·) started at x to ∂Ω; statistics based on this random variable are intimately related to elliptic problems
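As a contrast with the walk-on-spheres sketch above, here is a hedged sketch that uses the representation u(x) = E_x[g(β(τ_∂Ω))] literally: simulate Brownian paths with a small fixed time step until they leave the unit disk, and average g at the (projected) exit point. The step size, path count, and boundary data are illustrative assumptions; the fixed step introduces a small exit bias that walk-on-spheres avoids.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(p):
    """Illustrative boundary data on the unit circle: g = x^2 - y^2 (harmonic)."""
    return p[0]**2 - p[1]**2

def brownian_dirichlet(x0, dt=1e-3, n_paths=4000):
    """Estimate u(x0) = E_x0[g(beta(tau_dOmega))] for Laplace's equation on the
    unit disk by time-stepping Brownian paths until they first exit the domain."""
    total = 0.0
    for _ in range(n_paths):
        x = np.array(x0, dtype=float)
        while np.linalg.norm(x) < 1.0:
            x = x + np.sqrt(dt) * rng.standard_normal(2)   # Brownian increment
        total += g(x / np.linalg.norm(x))                  # project exit point to the circle
    return total / n_paths

print(brownian_dirichlet((0.5, 0.2)))   # exact value 0.21
```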


- Can generalize Wiener integrals to different BVPs via the relationship between elliptic operators, stochastic differential equations (SDEs), and the Feynman-Kac formula

- E.g. consider the general elliptic PDE:

  L u(x) - c(x) u(x) = f(x), \quad x \in \Omega, \quad c(x) \ge 0; \qquad u(x) = g(x), \quad x \in \partial\Omega \qquad (2.1a)

  where L is an elliptic partial differential operator of the form:

  L = \frac{1}{2} \sum_{i,j=1}^{s} a_{ij}(x) \frac{\partial^2}{\partial x_i \partial x_j} + \sum_{i=1}^{s} b_i(x) \frac{\partial}{\partial x_i} \qquad (2.1b)


- The Wiener integral representation is:

  u(x) = E^L_x\left[ \int_0^{\tau_{\partial\Omega}} \left( \frac{g(\beta(\tau_{\partial\Omega}))}{\tau_{\partial\Omega}} - f(\beta(t)) \right) e^{-\int_0^t c(\beta(s))\,ds}\, dt \right] \qquad (2.2a)

  the expectation is w.r.t. paths which are solutions to the following (vector) SDE:

  d\beta(t) = \sigma(\beta(t))\, dW(t) + b(\beta(t))\, dt, \quad \beta(0) = x \qquad (2.2b)

- The matrix σ(·) is the Choleski factor (matrix-like square root) of a_ij(·) in (2.1b)

- To use these ideas to construct MCMs for elliptic BVPs one must:
  1. Simulate sample paths via SDEs (2.2b)
  2. Evaluate (2.2a) on the sample paths
  3. Sample until variance is acceptable
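A minimal sketch of this three-step recipe, under illustrative assumptions that are not taken from the slides: Ω is the unit disk, a = I (so its Choleski factor is the identity), b = 0, c = f = 0, and g = x² − y², for which the exact solution is known. The loop shows where a general a_ij (via np.linalg.cholesky), drift b, killing rate c, and source f would enter; the score uses the standard Feynman-Kac accumulation, which coincides with (2.2a) in this c = f = 0 test case.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative problem data for (2.1a)/(2.1b); every choice below is an assumption
# made for the demo: unit disk, a = I, b = 0, c = 0, f = 0, boundary data
# g = x^2 - y^2, so the exact solution is u(x, y) = x^2 - y^2.
a      = lambda x: np.eye(2)                 # diffusion matrix a_ij(x)
b      = lambda x: np.zeros(2)               # drift b_i(x)
c      = lambda x: 0.0                       # killing rate c(x) >= 0
f      = lambda x: 0.0                       # source term f(x)
g      = lambda x: x[0]**2 - x[1]**2         # Dirichlet boundary data
inside = lambda x: np.linalg.norm(x) < 1.0   # indicator of Omega

def feynman_kac_estimate(x0, dt=1e-3, n_paths=4000):
    """Steps 1-3 of the slide: simulate paths of the SDE (2.2b) by Euler-Maruyama,
    score each path with a Feynman-Kac functional, and average."""
    scores = np.empty(n_paths)
    for k in range(n_paths):
        x = np.array(x0, dtype=float)
        discount = 1.0          # running exp(-int_0^t c(beta(s)) ds)
        source   = 0.0          # running int_0^t f(beta(s)) * discount ds
        while inside(x):
            sigma = np.linalg.cholesky(a(x))              # sigma sigma^T = a
            dW = np.sqrt(dt) * rng.standard_normal(2)
            x = x + sigma @ dW + b(x) * dt                # one step of (2.2b)
            source   += f(x) * discount * dt
            discount *= np.exp(-c(x) * dt)
        scores[k] = g(x / np.linalg.norm(x)) * discount - source
    return scores.mean(), scores.std() / np.sqrt(n_paths)

mean, stderr = feynman_kac_estimate((0.5, 0.2))
print(mean, "+/-", stderr, "(exact 0.21)")   # step 3: sample until the error bar is acceptable
```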


Probabilistic Approaches to Parabolic PDEs via Feynman-Kac

- Can generalize Wiener integrals to a wide class of IBVPs via the relationship between elliptic operators, stochastic differential equations (SDEs), and the Feynman-Kac formula

- Recall that as t → ∞, parabolic → elliptic
- E.g. consider the general parabolic PDE:

  u_t = L u(x) - c(x) u(x) - f(x), \quad x \in \Omega, \quad c(x) \ge 0; \qquad u(x) = g(x), \quad x \in \partial\Omega \qquad (2.3a)

  where L is an elliptic partial differential operator of the form:

  L = \frac{1}{2} \sum_{i,j=1}^{s} a_{ij}(x) \frac{\partial^2}{\partial x_i \partial x_j} + \sum_{i=1}^{s} b_i(x) \frac{\partial}{\partial x_i} \qquad (2.3b)


- The Wiener integral representation is:

  u(x,t) = E^L_x\left[ g(\beta(\tau_{\partial\Omega})) - \int_0^t f(\beta(t))\, e^{-\int_0^t c(\beta(s))\,ds}\, dt \right] \qquad (2.4a)

  the expectation is w.r.t. paths which are solutions to the following (vector) SDE:

  d\beta(t) = \sigma(\beta(t))\, dW(t) + b(\beta(t))\, dt, \quad \beta(0) = x \qquad (2.4b)

- The matrix σ(·) is the Choleski factor (matrix-like square root) of a_ij(·) in (2.3b)


Different SDEs, Different Processes, Different Equations

- The SDE gives us a process, and the process defines L (note: a complete definition of L includes the boundary conditions)

- We have solved only the Dirichlet problem; what about other BCs?

- Neumann Boundary Conditions: ∂u/∂n = g(x) on ∂Ω

- If one uses reflecting Brownian motion, can sample over these paths (see the sketch after this list)

- Mixed Boundary Conditions: α ∂u/∂n + βu = g(x) on ∂Ω

- Use reflecting Brownian motion and first passage probabilities, together
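As a small, assumption-laden sketch of the path ingredient only (not the full Neumann or mixed estimator, which also needs boundary statistics such as local time): reflecting Brownian motion on the half-line [0, ∞) can be sampled by reflecting each overshoot back into the domain. The step size and path length are arbitrary demo values.

```python
import numpy as np

rng = np.random.default_rng(3)

def reflecting_path(x0, dt=1e-3, n_steps=2000):
    """Sample a reflecting Brownian path on [0, infinity): take an ordinary
    Brownian increment, then reflect any excursion below 0 back into the
    domain. These are the paths used for Neumann-type boundary conditions."""
    path = np.empty(n_steps + 1)
    path[0] = x0
    for n in range(n_steps):
        step = path[n] + np.sqrt(dt) * rng.standard_normal()
        path[n + 1] = abs(step)          # reflection at the boundary x = 0
    return path

p = reflecting_path(0.5)
print(p.min() >= 0.0, p[-1])             # the path never leaves the half-line
```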


Probabilistic Approaches of Reaction-Diffusion Equations

Parabolic PDEs and Initial Value Problems

- Parabolic PDEs are evolution equations: the heat equation specifies how an initial temperature profile evolves with time, a pure initial value problem (IVP):

  \frac{\partial u}{\partial t} = \frac{1}{2} \Delta u, \qquad u(x, 0) = u_0(x) \qquad (3.1a)

- As with elliptic PDEs, there are Feynman-Kac formulas for IVPs and initial-boundary value problems (IBVPs) for parabolic PDEs

- Instead of this approach, try to use the fundamental solution, which has a real probabilistic flavor, in the construction of a MCM: \frac{1}{\sqrt{2\pi t}}\, e^{-x^2/(2t)} is the fundamental solution of (3.1a)


MCMs for Linear Parabolic IVPs

- Consider the IVP in (3.1a): if u_0(x) = δ(x − x_0) (spike at x = x_0), the exact solution is u(x,t) = \frac{1}{\sqrt{2\pi t}}\, e^{-(x-x_0)^2/(2t)}; can interpret this as u(x,t) is N(x_0, t) for MCM sampling of values of u(x,t)

- To solve (3.1a) with u_0(x) general, must approximate u_0(x) with spikes and “move” the spikes via their individual normal distributions (see the sketch after this slide)

- The approximation of a smooth u_0(x) by spikes is quite poor, and so the MCM above gives a solution with large statistical fluctuations (variance)

- Instead, can solve for the gradient of u(x,t) and integrate back to give a better solution, i.e. if we call v(x,t) = ∂u(x,t)/∂x, then v(x,t) solves:

  \frac{\partial v}{\partial t} = \frac{1}{2} \Delta v, \qquad v(x, 0) = \frac{\partial u_0(x)}{\partial x} \qquad (3.1b)
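Before the gradient refinement, here is a minimal sketch of the plain spike method described above, under illustrative assumptions (u_0 is the standard Gaussian density, represented by N equal-weight spikes placed by sampling from it; the grid and particle count are arbitrary demo values): move every spike by a N(0, t) increment and bin the results to estimate u(·, t).

```python
import numpy as np

rng = np.random.default_rng(4)

def heat_ivp_spikes(spike_x, spike_w, t, grid):
    """Plain spike method for (3.1a): u0 is approximated by weighted spikes at
    spike_x; each spike evolves into N(x_i, t), which we sample once and then
    bin to estimate u(., t). The histogram is what makes this estimate noisy."""
    moved = spike_x + np.sqrt(t) * rng.standard_normal(len(spike_x))
    dx = grid[1] - grid[0]
    u_hat = np.zeros_like(grid)
    for xi, wi in zip(moved, spike_w):
        j = int(round((xi - grid[0]) / dx))
        if 0 <= j < len(grid):
            u_hat[j] += wi / dx              # mass wi spread over one bin
    return u_hat

# Illustrative initial condition (an assumption for the demo): u0 = standard
# Gaussian density, represented by N equal-weight spikes sampled from it.
N = 20000
spikes = rng.standard_normal(N)
weights = np.full(N, 1.0 / N)
grid = np.linspace(-5, 5, 101)
u1 = heat_ivp_spikes(spikes, weights, t=1.0, grid=grid)
exact = np.exp(-grid**2 / 4) / np.sqrt(4 * np.pi)   # heat-kernel smoothing: N(0, 2) density
print(np.max(np.abs(u1 - exact)))                    # noticeable statistical error
```

The visible histogram noise is exactly the large variance that the gradient idea above, and the random gradient method on the next slide, are designed to reduce.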


- This variance reduction idea (solving for the gradient and integrating back) is the basis of the random gradient method:
  1. Set up the gradient problem
  2. Initial gradient is spiky
  3. Evolve the gradient via MCM
  4. Integrate to recover function

- Since v(x,t) = ∂u(x,t)/∂x, u(x,t) = \int_{-\infty}^{x} v(y,t)\, dy

- Note that if v(x,t) = \frac{1}{N} \sum_{i=1}^{N} \delta(x - x_i), then u(x,t) = \frac{1}{N} \sum_{i:\, x_i \le x} 1, i.e. a step function

- More generally, if v(x,t) = \sum_{i=1}^{N} a_i \delta(x - x_i), then u(x,t) = \sum_{i:\, x_i \le x} a_i, again a step function; here, by allowing negative a_i’s, we can approximate more than monotone initial conditions

- The random gradient method is very efficient and allows solving pure IVPs on infinite domains without difficulty
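A minimal sketch of the random gradient method in 1D, under illustrative assumptions (the monotone initial condition u_0 = standard normal CDF, so v_0 = u_0' is a probability density and every a_i = 1/N): particles represent v, they diffuse according to (3.1b), and u(x, t) is recovered as the empirical step function from the bullets above.

```python
from math import erf, sqrt

import numpy as np

rng = np.random.default_rng(5)

# Illustrative monotone initial condition (an assumption for this demo):
# u0 = standard normal CDF, so v0 = u0' is the standard normal density.
N = 20000
particles = rng.standard_normal(N)          # equal-weight samples from v(., 0)

def u_estimate(x, particles):
    """Recover u(x, t) as the step function (1/N) * #{i : x_i <= x}."""
    return np.mean(particles <= x)

t = 1.0
particles_t = particles + np.sqrt(t) * rng.standard_normal(N)   # evolve v via (3.1b)

# The exact solution here is the N(0, 1 + t) CDF; compare at a few points.
exact = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0 * (1.0 + t))))
for x in (-1.0, 0.0, 1.0):
    print(x, u_estimate(x, particles_t), exact(x))
```

Note how the recovered u is smooth in the statistical sense even though v is still a sum of spikes; integrating the particle measure is what removes the histogram noise seen in the plain spike sketch.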


I Consider the related linear IVP (c is constant):

  ∂u/∂t = (1/2) ∆u + c u,   u(x, 0) = u_0(x)   (3.2a)

I The first term on the r.h.s. is diffusion, and its effect may be sampled via a normally distributed random number

I The second term on the r.h.s. is an exponential growth/shrinkage term; its effect can also be sampled probabilistically:

I Think of dispatching random walkers to do the sampling (see the note below):
  1. Choose ∆t s.t. ∆t |c| < 1
  2. Move all walkers via N(0, ∆t)
  3. Create/destroy with prob. = ∆t |c|
  4. c > 0: create by doubling
  5. c < 0: destroy by removal
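A quick consistency check, not in the source but standard: in one step each walker yields on average 1 + c∆t copies when c > 0 (it doubles with probability c∆t) and 1 − |c|∆t copies when c < 0 (it is removed with probability |c|∆t), so in either case the expected population is multiplied by 1 + c∆t ≈ e^{c∆t}. Combined with the N(0, ∆t) moves, this is exactly the growth/shrinkage and diffusion that (3.2a) prescribes, to leading order in ∆t.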


MCM for PDEs

Probabilistic Representations of PDEs

Probabilistic Approaches of Reaction-Diffusion Equations

MCMs for Linear Parabolic IVPs

I Consider the related gradient IVP:

  ∂v/∂t = (1/2) ∆v + c v,   v(x, 0) = ∂u_0(x)/∂x   (3.2b)

I Let us summarize the algorithm for the MCM to advance the solution of (3.2a) one time step using ∆t:
  1. Represent v(x, t) = (1/N) ∑_{i=1}^{N} δ(x − x_i)
  2. Choose ∆t s.t. ∆t |c| < 1
  3. Move x_i to x_i + η_i where η_i is N(0, ∆t)
  4. If c > 0, create new walkers at those x_i where ξ_i < ∆t c, with ξ_i ∼ U[0, 1]
  5. If c < 0, destroy walkers at those x_i where ξ_i < −∆t c, with ξ_i ∼ U[0, 1]
  6. Over the remaining points, u(x, t + ∆t) = (1/N) ∑_{i : x_i ≤ x} 1

I This is the linear PDE version of the random gradient method (RGM); a code sketch of one such step is given below
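A minimal sketch of one RGM step in code for the linear problem; this is my own illustration of the steps listed above, not code from the source, and the initial data (a Heaviside step, so the gradient is a single spike at 0), the value of c, and the step sizes are made up for the example:

    import numpy as np

    rng = np.random.default_rng(42)

    def rgm_linear_step(walkers, c, dt, rng):
        """One step of the RGM for du/dt = (1/2)*Laplacian(u) + c*u.

        `walkers` holds the positions x_i of the delta spikes representing v = du/dx.
        """
        assert dt * abs(c) < 1.0, "need dt*|c| < 1"
        # Step 3: diffuse every walker with an N(0, dt) increment.
        walkers = walkers + rng.normal(0.0, np.sqrt(dt), size=walkers.size)
        xi = rng.uniform(0.0, 1.0, size=walkers.size)
        if c > 0:
            # Step 4: duplicate walkers with probability c*dt.
            walkers = np.concatenate([walkers, walkers[xi < dt * c]])
        elif c < 0:
            # Step 5: remove walkers with probability |c|*dt.
            walkers = walkers[xi >= -dt * c]
        return walkers

    def u_of_x(x, walkers, n0):
        """Step 6: u(x, t+dt) = (1/N) * #{i : x_i <= x}, N = initial walker count."""
        return np.count_nonzero(walkers <= x) / n0

    # Example: u_0 = Heaviside step at 0, so v(x, 0) = delta(x); c = 0.5, ten steps of dt = 0.01.
    n0 = 100_000
    walkers = np.zeros(n0)
    for _ in range(10):
        walkers = rgm_linear_step(walkers, c=0.5, dt=0.01, rng=rng)
    print(u_of_x(1.0, walkers, n0))   # Monte Carlo estimate of u(1.0, 0.1)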


MCM for PDEs

Probabilistic Representations of PDEs

Probabilistic Approaches of Reaction-Diffusion Equations

The RGM for Nonlinear Parabolic IVPs

I Consider the IVP for a nonlinear scalar reaction-diffusion equation:

  ∂u/∂t = (1/2) ∆u + c(u),   u(x, 0) = u_0(x)   (3.3a)

I The associated gradient equation, obtained by differentiating (3.3a) with respect to x and applying the chain rule to c(u), is:

  ∂v/∂t = (1/2) ∆v + c′(u) v,   v(x, 0) = ∂u_0(x)/∂x   (3.3b)

I The similarity of (3.3b) to (3.2b) makes it clear how to extend the RGM to these nonlinear scalar reaction-diffusion equations


MCM for PDEs

Probabilistic Representations of PDEs

Probabilistic Approaches of Reaction-Diffusion Equations

The RGM for Nonlinear Parabolic IVPs

I Summary of the algorithm for the RGM to advance the solution of (3.3a) one time step using ∆t:
  1. Represent v(x, t) = (1/N) ∑_{i=1}^{N} δ(x − x_i)
  2. Choose ∆t s.t. ∆t (sup_u |c′(u)|) < 1
  3. Move x_i to x_i + η_i where η_i is N(0, ∆t)
  4. At those x_i where c′(u) > 0, create new walkers if ξ_i < ∆t c′(u), with ξ_i ∼ U[0, 1]
  5. At those x_i where c′(u) < 0, destroy walkers if ξ_i < −∆t c′(u), with ξ_i ∼ U[0, 1]
  6. Over the remaining points, u(x, t + ∆t) = (1/N) ∑_{i : x_i ≤ x} 1

I This is the nonlinear scalar reaction-diffusion version of the RGM; the only change from the linear case is that the branching rate c′(u) is evaluated at the current local value of u (see the sketch below)
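A sketch of how the create/destroy step changes relative to the linear code above: the branching rate is now c′(u) evaluated at each walker's position, with u itself read off from the step-function representation. Again an illustration under my own assumptions, not code from the source; `c_prime` is a hypothetical user-supplied (vectorized) function:

    import numpy as np

    def rgm_nonlinear_branch(walkers, c_prime, dt, n0, rng):
        """Create/destroy step of the nonlinear RGM for du/dt = (1/2)*Laplacian(u) + c(u)."""
        # u(x_i) from the current spike representation: the rank of x_i among the
        # walkers, divided by the original walker count n0 (the step-function recovery).
        order = np.argsort(walkers)
        ranks = np.empty(walkers.size)
        ranks[order] = np.arange(1, walkers.size + 1)
        u_here = ranks / n0

        rate = c_prime(u_here)                            # local branching rate c'(u(x_i))
        xi = rng.uniform(0.0, 1.0, size=walkers.size)
        born = walkers[(rate > 0) & (xi < dt * rate)]     # create where c'(u) > 0
        keep = ~((rate < 0) & (xi < -dt * rate))          # destroy where c'(u) < 0
        return np.concatenate([walkers[keep], born])

The N(0, ∆t) move and the recovery of u(x, t + ∆t) are exactly as in the linear sketch; only the time-step restriction now uses sup_u |c′(u)|.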


MCM for PDEs

Probabilistic Representations of PDEs

Probabilistic Approaches of Reaction-Diffusion Equations

The RGM for Nonlinear Parabolic IVPs

I Differences/advantages of the RGM compared with conventional (finite-difference/finite-element) methods:
  1. It is computationally easy to sample from N(0, ∆t)
  2. The only costly operation each time step is to sort the remaining cohort of walkers by their position
  3. Can use ensemble averaging to reduce the variance of the solution
  4. Can choose to use either gradient "particles" of equal mass or allow the mass of each particle to change
  5. The RGM is adaptive: computational elements (pieces of the gradient) are created where c′(u) is greatest, which is where sharp fronts appear, so fewer total computational elements are needed to spatially resolve jump-like solutions


MCM for PDEs

Probabilistic Representations of PDEs

Probabilistic Approaches of Reaction-Diffusion Equations

The RGM in 2-Dimensions

I The RGM in 2D is the same as in 1D except for the recovery of u from the gradient

I Write u(x) = G(x, y) ∗ ∆u(y) = ∇_n G(x, y) ∗ ∇_n u(y) (integration by parts)

I If ∇_n u(y) = δ(y − y_0) then u(x) = ∇_n G(x, y_0) = −(1/2π) (x − y_0) · n / ‖x − y_0‖²

I Thus gradient recovery can be done via 2 n-body evaluations with charges n_1 and n_2, or with 1 Hilbert matrix application to the complex vector with n

I Can (and do) use the Rokhlin-Greengard fast multipole algorithm for gradient recovery (a direct-sum sketch is given below)

I The initial gradient distribution comes from a detailed contour plot
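A direct O(MN) evaluation of the recovery formula above, as a minimal sketch; the slides use the Rokhlin-Greengard fast multipole method in place of this double sum, and the array layout, the circle of spikes, and the uniform weights below are illustrative assumptions:

    import numpy as np

    def recover_u_2d(x_eval, spikes, normals, weights):
        """u(x) = -(1/(2*pi)) * sum_i w_i * (x - y_i) . n_i / |x - y_i|^2

        spikes  : (N, 2) positions y_i of the gradient 'spikes'
        normals : (N, 2) unit normals n_i carried by each spike
        weights : (N,)   spike amplitudes
        x_eval  : (M, 2) evaluation points
        """
        diff = x_eval[:, None, :] - spikes[None, :, :]        # (M, N, 2)
        r2 = np.einsum("mnk,mnk->mn", diff, diff)             # |x - y_i|^2
        dot = np.einsum("mnk,nk->mn", diff, normals)          # (x - y_i) . n_i
        return -(1.0 / (2.0 * np.pi)) * (weights * dot / r2).sum(axis=1)

    # Example with made-up data: 1000 spikes on the unit circle, normals pointing outward.
    rng = np.random.default_rng(1)
    theta = rng.uniform(0.0, 2.0 * np.pi, 1000)
    spikes = np.column_stack([np.cos(theta), np.sin(theta)])
    normals = spikes.copy()                     # outward unit normals on the unit circle
    weights = np.full(1000, 1.0 / 1000)
    print(recover_u_2d(np.array([[0.0, 0.0], [2.0, 0.0]]), spikes, normals, weights))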


MCM for PDEs

Probabilistic Representations of PDEs

Probabilistic Approaches of Reaction-Diffusion Equations

Another MCM for a Nonlinear Parabolic PDE from Fluid Dynamics

I A model equation for fluid dynamics is Burgers' equation in one dimension, as an IVP:

  ∂u/∂t + u ∂u/∂x = (ε/2) ∂²u/∂x²,   u(x, 0) = u_0(x)

I The substitution φ = e^{−(1/ε) ∫ u dx} ⇐⇒ u = −ε ∂(ln φ)/∂x = −ε (1/φ) ∂φ/∂x converts Burgers' equation to the heat equation (Hopf, 1950):

  ∂φ/∂t = (ε/2) ∂²φ/∂x²,   φ(x, 0) = e^{−(1/ε) ∫_0^x u_0(ξ) dξ}

I Using the Feynman-Kac formula for the IVP for the heat equation one gets that φ(x, t) = E_x[e^{−(1/ε) ∫_0^{√ε β(t)} u_0(ξ) dξ}], which determines u(x, t) via the above inversion formula (a sampling sketch is given below)
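A minimal sketch of how this representation might be sampled; it reflects my reading of the formula, with the Brownian path started at x so that the integral of u_0 runs to x + √ε β(t), and with u recovered from a centered difference of ln φ. The initial data u_0(ξ) = sin ξ (whose integral from 0 to y is 1 − cos y) and all parameter values are illustrative:

    import numpy as np

    EPS = 0.1   # viscosity parameter epsilon in Burgers' equation

    def u0_integral(y):
        # Closed form of int_0^y u_0(s) ds for the example initial data u_0(s) = sin(s).
        return 1.0 - np.cos(y)

    def phi_estimate(x, t, n_paths, rng):
        # phi(x, t) = E[ exp( -(1/eps) * int_0^{x + sqrt(eps)*beta(t)} u_0(s) ds ) ]
        endpoints = x + np.sqrt(EPS * t) * rng.standard_normal(n_paths)
        return np.mean(np.exp(-u0_integral(endpoints) / EPS))

    def burgers_u(x, t, n_paths=200_000, h=1e-2, seed=0):
        # u = -eps * d/dx ln(phi), approximated with a centered difference in x.
        rng = np.random.default_rng(seed)
        log_plus = np.log(phi_estimate(x + h, t, n_paths, rng))
        log_minus = np.log(phi_estimate(x - h, t, n_paths, rng))
        return -EPS * (log_plus - log_minus) / (2.0 * h)

    print(burgers_u(0.5, 0.25))   # Monte Carlo estimate of u(0.5, 0.25)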


MCM for PDEs

Probabilistic Representations of PDEs

Probabilistic Approaches of Reaction-Diffusion Equations

MCMs for the Schrödinger Equation (Brief)

I The Schrödinger equation is given by:

  −i (ℏ/2π) u_τ = ∆u(x) − V(x) u(x)

I Can replace −i (ℏ/2π) τ = t (imaginary time) to give a linear parabolic PDE

I Usually x ∈ R^{3n}, where there are n quantum particles, so we are in a very high-dimensional case

I As in the above, use walks, killing, and importance sampling

I Interesting variants:
  1. Diffusion Monte Carlo
  2. Green's Function Monte Carlo
  3. Path Integral Monte Carlo
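In imaginary time the equation becomes a diffusion equation with a position-dependent killing/branching rate set by V(x), so the same walker machinery as above applies. A heavily simplified sketch for a single particle, with all physical constants set to one and no importance sampling; production diffusion Monte Carlo codes also steer the population with a trial wavefunction and a reference energy. The harmonic test potential is a made-up example:

    import numpy as np

    def imaginary_time_step(walkers, potential, dt, rng):
        # Diffuse, then branch/kill with weight exp(-V(x) * dt).
        walkers = walkers + rng.normal(0.0, np.sqrt(dt), size=walkers.shape)
        weights = np.exp(-potential(walkers) * dt)
        # Stochastic rounding: each walker becomes int(w + U) copies, U ~ U[0, 1).
        copies = (weights + rng.uniform(size=weights.shape)).astype(int)
        return np.repeat(walkers, copies)

    rng = np.random.default_rng(3)
    walkers = rng.normal(0.0, 1.0, size=5_000)
    for _ in range(100):
        walkers = imaginary_time_step(walkers, lambda x: 0.5 * x**2, dt=0.01, rng=rng)
    print(walkers.size, walkers.std())   # surviving population and its spread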


MCM for PDEs

Probabilistic Representations of PDEs

Probabilistic Approaches of Reaction-Diffusion Equations

Another MCM for a Nonlinear Parabolic PDE from Fluid Dynamics

I Note that when L = (ε/2) ∆, the scaling x → √ε x converts L into the pure Laplacian

I Thus we can sample from L with √ε β(·) as scaled sample paths instead of ordinary Brownian motion; this is Brownian scaling

I Unlike the reaction-diffusion problems solved by the RGM, this equation is a (viscously perturbed) hyperbolic conservation law: u_t + (u²/2)_x = (ε/2) u_xx; these equations often have jumps that sharpen into discontinuous shocks as ε → 0

I The MCM for Burgers' equation derived above was only possible because the Hopf-Cole transformation could be used to convert this problem to the heat equation


MCM for PDEs

Probabilistic Representations of PDEs

Monte Carlo Methods for PDEs from Fluid Mechanics

Two-Dimensional Incompressible Fluid Dynamics

I The equations of two-dimensional incompressible fluid dynamics are:

  ∂u/∂t + (u · ∇)u = −(1/ρ)∇p + γ∆u,   ∇ · u = 0,   u = (u, v)^T   (4.1a)

  1. Inviscid fluid: γ = 0 =⇒ Euler equations
  2. Viscous fluid: γ ≠ 0 =⇒ Navier-Stokes equations

I Since ∇ · u = 0 there is a stream function ψ s.t. u = (−∂ψ/∂y, ∂ψ/∂x)^T; with the vorticity ξ = ∇× u we can rewrite (4.1a) as:

  ∂ξ/∂t + (u · ∇)ξ = γ∆ξ,   ∆ψ = −ξ   (4.1b)
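A brief note on the step from (4.1a) to (4.1b), which the slides state without derivation (this is the standard argument, added here for completeness): taking the curl of the momentum equation removes the pressure term, since the curl of a gradient vanishes, ∇ × (∇p) = 0; in two dimensions the vorticity ξ = ∇ × u has a single scalar component, so what remains is the scalar advection-diffusion equation for ξ, and the Poisson equation coupling ψ and ξ is what recovers the velocity field from the vorticity.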


MCM for PDEs

Probabilistic Representations of PDEs

Monte Carlo Methods for PDEs from Fluid Mechanics

Two-Dimensional Incompressible Fluid Dynamics

I Recall the material (or total) derivative of z: DzDt = ∂z

∂t + (u · ∇)z,this is the time rate of change in a quantity that is being advectedin a fluid with velocity u

I We can rewrite the vorticity above formulation as:

DξDt

= γ∆ξ, ∆ψ = −ξ (4.1c)

I If we neglect boundary conditions, can representξ =

∑Nn=1 ξiδ(x− xi ) (spike discretization of a gradient), since in

2D the fundamental solution of the Poisson equation is∆−1δ(x− xi ) = − 1

2π log |x− xi | we haveψ = 1

∑Nn=1 ξi log |x− xi |

I Can prove that if ξ is a sum of delta functions at some time t0, itremains so for all time t ≥ t0

Page 127: Monte Carlo Methods for Partial Differential Equationsmascagni/mcpdenew.pdf · Monte Carlo Methods for Partial Differential Equations ... Introduction Early History of ... where L

MCM for PDEs

Probabilistic Representations of PDEs

Monte Carlo Methods for PDEs from Fluid Mechanics

Two-Dimensional Incompressible Fluid Dynamics

I Recall the material (or total) derivative of z: DzDt = ∂z

∂t + (u · ∇)z,this is the time rate of change in a quantity that is being advectedin a fluid with velocity u

I We can rewrite the vorticity above formulation as:

DξDt

= γ∆ξ, ∆ψ = −ξ (4.1c)

I If we neglect boundary conditions, can representξ =

∑Nn=1 ξiδ(x− xi ) (spike discretization of a gradient), since in

2D the fundamental solution of the Poisson equation is∆−1δ(x− xi ) = − 1

2π log |x− xi | we haveψ = 1

∑Nn=1 ξi log |x− xi |

I Can prove that if ξ is a sum of delta functions at some time t0, itremains so for all time t ≥ t0

Page 128: Monte Carlo Methods for Partial Differential Equationsmascagni/mcpdenew.pdf · Monte Carlo Methods for Partial Differential Equations ... Introduction Early History of ... where L

MCM for PDEs

Probabilistic Representations of PDEs

Monte Carlo Methods for PDEs from Fluid Mechanics

Two-Dimensional Incompressible Fluid Dynamics

I Recall the material (or total) derivative of z: DzDt = ∂z

∂t + (u · ∇)z,this is the time rate of change in a quantity that is being advectedin a fluid with velocity u

I We can rewrite the vorticity above formulation as:

DξDt

= γ∆ξ, ∆ψ = −ξ (4.1c)

I If we neglect boundary conditions, can representξ =

∑Nn=1 ξiδ(x− xi ) (spike discretization of a gradient), since in

2D the fundamental solution of the Poisson equation is∆−1δ(x− xi ) = − 1

2π log |x− xi | we haveψ = 1

∑Nn=1 ξi log |x− xi |

I Can prove that if ξ is a sum of delta functions at some time t0, itremains so for all time t ≥ t0

Page 129: Monte Carlo Methods for Partial Differential Equationsmascagni/mcpdenew.pdf · Monte Carlo Methods for Partial Differential Equations ... Introduction Early History of ... where L

MCM for PDEs

Probabilistic Representations of PDEs

Monte Carlo Methods for PDEs from Fluid Mechanics

Two-Dimensional Incompressible Fluid Dynamics

I Recall the material (or total) derivative of z: DzDt = ∂z

∂t + (u · ∇)z,this is the time rate of change in a quantity that is being advectedin a fluid with velocity u

I We can rewrite the vorticity above formulation as:

DξDt

= γ∆ξ, ∆ψ = −ξ (4.1c)

I If we neglect boundary conditions, can representξ =

∑Nn=1 ξiδ(x− xi ) (spike discretization of a gradient), since in

2D the fundamental solution of the Poisson equation is∆−1δ(x− xi ) = − 1

2π log |x− xi | we haveψ = 1

∑Nn=1 ξi log |x− xi |

I Can prove that if ξ is a sum of delta functions at some time t0, itremains so for all time t ≥ t0

Page 130: Monte Carlo Methods for Partial Differential Equationsmascagni/mcpdenew.pdf · Monte Carlo Methods for Partial Differential Equations ... Introduction Early History of ... where L

MCM for PDEs

Probabilistic Representations of PDEs

Monte Carlo Methods for PDEs from Fluid Mechanics

The Vortex Method for the Incompressible Euler’sEquation

I These observations on the vortex form of the Euler equationshelp extend ideas of a method first proposed by Chorin (Chorin,1971):

1. Represent ξ(x, t) =∑N

i=1 ξiδ(x− xi ), and soψ = 1

∑Nn=1 ξi log |x− xi |

2. Move each of the vortex “blobs” via the ODEs for the x and ycomponents of ξi :

dxi

dt= − 1

∑j 6=i

ξj∂ log |xi − xj |

∂y, i = 1, . . . ,N

dyi

dt=

12π

∑j 6=i

ξj∂ log |xi − xj |

∂x, i = 1, . . . ,N

I Note, no time step size constraint is a priori imposed butdepends on the choice of numerical ODE scheme


▶ This is not a MCM, but only a method for converting the 2D Euler equations (PDEs) into a system of ODEs that is mathematically equivalent to the N-body problem (gravitation, particle dynamics)
▶ It is common to use other functional forms for the vortex “blobs,” and hence the stream function is changed; in our case the stream “blob” function is ψ(x) = ∑_{i=1}^N ξ_i ψ⁰(x − x_i) where ψ⁰(x − x_i) = (1/2π) log |x − x_i|
▶ Other choices of stream “blob” functions are ψ⁰(x) s.t. ψ⁰(x) ∼ (1/2π) log |x| for |x| ≫ 0, and ψ⁰(x) → 0 as |x| → 0


Chorin's Random Vortex Method for the Incompressible Navier-Stokes Equations

▶ When γ ≠ 0, Euler =⇒ Navier-Stokes, and (4.1b) can be thought of as an equation where vorticity is advected by the fluid (the l.h.s. material derivative) and diffuses due to viscosity (the r.h.s. term)
▶ As above, diffusion can be sampled by moving diffusing particles via the Gaussian (the fundamental soln. of the diffusion eqn.)
▶ Thus we can extend the inviscid vortex method to Navier-Stokes through a fractional step approach: do the vorticity advection via the inviscid vortex method, then treat the diffusion of vorticity by moving the vortices randomly
▶ This addition of the diffusion of vorticity makes Chorin's method, the random vortex method (RVM), a MCM
▶ Note that the RGM is also fractional step, but both of its steps are MCMs


Chorin's RVM for the Incompressible Navier-Stokes Equations

▶ Chorin's RVM for 2D Navier-Stokes (a sketch of one time step follows below):
  1. Using stream function “blobs,” represent ψ(x) = ∑_{i=1}^N ξ_i ψ⁰(x − x_i)
  2. Advect each of the vortex “blobs” ∆t forward in time via the ODEs for the x and y components of the position of vortex i, using your favorite (stable) ODE method:

       dx_i/dt = −∑_{j≠i} ξ_j ∂ψ⁰(x_i − x_j)/∂y,   i = 1, ..., N
       dy_i/dt =  ∑_{j≠i} ξ_j ∂ψ⁰(x_i − x_j)/∂x,   i = 1, ..., N

  3. Update each x_i(t + ∆t) → x_i(t + ∆t) + η_i, where η_i is N(0, 2∆tγ)
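A minimal sketch of one fractional time step of the RVM above, under the assumption that the singular kernel ψ⁰ = (1/2π) log |x| is used and a forward-Euler advection step suffices (a production code would use a mollified blob kernel and a better integrator): advect the blobs, then add independent N(0, 2∆tγ) displacements. All parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def induced_velocities(pos, strength):
        """Velocity of each vortex blob from all the others (singular log kernel)."""
        n = len(strength)
        vel = np.zeros_like(pos)
        for i in range(n):
            d = pos[i] - pos                  # displacements x_i - x_j
            r2 = np.sum(d * d, axis=1)
            r2[i] = np.inf                    # exclude self-interaction
            vel[i, 0] = np.sum(-strength * d[:, 1] / (2.0 * np.pi * r2))
            vel[i, 1] = np.sum(strength * d[:, 0] / (2.0 * np.pi * r2))
        return vel

    def rvm_step(pos, strength, dt, gamma):
        """One RVM step: advect (step 2), then diffuse (step 3)."""
        pos = pos + dt * induced_velocities(pos, strength)
        pos = pos + rng.normal(0.0, np.sqrt(2.0 * gamma * dt), size=pos.shape)
        return pos

    # Illustrative run: 100 equal-strength blobs, a few steps
    pos = rng.normal(size=(100, 2))
    strength = np.full(100, 1.0 / 100)
    for _ in range(10):
        pos = rvm_step(pos, strength, dt=0.05, gamma=0.01)
    print(pos.mean(axis=0))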


▶ A shortcoming of the method is irrotational flows, i.e. ∇ × u = 0, i.e. ξ = 0
  1. Irrotational flow implies ∃φ s.t. u = ∇φ, i.e. u is a potential flow
  2. 2D potential flows reduce to solving ∇ · u = ∆φ = 0 with boundary conditions
  3. Can use this to enforce boundary conditions, as one can add a potential flow that corrects for the rotational flow at the boundaries
▶ As with the RGM, the RVM is adaptive in that regions of high vorticity (rotation) get a high density of vortex “blobs”


Probabilistic Representations for Other PDEs

MCMs for Other PDEs

▶ We have constructed MCMs for both elliptic and parabolic PDEs, but have not considered MCMs for hyperbolic PDEs except for Burgers' equation (which was a very special case)
▶ In general, MCMs for hyperbolic PDEs (like the wave equation: u_tt = c²u_xx) are hard to derive, as Brownian motion is fundamentally related to diffusion (parabolic PDEs) and to the equilibrium of diffusion processes (elliptic PDEs); in contrast, hyperbolic problems model distortion-free information propagation, which is fundamentally nonrandom
▶ A famous special case is a hyperbolic MCM for the telegrapher's equation (Kac, 1956):

    (1/c²) ∂²F/∂t² + (2a/c²) ∂F/∂t = ∆F,
    F(x, 0) = φ(x),   ∂F(x, 0)/∂t = 0


▶ The telegrapher's equation approaches both the wave and heat equations in different limiting cases:
  1. Wave equation: a → 0
  2. Heat equation: a, c → ∞ with 2a/c² → 1/D
▶ Consider the one-dimensional telegrapher's equation; when a = 0, we know the solution is given by F(x, t) = [φ(x + ct) + φ(x − ct)]/2
▶ If we think of a as the probability per unit time of a Poisson process, then N(t) = # of events occurring up to time t has the distribution P{N(t) = k} = e^{−at}(at)^k/k!
▶ If a particle moves with velocity c for time t, it travels ct = ∫₀ᵗ c dτ; if it undergoes random Poisson-distributed direction reversals with probability a per unit time, the distance traveled in time t is ∫₀ᵗ c(−1)^{N(τ)} dτ (a sampling sketch follows below)
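A minimal sketch (not from the slides) of sampling the randomized distance D(t) = ∫₀ᵗ c(−1)^{N(τ)} dτ: the reversal times are the events of a rate-a Poisson process, so the interarrival times are exponential with mean 1/a, and between events the particle covers ±c times the elapsed time.

    import numpy as np

    rng = np.random.default_rng(1)

    def randomized_distance(t, c, a):
        """Sample D(t) = int_0^t c*(-1)^N(tau) dtau for a rate-a Poisson process N."""
        dist, elapsed, sign = 0.0, 0.0, 1.0
        while True:
            gap = rng.exponential(1.0 / a)        # time to the next direction reversal
            if elapsed + gap >= t:
                dist += sign * c * (t - elapsed)  # last, partial segment up to time t
                return dist
            dist += sign * c * gap
            elapsed += gap
            sign = -sign

    # Illustrative check: for small a the distance should be close to c*t
    print(randomized_distance(t=1.0, c=2.0, a=0.1))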


▶ If we replace ct in the exact solution to the 1D wave equation by the randomized distance traveled, averaged over all Poisson-reversing paths, we get:

    F(x, t) = ½ E[φ(x + ∫₀ᵗ c(−1)^{N(τ)} dτ)] + ½ E[φ(x − ∫₀ᵗ c(−1)^{N(τ)} dτ)],

  which can be proven to solve the above IVP for the telegrapher's equation (an estimator sketch follows below)
▶ In any dimension, an exact solution of the wave equation can be converted into a solution of the telegrapher's equation by replacing t in the wave-equation ansatz by the randomized time ∫₀ᵗ (−1)^{N(τ)} dτ and averaging
▶ This is the basis of a MCM for the telegrapher's equation; one can also construct MCMs for finite-difference approximations to the telegrapher's equation
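A minimal sketch of the resulting estimator for the 1D telegrapher's equation, assuming the Poisson-reversal sampler of the previous sketch and a made-up initial condition φ; it simply averages φ(x + D(t)) and φ(x − D(t)) over independent paths.

    import numpy as np

    rng = np.random.default_rng(2)

    def randomized_distance(t, c, a):
        """Distance int_0^t c*(-1)^N(tau) dtau with rate-a Poisson reversals (as above)."""
        dist, elapsed, sign = 0.0, 0.0, 1.0
        while elapsed < t:
            gap = rng.exponential(1.0 / a)
            step = min(gap, t - elapsed)
            dist += sign * c * step
            elapsed += gap
            sign = -sign
        return dist

    def telegraph_solution(x, t, c, a, phi, n_samples=20000):
        """MC estimate of F(x,t) = 0.5*E[phi(x + D(t))] + 0.5*E[phi(x - D(t))]."""
        total = 0.0
        for _ in range(n_samples):
            d = randomized_distance(t, c, a)
            total += 0.5 * (phi(x + d) + phi(x - d))
        return total / n_samples

    # Illustrative initial data phi(x) = exp(-x^2)
    phi = lambda x: np.exp(-x * x)
    print(telegraph_solution(x=0.3, t=1.0, c=1.0, a=2.0, phi=phi))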


Nonlinear Equations and the Feynman-Kac Formula

▶ Recall that in the Feynman-Kac formula the operator L enters through the SDE which generates the sample paths over which expected values are taken
▶ If one replaces the sample paths coming from solutions of linear SDEs with paths derived from branching processes, one can sample certain nonlinear parabolic PDEs directly (McKean, 1988)
▶ Recall that the solution to the IVP for the heat equation is represented via Feynman-Kac as: u(x, t) = E_x[u₀(β(t))]


Consider normal Brownian motion with exponentially distributed branching with unit branching probability per unit time; then the Feynman-Kac representation above, with expectations taken over this branching process instead of normal Brownian motion, solves the IVP for the Kolmogorov-Petrovskii-Piskunov (KPP) equation:

    ∂u/∂t = ½ ∆u + (u² − u),   u(x, 0) = u₀(x),

which has the solution

    u(x, t) = E_x[ ∏_{i=1}^{N(t)} u₀(x_i(t)) ],

where the branching Brownian motion started at x at t = 0 leads to N(t) particles at time t with locations x_i(t), i = 1, ..., N(t) (a simulation sketch follows below)
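A minimal sketch, assuming binary branching at unit rate and an illustrative initial condition u₀ with values in [0, 1], of estimating u(x, t) by averaging ∏_i u₀(x_i(t)) over independent branching Brownian trees; the generator ½∆ corresponds to standard Brownian increments N(0, s) over a time interval s.

    import numpy as np

    rng = np.random.default_rng(3)

    def kpp_estimate(x, t, u0, n_trees=2000):
        """MC estimate of u(x,t) = E_x[prod_i u0(x_i(t))] via branching Brownian motion."""
        total = 0.0
        for _ in range(n_trees):
            stack = [(x, t)]        # each live particle is (position, remaining time)
            prod = 1.0
            while stack:
                pos, rem = stack.pop()
                tau = rng.exponential(1.0)              # time to the next branching
                if tau >= rem:                          # no branching before time t
                    prod *= u0(pos + rng.normal(0.0, np.sqrt(rem)))
                else:                                   # branch into two particles
                    pos = pos + rng.normal(0.0, np.sqrt(tau))
                    stack.append((pos, rem - tau))
                    stack.append((pos, rem - tau))
            total += prod
        return total / n_trees

    # Illustrative initial data with values in [0, 1]
    u0 = lambda y: 1.0 / (1.0 + np.exp(-y))
    print(kpp_estimate(x=0.0, t=0.5, u0=u0))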


▶ If instead of binary branching at exponentially distributed branching times there are n offspring with probability p_n, with ∑_{n=2}^∞ p_n = 1, then the branching Feynman-Kac formula solves the IVP for:

    ∂u/∂t = ½ ∆u + (∑_{n=2}^∞ p_n uⁿ − u)

▶ Can also have p_n < 0 with ∑_{n=2}^∞ |p_n| = 1, with the same result, except that one must have two representations, for the normal and the “negative” walkers


▶ If λ = the probability per unit time of branching, then we can solve the IVP for:

    ∂u/∂t = ½ ∆u + λ (∑_{n=2}^∞ p_n uⁿ − u)

▶ If we have k(x, t) as an inhomogeneous branching probability per unit time, then we solve the IVP for:

    ∂u/∂t = ½ ∆u + ∫₀ᵗ k(x, τ) dτ (∑_{n=2}^∞ p_n uⁿ − u)


Monte Carlo Methods and Linear Algebra

▶ Monte Carlo has been, and continues to be, used in linear algebra
▶ Consider the linear system x = Hx + b; if ||H|| < 1, then the following iterative method converges (a numerical sketch follows below):

    x_{n+1} := H x_n + b,   x_0 = 0,

  and in particular we have x_k = ∑_{i=0}^{k−1} H^i b; similarly the Neumann series converges:

    N = ∑_{i=0}^∞ H^i = (I − H)⁻¹,   ||N|| ≤ ∑_{i=0}^∞ ||H^i|| ≤ ∑_{i=0}^∞ ||H||^i = 1/(1 − ||H||)

▶ Formally, the solution is x = (I − H)⁻¹ b
▶ Note: ||H|| can be defined in many ways, e.g.:
  1. ||H|| = max_i (∑_j |h_ij|)
  2. ||H|| = max_i |λ_i(H)| = ρ(H) (the spectral radius)
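A minimal sketch (with an illustrative 3×3 matrix whose row-sum norm is below 1, not from the slides) of the fixed-point iteration x_{n+1} = H x_n + b, checked against a direct solve of (I − H)x = b.

    import numpy as np

    def neumann_iteration(H, b, n_iter=200):
        """Iterate x_{k+1} = H x_k + b starting from x_0 = 0 (converges when ||H|| < 1)."""
        x = np.zeros_like(b)
        for _ in range(n_iter):
            x = H @ x + b
        return x

    # Illustrative system with row-sum norm ||H|| < 1
    H = np.array([[0.1, 0.2, 0.0],
                  [0.0, 0.3, 0.1],
                  [0.2, 0.0, 0.2]])
    b = np.array([1.0, 2.0, 3.0])
    x_iter = neumann_iteration(H, b)
    x_direct = np.linalg.solve(np.eye(3) - H, b)
    print(np.max(np.abs(x_iter - x_direct)))   # should be near machine precision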


▶ Recall: Monte Carlo can be used to form a sum S = ∑_{i=1}^M a_i as follows (a sketch follows below):
  1. Define p_i ≥ 0 as the probability of choosing index i, with ∑_{i=1}^M p_i = 1, and p_i > 0 whenever a_i ≠ 0
  2. Then a_i/p_i, with the index i chosen with probability p_i, is an unbiased estimate of S, as E[a_i/p_i] = ∑_{i=1}^M (a_i/p_i) p_i = S
  3. Var[a_i/p_i] ∝ ∑_{i=1}^M (a_i²/p_i), so the optimal choice is p_i ∝ |a_i|
▶ Given ||H|| < 1, solving x = Hx + b via the Neumann series requires successive matrix-vector multiplications
▶ We can use the above sampling of a sum to construct a sample of the Neumann series


The Ulam-von Neumann Method

▶ We first construct a Markov chain based on H and b to sample the ℓ-fold matrix products as follows
▶ Define the transition probability matrix P:
  1. p_ij ≥ 0, and p_ij > 0 when h_ij ≠ 0
  2. Define p_i = 1 − ∑_j p_ij
▶ Now define a Markov chain on the states {0, 1, ..., n} with transition probabilities p_ij and termination probability p_i from state i
▶ Also define

    v_ij = h_ij/p_ij if h_ij ≠ 0, and v_ij = 0 otherwise


▶ Given the desire to sample x_i, we create the following estimator (see the sketch below) based on:
  1. Generating a Markov chain γ = (i_0, i_1, ..., i_k) using p_ij and p_i, where state i_k is the penultimate state before absorption
  2. Forming the estimator

       X(γ) = V_k(γ) b_{i_k}/p_{i_k},   where V_m(γ) = ∏_{j=1}^m v_{i_{j−1} i_j},   m ≤ k
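A minimal sketch of the Ulam-von Neumann estimator above, under the simple (and not necessarily optimal) assumption that p_ij is taken proportional to |h_ij| and scaled so every state keeps a positive termination probability; the test matrix and all parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)

    def ulam_von_neumann(H, b, i0, n_walks=20000, scale=0.9):
        """MC estimate of x_{i0} solving x = Hx + b (sketch, assumes ||H|| < 1)."""
        n = H.shape[0]
        row = np.abs(H).sum(axis=1)
        P = np.zeros_like(H)
        nz = row > 0
        P[nz] = scale * np.abs(H[nz]) / row[nz, None]        # p_ij proportional to |h_ij|
        p_term = 1.0 - P.sum(axis=1)                         # termination probability p_i
        V = np.divide(H, P, out=np.zeros_like(H), where=P > 0)   # weights v_ij = h_ij/p_ij

        total = 0.0
        for _ in range(n_walks):
            i, weight = i0, 1.0
            while True:
                j = rng.choice(n + 1, p=np.append(P[i], p_term[i]))
                if j == n:                                   # absorbed: score V_k * b_{i_k}/p_{i_k}
                    total += weight * b[i] / p_term[i]
                    break
                weight *= V[i, j]
                i = j
        return total / n_walks

    H = np.array([[0.1, 0.2, 0.0],
                  [0.0, 0.3, 0.1],
                  [0.2, 0.0, 0.2]])
    b = np.array([1.0, 2.0, 3.0])
    print(ulam_von_neumann(H, b, i0=0), np.linalg.solve(np.eye(3) - H, b)[0])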


▶ We have

    E[X(γ) | i_0 = i] = ∑_{k=0}^∞ ∑_{i_1} ⋯ ∑_{i_k} p_{i i_1} ⋯ p_{i_{k−1} i_k} p_{i_k} · v_{i i_1} ⋯ v_{i_{k−1} i_k} (b_{i_k}/p_{i_k})
                      = ∑_{k=0}^∞ ∑_{i_1} ⋯ ∑_{i_k} (p_{i i_1} v_{i i_1}) ⋯ (p_{i_{k−1} i_k} v_{i_{k−1} i_k}) b_{i_k}
                      = ∑_{k=0}^∞ ∑_{i_1} ⋯ ∑_{i_k} h_{i i_1} ⋯ h_{i_{k−1} i_k} b_{i_k}
                      = the i-th component of ∑_{k=0}^∞ H^k b


Elaborations on Monte Carlo for Linear Algebra

▶ Backward walks (adjoint sampling)
▶ The lower-variance Wasow estimator (a sketch follows below):

    X*(γ) = ∑_{m=0}^k V_m(γ) b_{i_m}

▶ Again, with only matrix-vector multiplication one can do the power method and shifted variants for sampling eigenvalues
▶ Newer, more rapidly convergent methods are based on randomly sampling smaller problems
▶ All of these have integral equation and Feynman-Kac analogs
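For comparison with the terminal estimator sketched earlier, a minimal sketch of the Wasow estimator, which scores V_m(γ) b_{i_m} at every state the walk visits rather than only at absorption; the same illustrative choice p_ij ∝ |h_ij| is assumed.

    import numpy as np

    rng = np.random.default_rng(6)

    def wasow_estimate(H, b, i0, n_walks=20000, scale=0.9):
        """MC estimate of x_{i0} via the Wasow estimator X*(gamma) = sum_m V_m(gamma) b_{i_m}."""
        n = H.shape[0]
        row = np.abs(H).sum(axis=1)
        P = np.zeros_like(H)
        nz = row > 0
        P[nz] = scale * np.abs(H[nz]) / row[nz, None]
        p_term = 1.0 - P.sum(axis=1)
        V = np.divide(H, P, out=np.zeros_like(H), where=P > 0)

        total = 0.0
        for _ in range(n_walks):
            i, weight, score = i0, 1.0, b[i0]        # m = 0 term: V_0 = 1
            while True:
                j = rng.choice(n + 1, p=np.append(P[i], p_term[i]))
                if j == n:                            # absorbed: no further scoring needed
                    break
                weight *= V[i, j]
                score += weight * b[j]                # accumulate V_m(gamma) * b_{i_m}
                i = j
            total += score
        return total / n_walks

    H = np.array([[0.1, 0.2, 0.0],
                  [0.0, 0.3, 0.1],
                  [0.2, 0.0, 0.2]])
    b = np.array([1.0, 2.0, 3.0])
    print(wasow_estimate(H, b, i0=0), np.linalg.solve(np.eye(3) - H, b)[0])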


Parallel Computing Overview

▶ Now that we know all these MCMs, we must discuss how to implement these methods on parallel (and vector) computers
▶ There are two major classes of parallel computer being commercially produced
  1. Single Instruction Multiple Data (SIMD) machines:
     – There is only one instruction stream, so the same instruction is broadcast to all processors
     – Usually these machines have many simple processors, often bit serial (fine grained)
     – Usually these machines have distributed memory
     – The Connection Machine and vector-processing units are “classical” examples
     – Modern examples include GPGPUs

2. Multiple Instruction Multiple Data (MIMD) machines:
   I These machines are a collection of conventional computers with an interconnection network
   I Each processor can run its own program
   I Processors are usually large grained
   I Can have shared or distributed memory
   I Shared memory limits the number of processors through bus technology
   I Distributed memory can be implemented with many different interconnection topologies
   I Modern examples:
      2.1 Multicore machines
      2.2 Clustered machines
      2.3 Shared-memory machines

General Principles for Constructing Parallel Algorithms

I Use widely replicated aspects of a given problem as the basis for parallelism

1. In MCMs one can map each independent statistical sample to an independent process with essentially no interprocessor communication (see the sketch after this list)
2. With spatial systems, spatial subdomains (domain decomposition) can be used, but implicit schemes and global conditions (elliptic BVPs) will require considerable communication
3. In some cases solutions over time are desired, as in ODE problems, and the iterative solution of the entire time trajectory (continuation methods) can be distributed by time point
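
A minimal sketch of item 1 (Python; the estimator inside one_batch is only a placeholder integrand): independent samples are farmed out to processes with independent random streams, and the only communication is the final reduction.

```python
import numpy as np
from multiprocessing import Pool

def one_batch(args):
    """Run one batch of independent samples with its own RNG stream."""
    seed, n_samples = args
    rng = np.random.default_rng(seed)        # independent stream per process
    # Placeholder estimator: E[exp(-U)] for U ~ U(0, 1); in practice this
    # would be one batch of walks/samples of the Monte Carlo method at hand.
    u = rng.random(n_samples)
    return np.exp(-u).mean()

if __name__ == "__main__":
    n_procs, n_per_proc = 4, 250_000
    seeds = np.random.SeedSequence(2024).spawn(n_procs)   # independent streams
    with Pool(n_procs) as pool:
        partial_means = pool.map(one_batch, [(s, n_per_proc) for s in seeds])
    # The only interprocess communication is this final reduction
    print("estimate:", np.mean(partial_means), " exact:", 1 - np.exp(-1.0))
```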

General Approaches to the Construction of MCMs for Elliptic BVPs

I As a simple example, consider the Dirichlet problem for the Laplace equation (1)
I The Wiener integral representation for the solution to (1) is u(x) = E_x[g(β(τ_∂Ω))]
I General approaches are to (i) use "exact" samples of the Wiener integral, or (ii) sample from a discretization of the Wiener integral
I For (i), one can sample with spherical processes: use the MVP and spheres to walk until within ε of the boundary; other elliptic L's lead to uniformity on ellipses instead of spheres (a sketch of such a walk follows)
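
A minimal walk-on-spheres sketch (Python; the signed-distance description of Ω and all names are illustrative): by the mean value property each jump lands uniformly on the largest sphere about the current point, and the walk scores g at the first point within ε of ∂Ω.

```python
import numpy as np

def walk_on_spheres(x0, dist_to_boundary, g, eps=1e-4, rng=None):
    """One walk-on-spheres sample of u(x0) for Laplace's equation with
    Dirichlet data g; dist_to_boundary(x) is the distance from x to the boundary."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    d = dist_to_boundary(x)
    while d > eps:
        v = rng.normal(size=x.size)        # uniform direction on the unit sphere
        v /= np.linalg.norm(v)
        x = x + d * v                      # jump to the sphere of radius d (MVP)
        d = dist_to_boundary(x)
    return g(x)                            # score the boundary data at the exit point

if __name__ == "__main__":
    # Example: unit ball, boundary data g(x) = x[0]; the harmonic extension is u(x) = x[0]
    dist = lambda x: 1.0 - np.linalg.norm(x)
    g = lambda x: x[0]
    rng = np.random.default_rng(1)
    samples = [walk_on_spheres([0.3, 0.2, 0.1], dist, g, rng=rng) for _ in range(20000)]
    print(np.mean(samples), "should be close to", 0.3)
```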


I The random Fourier series representation of Brownian motion can be used to generate walks via a truncated series; in some cases this gives an exact random series solution which is then sampled, but for complicated Ω this approach is not practical (ii) (a small sketch of such a truncated series appears after this list)
I One can use a high-dimensional integral approximation of the infinite-dimensional (Wiener) integral; the finite-dimensional integral is then evaluated analytically or via MCMs (ii)
I One can spatially discretize the region and sample from the discrete Wiener integral (ii)
I This last is a very fruitful approach, especially w.r.t. parallel computers
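
A small sketch of the truncated random Fourier series idea from the first bullet (Python; purely illustrative), using the standard expansion W(t) = ξ_0 t + √2 Σ_{k≥1} ξ_k sin(kπt)/(kπ) of Brownian motion on [0, 1] with i.i.d. standard normal ξ_k.

```python
import numpy as np

def brownian_paths_fourier(n_paths, K, t, rng=None):
    """Approximate Brownian paths on [0, 1] at times t from a truncated
    random Fourier series with K sine modes plus the linear term."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.asarray(t)
    xi0 = rng.standard_normal((n_paths, 1))
    xik = rng.standard_normal((n_paths, K))
    k = np.arange(1, K + 1)
    # basis functions sqrt(2) * sin(k*pi*t) / (k*pi), shape (K, len(t))
    basis = np.sqrt(2.0) * np.sin(np.outer(k, np.pi * t)) / (k[:, None] * np.pi)
    return xi0 * t + xik @ basis               # shape (n_paths, len(t))

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 101)
    W = brownian_paths_fourier(n_paths=5000, K=200, t=t, rng=np.random.default_rng(7))
    # The sample variance of W(t) should be close to t, up to truncation error in K
    print(W.var(axis=0)[[25, 50, 100]], "vs", t[[25, 50, 100]])
```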

Discrete Wiener Integrals

I All of the theory for continuous sample path Wiener integrals mentioned above carries over to the discrete case

1. One can replace the continuous region Ω with a discretization Ωh, where h is the characteristic discretization parameter
2. Replace β(·) with random walks βh(·) on Ωh; this requires a transition probability matrix for the walks on the grid, [P]ij = pij
3. E.g. the discrete Wiener integral solution to equation (1) is u_h(x) = E^h_x[g(β_h(τ_∂Ωh))] (see the sketch after this list)
4. In this case, if one has elliptic regularity of u(x) and a nonsingular discretization Ωh, then u_h(x) = u(x) + O(h²)
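
A minimal sketch of item 3 (Python; the unit-square lattice and the test data are hypothetical examples): for the discrete Laplace equation the walk moves to each of the four grid neighbors with probability 1/4 and scores the Dirichlet data at its exit node.

```python
import numpy as np

def grid_walk_estimate(ix, iy, n_grid, g, n_walks=20000, rng=None):
    """Estimate u_h at interior node (ix, iy) of an n_grid x n_grid lattice over
    the unit square, for the 5-point discrete Laplacian with Dirichlet data g."""
    rng = np.random.default_rng() if rng is None else rng
    h = 1.0 / (n_grid - 1)
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    total = 0.0
    for _ in range(n_walks):
        x, y = ix, iy
        while 0 < x < n_grid - 1 and 0 < y < n_grid - 1:
            dx, dy = steps[rng.integers(4)]    # p_ij = 1/4 to each neighbor
            x, y = x + dx, y + dy
        total += g(x * h, y * h)               # score the boundary value at the exit node
    return total / n_walks

if __name__ == "__main__":
    # Harmonic test function u(x, y) = x^2 - y^2; the boundary data is its trace
    g = lambda x, y: x * x - y * y
    est = grid_walk_estimate(ix=8, iy=4, n_grid=17, g=g, rng=np.random.default_rng(3))
    print(est, "should be close to", (8 / 16) ** 2 - (4 / 16) ** 2)
```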

Parallel N-body Potential Evaluation

I N-body potential problems are common in biochemistry, stellar dynamics, fluid dynamics, materials simulation, the boundary element method for elliptic PDEs, etc.
I The solution of N-body potential problems requires evaluation of functions of the form $\Phi(x) = \sum_{i=1}^{N} \phi(x - x_i)$ for all values x = x_j, j = 1, . . . , N
I One heuristic solution is to replace φ(x) with a cutoff version, φ_co(x) = φ(x) for |x| ≤ r and φ_co(x) = 0 otherwise; this reduces the problem to keeping track only of points within an r-neighborhood
I One can use the (x_i, x_j) interactions as the basis for parallelism and use N² processors to calculate the N(N − 1)/2 terms in parallel; initialization of the coordinates and accumulation of the results requires O(log₂ N) operations (a sketch of the direct and cutoff evaluations follows)
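
A small Python sketch (illustrative names, generic pair potential) of the direct O(N²) evaluation and the cutoff variant described above.

```python
import numpy as np

def direct_potential(points, phi):
    """Direct O(N^2) evaluation: Phi(x_j) = sum_{i != j} phi(x_j - x_i)."""
    n = len(points)
    out = np.zeros(n)
    for j in range(n):
        for i in range(n):
            if i != j:
                out[j] += phi(points[j] - points[i])
    return out

def cutoff_potential(points, phi, r):
    """Cutoff variant: only pairs within distance r contribute."""
    n = len(points)
    out = np.zeros(n)
    for j in range(n):
        for i in range(n):
            if i != j and np.linalg.norm(points[j] - points[i]) <= r:
                out[j] += phi(points[j] - points[i])
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((200, 2))
    phi = lambda d: -np.log(np.linalg.norm(d))     # 2D Coulomb-type kernel
    full = direct_potential(pts, phi)
    approx = cutoff_potential(pts, phi, r=0.5)
    print("max abs cutoff error:", np.abs(full - approx).max())
```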


I The fast multipole method is an efficiency improvement over direct methods

The Rokhlin-Greengard Fast Multipole Method

I The algorithm is based on multipole expansions and some theory of complex variable series; consider the electrostatic description
I If z ∈ C, then a charge of intensity q at z0 results in a complex potential given, via a Laurent series for |z| > |z0|, by

    \phi_{z_0}(z) = q \ln(z - z_0) = q\left[\ln(z) - \sum_{k=1}^{\infty} \frac{1}{k}\left(\frac{z_0}{z}\right)^{k}\right]

1. If we have m charges q_i at locations z_i, then the potential induced by them is given by the multipole expansion, for |z| > r = max_i |z_i|:

    \phi(z) = Q \ln(z) + \sum_{k=1}^{\infty} \frac{a_k}{z^k}, \quad \text{where} \quad Q = \sum_{i=1}^{m} q_i, \qquad a_k = \sum_{i=1}^{m} \frac{-q_i z_i^k}{k}
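
As a small illustration (Python; all names are ours), forming Q and the a_k from point charges and evaluating the truncated multipole expansion against the direct sum; the fixed truncation order p anticipates the accuracy discussion below.

```python
import numpy as np

def multipole_coefficients(charges, locations, p):
    """Q and a_k, k = 1..p, for charges q_i at complex locations z_i."""
    q = np.asarray(charges, dtype=float)
    z = np.asarray(locations, dtype=complex)
    Q = q.sum()
    a = np.array([-(q * z**k).sum() / k for k in range(1, p + 1)])
    return Q, a

def eval_multipole(Q, a, z):
    """phi(z) = Q log z + sum_k a_k / z^k, valid for |z| larger than the charge radius."""
    k = np.arange(1, len(a) + 1)
    return Q * np.log(z) + np.sum(a / z**k)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    q = rng.uniform(-1, 1, size=50)
    zs = (rng.random(50) - 0.5) + 1j * (rng.random(50) - 0.5)   # charges near the origin
    Q, a = multipole_coefficients(q, zs, p=20)
    z_eval = 3.0 + 2.0j                                         # well outside max |z_i|
    direct = np.sum(q * np.log(z_eval - zs))
    print(abs(direct - eval_multipole(Q, a, z_eval)))           # small truncation error
```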


I Given an accuracy ε, one can truncate the multipole expansion to a fixed number, p = ⌈− log_c ε⌉, of terms, where c = |z/r|
I With p determined, one can store a multipole expansion as a_1, a_2, . . . , a_p

2. We can move a multipole expansion's center from z_0 to the origin with new coefficients:

    b_l = -\frac{Q z_0^l}{l} + \sum_{k=1}^{l} a_k z_0^{l-k} \binom{l-1}{k-1}

I Note that a_1, a_2, . . . , a_p can be used to exactly compute b_1, b_2, . . . , b_p
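
To illustrate this shift, a small self-contained Python sketch (the helper names mp_coeffs, mp_eval, shift_multipole_to_origin are ours, purely illustrative) that rebuilds the expansion about the origin from the p coefficients about z_0 and checks it against the direct sum of charges.

```python
import numpy as np
from math import comb

def mp_coeffs(q, z, p):
    """Multipole coefficients Q, a_1..a_p for charges q at (relative) locations z."""
    return q.sum(), np.array([-(q * z**k).sum() / k for k in range(1, p + 1)])

def mp_eval(Q, a, z):
    """Q log z + sum_k a_k / z^k."""
    k = np.arange(1, len(a) + 1)
    return Q * np.log(z) + np.sum(a / z**k)

def shift_multipole_to_origin(Q, a, z0):
    """Shift an expansion centered at z0 (coefficients Q, a_1..a_p) to the origin."""
    p = len(a)
    b = np.zeros(p, dtype=complex)
    for l in range(1, p + 1):
        b[l - 1] = (-Q * z0**l / l
                    + sum(a[k - 1] * z0**(l - k) * comb(l - 1, k - 1)
                          for k in range(1, l + 1)))
    return b

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    q = rng.uniform(-1, 1, 30)
    z0 = 0.4 + 0.3j
    zs = z0 + 0.2 * (rng.random(30) - 0.5) + 0.2j * (rng.random(30) - 0.5)
    Q, a = mp_coeffs(q, zs - z0, p=25)         # expansion about z0
    b = shift_multipole_to_origin(Q, a, z0)    # the same field, expanded about 0
    z = 4.0 - 3.0j
    print(abs(mp_eval(Q, b, z) - np.sum(q * np.log(z - zs))))   # tiny truncation error
```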


3. One can also convert a multipole (Laurent) expansion centered at z_0 into a local Taylor expansion about the origin:

    \phi(z) = \sum_{l=0}^{\infty} c_l z^l, \quad \text{where}

    c_0 = Q \ln(-z_0) + \sum_{k=1}^{\infty} \frac{a_k}{z_0^k} (-1)^k, \quad \text{and}

    c_l = -\frac{Q}{l z_0^l} + \frac{1}{z_0^l} \sum_{k=1}^{\infty} \frac{a_k}{z_0^k} \binom{l+k-1}{k-1} (-1)^k

4. And one can translate local expansions (shifting the center from z_0 to the origin):

    \sum_{k=0}^{n} a_k (z - z_0)^k = \sum_{l=0}^{n} \left( \sum_{k=l}^{n} a_k \binom{k}{l} (-z_0)^{k-l} \right) z^l
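
A sketch of operation 3 (Python, illustrative; the infinite sums are truncated at p terms): converting a far-away multipole expansion into a local Taylor expansion about the origin and checking it near the origin against the direct sum.

```python
import numpy as np
from math import comb

def multipole_to_local(Q, a, z0, p):
    """Convert a multipole expansion centered at z0 (coefficients Q, a_1..a_p)
    into a local Taylor expansion c_0..c_p about the origin (sums truncated at p)."""
    c = np.zeros(p + 1, dtype=complex)
    c[0] = Q * np.log(-z0) + sum(a[k - 1] / z0**k * (-1) ** k for k in range(1, p + 1))
    for l in range(1, p + 1):
        c[l] = (-Q / (l * z0**l)
                + sum(a[k - 1] / z0**(l + k) * comb(l + k - 1, k - 1) * (-1) ** k
                      for k in range(1, p + 1)))
    return c

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    p = 30
    q = rng.uniform(-1, 1, 40)
    z0 = 5.0 + 4.0j                                   # far-away expansion center
    zs = z0 + 0.3 * (rng.random(40) - 0.5) + 0.3j * (rng.random(40) - 0.5)
    Q = q.sum()
    a = np.array([-(q * (zs - z0) ** k).sum() / k for k in range(1, p + 1)])
    c = multipole_to_local(Q, a, z0, p)
    z = 0.2 - 0.1j                                    # evaluation point near the origin
    local = sum(c[l] * z**l for l in range(p + 1))
    print(abs(local - np.sum(q * np.log(z - zs))))    # small truncation error
```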


I Items (1)-(4) above are the machinery required for the construction and use of multipole expansions; given a multipole expansion, it requires O(N) operations to evaluate it at N points, so an algorithm that constructs a multipole expansion from N point charges in O(N) operations reduces the N-body problem to O(N) complexity
I The Rokhlin-Greengard algorithm achieves this by using a multiscale approach together with (1)-(4)
I Consider a box enclosing z_1, z_2, . . . , z_N, and n ≈ ⌈log_4 N⌉ refinements of the box; in 2D one parent box becomes four children boxes


I The goal is to construct all p-term multipole expansions due to the particles in each box at each level (upward pass), and then use these to construct local expansions in each box at each level (downward pass)

1. The upward pass:
   I At the finest level, construct box-centered p-term multipole expansions due to the particles in each box, using (1)
   I At each coarser level, shift the children's p-term multipole expansions to build box-centered p-term multipole expansions due to the particles in the parent boxes, using (2)
2. The downward pass:
   I Construct a local Taylor expansion in each box at the finest level by converting the p-term multipole expansions of boxes in the "interaction list" via (3)


2. The downward pass (cont.):
   I Via (4), add these together to obtain local box-centered expansions for all the particles outside the box's neighborhood, using the coarser-level p-term multipole expansions
   I Evaluate the above local expansions at each particle and add in the directly computed nearest-neighbor interactions (a small sketch of this final evaluation step follows)

I This algorithm has overall O(N) complexity and many parallel aspects
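
A minimal sketch of that final step for a single box (Python; the coefficients and particle positions are hypothetical placeholders): evaluate the box's local Taylor expansion at its particles by Horner's rule and add the directly computed near-neighbor interactions.

```python
import numpy as np

def eval_local_expansion(c, z, center):
    """Evaluate a local Taylor expansion sum_l c_l (z - center)^l by Horner's rule."""
    w = z - center
    val = np.zeros_like(z)
    for cl in reversed(c):
        val = val * w + cl
    return val

def far_plus_near_field(c, center, targets, near_sources, near_charges):
    """Potential at a box's particles: local (far-field) expansion plus the
    directly computed interactions with near-neighbor particles."""
    phi = eval_local_expansion(c, targets, center)
    for q, zs in zip(near_charges, near_sources):
        d = targets - zs
        mask = np.abs(d) > 0                # skip an exact self-interaction
        phi[mask] += q * np.log(d[mask])
    return phi

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    center = 0.0 + 0.0j
    targets = 0.1 * (rng.random(5) - 0.5) + 0.1j * (rng.random(5) - 0.5)
    # hypothetical local-expansion coefficients, as produced by the downward pass
    c = np.array([1.0 + 0.2j, -0.3, 0.05j, 0.01])
    near_sources = 0.3 * (rng.random(8) - 0.5) + 0.3j * (rng.random(8) - 0.5)
    near_charges = rng.uniform(-1, 1, 8)
    print(far_plus_near_field(c, center, targets, near_sources, near_charges))
```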


I One need only store the p-term multipole/local Taylor expansion coefficients a_1, a_2, . . . , a_p, giving a trivial data structure to implement
I This version of the multipole algorithm depends on a rather uniform distribution of particles; however, there is an adaptive version where the "finest boxes" are allowed to be of different sizes, s.t. each box has approximately one particle


Parallel Implementation of the Fast Multipole Method

I In all of the steps described in both the upward and downward passes of the multipole method, the multipole or local expansions can all be computed in parallel
I Thus the parallel complexity is proportional to the number of levels, i.e. O(log_4 N) = O(n)

I For the nonadaptive version there is a serious load balancing problem, depending on whether, at each level l, each box contains roughly N/4^l particles

I This is a multigrid algorithm, and so there will be the problem of idle processors at coarse levels on parallel machines with fine grain size
I One can also implement the adaptive version of the multipole method in parallel
I For more detail on the parallel version see (Greengard and Gropp, 1990)

Page 269: Monte Carlo Methods for Partial Differential Equationsmascagni/mcpdenew.pdf · Monte Carlo Methods for Partial Differential Equations ... Introduction Early History of ... where L

Bibliography

Booth, T. E. (1981) "Exact Monte Carlo solutions of elliptic partial differential equations," J. Comp. Phys., 39: 396–404.
Brandt, A. (1977) "Multi-level adaptive solutions to boundary value problems," Math. Comp., 31: 333–390.
Chorin, A. J. (1973) "Numerical study of slightly viscous flow," J. Fluid Mech., 57: 785–796.
Chorin, A. J. and O. H. Hald (2006) Stochastic Tools in Mathematics and Science, Surveys and Tutorials in the Applied Mathematical Sciences, Vol. 1, viii+147, Springer: New York.
Courant, R., K. O. Friedrichs, and H. Lewy (1928) "Über die partiellen Differenzengleichungen der mathematischen Physik," Math. Ann., 100: 32–74 (in German); reprinted in I.B.M. J. Res. and Dev., 11: 215–234 (in English).
Curtiss, J. H. (1953) "Monte Carlo methods for the iteration of linear operators," J. Math. and Phys., 32: 209–232.

Curtiss, J. H. (1956) "A theoretical comparison of the efficiencies of two classical methods and a Monte Carlo method for computing one component of the solution of a set of linear algebraic equations," in Symp. Monte Carlo Methods, H. A. Meyer, Ed., Wiley: New York, pp. 191–233.
Donsker, M. D. and M. Kac (1951) "A sampling method for determining the lowest eigenvalue and the principal eigenfunction of Schrödinger's equation," J. Res. Nat. Bur. Standards, 44: 551–557.
Ermakov, S. M. and G. A. Mikhailov (1982) Statistical Modeling, Nauka: Moscow (in Russian).
Ermakov, S. M., V. V. Nekrutkin and A. S. Sipin (1989) Random Processes for Classical Equations of Mathematical Physics, Kluwer Academic Publishers: Dordrecht.
Forsythe, G. E. and R. A. Leibler (1950) "Matrix inversion by a Monte Carlo method," Math. Tab. Aids Comput., 4: 127–129.

Freidlin, M. (1985) Functional Integration and Partial Differential Equations, Princeton University Press: Princeton.
Ghoniem, A. F. and F. S. Sherman (1985) "Grid-free simulation of diffusion using random walk methods," J. Comp. Phys., 61: 1–37.
Greengard, L. F. (1988) The Rapid Evaluation of Potential Fields in Particle Systems, MIT Press: Cambridge, MA.
Greengard, L. F. and W. Gropp (1990) "A parallel version of the fast multipole method," Computers Math. Applic., 20: 63–71.
Hall, A. (1873) "On an experimental determination of π," Messeng. Math., 2: 113–114.
Halton, J. H. (1970) "A Retrospective and Prospective Survey of the Monte Carlo Method," SIAM Review, 12(1): 1–63.
Hammersley, J. M. and D. C. Handscomb (1964) Monte Carlo Methods, Chapman and Hall: London.

Halton, J. H. (1989) "Pseudo-random trees: multiple independent sequence generators for parallel and branching computations," J. Comp. Phys., 84: 1–56.
Hillis, D. (1985) The Connection Machine, MIT Press: Cambridge, MA.
Hopf, E. (1950) "The partial differential equation u_t + u u_x = µ u_xx," Comm. Pure Applied Math., 3: 201–230.
Itô, K. and H. P. McKean, Jr. (1965) Diffusion Processes and Their Sample Paths, Springer-Verlag: Berlin, New York.
Kac, M. (1947) "Random Walk and the Theory of Brownian Motion," The American Mathematical Monthly, 54(7): 369–391.
Kac, M. (1956) Some Stochastic Problems in Physics and Mathematics, Colloquium Lectures in the Pure and Applied Sciences, No. 2, Magnolia Petroleum Co., Hectographed.
Kac, M. (1980) Integration in Function Spaces and Some of Its Applications, Lezioni Fermiane, Accademia Nazionale dei Lincei, Scuola Normale Superiore: Pisa.

Knuth, D. E. (1981) The Art of Computer Programming, Vol. 2: Seminumerical Algorithms, Second edition, Addison-Wesley: Reading, MA.
Marsaglia, G. and A. Zaman "A new class of random number generators," submitted to SIAM J. Sci. Stat. Comput.
Mascagni, M. (1991) "High dimensional numerical integration and massively parallel computing," Contemp. Math., 115: 53–73.
Mascagni, M. (1990) "A tale of two architectures: parallel Wiener integral methods for elliptic boundary value problems," SIAM News, 23: 8, 12.
McKean, H. P. (1975) "Application of Brownian Motion to the Equation of Kolmogorov-Petrovskii-Piskunov," Communications on Pure and Applied Mathematics, XXVIII: 323–331.

McKean, H. P. (1988) Private communication.
Mikhailov, G. A. (1995) New Monte Carlo Methods With Estimating Derivatives, V. S. P. Publishers.
Muller, M. E. (1956) "Some Continuous Monte Carlo Methods for the Dirichlet Problem," The Annals of Mathematical Statistics, 27(3): 569–589.
Niederreiter, H. (1978) "Quasi-Monte Carlo methods and pseudo-random numbers," Bull. Amer. Math. Soc., 84: 957–1041.
Rubinstein, R. Y. (1981) Simulation and the Monte Carlo Method, Wiley-Interscience: New York.
Sabelfeld, K. K. (1991) Monte Carlo Methods in Boundary Value Problems, Springer-Verlag: Berlin, Heidelberg, New York.
Sabelfeld, K. and N. Mozartova (2009) "Sparsified Randomization Algorithms for large systems of linear equations and a new version of the Random Walk on Boundary method," Monte Carlo Methods and Applications, 15(3): 257–284.

Sherman, A. S. and C. S. Peskin (1986) "A Monte Carlo method for scalar reaction diffusion equations," SIAM J. Sci. Stat. Comput., 7: 1360–1372.
Sherman, A. S. and C. S. Peskin (1988) "Solving the Hodgkin-Huxley equations by a random walk method," SIAM J. Sci. Stat. Comput., 9: 170–190.
Shreider, Y. A. (1966) The Monte Carlo Method: The Method of Statistical Trials, Pergamon Press: New York.
Spanier, J. and E. M. Gelbard (1969) Monte Carlo Principles and Neutron Transport Problems, Addison-Wesley: Reading, MA.
Wasow, W. R. (1952) "A Note on the Inversion of Matrices by Random Walks," Mathematical Tables and Other Aids to Computation, 6(38): 78–81.

© Michael Mascagni, 2011

