QMCPy
Sou-Cheng T. Choi
Fred J. Hickernell
Michael McCourt
Jagadeeswaran Rathinavel
Aleksei Sorokin
CONTENTS

1 About Our QMC Software Community
    1.1 Quasi-Monte Carlo Community Software
        1.1.1 Installation
        1.1.2 The QMCPy Framework
        1.1.3 Quickstart
        1.1.4 Developers
        1.1.5 Collaborators
        1.1.6 Contributors
        1.1.7 Citation
        1.1.8 References
        1.1.9 Sponsors
2 License
3 QMCPy Documentation
    3.1 Discrete Distribution Class
        3.1.1 Abstract Discrete Distribution Class
        3.1.2 Digital Net Base 2
        3.1.3 Lattice
        3.1.4 Halton
        3.1.5 IID Standard Uniform
    3.2 True Measure Class
        3.2.1 Abstract Measure Class
        3.2.2 Uniform
        3.2.3 Gaussian
        3.2.4 Brownian Motion
        3.2.5 Lebesgue
        3.2.6 Continuous Bernoulli
        3.2.7 Johnson’s SU
        3.2.8 Kumaraswamy
        3.2.9 SciPy Wrapper
    3.3 Integrand Class
        3.3.1 Abstract Integrand Class
        3.3.2 Custom Function
        3.3.3 Keister Function
        3.3.4 Box Integral
        3.3.5 European Option
        3.3.6 Asian Option
        3.3.7 Multilevel Call Options with Milstein Discretization
        3.3.8 Linear Function
        3.3.9 Sobol’ Indices
    3.4 Stopping Criterion Algorithms
        3.4.1 Abstract Stopping Criterion Class
        3.4.2 Guaranteed Digital Net Cubature (QMC)
        3.4.3 Guaranteed Lattice Cubature (QMC)
        3.4.4 Bayesian Lattice Cubature (QMC)
        3.4.5 Bayesian Digital Net Cubature (QMC)
        3.4.6 CLT QMC Cubature (with Replications)
        3.4.7 Guaranteed MC Cubature
        3.4.8 CLT MC Cubature
        3.4.9 Continuation Multilevel QMC Cubature
        3.4.10 Multilevel QMC Cubature
        3.4.11 Continuation Multilevel MC Cubature
        3.4.12 Multilevel MC Cubature
    3.5 Utilities
        4.1.1 References
    4.2 Welcome to QMCPy
        4.2.1 Importing QMCPy
        4.2.2 Important Notes
    4.3 Integration Examples using QMCPy package
        4.3.1 Keister Example
        4.3.2 Arithmetic-Mean Asian Put Option: Single Level
        4.3.3 Arithmetic-Mean Asian Put Option: Multi-Level
        4.3.4 Keister Example using Bayesian Cubature
    4.4 QMCPy for Lebesgue Integration
        4.4.1 Sample Problem 1
        4.4.2 Sample Problem 2
        4.4.3 Sample Problem 3
        4.4.4 Sample Problem 4
    4.5 Scatter Plots of Samples
        4.5.1 IID Samples
        4.5.2 LD Samples
        4.5.3 Transform to the True Distribution
        4.5.4 Shift and Stretch the True Distribution
        4.5.5 Plot Samples on a 2D Keister Function
    4.6 A Monte Carlo vs Quasi-Monte Carlo Comparison
        4.6.1 Vary Absolute Tolerance
        4.6.2 Vary Dimension
    4.7 Quasi-Random Sequence Generator Comparison
        4.7.1 General Usage
        4.7.2 QMCPy Generator Times Comparison
    4.8 Importance Sampling Examples
        4.8.1 Game Example
        4.8.2 Asian Call Option Example
        4.8.3 Importance Sampling MC vs QMC
    4.9 NEI (Noisy Expected Improvement) Demo
        4.9.1 Goal
        4.9.2 Computation of the QEI quantity using qmcpy
    4.10 QEI (Q-Noisy Expected Improvement) Demo for Blog
        4.10.1 Problem setup
        4.10.2 Computation of the qEI quantity using qmcpy
        4.10.3 GP model definition (kernel information) and qEI definition
        4.10.4 Demonstrate the concept of qEI on 2 points
        4.10.5 Choose some set of next points against which to test the computation
    4.11 Basic Ray Tracing
    4.12 A closer look at QMCPy’s Sobol’ generator
        4.12.1 Basic usage
        4.12.2 Randomize with digital shift / linear matrix scramble
        4.12.3 Support for graycode and natural ordering
    4.13 Custom Dimensions
        4.13.1 Custom generating matrices
        4.13.2 Skipping points vs randomization
    4.14 LatNetBuilder
        4.14.1 Ordinary Lattice
        4.14.2 Polynomial Lattice
        4.14.3 Sobol’
        4.14.4 Output Directories
    4.15 Some True Measures
        4.15.1 Mathematics
        4.15.2 Imports
        4.15.3 1D Density Plot
        4.15.4 2D Density Plot
        4.15.5 1D Expected Values
        4.15.6 Importance Sampling with a Single Kumaraswamy
        4.15.7 Importance Sampling with 2 (Composed) Kumaraswamys
        4.15.8 Can We Improve the Keister Function?
    4.16 Comparison of multilevel (Quasi-)Monte Carlo for an Asian option problem
    4.17 Control Variates in QMCPy
        4.17.1 Setup
        4.17.2 Problem 1: Polynomial Function
        4.17.3 Problem 2: Keister Function
        4.17.4 Problem 3: Option Pricing
    4.18 Elliptic PDE
        4.18.1 1. Problem definition
        4.18.2 2. Single-level methods
        4.18.3 3. Multilevel methods
        4.18.4 4. Convergence tests
    4.19 Gaussian Diagnostics
        4.19.1 Example 1: Exponential of Cosine
        4.19.2 Example 2: Random function
        4.19.3 Example 3a: Keister integrand: npts = 64
        4.19.4 Example 3b: Keister integrand: npts = 1024
    4.20 ML Sensitivity Indices
        4.20.1 Load Data
        4.20.2 Importance of Decision Tree Hyperparameters
        4.20.3 Bayesian Optimization of Hyperparameters
        4.20.4 Best Decision Tree Analysis
5 Indices and tables
6 Sponsors
    6.1 Illinois Tech
    6.2 Kamakura Corporation
    6.3 SigOpt
ABOUT OUR QMC SOFTWARE COMMUNITY
1.1 Quasi-Monte Carlo Community Software
Quasi-Monte Carlo (QMC) methods are used to approximate multivariate integrals. They have four main components: an integrand, a discrete distribution, summary output data, and a stopping criterion. Information about the integrand is obtained as a sequence of values of the function sampled at the data sites of the discrete distribution. The stopping criterion tells the algorithm when the user-specified error tolerance has been satisfied.

We are developing a framework that allows collaborators in the QMC community to develop plug-and-play modules in an effort to produce more efficient and portable QMC software. Each of the above four components is an abstract class. Abstract classes specify the common properties and methods of all subclasses. The ways in which the four kinds of classes interact with each other are also specified. Subclasses then flesh out different integrands, sampling schemes, and stopping criteria. Besides providing developers a way to link their new ideas with those implemented by the rest of the QMC community, we also aim to provide practitioners with state-of-the-art QMC software for their applications.
Homepage ~ Article ~ GitHub ~ Read the Docs ~ PyPI ~ Blogs ~
DockerHub ~ Contributing ~ Issues
1.1.1 Installation
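QMCPy is distributed through PyPI (linked above). Assuming a working Python 3 environment with pip, a typical install is:

```shell
# install the latest release from PyPI
pip install qmcpy

# or install the development version from the GitHub repository
pip install git+https://github.com/QMCSoftware/QMCSoftware.git
```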
1.1.2 The QMCPy Framework
The central package includes the five main components listed below. Each component is implemented as an abstract class with concrete implementations. For example, the lattice and Sobol’ sequences are implemented as concrete implementations of the DiscreteDistribution abstract class. A complete list of concrete implementations and thorough documentation can be found in the QMCPy Read the Docs.
• Stopping Criterion: determines the number of samples necessary to
meet an error tolerance.
• Integrand: the function/process whose expected value will be
approximated.
• True Measure: the distribution to be integrated over.
• Discrete Distribution: a generator of nodes/sequences that can be either IID (for Monte Carlo) or low-discrepancy (for quasi-Monte Carlo), that mimic a standard distribution.
• Accumulate Data: stores and updates data used in the integration
process.
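To make the plug-and-play idea concrete, here is a stripped-down, illustrative analogue of the layering (toy class names, not qmcpy’s actual API): a stopping criterion drives an integrand, which draws points from a discrete distribution.

```python
from abc import ABC, abstractmethod
import numpy as np

# Toy mirror of the framework's layering (illustrative names only).
class DiscreteDistribution(ABC):
    @abstractmethod
    def gen_samples(self, n):
        """Return an n x d array of sampling nodes."""

class IIDStdUniform(DiscreteDistribution):
    def __init__(self, dimension, seed=None):
        self.d = dimension
        self.rng = np.random.default_rng(seed)
    def gen_samples(self, n):
        return self.rng.random((n, self.d))

class Integrand(ABC):
    def __init__(self, discrete_distrib):
        self.discrete_distrib = discrete_distrib
    @abstractmethod
    def g(self, x):
        """Evaluate the integrand at each row of the n x d array x."""

class SumFun(Integrand):
    def g(self, x):
        return x.sum(axis=1)

class StoppingCriterion(ABC):
    @abstractmethod
    def integrate(self):
        """Return an approximation of the integral."""

class FixedSamples(StoppingCriterion):
    def __init__(self, integrand, n):
        self.integrand, self.n = integrand, n
    def integrate(self):
        x = self.integrand.discrete_distrib.gen_samples(self.n)
        return self.integrand.g(x).mean()

# E[x_1 + x_2] over the unit square is 1
est = FixedSamples(SumFun(IIDStdUniform(2, seed=7)), 2**16).integrate()
print(est)
```

A real stopping criterion would iterate, increasing n until an error bound meets the tolerance; the abstract base classes only pin down the interfaces that concrete modules must satisfy.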
1.1.3 Quickstart
Note: If the following mathematics is not rendering try using
Google Chrome and installing the Mathjax Plugin for GitHub.
We will approximate the expected value of the d-dimensional Keister integrand [18]

f(x) = π^(d/2) cos(‖x‖)

where x ~ N(0, I/2).

We may choose a Sobol’ discrete distribution with a corresponding Sobol’ cubature stopping criterion to perform quasi-Monte Carlo integration.
import qmcpy as qp
from numpy import pi, cos, sqrt, linalg
d = 2
dnb2 = qp.DigitalNetB2(d)
gauss_sobol = qp.Gaussian(dnb2, mean=0, covariance=1/2)
k = qp.CustomFun(
    true_measure = gauss_sobol,
    g = lambda x: pi**(d/2)*cos(linalg.norm(x,axis=1)))
qmc_sobol_algorithm = qp.CubQMCSobolG(k, abs_tol=1e-3)
solution,data = qmc_sobol_algorithm.integrate()
print(data)
Running the above code outputs
LDTransformData (AccumulateData Object)
    solution        1.808
    error_bound     4.68e-04
    n_total         2^(13)
    time_integrate  0.008
CubQMCSobolG (StoppingCriterion Object)
    abs_tol         0.001
    rel_tol         0
    n_init          2^(10)
    n_max           2^(35)
CustomFun (Integrand Object)
Gaussian (TrueMeasure Object)
    mean            0
    covariance      2^(-1)
    decomp_type     PCA
Sobol (DiscreteDistribution Object)
    d               2^(1)
    dvec            [0 1]
    randomize       LMS_DS
    graycode        0
    entropy         127071403717453177593768120720330942628
    spawn_key       ()
A more detailed quickstart can be found in our GitHub repo at
QMCSoftware/demos/quickstart.ipynb or in this Google Colab
quickstart notebook.
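As a sanity check on the quickstart solution, a plain IID Monte Carlo estimate using numpy alone (no qmcpy) converges to the same value, just more slowly:

```python
import numpy as np

# IID Monte Carlo for the d=2 Keister integral: x ~ N(0, I/2),
# f(x) = pi^(d/2) * cos(||x||); the QMC solution above is 1.808.
rng = np.random.default_rng(7)
d, n = 2, 2**20
x = rng.normal(0.0, np.sqrt(1/2), size=(n, d))
est = np.mean(np.pi**(d/2) * np.cos(np.linalg.norm(x, axis=1)))
print(round(est, 3))
```

With 2^20 IID points the standard error is a few thousandths; the QMC algorithm above reaches a tighter guaranteed bound with only 2^13 points.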
We also highly recommend you take a look at Fred Hickernell’s
tutorial at the Monte Carlo Quasi-Monte Carlo 2020 Conference and
the corresponding MCQMC2020 Google Colab notebook.
• Christiane Lemieux
• Dirk Nuyens
• Onyekachi Osisiogu
• Art Owen
• Pieterjan Robbe
1.1.6 Contributors
• Jungtaek Kim
1.1.7 Citation
If you find QMCPy helpful in your work, please support us by citing
the following work:
Choi, S.-C. T., Hickernell, F. J., McCourt, M., Rathinavel, J.
& Sorokin, A. QMCPy: A quasi-Monte Carlo Python Library.
Working. 2020. https://qmcsoftware.github.io/QMCSoftware/
BibTeX citation available here
[1] F. Y. Kuo and D. Nuyens. “Application of quasi-Monte Carlo
methods to elliptic PDEs with random diffusion coefficients - a
survey of analysis and implementation,” Foundations of
Computational Mathematics, 16(6):1631-1696, 2016. (springer link,
arxiv link)
[2] Fred J. Hickernell, Lan Jiang, Yuewei Liu, and Art B. Owen,
“Guaranteed conservative fixed width confidence intervals via Monte
Carlo sampling,” Monte Carlo and Quasi-Monte Carlo Methods 2012 (J.
Dick, F.Y. Kuo, G. W. Peters, and I. H. Sloan, eds.), pp. 105-128,
Springer-Verlag, Berlin, 2014. DOI:
10.1007/978-3-642-41095-6_5
[3] Sou-Cheng T. Choi, Yuhan Ding, Fred J. Hickernell, Lan Jiang, Lluis Antoni Jimenez Rugama, Da Li, Jagadeeswaran Rathinavel, Xin Tong, Kan Zhang, Yizhi Zhang, and Xuan Zhou, GAIL: Guaranteed Automatic Integration Library (Version 2.3.1) [MATLAB Software], 2020. Available from http://gailgithub.github.io/GAIL_Dev/.
[4] Sou-Cheng T. Choi, “MINRES-QLP Pack and Reliable Reproducible Research via Supportable Scientific Software,” Journal of Open Research Software, Volume 2, Number 1, e22, pp. 1-7, 2014.
[5] Sou-Cheng T. Choi and Fred J. Hickernell, “IIT MATH-573
Reliable Mathematical Software” [Course Slides], Illinois Institute
of Technology, Chicago, IL, 2013. Available from
http://gailgithub.github.io/GAIL_Dev/.
[6] Daniel S. Katz, Sou-Cheng T. Choi, Hilmar Lapp, Ketan
Maheshwari, Frank Loffler, Matthew Turk, Marcus D. Hanwell, Nancy
Wilkins-Diehr, James Hetherington, James Howison, Shel Swenson,
Gabrielle D. Allen, Anne C. Elster, Bruce Berriman, Colin Venters,
“Summary of the First Workshop On Sustainable Software for Science:
Practice and Experiences (WSSSPE1),” Journal of Open Research
Software, Volume 2, Number 1, e6, pp. 1-21, 2014.
[7] Fang, K.-T., and Wang, Y. (1994). Number-theoretic Methods in Statistics. London, UK: Chapman & Hall.
[8] Lan Jiang, Guaranteed Adaptive Monte Carlo Methods for
Estimating Means of Random Variables, PhD Thesis, Illinois
Institute of Technology, 2016.
[9] Lluis Antoni Jimenez Rugama and Fred J. Hickernell, “Adaptive
multidimensional integration based on rank-1 lattices,” Monte
Carlo and Quasi-Monte Carlo Methods: MCQMC, Leuven, Belgium, April
2014 (R. Cools and D. Nuyens, eds.), Springer Proceedings in
Mathematics and Statistics, vol. 163, Springer-Verlag, Berlin,
2016, arXiv:1411.1966, pp. 407-422.
[10] Kai-Tai Fang and Yuan Wang, Number-theoretic Methods in
Statistics, Chapman & Hall, London, 1994.
[11] Fred J. Hickernell and Lluis Antoni Jimenez Rugama, “Reliable
adaptive cubature using digital sequences,” Monte Carlo and
Quasi-Monte Carlo Methods: MCQMC, Leuven, Belgium, April 2014 (R.
Cools and D. Nuyens, eds.), Springer Proceedings in Mathematics and
Statistics, vol. 163, Springer-Verlag, Berlin, 2016,
arXiv:1410.8615 [math.NA], pp. 367-383.
[12] Marius Hofert and Christiane Lemieux (2019). qrng:
(Randomized) Quasi-Random Number Generators. R package version
0.0-7. https://CRAN.R-project.org/package=qrng.
[13] Faure, Henri, and Christiane Lemieux. “Implementation of
Irreducible Sobol’ Sequences in Prime Power Bases,” Mathematics and
Computers in Simulation 161 (2019): 13–22.
[14] M. B. Giles. “Multi-level Monte Carlo path simulation,”
Operations Research, 56(3):607-617, 2008. http://people.maths.ox.ac.uk/~gilesm/files/OPRE_2008.pdf.
[15] M. B. Giles. “Improved multilevel Monte Carlo convergence
using the Milstein scheme,” 343-358, in Monte Carlo and Quasi-Monte
Carlo Methods 2006, Springer, 2008.
http://people.maths.ox.ac.uk/~gilesm/files/mcqmc06.pdf.
[16] M. B. Giles and B. J. Waterhouse. “Multilevel quasi-Monte
Carlo path simulation,” pp.165-181 in Advanced Financial Modelling,
in Radon Series on Computational and Applied Mathematics, de
Gruyter, 2009. http://people.maths.ox.ac.uk/~gilesm/files/radon.pdf.
[17] Owen, A. B. “A randomized Halton algorithm in R,” 2017.
arXiv:1706.02808 [stat.CO]
[18] B. D. Keister, Multidimensional Quadrature Algorithms,
‘Computers in Physics’, 10, pp. 119-122, 1996.
[19] L’Ecuyer, Pierre & Munger, David. (2015). LatticeBuilder:
A General Software Tool for Constructing Rank-1 Lattice Rules. ACM
Transactions on Mathematical Software. 42. 10.1145/2754929.
[20] Fischer, Gregory & Carmon, Ziv & Zauberman, Gal &
L’Ecuyer, Pierre. (1999). Good Parameters and Implementations for
Combined Multiple Recursive Random Number Generators. Operations
Research. 47. 159-164. 10.1287/opre.47.1.159.
[21] I.M. Sobol’, V.I. Turchaninov, Yu.L. Levitan, B.V. Shukhman:
“Quasi-Random Sequence Generators” Keldysh Institute of Applied
Mathematics, Russian Academy of Sciences, Moscow (1992).
[22] Sobol, Ilya & Asotsky, Danil & Kreinin, Alexander
& Kucherenko, Sergei. (2011). Construction and Comparison of
High-Dimensional Sobol’ Generators. Wilmott. 2011.
10.1002/wilm.10056.
[23] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., . . . Chintala, S. (2019). PyTorch: An Imperative Style, High-Performance Deep Learning Library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 32 (pp. 8024–8035). Curran Associates, Inc. Retrieved from http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf
[24] S. Joe and F. Y. Kuo, Constructing Sobol sequences with better
two-dimensional projections, SIAM J. Sci. Comput. 30, 2635-2654
(2008).
[25] Paul Bratley and Bennett L. Fox. 1988. Algorithm 659:
Implementing Sobol’s quasirandom sequence generator. ACM Trans.
Math. Softw. 14, 1 (March 1988), 88–100.
DOI:https://doi.org/10.1145/42288.214372
[26] P. L’Ecuyer, P. Marion, M. Godin, and F. Puchhammer, “A Tool
for Custom Construction of QMC and RQMC Point Sets,” Monte Carlo
and Quasi-Monte Carlo Methods 2020.
[27] P Kumaraswamy, A generalized probability density function for
double-bounded random processes. J. Hydrol. 46, 79–88 (1980).
[28] D Li, Reliable quasi-Monte Carlo with control variates.
Master’s thesis, Illinois Institute of Technology (2016)
[29] D.H. Bailey, J.M. Borwein, R.E. Crandall, Box integrals,
Journal of Computational and Applied Mathematics, Volume 206, Issue
1, 2007, Pages 196-208, ISSN 0377-0427,
https://doi.org/10.1016/j.cam.2006.06.010.
[30] Art B. Owen. Monte Carlo theory, methods and examples. 2013.
1.1.9 Sponsors
CHAPTER TWO
LICENSE
Copyright [2021] [Illinois Institute of Technology]
Licensed under the Apache License, Version 2.0 (the “License”); you
may not use this file except in compliance with the License. You
may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License.
class qmcpy.discrete_distribution._discrete_distribution.DiscreteDistribution(dimension, seed)
__init__(dimension, seed)
Parameters
• dimension (int or ndarray) – dimension of the generator. If an int is passed in, use sequence dimensions [0, ..., dimension-1]. If an ndarray is passed in, use these dimension indices in the sequence. Note that this is not relevant for IID generators.
• seed (int or numpy.random.SeedSequence) – seed to create random
number generator
gen_samples(*args)
ABSTRACT METHOD to generate samples from this discrete distribution.
Parameters args (tuple) – tuple of positional arguments. See implementations for details
Returns n x d array of samples
QMCPy, Release 1.3
Return type ndarray
pdf(x)
ABSTRACT METHOD to evaluate the pdf of the distribution the samples mimic at the locations x.
spawn(s=1, dimensions=None)
Spawn new instances of the current discrete distribution, but with new seeds and dimensions. Developed for multi-level and multi-replication (Q)MC algorithms.
Parameters
• s (int) – number of spawns to generate
• dimensions (ndarray) – length-s array of dimensions, one for each spawn. Defaults to the current dimension
Returns list of DiscreteDistribution instances with new seeds and
dimensions
Return type list
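The seeding pattern behind spawn can be pictured with numpy’s SeedSequence, which the seed parameter above accepts. This sketch shows only the numpy side (independent child streams), not qmcpy’s own bookkeeping:

```python
import numpy as np

# A root SeedSequence spawns statistically independent children,
# one per spawned distribution instance.
root = np.random.SeedSequence(7)
children = root.spawn(3)
streams = [np.random.default_rng(c) for c in children]
vals = [g.random() for g in streams]
print(vals)  # three independent uniform draws, one per child stream
```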
class qmcpy.discrete_distribution.digital_net_b2.digital_net_b2.DigitalNetB2(dimension=1, randomize='LMS_DS', graycode=False, seed=None, generating_matrices='sobol_mat.21201.32.32.msb.npy', d_max=None, t_max=None, m_max=None, msb=None, t_lms=None, _verbose=False)
Quasi-Random digital nets in base 2.
>>> dnb2 = DigitalNetB2(2,seed=7)
>>> dnb2.gen_samples(4)
array([[0.56269008, 0.17377997],
       [0.346653  , 0.65070632],
       [0.82074548, 0.95490574],
       [0.10422261, 0.49458097]])
>>> dnb2.gen_samples(1)
array([[0.56269008, 0.17377997]])
>>> dnb2
DigitalNetB2 (DiscreteDistribution Object)
    d               2^(1)
    dvec            [0 1]
    randomize       LMS_DS
    graycode        0
    entropy         7
    spawn_key       ()
References
[1] Marius Hofert and Christiane Lemieux (2019). qrng: (Randomized) Quasi-Random Number Generators. R package version 0.0-7. https://CRAN.R-project.org/package=qrng.
[2] Faure, Henri, and Christiane Lemieux. “Implementation of
Irreducible Sobol’ Sequences in Prime Power Bases.” Mathematics and
Computers in Simulation 161 (2019): 13–22. Crossref. Web.
[3] F.Y. Kuo & D. Nuyens. Application of quasi-Monte Carlo methods to elliptic PDEs with random diffusion coefficients - a survey of analysis and implementation, Foundations of Computational Mathematics, 16(6):1631-1696, 2016. springer link: https://link.springer.com/article/10.1007/s10208-016-9329-5 arxiv link: https://arxiv.org/abs/1606.06613
[4] D. Nuyens, The Magic Point Shop of QMC point generators and
generating vectors. MATLAB and Python software, 2018. Available
from https://people.cs.kuleuven.be/~dirk.nuyens/
https://bitbucket.org/dnuyens/qmc-generators/src/cb0f2fb10fa9c9f2665e41419097781b611daa1e/cpp/digitalseq_b2g.hpp
[5] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., . . . Chintala, S. (2019). PyTorch: An Imperative Style, High-Performance Deep Learning Library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 32 (pp. 8024–8035). Curran Associates, Inc. Retrieved from http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf
[6] I.M. Sobol’, V.I. Turchaninov, Yu.L. Levitan, B.V. Shukhman:
“Quasi-Random Sequence Generators” Keldysh Institute of Applied
Mathematics, Russian Academy of Sciences, Moscow (1992).
[7] Sobol, Ilya & Asotsky, Danil & Kreinin, Alexander &
Kucherenko, Sergei. (2011). Construction and Comparison of
High-Dimensional Sobol’ Generators. Wilmott. 2011.
10.1002/wilm.10056.
[8] Paul Bratley and Bennett L. Fox. 1988. Algorithm 659:
Implementing Sobol’s quasirandom sequence generator. ACM Trans.
Math. Softw. 14, 1 (March 1988), 88–100.
DOI:https://doi.org/10.1145/42288.214372
__init__(dimension=1, randomize='LMS_DS', graycode=False, seed=None, generating_matrices='sobol_mat.21201.32.32.msb.npy', d_max=None, t_max=None, m_max=None, msb=None, t_lms=None, _verbose=False)
Parameters
• dimension (int or ndarray) – dimension of the generator. If an int is passed in, use sequence dimensions [0, ..., dimension-1]. If an ndarray is passed in, use these dimension indices in the sequence.
• randomize (bool) – apply randomization? True defaults to LMS_DS. Can also explicitly pass in ‘LMS_DS’ (linear matrix scramble with digital shift), ‘LMS’ (linear matrix scramble only), or ‘DS’ (digital shift only)
• graycode (bool) – indicator to use graycode ordering (True) or
natural ordering (False)
• seed (int or list) – int seed or list of seeds, one for each dimension.
• generating_matrices (ndarray or str) – generating matrices or path to generating matrices. An ndarray should have shape (d_max, m_max) where each int has t_max bits. A generating_matrices file should be named like gen_mat.21201.32.32.msb.npy, i.e. name.d_max.t_max.m_max.{msb,lsb}.npy
• d_max (int) – max dimension
• t_max (int) – number of bits in each int of each generating
matrix. aka: number of rows in a generating matrix with ints
expanded into columns
• m_max (int) – 2^m_max is the number of samples supported. aka:
number of columns in a generating matrix with ints expanded into
columns
• msb (bool) – bit storage as ints. e.g. if t_max=3, then 6 is [1 1
0] in MSB (True) and [0 1 1] in LSB (False)
• t_lms (int) – LMS scrambling matrix will be t_lms x t_max for
generating matrix of shape t_max x m_max
• _verbose (bool) – print randomization details
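The graycode flag above refers to the standard binary-reflected Gray code, in which consecutive indices differ in a single bit, so each new point can be obtained from the previous one with a single column update. A quick illustration of the ordering (illustrative only, not qmcpy internals):

```python
# The i-th binary-reflected Gray code is i ^ (i >> 1); consecutive codes
# differ in exactly one bit, and all indices are visited exactly once.
gray = [i ^ (i >> 1) for i in range(8)]
print(gray)  # [0, 1, 3, 2, 6, 7, 5, 4]
```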
gen_samples(n=None, n_min=0, n_max=8, warn=True, return_unrandomized=False)
Generate samples
Parameters
• n (int) – if n is supplied, generate from n_min=0 to n_max=n
samples. Otherwise use the n_min and n_max explicitly supplied as
the following 2 arguments
• n_min (int) – Starting index of sequence.
• n_max (int) – Final index of sequence.
• return_unrandomized (bool) – return unrandomized samples as well? If True, return randomized_samples, unrandomized_samples. Note that this only applies when randomize includes a digital shift. Also note that unrandomized samples include linear matrix scrambling if applicable.
Returns (n_max-n_min) x d (dimension) array of samples
Return type ndarray
class qmcpy.discrete_distribution.digital_net_b2.digital_net_b2.Sobol(dimension=1, randomize='LMS_DS', graycode=False, seed=None, generating_matrices='sobol_mat.21201.32.32.msb.npy', d_max=None, t_max=None, m_max=None, msb=None, t_lms=None, _verbose=False)
3.1. Discrete Distribution Class 13
3.1.3 Lattice
Quasi-Random Lattice nets in base 2.
>>> l = Lattice(2,seed=7)
>>> l.gen_samples(4)
array([[0.04386058, 0.58727432],
       [0.54386058, 0.08727432],
       [0.29386058, 0.33727432],
       [0.79386058, 0.83727432]])
>>> l.gen_samples(1)
array([[0.04386058, 0.58727432]])
>>> l
Lattice (DiscreteDistribution Object)
    d               2^(1)
    dvec            [0 1]
    randomize       1
    order           natural
    entropy         7
    spawn_key       ()
>>> Lattice(dimension=2,randomize=False,order='natural').gen_samples(4,warn=False)
array([[0.  , 0.  ],
       [0.5 , 0.5 ],
       [0.25, 0.75],
       [0.75, 0.25]])
>>> Lattice(dimension=2,randomize=False,order='linear').gen_samples(4,warn=False)
array([[0.  , 0.  ],
       [0.25, 0.75],
       [0.5 , 0.5 ],
       [0.75, 0.25]])
>>> Lattice(dimension=2,randomize=False,order='mps').gen_samples(4,warn=False)
array([[0.  , 0.  ],
       [0.5 , 0.5 ],
       [0.25, 0.75],
       [0.75, 0.25]])
References
[1] Sou-Cheng T. Choi, Yuhan Ding, Fred J. Hickernell, Lan Jiang, Lluis Antoni Jimenez Rugama, Da Li, Jagadeeswaran Rathinavel, Xin Tong, Kan Zhang, Yizhi Zhang, and Xuan Zhou, GAIL: Guaranteed Automatic Integration Library (Version 2.3) [MATLAB Software], 2019. Available from http://gailgithub.github.io/GAIL_Dev/
[2] F.Y. Kuo & D. Nuyens. Application of quasi-Monte Carlo methods to elliptic PDEs with random diffusion coefficients - a survey of analysis and implementation, Foundations of Computational Mathematics, 16(6):1631-1696, 2016. Springer link: https://link.springer.com/article/10.1007/s10208-016-9329-5; arXiv link: https://arxiv.org/abs/1606.06613
[3] D. Nuyens, The Magic Point Shop of QMC point generators and
generating vectors. MATLAB and Python software, 2018. Available
from https://people.cs.kuleuven.be/~dirk.nuyens/
[4] R. Cools, F.Y. Kuo & D. Nuyens. Constructing embedded lattice rules for multivariate integration. SIAM J. Sci. Comput., 28(6), 2162-2188, 2006.
[5] L’Ecuyer, Pierre & Munger, David. (2015). LatticeBuilder: A
General Software Tool for Constructing Rank-1 Lattice Rules. ACM
Transactions on Mathematical Software. 42. 10.1145/2754929.
__init__(dimension=1, randomize=True, order='natural', seed=None, generating_vector='lattice_vec.3600.20.npy', d_max=None, m_max=None)
Parameters
• dimension (int or ndarray) – dimension of the generator. If an int is passed in, use sequence dimensions [0,...,dimension-1]. If an ndarray is passed in, use these dimension indices in the sequence.
• randomize (bool) – If True, apply shift to generated samples.
Note: Non-randomized lattice sequence includes the origin.
• order (str) – ‘linear’, ‘natural’, or ‘mps’ ordering.
• seed (None or int or numpy.random.SeedSeq) – seed the random
number generator for reproducibility
• generating_vector (ndarray or str) – generating vector or path to a generating vector. An ndarray should have shape (d_max,). A string should be formatted like 'lattice_vec.3600.20.npy', i.e. 'name.d_max.m_max.npy'.
• d_max (int) – maximum dimension
• m_max (int) – 2^m_max is the max number of supported
samples
Note: d_max and m_max are required if generating_vector is an ndarray. If generating_vector is a string (path), d_max and m_max are taken from the file name when None.
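For intuition, an unrandomized rank-1 lattice in 'linear' order is simply x_i = frac(i*z/n) for a generating vector z. A toy sketch (the 2-d vector z=[1,3] below is chosen for illustration and is not taken from lattice_vec.3600.20.npy); it happens to reproduce the order='linear' example above:

```python
import math

def lattice_linear(n, z):
    """Rank-1 lattice points x_i = frac(i * z / n) for i = 0,...,n-1 (linear order)."""
    return [[math.modf(i * zj / n)[0] for zj in z] for i in range(n)]

pts = lattice_linear(4, z=[1, 3])  # toy 2-d generating vector
assert pts == [[0.0, 0.0], [0.25, 0.75], [0.5, 0.5], [0.75, 0.25]]
```

A shift randomization would add one uniform offset per dimension modulo 1, which is why the non-randomized sequence (and only it) includes the origin.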
gen_samples(n=None, n_min=0, n_max=8, warn=True,
return_unrandomized=False) Generate lattice samples
Parameters
• n (int) – if n is supplied, generate from n_min=0 to n_max=n
samples. Otherwise use the n_min and n_max explicitly supplied as
the following 2 arguments
• n_min (int) – Starting index of sequence.
• n_max (int) – Final index of sequence.
• return_unrandomized (bool) – return samples without randomization as the second return value. Not returned if randomize=False.
Returns (n_max-n_min) x d (dimension) array of samples
Return type ndarray
Note: Lattice generates samples in blocks from 2**m to 2**(m+1), so generating n_min=3 to n_max=9 necessarily produces samples from n_min=2 to n_max=16 and automatically subsets. This may be inefficient for non-power-of-2 sample sizes.
3.1.4 Halton
Quasi-Random Halton nets.
>>> h_qrng = Halton(2,randomize='QRNG',generalize=True,seed=7)
>>> h_qrng.gen_samples(4)
array([[0.35362988, 0.38733489],
       [0.85362988, 0.72066823],
       [0.10362988, 0.05400156],
       [0.60362988, 0.498446  ]])
>>> h_qrng.gen_samples(1)
array([[0.35362988, 0.38733489]])
>>> h_qrng
Halton (DiscreteDistribution Object)
    d               2^(1)
    dvec            [0 1]
    randomize       QRNG
    generalize      1
    entropy         7
    spawn_key       ()
>>> h_owen = Halton(2,randomize='OWEN',generalize=False,seed=7)
>>> h_owen.gen_samples(4)
array([[0.64637012, 0.48226667],
       [0.14637012, 0.81560001],
       [0.89637012, 0.14893334],
       [0.39637012, 0.59337779]])
>>> h_owen
Halton (DiscreteDistribution Object)
    d               2^(1)
    dvec            [0 1]
    randomize       OWEN
    generalize      0
    entropy         7
    spawn_key       ()
References
[1] Marius Hofert and Christiane Lemieux (2019). qrng: (Randomized)
Quasi-Random Number Generators. R package version 0.0-7.
https://CRAN.R-project.org/package=qrng.
[2] Owen, A. B. “A randomized Halton algorithm in R,” 2017.
arXiv:1706.02808 [stat.CO]
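Before randomization, the Halton sequence pairs the van der Corput radical inverse in a distinct prime base per dimension. A minimal sketch of that construction (illustrative only, not the 'QRNG' or 'OWEN' implementations):

```python
def radical_inverse(i, b):
    """Van der Corput radical inverse: reflect the base-b digits of i about the radix point."""
    x, f = 0.0, 1.0 / b
    while i > 0:
        i, r = divmod(i, b)
        x += r * f
        f /= b
    return x

def halton_points(n, bases=(2, 3)):
    """First n points of the unrandomized Halton sequence in the given prime bases."""
    return [[radical_inverse(i, b) for b in bases] for i in range(n)]

pts = halton_points(4)
assert pts[0] == [0.0, 0.0] and pts[1][0] == 0.5
```

The randomizations above then scramble or shift these digits; 'OWEN' applies nested digit scrambling as in reference [2].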
__init__(dimension=1, randomize=True, generalize=True,
seed=None)
Parameters
• dimension (int or ndarray) – dimension of the generator. If an int is passed in, use sequence dimensions [0,...,dimension-1]. If an ndarray is passed in, use these dimension indices in the sequence.
• randomize (str/bool) – select a randomization method: 'QRNG' [1] (max dimension = 360, supports generalize=True; default if randomize=True) or 'OWEN' [2].
• generalize (bool) – generalize flag, only applicable to the QRNG
generator
• seed (None or int or numpy.random.SeedSeq) – seed the random
number generator for reproducibility
gen_samples(n=None, n_min=0, n_max=8, warn=True)
Generate samples
Parameters
• n (int) – if n is supplied, generate from n_min=0 to n_max=n
samples. Otherwise use the n_min and n_max explicitly supplied as
the following 2 arguments
• n_min (int) – Starting index of sequence.
• n_max (int) – Final index of sequence.
Returns (n_max-n_min) x d (dimension) array of samples
Return type ndarray
halton_owen(n, n0, d0=0)
See the gen_samples method and [2] Owen, A. B. "A randomized Halton algorithm in R," 2017. arXiv:1706.02808 [stat.CO].
pdf(x)
ABSTRACT METHOD to evaluate the pdf of the distribution the samples mimic at the locations x.
3.1.5 IID Standard Uniform
A wrapper around NumPy’s IID Standard Uniform generator
numpy.random.rand.
>>> dd = IIDStdUniform(dimension=2,seed=7)
>>> dd.gen_samples(4)
array([[0.04386058, 0.58727432],
       [0.3691824 , 0.65212985],
       [0.69669968, 0.10605352],
       [0.63025643, 0.13630282]])
>>> dd
IIDStdUniform (DiscreteDistribution Object)
__init__(dimension=1, seed=None)
Parameters
• dimension (int) – dimension of samples
• seed (None or int or numpy.random.SeedSeq) – seed the random number generator for reproducibility
gen_samples(n)
Generate samples
Returns n x self.d array of samples
QMCPy, Release 1.3
Return type ndarray
pdf(x)
ABSTRACT METHOD to evaluate the pdf of the distribution the samples mimic at the locations x.
3.2 True Measure Class
3.2.1 Abstract Measure Class
gen_samples(*args, **kwargs) Generate samples from the discrete
distribution and transform them via the transform method.
Parameters
• kwargs (dict) – keyword arguments to the discrete distribution's gen_samples method
Returns n x d matrix of transformed samples
Return type ndarray
spawn(s=1, dimensions=None) Spawn new instances of the current
discrete distribution but with new seeds and dimensions. Developed
for multi-level and multi-replication (Q)MC algorithms.
Parameters
• s (int) – number of copies to spawn
• dimensions (ndarray) – length s array of dimensions for each spawn. Defaults to current dimension
Returns list of TrueMeasures linked to newly spawned
DiscreteDistributions
Return type list
>>> u = Uniform(DigitalNetB2(2,seed=7),lower_bound=[0,.5],upper_bound=[2,3])
>>> u.gen_samples(4)
array([[1.12538017, 0.93444992],
       [0.693306  , 2.12676579],
       [1.64149095, 2.88726434],
       [0.20844522, 1.73645241]])
>>> u
Uniform (TrueMeasure Object)
__init__(sampler, lower_bound=0.0, upper_bound=1.0)
Parameters
• sampler (DiscreteDistribution/TrueMeasure) – A discrete distribution from which to transform samples or a true measure by which to compose a transform
• lower_bound (float) – a for Uniform(a,b)
• upper_bound (float) – b for Uniform(a,b)
3.2.3 Gaussian
Normal Measure.
array([[ 2.45994002,  2.17853622],
       [-0.22923897, -1.92667105],
       [ 4.6127697 ,  4.25820377]])
>>> g
Gaussian (TrueMeasure Object)
    covariance      [[...]
                     [4 5]]
    decomp_type     PCA
Parameters
3.2. True Measure Class 19
• mean (float) – mu for Normal(mu,sigma^2)
• covariance (ndarray) – sigma^2 for Normal(mu,sigma^2). A float or
d (dimension) vector input will be extended to
covariance*eye(d)
• decomp_type (str) – method of decomposition, either "PCA" for principal component analysis or "Cholesky" for Cholesky decomposition.
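To illustrate decomp_type: with any factor A satisfying A A^T = Sigma (from PCA or, as sketched below, a Cholesky factorization), standard normal draws z map to correlated draws mu + A z. A pure-Python sketch, not QMCPy's internals:

```python
import math

def cholesky(cov):
    """Lower-triangular L with L L^T = cov (dense, no pivoting)."""
    d = len(cov)
    L = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(cov[i][i] - s) if i == j else (cov[i][j] - s) / L[j][j]
    return L

def gaussian_transform(z, mean, cov):
    """Map a standard-normal vector z to a Normal(mean, cov) draw via mean + L z."""
    L = cholesky(cov)
    return [mean[i] + sum(L[i][k] * z[k] for k in range(len(z))) for i in range(len(z))]

x = gaussian_transform([1.0, -1.0], mean=[0.0, 0.0], cov=[[2.0, 1.0], [1.0, 2.0]])
```

PCA uses the eigendecomposition instead of L, which orders the transformed coordinates by explained variance; both choices yield the same Gaussian distribution.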
class qmcpy.true_measure.gaussian.Normal(sampler, mean=0.0, covariance=1.0, decomp_type='PCA')
3.2.4 Brownian Motion
[1.97549563, 2.27002956, 2.92802765, 4.77126959]])
>>> bm
BrownianMotion (TrueMeasure Object)
    time_vec        [0.5 1.  1.5 2. ]
    drift           2^(1)
    mean            [1. 2. 3. 4.]
    covariance      [[0.5 0.5 0.5 0.5]
                     [0.5 1.  1.  1. ]
                     [0.5 1.  1.5 1.5]
                     [0.5 1.  1.5 2. ]]
    decomp_type     PCA
Parameters
• drift (int) – Gaussian mean is time_vec*drift
• decomp_type (str) – method of decomposition, either "PCA" for principal component analysis or "Cholesky" for Cholesky decomposition.
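The mean and covariance printed above follow directly from the definition: Brownian motion with drift delta has E[B(t)] = delta*t and Cov(B(s), B(t)) = min(s, t). A sketch of how the doctest's moments arise:

```python
def bm_moments(time_vec, drift):
    """Mean drift*t and covariance min(s, t) of drifted Brownian motion at the monitoring times."""
    mean = [drift * t for t in time_vec]
    cov = [[min(s, t) for t in time_vec] for s in time_vec]
    return mean, cov

mean, cov = bm_moments([0.5, 1.0, 1.5, 2.0], drift=2)
assert mean == [1.0, 2.0, 3.0, 4.0]    # matches the doctest's mean
assert cov[3] == [0.5, 1.0, 1.5, 2.0]  # matches the doctest's last covariance row
```

Samples are then drawn as a multivariate Gaussian with these moments, using the chosen decomp_type.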
3.2.5 Lebesgue
class qmcpy.true_measure.lebesgue.Lebesgue(sampler)
transform Gaussian (TrueMeasure Object)
    mean            0
    covariance      1
    decomp_type     PCA
transform Uniform (TrueMeasure Object)
    lower_bound     0
    upper_bound     1
__init__(sampler)
Parameters sampler (TrueMeasure) – A true measure by which to
compose a transform.
3.2.6 Continuous Bernoulli
>>> bc
BernoulliCont (TrueMeasure Object)
• sampler (DiscreteDistribution/TrueMeasure) – A discrete
distribution from which to transform samples or a true measure by
which to compose a transform
• lam (ndarray) – 0 < lambda < 1, a shape parameter,
independent for each dimension
3.2.7 Johnson’s SU
>>> jsu = JohnsonsSU(DigitalNetB2(2,seed=7),gamma=1,xi=2,delta=3,lam=4)
>>> jsu.gen_samples(4)
array([[ 0.86224892, -0.76967276],
       [ 0.07317047,  1.17727769],
       [ 1.89093286,  2.9341619 ],
       [-1.30283298,  0.62269632]])
>>> jsu
JohnsonsSU (TrueMeasure Object)
    gamma           1
    xi              2^(1)
    delta           3
    lam             2^(2)
See https://en.wikipedia.org/wiki/Johnson%27s_SU-distribution
Parameters
• gamma (ndarray) – gamma
• xi (ndarray) – xi
>>> k = Kumaraswamy(DigitalNetB2(2,seed=7),a=[1,2],b=[3,4])
>>> k.gen_samples(4)
array([[0.24096272, 0.21587652],
       [0.13227662, 0.4808615 ],
       [0.43615893, 0.73428949],
       [0.03602294, 0.39602319]])
>>> k
Kumaraswamy (TrueMeasure Object)
See https://en.wikipedia.org/wiki/Kumaraswamy_distribution
Parameters
• a (ndarray) – alpha > 0
• b (ndarray) – beta > 0
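Kumaraswamy sampling reduces to inverting the closed-form CDF F(x) = 1 - (1 - x^a)^b; a sketch of the inverse transform:

```python
def kumaraswamy_inv_cdf(u, a, b):
    """Inverse CDF of Kumaraswamy(a, b): the x in (0,1) with 1 - (1 - x**a)**b == u."""
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

x = kumaraswamy_inv_cdf(0.5, a=1.0, b=3.0)
# round-trip through the CDF recovers u
assert abs(1.0 - (1.0 - x ** 1.0) ** 3.0 - 0.5) < 1e-12
```

Feeding low-discrepancy uniforms from the sampler through this map yields the doctest's Kumaraswamy points, with a and b applied dimension-wise.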
[1.6995412 , 3.88553573],
       [2.79502629, 5.24025887],
       [1.30634136, 3.45650562]])
>>> triangular
SciPyWrapper (TrueMeasure Object)
    scipy_distrib   triang
    c               [0.1 0.2]
    loc             [1 2]
    scale           [3 4]
__init__(sampler, scipy_distrib, **scipy_distrib_kwargs)
Parameters
• scipy_distrib – a CONTINUOUS UNIVARIATE scipy.stats distribution, e.g. scipy.stats.norm; see https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions
• **scipy_distrib_kwargs – keyword arguments for scipy_distrib.{pdf, ppf, or pmf}. Note that you may pass in vectors of keyword arguments, and they will be distributed appropriately across the dimensions. Also note that positional arguments to scipy_distrib.{pdf, ppf, or pmf} must still be supplied as keyword arguments to QMCPy's SciPyWrapper; see e.g. the doctest above and https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.triang.html#scipy.stats.triang
3.3 Integrand Class
3.3.1 Abstract Integrand Class
__init__(dprime, parallel)
• dprime (tuple) – function output dimension shape.
• parallel (int) – If parallel is False, 0, or 1: function
evaluation is done in serial fashion. Otherwise, parallel specifies
the number of CPUs used by multiprocessing.Pool. Passing
parallel=True sets the number of CPUs equal to
os.cpu_count().
bound_fun(bound_low, bound_high)
Compute the bounds on the combined function based on bounds for the individual functions. Defaults to the identity where we essentially do not combine integrands, but instead integrate each function individually.
Parameters
Returns
• (ndarray): lower bound on function combining estimates
• (ndarray): upper bound on function combining estimates
• (ndarray): bool flags to override sufficient combined integrand estimation, e.g., when approximating a ratio of integrals, if the denominator's bounds straddle 0, then returning True here forces the ratio to be flagged as insufficiently approximated.
dependency(flags_comb)
Takes a vector of indicators of whether or not the error bound is satisfied for combined integrands and returns flags for individual integrands. For example, if we are taking the ratio of 2 individual integrands, then getting flags_comb=True means the ratio has not been approximated to within the tolerance, so the dependency function should return [True,True], indicating that both the numerator and denominator integrands need to be better approximated.
Parameters flags_comb (bool ndarray) – flags indicating whether the combined integrals are insufficiently approximated
Returns length (Integrand.dprime) flags for individual
integrands
Return type (bool ndarray)
f(x, *args, **kwargs)
Evaluate transformed integrand based on true measures and discrete distribution
Parameters
• x (ndarray) – n x d array of samples from a discrete
distribution
• periodization_transform (str) – periodization transform
• **kwargs (dict) – other keyword args to g
Returns length n vector of function evaluations
g(t, *args, **kwargs) ABSTRACT METHOD for original integrand to be
integrated.
Parameters t (ndarray) – n x d array of samples to be input into
original integrand.
Returns n vector of function evaluations
Return type ndarray
spawn(levels) Spawn new instances of the current integrand at the
specified levels.
Parameters levels (ndarray) – array of levels at which to spawn new
integrands
Returns list of Integrands linked to newly spawned TrueMeasures and
DiscreteDistributions
Return type list
3.3.2 Custom Function
>>> cf = CustomFun(
...     true_measure = Gaussian(DigitalNetB2(2,seed=7),mean=[1,2]),
...     g = lambda x: x[:,0]**2*x[:,1],
...     dprime = 1)
>>> x = cf.discrete_distrib.gen_samples(2**10)
>>> y = cf.f(x)
>>> y.shape
(1024, 1)
>>> y.mean()
3.995...
>>> cf = CustomFun(
...     true_measure = Uniform(DigitalNetB2(3,seed=7),lower_bound=[2,3,4],upper_bound=[4,5,6]),
...     g = lambda x,compute_flags=None: x,
...     dprime = 3)
>>> x = cf.discrete_distrib.gen_samples(2**10)
>>> y = cf.f(x)
>>> y.shape
(1024, 3)
>>> y.mean(0)
array([3., 4., 5.])
__init__(true_measure, g, dprime=1, parallel=False)
Parameters
• dprime (tuple) – function output dimension shape.
• parallel (int) – If parallel is False, 0, or 1: function
evaluation is done in serial fashion. Otherwise, parallel specifies
the number of CPUs used by multiprocessing.Pool. Passing
parallel=True sets the number of CPUs equal to os.cpu_count(). Do
NOT set g to a lambda function when doing parallel
computation
3.3. Integrand Class 25
g(t, *args, **kwargs) ABSTRACT METHOD for original integrand to be
integrated.
Parameters t (ndarray) – n x d array of samples to be input into
original integrand.
Returns n vector of function evaluations
Return type ndarray
3.3.3 Keister Function
class qmcpy.integrand.keister.Keister(sampler)
Keister integrand: g(x) = pi^(d/2) cos(||x||_2).
The standard example integrates the Keister integrand with respect to an IID Gaussian distribution with variance 1/2.
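A direct sketch of evaluating g(x) = pi^(d/2) cos(||x||_2) on an n x d array of points (illustrative pure Python, not QMCPy's vectorized implementation):

```python
import math

def keister_g(t):
    """Keister integrand g(x) = pi**(d/2) * cos(||x||_2), evaluated row-wise."""
    d = len(t[0])
    return [math.pi ** (d / 2) * math.cos(math.sqrt(sum(x * x for x in row))) for row in t]

y = keister_g([[0.0, 0.0]])
assert abs(y[0] - math.pi) < 1e-12  # d=2 at the origin: pi * cos(0)
```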
>>> k = Keister(DigitalNetB2(2,seed=7))
>>> x = k.discrete_distrib.gen_samples(2**10)
>>> y = k.f(x)
>>> y.mean()
1.808...
>>> k.true_measure
Gaussian (TrueMeasure Object)
    mean            0
    covariance      2^(-1)
    decomp_type     PCA
>>> k = Keister(Gaussian(DigitalNetB2(2,seed=7),mean=0,covariance=2))
>>> x = k.discrete_distrib.gen_samples(2**12)
>>> y = k.f(x)
>>> y.mean()
1.808...
>>> yp = k.f(x,periodization_transform='c2sin')
>>> yp.mean()
1.807...
References
[1] B. D. Keister, Multidimensional Quadrature Algorithms,
Computers in Physics, 10, pp. 119-122, 1996.
__init__(sampler)
Parameters sampler (DiscreteDistribution/TrueMeasure) – A discrete distribution from which to transform samples or a true measure by which to compose a transform
exact_integ(d)
Computes the true value of the Keister integral in dimension d. Accuracy may degrade as d increases due to round-off error.
Parameters d (int) – dimension
Returns true_integral
g(t) ABSTRACT METHOD for original integrand to be integrated.
Parameters t (ndarray) – n x d array of samples to be input into
original integrand.
Returns n vector of function evaluations
Return type ndarray
3.3.4 Box Integral
B_s(x) = (x_1^2 + ... + x_d^2)^(s/2)
References:
[2] https://www.davidhbailey.com/dhbpapers/boxintegrals.pdf
Parameters
• sampler (DiscreteDistribution/TrueMeasure) – A discrete
distribution from which to transform samples or a true measure by
which to compose a transform
• s (list or ndarray) – vectorized s parameter; len(s) is the number of vectorized integrals to evaluate.
g(t, **kwargs) ABSTRACT METHOD for original integrand to be
integrated.
Parameters t (ndarray) – n x d array of samples to be input into
original integrand.
Returns n vector of function evaluations
Return type ndarray
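The vectorized s parameter means g returns one column per s value: each point x contributes (x_1^2 + ... + x_d^2)^(s/2) for every s. A sketch:

```python
def box_integrand(x, s_vals):
    """Evaluate (sum_j x_j**2)**(s/2) for each row of x and each s in s_vals."""
    return [[sum(xj * xj for xj in row) ** (s / 2.0) for s in s_vals] for row in x]

y = box_integrand([[3.0, 4.0]], s_vals=[-1, 1])
# sum of squares is 25, so s=-1 gives 1/5 and s=1 gives 5
assert abs(y[0][0] - 0.2) < 1e-12 and abs(y[0][1] - 5.0) < 1e-12
```

This is why the CubQMCNetG doctest below obtains a length-2 solution array from BoxIntegral(dnb2, s=[-1,1]).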
3.3.5 European Option
European financial option.
    volatility      2^(-1)
    call_put        put
    start_price     30
    strike_price    35
    interest_rate   0
>>> x = eo.discrete_distrib.gen_samples(2**12)
>>> y = eo.f(x)
>>> y.mean()
9.209...
>>> eo = EuropeanOption(BrownianMotion(DigitalNetB2(4,seed=7),drift=1),call_put='put')
>>> x = eo.discrete_distrib.gen_samples(2**12)
>>> y = eo.f(x)
>>> y.mean()
9.162...
>>> eo.get_exact_value()
9.211452976234058
__init__(sampler, volatility=0.5, start_price=30, strike_price=35,
interest_rate=0, t_final=1, call_put='call')
Parameters
• start_price (float) – S(0), the asset value at t=0
• strike_price (float) – strike_price, the call/put offer
• interest_rate (float) – r, the annual interest rate
• t_final (float) – exercise time
g(t) See abstract method.
get_exact_value() Get the fair price of a European call/put
option.
Returns fair price
Return type float
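For a European option under the geometric Brownian motion model used here, the fair price has the closed-form Black-Scholes expression. A self-contained sketch of the put case (illustrative; it should match the doctest value above for the default parameters, though QMCPy's own routine may be organized differently):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_put(s0, k, r, sigma, t):
    """Black-Scholes fair price of a European put option."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return k * math.exp(-r * t) * norm_cdf(-d2) - s0 * norm_cdf(-d1)

# the defaults above: volatility=0.5, start_price=30, strike_price=35, interest_rate=0, t_final=1
price = black_scholes_put(s0=30, k=35, r=0.0, sigma=0.5, t=1.0)
```

The QMC estimates in the doctest (9.209..., 9.162...) approximate this exact value from discounted payoff samples.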
3.3.6 Asian Option
Asian financial option.
AsianOption (Integrand Object)
    volatility      2^(-1)
    call_put        call
    start_price     30
    strike_price    35
    interest_rate   0
    mean_type       arithmetic
    dim_frac        0
>>> x = ac.discrete_distrib.gen_samples(2**12)
>>> y = ac.f(x)
>>> y.mean()
1.768...
>>> level_dims = [2,4,8]
>>> ac2_multilevel = AsianOption(DigitalNetB2(seed=7),multilevel_dims=level_dims)
>>> levels_to_spawn = arange(ac2_multilevel.max_level+1)
>>> ac2_single_levels = ac2_multilevel.spawn(levels_to_spawn)
>>> yml = 0
>>> for ac2_single_level in ac2_single_levels:
...     x = ac2_single_level.discrete_distrib.gen_samples(2**12)
...     level_est = ac2_single_level.f(x).mean()
...     yml += level_est
>>> yml
1.779...
__init__(sampler, volatility=0.5, start_price=30.0,
strike_price=35.0, interest_rate=0.0, t_final=1, call_put='call',
mean_type='arithmetic', multilevel_dims=None, _dim_frac=0)
Parameters
• start_price (float) – S(0), the asset value at t=0
• strike_price (float) – strike_price, the call/put offer
• interest_rate (float) – r, the annual interest rate
• t_final (float) – exercise time
• mean_type (string) – ‘arithmetic’ or ‘geometric’ mean
• multilevel_dims (list of ints) – list of dimensions at each
level. Leave as None for single-level problems
• _dim_frac (float) – for internal use only, users should not set
this parameter.
g(t) ABSTRACT METHOD for original integrand to be integrated.
Parameters t (ndarray) – n x d array of samples to be input into
original integrand.
Returns n vector of function evaluations
Return type ndarray
class qmcpy.integrand.ml_call_options.MLCallOptions(sampler, option='european', volatility=0.2, start_strike_price=100.0, interest_rate=0.05, t_final=1.0, _level=0)
Various call options from finance using Milstein discretization with 2^l timesteps on level l.
>>> mlco_original = MLCallOptions(DigitalNetB2(seed=7))
>>> mlco_original
MLCallOptions (Integrand Object)
    option          european
    sigma           0.200
    k               100
    r               0.050
    t               1
    b               85
    level           0
>>> mlco_ml_dims = mlco_original.spawn(levels=arange(4))
>>> yml = 0
>>> for mlco in mlco_ml_dims:
...     x = mlco.discrete_distrib.gen_samples(2**10)
...     yml += mlco.f(x).mean()
>>> yml
10.393...
References:
[1] M.B. Giles. Improved multilevel Monte Carlo convergence using
the Milstein scheme. 343-358, in Monte Carlo and Quasi-Monte Carlo
Methods 2006, Springer, 2008. http://people.maths.ox.ac.uk/~gilesm/
files/mcqmc06.pdf.
__init__(sampler, option='european', volatility=0.2,
start_strike_price=100.0, interest_rate=0.05, t_final=1.0,
_level=0)
Parameters
• sampler (DiscreteDistribution/TrueMeasure) – A discrete
distribution from which to transform samples or a true measure by
which to compose a transform
• option (str) – type of option in ["European","Asian"]
• volatility (float) – sigma, the volatility of the asset
• start_strike_price (float) – S(0), the asset value at t=0, and K,
the strike price. Assume start_price = strike_price
• interest_rate (float) – r, the annual interest rate
• t_final (float) – exercise time
• _level (int) – for internal use only, users should not set this
parameter.
g(t)
Parameters t (ndarray) – Gaussian(0,1^2) samples
Returns First, an ndarray of length 6 containing sums of summary statistics. Second, a float giving the cost on this level.
Return type tuple
3.3.8 Linear Function
__init__(sampler)
Parameters sampler (DiscreteDistribution/TrueMeasure) – A discrete distribution from which to transform samples or a true measure by which to compose a transform
g(t) ABSTRACT METHOD for original integrand to be integrated.
Parameters t (ndarray) – n x d array of samples to be input into
original integrand.
Returns n vector of function evaluations
Return type ndarray
3.3.9 Sobol’ Indices
[0.33884667, 0.33857811, 0.33884115]])
>>> data
LDTransformData (AccumulateData Object)
    solution        [[0.328 0.328 0.328]
                     [0.339 0.339 0.339]]
    indv_error      [[0.002 0.002 0.002]
                     [0.002 0.002 0.002]
                     [0.    0.    0.   ]
                     [0.    0.    0.   ]
                     [0.001 0.001 0.001]
                     [0.003 0.003 0.003]]
    ci_low          [[1.67  1.67  1.671]
                     [1.725 1.724 1.725]
                     [2.168 2.168 2.168]
                     [2.168 2.168 2.168]
                     [9.799 9.799 9.799]
                     [9.797 9.797 9.797]]
    ci_high         [[1.675 1.674 1.675]
                     [1.73  1.729 1.73 ]
                     [2.168 2.168 2.168]
                     [2.169 2.169 2.169]
                     [9.802 9.802 9.802]
                     [9.803 9.803 9.803]]
    ci_comb_low     [[0.327 0.327 0.328]
                     [0.338 0.338 0.338]]
    ci_comb_high    [[0.329 0.329 0.329]
                     [0.34  0.339 0.34 ]]
    flags_comb      [[False False False]
                     [False False False]]
    flags_indv      [[False False False]
                     [False False False]
                     [False False False]
                     [False False False]
                     [False False False]
                     [False False False]]
    n_total         2^(16)
    n               [[65536. 65536. 65536.]
                     [32768. 32768. 32768.]
                     [65536. 65536. 65536.]
                     [32768. 32768. 32768.]
                     [65536. 65536. 65536.]
                     [32768. 32768. 32768.]]
    time_integrate  ...
CubQMCNetG (StoppingCriterion Object)
    abs_tol         0.001
    rel_tol         0
    n_init          2^(10)
    n_max           2^(35)
SobolIndices (Integrand Object)
    indices         [[0]
                     [1]
                     [2]]
Gaussian (TrueMeasure Object)
    mean            0
    covariance      2^(-1)
    decomp_type     PCA
DigitalNetB2 (DiscreteDistribution Object)
    d               6
    dvec            [0 1 2 3 4 5]
    randomize       LMS_DS
    graycode        0
    entropy         7
    spawn_key       (0,)
References
[1] Art B. Owen. Monte Carlo theory, methods and examples. 2013. Appendix A.
__init__(integrand, indices='singletons')
Parameters
• integrand (Integrand) – integrand to find Sobol' indices of
• indices (list of lists) – each element of indices should be a list of indices, u, at which to compute the Sobol' indices. The default indices='singletons' sets indices=[[0],[1],...,[d-1]]. Should not include [], the null set.
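For background, the closed first-order Sobol' index of a subset u is the fraction of the variance of f explained by the inputs in u. A plain Monte Carlo pick-freeze sketch (illustrative only, not QMCPy's QMC estimator):

```python
import random

def first_order_sobol(f, d, u, n=100000, seed=7):
    """Pick-freeze estimate of the closed Sobol' index S_u = Var(E[f|x_u]) / Var(f)."""
    rng = random.Random(seed)
    prod_sum = f_sum = f_sq_sum = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(d)]
        y = [rng.random() for _ in range(d)]
        z = [x[j] if j in u else y[j] for j in range(d)]  # freeze the coordinates in u
        fx, fz = f(x), f(z)
        prod_sum += fx * fz
        f_sum += fx
        f_sq_sum += fx * fx
    mean = f_sum / n
    var = f_sq_sum / n - mean * mean
    return (prod_sum / n - mean * mean) / var

# additive test function: each of the 3 coordinates carries 1/3 of the variance
s_0 = first_order_sobol(lambda x: sum(x), d=3, u=[0])
```

QMCPy replaces the IID draws with low-discrepancy points and tracks numerator and denominator as separate integrands, which is why bound_fun and dependency below handle ratios of estimates.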
bound_fun(bound_low, bound_high)
Compute the bounds on the combined function based on bounds for the individual functions. Defaults to the identity where we essentially do not combine integrands, but instead integrate each function individually.
Parameters
Returns
• (ndarray): lower bound on function combining estimates
• (ndarray): upper bound on function combining estimates
• (ndarray): bool flags to override sufficient combined integrand
estimation, e.g., when ap- proximating a ratio of integrals, if the
denominator’s bounds straddle 0, then returning True here forces
ratio to be flagged as insufficiently approximated.
dependency(flags_comb)
Takes a vector of indicators of whether or not the error bound is satisfied for combined integrands and returns flags for individual integrands. For example, if we are taking the ratio of 2 individual integrands, then getting flags_comb=True means the ratio has not been approximated to within the tolerance, so the dependency function should return [True,True], indicating that both the numerator and denominator integrands need to be better approximated.
Parameters flags_comb (bool ndarray) – flags indicating whether the combined integrals are insufficiently approximated
Returns length (Integrand.dprime) flags for individual
integrands
Return type (bool ndarray)
f(x, *args, **kwargs) Evaluate transformed integrand based on true
measures and discrete distribution
Parameters
• x (ndarray) – n x d array of samples from a discrete
distribution
• periodization_transform (str) – periodization transform
• **kwargs (dict) – other keyword args to g
Returns length n vector of function evaluations
class qmcpy.stopping_criterion._stopping_criterion.StoppingCriterion(allowed_levels, allowed_distribs, allow_vectorized_integrals)
__init__(allowed_levels, allowed_distribs,
allow_vectorized_integrals)
• allowed_levels (list) – which integrand types are supported: 'single', 'fixed-multi', 'adaptive-multi'
• allowed_distribs (list) – list of compatible DiscreteDistribution
classes
integrate() ABSTRACT METHOD to determine the number of samples
needed to satisfy the tolerance.
Returns
• data (AccumulateData): an AccumulateData object
Return type tuple
set_tolerance(*args, **kwargs) ABSTRACT METHOD to reset the
absolute tolerance.
class qmcpy.stopping_criterion.cub_qmc_net_g.CubQMCNetG(integrand, abs_tol=0.01, rel_tol=0.0, n_init=1024.0, n_max=34359738368.0, fudge=<function CubQMCNetG.<lambda>>, check_cone=False, control_variates=[], control_variate_means=[], update_beta=False, error_fun=<function CubQMCNetG.<lambda>>)
Quasi-Monte Carlo method using Sobol’ cubature over the
d-dimensional region to integrate within a specified generalized
error tolerance with guarantees under Walsh-Fourier coefficients
cone decay assumptions.
>>> k = Keister(DigitalNetB2(2,seed=7))
>>> sc = CubQMCNetG(k,abs_tol=.05)
>>> solution,data = sc.integrate()
>>> data
LDTransformData (AccumulateData Object)
    solution        1.809
    indv_error      0.005
    ci_low          1.804
    ci_high         1.814
    ci_comb_low     1.804
    ci_comb_high    1.814
    flags_comb      0
    flags_indv      0
    n_total         2^(10)
    n               2^(10)
    time_integrate  ...
CubQMCNetG (StoppingCriterion Object)
    abs_tol         0.050
    rel_tol         0
    n_init          2^(10)
    n_max           2^(35)
Keister (Integrand Object)
Gaussian (TrueMeasure Object)
    mean            0
    covariance      2^(-1)
    decomp_type     PCA
DigitalNetB2 (DiscreteDistribution Object)
    d               2^(1)
    dvec            [0 1]
    randomize       LMS_DS
    graycode        0
    entropy         7
    spawn_key       ()
>>> dd = DigitalNetB2(3,seed=7)
>>> g1 = CustomFun(Uniform(dd,0,2),lambda t: 10*t[:,0]-5*t[:,1]**2+t[:,2]**3)
>>> cv1 = CustomFun(Uniform(dd,0,2),lambda t: t[:,0])
>>> cv2 = CustomFun(Uniform(dd,0,2),lambda t: t[:,1]**2)
>>> sc = CubQMCNetG(g1,abs_tol=1e-6,check_cone=True,
...     control_variates = [cv1,cv2],
...     control_variate_means = [1,4/3])
>>> sol,data = sc.integrate()
>>> sol
array([5.33333333])
>>> exactsol = 16/3
>>> abs(sol-exactsol)<1e-6
array([ True])
>>> dnb2 = DigitalNetB2(3,seed=7)
>>> f = BoxIntegral(dnb2, s=[-1,1])
>>> abs_tol = 1e-3
>>> sc = CubQMCNetG(f, abs_tol=abs_tol)
>>> solution,data = sc.integrate()
>>> solution
array([1.18944142, 0.96064165])
>>> sol3neg1 = -pi/4-1/2*log(2)+log(5+3*sqrt(3))
>>> sol31 = sqrt(3)/4+1/2*log(2+sqrt(3))-pi/24
>>> true_value = array([sol3neg1,sol31])
>>> (abs(true_value-solution)<abs_tol).all()
True
>>> f2 = BoxIntegral(dnb2,s=[3,4])
>>> sc = CubQMCNetG(f2,control_variates=f,control_variate_means=true_value,update_beta=True)
>>> solution,data = sc.integrate()
>>> solution
array([1.10168119, 1.26661293])
>>> data
LDTransformData (AccumulateData Object)
    solution        [1.102 1.267]
    indv_error      [0.002 0.005]
    ci_low          [1.099 1.262]
    ci_high         [1.104 1.271]
    ci_comb_low     [1.099 1.262]
    ci_comb_high    [1.104 1.271]
    flags_comb      [False False]
    flags_indv      [False False]
    n_total         2^(10)
    n               [1024. 1024.]
    time_integrate  ...
CubQMCNetG (StoppingCriterion Object)
    abs_tol         0.010
    rel_tol         0
    n_init          2^(10)
    n_max           2^(35)
    cv              BoxIntegral (Integrand Object)
                        s               [-1 1]
    cv_mu           [1.19  0.961]
    update_beta     1
BoxIntegral (Integrand Object)
    s               [3 4]
Uniform (TrueMeasure Object)
    lower_bound     0
    upper_bound     1
DigitalNetB2 (DiscreteDistribution Object)
    d               3
    dvec            [0 1 2]
    randomize       LMS_DS
    graycode        0
    solution        [[1. 2. 3.]
                     [4. 5. 6.]]
    indv_error      [[2.825e-08 6.101e-07 2.456e-10]
                     [4.547e-12 3.725e-07 3.499e-09]]
    ci_low          [[1. 2. 3.]
                     [4. 5. 6.]]
    ci_high         [[1. 2. 3.]
                     [4. 5. 6.]]
    ci_comb_low     [[1. 2. 3.]
                     [4. 5. 6.]]
    ci_comb_high    [[1. 2. 3.]
                     [4. 5. 6.]]
    flags_comb      [[False False False]
                     [False False False]]
    flags_indv      [[False False False]
                     [False False False]]
    n_total         2^(13)
    n               [[2048. 1024. 1024.]
                     [8192. 4096. 2048.]]
    time_integrate  ...
CubQMCNetG (StoppingCriterion Object)
    abs_tol         1.00e-06
    rel_tol         0
    n_init          2^(10)
    n_max           2^(35)
CustomFun (Integrand Object)
Uniform (TrueMeasure Object)
    lower_bound     0
    upper_bound     1
DigitalNetB2 (DiscreteDistribution Object)
    d               6
    dvec            [0 1 2 3 4 5]
    randomize       LMS_DS
    graycode        0
    entropy         7
    spawn_key       ()
Original Implementation:
References
[1] Fred J. Hickernell and Lluis Antoni Jimenez Rugama, Reliable
adaptive cubature using digital sequences, 2014. Submitted for
publication: arXiv:1410.8615.
[2] Sou-Cheng T. Choi, Yuhan Ding, Fred J. Hickernell, Lan Jiang, Lluis Antoni Jimenez Rugama, Da Li, Jagadeeswaran Rathinavel, Xin Tong, Kan Zhang, Yizhi Zhang, and Xuan Zhou, GAIL: Guaranteed Automatic Integration Library (Version 2.3) [MATLAB Software], 2019. Available from http://gailgithub.github.io/GAIL_Dev/
Guarantee: This algorithm computes the integral of real valued functions in [0,1]^d with a prescribed generalized error tolerance. The Fourier coefficients of the integrand are assumed to be absolutely convergent. If the algorithm terminates without warning messages, the output is given with guarantees under the assumption that the integrand lies inside a cone of functions. The guarantee is based on the decay rate of the Fourier coefficients. For integration over domains other than [0,1]^d, this cone condition applies to f composed with the transformation function from [0,1]^d to the desired region. For more details on how the cone is defined, please refer to the references.
__init__(integrand, abs_tol=0.01, rel_tol=0.0, n_init=1024.0, n_max=34359738368.0, fudge=<function CubQMCNetG.<lambda>>, check_cone=False, control_variates=[], control_variate_means=[], update_beta=False, error_fun=<function CubQMCNetG.<lambda>>)
Parameters
• abs_tol (float) – absolute error tolerance
• rel_tol (float) – relative error tolerance
• n_init (int) – initial number of samples
• n_max (int) – maximum number of samples
• fudge (function) – positive function multiplying the finite sum of fast Fourier coefficients specified in the cone of functions
• check_cone (boolean) – check if the function falls in the
cone
• control_variates (list) – list of integrand objects to be used as
control variates. Control variates are currently only compatible
with single level problems. The same discrete distribution
instance must be used for the integrand and each of the control
variates.
• control_variate_means (list) – list of means for each control
variate
• update_beta (bool) – update control variate beta coefficients at
each iteration?
• error_fun – function taking in the approximate solution vector,
absolute tolerance, and relative tolerance which returns the
approximate error. Default indicates integration until either
absolute OR relative tolerance is satisfied.
38 Chapter 3. QMCPy Documentation
3.4.3 Guaranteed Lattice Cubature (QMC)
class qmcpy.stopping_criterion.cub_qmc_lattice_g.CubQMCLatticeG(integrand,
abs_tol=0.01, rel_tol=0.0, n_init=1024.0, n_max=34359738368.0,
fudge=<function CubQMCLatticeG.<lambda>>, check_cone=False,
ptransform='Baker', error_fun=<function CubQMCLatticeG.<lambda>>)
Stopping criterion for a quasi-Monte Carlo method using rank-1
lattice cubature over a d-dimensional region to integrate within a
specified generalized error tolerance with guarantees under Fourier
coefficient cone decay assumptions.
>>> k = Keister(Lattice(2,seed=7))
>>> sc = CubQMCLatticeG(k,abs_tol=.05)
>>> solution,data = sc.integrate()
>>> data
LDTransformData (AccumulateData Object)
    solution        1.810
    indv_error      0.005
    ci_low          1.806
    ci_high         1.815
    ci_comb_low     1.806
    ci_comb_high    1.815
    flags_comb      0
    flags_indv      0
    n_total         2^(10)
    n               2^(10)
    time_integrate  ...
QMCPy, Release 1.3
CubQMCLatticeG (StoppingCriterion Object)
    abs_tol         0.050
    rel_tol         0
    n_init          2^(10)
    n_max           2^(35)
Keister (Integrand Object)
Gaussian (TrueMeasure Object)
    mean            0
    covariance      2^(-1)
    decomp_type     PCA
Lattice (DiscreteDistribution Object)
    d               2^(1)
    dvec            [0 1]
    randomize       1
    order           natural
    entropy         7
    spawn_key       ()
>>> f = BoxIntegral(Lattice(3,seed=7), s=[-1,1])
>>> abs_tol = 1e-3
>>> sc = CubQMCLatticeG(f, abs_tol=abs_tol)
>>> solution,data = sc.integrate()
>>> solution
array([1.18954582, 0.96056304])
>>> sol3neg1 = -pi/4-1/2*log(2)+log(5+3*sqrt(3))
>>> sol31 = sqrt(3)/4+1/2*log(2+sqrt(3))-pi/24
>>> true_value = array([sol3neg1,sol31])
>>> (abs(true_value-solution)<abs_tol).all()
True
>>> cf = CustomFun(
...     true_measure = Uniform(Lattice(6,seed=7)),
...     g = lambda x,compute_flags=None: (2*arange(1,7)*x).reshape(-1,2,3),
...     dprime = (2,3))
>>> sol,data = CubQMCLatticeG(cf,abs_tol=1e-6).integrate()
>>> data
LDTransformData (AccumulateData Object)
    solution        [[1. 2. 3.]
                     [4. 5. 6.]]
    indv_error      [[9.658e-07 4.835e-07 7.244e-07]
                     [9.655e-07 3.017e-07 3.625e-07]]
    ci_low          [[1. 2. 3.]
                     [4. 5. 6.]]
    ci_high         [[1. 2. 3.]
                     [4. 5. 6.]]
    ci_comb_low     [[1. 2. 3.]
                     [4. 5. 6.]]
    ci_comb_high    [[1. 2. 3.]
                     [4. 5. 6.]]
    flags_comb      [[False False False]
                     [False False False]]
    flags_indv      [[False False False]
                     [False False False]]
    n_total         2^(15)
    n               [[ 8192. 16384. 16384.]
                     [16384. 32768. 32768.]]
    time_integrate  ...
CustomFun (Integrand Object)
Uniform (TrueMeasure Object)
    lower_bound     0
    upper_bound     1
Lattice (DiscreteDistribution Object)
    d               6
    dvec            [0 1 2 3 4 5]
    randomize       1
    order           natural
    entropy         7
    spawn_key       ()
Original Implementation:
References
[1] Lluis Antoni Jimenez Rugama and Fred J. Hickernell, “Adaptive
multidimensional integration based on rank-1 lattices,” Monte Carlo
and Quasi-Monte Carlo Methods: MCQMC, Leuven, Belgium, April 2014
(R. Cools and D. Nuyens, eds.), Springer Proceedings in Mathematics
and Statistics, vol. 163, Springer-Verlag, Berlin, 2016,
arXiv:1411.1966, pp. 407-422.
[2] Sou-Cheng T. Choi, Yuhan Ding, Fred J. Hickernell, Lan Jiang,
Lluis Antoni Jimenez Rugama, Da Li, Jagadeeswaran Rathinavel, Xin
Tong, Kan Zhang, Yizhi Zhang, and Xuan Zhou, GAIL: Guaranteed
Automatic Integration Library (Version 2.3) [MATLAB Software],
2019. Available from http://gailgithub.github.io/GAIL_Dev/
Guarantee: This algorithm computes the integral of real-valued
functions in [0, 1]^d with a prescribed generalized error tolerance.
The Fourier coefficients of the integrand are assumed to be
absolutely convergent. If the algorithm terminates without warning
messages, the output is given with guarantees under the assumption
that the integrand lies inside a cone of functions. The guarantee is
based on the decay rate of the Fourier coefficients. For integration
over domains other than [0, 1]^d, this cone condition applies to
f ∘ ψ (the composition of the functions), where ψ is the
transformation function from [0, 1]^d to the desired region. For
more details on how the cone is defined, please refer to the
references below.
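For intuition about ptransform='Baker' in the class signature above, here is a sketch of the Baker (tent) periodization transform; the coordinate-wise formula below is the standard one, though the library's exact implementation is not shown here:

```python
import numpy as np

# Baker (tent) transform: folds each coordinate as x -> 1 - |2x - 1|.
# It preserves the integral over [0,1]^d while making the composed
# integrand periodic, which benefits lattice rules.
def baker(x):
    return 1.0 - np.abs(2.0 * x - 1.0)

# Integral-preservation check on a fine 1D grid: the average of
# f(baker(x)) approximates the average of f(x) for f(u) = u**2.
x = np.linspace(0.0, 1.0, 100001)
plain = np.mean(x**2)
folded = np.mean(baker(x)**2)
```

Both averages approximate 1/3, illustrating that composing with the tent map leaves the integral unchanged.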
__init__(integrand, abs_tol=0.01, rel_tol=0.0, n_init=1024.0,
n_max=34359738368.0, fudge=<function
CubQMCLatticeG.<lambda>>, check_cone=False,
ptransform='Baker', error_fun=<function
CubQMCLatticeG.<lambda>>)
Parameters
• abs_tol (float) – absolute error tolerance
• rel_tol (float) – relative error tolerance
• n_init (int) – initial number of samples
• n_max (int) – maximum number of samples
• fudge (function) – positive function multiplying the finite sum
of Fast Fourier coefficients specified in the cone of functions
3.4. Stopping Criterion Algorithms 41
• check_cone (boolean) – check if the function falls in the
cone
• error_fun – function taking in the approximate solution vector,
absolute tolerance, and relative tolerance which returns the
approximate error. Default indicates integration until either
absolute OR relative tolerance is satisfied.
3.4.4 Bayesian Lattice Cubature (QMC)
class qmcpy.stopping_criterion.cub_qmc_bayes_lattice_g.CubBayesLatticeG(integrand,
abs_tol=0.01, rel_tol=0, n_init=256, n_max=4194304, order=2,
alpha=0.01, ptransform='C1sin', error_fun=<function
CubBayesLatticeG.<lambda>>)
Stopping criterion for Bayesian cubature using a rank-1 lattice
sequence with guaranteed accuracy over a d-dimensional region to
integrate within a specified generalized error tolerance with
guarantees under Bayesian assumptions.
>>> k = Keister(Lattice(2, order='linear', seed=123456789))
>>> sc = CubBayesLatticeG(k,abs_tol=.05)
>>> solution,data = sc.integrate()
>>> data
LDTransformBayesData (AccumulateData Object)
    solution        1.808
    indv_error      6.41e-04
    ci_low          1.808
    ci_high         1.809
    ci_comb_low     1.808
    ci_comb_high    1.809
    flags_comb      0
    flags_indv      0
    n_total         2^(8)
    n               2^(8)
    time_integrate  ...
CubBayesLatticeG (StoppingCriterion Object)
    abs_tol         0.050
    rel_tol         0
    n_init          2^(8)
    n_max           2^(22)
    order           2^(1)
Keister (Integrand Object)
Gaussian (TrueMeasure Object)
    mean            0
    covariance      2^(-1)
    decomp_type     PCA
Lattice (DiscreteDistribution Object)
    d               2^(1)
    dvec            [0 1]
    randomize       1
    order           linear
    entropy         123456789
    spawn_key       ()
Adapted from GAIL cubBayesLattice_g.
Reference [1] Sou-Cheng T. Choi, Yuhan Ding, Fred J. Hickernell,
Lan Jiang, Lluis Antoni Jimenez Rugama, Da Li, Jagadeeswaran
Rathinavel, Xin Tong, Kan Zhang, Yizhi Zhang, and Xuan Zhou, GAIL:
Guaranteed Automatic Integration Library (Version 2.3) [MATLAB
Software], 2019. Available from GAIL.
Guarantee: This algorithm attempts to calculate the integral of
function f over the hyperbox [0,1]^d to a prescribed error tolerance

tolfun := max(abstol, reltol*| I |)

with guaranteed confidence level, e.g., 99% when alpha=0.5%. If the
algorithm terminates without showing any warning messages and
provides an answer Q, then the following inequality would be
satisfied:

Pr(| Q - I | <= tolfun) = 99%.

This Bayesian cubature algorithm gives guarantees for integrands
that can be regarded as instances of a Gaussian process lying in the
middle of the sample space, where the sample space is spanned by the
covariance kernel, parametrized by scale and shape parameters
inferred from the sampled values of the integrand. For more details
on how the covariance kernels are defined and the parameters are
obtained, please refer to the references below.
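A quick numeric reading of the generalized tolerance formula above (the values here are chosen purely for illustration): with abstol=0.01, reltol=0.05, and a true integral of magnitude |I| = 1.8, the relative term dominates:

```python
# tolfun := max(abstol, reltol*|I|), the generalized error tolerance
# from the guarantee above; illustrative values only.
abstol, reltol, abs_I = 0.01, 0.05, 1.8
tolfun = max(abstol, reltol * abs_I)  # relative term dominates here
```

So the algorithm would aim for an error of at most 0.09 rather than 0.01 in this case; setting reltol=0 reduces tolfun to the pure absolute tolerance.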
3.4.5 Bayesian Digital Net Cubature (QMC)
class qmcpy.stopping_criterion.cub_qmc_bayes_net_g.CubBayesNetG(integrand,
abs_tol=0.01, rel_tol=0, n_init=256, n_max=4194304, alpha=0.01,
error_fun=<function CubBayesNetG.<lambda>>)
Stopping criterion for Bayesian cubature using a digital net
sequence with guaranteed accuracy over a d-dimensional region to
integrate within a specified generalized error tolerance with
guarantees under Bayesian assumptions.
>>> k = Keister(DigitalNetB2(2, seed=123456789))
>>> sc = CubBayesNetG(k,abs_tol=.05)
>>> solution,data = sc.integrate()
>>> data
LDTransformBayesData (AccumulateData Object)
    solution        1.812
    indv_error      0.015
    ci_low          1.796
    ci_high         1.827
    ci_comb_low     1.796
    ci_comb_high    1.827
    flags_comb      0
CubBayesNetG (StoppingCriterion Object)
    abs_tol         0.050
    rel_tol         0
    n_init          2^(8)
    n_max           2^(22)
Keister (Integrand Object)
Gaussian (TrueMeasure Object)
    mean            0
    covariance      2^(-1)
    decomp_type     PCA
DigitalNetB2 (DiscreteDistribution Object)
    d               2^(1)
    dvec            [0 1]
    randomize       LMS_DS
    graycode        0
    entropy         123456789
    spawn_key       ()
Adapted from GAIL cubBayesNet_g.
Reference [1] Sou-Cheng T. Choi, Yuhan Ding, Fred J. Hickernell,
Lan Jiang, Lluis Antoni Jimenez Rugama, Da Li, Jagadeeswaran
Rathinavel, Xin Tong, Kan Zhang, Yizhi Zhang, and Xuan Zhou, GAIL:
Guaranteed Automatic Integration Library (Version 2.3) [MATLAB
Software], 2019. Available from GAIL.
Guarantee: This algorithm attempts to calculate the integral of
function f over the hyperbox [0,1]^d to a prescribed error tolerance

tolfun := max(abstol, reltol*| I |)

with guaranteed confidence level, e.g., 99% when alpha=0.5%. If the
algorithm terminates without showing any warning messages and
provides an answer Q, then the following inequality would be
satisfied:

Pr(| Q - I | <= tolfun) = 99%.

This Bayesian cubature algorithm gives guarantees for integrands
that can be regarded as instances of a Gaussian process lying in the
middle of the sample space, where the sample space is spanned by the
covariance kernel, parametrized by scale and shape parameters
inferred from the sampled values of the integrand. For more details
on how the covariance kernels are defined and the parameters are
obtained, please refer to the references below.
class qmcpy.stopping_criterion.cub_qmc_bayes_net_g.CubBayesSobolG(integrand,
abs_tol=0.01, rel_tol=0, n_init=256, n_max=4194304, alpha=0.01,
error_fun=<function CubBayesNetG.<lambda>>)
3.4.6 CLT QMC Cubature (with Replications)
class qmcpy.stopping_criterion.cub_qmc_clt.CubQMCCLT(integrand,
abs_tol=0.01, rel_tol=0.0, n_init=256.0, n_max=1073741824,
inflate=1.2, alpha=0.01, replications=16.0, error_fun=<function
CubQMCCLT.<lambda>>)
Stopping criterion based on Central Limit Theorem for multiple
replications.
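The idea behind this stopping criterion can be sketched as follows (a simplified illustration, not the library's internals): each of the R replications is an independent randomized-QMC estimate of the integral, and a CLT-style confidence interval is formed from their sample mean and standard deviation, widened by the inflate factor:

```python
import math
import numpy as np

# Simplified CLT-with-replications confidence interval. z is a normal
# quantile for the desired confidence and inflate conservatively widens
# the variance estimate; all names and defaults here are illustrative.
def clt_ci(replicate_means, inflate=1.2, z=2.576):
    r = len(replicate_means)
    mu = float(np.mean(replicate_means))
    sigma = float(np.std(replicate_means, ddof=1))
    half_width = z * inflate * sigma / math.sqrt(r)
    return mu - half_width, mu + half_width

# four hypothetical replicate estimates of the same integral
lo, hi = clt_ci(np.array([1.380, 1.381, 1.379, 1.380]))
```

Sampling would continue (doubling points per replication) until the interval half-width meets the tolerance.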
>>> k = Keister(Lattice(seed=7))
>>> sc = CubQMCCLT(k,abs_tol=.05)
>>> solution,data = sc.integrate()
>>> solution
array([1.38030146])
>>> data
MeanVarDataRep (AccumulateData Object)
    solution        1.380
    indv_error      6.92e-04
    ci_low          1.380
    ci_high         1.381
    ci_comb_low     1.380
    ci_comb_high    1.381
    flags_comb      0
    flags_indv      0
    n_total         2^(12)
    n               2^(12)
    n_rep           2^(8)
    time_integrate  ...
CubQMCCLT (StoppingCriterion Object)
    inflate         1.200
    alpha           0.010
    abs_tol         0.050
    rel_tol         0
    n_init          2^(8)
    n_max           2^(30)
    replications    2^(4)
Keister (Integrand Object)
Gaussian (TrueMeasure Object)
    mean            0
    covariance      2^(-1)
    decomp_type     PCA
Lattice (DiscreteDistribution Object)
    d               1
    dvec            0
    randomize       1
    order           natural
    entropy         7
    spawn_key       ()
>>> f = BoxIntegral(Lattice(3,seed=7), s=[-1,1])
>>> abs_tol = 1e-3
>>> sc = CubQMCCLT(f, abs_tol=abs_tol)
>>> solution,data = sc.integrate()
>>> solution
array([1.19023153, 0.96068581])
>>> data
MeanVarDataRep (AccumulateData Object)
    solution        [1.19  0.961]
    indv_error      [0.001 0.001]
    ci_low          [1.19 0.96]
    ci_high         [1.191 0.961]
    ci_comb_low     [1.19 0.96]
    ci_comb_high    [1.191 0.961]
    flags_comb      [False False]
    flags_indv      [False False]
    n_total         2^(21)
    n               [2097152.    8192.]
    n_rep           [131072.    512.]
    time_integrate  ...
CubQMCCLT (StoppingCriterion Object)
    inflate         1.200
    alpha           0.010
    abs_tol         0.001
    rel_tol         0
    n_init          2^(8)
    n_max           2^(30)
    replications    2^(4)
BoxIntegral (Integrand Object)
    s               [-1 1]
Uniform (TrueMeasure Object)
    lower_bound     0
    upper_bound     1
Lattice (DiscreteDistribution Object)
    d               3
    dvec            [0 1 2]
    randomize       1
    order           natural
    entropy         7
    spawn_key       ()
>>> sol3neg1 = -pi/4-1/2*log(2)+log(5+3*sqrt(3))
>>> sol31 = sqrt(3)/4+1/2*log(2+sqrt(3))-pi/24
>>> true_value = array([sol3neg1,sol31])
>>> (abs(true_value-solution)<abs_tol).all()
True
>>> cf = CustomFun(
...     true_measure = Uniform(DigitalNetB2(6,seed=7)),
...     g = lambda x,compute_flags=None: (2*arange(1,7)*x).reshape(-1,2,3),
...     dprime = (2,3))
>>> sol,data = CubQMCCLT(cf,abs_tol=1e-4).integrate()
>>> data
MeanVarDataRep (AccumulateData Object)
    solution        [[1. 2. 3.]
                     [4. 5. 6.]]
    indv_error      [[2.484e-05 0.000e+00 0.000e+00]
                     [5.708e-06 2.178e-10 0.000e+00]]
    ci_low          [[1. 2. 3.]
                     [4. 5. 6.]]
    ci_high         [[1. 2. 3.]
                     [4. 5. 6.]]
    ci_comb_low     [[1. 2. 3.]
                     [4. 5. 6.]]
    ci_comb_high    [[1. 2. 3.]
                     [4. 5. 6.]]
    ...
                    [1024. 256. 256.]]
    time_integrate  ...
CustomFun (Integrand Object)
Uniform (TrueMeasure Object)
    lower_bound     0
    upper_bound     1
DigitalNetB2 (DiscreteDistribution Object)
    d               6
    dvec            [0 1 2 3 4 5]
    randomize       LMS_DS
    graycode        0
    entropy         7
    spawn_key       ()
__init__(integrand, abs_tol=0.01, rel_tol=0.0, n_init=256.0,
n_max=1073741824, inflate=1.2, alpha=0.01, replications=16.0,
error_fun=<function CubQMCCLT.<lambda>>)
Parameters
• inflate (float) – inflation factor when estimating variance
• alpha (float) – significance level for confidence interval
• abs_tol (float) – absolute error tolerance
• rel_tol (float) – relative error tolerance
• n_max (int) – maximum number of samples
• replications (int) – number of replications
• error_fun – function taking in the approximate solution vector,
absolute tolerance, and relative tolerance which returns the
approximate error. Default indicates integration until either
absolute OR relative tolerance is satisfied.
integrate() See abstract method.
Parameters
• abs_tol (float) – absolute tolerance. Reset if supplied, ignored
if not.
• rel_tol (float) – relative tolerance. Reset if supplied, ignored
if not.
3.4.7 Guaranteed MC Cubature
Stopping criterion with guaranteed accuracy.
>>> k = Keister(IIDStdUniform(2,seed=7))
>>> sc = CubMCG(k,abs_tol=.05)
>>> solution,data = sc.integrate()
>>> data
MeanVarData (AccumulateData Object)
    solution        1.807
    error_bound     0.050
    n_total         15256
    n               14232
    levels          1
    time_integrate  ...
CubMCG (StoppingCriterion Object)
    abs_tol         0.050
    rel_tol         0
    n_init          2^(10)
    n_max           10000000000
    inflate         1.200
    alpha           0.010
Keister (Integrand Object)
Gaussian (TrueMeasure Object)
    mean            0
    covariance      2^(-1)
    decomp_type     PCA
IIDStdUniform (DiscreteDistribution Object)
    d               2^(1)
    entropy         7
    spawn_key       ()
>>> dd = IIDStdUniform(1,seed=7)
>>> k = Keister(dd)
>>> cv1 = CustomFun(Uniform(dd),lambda x: sin(pi*x).sum(1))
>>> cv1mean = 2/pi
>>> cv2 = CustomFun(Uniform(dd),lambda x: (-3*(x-.5)**2+1).sum(1))
>>> cv2mean = 3/4
>>> sc1 = CubMCG(k,abs_tol=.05,control_variates=[cv1,cv2],control_variate_means=[cv1mean,cv2mean])
>>> sol,data = sc1.integrate()
>>> sol
1.384...
Original Implementation:
References
[1] Fred J. Hickernell, Lan Jiang, Yuewei Liu, and Art B. Owen,
"Guaranteed conservative fixed width confidence intervals via Monte
Carlo sampling," Monte Carlo and Quasi-Monte Carlo Methods 2012
(J. Dick, F. Y. Kuo, G. W. Peters, and I. H. Sloan, eds.), pp.
105-128, Springer-Verlag, Berlin, 2014. DOI:
10.1007/978-3-642-41095-6_5
[2] Sou-Cheng T. Choi, Yuhan Ding, Fred J. Hickernell, Lan Jiang,
Lluis Antoni Jimenez Rugama, Da Li, Jagadeeswaran Rathinavel, Xin
Tong, Kan Zhang, Yizhi Zhang, and Xuan Zhou, GAIL: Guaranteed
Automatic Integration Library (Version 2.3) [MATLAB Software],
2019. Available from http://gailgithub.github.io/GAIL_Dev/
Guarantee: This algorithm attempts to calculate the mean, mu, of a
random variable to a prescribed error tolerance, tol_fun :=
max(abstol, reltol*|mu|), with guaranteed confidence level 1-alpha.
If the algorithm terminates without showing any warning messages and
provides an answer tmu, then the following inequality would be
satisfied:

Pr(|mu - tmu| <= tol_fun) >= 1 - alpha.
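To make the guarantee above concrete, here is a rough sketch of the two-stage idea behind such algorithms (illustrative only; GAIL's actual algorithm uses more refined bounds than a plain CLT): estimate the variance from a pilot sample, inflate it conservatively, then choose n so a CLT-style bound meets the absolute tolerance:

```python
import math

# Illustrative sample-size rule: pick n so that
# z * inflate * sigma / sqrt(n) <= abs_tol, i.e.
# n >= (z * inflate * sigma / abs_tol)**2.
# Constants mirror the printed defaults above (inflate 1.200,
# alpha 0.010 -> z ~ 2.576), but this is a sketch, not GAIL's rule.
def choose_n(pilot_std, abs_tol, inflate=1.2, z=2.576):
    sigma_up = inflate * pilot_std  # conservative variance inflation
    return math.ceil((z * sigma_up / abs_tol) ** 2)

n = choose_n(pilot_std=1.0, abs_tol=0.05)
```

Tightening abs_tol by a factor of 10 raises the required n by a factor of 100, reflecting the O(n^(-1/2)) Monte Carlo error rate.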
__init__(integrand, abs_tol=0.01, rel_tol=0.0, n_init=1024.0,
n_max=10000000000.0, inflate=1.2, alpha=0.01, control_variates=[],
control_variate_means=[])
Parameters
• abs_tol – absolute error tolerance
• rel_tol – relative error tolerance
• n_init – initial number of samples
• n_max – maximum number of samples
• control_variates (list) – list of integrand objects to be used as
control variates. Control variates are currently only compatible
with single level problems. The same discrete distribution
instance must be used for the integrand and each of the control
variates.
• control_variate_means (list) – list of means for each control
variate
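The control-variate parameters above follow the standard variance-reduction idea, sketched below with the textbook single-variate estimator (not qmcpy's internal implementation): subtract a beta-scaled, mean-centered control from the integrand samples:

```python
import numpy as np

# Textbook single control variate: estimate E[y] using a correlated
# quantity c whose mean c_mean is known exactly. beta is fit from the
# samples; the centered correction has zero expectation, so the
# estimator stays unbiased (up to the beta fit) with lower variance.
def cv_estimate(y, c, c_mean):
    beta = np.cov(y, c)[0, 1] / np.var(c, ddof=1)
    return float(np.mean(y - beta * (c - c_mean)))

rng = np.random.default_rng(0)
c = rng.random(10_000)                   # control with known mean 0.5
y = 2.0 * c + 0.1 * rng.random(10_000)   # integrand samples, E[y] = 1.05
est = cv_estimate(y, c, 0.5)
```

Because y is strongly correlated with c, the corrected estimate is far more accurate than the plain sample mean at the same sample size.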
integrate() See abstract method.
Parameters
• abs_tol (float) – absolute tolerance. Reset if supplied, ignored
if not.
• rel_tol (float) – relative tolerance. Reset if supplied, ignored
if not.
3.4.8 CLT MC Cubature
Stopping criterion based on the Central Limit Theorem.
>>> ao = AsianOption(IIDStdUniform(seed=7))
>>> sc = CubMCCLT(ao,abs_tol=.05)
>>> solution,data = sc.integrate()
>>> data
MeanVarData (AccumulateData Object)
    solution        1.519
    error_bound     0.046
    n_total         96028
    n               95004
    levels          1
    time_integrate  ...
CubMCCLT (StoppingCriterion Object)
    abs_tol         0.050
    rel_tol         0
    n_init          2^(10)
    n_max           10000000000
    inflate         1.200
    alpha           0.010
AsianOption (Integrand Object)
    volatility      2^(-1)
    call_put        call
    start_price     30
    strike_price    35
    interest_rate   0
    mean_type       arithmetic
    dim_frac        0
BrownianMotion (TrueMeasure Object)
    time_vec        1
    drift           0
    mean            0
    covariance      1
    decomp_type     PCA
IIDStdUniform (DiscreteD