Random fields, Fall 2014 - Chalmers (rootzen/fields/lectures/RF1.pdf)

Page 1

Nobel prize 2006 to John C. Mather and George F. Smoot "for their discovery of the blackbody form and anisotropy of the cosmic microwave background radiation"

Ocean waves: The oceans cover 72% of the earth's surface. They are essential for life on earth, and of huge economic importance through fishing, transportation, and oil and gas extraction.

fMRI brain scan PET brain scan



Page 2

The course

• Kolmogorov existence theorem, separable processes, measurable processes

• Stationarity and isotropy

• Orthogonal and spectral representations

• Geometry

• Exceedance sets

• Rice formula

• Slepian models

Page 3

Literature

An unfinished manuscript “Applications of RANDOM FIELDS AND GEOMETRY: Foundations and Case Studies” by Robert Adler, Jonathan Taylor, and Keith Worsley.

Complementary literature:

“Level sets and extrema of random processes and fields” by Jean-Marc Azais and Mario Wschebor, Wiley, 2009

“Asymptotic Methods in the Theory of Gaussian Processes” by Vladimir Piterbarg, American Mathematical Society, ser. Translations of Mathematical Monographs, Vol. 148, 1995

"Random Fields and Geometry" by Robert Adler and Jonathan Taylor, Springer, 2007

Page 4

Slides for ATW (the Adler-Taylor-Worsley manuscript above) ch 2, pp. 24-39

Exercises: 2.8.1, 2.8.2, 2.8.3, 2.8.4, 2.8.5, 2.8.6, plus the exercises in the slides

Page 5

Stochastic convergence (assumed known)

Almost sure convergence: $X_n \xrightarrow{a.s.} X$

Mean square convergence: $X_n \xrightarrow{L^2} X$

Convergence in probability: $X_n \xrightarrow{P} X$

Convergence in distribution: $X_n \xrightarrow{d} X$, $F_n \xrightarrow{d} F$, $P_n \xrightarrow{d} P$, …

• $X_n \xrightarrow{a.s.} X \ \Rightarrow\ X_n \xrightarrow{P} X$

• $X_n \xrightarrow{L^2} X \ \Rightarrow\ X_n \xrightarrow{P} X$

• $X_n \xrightarrow{P} X$ plus uniform integrability $\Rightarrow\ X_n \xrightarrow{L^2} X$

• $X_n \xrightarrow{P} X \ \Rightarrow\ $ there is a subsequence $\{n_k\}$ with $X_{n_k} \xrightarrow{a.s.} X$

• The random variables themselves don't really mean anything for $\xrightarrow{d}$: in particular, the $X_n$ and $X$ need not be defined on the same probability space, and need not have a joint distribution.
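As a quick numerical illustration of convergence in probability, here is a minimal simulation sketch (illustrative only; the i.i.d. Uniform(0,1) sample mean and the tolerance eps are assumed choices): the estimated probability $P(|X_n - 1/2| > \varepsilon)$ shrinks as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def exceedance_prob(n, eps=0.05, n_rep=10_000):
    """Monte Carlo estimate of P(|X_n - 1/2| > eps), where X_n is the mean
    of n i.i.d. Uniform(0,1) variables (an assumed toy example)."""
    u = rng.random((n_rep, n))      # n_rep independent replications of (U_1, ..., U_n)
    x_n = u.mean(axis=1)            # the corresponding X_n values
    return np.mean(np.abs(x_n - 0.5) > eps)

for n in (10, 100, 1000):
    print(n, exceedance_prob(n))    # the estimated probabilities decrease towards 0
```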

Page 6

Random field

$T$: parameter space. In this course $T$ is $\mathbf{R}^N$ for some $N \ge 1$, or a subset of $\mathbf{R}^N$ (e.g. a box, a sphere, or the surface of a sphere).

$\mathbf{R}^d$: value space. An $(N, d)$ random field is a collection (or family) of random variables

$\{f_t;\ t \in T\}$

where $T$ is a set of dimension $N$ and the $f_t$ (or $f(t)$) take values in $\mathbf{R}^d$.

Or: a random function with values in $(\mathbf{R}^d)^T$.

Or: a probability measure on $(\mathbf{R}^d)^T$.

Page 7

A realisation (or sample function, or sample path, or sample field, or observation, or trajectory, or …) is the function

$f_\cdot(\omega):\ T \to \mathbf{R}^d, \quad t \mapsto f_t(\omega)$

for 𝜔 fixed. Two examples below:

Microscopy image of tablet coating

Thresholded Gaussian field
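A minimal sketch of how such a realisation can be simulated (illustrative only; a stationary Gaussian field is approximated here by Gaussian smoothing of white noise on a grid, and the grid size, smoothing scale, and threshold are assumed choices). Thresholding the realisation gives an image like the "Thresholded Gaussian field" panel above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# One realisation omega |-> f_.(omega): white noise smoothed into an approximately
# Gaussian, correlated (2,1) field on a 256 x 256 grid
noise = rng.standard_normal((256, 256))
field = gaussian_filter(noise, sigma=8.0)
field /= field.std()                      # normalise to unit variance

u = 1.0                                   # threshold level (illustrative)
excursion_set = field >= u                # boolean image: the set {t : f_t >= u}
print("fraction of pixels above u:", excursion_set.mean())
```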

Page 8

Terminology

random variable = stochastic variable

random element = stochastic element

random process = stochastic process

random field = stochastic field

random vector = stochastic vector

Page 9

Finite dimensional distributions

The finite-dimensional distribution functions of an (𝑁, 𝑑) random field {𝑓𝑡} are defined as

$F_{t_1,\dots,t_n}(\mathbf{x}_1,\dots,\mathbf{x}_n) = P(f_{t_1} \le \mathbf{x}_1,\dots,f_{t_n} \le \mathbf{x}_n)$

and the family of finite-dimensional distribution functions is the set

$\{F_{t_1,\dots,t_n}(\mathbf{x}_1,\dots,\mathbf{x}_n);\ t_1,\dots,t_n \in T,\ \mathbf{x}_1,\dots,\mathbf{x}_n \in \mathbf{R}^d,\ n \ge 1\}$

This family has the following obvious properties:

Symmetry: it is not changed under a simultaneous permutation of 𝑡1, … , 𝑡𝑛 and 𝒙1, … , 𝒙𝑛

Consistency:

$F_{t_1,\dots,t_n}(\mathbf{x}_1,\dots,\mathbf{x}_{n-1},\infty) = F_{t_1,\dots,t_{n-1}}(\mathbf{x}_1,\dots,\mathbf{x}_{n-1})$

Page 10

Example of symmetry: $F_{t_1,t_2}(\mathbf{x}_1,\mathbf{x}_2) = F_{t_2,t_1}(\mathbf{x}_2,\mathbf{x}_1)$

Example of consistency: Marginal distributions may be obtained from bivariate distributions,

$F_t(\mathbf{x}) = F_{t,s}(\mathbf{x},\infty)$

Three sample paths of a $(1,1)$ random field. $F_{2,5,8}(x_1,x_2,x_3)$ is the probability of obtaining a sample path which passes through all three vertical lines (i.e. lies below the levels $x_1, x_2, x_3$ at $t = 2, 5, 8$).

(In a more general theory one uses, instead of finite-dimensional distribution functions, probabilities of cylinder sets,

$P(f_{t_1} \in \mathbf{B}_1, \dots, f_{t_n} \in \mathbf{B}_n)$ )
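A minimal sketch of how such a finite-dimensional distribution value can be estimated by simulation (illustrative; standard Brownian motion, with $\mathrm{Cov}(B_s, B_t) = \min(s,t)$, and the levels $x_1, x_2, x_3$ are assumed choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo estimate of F_{2,5,8}(x1, x2, x3) = P(f_2 <= x1, f_5 <= x2, f_8 <= x3)
# for standard Brownian motion (an assumed example of a (1,1) field).
t = np.array([2.0, 5.0, 8.0])
x = np.array([1.0, 2.0, 2.5])                    # illustrative levels
C = np.minimum.outer(t, t)                       # Cov(B_s, B_t) = min(s, t)

paths = rng.multivariate_normal(np.zeros(3), C, size=100_000)
F_hat = np.mean(np.all(paths <= x, axis=1))
print("estimated F_{2,5,8}(x1, x2, x3) =", F_hat)
```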

Page 11

(Daniell-)Kolmogorov extension theorem

To any symmetric and consistent family of finite-dimensional distributions

$\{F_{t_1,\dots,t_n}(\mathbf{x}_1,\dots,\mathbf{x}_n);\ t_1,\dots,t_n \in T,\ \mathbf{x}_1,\dots,\mathbf{x}_n \in \mathbf{R}^d,\ n \ge 1\}$

there exists a probability triple $(\Omega, \mathcal{B}, P)$ and an $(N, d)$ random field $\{f_t;\ t \in T\}$ which has these finite-dimensional distributions.

In the proof one takes

$\Omega = (\mathbf{R}^d)^T, \qquad \mathcal{B} = \mathcal{B}(\mathbf{R}^d)^T,$

and $P$ as the measure on $(\mathbf{R}^d)^T$ which is uniquely determined by the finite-dimensional distributions. Thus an element of $\Omega$ is a function $f:\ T \to \mathbf{R}^d$ which maps a point $t \in T$ to the value $f(t)$. The field is defined as

$\{f_t(\omega) = f(t);\ t \in T\}$

Page 12

Limitations of Kolmogorov’s theorem

Many interesting sets, such as the set

$C = \{\omega;\ f_t(\omega) \text{ is a continuous function of } t\}$

do not belong to $\mathcal{B} = \mathcal{B}(\mathbf{R}^d)^T$, and hence, in Kolmogorov's construction, the probability of such events is not defined.

One important way around this problem is to make a direct construction of the field on some other probability space $(\Omega, \mathcal{B}, P)$ where the interesting sets belong to $\mathcal{B}$, say $C \in \mathcal{B}$, so that their probabilities, say $P(C)$, are well defined. And then, more fields are obtained as functions of the already constructed field!

Page 13

Modifications

A field $\{g_t;\ t \in T\}$ is a modification of the field $\{f_t;\ t \in T\}$ if

$P(g_t = f_t) = 1, \quad \forall t \in T$

It is obvious (!) that $g_t$ has the same finite-dimensional distributions as $f_t$.

The other common way to circumvent the limitation is to construct, on Kolmogorov's $(\Omega, \mathcal{B}, P)$, a modification of $f_t$ which has the desired properties, say continuity.

Whether this is possible or not (of course) depends on which finite-dimensional distributions one is interested in. E.g. if they correspond to a Brownian motion it is possible; if they correspond to a Poisson process, it isn't.

Page 14

Doob’s separability

A field $\{f_t;\ t \in T\}$ is separable if there is a countable subset $S \subset T$ and a null set $\Lambda \in \mathcal{B}(\mathbf{R}^d)^T$ such that for every closed set $B \subset \mathbf{R}^d$ and open set $I \subset T$ it holds that

$\{f_t(\omega) \in B,\ \forall t \in S \cap I\} \ \Rightarrow\ \omega \in \Lambda \ \text{ or } \ \{f_t(\omega) \in B,\ \forall t \in I\}$

A separable modification of a field always exists (at least for 𝑁 = 𝑑 = 1? ), and it can be seen that e.g. if a continuous modification of a field exists, then the separable modification is continuous.

Example of modification: $\Omega = [0, 1]$, $\mathcal{B}$ is the Borel sets on $[0, 1]$, $P$ is Lebesgue measure, $f_t(\omega) = 0$ for all $t, \omega$, and

$g_t(\omega) = \begin{cases} 0 & \text{if } t \neq \omega \\ 1 & \text{if } t = \omega \end{cases}$

Since $P(\{\omega;\ \omega = t\}) = 0$ for each fixed $t$, we have $P(g_t = f_t) = 1$ for all $t$, so $g$ is a modification of $f$; yet $f$ has continuous sample paths while every sample path of $g$ has a jump.

Page 15

Measurable fields

A field $\{f_t;\ t \in T\}$ is measurable if for almost all $\omega$ the sample path (function)

$f_\cdot(\omega):\ T \to \mathbf{R}^d, \quad t \mapsto f_t(\omega)$

is $\mathcal{B}(\mathbf{R}^d)$-measurable (this holds e.g. if the field is a.s. continuous). It then follows that the function of two variables $f_t(\omega)$ is measurable with respect to the product sigma-algebra $\mathcal{B} \times \mathcal{B}(\mathbf{R}^d)$, and one can then define integrals like $\int_T h(f_t)\, dt$ and use Fubini's theorem for calculations like

$E\left(\int_T h(f_t)\, dt\right) = \int_T E\big(h(f_t)\big)\, dt$

(above we have assumed that $\mathcal{B}$ and $\mathcal{B}(\mathbf{R}^d)$ are complete)
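A minimal numerical illustration of this Fubini-type calculation (illustrative; the toy field $f_t = \xi \cos t + \xi' \sin t$ on $T = [0, 2\pi]$ and $h(x) = x^2$ are assumed choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Numerical illustration of  E( int_T h(f_t) dt ) = int_T E( h(f_t) ) dt
# for the toy field f_t = xi*cos(t) + xi'*sin(t) on T = [0, 2*pi], with h(x) = x^2.
t = np.linspace(0.0, 2 * np.pi, 400)
xi = rng.standard_normal((20_000, 1))
xi_p = rng.standard_normal((20_000, 1))
f = xi * np.cos(t) + xi_p * np.sin(t)     # each row is one sample path on the grid

h = f ** 2
length = 2 * np.pi
lhs = np.mean(h.mean(axis=1) * length)    # E of (a Riemann approximation of) the path integral
rhs = h.mean(axis=0).mean() * length      # integral of the pointwise expectation
print(lhs, rhs)                           # both close to 2*pi, since E h(f_t) = 1
```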

Page 16

ATW basically say that it is nice to have seen the concepts of Kolmogorov extension, modification, and Doob separability, but that these issues were taken care of once and for all by Kolmogorov, Doob and others, and that we shouldn't worry about them any more in this course. And this is right (I hope).

However, things are different for the theory of "empirical processes", so far the most efficient and high-tech tool for finding asymptotic distributions of statistical estimators. In that theory, such "measurability problems" pose important technical problems and have shaped much of the entire theory. Empirical process theory is closely related to the methods used to prove continuity and differentiability in this course.

Page 17

Gaussian fields (ATW p. 25-28)

A random vector $\mathbf{X} = (X_1, \dots, X_d)$ has a multivariate Gaussian distribution iff one of the following conditions holds:

• $\langle \alpha, \mathbf{X} \rangle \triangleq \sum_{i=1}^d \alpha_i X_i$ has a univariate normal distribution for all $\alpha \in \mathbf{R}^d$.

• There exist a vector $\mathbf{m} \in \mathbf{R}^d$ and a non-negative definite matrix $C$ such that for all $\boldsymbol{\theta} \in \mathbf{R}^d$

$\phi(\boldsymbol{\theta}) = E\, e^{i \langle \boldsymbol{\theta}, \mathbf{X} \rangle} = e^{i \langle \boldsymbol{\theta}, \mathbf{m} \rangle - \frac{1}{2} \boldsymbol{\theta} C \boldsymbol{\theta}'}$

If $C$ is positive definite and $\mathbf{X}$ has the probability density

$\frac{1}{\big((2\pi)^d |C|\big)^{1/2}}\, e^{-\frac{1}{2} (\mathbf{x} - \mathbf{m}) C^{-1} (\mathbf{x} - \mathbf{m})'}$

then $\mathbf{X}$ is Gaussian.

Here $\mathbf{m} = E(\mathbf{X})$ and $C = \mathrm{Cov}(\mathbf{X})$. Similarly if the $\mathbf{X}_i \in \mathbf{R}^d$.

Page 18

We write $\mathbf{X} \sim N_d(\mathbf{m}, C)$ if $\mathbf{X}$ has a $d$-variate Gaussian distribution with mean $\mathbf{m}$ and covariance matrix $C$.

Exercises (the first is (2.2.5), the second Exercise 2.8.2):

(i) If $\mathbf{X} \sim N_d(\mathbf{m}, C)$ and $A$ is a $d \times d$ matrix, then $\mathbf{X}A \sim N_d(\mathbf{m}A, A'CA)$.

(ii) If $\mathbf{X} = (\mathbf{X}_1, \mathbf{X}_2)$ with $\mathbf{X}_1 = (X_1, \dots, X_n)$, $\mathbf{X}_2 = (X_{n+1}, \dots, X_d)$, with mean vectors $\mathbf{m}_1$ and $\mathbf{m}_2$ and covariance matrix

$C = \begin{pmatrix} C_{1,1} & C_{1,2} \\ C_{2,1} & C_{2,2} \end{pmatrix},$

then the conditional distribution of $\mathbf{X}_1$ given $\mathbf{X}_2$ is $n$-variate normal with mean

$\mathbf{m}_{1|2} = \mathbf{m}_1 + (\mathbf{X}_2 - \mathbf{m}_2) C_{2,2}^{-1} C_{2,1}$

and covariance matrix

$C_{1|2} = C_{1,1} - C_{1,2} C_{2,2}^{-1} C_{2,1}$
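A minimal numerical sketch of exercise (ii) (illustrative; the covariance matrix and the split $n = 1$, $d = 3$ are assumed choices, and the row-vector convention of the slides is used):

```python
import numpy as np

rng = np.random.default_rng(4)

# Numerical check of the conditional-distribution formulas in exercise (ii),
# with an arbitrary illustrative 3-dimensional example, n = 1, d = 3.
m = np.array([1.0, -2.0, 0.5])
A = rng.standard_normal((3, 3))
C = A @ A.T + 0.5 * np.eye(3)                   # a positive definite covariance matrix

n = 1
C11, C12 = C[:n, :n], C[:n, n:]
C21, C22 = C[n:, :n], C[n:, n:]

X = rng.multivariate_normal(m, C, size=500_000)
X1, X2 = X[:, :n], X[:, n:]

C_cond = C11 - C12 @ np.linalg.solve(C22, C21)           # theoretical C_{1|2}
X1_pred = m[:n] + (X2 - m[n:]) @ np.linalg.solve(C22, C21)  # theoretical m_{1|2}
resid = X1 - X1_pred

print("theoretical C_{1|2}:        ", C_cond.ravel())
print("empirical residual variance:", resid.var(axis=0))  # should be close to C_{1|2}
```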

Page 19

A Gaussian random field is hence, by the Kolmogorov theorem, determined by its means and covariances

Conversely, it also follows from the Kolmogorov theorem that given a function

$m:\ T \to \mathbf{R}$

and a non-negative definite function

$C:\ T \times T \to \mathbf{R}$

there exists an $(N, 1)$ Gaussian random field which has $m$ as mean function and $C$ as covariance function.

$(N, d)$ Gaussian random fields for $d > 1$ work in the same way; one just has to use more general notation.
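A minimal sketch of this construction on a grid (illustrative; the squared-exponential covariance function and zero mean are assumed choices, and the continuum field is approximated by its values at the grid points):

```python
import numpy as np

rng = np.random.default_rng(5)

# Sampling a (1, 1) Gaussian field on a grid from a mean function m and a
# non-negative definite covariance function C (squared-exponential kernel
# chosen purely as an illustration; any valid C works the same way).
def m(t):
    return np.zeros_like(t)

def C(s, t, scale=0.3):
    return np.exp(-((s - t) ** 2) / (2 * scale ** 2))

grid = np.linspace(0.0, 1.0, 200)
mean_vec = m(grid)
cov_mat = C(grid[:, None], grid[None, :]) + 1e-10 * np.eye(grid.size)  # jitter for stability

samples = rng.multivariate_normal(mean_vec, cov_mat, size=3)  # three realisations
print(samples.shape)                                          # (3, 200)
```

Any other non-negative definite $C$ can be substituted; only the construction of the covariance matrix changes.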

Page 20

Gaussian related fields (ATW p. 28-30)

An $(N, d)$ Gaussian related field $\{f(t);\ t \in T\}$ is defined from an $(N, k)$ Gaussian field $\{g(t);\ t \in T\}$ using a function $F:\ \mathbf{R}^k \to \mathbf{R}^d$ by the formula

$f(t) = F(g(t)) = F\big(g_1(t), \dots, g_k(t)\big).$

Examples:

• Instantaneous function of Gaussian field: 𝑘 = 𝑑 and 𝐹 is invertible

• $\chi^2$-field: $d = 1$ and $F(\mathbf{x}) = \sum_{i=1}^{k} x_i^2$ (see the simulation sketch after this list)

• $t$-field: $d = 1$ and $F(\mathbf{x}) = \dfrac{x_1 \sqrt{k-1}}{\big(\sum_{i=2}^{k} x_i^2\big)^{1/2}}$

• $F$-field: $d = 1$, $k = m + n$ and $F(\mathbf{x}) = \dfrac{m \sum_{i=1}^{n} x_i^2}{n \sum_{i=n+1}^{n+m} x_i^2}$
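As flagged in the $\chi^2$-field item above, here is a minimal simulation sketch of a $\chi^2$-field (illustrative; the $k$ independent Gaussian fields are approximated by smoothed white noise, and $k$, the grid and the smoothing scale are assumed choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(6)

def gaussian_field(shape=(128, 128), sigma=6.0):
    """One unit-variance Gaussian field realisation (smoothed white noise)."""
    g = gaussian_filter(rng.standard_normal(shape), sigma)
    return g / g.std()

k = 3
g = np.stack([gaussian_field() for _ in range(k)])   # k independent (2,1) Gaussian fields

chi2_field = np.sum(g ** 2, axis=0)                  # f(t) = F(g(t)) = sum_i g_i(t)^2
print(chi2_field.shape, chi2_field.mean())           # pointwise mean is approximately k
```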

Page 21

Stationarity and isotropy (ATW p. 30-31)

Weak stationarity: A random field is weakly stationary if
• $m(t) \triangleq E f(t)$ is constant
• $C(s, t) \triangleq E\{(f(s) - m(s))'(f(t) - m(t))\}$ only depends on $t - s$

Weak isotropy: A random field is weakly isotropic if $C(s, t)$ only depends on $|t - s|$

A random field is strictly stationary if the joint distribution of $(f(t_1 + \tau), \dots, f(t_n + \tau))$ doesn't depend on $\tau$, for all $n \ge 1$ and $t_1, \dots, t_n \in \mathbf{R}^N$.

A random field is strictly isotropic if it is stationary and the joint distribution of $(f(t_1), \dots, f(t_n))$ is invariant under rotations, for all $n \ge 1$ and $t_1, \dots, t_n \in \mathbf{R}^N$.

Page 22

Weak is the same as strict for real Gaussian fields

”Weak” is sometimes instead called ”second order”

Abuse of notation:

For weakly stationary fields one writes

$C(s, t) = C(t - s)$

For isotropic fields one writes

$C(s, t) = C(|t - s|)$

Page 23

Cosine processes and fields (ATW p. 32-36)

Cosine process (a (1,1) field):

$f(t) \triangleq \xi \cos \lambda t + \xi' \sin \lambda t = R \cos(\lambda t - \theta)$

where $\xi$ and $\xi'$ are uncorrelated and have the same distribution, and (for convenience?) mean 0, and $R^2 = \xi^2 + \xi'^2$, and $\theta = \arctan(\xi'/\xi)$. $R$ is the "amplitude", $\theta$ is the "phase", and $\lambda$ is the "angular frequency". Then

$E f(t) = 0$

and

$C(s, t) = E\{f(s) f(t)\} = E\{(\xi \cos \lambda s + \xi' \sin \lambda s)(\xi \cos \lambda t + \xi' \sin \lambda t)\}$

$= E(\xi^2)(\cos \lambda s \cos \lambda t + \sin \lambda s \sin \lambda t) = E(\xi^2) \cos \lambda (t - s)$
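A quick numerical check of this covariance formula (illustrative; standard normal $\xi, \xi'$ and the values of $\lambda, s, t$ are assumed choices):

```python
import numpy as np

rng = np.random.default_rng(7)

# Empirical covariance of the cosine process at two time points
lam, s, t = 2.0, 0.3, 1.1
xi = rng.standard_normal(200_000)
xi_p = rng.standard_normal(200_000)

f_s = xi * np.cos(lam * s) + xi_p * np.sin(lam * s)
f_t = xi * np.cos(lam * t) + xi_p * np.sin(lam * t)

print(np.mean(f_s * f_t))          # empirical C(s, t)
print(np.cos(lam * (t - s)))       # E(xi^2) * cos(lam*(t - s)), with E(xi^2) = 1
```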

Page 24

If $\xi, \xi'$ are Gaussian $N(0, \sigma^2)$, then $R^2$ is exponential with mean $2\sigma^2$ (why?), so that $P(R \ge u) = \exp(-u^2/2\sigma^2)$, and $\theta$ is independent of $R$ and uniformly distributed on $[0, 2\pi)$ (do the calculation!).

The following are of central interest in the course:

• $N_u = N_u(f, T) \triangleq \#\{t \in T;\ f(t) = u \text{ and } df(t)/dt > 0\}$ (the number of upcrossings of the level $u$)

• $P\big(\sup_{0 \le t \le T} f(t) \ge u\big)$

$\lambda$ in the cosine process is the "angular frequency". Sometimes one instead writes $f(t) = R \cos(2\pi\omega t + \theta)$; $\omega$ then is the "frequency".

Page 25

For a Gaussian cosine process, and with $\Psi(u) \triangleq P(N(0,1) > u)$,

$P\Big(\sup_{0 \le t \le T} f(t) \ge u\Big) = P(f(0) \ge u) + P\big(f(0) < u,\ N_u \ge 1\big) = \Psi(u/\sigma) + P\big(f(0) < u,\ N_u \ge 1\big)$

If $T \le \pi/\lambda$ then

$P\big(f(0) < u,\ N_u \ge 1\big) = P(N_u \ge 1) = P(N_u = 1),$

and $N_u = 1$ iff both $R \ge u$ and $\theta$ falls in an interval of length $\lambda T$ (requires some thinking: draw a picture). Since these two events are independent,

$P\Big(\sup_{0 \le t \le T} f(t) \ge u\Big) = \Psi\Big(\frac{u}{\sigma}\Big) + \frac{\lambda T}{2\pi}\, e^{-u^2/2\sigma^2}$

If $T > 2\pi/\lambda$, then $\sup_{0 \le t \le T} f(t) \ge u$ iff $R \ge u$, so that

$P\Big(\sup_{0 \le t \le T} f(t) \ge u\Big) = P(R \ge u) = e^{-u^2/2\sigma^2}$

Page 26

Without assuming Gaussianity, for any differentiable stochastic process (i.e. $(1,1)$-field) we get the important general bound

$P\Big(\sup_{0 \le t \le T} f(t) \ge u\Big) = P(f(0) \ge u) + P\big(f(0) < u,\ N_u \ge 1\big)$

$\le P(f(0) \ge u) + P(N_u \ge 1) \le P(f(0) \ge u) + E(N_u)$
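A minimal Monte Carlo illustration of this bound (illustrative; the Gaussian cosine process with $\sigma = 1$ is used as test case, $N_u$ is counted on a fine grid, and $\lambda$, $T$, $u$ are assumed choices):

```python
import numpy as np

rng = np.random.default_rng(8)

# Check of P(sup f >= u) <= P(f(0) >= u) + E(N_u) for the Gaussian cosine process
lam, T, u = 2.0, 1.0, 1.5
t = np.linspace(0.0, T, 501)

xi = rng.standard_normal((20_000, 1))
xi_p = rng.standard_normal((20_000, 1))
f = xi * np.cos(lam * t) + xi_p * np.sin(lam * t)

up_crossings = np.sum((f[:, :-1] < u) & (f[:, 1:] >= u), axis=1)  # discretized N_u
lhs = np.mean(f.max(axis=1) >= u)                                 # P(sup_{0<=t<=T} f(t) >= u)
rhs = np.mean(f[:, 0] >= u) + up_crossings.mean()                 # P(f(0) >= u) + E(N_u)
print(lhs, "<=", rhs)
```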

Page 27

Cosine field (an $(N, 1)$ field):

$f(t) = f(t_1, \dots, t_N) \triangleq \frac{1}{\sqrt{N}} \sum_{k=1}^{N} f_k(\lambda_k t_k),$

where

$f_k(t) = \xi_k \cos t + \xi_k' \sin t$

and the $\xi_k$ and $\xi_k'$ are uncorrelated and have the same distribution, with mean 0.

If $T = \prod_{k=1}^{N} [0, T_k]$, then taking the supremum first over $t_1$, then over $t_2$, then …, we get that

$\sup_{0 \le t \le T} f(t) = \frac{1}{\sqrt{N}} \sum_{k=1}^{N} \sup_{0 \le t \le T_k} f_k(\lambda_k t).$

If the $\xi_k$ and $\xi_k'$ are Gaussian, and $T_k \le \pi/\lambda_k$, $k = 1, \dots, N$, this gives an explicit (but complicated) formula.

Page 28

Orthogonal expansions (ATW p. 36-39)

An orthogonal expansion of an $(N, 1)$ field is an expression

$f(t) = \sum_{n=1}^{\infty} \xi_n \varphi_n(t),$

with $\xi_n$ uncorrelated centered (i.e. $E\xi_n = 0$, just for convenience) random variables with $E\xi_n^2 = \sigma_n^2$, and $\varphi_n$ non-random orthogonal functions $T \to \mathbf{R}$. (For $(N, d)$ fields the $\xi_n$ are matrices and the $\varphi_n$ are vector valued.)

The moment functions then are $E f(t) = 0$ and

$C(s, t) = E\{f(s) f(t)\} = \sum_{n=1}^{\infty} \sigma_n^2 \varphi_n(s) \varphi_n(t)$

$V(t) \triangleq E\{f(t)^2\} = \sum_{n=1}^{\infty} \sigma_n^2 \varphi_n(t)^2$

Important for theory, applications, and computation.
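A minimal numerical check that a truncated expansion reproduces this covariance formula (illustrative; the cosine functions $\varphi_n(t) = \cos(nt)$ on $[0, 2\pi]$ and the values of $\sigma_n$ are assumed choices):

```python
import numpy as np

rng = np.random.default_rng(9)

# Truncated orthogonal expansion f(t) = sum_n xi_n phi_n(t) and its covariance
# C(s,t) = sum_n sigma_n^2 phi_n(s) phi_n(t), with a cosine basis on [0, 2*pi].
t = np.linspace(0.0, 2 * np.pi, 300)
n_terms = 20
sigma = 1.0 / np.arange(1, n_terms + 1)                          # illustrative sigma_n
phi = np.array([np.cos(n * t) for n in range(1, n_terms + 1)])   # phi_n(t) on the grid

# Empirical covariance from simulated expansions
xi = sigma[:, None] * rng.standard_normal((n_terms, 5000))       # Var(xi_n) = sigma_n^2
f = xi.T @ phi                                                   # 5000 sample paths
C_emp = np.cov(f, rowvar=False)

# Theoretical covariance from the expansion
C_theory = (phi * sigma[:, None] ** 2).T @ phi
print(np.abs(C_emp - C_theory).max())                            # small for many samples
```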

Page 29

A Gaussian field with continuous covariance always has a Reproducing Kernel Hilbert Space (RKHS) orthogonal expansion. Loosely, it is obtained as follows: set

$S = \Big\{u:\ T \to \mathbf{R};\ u(\cdot) = \sum_{i=1}^{n} a_i C(s_i, \cdot),\ a_i \text{ real},\ s_i \in T,\ n \ge 1 \Big\},$

and define an inner product on $S$ by

$(u, v) = \Big( \sum_{i=1}^{n} a_i C(s_i, \cdot),\ \sum_{j=1}^{m} b_j C(t_j, \cdot) \Big) = \sum_{i=1}^{n} \sum_{j=1}^{m} a_i b_j C(s_i, t_j)$

"Reproducing kernel" comes from

$\big(u, C(t, \cdot)\big) = \Big( \sum_{i=1}^{n} a_i C(s_i, \cdot),\ C(t, \cdot) \Big) = \sum_{i=1}^{n} a_i C(s_i, t) = u(t)$

Page 30

If $C(s, t)$ is positive definite, then $\|u\| = (u, u)^{1/2}$ is a norm and one can define the RKHS $H(S)$ as the closure of $S$ in this norm. If $\{\varphi_n\}$ is a complete orthonormal system in $H(S)$, and the $\xi_n$ are independent $N(0,1)$, then the field $\{f(t)\}$ has an orthogonal expansion

$f(t) \stackrel{d}{=} \sum_{n=1}^{\infty} \xi_n \varphi_n(t)$

This is important, but not always easy to handle. We will next briefly describe a somewhat more concrete expansion, the Karhunen-Loève expansion, and then, in much more detail, the by far most important orthogonal expansions, the spectral representations, which express the field as a sum of cosine processes.

Page 31

The Karhunen-Loève expansion applies to the case when $T$ is a compact set in $\mathbf{R}^N$. Let the operator $C:\ L^2(T) \to L^2(T)$ be defined by

$(C\psi)(t) = \int_T C(s, t)\, \psi(s)\, ds$

and let $\lambda_1 \ge \lambda_2 \ge \cdots$ be its eigenvalues and $\psi_1, \psi_2, \dots$ the corresponding orthonormal eigenfunctions. It can be shown that $\{\sqrt{\lambda_n}\, \psi_n\}$ is an orthonormal system in the RKHS $H(C)$, and hence

$f(t) \stackrel{d}{=} \sum_{n=1}^{\infty} \xi_n \sqrt{\lambda_n}\, \psi_n(t).$

In general the convergence is in mean square; for continuous fields the sum also converges a.s.

Again it may be difficult to find the eigenvalues and eigenfunctions, but discretization may often work (a discretized sketch follows below).
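In that spirit, here is a minimal sketch of a discretized Karhunen-Loève construction (illustrative; the exponential covariance $C(s,t) = \exp(-|s-t|/0.2)$ on $T = [0,1]$, the grid and the truncation level are assumed choices):

```python
import numpy as np

rng = np.random.default_rng(10)

# Discretized Karhunen-Loeve expansion on T = [0, 1] with an assumed
# covariance function C(s, t) = exp(-|s - t| / 0.2).
n_grid = 300
t = np.linspace(0.0, 1.0, n_grid)
dt = t[1] - t[0]
C = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)

# Eigen-decomposition of the discretized covariance operator (C * dt approximates the integral)
eigvals, eigvecs = np.linalg.eigh(C * dt)
idx = np.argsort(eigvals)[::-1]
lam, psi = eigvals[idx], eigvecs[:, idx] / np.sqrt(dt)   # approximate lambda_n, psi_n(t)

# Truncated K-L realisations: f(t) ~ sum_n xi_n sqrt(lambda_n) psi_n(t)
n_terms = 50
xi = rng.standard_normal((3, n_terms))
f = xi @ (np.sqrt(lam[:n_terms])[:, None] * psi[:, :n_terms].T)
print(f.shape)                                           # (3, 300): three approximate sample paths
```

Multiplying $C$ by $dt$ turns the covariance matrix into a discretization of the integral operator, and dividing the eigenvectors by $\sqrt{dt}$ makes them approximately orthonormal in $L^2(T)$.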

