
Tensor train to solve stochastic PDEs

Date post: 27-Jan-2017
Page 1: Tensor train to solve stochastic PDEs

Numerical methods for solving stochastic partial differential equations in the Tensor Train format

Alexander Litvinenko 1

(joint work with Sergey Dolgov 2,3, Boris Khoromskij 3 and Hermann G. Matthies 4)

1 SRI UQ and Extreme Computing Research Center, KAUST
2 Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig
3 MPI for Dynamics of Complex Systems, Magdeburg
4 TU Braunschweig, Germany

Center for Uncertainty Quantification

http://sri-uq.kaust.edu.sa/

Page 2: Tensor train to solve stochastic PDEs

Motivation for UQ

Modern computational algorithms, run on supercomputers, can simulate and resolve very complex phenomena. But how reliable are these predictions? Can we trust these results?

Some parameters/coefficients are unknown; lack of data and very few measurements → uncertainty.


Page 3: Tensor train to solve stochastic PDEs

Notation, problem setup

Consider

A(u; q) = f ⇒ u = S(f; q),

where S is the solution operator.

Uncertain input:

1. Parameter q := q(ω) (assume moments/cdf/pdf/quantiles of q are given)

2. Boundary and initial conditions, right-hand side

3. Geometry of the domain

Uncertain solution:

1. mean value and variance of u

2. exceedance probabilities P(u > u∗)

3. probability density functions (pdf) of u.


Page 4: Tensor train to solve stochastic PDEs

KAUST

Figure: KAUST campus, 5 years old, approx. 7000 people (including 1400 children), 100 nations.


Page 5: Tensor train to solve stochastic PDEs


Children at KAUST


Page 6: Tensor train to solve stochastic PDEs

Stochastic Numerics Group at KAUST

Figure: SRI UQ Group


Page 7: Tensor train to solve stochastic PDEs

3rd UQ Workshop "Advances in UQ Methods, Alg. & Appl."

Figure: The 3rd UQ Workshop "Advances in UQ Methods, Algorithms and Applications"


Page 8: Tensor train to solve stochastic PDEs


PDE with uncertain diffusion coefficients

PART 1. Stochastic Forward Problems


Page 9: Tensor train to solve stochastic PDEs

PDE with uncertain diffusion coefficients

Consider

−div(κ(x, ω) ∇u(x, ω)) = f(x, ω) in G × Ω, G ⊂ R²,  u = 0 on ∂G,  (1)

where κ(x, ω) is an uncertain diffusion coefficient. Since κ is positive, one usually takes κ(x, ω) = exp(γ(x, ω)).
For well-posedness see [Sarkis 09, Gittelson 10, H.-J. Starkloff 11, Ullmann 10].
Further we assume that cov_κ(x, y) is given (or estimated from the available data).


Page 10: Tensor train to solve stochastic PDEs

Our previous work

After applying the stochastic Galerkin method we obtain Ku = f, where all ingredients are represented in a tensor format.

Solve for u. Compute max u, var(u), level sets of u, pdf, cdf.

1. Efficient Analysis of High Dimensional Data in Tensor Formats, [Espig, Hackbusch, A.L., Matthies and Zander, 2012]
(study the rank of K and on which ingredients it depends)

2. Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats, [Wahnert, Espig, Hackbusch, A.L., Matthies, 2013]


Page 11: Tensor train to solve stochastic PDEs

Smooth transformation of a Gaussian RF

Step 1: We assume κ = φ(γ), a smooth transformation of the Gaussian random field γ(x, ω), e.g. φ(γ) = exp(γ). [see the PhD theses of E. Zander 2013 and A. Keese 2005]

Step 2: Given the covariance matrix of κ(x, ω), we derive the covariance matrix of γ(x, ω). After that the KLE may be computed:

γ(x, ω) = ∑_{m=1}^∞ g_m(x) θ_m(ω),   ∫_D cov_γ(x, y) g_m(y) dy = λ_m g_m(x).  (2)
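A minimal numpy sketch of the eigenproblem in (2): discretize the covariance of γ on a grid and solve the resulting symmetric eigenproblem (Nyström discretization with uniform weights). The exponential covariance, grid size, and truncation level below are illustrative assumptions, not the settings used in the talk.

```python
import numpy as np

# Discretize cov_gamma on a uniform 1D grid; Eq. (2) becomes a
# symmetric matrix eigenproblem: (cov * h) g = lambda g.
n = 200                              # grid points (illustrative)
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]                      # uniform quadrature weight
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)   # assumed covariance

lam, g = np.linalg.eigh(cov * h)
lam, g = lam[::-1], g[:, ::-1]       # sort eigenvalues descending

M = 10                               # truncate the KLE after M terms
energy = lam[:M].sum() / lam.sum()   # fraction of variance captured
print(f"captured variance with {M} terms: {energy:.4f}")
```

The rapid eigenvalue decay is what makes the truncated KLE with few terms usable as input for the PCE construction that follows.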


Page 12: Tensor train to solve stochastic PDEs

Full J_{M,p} and sparse J^sp_{M,p} multi-index sets

The M-dimensional PCE approximation of κ writes (α = (α_1, ..., α_M))

κ(x, ω) ≈ ∑_{α ∈ J_M} κ_α(x) H_α(θ(ω)),   H_α(θ) := h_{α_1}(θ_1) · · · h_{α_M}(θ_M).  (3)

Definition. The full multi-index set is defined by restricting each component independently,

J_{M,p} = {0, 1, . . . , p_1} ⊗ · · · ⊗ {0, 1, . . . , p_M},  where p = (p_1, . . . , p_M)

is a shortcut for the tuple of order limits.

Definition. The sparse multi-index set is defined by restricting the sum of the components,

J^sp_{M,p} = {α = (α_1, . . . , α_M) : α ≥ 0, α_1 + · · · + α_M ≤ p}.
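The two definitions above can be enumerated directly for small M and p; the cardinalities are (p+1)^M for the full set and binomial(M+p, p) for the sparse one. A small sketch (sizes illustrative):

```python
import itertools
from math import comb

M, p = 4, 3

# Full set J_{M,p}: each component restricted independently
full = list(itertools.product(range(p + 1), repeat=M))
# Sparse set J^sp_{M,p}: total order restricted
sparse = [a for a in full if sum(a) <= p]

print(len(full), len(sparse))   # 256 vs 35
```

This is the trade-off studied later in the timings: the full set explodes with M (curse of dimensionality), the sparse set explodes with p (curse of order).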


Page 13: Tensor train to solve stochastic PDEs

TT compression of PCE coefficients

The Galerkin coefficients κ_α are evaluated as follows [Thm 3.10, PhD of E. Zander 13],

κ_α(x) = ((α_1 + · · · + α_M)! / (α_1! · · · α_M!)) φ_{α_1+···+α_M} ∏_{m=1}^M g_m^{α_m}(x),  (4)

where φ_{|α|} := φ_{α_1+···+α_M} is the Galerkin coefficient of the transform function, and g_m^{α_m}(x) is just the α_m-th power of the KLE function value g_m(x).


Page 14: Tensor train to solve stochastic PDEs

Complexity reduction

Complexity reduction in Eq. (4) can be achieved with the help of the KLE of κ(x, ω):

κ(x, ω) ≈ κ̄(x) + ∑_{ℓ=1}^L √μ_ℓ v_ℓ(x) η_ℓ(ω)  (5)

with normalized spatial functions v_ℓ(x). Instead of using κ_α(x), (4), directly, we compute

κ_ℓ(α) = ((α_1 + · · · + α_M)! / (α_1! · · · α_M!)) φ_{α_1+···+α_M} ∫_D ∏_{m=1}^M g_m^{α_m}(x) v_ℓ(x) dx.

Note that L ≪ N. Then we restore the approximate coefficients

κ_α(x) ≈ κ̄(x) + ∑_{ℓ=1}^L v_ℓ(x) κ_ℓ(α).


Page 15: Tensor train to solve stochastic PDEs

Construction of the stochastic Galerkin operator

Given the KLE of κ, assemble for i, j = 1, . . . , N and ℓ = 1, . . . , L:

K_0(i, j) = ∫_D κ̄(x) ∇ϕ_i(x) · ∇ϕ_j(x) dx,   K_ℓ(i, j) = ∫_D v_ℓ(x) ∇ϕ_i(x) · ∇ϕ_j(x) dx,  (6)

K^(ω)_ℓ(α, β) = ∫_{R^M} H_α(θ) H_β(θ) ∑_{ν∈J_M} κ_ℓ(ν) H_ν(θ) ρ(θ) dθ = ∑_{ν∈J_M} Δ_{α,β,ν} κ_ℓ(ν),

Δ_{α,β,ν} = Δ_{α_1,β_1,ν_1} · · · Δ_{α_M,β_M,ν_M},   Δ_{α_m,β_m,ν_m} = ∫_R h_{α_m}(θ) h_{β_m}(θ) h_{ν_m}(θ) ρ(θ) dθ.
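The 1D building blocks Δ_{α_m,β_m,ν_m} are cheap to tabulate by Gauss-Hermite quadrature. A sketch, assuming h_n are the unnormalized probabilists' Hermite polynomials He_n and ρ is the standard Gaussian density (a normalization convention the slides leave open):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def he(n, x):
    # Evaluate the probabilists' Hermite polynomial He_n at x
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)

def delta(a, b, c, nq=40):
    # Delta_{a,b,c} = int He_a He_b He_c rho dtheta via Gauss-Hermite
    x, w = hermegauss(nq)                 # weight exp(-x^2/2)
    q = np.sum(w * he(a, x) * he(b, x) * he(c, x))
    return q / np.sqrt(2.0 * np.pi)       # normalize to the Gaussian density

# Known closed form: a! b! c! / ((s-a)!(s-b)!(s-c)!) if s = (a+b+c)/2 is an
# integer and the triangle inequality holds, else 0.
print(delta(1, 1, 2))   # E[x * x * (x^2 - 1)] = 3 - 1 = 2
```

Most triples vanish, which is exactly why the Δ tensor is sparse and separable across the M directions.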


Page 16: Tensor train to solve stochastic PDEs

Stochastic Galerkin operator

Putting the previous formulas together, we obtain the stochastic Galerkin operator

K = K^(x)_0 ⊗ Δ_0 + ∑_{ℓ=1}^L K^(x)_ℓ ⊗ K^(ω)_ℓ,  (7)

with K ∈ R^{N(p+1)^M × N(p+1)^M} in the case of the full set J_{M,p}.

IDEA: If the PCE coefficients of κ are computed in a tensor product format, the direct product structure of Δ allows us to exploit the same format for (7) and build the operator easily.
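The payoff of the Kronecker structure in (7) is that a matvec never needs the assembled matrix, thanks to the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ). A numpy sketch with illustrative sizes (A and B stand in for K_0 and Δ_0):

```python
import numpy as np

rng = np.random.default_rng(0)
N, J = 30, 20                       # spatial dofs, stochastic dofs
A = rng.standard_normal((N, N))     # plays the role of K0
B = rng.standard_normal((J, J))     # plays the role of Delta0
X = rng.standard_normal((J, N))     # vec(X) is the coefficient vector

# Naive: form the full Kronecker product, O(N^2 J^2) memory
y_full = np.kron(A, B) @ X.reshape(-1, order="F")
# Structured: (A kron B) vec(X) = vec(B X A^T), O(NJ(N+J)) work
y_fast = (B @ X @ A.T).reshape(-1, order="F")

print(np.allclose(y_full, y_fast))
```

The same trick applies term by term to the sum over ℓ in (7), which is why iterative solvers can work with K in tensor format.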


Page 17: Tensor train to solve stochastic PDEs

Tensor Train

Two Tensor Train examples


Page 18: Tensor train to solve stochastic PDEs

Examples (B. Khoromskij's lecture)

f(x_1, ..., x_d) = w_1(x_1) + w_2(x_2) + ... + w_d(x_d)

= (w_1(x_1), 1) · [1, 0; w_2(x_2), 1] · ... · [1, 0; w_{d−1}(x_{d−1}), 1] · [1; w_d(x_d)]
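The matrix-product form above can be checked numerically; with the toy choice w_m(x) = x, contracting the rank-2 carriages from left to right reproduces the plain sum:

```python
import numpy as np

def f_tt(xs):
    # Contract the rank-2 TT carriages for w1(x1) + ... + wd(xd), w_m(x) = x
    v = np.array([xs[0], 1.0])                 # (w1(x1), 1)
    for x in xs[1:-1]:
        v = v @ np.array([[1.0, 0.0], [x, 1.0]])
    return float(v @ np.array([1.0, xs[-1]]))  # (1, wd(xd))^T

xs = [0.3, -1.2, 2.5, 0.7]
print(f_tt(xs), sum(xs))   # both 2.3
```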


Page 19: Tensor train to solve stochastic PDEs

Examples:

TT rank(f) = 2

f = sin(x_1 + x_2 + ... + x_d)

= (sin x_1, cos x_1) · [cos x_2, −sin x_2; sin x_2, cos x_2] · ... · [cos x_{d−1}, −sin x_{d−1}; sin x_{d−1}, cos x_{d−1}] · [cos x_d; sin x_d]
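The rank-2 representation of sin(x_1 + ... + x_d) can likewise be verified by contracting the rotation-matrix carriages from left to right (the running vector holds (sin, cos) of the partial sum at every step):

```python
import numpy as np

def sin_tt(xs):
    # Left-to-right contraction of the rank-2 TT carriages for sin(sum x_m)
    v = np.array([np.sin(xs[0]), np.cos(xs[0])])
    for x in xs[1:-1]:
        v = v @ np.array([[np.cos(x), -np.sin(x)],
                          [np.sin(x),  np.cos(x)]])
    return float(v @ np.array([np.cos(xs[-1]), np.sin(xs[-1])]))

xs = [0.4, 1.1, -0.6, 2.0]
print(sin_tt(xs), np.sin(sum(xs)))
```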


Page 20: Tensor train to solve stochastic PDEs

Low-rank response surface: PCE in the TT format

Computing

κ_ℓ(α) = ((α_1 + · · · + α_M)! / (α_1! · · · α_M!)) φ_{α_1+···+α_M} ∫_D ∏_{m=1}^M g_m^{α_m}(x) v_ℓ(x) dx

in the TT format needs:

- a procedure to compute each element of the tensor, e.g. κ_{α_1,...,α_M};

- a TT approximation κ_α ≈ κ^(1)(α_1) · · · κ^(M)(α_M) built from a feasible number of elements (i.e. much fewer than (p + 1)^M).

Such a procedure exists; it relies on the cross interpolation of matrices, generalized to the higher-dimensional case [Oseledets, Tyrtyshnikov 2010; Savostyanov 13; Grasedyck; Bebendorf].
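The two-dimensional core of the idea, cross interpolation of a matrix, fits in a few lines. The toy sketch below uses full pivoting for clarity; the practical ACA/cross schemes in the cited works use cheaper partial pivoting and touch only O((m+n)r) entries. The test matrix is an illustrative smooth kernel, not one from the talk.

```python
import numpy as np

def cross_approx(A, r):
    """Greedy skeleton (cross) approximation of A with r pivots."""
    E = A.copy()                     # residual
    Ak = np.zeros_like(A)            # accumulated approximation
    for _ in range(r):
        # Full pivoting: pick the largest residual entry as the cross
        i, j = np.unravel_index(np.argmax(np.abs(E)), E.shape)
        u = E[:, j] / E[i, j]
        v = E[i, :].copy()
        upd = np.outer(u, v)         # rank-1 cross update
        Ak += upd
        E = E - upd
    return Ak

# A smooth, numerically low-rank test matrix
x = np.linspace(0.0, 1.0, 60)
A = 1.0 / (1.0 + x[:, None] + x[None, :])
err = np.linalg.norm(A - cross_approx(A, 10)) / np.linalg.norm(A)
print(f"relative error with 10 crosses: {err:.2e}")
```

The higher-dimensional TT-cross algorithms apply this matrix step to unfoldings of the tensor, sweeping back and forth over the dimensions.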


Page 21: Tensor train to solve stochastic PDEs

The PCE coefficients κ_ℓ(α) are:

κ_ℓ(α) = ∑_{s_1,...,s_{M−1}} κ^(1)_{ℓ,s_1}(α_1) κ^(2)_{s_1,s_2}(α_2) · · · κ^(M)_{s_{M−1}}(α_M).  (8)

Collect the spatial components into the "zeroth" TT block,

κ^(0)(x) = [κ^(0)_ℓ(x)]_{ℓ=0}^L = [κ̄(x) v_1(x) · · · v_L(x)],  (9)

then the PCE writes as the following TT format,

κ(x, α) = ∑_{ℓ,s_1,...,s_{M−1}} κ^(0)_ℓ(x) κ^(1)_{ℓ,s_1}(α_1) · · · κ^(M)_{s_{M−1}}(α_M).  (10)


Page 22: Tensor train to solve stochastic PDEs

Stochastic Galerkin matrix in TT format

Given κ_α(x), (10), we split the whole sum over ν in K, (7):

∑_{ν∈J_{M,p}} Δ_{α,β,ν} κ_ℓ(ν) = ∑_{s_1,...,s_{M−1}} K^(1)_{ℓ,s_1}(α_1, β_1) K^(2)_{s_1,s_2}(α_2, β_2) · · · K^(M)_{s_{M−1}}(α_M, β_M),

K^(m)(α_m, β_m) = ∑_{ν_m=0}^{p_m} Δ_{α_m,β_m,ν_m} κ^(m)(ν_m),  m = 1, . . . , M.  (11)

Then the TT representation of the operator writes

K = ∑_{ℓ,s_1,...,s_{M−1}} K^(0)_ℓ ⊗ K^(1)_{ℓ,s_1} ⊗ · · · ⊗ K^(M)_{s_{M−1}} ∈ R^{(N·#J_{M,p}) × (N·#J_{M,p})}.  (12)


Page 23: Tensor train to solve stochastic PDEs

Solving and post-processing

Solve the linear system Ku = f by alternating optimization methods [Dolgov, Savostyanov 14] with a mean-field preconditioner. Obtain the solution u in the TT format:

u(x, α) = ∑_{s_0,...,s_{M−1}} u^(0)_{s_0}(x) u^(1)_{s_0,s_1}(α_1) · · · u^(M)_{s_{M−1}}(α_M),  (13)

u(x, θ) = ∑_{s_0,...,s_{M−1}} u^(0)_{s_0}(x) ( ∑_{α_1=0}^p h_{α_1}(θ_1) u^(1)_{s_0,s_1}(α_1) ) · · ·  (14)

× ( ∑_{α_M=0}^p h_{α_M}(θ_M) u^(M)_{s_{M−1}}(α_M) ).  (15)

Then compute: mean, (co)variance, exceedance probabilities.
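For a Hermite PCE the first moments come directly from the coefficients, with no sampling. A 1D sketch, assuming unnormalized probabilists' Hermite polynomials (a convention the slides leave open; then E[u] = u_0 and Var[u] = Σ_{n≥1} u_n² n!), cross-checked by quadrature:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

c = np.array([1.0, 0.5, -0.3, 0.1])    # illustrative PCE coefficients u_0..u_3
mean_pce = c[0]
var_pce = sum(c[n] ** 2 * math.factorial(n) for n in range(1, len(c)))

# Cross-check by Gauss-Hermite quadrature against the standard normal
x, w = hermegauss(40)                  # weight exp(-x^2/2)
w = w / np.sqrt(2.0 * np.pi)           # normalize weights to the density rho
u = hermeval(x, c)                     # u(theta) on the quadrature nodes
mean_q = np.sum(w * u)
var_q = np.sum(w * (u - mean_q) ** 2)
print(mean_pce, var_pce)
```

In the TT setting the same sums over α are computed core by core, which is what makes the post-processing cheap.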


Page 24: Tensor train to solve stochastic PDEs

Numerics: main steps

1. Use sglib (E. Zander, TU Braunschweig) for discretization and solution with J^sp_{M,p}.

2. Compute the PCE (sglib) of the coefficient κ(x, ω) in the TT format by the new block adaptive cross algorithm (TT-Toolbox).

3. Use the TT-Toolbox for the full set J_{M,p}.

4. Use amen_cross.m for the TT approximation of κ_α.

5. Compute the stochastic Galerkin matrix K in the TT format.

6. Replace high-dimensional calculations by the TT-Toolbox.

7. Compute the solution of the linear system in TT (alternating minimal energy, tAMEn).

8. Post-processing in the TT format.


Page 25: Tensor train to solve stochastic PDEs

Numerical experiments, errors, accuracy

D = [−1, 1]² \ [0, 1]². f = f(x) = 1; log-normal and beta distributions for κ; 557, 2145, 8417 dofs.

E_κ = (1/N_mc) ∑_{z=1}^{N_mc} sqrt( ∑_{i=1}^N (κ(x_i, θ_z) − κ_*(x_i, θ_z))² ) / sqrt( ∑_{i=1}^N κ_*²(x_i, θ_z) ),

where {θ_z}_{z=1}^{N_mc} are normally distributed random samples and κ_*(x_i, θ_z) = φ(γ(x_i, θ_z)) is the reference coefficient computed without using the PCE for φ.

E_u = ‖u − u_*‖_{L²(D)} / ‖u_*‖_{L²(D)},   E_var_u = ‖var_u − var_u_*‖_{L²(D)} / ‖var_u_*‖_{L²(D)}.


Page 26: Tensor train to solve stochastic PDEs

More numerics

We compute the maximizer of the mean solution, x_max : ū(x_max) ≥ ū(x) ∀x ∈ D. Let u_max(θ) = u(x_max, θ) and ū = ū(x_max). Taking some τ > 1, we compute

P = P(u_max(θ) > τ ū) = ∫_{R^M} χ_{u_max(θ) > τū}(θ) ρ(θ) dθ.  (16)

By P_* we also denote the probability computed by the Monte Carlo method, and estimate the error as E_P = |P − P_*| / P_*.


Page 27: Tensor train to solve stochastic PDEs

Sparse J^sp_{M,p} or full J_{M,p}?

Which is better, the sparse J^sp_{M,p} or the full J_{M,p} multi-index set?


Page 28: Tensor train to solve stochastic PDEs

CPU times (sec.) versus p, log-normal distribution

        TT (full index set J_{M,p})     Sparse (index set J^sp_{M,p})
p       T_κ     T_op    T_u             T_κ     T_op    T_u
1       9.6     0.2     1.7             0.5     0.3     0.65
2       14.7    0.2     3               0.5     3.2     1.4
3       19.1    0.2     3.4             0.7     1028    18
4       24.4    0.2     4.2             2.2     —       —
5       30.9    0.32    5.3             9.8     —       —


Page 29: Tensor train to solve stochastic PDEs

How does the maximal polynomial order p influence the TT ranks?


Page 30: Tensor train to solve stochastic PDEs

Performance versus p, log-normal distribution

        CPU time, sec.          r_κ   r_u   r_χ     E_κ                 E_u                 P
p       TT      Sparse    χ                         TT        Sparse    TT        Sparse    TT
1       11      1.4       0.2   32    42    1       4e-3      1.7e-1    1e-2      1e-1      0
2       18      5.1       0.3   32    49    1       1e-4      1.1e-1    5e-4      5e-2      0
3       23      1046      83    32    49    462     6e-5      2e-3      3e-4      5e-4      2.8e-4
4       29      —         70    32    50    416     6e-5      —         1e-4      —         1.2e-4
5       37      —         103   32    49    410     6e-5      —         1e-4      —         6.2e-4

Take τ = 1.2:

P = P(u_max(θ) > τ ū) = ∫_{R^M} χ_{u_max(θ) > τū}(θ) ρ(θ) dθ.  (17)


Page 31: Tensor train to solve stochastic PDEs

How does the stochastic dimension M influence the TT ranks?


Page 32: Tensor train to solve stochastic PDEs

Performance versus M, log-normal distribution

        CPU time, sec.          r_κ   r_u   r_χ     E_κ                 E_u                  P
M       TT      Sparse    χ                         TT        Sparse    TT        Sparse     TT
10      6       6         1.3   20    39    70      2e-4      1.7e-1    3e-4      1.5e-1     2.86e-4
15      12      92        23    27    42    381     8e-5      2e-3      3e-4      5e-4       3e-4
20      22      1e+3      67    32    50    422     6e-5      2e-3      3e-4      5e-4       2.96e-4
30      53      5e+4      137   39    50    452     6e-5      1e-1      3e-4      5.5e-2     2.78e-4


Page 33: Tensor train to solve stochastic PDEs

How does the covariance length influence the TT ranks?


Page 34: Tensor train to solve stochastic PDEs

Performance versus covariance length, log-normal distribution

cov.    CPU time, sec.          r_κ   r_u   r_χ     E_κ                 E_u                  P
length  TT      Sparse    χ                         TT        Sparse    TT        Sparse     TT
0.1     216     55800     0.9   70    50    1       2e-2      2e-2      1.8e-2    1.8e-2     0
0.3     317     52360     42    87    74    297     3e-3      3.5e-3    2.6e-3    2.6e-3     8e-31
0.5     195     51700     58    67    74    375     1.5e-4    2e-3      2.6e-4    3.1e-4     6e-33
1.0     57.3    55200     97    39    50    417     6.1e-5    9e-2      3.2e-4    5.6e-2     2.95e-4
1.5     32.4    49800     121   31    34    424     3.2e-5    2e-1      5e-4      1.7e-1     7.5e-4


Page 35: Tensor train to solve stochastic PDEs

How does the standard deviation σ influence the TT ranks?


Page 36: Tensor train to solve stochastic PDEs

Performance versus σ, log-normal distribution

        CPU time, sec.          r_κ   r_u   r_χ     E_κ                 E_u                  P
σ       TT      Sparse    χ                         TT        Sparse    TT        Sparse     TT
0.2     16      1e+3      0.3   21    31    1       6e-5      5e-5      4e-5      1e-5       0
0.4     19      968       0.3   29    42    1       7e-5      8e-4      1e-4      2e-4       0
0.5     21      970       80    32    49    456     6e-5      2e-3      3e-4      5e-4       3e-4
0.6     24      962       25    34    57    272     9e-5      4e-3      6e-4      1e-3       2e-3
0.8     32      969       68    39    66    411     4e-4      8e-2      2e-3      3e-2       8e-2
1.0     51      1070      48    44    82    363     2e-3      4e-1      5e-3      3e-1       9e-2


Page 37: Tensor train to solve stochastic PDEs

How does the number of DoFs influence the TT ranks?


Page 38: Tensor train to solve stochastic PDEs

Performance versus #DoFs, log-normal distribution

#DoFs   CPU time, sec.          r_κ   r_u   r_χ     E_κ                 E_u                  P
        TT      Sparse    χ                         TT        Sparse    TT        Sparse     TT
557     6       6         1.3   20    39    71      2e-4      1.7e-1    3e-4      1.5e-1     2.86e-4
2145    9       14        1.2   20    39    76      2e-4      2e-3      3e-4      5.7e-4     2.9e-4
8417    357     171       0.8   20    40    69      1.7e-4    2e-3      3e-4      5.6e-4     2.93e-4


Page 39: Tensor train to solve stochastic PDEs

Comparison with Monte Carlo

We compare the solution obtained via stochastic Galerkin + TT with the solution obtained via Monte Carlo (4000 samples). For the Monte Carlo test, we prepare the TT solution with parameters p = 5 and M = 30.


Page 40: Tensor train to solve stochastic PDEs

Verification of the MC method (4000 samples), log-normal distribution

N_mc    T_MC, sec.    E_u     E_var_u    P_*     E_P        TT results
10²     0.6           9e-3    2e-1       0       ∞          T_solve  97 sec.
10³     6.2           2e-3    6e-2       0       ∞          T_χ      157 sec.
10⁴     6.2·10¹       6e-4    7e-3       4e-4    5e-1       r_κ      39
10⁵     6.2·10²       3e-4    3e-3       4e-4    5e-1       r_u      50
10⁶     6.3·10³       1e-4    1e-3       5e-4    4e-1       r_χ      432
                                                            P        6e-4


Page 41: Tensor train to solve stochastic PDEs

Part II: diffusion coefficient with beta distribution

κ(x, ω) = B⁻¹_{5,2}( (1 + erf(γ(x, ω)/√2)) / 2 ) + 1,

B_{a,b}(z) = (1/B(a, b)) ∫_0^z t^{a−1} (1 − t)^{b−1} dt.


Page 42: Tensor train to solve stochastic PDEs

We studied (for the beta distribution):

1. Performance versus p

2. Performance versus stochastic dimension M

3. Performance versus cov. length

4. Performance versus #DoFs

5. Verification of the Monte Carlo method


Page 43: Tensor train to solve stochastic PDEs

Take-home messages

1. TT methods become preferable for high p; otherwise the full computation on a small sparse set may be incredibly fast. This reflects the "curse of order" for the sparse set, instead of the "curse of dimensionality" for the full set: the cardinality of the sparse set grows exponentially with p.

2. The TT approach scales linearly with p.

3. TT methods allow easy calculation of the stochastic Galerkin operator. With p < 10, TT storage of the stochastic Galerkin operator lets us forget about sparsity issues, since the number of TT entries O(M p² r²) is tractable.

4. Chebyshev, Laguerre, ... polynomials may be incorporated into the scheme freely.


Page 44: Tensor train to solve stochastic PDEs

Future plans for the next article

1. Compute Sobol indices in the TT format. Which uncertain coefficients and which PCE terms are important?

2. The solution of this linear elliptic SPDE is a "workhorse" for the non-linear equation and the Newton method.

3. The stochastic Galerkin method in the TT format above can be used as a preconditioner (it is very fast!) for more complicated non-linear problems.

4. Apply it to more complicated diffusion coefficients (e.g. ones that are not so easily separable).

5. Construct an analytic u, compute the corresponding right-hand side analytically, and solve the problem again (to avoid using MC as a reference).


Page 45: Tensor train to solve stochastic PDEs

Approximate Bayesian Update

PART 2. Inverse Problems via approximate Bayesian Update


Page 46: Tensor train to solve stochastic PDEs

Setting for the identification process

General idea: We observe / measure a system whose structure we know in principle. The system behaviour depends on some quantities (parameters) which we do not know ⇒ uncertainty.

We model (the uncertainty in) our knowledge in a Bayesian setting: as a probability distribution on the parameters. We start with what we know a priori, then perform a measurement. This gives new information with which to update our knowledge (identification).

The update in a probabilistic setting works with conditional probabilities ⇒ Bayes's theorem.

Repeated measurements lead to better identification.


Page 47: Tensor train to solve stochastic PDEs

Mathematical setup

Consider

A(u; q) = f ⇒ u = S(f; q),

where S is the solution operator. The operator depends on parameters q ∈ Q, hence the state u ∈ U is also a function of q.

Measurement operator Y with values in Y:

y = Y(q; u) = Y(q, S(f; q)).

Examples of measurements:
(ODE) u(t) = (x(t), y(t), z(t))ᵀ, y(t) = (x(t), y(t))ᵀ
(PDE) y(ω) = ∫_{D₀} u(ω, x) dx, y(ω) = ∫_{D₀} |grad u(ω, x)|² dx, u at a few points


Page 48: Tensor train to solve stochastic PDEs

Inverse problem

For given f, the measurement y is just a function of q. This function is usually not invertible ⇒ ill-posed problem: the measurement y does not contain enough information.

In the Bayesian framework, the state of knowledge is modelled in a probabilistic way: the parameters q are uncertain and assumed random.

The Bayesian setting allows updating / sharpening of the information about q when a measurement is performed.

The problem of updating the distribution (the state of knowledge of q) becomes well-posed.

It can be applied successively; each new measurement y and forcing f (which may also be uncertain) provides new information.


Page 49: Tensor train to solve stochastic PDEs

Conditional probability and expectation

With the state u ∈ U ⊗ S a random variable, the quantity to be measured,

y(ω) = Y(q(ω), u(ω)) ∈ Y ⊗ S,

is also uncertain, a random variable. A new measurement z is performed, composed from the "true" value y ∈ Y and a random error ε: z(ω) = y + ε(ω).

Classically, Bayes's theorem gives the conditional probability

P(I_q | M_z) = P(M_z | I_q) P(I_q) / P(M_z);

expectation with this posterior measure is the conditional expectation. Kolmogorov starts from the conditional expectation E(·|M_z), and from this obtains the conditional probability via P(I_q | M_z) = E(χ_{I_q} | M_z).


Page 50: Tensor train to solve stochastic PDEs

IDEA of the Bayesian Update (BU)

Let Y(x, θ), θ = (θ_1, ..., θ_M, ...), be approximated:

Y(x, θ) = ∑_{β∈J_{M,p}} H_β(θ) Y_β(x),   q(x, θ) = ∑_{β∈J_{M,p}} H_β(θ) q_β(x),

Y_β(x) = (1/β!) ∫_Θ H_β(θ) Y(x, θ) P(dθ).

Take q_f(ω) = q_0(ω).
Linear BU: q_a = q_f + K · (z − y).
Non-linear BU: q_a = q_f + H_1 · (z − y) + (z − y)ᵀ · H_2 · (z − y).
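For a linear-Gaussian toy problem the linear update q_a = q_f + K·(z − y) can be written out explicitly, with K the familiar Kalman gain built from the forecast and error covariances. All numbers below are illustrative assumptions, not the talk's examples:

```python
import numpy as np

rng = np.random.default_rng(1)
m_q = np.array([0.0, 0.0])                 # prior (forecast) mean of q
C_q = np.array([[1.0, 0.3], [0.3, 0.5]])   # prior covariance
H = np.array([[1.0, 2.0]])                 # linear measurement y = H q
C_e = np.array([[0.1]])                    # measurement-noise covariance

q_true = np.array([0.7, -0.4])
z = H @ q_true + rng.normal(0.0, np.sqrt(C_e[0, 0]), size=1)

S = H @ C_q @ H.T + C_e                    # innovation covariance
K = C_q @ H.T @ np.linalg.inv(S)           # gain in q_a = q_f + K (z - y)
q_a = m_q + (K @ (z - H @ m_q)).ravel()    # updated (analysis) mean
C_a = C_q - K @ H @ C_q                    # updated covariance

print(q_a, np.trace(C_a), np.trace(C_q))   # posterior variance shrinks
```

In the talk's setting the same formula acts on PCE/TT coefficients of q and y rather than on plain vectors, which is what makes the update sampling-free.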


Page 51: Tensor train to solve stochastic PDEs

Open questions


Page 52: Tensor train to solve stochastic PDEs

Multivariate Cauchy distribution

The characteristic function ϕ_X(t) of the multivariate Cauchy distribution is defined as follows:

ϕ_X(t) = exp( i (t_1, t_2) · (μ_1, μ_2)ᵀ − sqrt( (1/2) (t_1, t_2) [σ_1², 0; 0, σ_2²] (t_1, t_2)ᵀ ) ),  (18)

and we seek a low-rank separation

ϕ_X(t) ≈ ∑_{ν=1}^R ϕ_{X_{ν,1}}(t_1) · ϕ_{X_{ν,2}}(t_2).  (19)

Again, by the inversion theorem, the probability density of X on R² can be computed from ϕ_X(t) as follows:


Page 53: Tensor train to solve stochastic PDEs

p_X(y) = (1/(2π)²) ∫_{R²} exp(−i⟨y, t⟩) ϕ_X(t) dt  (20)

≈ (1/(2π)²) ∫_{R²} exp(−i(y_1 t_1 + y_2 t_2)) ∑_{ν=1}^R ϕ_{X_{ν,1}}(t_1) · ϕ_{X_{ν,2}}(t_2) dt_1 dt_2  (21)

= ∑_{ν=1}^R (1/(2π)) ∫_R exp(−i y_1 t_1) ϕ_{X_{ν,1}}(t_1) dt_1 · (1/(2π)) ∫_R exp(−i y_2 t_2) ϕ_{X_{ν,2}}(t_2) dt_2  (22)

≈ ∑_{ν=1}^R p_{X_{ν,1}}(y_1) · p_{X_{ν,2}}(y_2),  (23)

i.e. the probability density p_X(y) is numerically splittable.
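The 1D building block of (20)-(23) is easy to check numerically: inverting the characteristic function φ(t) = exp(−|t|) of the standard Cauchy distribution by quadrature recovers its density 1/(π(1 + y²)). Grid and cutoff below are illustrative choices:

```python
import numpy as np

t = np.linspace(0.0, 60.0, 200001)   # phi decays fast, so truncation is safe
dt = t[1] - t[0]
phi = np.exp(-t)                     # char. function of the standard Cauchy, t >= 0

def density(y):
    # p(y) = (1/2pi) int e^{-ity} phi(t) dt = (1/pi) int_0^inf cos(ty) e^{-t} dt
    f = np.cos(t * y) * phi
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dt / np.pi   # trapezoid rule

for y in (0.0, 0.5, 2.0):
    print(y, density(y), 1.0 / (np.pi * (1.0 + y * y)))
```

Once a rank-R separation (19) of the multivariate φ is in hand, the d-dimensional inversion reduces to R·d such 1D integrals.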


Page 54: Tensor train to solve stochastic PDEs

Elliptically contoured multivariate stable distribution

ϕ_X(t) = exp( i (t_1, t_2) · (μ_1, μ_2)ᵀ − ( (t_1, t_2) [σ_1², 0; 0, σ_2²] (t_1, t_2)ᵀ )^{α/2} ),  (24)

Now the question is to find a separation

( (t_1, t_2) [σ_1², 0; 0, σ_2²] (t_1, t_2)ᵀ )^{α/2} ≈ ∑_{ν=1}^R φ_{ν,1}(t_1) · φ_{ν,2}(t_2),  (25)

with some tensor rank R.


Page 55: Tensor train to solve stochastic PDEs

Multivariate distribution

Assume that the characteristic function ϕ_X(t) of some multivariate d-dimensional distribution is approximated as follows:

ϕ_X(t) ≈ ∑_{ℓ=1}^R ⊗_{μ=1}^d ϕ_{X_{ℓ,μ}}(t_μ).  (26)

Then

p_X(y) = const ∫_{R^d} exp(−i⟨y, t⟩) ϕ_X(t) dt  (27)

≈ const ∫_{R^d} exp(−i ∑_{j=1}^d y_j t_j) ∑_{ℓ=1}^R ⊗_{μ=1}^d ϕ_{X_{ℓ,μ}}(t_μ) dt_1 ... dt_d  (28)

= ∑_{ℓ=1}^R ⊗_{μ=1}^d const ∫_R exp(−i y_μ t_μ) ϕ_{X_{ℓ,μ}}(t_μ) dt_μ  (29)

≈ ∑_{ℓ=1}^R ⊗_{μ=1}^d p_{X_{ℓ,μ}}(y_μ).  (30)


Page 56: Tensor train to solve stochastic PDEs

Actual computation of ϕ_X(t)

ϕ_X(τ_β) = E( exp(i⟨X(θ_1, ..., θ_M), τ_β⟩) )

= ∫···∫_Θ exp(i⟨X(θ_1, ..., θ_M), τ_β⟩) ∏_{m=1}^M p_{θ_m}(θ_m) dθ_1 ... dθ_M,

⟨X(ω), τ_β⟩ = ⟨∑_{α∈J} ξ^α H_α(θ), τ_β⟩ ≈ ∑_{ℓ=1}^d ∑_{α∈J} ξ^α_ℓ H_α(θ) t_{β_ℓ,ℓ}

= ∑_{α∈J} ∑_{ℓ=1}^d ξ^α_ℓ t_{β_ℓ,ℓ} H_α(θ) = ∑_{α∈J} ⟨ξ^α, τ_β⟩ H_α(θ).  (31)

Now compute the exp() function of the scalar product:


Page 57: Tensor train to solve stochastic PDEs

exp(i⟨X(ω), τ_β⟩) = exp( i ∑_{α∈J} ⟨ξ^α, τ_β⟩ H_α(θ) )  (32)

= ∏_{α∈J} exp( i⟨ξ^α, τ_β⟩ H_α(θ) ).  (33)

Now we apply integration:

ϕ_X(t) = E( exp(i⟨X(ω), τ_β⟩) )

= ∫···∫_Θ ∏_{α∈J} exp( i⟨ξ^α, τ_β⟩ H_α(θ) ) ∏_{m=1}^M p_{θ_m}(θ_m) dθ_1 ... dθ_M

≈ (?) ∑_{ℓ=1}^{n_q} w_ℓ ∏_{α∈J} exp( i⟨ξ^α, τ_β⟩ H_α(θ_ℓ) ) ∏_{m=1}^M p_{θ_m}(θ_{m,ℓ})

(the accuracy of this quadrature approximation is an open question).


Page 58: Tensor train to solve stochastic PDEs

Literature

1. Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format, S. Dolgov, B. N. Khoromskij, A. Litvinenko, H. G. Matthies, 2015, arXiv:1503.03210

2. Efficient analysis of high dimensional data in tensor formats, M. Espig, W. Hackbusch, A. Litvinenko, H. G. Matthies, E. Zander, Sparse Grids and Applications, 31-56, 2013

3. Application of hierarchical matrices for computing the Karhunen-Loeve expansion, B. N. Khoromskij, A. Litvinenko, H. G. Matthies, Computing 84 (1-2), 49-67, 2009

4. Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats, M. Espig, W. Hackbusch, A. Litvinenko, H. G. Matthies, P. Waehnert, Computers and Mathematics with Applications 67 (4), 818-829


