Post on 08-Feb-2017
transcript
Low-rank tensor methods for PDEs with uncertain coefficients and
Bayesian update surrogate
Alexander Litvinenko
Center for Uncertainty Quantification
http://sri-uq.kaust.edu.sa/
Extreme Computing Research Center, KAUST
The structure of the talk
Part I (Stochastic forward problem):
1. Motivation
2. Elliptic PDE with uncertain coefficients
3. Discretization and low-rank tensor approximations
4. Tensor calculus to compute QoI

Part II (Bayesian update):
1. Bayesian update surrogate
2. Examples
KAUST
I received very rich collaboration experience as a co-organizer of:
- 3 UQ workshops,
- 2 Scalable Hierarchical Algorithms for eXtreme Computing (SHAXC) workshops,
- 1 HPC Conference (www.hpcsaudi.org, 2017).
My interests and collaborations
Motivation to do Uncertainty Quantification (UQ)
Motivation: there is an urgent need to quantify and reduce the uncertainty in output quantities of computer simulations within complex (multiscale-multiphysics) applications.
Typical challenges: classical sampling methods are often very inefficient, whereas straightforward functional representations are subject to the well-known Curse of Dimensionality.
My goal is the systematic, mathematically founded development of UQ methods and low-rank algorithms relevant for applications.
UQ and its relevance
Nowadays computational predictions are used in critical engineering decisions, and thanks to modern computers we are able to simulate very complex phenomena. But how reliable are these predictions? Can they be trusted?
Example: Saudi Aramco currently has a simulator, GigaPOWERS, which runs with 9 billion cells. How sensitive are the simulation results with respect to the unknown reservoir properties?
Part I: Stochastic forward problem
Part I: Stochastic Galerkin method to solve an elliptic PDE with uncertain coefficients
PDE with uncertain coefficient and RHS
Consider
− div(κ(x, ω)∇u(x, ω)) = f(x, ω) in G × Ω, G ⊂ R²,
u = 0 on ∂G,  (1)
where κ(x, ω) is an uncertain diffusion coefficient. Since κ is positive, usually κ(x, ω) = e^{γ(x,ω)}.
For well-posedness see [Sarkis 09, Gittelson 10, H.-J. Starkloff 11, Ullmann 10].
Further we will assume that covκ(x, y) is given.
My previous work
After applying the stochastic Galerkin method, we obtain Ku = f, where all ingredients are represented in a tensor format.

Compute max u, var(u), level sets of u, sign(u):
[1] Efficient Analysis of High Dimensional Data in Tensor Formats, Espig, Hackbusch, A.L., Matthies and Zander, 2012.

Research which ingredients influence the tensor rank of K:
[2] Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats, Wahnert, Espig, Hackbusch, A.L., Matthies, 2013.

Approximate κ(x, ω) and the stochastic Galerkin operator K in the Tensor Train (TT) format, solve for u, postprocessing:
[3] Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format, Dolgov, Litvinenko, Khoromskij, Matthies, 2016.
Typical quantities of interest
Keeping all input and intermediate data in a tensor representation, one wants to perform different tasks:
- evaluation for specific parameters (ω1, . . . , ωM),
- finding maxima and minima,
- finding 'level sets' (needed for histogram and probability density).
Example of a level set: all elements of a high-dimensional tensor from the interval [0.7, 0.8].
Canonical and Tucker tensor formats
Definition and Examples of tensors
Canonical and Tucker tensor formats
[Pictures are taken from B. Khoromskij and A. Auer lecture course]
Storage: O(n^d) → O(dRn) (canonical) and O(R^d + dRn) (Tucker).
Definition of tensor of order d
A tensor of order d is a multidimensional array over a d-tuple index set I = I1 × · · · × Id,
A = [ai1...id : iℓ ∈ Iℓ] ∈ R^I, Iℓ = {1, . . . , nℓ}, ℓ = 1, . . . , d.
A is an element of the linear space
Vn = ⊗_{ℓ=1}^d Vℓ, Vℓ = R^{Iℓ},
equipped with the Euclidean scalar product 〈·, ·〉 : Vn × Vn → R, defined as
〈A, B〉 := ∑_{(i1...id)∈I} ai1...id bi1...id, for A, B ∈ Vn.
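As a concrete illustration, the Euclidean scalar product above is just the entrywise sum of products; a minimal numpy sketch (the sizes are arbitrary):

```python
import numpy as np

# Order-3 tensors A, B over I = I1 x I2 x I3 and the Euclidean scalar
# product <A, B> = sum over all multi-indices i of a_i * b_i.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5, 6))
B = rng.standard_normal((4, 5, 6))

inner = np.sum(A * B)           # <A, B> as defined above
norm = np.sqrt(np.sum(A * A))   # induced Euclidean norm of A

# It coincides with the ordinary dot product of the vectorized tensors.
assert np.isclose(inner, A.ravel() @ B.ravel())
```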
Examples of rank-1 and rank-2 tensors
Rank-1: f(x1, . . . , xd) = exp(f1(x1) + · · · + fd(xd)) = ∏_{j=1}^d exp(fj(xj)).

Rank-2: f(x1, . . . , xd) = sin(∑_{j=1}^d xj), since
2i · sin(∑_{j=1}^d xj) = e^{i ∑_{j=1}^d xj} − e^{−i ∑_{j=1}^d xj}.

The rank-d function f(x1, . . . , xd) = x1 + x2 + · · · + xd can be approximated by a rank-2 one with any prescribed accuracy:
f ≈ (1/ε) ∏_{j=1}^d (1 + εxj) − (1/ε) ∏_{j=1}^d 1 + O(ε), as ε → 0.
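The last approximation is easy to check numerically; a small sketch (dimension and sample points chosen arbitrarily):

```python
import numpy as np

# Check: f(x) = x1 + ... + xd is approximated by the rank-2 expression
# (prod_j (1 + eps*x_j) - 1) / eps with error O(eps).
rng = np.random.default_rng(1)
d = 10
x = rng.uniform(-1.0, 1.0, d)

exact = x.sum()
errors = [abs((np.prod(1.0 + eps * x) - 1.0) / eps - exact)
          for eps in (1e-2, 1e-4, 1e-6)]
print(errors)  # the error shrinks as eps decreases
```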
Tensor and Matrices
Rank-1 tensor:
A = u1 ⊗ u2 ⊗ · · · ⊗ ud =: ⊗_{µ=1}^d uµ, Ai1,...,id = (u1)i1 · . . . · (ud)id.
A rank-1 tensor A = u ⊗ v corresponds to the matrix A = uv^T (or A = vu^T), u ∈ R^n, v ∈ R^m; a rank-k tensor A = ∑_{i=1}^k ui ⊗ vi corresponds to the matrix A = ∑_{i=1}^k ui vi^T.
The Kronecker product of n × n and m × m matrices is a new block matrix A ⊗ B ∈ R^{nm×nm}, whose ij-th block is [Aij B].
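The Kronecker-product definition can be illustrated directly with numpy's `np.kron` (sizes arbitrary):

```python
import numpy as np

# A (x) B is the nm x nm block matrix whose (i, j) block is A[i, j] * B.
rng = np.random.default_rng(2)
n, m = 3, 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))

K = np.kron(A, B)

# The (i, j) block of K equals A[i, j] * B.
i, j = 1, 2
block = K[i * m:(i + 1) * m, j * m:(j + 1) * m]
assert np.allclose(block, A[i, j] * B)
print(K.shape)  # (12, 12)
```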
Discretization of elliptic PDE
Now let us discretize our diffusion equation with uncertain coefficients.
Karhunen-Loève and Polynomial Chaos Expansions
Apply both:
Karhunen-Loève Expansion (KLE):
κ(x, ω) = κ0(x) + ∑_{j=1}^∞ κj gj(x) ξj(θ(ω)), where θ = θ(ω) = (θ1(ω), θ2(ω), . . .),
ξj(θ) = (1/κj) ∫_G (κ(x, ω) − κ0(x)) gj(x) dx.

Polynomial Chaos Expansion (PCE):
κ(x, ω) = ∑_α κ^(α)(x) Hα(θ); compute ξj(θ) = ∑_{α∈J} ξj^(α) Hα(θ),
where ξj^(α) = (1/κj) ∫_G κ^(α)(x) gj(x) dx.

Further compute ξj^(α) ≈ ∑_{ℓ=1}^s (ξℓ)j ∏_{k=1}^∞ (ξℓ,k)^{αk}.
Final discretized stochastic PDE
Ku = f, where
K := ∑_{ℓ=1}^s Kℓ ⊗ ⊗_{µ=1}^M ∆ℓµ, Kℓ ∈ R^{N×N}, ∆ℓµ ∈ R^{Rµ×Rµ},
u := ∑_{j=1}^r uj ⊗ ⊗_{µ=1}^M ujµ, uj ∈ R^N, ujµ ∈ R^{Rµ},
f := ∑_{k=1}^R fk ⊗ ⊗_{µ=1}^M gkµ, fk ∈ R^N and gkµ ∈ R^{Rµ}.
(Wahnert, Espig, Hackbusch, Litvinenko, Matthies, 2011)
Examples of stochastic Galerkin matrices:
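The point of this Kronecker structure is that K never has to be assembled: it acts factor-wise on rank-1 vectors. A minimal numpy sketch with hypothetical small sizes (s = 2 terms, M = 2 stochastic factors), checked against the dense operator:

```python
import numpy as np

rng = np.random.default_rng(3)
N, R, s = 3, 2, 2
Ks = [rng.standard_normal((N, N)) for _ in range(s)]   # K_l
D1 = [rng.standard_normal((R, R)) for _ in range(s)]   # Delta_{l,1}
D2 = [rng.standard_normal((R, R)) for _ in range(s)]   # Delta_{l,2}
u0 = rng.standard_normal(N)                            # rank-1 input u
u1 = rng.standard_normal(R)
u2 = rng.standard_normal(R)

# Low-rank application: factor-wise matvecs; the result has s rank-1 terms.
terms = [(Ks[l] @ u0, D1[l] @ u1, D2[l] @ u2) for l in range(s)]

# Dense check, using (A (x) B (x) C)(a (x) b (x) c) = Aa (x) Bb (x) Cc.
K = sum(np.kron(np.kron(Ks[l], D1[l]), D2[l]) for l in range(s))
u = np.kron(np.kron(u0, u1), u2)
v = sum(np.kron(np.kron(a, b), c) for a, b, c in terms)
assert np.allclose(K @ u, v)
```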
Computing QoI in low-rank tensor format
Now, we consider how to find maxima in a high-dimensional tensor.
Maximum norm and corresponding index
Let u = ∑_{j=1}^r ⊗_{µ=1}^d ujµ ∈ Rr; compute
‖u‖∞ := max_{i=(i1,...,id)∈I} |ui| = max_{i=(i1,...,id)∈I} |∑_{j=1}^r ∏_{µ=1}^d (ujµ)iµ|.
Computing ‖u‖∞ is equivalent to the following eigenvalue problem.
Let i∗ := (i∗1, . . . , i∗d) ∈ I, #I = ∏_{µ=1}^d nµ. Then
‖u‖∞ = |ui∗| = |∑_{j=1}^r ∏_{µ=1}^d (ujµ)i∗µ| and e(i∗) := ⊗_{µ=1}^d ei∗µ,
where ei∗µ ∈ R^{nµ} is the i∗µ-th canonical vector in R^{nµ} (µ ∈ N≤d).
Then
u ⊙ e(i∗) = (∑_{j=1}^r ⊗_{µ=1}^d ujµ) ⊙ (⊗_{µ=1}^d ei∗µ)
= ∑_{j=1}^r ⊗_{µ=1}^d (ujµ ⊙ ei∗µ)
= ∑_{j=1}^r ⊗_{µ=1}^d [(ujµ)i∗µ ei∗µ]
= (∑_{j=1}^r ∏_{µ=1}^d (ujµ)i∗µ) ⊗_{µ=1}^d ei∗µ = ui∗ e(i∗),
where ⊙ denotes the pointwise (Hadamard) product.
Thus, we obtained an “eigenvalue problem”:
u ⊙ e(i∗) = ui∗ e(i∗).
Computing ‖u‖∞, u ∈ Rr by vector iteration
By defining the diagonal matrix
D(u) := ∑_{j=1}^r ⊗_{µ=1}^d diag((ujµ)ℓµ)_{ℓµ∈N≤nµ}  (2)
with representation rank r, we obtain D(u)v = u ⊙ v.
Now apply the well-known vector iteration method (with rank truncation) to
D(u)e(i∗) = ui∗ e(i∗)
and obtain ‖u‖∞.
[Approximate iteration: Khoromskij, Hackbusch, Tyrtyshnikov 05] and [Espig, Hackbusch 2010]
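On a small dense tensor the vector iteration can be checked directly: iterating v ↦ u ⊙ u ⊙ v concentrates v on the entry of largest magnitude. This is only a sketch; it omits the rank truncation that makes the method feasible in the low-rank setting:

```python
import numpy as np

rng = np.random.default_rng(4)
u = rng.standard_normal((5, 6, 7))   # dense stand-in for a low-rank tensor

v = np.ones_like(u)
for _ in range(100):
    v = u * u * v                    # apply D(u)^2, targeting max |u_i|
    v /= np.linalg.norm(v)           # normalize (rank truncation omitted)

# v concentrates on the maximizing multi-index i*.
i_star = np.unravel_index(np.argmax(np.abs(v)), u.shape)
max_norm = abs(u[i_star])
assert np.isclose(max_norm, np.abs(u).max())
print(i_star, max_norm)
```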
How to compute the mean value in CP format
Let u = ∑_{j=1}^r ⊗_{µ=1}^d ujµ ∈ Rr; then the mean value ū can be computed as a scalar product:
ū = 〈∑_{j=1}^r ⊗_{µ=1}^d ujµ, ⊗_{µ=1}^d (1/nµ)1µ〉 = ∑_{j=1}^r ⊗_{µ=1}^d 〈ujµ, 1µ〉/nµ  (3)
  = ∑_{j=1}^r ∏_{µ=1}^d (1/nµ)(∑_{k=1}^{nµ} (ujµ)k),  (4)
where 1µ := (1, . . . , 1)^T ∈ R^{nµ}.
The numerical cost is O(r · ∑_{µ=1}^d nµ).
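In numpy the factor-wise mean is a few lines; here `factors[mu][j]` plays the role of u_{jµ} (sizes hypothetical), and the result is checked against the dense mean:

```python
import numpy as np

rng = np.random.default_rng(5)
r, dims = 3, (4, 5, 6)
factors = [rng.standard_normal((r, n)) for n in dims]   # u_{j,mu} as rows

# Mean = sum_j prod_mu (1/n_mu) sum_k (u_{j,mu})_k, cost O(r * sum n_mu).
mean_cp = sum(np.prod([f[j].mean() for f in factors]) for j in range(r))

# Dense check: assemble the full tensor and average all entries.
U = np.einsum('ja,jb,jc->abc', *factors)
assert np.isclose(mean_cp, U.mean())
```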
Numerical Experiments
2D L-shape domain, N = 557 dofs. The total stochastic dimension is Mu = Mk + Mf = 20; there are |J| = 231 PCE coefficients:
u = ∑_{j=1}^{231} uj,0 ⊗ ⊗_{µ=1}^{20} ujµ ∈ R^557 ⊗ ⊗_{µ=1}^{20} R^3.
Level sets
Now we compute the level sets
sign(b‖u‖∞1 − u) for b ∈ {0.2, 0.4, 0.6, 0.8}.
- Tensor u has 3^20 · 557 ≈ 2·10^12 entries ≈ 16 TB of memory.
- The computing time of one level set was 10 minutes.
- Intermediate ranks of sign(b‖u‖∞1 − u) and rank(uk) were less than 24.
Part II
Part II: Bayesian update
We will speak about the Gauss-Markov-Kalman filter for the Bayesian updating of parameters in a computational model.
Mathematical setup
Consider
K(u; q) = f ⇒ u = S(f; q),
where S is the solution operator. The operator depends on parameters q ∈ Q; hence the state u ∈ U is also a function of q.
Measurement operator Y with values in Y:
y = Y(q; u) = Y(q, S(f; q)).
Examples of measurements: y(ω) = ∫_{D0} u(ω, x) dx, or u at a few points.
Random QoI
With the state u a RV, the quantity to be measured,
y(ω) = Y(q(ω), u(ω)),
is also uncertain, a random variable.
Noisy data: y + ε(ω), where y is the “true” value and ε a random error.
Forecast of the measurement: z(ω) = y(ω) + ε(ω).
Conditional probability and expectation
Classically, Bayes's theorem gives the conditional probability
P(Iq|Mz) = (P(Mz|Iq)/P(Mz)) P(Iq)  (or πq(q|z) = (p(z|q)/Zs) pq(q));
expectation with this posterior measure is the conditional expectation.
Kolmogorov starts from the conditional expectation E(·|Mz); from this, conditional probability via P(Iq|Mz) = E(χIq|Mz).
Conditional expectation
The conditional expectation is defined as the orthogonal projection onto the closed subspace L²(Ω, P, σ(z)):
E(q|σ(z)) := PQ∞ q = argmin_{q̃∈L²(Ω,P,σ(z))} ‖q − q̃‖²_{L²}.
The subspace Q∞ := L²(Ω, P, σ(z)) represents the available information.
The update, also called the assimilated value, qa(ω) := PQ∞ q = E(q|σ(z)), is a Q-valued RV and represents the new state of knowledge after the measurement.
Doob-Dynkin: Q∞ = {ϕ ∈ Q : ϕ = φ ∘ z, φ measurable}.
Numerical computation of NLBU
Look for ϕ such that q(ξ) = ϕ(z(ξ)), z(ξ) = y(ξ) + ε(ω):
ϕ ≈ ϕ̃ = ∑_{α∈Jp} ϕα Φα(z(ξ)),
and minimize ‖q(ξ) − ϕ̃(z(ξ))‖²_{L²}, where the Φα are polynomials (e.g. Hermite, Laguerre, Chebyshev or something else).
Taking derivatives with respect to ϕα:
(∂/∂ϕα) 〈q(ξ) − ϕ̃(z(ξ)), q(ξ) − ϕ̃(z(ξ))〉 = 0  ∀α ∈ Jp.
Inserting the representation for ϕ̃, we obtain:
Numerical computation of NLBU
(∂/∂ϕα) E[q²(ξ) − 2 ∑_{β∈J} q ϕβ Φβ(z) + ∑_{β,γ∈J} ϕβ ϕγ Φβ(z) Φγ(z)]
= 2 E[−q Φα(z) + ∑_{β∈J} ϕβ Φβ(z) Φα(z)]
= 2 (∑_{β∈J} E[Φβ(z) Φα(z)] ϕβ − E[q Φα(z)]) = 0  ∀α ∈ J.
Numerical computation of NLBU
Now, rewriting the last sum in matrix form, we obtain the linear system of equations (matrix =: A) for the coefficients ϕβ:
[ · · · E[Φα(z(ξ)) Φβ(z(ξ))] · · · ] (. . . ϕβ . . .)^T = (. . . E[q(ξ) Φα(z(ξ))] . . .)^T,
where α, β ∈ J and A is of size |J| × |J|.
Numerical computation of NLBU
We can rewrite the system above in compact form:
[Φ] [diag(. . . wi . . .)] [Φ]^T (. . . ϕβ . . .)^T = [Φ] (w0 q(ξ0), . . . , wN q(ξN))^T,
where [Φ] ∈ R^{|J|×N} and [diag(. . . wi . . .)] ∈ R^{N×N}.
Solving this system, we obtain the vector of coefficients (. . . ϕβ . . .)^T for all β.
Finally, the assimilated parameter qa will be
qa = qf + ϕ(y) − ϕ(z),  (5)
z(ξ) = y(ξ) + ε(ω), ϕ = ∑_{β∈Jp} ϕβ Φβ(z(ξ)).
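A toy scalar sketch of this construction (all numbers hypothetical): the Gram matrix E[ΦαΦβ] and right-hand side E[qΦα] are estimated by Monte Carlo, the system is solved for the ϕβ, and the assimilated ensemble follows formula (5). With a Gaussian prior and a linear observation, the result should essentially reproduce the Kalman update.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20000
q = rng.normal(1.0, 1.0, n)             # prior / forecast ensemble q_f
y = q                                   # measurement operator: observe q
z = y + rng.normal(0.0, 0.5, n)         # forecast of the noisy measurement

P = np.vander(z, 4, increasing=True)    # basis Phi_b(z) = z**b, b = 0..3
G = P.T @ P / n                         # E[Phi_a(z) Phi_b(z)]
rhs = P.T @ q / n                       # E[q Phi_a(z)]
phi = np.linalg.solve(G, rhs)           # surrogate coefficients phi_b

y_obs = 2.0                             # hypothetical observed data point
qa = q + np.polyval(phi[::-1], y_obs) - np.polyval(phi[::-1], z)

print(qa.mean(), qa.var())              # near the Kalman result (1.8, 0.2)
```

Here the quadratic and cubic coefficients of ϕ come out near zero, as they should for a linear-Gaussian problem; the same code with a nonlinear measurement operator yields a genuinely nonlinear update.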
Example: Lorenz 1963 problem (chaotic system of ODEs)
ẋ = σ(ω)(y − x),
ẏ = x(ρ(ω) − z) − y,
ż = xy − β(ω)z.
The initial state q0(ω) = (x0(ω), y0(ω), z0(ω)) is uncertain.
Solving in t0, t1, . . . , t10, noisy measurement → UPDATE; solving in t11, t12, . . . , t20, noisy measurement → UPDATE; . . .
IDEA of the Bayesian Update (BU): take qf(ω) = q0(ω).
Linear BU: qa = qf + K · (z − y).
Nonlinear BU: qa = qf + H1 · (z − y) + (z − y)^T · H2 · (z − y).
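For reference, a plain RK4 integration of the Lorenz-1963 system with the classical deterministic parameters (in the talk, σ, ρ, β and the initial state are random):

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Right-hand side of the Lorenz-1963 system.
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    # One classical fourth-order Runge-Kutta step.
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 1.0])
dt = 0.01
traj = [state]
for _ in range(1000):                 # integrate to t = 10
    state = rk4_step(lorenz, state, dt)
    traj.append(state)
traj = np.array(traj)
print(traj[-1])
```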
Trajectories of x, y and z in time. After each update (new information coming in) the uncertainty drops. [O. Pajonk, B. V. Rosic, A. Litvinenko, and H. G. Matthies, 2012]
Example: Lorenz problem
Figure: quadratic BU surrogate, measuring the state (x(t), y(t), z(t)). Prior and posterior after one update. (Panels: densities of x, y, z; curves xf/xa, yf/ya, zf/za.)
Example: Lorenz Problem
Figure: Comparison of the posterior densities computed by linear and quadratic BU after the second update. (Panels: x, y, z; curves x1/x2, y1/y2, z1/z2.)
Example: Lorenz Problem
Figure: Quadratic measurement (x(t)², y(t)², z(t)²): comparison of the prior and the posterior for NLBU. (Panels: densities of x, y, z; curves xf/xa, yf/ya, zf/za.)
Example: 1D elliptic PDE with uncertain coeffs
−∇ · (κ(x, ξ)∇u(x, ξ)) = f(x, ξ), x ∈ [0, 1],
plus Dirichlet random b.c. g(0, ξ) and g(1, ξ).
3 measurements: u(0.3) = 22 with s.d. 0.2, u(0.5) = 28 with s.d. 0.3, u(0.8) = 18 with s.d. 0.3.
- κ(x, ξ): N = 100 dofs, M = 5, number of KLE terms 35, beta distribution for κ, Gaussian covκ, cov. length 0.1, multivariate Hermite polynomials of order pκ = 2;
- RHS f(x, ξ): Mf = 5, number of KLE terms 40, beta distribution for f, exponential covf, cov. length 0.03, multivariate Hermite polynomials of order pf = 2;
- b.c. g(x, ξ): Mg = 2, number of KLE terms 2, normal distribution for g, Gaussian covg, cov. length 10, multivariate Hermite polynomials of order pg = 1;
- pφ = 3 and pu = 3.
Example: updating of the solution u
Figure: Original and updated solutions; mean value plus/minus 1, 2, 3 standard deviations.
[Graphics are built with the stochastic Galerkin library sglib, written by E. Zander at TU Braunschweig.]
Example: Updating of the parameter
Figure: Original and updated parameter κ.
Future plans and possible collaboration
Future plans and possible collaboration ideas
Future plans, Idea N1
Possible collaboration work with Troy Butler: to develop a low-rank adaptive goal-oriented Bayesian update technique. The solution of the forward and inverse problems will be considered as a whole adaptive process, controlled by error/uncertainty estimators.
(Diagram: forward solve f → y, forecast measurement z, innovation (y − z) driving the update of q, iterated; components labeled "low-rank and adaptive": stochastic forward problem, spatial discretization, stochastic discretization, low-rank approximation, inverse problem, errors, inverse operator approximation.)
Future plans, Idea N2
Edge between Green's functions in PDEs and covariance matrices.
Possible collaboration with a statistical group: Doug Nychka (NCAR), Havard Rue.
Future plans, Idea N3
Data assimilation techniques, Bayesian update surrogate. Develop a non-linear, non-Gaussian Bayesian update approximation for gPCE coefficients.
Possible collaboration with Jan Mandel, Troy Butler, Kody Law, Y. Marzouk, H. Najm, TU Braunschweig and KAUST.
Collaborators
1. Uncertainty quantification and Bayesian update: Prof. H. Matthies, Bojana V. Rosic, Elmar Zander, Oliver Pajonk from TU Braunschweig, Germany.
2. Low-rank tensor calculus: Mike Espig from RWTH Aachen, Boris and Venera Khoromskij from MPI Leipzig.
3. Spatial and environmental statistics: Marc Genton, Ying Sun, Raphael Huser, Brian Reich, Ben Shaby and David Bolin.
4. Some others: UQ, data assimilation, high-dimensional problems/statistics.
Conclusion
- Introduced low-rank tensor methods to solve elliptic PDEs with uncertain coefficients,
- Explained how to compute the maximum, the mean, level sets, . . . in a low-rank tensor format,
- Derived the Bayesian update surrogate ϕ (as a linear, quadratic, cubic, etc. approximation), i.e. computed the conditional expectation of q given the measurement y.
Example: Canonical rank d , whereas TT rank 2
The d-Laplacian over a uniform tensor grid is known to have the Kronecker rank-d representation
∆d = A⊗IN⊗· · ·⊗IN + IN⊗A⊗· · ·⊗IN + · · · + IN⊗IN⊗· · ·⊗A ∈ R^{N^d×N^d},  (6)
with A = ∆1 = tridiag(−1, 2, −1) ∈ R^{N×N} and IN the N × N identity. Notice that the canonical rank is rankC(∆d) = d, while the TT-rank of ∆d equals 2 for any dimension, due to the explicit representation
∆d = (∆1  I) × [[I, 0], [∆1, I]] × · · · × [[I, 0], [∆1, I]] × (I; ∆1),  (7)
where the rank product operation "×" is defined as a regular matrix product of the two corresponding core matrices, their blocks being multiplied by means of the tensor product. A similar bound holds for the Tucker rank: rankTuck(∆d) = 2.
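The Kronecker-sum representation (6) is easy to verify for a small case, say d = 3: a separable product of eigenvectors of ∆1 is an eigenvector of ∆d with the summed eigenvalue.

```python
import numpy as np

N = 4
A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # tridiag(-1, 2, -1)
I = np.eye(N)

# Canonical rank-3 (Kronecker-sum) 3D Laplacian.
L3 = (np.kron(np.kron(A, I), I)
      + np.kron(np.kron(I, A), I)
      + np.kron(np.kron(I, I), A))

# Separable eigenvectors: L3 (v1 (x) v2 (x) v3) = (w1 + w2 + w3) v1 (x) v2 (x) v3.
w, V = np.linalg.eigh(A)
v = np.kron(np.kron(V[:, 0], V[:, 1]), V[:, 2])
assert np.allclose(L3 @ v, (w[0] + w[1] + w[2]) * v)
```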
Advantages and disadvantages
Denote by k the rank, d the dimension, n = # dofs in 1D:
1. CP: ill-posed approximation algorithm, O(dnk), hard to compute approximations.
2. Tucker: reliable arithmetic based on SVD, O(dnk + k^d).
3. Hierarchical Tucker: based on SVD, storage O(dnk + dk³), truncation O(dnk² + dk⁴).
4. TT: based on SVD, O(dnk²) or O(dnk³), stable.
5. Quantics-TT: O(n^d) → O(d log_q n).
How to compute the variance in CP format
Let u ∈ Rr and
ũ := u − ū ⊗_{µ=1}^d 1µ = ∑_{j=1}^{r+1} ⊗_{µ=1}^d ũjµ ∈ R(r+1);  (8)
then the variance var(u) of u can be computed as follows:
var(u) = 〈ũ, ũ〉 / ∏_{µ=1}^d nµ
       = (1/∏_{µ=1}^d nµ) 〈∑_{i=1}^{r+1} ⊗_{µ=1}^d ũiµ, ∑_{j=1}^{r+1} ⊗_{ν=1}^d ũjν〉
       = ∑_{i=1}^{r+1} ∑_{j=1}^{r+1} ∏_{µ=1}^d (1/nµ) 〈ũiµ, ũjµ〉.
The numerical cost is O((r+1)² · ∑_{µ=1}^d nµ).
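A numpy sketch of the factor-wise variance (hypothetical sizes), checked against the dense variance: centering appends one rank-1 term, and the double sum over factor Gram matrices realizes the formula above.

```python
import numpy as np

rng = np.random.default_rng(7)
r, dims = 3, (4, 5, 6)
factors = [rng.standard_normal((r, n)) for n in dims]   # u_{j,mu} as rows

# Factor-wise mean (see the mean-value slide).
mean_cp = sum(np.prod([f[j].mean() for f in factors]) for j in range(r))

# Centered CP tensor: append the rank-1 term -mean * 1 (x) 1 (x) 1.
cfactors = [np.vstack([f, np.ones((1, n))]) for f, n in zip(factors, dims)]
cfactors[0][r] *= -mean_cp

# var(u) = sum_{i,j} prod_mu <u~_{i,mu}, u~_{j,mu}> / n_mu.
grams = [f @ f.T / n for f, n in zip(cfactors, dims)]
var_cp = (grams[0] * grams[1] * grams[2]).sum()

U = np.einsum('ja,jb,jc->abc', *factors)   # dense check
assert np.isclose(var_cp, U.var())
```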
Computing QoI in low-rank tensor format
Now, we consider how to find 'level sets', for instance, all entries of the tensor u from the interval [a, b].
Definitions of characteristic and sign functions
1. To compute level sets and frequencies we need the characteristic function.
2. To compute the characteristic function we need the sign function.
The characteristic χI(u) ∈ T of u ∈ T in I ⊂ R is, for every multi-index i ∈ I, pointwise defined as
(χI(u))i := 1 if ui ∈ I, 0 if ui ∉ I.
Furthermore, sign(u) ∈ T is for all i ∈ I pointwise defined by
(sign(u))i := 1 if ui > 0; −1 if ui < 0; 0 if ui = 0.
sign(u) is needed for computing χI(u)
Lemma. Let u ∈ T, a, b ∈ R, and 1 = ⊗_{µ=1}^d 1µ, where 1µ := (1, . . . , 1)^T ∈ R^{nµ}.
(i) If I = R<b, then χI(u) = ½(1 + sign(b1 − u)).
(ii) If I = R>a, then χI(u) = ½(1 − sign(a1 − u)).
(iii) If I = (a, b), then χI(u) = ½(sign(b1 − u) − sign(a1 − u)).
We compute sign(u), u ∈ Rr, via a hybrid Newton-Schulz iteration with rank truncation after each iteration.
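The Newton-Schulz iteration for the elementwise sign is, on a dense stand-in (the rank truncation used in the low-rank setting is omitted):

```python
import numpy as np

rng = np.random.default_rng(8)
u = rng.standard_normal((4, 5, 6))

x = u / np.abs(u).max()          # scale entries into [-1, 1]
for _ in range(60):
    x = 0.5 * x * (3.0 - x * x)  # Newton-Schulz step x <- x(3 - x^2)/2;
                                 # in the low-rank setting, truncate here
assert np.allclose(x, np.sign(u), atol=1e-8)
```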
Level Set, Frequency
Definition (Level Set, Frequency). Let I ⊂ R and u ∈ T. The level set LI(u) ∈ T of u with respect to I is pointwise defined by
(LI(u))i := ui if ui ∈ I, 0 if ui ∉ I, for all i ∈ I.
The frequency FI(u) ∈ N of u with respect to I is defined as
FI(u) := # supp χI(u).
Computation of level sets and frequency
Proposition. Let I ⊂ R, u ∈ T, and χI(u) its characteristic. We have
LI(u) = χI(u) ⊙ u
and rank(LI(u)) ≤ rank(χI(u)) · rank(u). The frequency FI(u) ∈ N of u with respect to I is
FI(u) = 〈χI(u), 1〉,
where 1 = ⊗_{µ=1}^d 1µ, 1µ := (1, . . . , 1)^T ∈ R^{nµ}.
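Putting the pieces together on a dense stand-in: characteristic function via the sign formulas, level set via the Hadamard product, frequency via the scalar product with 1.

```python
import numpy as np

rng = np.random.default_rng(9)
u = rng.uniform(0.0, 1.0, (4, 5, 6))
a, b = 0.7, 0.8

chi = 0.5 * (np.sign(b - u) - np.sign(a - u))   # chi_(a,b)(u), entrywise
level_set = chi * u                             # L_I(u) = chi_I(u) o u
frequency = int(chi.sum())                      # F_I(u) = <chi_I(u), 1>

inside = (u > a) & (u < b)
assert frequency == np.count_nonzero(inside)
assert np.allclose(level_set, np.where(inside, u, 0.0))
```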