Max-Planck-Institut für Mathematik

in den Naturwissenschaften

Leipzig

Application of hierarchical matrices for

computing the Karhunen-Loève expansion

by

Boris N. Khoromskij, Alexander Litvinenko, and Hermann G. Matthies

Preprint no.: 81, 2008


Application of hierarchical matrices for computing the Karhunen-Loève expansion

B. N. Khoromskij¹, A. Litvinenko² and H. G. Matthies²

¹ Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig, Germany
² Institut für Wissenschaftliches Rechnen, Braunschweig, Germany

Abstract

Realistic mathematical models of physical processes contain uncertainties. These models are often described by stochastic differential equations (SDEs) or stochastic partial differential equations (SPDEs) with multiplicative noise. The uncertainties in the right-hand side or in the coefficients are represented as random fields. To solve a given SPDE numerically one has to discretise the deterministic operator as well as the stochastic fields. The total dimension of the SPDE is the product of the dimensions of the deterministic part and the stochastic part. To approximate random fields with as few random variables as possible, while still retaining the essential information, the Karhunen-Loève expansion (KLE) becomes important. The KLE of a random field requires the solution of a large eigenvalue problem. Usually it is solved by a Krylov subspace method with a sparse matrix approximation. We demonstrate the use of sparse hierarchical matrix techniques for this. A log-linear computational cost of the matrix-vector product and a log-linear storage requirement yield an efficient and fast discretisation of the random fields presented.

AMS subject classification: 60H15, 60H35, 65N25

Key words: hierarchical matrix, data-sparse approximation, Karhunen-Loève expansion, uncertainty quantification, random fields, eigenvalue computation.

1 Introduction

During the last few years there has been great interest in numerical methods for solving stochastic PDEs and ODEs [10, 2, 3, 25, 39, 37, 35, 38]. Examples are the stochastic Navier-Stokes equations, stochastic plasticity equations and stochastic aerodynamics equations. Very often these equations contain parameters, right-hand sides, initial or boundary conditions which have a stochastic nature.
Typical examples are conductivity coefficients in groundwater flow problems, plasticity of the material, and parameters in turbulence modelling. To solve the problem, the given stochastic differential or integral equation has to be discretised. For the discretisation of the deterministic part one can use any known technique (finite elements, finite differences or finite volumes). For the discretisation of random fields the Karhunen-Loève expansion (KLE) [28] is usually used. Another important application of the KLE is the direct computation of higher-order moments of the solution without computing the solution per se [29, 36]. Each random field is characterised by its

Correspondence: A. Litvinenko, Institut für Wissenschaftliches Rechnen, Hans-Sommer-Str. 65, 38106 Braunschweig, Germany


covariance function. To discretise this random field one has to solve an eigenproblem for a Fredholm integral operator with the covariance function as the kernel. In a straightforward discretisation the matrix is dense, and hence the computational cost is O(n³) FLOPS, where n is the number of degrees of freedom (dof) in the computational domain. For special cases, when the covariance function is stationary (i.e. cov(x, y) = cov(x − y)) and the computational domain is an axiparallel rectangle with a uniform and axiparallel triangulation, the Fast Fourier Transform [12] can be applied with computational cost O(n log n). In [23] the authors introduced the so-called Hierarchical Kronecker Tensor (HKT) format for the sparse approximation of integral operators. The matrix-vector product in the HKT format can be done in O(d n^{1/d} log n) FLOPS, where d is the dimension of the domain. For more general covariance matrices, for a non-rectangular domain or for a non-axiparallel triangulation, the FFT is not applicable and a data-sparse technique should be applied (e.g., the H-matrix technique [21, 20, 18, 17]).

In [37] the authors compute the KLE by the Fast Multipole Method with an iterative Krylov eigensolver. In [10] a brief overview is given of how boundary value problems with random data may be solved using the stochastic FEM. In the same paper the authors apply H-matrices and the Lanczos-based thick-restart method [42] for computing the KLE of random fields.

In the current paper we consider the application of the H-matrix method in a systematic way. The rest of this paper is structured as follows. In Section 2 we set up the problem and recall the Karhunen-Loève expansion. The H-matrix technique is presented in Section 3. In particular, we prove the asymptotic smoothness of the arising covariance functions. The H-matrix approximation of covariance functions is shown in Section 4.
Finally, in Section 5, we provide numerical results for solving an eigenproblem with a predefined H-matrix-vector product.

2 Background

Nowadays the trend in numerical mathematics is often to resolve inexact mathematical models by very exact deterministic numerical methods. The reason is that almost every mathematical model contains uncertainties in the coefficients, right-hand side, boundary conditions and initial data, as well as in the geometry. Such uncertainties can be modelled by random fields. In [2, 3, 25, 39, 37, 35, 29] the authors consider the following stochastic elliptic boundary value problem:

$$-\operatorname{div}(\kappa(x,\omega)\nabla u) = f(x,\omega) \ \text{ in } G\times\Omega,\quad G\subset\mathbb{R}^d; \qquad u = g(x,\omega) \ \text{ on } \partial G\times\Omega, \tag{1}$$

where the conductivity coefficient κ(x, ω), the right-hand side f(x, ω), the boundary data g(x, ω) and the solution u(x, ω) are random fields. The computational domain G is bounded, x ∈ G, and ω belongs to the space of random events Ω. To guarantee the positive definiteness and regularity of the operator in (1) it is assumed


that

$$0 < \kappa_{\min} \le \kappa(x,\omega) \le \kappa_{\max} < \infty \quad \text{a.e. on } G\times\Omega.$$

We assume that there is a triplet (Ω, Σ, P), where Ω is the set of random elementary events, Σ is the σ-algebra of Borel subsets of Ω, and P a probability measure. We assume also that the random fields κ(·, ω): Ω → L^∞(G), f(·, ω): Ω → L²(G) and g(·, ω): Ω → L²(∂G) have finite variance.

Let us, as an example, consider the random field κ(x, ω). The mean value κ̄(x) and the covariance function cov_κ(x, y), x, y ∈ ℝ^d, should be provided. By definition, the covariance function is symmetric and positive semi-definite. One can classify all covariance functions into the following three groups:

1. Directionally independent (isotropic) and translation invariant (stationary or homogeneous), i.e. cov(x, y) = cov(|x − y|).
2. Directionally dependent (anisotropic) and stationary or homogeneous, i.e. cov(x, y) = cov(x − y).
3. Instationary and non-homogeneous, i.e. of general type.

Covariance functions of types 1 and 2, discretised on an axiparallel rectangular grid, result in (block) Toeplitz matrices. These matrices can be further extended to (block) circulant ones. The matrix-vector multiplication in the class of (block) circulant matrices can be performed very efficiently by the Fast Fourier Transform (FFT). In the case of a general grid, as well as in the third case, the discretised covariance matrix is not Toeplitz and the FFT cannot be applied. Thus, we need a general data-sparse format to store covariance matrices.

For the numerical solution of (1) the random fields above need to be discretised both in the stochastic and in the spatial dimension. One of the main tools here is the Karhunen-Loève expansion (KLE) [28]. Thus, an effective and "sparse" computation of the KLE is a key point in solving Eq. (1) [31]. Let us define the following operator T, which will be needed for computing the KLE of κ(x, ω):

$$T : L^2(G)\to L^2(G), \qquad (T\phi)(x) := \int_G \mathrm{cov}_\kappa(x,y)\,\phi(y)\,dy.$$

For cov_κ ∈ L²(G × G), the operator T is compact and self-adjoint [40], in fact Hilbert-Schmidt.
As the covariance function cov_κ is symmetric and positive semi-definite, so is T. Thus, the eigenfunctions φ_ℓ of the Fredholm integral equation of the second kind

$$T\phi_\ell = \lambda_\ell\,\phi_\ell, \qquad \phi_\ell\in L^2(G),\ \ell\in\mathbb{N}, \tag{2}$$

are mutually orthogonal and form a basis of L²(G) (for more details see [33, 19]). The eigenvalues λ_ℓ are real and non-negative and can be arranged decreasingly, λ₁ ≥ λ₂ ≥ … ≥ 0 [33]. From Mercer's theorem ([40, 33]) it follows that for a continuous cov_κ the eigenfunctions are continuous, and the convergence of

$$\mathrm{cov}_{\kappa,m}(x,y) = \sum_{\ell=0}^{m-1} \lambda_\ell\,\phi_\ell(x)\,\phi_\ell(y)$$


as m → ∞ to the exact covariance function cov_κ is absolute and uniform on G × G [33]. The convergence rates can be estimated through the smoothness of the covariance function [37].

By definition, the KLE of κ(x, ω) is the series

$$\kappa(x,\omega) = \bar\kappa(x) + \sum_{\ell=1}^{\infty} \sqrt{\lambda_\ell}\,\phi_\ell(x)\,\xi_\ell(\omega), \tag{3}$$

where

$$\bar\kappa(x) = E[\kappa(x,\cdot)], \qquad \xi_\ell(\omega) = \frac{1}{\sqrt{\lambda_\ell}}\int_G \bigl(\kappa(x,\omega)-\bar\kappa(x)\bigr)\,\phi_\ell(x)\,dx;$$

here E[κ(x, ·)] is the mean value of κ(x, ω), λ_ℓ and φ_ℓ are the eigenvalues and eigenfunctions of problem (2), and the ξ_ℓ(ω) are uncorrelated random variables. For numerical purposes one truncates the KLE (3) after a finite number m of terms. In the case of a Gaussian random field, the ξ_ℓ are independent standard normal random variables. In the case of a non-Gaussian random field, the ξ_ℓ are uncorrelated but not independent, and can be approximated in a set of new independent Gaussian random variables [24, 41], e.g.

$$\xi_\ell(\omega) = \sum_{\alpha\in J} \xi_\ell^{(\alpha)}\,H_\alpha(\theta(\omega)),$$

where θ(ω) = (θ₁(ω), θ₂(ω), …), the ξ_ℓ^{(α)} are coefficients, H_α, α ∈ J, is a Hermitian basis, and J := {α | α = (α₁, …, α_j, …), α_j ∈ ℕ₀} is a multi-index set. For the purpose of actual computation one truncates the polynomial chaos expansion (PCE) [24, 41] after finitely many terms, e.g.

$$\alpha\in J_{M,p} := \{\alpha\in J \mid \ell(\alpha)\le M,\ |\alpha|\le p\}, \qquad \ell(\alpha) := \max\{j\in\mathbb{N} \mid \alpha_j>0\}.$$

In [33] it is shown that the m-term KL truncation is best in the Hilbert-Schmidt norm. As soon as the m-term KLE of the conductivity κ(x, ω) is computed and the random variables ξ_ℓ(ω) are discretised [2, 11, 25], one obtains, after applying the stochastic Galerkin approximation method [30] and the truncated PCE, the equation

$$K\mathbf{u} = \Bigl[\,\sum_{\ell=0}^{m-1}\ \sum_{\gamma\in J_{M,p}} \Delta^{(\gamma)}\otimes K_\ell\Bigr]\mathbf{u} = \mathbf{f}, \tag{4}$$

where the Δ^{(γ)} are discrete operators which come from the Hermitian algebra and can be computed analytically [30, 25, 29]. The sparsity pattern of Δ^{(γ)} depends on how many terms were used in the PCE. Note that the matrices K_ℓ ∈ ℝ^{n×n} allow for data-sparse approximations, in particular the hierarchical (H-) matrix approximation.
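The truncated expansion (3) can be illustrated numerically. The sketch below makes several illustrative assumptions not taken from the paper: a 1D domain [0, 1], an exponential covariance, and simple midpoint collocation instead of the Galerkin scheme of Section 2.1. It computes the first m eigenpairs of (2) and then draws one sample of the truncated field:

```python
import numpy as np

def kle_1d(n=200, m=10, corr_len=0.3):
    """Sketch: m-term KLE ingredients for cov(x, y) = exp(-|x-y|/corr_len) on [0, 1].

    The Fredholm eigenproblem (2) is discretised by midpoint collocation,
    so C * h approximates the integral operator T.
    """
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    lam, phi = np.linalg.eigh(C * h)
    idx = np.argsort(lam)[::-1]                      # eigenvalues arranged decreasingly
    return x, lam[idx][:m], phi[:, idx][:, :m] / np.sqrt(h)  # L2-normalised eigenfunctions

x, lam, phi = kle_1d()
# one Gaussian field sample via the truncated KLE (3), xi_l ~ N(0, 1)
xi = np.random.default_rng(0).standard_normal(lam.size)
sample = phi @ (np.sqrt(lam) * xi)                   # kappa(x, omega) minus its mean
```

The dense `eigh` call is only viable for small n; Sections 2.1 and 3 replace it by a Lanczos solver with H-matrix matvecs.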
Note that the iterative solvers used for the solution of (4) do not require that the matrices K_ℓ be stored explicitly. Now one can see that the accuracy of the discretisation of (1) depends on the convergence


rate of κ_m(x, ω) as m → ∞. Thus, a cheap and accurate computation of the KLE approximation of the given random fields is required. In the rest of this paper we combine the H-matrix data representation with Krylov solvers for the efficient computation of the m-term KLEs of the given random fields and of the solution.

2.1 FE discretisation of equation (2)

In the following we use bold font for discretised objects, e.g. u ∈ ℝⁿ or C ∈ ℝ^{n×n}.

In general, the eigenvalue problem (2) needs to be solved numerically, and standard techniques (e.g. [1, 19, 32]) may be used. We consider the following Galerkin discretisation of the operator in (2). Let I = {1, …, n}. Assume that b₁, …, b_n are the nodal basis functions with respect to the nodes x₁, …, x_n ∈ G ⊂ ℝ^d, i.e. b_i(x_j) = δ_ij, i, j ∈ I. Let V_h = span{b₁, …, b_n}, and for the stochastic variables introduce κ = (κ₁, …, κ_n)^T, κ_i(ω) := κ(x_i, ω), i ∈ I.

The interpolation of κ(x, ω) in the FE basis above is then

$$\kappa_h(x,\omega) = \sum_{i=1}^{n} b_i(x)\,\kappa_i(\omega) = \mathbf{b}(x)\,\boldsymbol{\kappa}(\omega), \qquad \mathbf{b}(x) = (b_1(x),\dots,b_n(x)).$$

The covariance function of κ_h is

$$\mathrm{cov}_{\kappa_h}(x,y) = \sum_{i=1}^{n}\sum_{j=1}^{n} b_i(x)\,C_{ij}\,b_j(y) = \mathbf{b}(x)\,C\,\mathbf{b}(y)^T, \qquad C_{ij} = \mathrm{cov}_\kappa(x_i,y_j). \tag{5}$$

Note that this discretisation may use a different grid than the discretisation of the spatial part in (1). Applying (5) and

$$\phi_\ell(y) = \sum_{j=1}^{n} b_j(y)\,\phi_{j\ell} = \mathbf{b}(y)\,\boldsymbol{\phi}_\ell, \qquad \boldsymbol{\phi}_\ell := (\phi_{1\ell},\dots,\phi_{n\ell})^T,$$

to the eigenvalue problem

$$\int_G \mathrm{cov}_\kappa(x,y)\,\phi_\ell(y)\,dy = \lambda_\ell\,\phi_\ell(x), \tag{6}$$

we obtain

$$\int_G \mathbf{b}(x)\,C\,\mathbf{b}(y)^T\mathbf{b}(y)\,\boldsymbol{\phi}_\ell\,dy = \lambda_\ell\,\mathbf{b}(x)\,\boldsymbol{\phi}_\ell.$$

The weak formulation (Galerkin weighting) gives

$$\int_G\int_G \mathbf{b}(x)^T\mathbf{b}(x)\,C\,\mathbf{b}(y)^T\mathbf{b}(y)\,\boldsymbol{\phi}_\ell\,dy\,dx = \int_G \lambda_\ell\,\mathbf{b}(x)^T\mathbf{b}(x)\,\boldsymbol{\phi}_\ell\,dx,$$
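The Galerkin construction above can be sketched in a few lines. The sketch makes illustrative assumptions (1D domain, P1 hat functions on a uniform grid, exponential covariance, dense linear algebra) and assembles M, W = MCM, and the generalised eigenproblem that follows:

```python
import numpy as np
from scipy.linalg import eigh

n = 100
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)

# P1 mass matrix M_ij = \int_0^1 b_i(x) b_j(x) dx on a uniform 1D grid
M = np.zeros((n, n))
i = np.arange(n - 1)
M[i, i] += h / 3.0; M[i + 1, i + 1] += h / 3.0
M[i, i + 1] += h / 6.0; M[i + 1, i] += h / 6.0

C = np.exp(-np.abs(x[:, None] - x[None, :]))   # C_ij = cov(x_i, x_j)
W = M @ C @ M                                   # Galerkin matrix W = M C M

# generalised eigenproblem: W phi = lambda M phi
lam, Phi = eigh(W, M)
lam, Phi = lam[::-1], Phi[:, ::-1]              # eigenvalues arranged decreasingly
```

`scipy.linalg.eigh(W, M)` returns M-orthonormal eigenvectors, matching the weighting of the weak formulation; in the paper's setting C would of course be held in the H-matrix format rather than densely.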


or

$$W\boldsymbol{\phi}_\ell = \lambda_\ell\,M\boldsymbol{\phi}_\ell,$$

where the matrix W and the mass matrix M are defined as

$$W_{ij} := \sum_{k,\mu}\int_G\int_G b_i(x)\,b_k(x)\,C_{k\mu}\,b_j(y)\,b_\mu(y)\,dx\,dy, \qquad M_{ij} := \int_G b_i(x)\,b_j(x)\,dx, \qquad i,j\in I.$$

Recall that the matrix W is symmetric positive semi-definite and dense. The mass matrix M is symmetric positive definite and may be sparse. Now the discrete eigenvalue problem reads

$$W\boldsymbol{\phi}_\ell = \lambda_\ell\,M\boldsymbol{\phi}_\ell, \qquad W = MCM, \qquad C_{ij} = \mathrm{cov}_\kappa(x_i,y_j). \tag{7}$$

Here the matrix M is stored in a usual data-sparse format, and the matrix C is approximated in the H-matrix format (see Section 3). If not the complete spectrum is of interest, but only a part of it, the required computational resources can be reduced drastically [4]. To compute m eigenvalues (m ≪ n) and the corresponding eigenvectors we apply an iterative Krylov subspace (Lanczos) eigenvalue solver for symmetric matrices [27, 42, 4, 26, 34]. This eigensolver requires only matrix-vector multiplications. All matrix-vector multiplications are performed in the H-matrix format, which costs O(n log n). Note that to solve the symmetric problem (7), a third-party eigensolver often requires the user to define the matrix-vector products w = M⁻¹Wv and w = Mv. The same problem can be written in the form

$$CM\boldsymbol{\phi}_i = \lambda_i\,\boldsymbol{\phi}_i, \tag{8}$$

where the product CM is self-adjoint with respect to the new scalar product (φ_i, φ_j)_M := (Mφ_i, φ_j).

3 H-matrix technique

Usually the mass matrix M is stored in a sparse matrix format, which requires linear complexity. The covariance matrix C is not sparse and, in general, requires O(n²) units of memory for storage and O(n²) FLOPS for the matrix-vector multiplication. In this section it will be shown how to approximate general covariance matrices in the H-matrix format [20, 18, 17, 22]. The H-matrix technique is nothing but a hierarchical division of a given matrix into subblocks, followed by the approximation of the majority of them by low-rank matrices (Fig. 2). To define which subblocks can be approximated well by low-rank matrices and which cannot, a so-called admissibility condition is used.
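Because the Lanczos solver touches the matrix only through products v ↦ Cv, an H-matrix (or any other fast) matvec can be hidden behind a matrix-free interface. A minimal sketch with SciPy's `eigsh` (a Lanczos-type solver); the dense matrix here is only an illustrative stand-in for the H-matrix, and the kernel and sizes are assumptions:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 400
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2) / n   # stand-in covariance matrix

matvec_calls = []
def matvec(v):
    matvec_calls.append(1)        # count how often the solver requests C v
    return C @ v                  # a real code would call the O(n log n) H-matrix matvec

T = LinearOperator((n, n), matvec=matvec, dtype=float)
lam, phi = eigsh(T, k=8, which='LM')   # 8 eigenpairs of largest magnitude
idx = np.argsort(lam)[::-1]            # arrange decreasingly
lam, phi = lam[idx], phi[:, idx]
```

The solver never forms C explicitly; only the recorded matvec calls are needed, which is exactly the property exploited with the H-matrix product.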
When the decomposition into subblocks is done, an important question is how to compute the low-rank approximations. For this purpose we propose to use the ACA algorithm [15, 7, 5, 8, 9], which does the job with linear complexity.


3.1 Admissibility conditions

Originally the H-matrix technique was developed for the approximation of stiffness matrices coming from partial differential and integral equations [20, 17, 9]. Typical kernels of integral equations are the following Green functions:

$$\chi(x,y) := \frac{1}{|x-y|^{d-2}},\quad x,y\in\mathbb{R}^d,\ d\ge 3, \qquad\text{or}\qquad \chi(x,y) := \log|x-y|,\quad x,y\in\mathbb{R}^2, \tag{9}$$

with singularities at x = y. The idea behind H-matrices is to approximate blocks far from the diagonal (far from the singularity) by low-rank matrices. The admissibility condition (criterion) is used to divide a given matrix into subblocks and to define which subblocks can be approximated well by low-rank matrices and which cannot. Let us explain how to obtain an admissibility condition for the functions in (9).

Let I be the index set of all degrees of freedom. For each index i ∈ I corresponding to a basis function b_i, denote the support by G_i := supp b_i ⊂ ℝ^d. Now we define two trees which are necessary for the definition of hierarchical matrices. These are labelled trees, where the label of a vertex t is denoted by t̂.

Definition 3.1 (Cluster tree T_I) [20, 17]
A finite tree T_I is a cluster tree over the index set I if the following conditions hold:
- I is the root of T_I, and t̂ ⊆ I holds for all t ∈ T_I.
- If t ∈ T_I is not a leaf, then sons(t) contains disjoint subsets of I and t̂ is the disjoint union of its sons, t̂ = ⋃_{s ∈ sons(t)} ŝ.
- If t ∈ T_I is a leaf, then |t̂| ≤ n_min for a fixed number n_min.

Definition 3.2 (Block cluster tree T_{I×I}) [20, 17]
Let T_I be a cluster tree over the index set I. A finite tree T_{I×I} is a block cluster tree based on T_I if the following conditions hold:
- root(T_{I×I}) = I × I.
- Each vertex b of T_{I×I} has the form b = (τ, σ) with clusters τ, σ ∈ T_I.
- For each vertex (τ, σ) with sons(τ, σ) ≠ ∅ we have

$$\mathrm{sons}(\tau,\sigma) = \begin{cases} \{(\tau,\sigma')\ :\ \sigma'\in\mathrm{sons}(\sigma)\}, & \text{if } \mathrm{sons}(\tau)=\emptyset\ \wedge\ \mathrm{sons}(\sigma)\ne\emptyset,\\ \{(\tau',\sigma)\ :\ \tau'\in\mathrm{sons}(\tau)\}, & \text{if } \mathrm{sons}(\tau)\ne\emptyset\ \wedge\ \mathrm{sons}(\sigma)=\emptyset,\\ \{(\tau',\sigma')\ :\ \tau'\in\mathrm{sons}(\tau),\ \sigma'\in\mathrm{sons}(\sigma)\}, & \text{otherwise.} \end{cases}$$

- The label of a vertex (τ, σ) is given by (τ, σ)^ := τ̂ × σ̂ ⊆ I × I.


We can see that root(T_{I×I})^ = I × I. This implies that the set of leaves of T_{I×I} is a partition of I × I.

We generalise G_i to clusters τ ∈ T_I by setting G_τ := ⋃_{i∈τ̂} G_i, i.e., G_τ is the minimal subset of ℝ^d that contains the supports of all basis functions b_i with i ∈ τ̂.

Suppose that G_τ ⊂ ℝ^d and G_σ ⊂ ℝ^d are compact and that χ(x, y) is defined for (x, y) ∈ G_τ × G_σ with x ≠ y. The standard assumption on the kernel function in the H-matrix theory is asymptotic smoothness of χ(x, y) ∈ C^∞(G_τ × G_σ), i.e., that

$$|\partial_x^\alpha \partial_y^\beta\, \chi(x,y)| \le C_1\,|\alpha+\beta|!\;C_0^{|\alpha+\beta|}\;\|x-y\|^{-|\alpha+\beta|-\gamma}, \qquad \alpha,\beta\in\mathbb{N}_0^d,$$

holds for constants C₁, C₀ and γ ∈ ℝ. This estimate is used to control the error ε_q of the Taylor expansion

$$\chi(x,y) = \sum_{\alpha\in\mathbb{N}_0^d,\ |\alpha|\le q} (x-x_0)^\alpha\,\frac{1}{\alpha!}\,\partial_x^\alpha\chi(x_0,y) + \varepsilon_q.$$

Let S be an integral operator with an asymptotically smooth kernel χ in the domain G_τ × G_σ:

$$(Sv)(x) = \int_{G_\sigma} \chi(x,y)\,v(y)\,dy, \qquad x\in G_\tau.$$

Suppose that χ_k(x, y) is an approximation of χ in G_τ × G_σ in separable form (e.g., by Taylor or Lagrange polynomials):

$$\chi_k(x,y) = \sum_{\nu=1}^{k} \varphi_\nu(x)\,\psi_\nu(y), \tag{10}$$

where k is the separation rank. We aim at an approximation of the form (10) such that the exponential convergence

$$\|\chi - \chi_k\|_{\infty,\,G_\tau\times G_\sigma} \le O(\eta^k) \tag{11}$$

holds. For this purpose we introduce the following admissibility condition.

Definition 3.3 The standard admissibility condition (Adm_η) for two domains B_τ and B_σ (which correspond to the clusters τ and σ) is

$$\min\{\mathrm{diam}(B_\tau),\ \mathrm{diam}(B_\sigma)\} \le \eta\;\mathrm{dist}(B_\tau,B_\sigma), \tag{12}$$

where B_τ, B_σ ⊂ ℝ^d are axis-parallel bounding boxes of the clusters τ and σ such that G_τ ⊂ B_τ and G_σ ⊂ B_σ.

Lemma 3.1 For the function χ(x, y) = e^{−|x−y|} the separable approximation converges exponentially, i.e. there exists η < 1 such that for χ_k(x, y) from (10)

$$\|\chi(x,y) - \chi_k(x,y)\| \le O(\eta^k). \tag{13}$$
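The exponential convergence (13) can be checked numerically before turning to the proof. The sketch below uses illustrative interval endpoints (not taken from the paper) and measures the sup error of the degree-k Taylor polynomial of e^{−t} about the interval midpoint:

```python
import numpy as np
from math import factorial, exp

L1, L2 = 0.25, 1.0             # t = y - x for two separated 1D clusters (assumed values)
t0 = 0.5 * (L1 + L2)
t = np.linspace(L1, L2, 201)

def taylor_sup_error(k):
    """Sup error on [L1, L2] of the degree-k Taylor polynomial of exp(-t) at t0."""
    p = exp(-t0) * sum((-(t - t0)) ** j / factorial(j) for j in range(k + 1))
    return np.max(np.abs(np.exp(-t) - p))

errs = [taylor_sup_error(k) for k in range(1, 9)]
# exponential convergence: every extra term shrinks the sup error
assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))
```

The observed errors decay roughly like ((L₂−L₁)/2)^{k+1}/(k+1)!, i.e. faster than any fixed geometric rate, consistent with the bound proved below.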


Proof: Let x, y ∈ G := [0, 1], x ∈ τ := [a, b], 0 ≤ a < b ≤ 1, and y ∈ σ := [c, d], b ≤ c < d ≤ 1. After introducing the new variable t := y − x, we obtain χ(t) := e^{−t} with t ∈ [c − b, d − a]. The Taylor series of χ(t) at the point t₀ := ((c−b) + (d−a))/2 is

$$\chi(t) = e^{-t_0}\Bigl(1 + \sum_{j=1}^{\infty} \frac{(-1)^j}{j!}(t-t_0)^j\Bigr) = e^{-t_0}\Bigl(1 + \sum_{j=1}^{k} \frac{(-1)^j}{j!}(t-t_0)^j + \frac{(-1)^{k+1}}{(k+1)!}(\tilde t-t_0)^{k+1}\Bigr),$$

where t̃ ∈ [c − b, d − a]. Let ε := e^{−t₀} ((−1)^{k+1}/(k+1)!) (t̃ − t₀)^{k+1}, L₁ := c − b and L₂ := d − a. Then

$$|\varepsilon| \le e^{-t_0}\,\frac{(L_2-L_1)^{k+1}}{(k+1)!} \le e^{-t_0}\cdot\frac{(L_2-L_1)^{L_2-L_1}}{(L_2-L_1)!}\cdot\frac{(L_2-L_1)^{\,k+1-(L_2-L_1)}}{(L_2-L_1+1)\cdots(k+1)} \le C\,\eta^{\,k+1-(L_2-L_1)},$$

where C := e^{−t₀}(L₂−L₁)^{L₂−L₁}/(L₂−L₁)! and η := (L₂−L₁)/(L₂−L₁+1) < 1. □

We say that a pair (τ, σ) of clusters τ, σ ∈ T_I is admissible if condition (12) is satisfied. The admissibility condition indicates which blocks allow a rank-k approximation and which do not (see Fig. 2). The blocks for which condition (12) holds (the admissible blocks) are approximated by rank-k matrices; all other blocks are computed as usual.

In order to get a simpler partitioning (see the example in Fig. 2, right), we define the weaker admissibility condition Adm_W for a pair (τ, σ):

$$\text{a block } b = \tau\times\sigma\in T_{I\times I} \text{ is weakly admissible} \iff \bigl((b \text{ is a leaf}) \text{ or } \tau\ne\sigma\bigr), \tag{14}$$

where τ and σ are assumed to belong to the same level of T_{I×I}.

The covariance functions considered in this paper (see Section 4) do not have singularities like those in (9), and this is why more appropriate admissibility conditions are required. Different types of covariance functions require different admissibility conditions. The development of new admissibility conditions is not an easy task and is beyond the scope of this paper.

Let us consider properties of functions depending on (x − y), i.e. χ(x, y) = s(x − y). If x ∈ B_x and y ∈ B_y, then r := x − y belongs to

$$B_r := \{x - y\ :\ x\in B_x,\ y\in B_y\}.$$

Lemma 3.2 (Proposition 4.1.2, [21]) Any polynomial P(x, y) can be represented in the form

$$P(x,y) = \sum_{\nu=0}^{k_1} p_\nu(x)\,y^\nu \qquad\text{or}\qquad P(x,y) = \sum_{\nu=0}^{k_2} x^\nu\,q_\nu(y),$$

where k₁ (resp. k₂) is the polynomial degree in x (resp. y), and the p_ν and q_ν are polynomials in one variable.
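The standard admissibility check (12) is a simple geometric predicate on bounding boxes. A minimal sketch, assuming boxes are given by lower/upper corner arrays and using the Euclidean box diameter and box distance:

```python
import numpy as np

def admissible(box_t, box_s, eta=1.0):
    """Standard admissibility (12): min(diam B_t, diam B_s) <= eta * dist(B_t, B_s).

    Each box is a pair (lower_corner, upper_corner) of an axis-parallel bounding box.
    """
    lo_t, up_t = map(np.asarray, box_t)
    lo_s, up_s = map(np.asarray, box_s)
    diam_t = np.linalg.norm(up_t - lo_t)
    diam_s = np.linalg.norm(up_s - lo_s)
    # componentwise gap between the boxes (zero where they overlap in a direction)
    gap = np.maximum(0.0, np.maximum(lo_t - up_s, lo_s - up_t))
    dist = np.linalg.norm(gap)
    return min(diam_t, diam_s) <= eta * dist

# two well-separated unit squares are admissible; touching ones are not
assert admissible(([0, 0], [1, 1]), ([3, 0], [4, 1]))
assert not admissible(([0, 0], [1, 1]), ([1, 0], [2, 1]))
```

Larger η admits more blocks (coarser partition, larger ranks needed); the weak condition (14) goes further and only keeps the diagonal blocks dense.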


If the function f(·) is approximated in B_r by a polynomial P(r) (Taylor series, Lagrange polynomial, etc.), i.e. f(r) ≈ P(r), then the variables x and y have the same degree k = k₁ = k₂ in P(r). Applying the previous lemma, we obtain a separable k-term approximation of f(x − y).

In Section E.2 of [21] the author explains how to transfer the asymptotic smoothness of a function f(t) to the asymptotic smoothness of the function F(x, y) := f(|x − y|). Let f be defined on G₀ ⊂ ℝ with G₀ ⊂ (−d_f, d_f), d_f > 0.

Definition 3.4 (Section E.2, [21]) f is asymptotically smooth if

$$\Bigl|\Bigl(\frac{d}{dt}\Bigr)^{\alpha} f(t)\Bigr| \le C_0\,|t|^{-\alpha-s} \quad \text{for } t\in G_0,\ \alpha\in\mathbb{N},\ s\in\mathbb{R},\ \text{and a constant } C_0 = C_0(\alpha). \tag{15}$$

For t := |x − y|, x, y ∈ ℝ^d, we obtain the function

$$F(x,y) := f(|x-y|). \tag{16}$$

Let us denote the directional derivative by D_e := Σ_{i=1}^d e_i ∂/∂x_i, where e ∈ ℝ^d.

Proposition 3.1 (Section E.2, [21]) If the function f is asymptotically smooth in the sense of (15), then F(x, y) from (16) is also asymptotically smooth, i.e. for all directional derivatives D we have

$$|D^k F(x,y)| \le k!\,C_0\,|x-y|^{-k-s} \qquad (0 \ne |x-y| < d_f),\ C_0 > 0.$$

Lemma 3.3 The function F(x, y) = F(r) = e^{−|r|} is asymptotically smooth.

Proof: Apply Proposition 3.1 to the asymptotically smooth function f(t) := e^{−t}. □

Remark 3.1 For most of the covariance functions considered in our applications the asymptotic smoothness can be verified.

3.2 Rank-k Adaptive Cross Approximation

Let R ∈ ℝ^{p×q} and

$$R = AB^T, \qquad A\in\mathbb{R}^{p\times k},\ B\in\mathbb{R}^{q\times k},\ k\in\mathbb{N}. \tag{17}$$

Note that any matrix of rank k can be represented in the form (17). Suppose that b is a block of the matrix W and R := W|_b, and suppose it is known that R may be approximated by a rank-k matrix. We explain below how to compute R in the form (17). One possibility is the Adaptive Cross Approximation (ACA) algorithm [15, 7, 5, 9, 8]. ACA is especially effective for assembling low-rank matrices. It requires only k columns and k rows of the matrix under consideration and thus has computational cost k(p + q). In [15] it is proved that if there exists a sufficiently good low-rank approximation,


then there also exists a cross approximation with almost the same accuracy in the sense of the 2-norm.

The ACA algorithm computes vectors a_ℓ and b_ℓ which form R̃ = Σ_{ℓ=1}^{k} a_ℓ b_ℓ^T such that ‖R − R̃‖ ≤ ε, where ε is the desired accuracy [7, 9]. In [8] the reader can also find various counterexamples for which the standard ACA algorithm fails. Here we present the standard version of the ACA algorithm.

Algorithm 3.1 (ACA algorithm)

begin
  /* input: a required accuracy ε and a routine computing entries R_ij */
  /* output: the matrix R̃ */
  k := 0; R̃ := 0; S := ∅; T := ∅;   /* sets of used row and column indices */
  do
    take a row i* ∉ S;
    subtract R_{i*j} := R_{i*j} − R̃_{i*j}, j = 1..q;
    find max_j |R_{i*j}| ≠ 0; suppose it lies in column j*;
    compute all elements R_{ij*} of column j*, i = 1..p;
    subtract R_{ij*} := R_{ij*} − R̃_{ij*}, i = 1..p;
    k := k + 1; S := S ∪ {i*}; T := T ∪ {j*};
    update R̃ := R̃ + a_k b_k^T, with a_k := (R_{ij*})_{i=1..p}/R_{i*j*} and b_k := (R_{i*j})_{j=1..q};
    if ‖a_k b_k^T‖₂ ≤ ε ‖a₁ b₁^T‖₂ then return R̃;
    find max_i |R_{ij*}|, i ∉ S; the row where it lies is the new row i*;
  while (k < k_max)
  return R̃;
end

Note that the algorithm does not compute the whole matrix R. The subtraction is carried out only on the entries under consideration, i.e. on the crosses formed by the rows and columns a_ℓ, b_ℓ, ℓ = 1, …, k.

Remark 3.2 A further optimisation of the ACA algorithm is a truncated SVD recompression. Suppose a factorisation R = AB^T, A ∈ ℝ^{p×K}, B ∈ ℝ^{q×K}, has been found by ACA, and that the rank of R is k < K. Then one can apply the truncated SVD algorithm to compute R = UΣV^T, requiring O((p + q)K² + K³) FLOPS.

3.3 H-matrices

Definition 3.5 [20] Let I be an index set and T_{I×I} a hierarchical division of the index-set product I × I into subblocks (the block cluster tree). The set of H-matrices is defined as

$$\mathcal{H}(T_{I\times I}, k) := \{W \in \mathbb{R}^{I\times I} \mid \mathrm{rank}(W|_b) \le k \text{ for all admissible blocks } b \text{ of } T_{I\times I}\}.$$

Here, W|_b = (w_{ij})_{(i,j)∈b} denotes the matrix block of W = (w_{ij})_{i,j∈I} corresponding to b ∈ T_{I×I}.
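A compact Python sketch of Algorithm 3.1. The pivot normalisation of the column, the simple next-row heuristic, and the stopping constants are the usual conventions and are assumptions here, not details prescribed by the text:

```python
import numpy as np

def aca(entry, p, q, eps=1e-8, kmax=50):
    """Sketch of the standard ACA algorithm: entry(i, j) returns R_ij.

    Only the visited rows and columns are evaluated, so the cost is
    O(k (p + q)) entry evaluations for the final rank k.
    """
    A = np.zeros((p, 0)); B = np.zeros((q, 0))
    used_rows = set()
    i_star = 0
    first_norm = None
    for _ in range(kmax):
        used_rows.add(i_star)
        # residual row: R_{i*,:} minus the current low-rank approximation
        row = np.array([entry(i_star, j) for j in range(q)]) - A[i_star, :] @ B.T
        j_star = int(np.argmax(np.abs(row)))
        pivot = row[j_star]
        if pivot == 0.0:
            break
        # residual column j*, normalised by the pivot entry
        col = np.array([entry(i, j_star) for i in range(p)]) - A @ B[j_star, :]
        a = col / pivot
        A = np.column_stack([A, a]); B = np.column_stack([B, row])
        norm = np.linalg.norm(a) * np.linalg.norm(row)
        if first_norm is None:
            first_norm = norm
        if norm <= eps * first_norm:
            break
        # next pivot row: largest entry of the new column outside used rows
        col_abs = np.abs(a)
        col_abs[list(used_rows)] = -1.0
        i_star = int(np.argmax(col_abs))
    return A, B

# usage: an admissible block of cov(x, y) = exp(-|x - y|) for separated clusters
xs = np.linspace(0.0, 1.0, 60)          # cluster tau
ys = np.linspace(2.0, 3.0, 50)          # well-separated cluster sigma
A, B = aca(lambda i, j: np.exp(-abs(xs[i] - ys[j])), 60, 50)
R = np.exp(-np.abs(xs[:, None] - ys[None, :]))
assert np.linalg.norm(R - A @ B.T) <= 1e-6 * np.linalg.norm(R)
```

For this well-separated block the kernel is (numerically) of very low rank, so ACA terminates after a couple of crosses; Remark 3.2's SVD recompression could then shrink the factors further.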


We denote an H-matrix approximation of W by W̃. Finally, we list the computational complexities of the basic algebraic operations with H-matrices.

Theorem 3.1 [20, 17] Let I be an index set, n := |I|, and let T_{I×I} be a tree defining the block structure with depth(T_{I×I}) = O(log n); let W ∈ H(T_{I×I}, k). Then the storage of W and the matrix-vector multiplication cost O(kn log n), matrix-matrix addition costs O(k²n log n), and the matrix-matrix product as well as the matrix inverse cost O(k²n log² n).

Proof: See [20, 17, 9]. □

Note that the sum of two hierarchical matrices M₁, M₂ ∈ H(T_{I×I}, k) is a matrix from H(T_{I×I}, 2k). To have the sum M₁ + M₂ in the class H(T_{I×I}, k) as well, one has to truncate the rank 2k to k.

4 H-matrix approximation of the covariance matrix C

Examples of the computational domain G are shown in Fig. 1.

Figure 1: Examples of computational domains G with a non-rectangular grid.

Let x = (x₁, …, x_d) and y = (y₁, …, y_d) ∈ G. Define the (anisotropic) distance

$$\rho = \sqrt{\sum_{i=1}^{d} |x_i - y_i|^2 / l_i^2}, \qquad d = 2, 3, \tag{18}$$

where the l_i are correlation length scales. Typical examples of covariance functions are

$$\text{(a)}\quad \mathrm{cov}(\rho) = e^{-\rho^2} \quad\text{(Gaussian)}, \tag{19}$$

$$\text{(b)}\quad \mathrm{cov}(\rho) = e^{-\rho} \quad\text{(exponential)}, \tag{20}$$

$$\text{(c)}\quad \mathrm{cov}(\rho) = \begin{cases} 1 - \tfrac{3}{2}\rho + \tfrac{1}{2}\rho^3, & 0 \le \rho \le 1,\\ 0, & \rho \ge 1 \end{cases} \quad\text{(spherical)}. \tag{21}$$

To demonstrate the accuracy of the H-matrix approximation, we compute the following errors:

$$\varepsilon_2 := \frac{\bigl|\,\|C\|_2 - \|\tilde{C}\|_2\,\bigr|}{\|C\|_2}, \qquad \varepsilon := \frac{\|(C - \tilde{C})z\|_2}{\|C\|_2\,\|z\|_2}, \qquad \text{where } z \text{ is a random vector.}$$
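In code, the distance (18) and the three covariance models (19)-(21) are one-liners. A minimal vectorised sketch (the point cloud in the usage example is an arbitrary illustration):

```python
import numpy as np

def rho(x, y, ls):
    """Anisotropic distance (18): sqrt(sum_i |x_i - y_i|^2 / l_i^2)."""
    d = (np.asarray(x) - np.asarray(y)) / np.asarray(ls, dtype=float)
    return np.sqrt(np.sum(d * d, axis=-1))

def cov_gauss(r): return np.exp(-np.asarray(r) ** 2)        # (19)
def cov_exp(r):   return np.exp(-np.asarray(r))             # (20)
def cov_spherical(r):                                       # (21)
    r = np.asarray(r, dtype=float)
    return np.where(r <= 1.0, 1.0 - 1.5 * r + 0.5 * r ** 3, 0.0)

# usage: assemble C_ij = cov(rho(x_i, x_j)) for a 2D point cloud
pts = np.random.default_rng(1).random((50, 2))
R = rho(pts[:, None, :], pts[None, :, :], ls=[0.1, 0.5])
C = cov_exp(R)
```

All three models equal 1 at ρ = 0 and decay with distance; the spherical model is exactly zero beyond ρ = 1, which already gives C some exact sparsity.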


| n | rank k | memory C (MB) | memory C̃ (MB) | time C (sec) | time C̃ (sec) | ε | max |λ_i − λ̃_i|, i = 1..10 (at i) | ε₂ |
|---|---|---|---|---|---|---|---|---|
| 4.0·10³ | 10 | 48 | 3 | 0.8 | 0.08 | 7·10⁻³ | 7.0·10⁻² (i = 9) | 2.0·10⁻⁴ |
| 1.05·10⁴ | 18 | 439 | 19 | 7.0 | 0.4 | 7·10⁻⁴ | 5.5·10⁻² (i = 2) | 1.0·10⁻⁴ |
| 2.1·10⁴ | 25 | 2054 | 64 | 45.0 | 1.4 | 1·10⁻⁵ | 5.0·10⁻² (i = 9) | 4.4·10⁻⁶ |

Table 1: Accuracy of the H-matrix approximation (weak admissibility) of the covariance function (20), l₁ = l₃ = 0.1, l₂ = 0.5. The geometry is shown in Fig. 1 (right).

Standard admissibility condition (12), geometry of Fig. 1 (middle), l₁ = 0.1, l₂ = 0.5, n = 2.3·10⁵:

| k | size (MB) | t (sec) |
|---|---|---|
| 1 | 1548 | 33 |
| 2 | 1865 | 42 |
| 3 | 2181 | 50 |
| 4 | 2497 | 59 |
| 6 | nem | — |

Weak admissibility condition (14), geometry of Fig. 1 (right), l₁ = 0.1, l₂ = 0.5, l₃ = 0.1, n = 4.61·10⁵:

| k | size (MB) | t (sec) |
|---|---|---|
| 4 | 463 | 11 |
| 8 | 850 | 22 |
| 12 | 1236 | 32 |
| 16 | 1623 | 43 |
| 20 | nem | — |

Table 2: Dependence of the computing time and storage requirement on the H-matrix rank k for the covariance function (20), for the standard admissibility condition (12) (top) and the weak admissibility condition (14) (bottom).

All the following numerical experiments were carried out on a computer with a 2 GHz processor and 3 GB of memory. Table 1 shows the computing time and storage requirements for the H-matrix approximation C̃ of C. One can see that C̃ needs much less memory and computing time than C. Table 2 demonstrates the dependence of the computational resources on the H-matrix rank k for the standard and weak admissibility conditions. The matrix obtained with the weak admissibility condition (see the example in Fig. 2, right) is simpler, but needs a higher rank to achieve the same accuracy as the matrix obtained with the standard admissibility condition. For the cases k = 6 and k = 20 there is not enough memory (abbreviated "nem").

Figure 2 shows two different examples of H-matrix approximations of the discretised covariance function (20) with l₁ = 0.15 and l₂ = 0.2. For the matrix on the left the standard admissibility condition (12) was used, and for the matrix on the right the weak admissibility condition (14). The dark blocks indicate dense matrices and the grey blocks rank-k matrices.
The steps inside the blocks represent the decay of the singular values on a log scale. The approximation on the left has a more complex block structure, but a smaller maximal rank (k = 6). The approximation on the right has a simpler block structure, but the maximal rank is larger (k = 20).

Table 3 demonstrates the computational resources needed for the H-matrix approximation of the covariance function cov(x, y) = e^{−ρ}, l₁ = l₂ = 1 (see (20)).


20 32Figure 2: Two examples of H-matrix approximations 2 Rn�n , n = 322, of the dis retised ovarian e fun tion ov(x; y) = e��, l1 = 0:15, l2 = 0:2, x; y 2 [0; 1℄2. The biggest dense(dark) blo ks 2 R32�32 , max. rank k = 6 on the left and k = 20 on the right. The rightblo k stru ture is simpler, but the left stru ture is more a urate.For small problem sizes su h as 332, 652 (in 2D) it is possible to ompute the exa t ovarian e matrix C and he k the a ura y of the H-matrix approximation. But forlarge problem sizes there is not enough memory (\nem") to store the matrix C. The last olumn presents the a ura y of the H-matrix approximation.time (se .) memory (MB)n C ~C C ~C "332 0:14 0:01 9:5 0:7 4:3 � 10�3652 2:6 0:05 1:4 � 102 3:5 3:7 � 10�31292 �� 0:24 nem 16 ��2572 �� 1 nem 64 ��Table 3: Dependen e of the omputational time and storage ost on the problem size n,rank k = 5, ov(x; y) = e��, l1 = l2 = 1, domain G = [0; 1℄2.One an see that H-matrix approximations an be omputed very fast even for 1292and 2572 degrees of freedom, whereas for the dense matri es there is not enough memory.Table 4 demonstrates the a ura y of the H-matrix approximation of the ovarian efun tion (20) for di�erent ovarian e lengths l1 and l2 .14


    l1     l2     ε
    0.01   0.02   3·10^-2
    0.1    0.2    8·10^-3
    0.5    1      2.8·10^-5

Table 4: Dependence of the H-matrix accuracy on the covariance lengths l1 and l2 for the covariance function (20), G = [0, 1]^2, n = 129^2.

5 Numerical computation of KLE

An analytical solution of the eigenvalue problem (2) is known only in rare cases (usually only in 1D and for a small class of covariance functions). For instance, the solution of the eigenvalue problem (2) with cov(x, y) = e^{-α|x-y|}, x, y ∈ (-a, a) ⊂ R, is available in [13, 14]. Already in 2D, however, analytical solutions are either much more complex or impossible to derive. In this section we solve the symmetric eigenvalue problem (7). We tested the ARPACK [27] and TRLAN [42] packages for computing the m largest eigenvalues and the corresponding eigenfunctions of (7). ARPACK is based on an algorithmic variant of the Arnoldi process called the Implicitly Restarted Arnoldi Method (IRAM). For symmetric matrices it reduces to a variant of the Lanczos process called the Implicitly Restarted Lanczos Method (IRLM) [27].

The TRLAN package targets the case where one wants both eigenvalues and eigenvectors of a large real symmetric eigenvalue problem and cannot use the shift-and-invert scheme. In this case the standard non-restarted Lanczos algorithm requires the storage of a large number of Lanczos vectors, which can cause storage problems and makes each iteration of the method very expensive. The algorithm used in TRLAN is a dynamic thick-restart Lanczos algorithm. The convergence test used in TRLAN is the residual criterion r < tolerance · ||~C|| [42].

The three most time-consuming procedures in the Lanczos method are the matrix-vector multiplication, the re-orthogonalisation and the computation of the Ritz vectors.
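To make these three steps concrete, here is a minimal toy Lanczos iteration in Python/NumPy (a sketch of the general scheme only; the function name lanczos_ritz, the 15 x 15 grid, the covariance length and the choice of 40 steps are our illustrative assumptions, and neither ARPACK's IRAM/IRLM restarting nor TRLAN's thick restart is reproduced). It accesses the matrix exclusively through matrix-vector products, performs a full re-orthogonalisation, and extracts Ritz values from the small tridiagonal matrix T:

```python
import numpy as np

def lanczos_ritz(matvec, n, m, rng):
    """Run m Lanczos steps with full re-orthogonalisation; return Ritz values."""
    Q = np.zeros((n, m + 1))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    q0 = rng.standard_normal(n)
    Q[:, 0] = q0 / np.linalg.norm(q0)
    for j in range(m):
        w = matvec(Q[:, j])                       # the only access to the matrix
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full re-orthogonalisation
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return np.linalg.eigvalsh(T)                  # Ritz values, ascending

# Small dense stand-in for the covariance matrix (exponential kernel in 2D).
g = np.linspace(0.0, 1.0, 15)
pts = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1).reshape(-1, 2)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
C = np.exp(-d / 0.5)

rng = np.random.default_rng(0)
ritz = lanczos_ritz(lambda v: C @ v, C.shape[0], 40, rng)
exact = np.linalg.eigvalsh(C)
print(float(ritz[-1]), float(exact[-1]))  # the largest Ritz value converges first
```

Replacing the dense product C @ v by an H-matrix-vector product changes the cost of the dominant step from O(n^2) to O(kn log n) without touching the rest of the iteration.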
All matrix-vector products are approximated in the H-matrix format at the cost O(n log n). We also investigate how the H-matrix technique reduces the memory requirements and the computing times of the eigenvalue solver.

Remark 5.1 Note that an H-matrix approximation ~C of the symmetric matrix C is not always symmetric [6] (a possible reason is rounding error). Therefore we take the symmetric part (~C + ~C^T)/2. Note also that HLIB [16] offers the possibility to set up only the upper (or lower) triangular part of the matrix.

Table 5 lists the computing times required for an H-matrix-vector multiplication, using the weak admissibility criterion (14) for the covariance function (20) with l1 = l3 = 0.1 and l2 = 0.5; the geometry is shown in Fig. 1 (right). The times t2 needed to set up the H-matrices are also given. The numerical experiments confirm the theoretical estimate O(kn log n) (from Theorem 3.1) for the H-matrix-vector multiplication: one can see a linear dependence on the rank k and an almost linear dependence on the problem size n. If the matrix C is stored in a dense matrix format, the complexity is O(n^2) (the last row). Note that for n = 3.5·10^4 and larger there is not enough memory to store C; the corresponding computing times for n ≥ 3.5·10^4 are extrapolated from the previous values.

            n = 1.05·10^4    n = 2.4·10^4    n = 3.5·10^4     n = 6.8·10^4   n = 2.3·10^5
    k       t1       t2      t1       t2     t1         t2    t1       t2    t1         t2
    3       8·10^-4  0.1     3·10^-3  0.2    6.0·10^-3  0.4   1·10^-2  1     5.0·10^-2  4
    6       2·10^-3  0.15    6·10^-3  0.4    1.1·10^-2  0.7   2·10^-2  2     9.0·10^-2  7
    9       3·10^-3  0.2     8·10^-3  0.5    1.5·10^-2  1.0   3·10^-2  3     1.3·10^-1  11
    full    0.13             0.62            2.48             10             140

Table 5: t1 - computing times (in sec.) required for an H-matrix and a dense ("full" row) matrix-vector multiplication; t2 - times to set up ~C ∈ R^{n×n}.

Tables 6 and 7 show the computing times that TRLAN [42] requires to compute m eigenpairs. The computing times in Table 7 are larger than those in Table 6 because TRLAN performs more iteration steps.

    n          k    size of ~C, MB   setup time, sec.   m = 2   m = 5   m = 10   m = 20   m = 40   m = 80
    2.4·10^4   4    12               0.2                0.2     0.2     0.4      0.7      1.8      5
    6.8·10^4   8    95               0.7                0.7     0.8     1.6      3.4      7.0      19
    2.3·10^5   12   570              6.8                3.6     4.0     7.2      15.0     31.0     75

Table 6: Time (in sec.) required for computing m eigenpairs of the covariance function (20) with l1 = l2 = l3 = 1. The geometry is shown in Fig. 1 (right).

    n          k    size of ~C, MB   setup time, sec.   m = 2   m = 5   m = 10   m = 20   m = 40   m = 80
    2.4·10^4   4    12               0.2                0.6     0.9     1.3      2.3      4.2      8
    6.8·10^4   8    95               2                  2.4     3.8     5.6      8.4      18.0     28
    2.3·10^5   12   570              11                 10.0    17.0    24.0     39.0     70.0     150

Table 7: Time (in sec.) required for computing m eigenpairs of the covariance function (20) with l1 = l3 = 0.1, l2 = 0.5. The geometry is shown in Fig. 1 (right).

6 Conclusion

We have successfully applied the H-matrix technique to the approximation of the covariance matrices (19)-(21) in the 2D and 3D cases. The use of the H-matrix technique reduces the computational resources (Tables 1, 2, 3 and 5) required by eigensolvers (e.g. ARPACK [27]
and TRLAN [42]) for solving the eigenvalue problem (7) dramatically. The combination of the H-matrix technique and iterative eigenvalue solvers is seen to be a very efficient way to compute the KLE (Tables 6 and 7).

Acknowledgements.

The authors are grateful to Eveline Rosseel (K.U.Leuven, Department of Computer Science, Belgium) for valuable comments. We would also like to thank our student Jeremy Rodriguez for his help in producing Tables 1, 2, 5, 6 and 7. It is also acknowledged that this research has been conducted within the project MUNA under the framework of the German Luftfahrtforschungsprogramm funded by the Ministry of Economics (BMWA).

References

[1] Atkinson, K. E.: The numerical solution of integral equations of the second kind, vol. 4 of Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, Cambridge (1997).
[2] Babuška, I., Tempone, R., Zouraris, G. E.: Galerkin finite element approximations of stochastic elliptic partial differential equations. SIAM J Numer Anal 42(2), 800-825 (2004).
[3] Babuška, I., Nobile, F., Tempone, R.: A stochastic collocation method for elliptic partial differential equations with random input data. SIAM J Numer Anal 45(3), 1005-1034 (2007).
[4] Bai, Z., Demmel, J., Dongarra, J., Ruhe, A., van der Vorst, H. (eds.): Templates for the solution of algebraic eigenvalue problems - a practical guide, vol. 11 of Software, Environments, and Tools. SIAM, Philadelphia, PA (2000).
[5] Bebendorf, M.: Approximation of boundary element matrices. Numer Math 86(4), 565-589 (2000).
[6] Bebendorf, M., Hackbusch, W.: Stabilized rounded addition of hierarchical matrices. Numer Linear Algebra Appl 14(5), 407-423 (2007).
[7] Bebendorf, M., Rjasanow, S.: Adaptive low-rank approximation of collocation matrices. Computing 70(1), 1-24 (2003).
[8] Börm, S., Grasedyck, L.: Hybrid cross approximation of integral operators. Numer Math 101(2), 221-249 (2005).
[9] Börm, S., Grasedyck, L., Hackbusch, W.: Hierarchical matrices, lecture notes. Max-Planck Institute for Mathematics, Leipzig (2003).


[10] Eiermann, M., Ernst, O. G., Ullmann, E.: Computational aspects of the stochastic finite element method. Computing and Visualization in Science 10(1), 3-15 (2007).
[11] Elman, H. C., Ernst, O. G., O'Leary, D. P., Stewart, M.: Efficient iterative algorithms for the stochastic finite element method with application to acoustic scattering. Comput Methods Appl Mech Engrg 194(9-11), 1037-1055 (2005).
[12] Frigo, M., Johnson, S.: FFTW: an adaptive software architecture for the FFT. In: Proc. ICASSP, IEEE, Seattle, WA, www.fftw.org, (3):1381-1384 (1998).
[13] Ghanem, R., Spanos, D.: Spectral stochastic finite-element formulation for reliability analysis. Journal of Engineering Mechanics 117(10), 2351-2370 (1991).
[14] Ghanem, R., Spanos, P.: Stochastic finite elements: a spectral approach. Springer-Verlag, New York (1991).
[15] Goreinov, S. A., Tyrtyshnikov, E. E., Zamarashkin, N. L.: A theory of pseudoskeleton approximations. Linear Algebra Appl 261, 1-21 (1997).
[16] Grasedyck, L., Börm, S.: H-matrix library: www.hlib.org.
[17] Grasedyck, L., Hackbusch, W.: Construction and arithmetics of H-matrices. Computing 70(4), 295-334 (2003).
[18] Hackbusch, W., Khoromskij, B. N.: A sparse H-matrix arithmetic. II. Application to multi-dimensional problems. Computing 64(1), 21-47 (2000).
[19] Hackbusch, W.: Integral equations, vol. 120 of International Series of Numerical Mathematics. Birkhäuser Verlag, Basel (1995). Theory and numerical treatment, translated and revised by the author from the 1989 German original.
[20] Hackbusch, W.: A sparse matrix arithmetic based on H-matrices. I. Introduction to H-matrices. Computing 62(2), 89-108 (1999).
[21] Hackbusch, W.: Hierarchische Matrizen - Algorithmen und Analysis. Lecture notes. Max-Planck-Institut für Mathematik, Leipzig (2004).
[22] Hackbusch, W., Khoromskij, B. N., Kriemann, R.: Hierarchical matrices based on a weak admissibility criterion. Computing 73(3), 207-243 (2004).
[23] Hackbusch, W., Khoromskij, B. N., Tyrtyshnikov, E. E.: Hierarchical Kronecker tensor-product approximations. J Numer Math 13(2), 119-156 (2005).
[24] Hida, T., Kuo, H.-H., Potthoff, J., Streit, L.: White noise - an infinite-dimensional calculus, vol. 253 of Mathematics and its Applications. Kluwer Academic Publishers Group, Dordrecht (1993).


[25] Keese, A.: Numerical solution of systems with stochastic uncertainties. A general purpose framework for stochastic finite elements. Ph.D. thesis, TU Braunschweig, Germany (2004).
[26] Lanczos, C.: An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. J Research Nat Bur Standards 45, 255-282 (1950).
[27] Lehoucq, R. B., Sorensen, D. C., Yang, C.: ARPACK users' guide. Solution of large-scale eigenvalue problems with implicitly restarted Arnoldi methods, vol. 6 of Software, Environments, and Tools. SIAM, Philadelphia, PA (1998).
[28] Loève, M.: Probability theory I. Graduate Texts in Mathematics, vols. 45, 46, fourth ed. Springer-Verlag, New York (1977).
[29] Matthies, H. G.: Uncertainty quantification with stochastic finite elements. Part 1. Fundamentals. Encyclopedia of Computational Mechanics, John Wiley and Sons Ltd (2007).
[30] Matthies, H. G., Keese, A.: Galerkin methods for linear and nonlinear elliptic stochastic partial differential equations. Comput Methods Appl Mech Engrg 194(12-16), 1295-1331 (2005).
[31] Pellissetti, M. F., Ghanem, R.: Iterative solution of systems of linear equations arising in the context of stochastic finite elements. Adv Eng Softw 31(8-9), 607-616 (2000).
[32] Press, W. H., Teukolsky, S. A., Vetterling, W. T., Flannery, B. P.: Numerical recipes in C - the art of scientific computing, second ed. Cambridge University Press, Cambridge (1992).
[33] Riesz, F., Sz.-Nagy, B.: Functional analysis. Dover Books on Advanced Mathematics. Dover Publications Inc., New York (1990). Translated from the second French edition by Leo F. Boron, reprint of the 1955 original.
[34] Saad, Y.: Numerical methods for large eigenvalue problems. Algorithms and Architectures for Advanced Scientific Computing. Manchester University Press, Manchester (1992).
[35] Schwab, C., Todor, R. A.: Sparse finite elements for elliptic problems with stochastic loading. Numer Math 95(4), 707-734 (2003).
[36] Schwab, C., Todor, R. A.: Sparse finite elements for stochastic elliptic problems - higher order moments. Computing 71, 43-63 (2003).


[37] Schwab, C., Todor, R. A.: Karhunen-Loève approximation of random fields by generalized fast multipole methods. J Comput Phys 217(1), 100-122 (2006).
[38] Seynaeve, B., Rosseel, E., Nicolaï, B., Vandewalle, S.: Fourier mode analysis of multigrid methods for partial differential equations with random coefficients. J Comput Phys 224(1), 132-149 (2007).
[39] Todor, R. A., Schwab, C.: Convergence rates for sparse chaos approximations of elliptic problems with stochastic coefficients. IMA J Numer Anal 27(2), 232-261 (2007).
[40] Werner, D.: Funktionalanalysis, extended ed. Springer-Verlag, Berlin (2000).
[41] Wiener, N.: The homogeneous chaos. American Journal of Mathematics 60, 897-936 (1938).
[42] Wu, K., Simon, H.: Thick-restart Lanczos method for large symmetric eigenvalue problems. SIAM J Matrix Anal Appl 22(2), 602-616 (2000).


