PARALLELISM IN SPECTRAL METHODS

C. CANUTO (1)

    ABSTRACT - Several strategies of parallelism for spectral algorithms are discussed. The investigation shows that, despite the intrinsic lack of locality of spectral methods, they are amenable to parallel implementations, even on fine grain architectures. Typical algorithms for the spectral approximation of the viscous, incompressible Navier-Stokes equations serve as examples in the discussion.

SOMMARIO - Several parallelization strategies for spectral-type algorithms are discussed. The analysis shows that spectral methods can be implemented efficiently on parallel architectures, even fine-grained ones, despite their non-local character. Some well-known spectral algorithms for the approximation of the viscous, incompressible Navier-Stokes equations are used as examples in the discussion.

    Introduction.

Since their origin in the late sixties, spectral methods in their modern form have been designed and developed with the aim of solving problems which could not be tackled by more conventional numerical methods (finite differences, and later finite elements). The direct simulation of turbulence for incompressible flows is the most popularly known example of such applications: the range of phenomena amenable to a satisfactory numerical simulation has widened over the years under the twofold effect of the increase in computer power and the development of sophisticated algorithms of spectral type. The simulation of the same phenomena by other techniques would have required a computer power larger by orders of magnitude; hence, it would not have been feasible on the currently available machines (a discussion of the most significant achievements of spectral methods in fluid dynamics can be found, e.g., in Chapter 1 of ref. [1]).

(1) Dipartimento di Matematica, Università di Parma, 43100 Parma, Italy and Istituto di Analisi Numerica del C.N.R., Corso C. Alberto, 5 - 27100 Pavia, Italy.

Invited paper at the International Symposium on «Vector and Parallel Processors for Scientific Computation - 2», held by the Accademia Nazionale dei Lincei and IBM, Rome, September 1987.

Since spectral methods have been constantly used in «extreme» applications, their implementation has taken place on state-of-the-art computer architectures. The vectorization of spectral algorithms was a fairly easy task. Nowadays, spectral codes for fluid dynamics run on vector supercomputers such as the Cray family or the Cyber 205, taking full advantage of their pipeline architectures and reaching rates of vectorization well above 80% (we refer, e.g., to Appendix B in ref. [1]).

On the contrary, the implementation of spectral algorithms on parallel computers is still in its infancy. This is partly due to the fact that multiprocessor supercomputers are only now becoming available to the scientific community. But there is also a deeper motivation: it is not yet clear whether and how the global character of spectral methods will efficiently fit into a highly granular parallel architecture. Thus, a deep investigation - of both a theoretical and experimental nature - is needed. As a testimony of the present uncertainty on this topic, we quote the point of view of researchers working on the development of a multipurpose parallel supercomputer, especially tailored for fluid-dynamics applications, known as the Navier-Stokes Computer (NSC). This is a joint project between Princeton University and the NASA Langley Research Center, aimed at building a parallel supercomputer made up of a fairly small number of powerful nodes. Each node has the performance of a class VI vector supercomputer; the initial configuration will have 64 such nodes. Despite the superior accuracy of spectral methods over finite difference methods, the scientists involved in this project have chosen to employ low-order finite differences, at least in the initial investigation of how well transition and turbulence algorithms can exploit the NSC architecture. Indeed, «the much greater communication demands of the global discretization may well tip the balance in favor of the less accurate, but simpler local discretizations» ([12]).

Currently, a number of implementations of spectral algorithms on parallel architectures are documented. Let us refer here to the work done at the IBM European Center for Scientific and Engineering Computing (ECSEC) in Rome, at the NASA Langley Research Center by Erlebacher, Bokhari and Hussaini [5], and at ONERA (France) by Leca and Sacchi-Landriani [11]. The IBM contributions are described in detail by P. Sguazzero in a paper in this volume. The latter contributions will be briefly reviewed in the present paper.

    The purpose of this paper is to discuss where and to what extent it is possible to introduce parallelism in spectral algorithms. We will also try to indicate which communication networks are suitable for the implementation of spectral methods on fine grain, local memory architectures.

    1. Basic aspects of spectral methods.

Let us briefly review the fundamental properties of spectral methods for the approximation of boundary value problems. We will focus on those aspects of the methods which are most relevant to their implementation in a multiprocessor environment. For complete details we refer, e.g., to refs. [1], [6], [15].

    Let us assume that we are interested in approximating a boundary value problem, which we write as

(1.1)  $L(u) = f$ in $\Omega$, plus boundary conditions on $\partial\Omega$,

in a d-dimensional box $\Omega = \prod_{i=1}^{d} (a_i, b_i)$. We approximate the solution u by a finite expansion

(1.2)  $u^N = \sum_{|k| \le N} \hat{u}_k \, \phi_k(x)$,

where $k = (k_1, \ldots, k_d)$ and

(1.3)  $\phi_k(x) = \prod_{i=1}^{d} \phi_{k_i}^{(i)}(x_i)$.

Each $\phi_m^{(i)}$ is a smooth global basis function on $(a_i, b_i)$, satisfying the orthogonality condition

(1.4)  $\int_{a_i}^{b_i} \phi_m^{(i)}(x)\, \phi_n^{(i)}(x)\, w(x)\, dx = c_m \delta_{mn}$

with respect to a weight function $w$. In most applications, the one-dimensional basis functions are trigonometric polynomials in the space directions where a periodicity boundary condition is enforced, and orthogonal algebraic polynomials (Chebyshev, or Legendre polynomials) in the remaining directions.


    The boundary value problem is discretized by a suitable projection process, which can be represented as

(1.5)  find $u^N \in X_N$ such that $(L_N(u^N), v)_N = (f, v)_N$ for all $v \in Y_N$.

Here $X_N$ is the space of trial functions, $Y_N$ is the space of test functions, $L_N$ is an approximation of the differential operator L, and $(u, v)_N$ is an inner product, which may depend upon the cut-off number N. In general, when $X_N = Y_N$ and the inner product is the $L^2(\Omega)$ inner product, we speak of a Galerkin method; this is quite common for periodic boundary value problems. Otherwise, for non-periodic boundary conditions, we have a tau method when the inner product is the $L^2$ inner product and $Y_N$ is a space of test functions which do not individually satisfy the boundary conditions, or a collocation method when the inner product is an approximation of the $L^2(\Omega)$ inner product based on a Gaussian quadrature rule.
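As a worked instance (ours, not the paper's), consider the periodic model problem $-u'' + u = f$ on $(0, 2\pi)$ with the Fourier Galerkin choice $X_N = Y_N = \mathrm{span}\{e^{ikx} : |k| \le N\}$; the projection (1.5) then decouples into one scalar equation per mode:

$$(L_N(u^N), e^{ikx})_{L^2} = (f, e^{ikx})_{L^2} \quad\Longleftrightarrow\quad (1 + k^2)\,\hat{u}_k = \hat{f}_k, \qquad |k| \le N.$$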

In order to have a genuine spectral method, the basis functions in the expansion (1.2) must satisfy a supplementary property, in addition to the orthogonality condition (1.4): if one expands a smooth function according to this basis, the «Fourier» coefficients of the function should decay at a rate which is a monotonically increasing, divergent function of the degree of regularity of the function. This occurs if we approximate a periodic function by the trigonometric system (if $u \in C^s(0, 2\pi)$, then $\hat{u}_k = O(|k|^{-s})$). The same property holds if we expand a non-periodic function according to the eigenfunctions of a singular Sturm-Liouville problem (such as Jacobi polynomials). The above-mentioned property is known as the spectral accuracy property. When it is satisfied, one is in the condition to prove an error estimate for the approximation (1.5) of problem (1.1) of the form

(1.6)  $\|u - u^N\|_{H^m} \le C(m, r)\, N^{m-r}\, \|u\|_{H^r}$ for all $r \ge r_0$ fixed,

where the spaces $H^r$ form a scale of Hilbert spaces in which the regularity of u is measured. Estimate (1.6) gives theoretical evidence of the fundamental property of spectral methods, namely, they guarantee an accurate representation of smooth, although highly structured, phenomena by a «minimal» number of unknowns.

Spectral methods owe their success to the availability of «fast algorithms» to handle complex problems. The discrete solution $u^N$ is determined by the set of its «Fourier coefficients» $\{\hat{u}_k : |k| \le N\}$ according to the expansion (1.2), but it can also be uniquely defined by the set of its values $\{u_j = u^N(x_j) : |j| \le N\}$ at a selected set $G_N = \{x_j : |j| \le N\}$ of collocation points.


For the trigonometric basis, differentiation in physical space is represented by the matrix

(1.9)  $d_{lj} = \begin{cases} \dfrac{(-1)^{l+j}}{2} \cot\dfrac{(l-j)\pi}{2N}, & l \ne j, \\ 0, & l = j. \end{cases}$
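A quick numerical check of (1.9) (our sketch; the standard equispaced grid $x_l = \pi l/N$ on $[0, 2\pi)$ is assumed):

```python
import numpy as np

# Apply the physical-space Fourier differentiation matrix (1.9) to
# u(x) = sin x on the 2N equispaced points x_l = pi*l/N.
N = 8
x = np.pi * np.arange(2 * N) / N
D = np.zeros((2 * N, 2 * N))
for l in range(2 * N):
    for j in range(2 * N):
        if l != j:
            D[l, j] = 0.5 * (-1) ** (l + j) / np.tan((l - j) * np.pi / (2 * N))
# exact (up to roundoff) for trigonometric polynomials of degree < N
assert np.allclose(D @ np.sin(x), np.cos(x))
```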

On the other hand, if $u(x) = \sum_{k=0}^{N} \hat{u}_k T_k(x)$ ($T_k(x)$ denoting the k-th Chebyshev polynomial of the first kind), then $\frac{du}{dx} = \sum_{m=0}^{N} \hat{u}_m^{(1)} T_m(x)$, with

(1.10)  $\hat{u}_m^{(1)} = \dfrac{2}{c_m} \sum_{\substack{k=m+1 \\ k+m \ \text{odd}}}^{N} k\, \hat{u}_k$

(here $c_0 = 2$, $c_m = 1$ for $m \ge 1$). In physical space, setting $x_j = \cos(j\pi/N)$, $j = 0, \ldots, N$, we have

(1.11)  $\dfrac{du^N}{dx}(x_l) = \sum_{j=0}^{N} d_{lj}\, u(x_j), \qquad 0 \le l \le N.$


The previous relations show that spectral differentiation - like a discrete transform - is again a global transformation (with the lucky exception of Fourier differentiation in transform space). The global character of spectral methods is coherent with the global structure of the basis functions which are used in the expansion.

Globality is the first feature of spectral methods we have to cope with in discussing vectorization and parallelization. If we represent the previous transforms in a matrix-times-vector form, they can be easily implemented on a vector computer, and they take advantage of this architecture because the matrices are either diagonal, or upper triangular, or full. When the transforms are realized through the Fast Fourier Transform algorithm, one can use efficiently vectorized FFTs (see, e.g., [14]).
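As an illustration (ours, not the paper's), here is a minimal sketch of FFT-based Fourier differentiation; it is a global transformation, every output value depending on every input value, yet it costs only O(N log N):

```python
import numpy as np

def fourier_diff(u):
    """Derivative of a real periodic grid function on [0, 2*pi) via FFT."""
    n = u.size
    k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers
    u_hat = np.fft.fft(u)                    # to Transform space
    return np.fft.ifft(1j * k * u_hat).real  # diagonal there; back to Physical

# quick check on u(x) = sin x
x = 2 * np.pi * np.arange(64) / 64
assert np.allclose(fourier_diff(np.sin(x)), np.cos(x))
```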

Conversely, if we are concerned with parallelization, globality implies greater communication demands among processors. This may not be a major problem on coarse grain, large shared memory architectures, such as the now commercially available supercomputers (e.g., Cray X-MP, Cray 2, ETA10, ...). We expect difficulties on the future fine grain, local memory architectures, where information will be spread over the memories of tens or hundreds of processors.

    In order to make our analysis more precise, let us focus on perhaps the most significant application of spectral methods given so far, i.e., the numerical simulation of a viscous, incompressible flow. Let us assume we want to discretize the time-dependent Navier-Stokes equations in primitive variables

(1.12)  $u_t - \nu \Delta u + \nabla p + (u \cdot \nabla)u = f$  in $\Omega \times (0, T]$,
        $\operatorname{div} u = 0$  in $\Omega \times (0, T]$,
        $u = g$ (or u periodic)  on $\partial\Omega \times (0, T]$,
        $u(x, 0) = u_0(x)$  in $\Omega$,

in a bounded domain $\Omega \subset R^d$ (d = 2 or 3). So far, nearly all the methods which have been proposed in the literature (see, e.g., Chapter 7 in [1] for a review) use a spectral method for the discretization in space, and a finite difference scheme to advance the solution in time. Typically, the convective term is advanced by an explicit scheme (e.g., second-order Adams-Bashforth, or fourth-order Runge-Kutta) for two reasons: the stability limit is always larger than the accuracy limit required to preserve overall spectral accuracy, and the nonlinear terms are easily handled by the pseudospectral technique (see below). Conversely, the viscous and pressure terms are advanced by an implicit scheme (e.g., Crank-Nicolson), in order to avoid too strict stability limits. Thus, at each time level, one has to:

i) evaluate the convective term $(u \cdot \nabla)u$ for one, or several, known velocity fields. The computed terms appear on the right-hand side G of a Stokes-like problem

(1.13)  $\alpha u - \nu \Delta u + \nabla p = G$  in $\Omega$,
        $\operatorname{div} u = 0$  in $\Omega$,
        $u = g$ (or u periodic)  on $\partial\Omega$,

where $\alpha = 1/\Delta t$;

    ii) solve the spectral discretization of problem (1.13).
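To fix ideas, here is one concrete pairing of schemes consistent with the above (our choice; the paper does not commit to a particular one): second-order Adams-Bashforth for convection and backward Euler for the viscous and pressure terms,

$$\frac{u^{n+1} - u^n}{\Delta t} - \nu \Delta u^{n+1} + \nabla p^{n+1} = f^{n+1} - \left[ \tfrac{3}{2}(u^n \cdot \nabla)u^n - \tfrac{1}{2}(u^{n-1} \cdot \nabla)u^{n-1} \right], \qquad \operatorname{div} u^{n+1} = 0,$$

which is exactly problem (1.13) for $(u^{n+1}, p^{n+1})$, with $\alpha = 1/\Delta t$ and G collecting all the known terms.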

In most cases, problem (1.13) is reduced to a sequence of Helmholtz problems. These, in turn, are solved by a direct method or an iterative one. In the latter case, one has to evaluate residuals of spectral approximations of Helmholtz problems.

    We conclude that the main steps in a spectral algorithm are:

A) calculation of differential operators on given functions;

B) solution of linear systems.

    When the geometry of the physical domain is not Cartesian, one first has to reduce the computational domain to a simple geometry. In this case, one has to resort to one of the existing

    C) domain decomposition techniques.

    In the next sections, we will examine these three steps in some detail in view of their implementation on a multiprocessor architecture.

    2. Spectral calculation of differential operators.

Let us consider the following problem: «given the representation of a finite-dimensional velocity field u, either in Physical or in Transform space, compute the representation of $(u \cdot \nabla)u$ in the same space». We recall that by representation of a given function v in Transform space we mean the finite set of its «Fourier» coefficients.


We are now in a position to discuss how to introduce parallelism in the calculation of $(u \cdot \nabla)u$. Two conceptually different forms of parallelism can be considered:

a) Mathematical Parallelism: assign the calculation of the different terms $v\,Dw$ (a velocity component times a derivative of another) to different processors;

    b) Numerical Parallelism: assign different portions of the computational domain (in Physical space or in Transform space) to different processors, with the same mathematical task.

The mathematical parallelism is the simplest to conceive and even to code; however, it suffers from a number of drawbacks. Since the same component of u may be needed by different processors, a large shared memory is necessary, and/or large data transfers occur. Different processors may require the same data at the same time, leading to severe memory bank conflicts. Furthermore, problems of synchronisation and balancing may arise if the different mathematical terms do not require the same computational effort, or if their number is not a multiple of the number of processors. The strategy becomes definitely useless on fine grain architectures. However, it can represent a first level of parallelism, if a hierarchy of parallelisms is available.

Leca and Sacchi-Landriani [11] report their experience of parallelization of a mixed Fourier-Chebyshev Navier-Stokes algorithm, known as the Kleiser-Schumann method (see the next section). They use a multi-AP system at ONERA (France), four AP-120 processors having access to a «large» shared memory (compared to the «local» memories). Starting from a pre-existing single-processor code, Leca and Sacchi-Landriani simply send different subroutines - computing different contributions to the equation - to different processors. The largest observed speed-up is 2.78 out of the maximal 4.

    From now on, we will discuss strategy b) of parallelization, i.e., Numerical Parallelism. The question is: how do we split the computational domain among the processors in order to get the highest degree of parallelism with the lowest communication costs? The first answer to this question comes from the following fundamental observation: Spectral methods for multi-dimensional boundary value problems are inherently tensor products of one-dimensional spectral methods.

This means that the orthogonal basis functions and the computational grids which define a multidimensional spectral method are obtained by taking tensor products of suitable orthogonal basis functions and grids on intervals of the real line.

It follows that the elementary transformations (discrete transforms, differentiation, pointwise product, ...) which constitute a spectral method can be obtained as a sequence (a cascade) of one-dimensional transformations of the same nature. Each of these transformations (e.g., differentiation in the x direction) can be carried out in parallel over parallel rows or columns of the computational domain (either in Physical space or in Transform space).

Therefore, the simplest strategy of domain decomposition will consist of assigning «slices» of the computational domain (i.e., groups of contiguous rows or columns, in a two-dimensional geometry) to different processors. Once again we stress that we consider slices both in Physical space (i.e., rows/columns of grid values) and in Transform space (i.e., rows/columns of «Fourier» coefficients). After a transformation along one space direction has been completed, one has to transpose the computational lattice in order to carry out transformations along the other directions. Transposition should not be a major problem on architectures with large shared memory or wide-band buses. A minimal sketch of this slicing strategy is given below.
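Here is our sketch of the slicing strategy for the x-derivative of a two-dimensional periodic field; each worker owns a contiguous block of rows, and a y-derivative would transpose the lattice and repeat the same step. The thread pool stands in for the processors of the text:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def diff_1d(u):
    """Fourier differentiation of one row (periodic grid function)."""
    n = u.size
    k = np.fft.fftfreq(n, d=1.0 / n)
    return np.fft.ifft(1j * k * np.fft.fft(u)).real

def grad_x_sliced(f, workers=4):
    """x-derivative of a 2-D periodic field, one slice of rows per worker."""
    slabs = np.array_split(f, workers, axis=0)   # «slices» of the lattice
    with ThreadPoolExecutor(max_workers=workers) as pool:
        out = pool.map(lambda s: np.apply_along_axis(diff_1d, 1, s), slabs)
    return np.vstack(list(out))
```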

Erlebacher, Bokhari and Hussaini [5] report preliminary experiences of coding a Fourier-Chebyshev method for compressible Navier-Stokes simulations on a 20-processor Flex/32 computer at the NASA Langley Research Center. Since the time marching scheme is fully explicit, almost all the work is spent in computing convective or diffusive terms by the spectral technique. Parallelization is achieved by the strategy described above. The physical variables on the computational domain are stored in shared memory; slices of them are sent to the processors, which write the results of their computation in shared memory. The authors' conclusions are summarized in Table 1, where speed-ups ($S_p$) and efficiencies ($E_p$) are documented for different choices of the computational grid. According to the authors, moving variables between shared and local memory should not cause major overheads even on such a supercomputer as the ETA10. Indeed, quoting from [5], «a good algorithm [on the ETA10] should perform at least 5 floating point operations per word transferred one way from common memory». This minimum work is certainly achieved within a spectral code: think, for instance, of differentiation in physical space via FFT.
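(Assuming the usual definitions, which the paper does not restate: with $T_p$ the time on p processors,

$$S_p = T_1 / T_p, \qquad E_p = S_p / p,$$

so that $E_p = 100\%$ means perfect speed-up.)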

Transposition of the computational lattice will eventually become prohibitive on fine grain, local memory architectures. In this case, small portions of the computational domain will be permanently resident in local memories, and intercommunication among processors will be the major issue.

  • 64 C. CANUTO" Parallelism in

Table 1. Performance data of one residual calculation (courtesy of Erlebacher, Bokhari and Hussaini [5]).

Grid         N_tot×2^-*   P    T      S_p     E_p (%)
128×16×8     12.7         8    365    7.55    94.3
                          4    705    3.91    97.6
                          2    1386   1.99    99.4
                          1    2757   1.00    100.0
8×64×32      9.0          8    269    7.52    94.0
                          4    510    3.87    96.8
                          2    1003   1.97    98.5
                          1    1977   1.00    100.0
64×16×16     8.0          16   138    13.01   81.3
                          8    242    7.41    92.6
                          4    466    3.84    95.9
                          2    916    1.95    97.7
                          1    1786   1.00    100.0
32×32×16     6.7          16   118    12.82   80.1
                          8    202    7.45    93.2
                          4    388    3.89    97.3
                          2    759    1.99    99.3
                          1    1511   1.00    100.0
32×16×16     2.7          16   62     9.98    62.4
                          8    92     6.72    84.0
                          4    168    3.67    91.7
                          2    321    1.92    96.0
                          1    616    1.00    100.0
16×16×16     1.0          16   37     6.54    40.9
                          8    43     5.60    70.0
                          4    71     3.44    86.0
                          2    127    1.92    95.8
                          1    244    1.00    100.0


In order to understand the communication needs of a spectral method, let us observe that if L(u) is any differential operator (of any order, with variable coefficients or non-linear terms, etc.), then one can compute a spectral approximation to L(u) at a point P of the computational domain using only information at the points lying on the rows and columns meeting at P (see Figure 1.a). This means that spectral methods, although global methods in one space dimension, exhibit a precise sparse structure in multidimensional problems.

Figure 1.a - The spectral «stencil» at P in the computational domain (the grid points on the row and the column meeting at P).

Let us confine ourselves to the two-dimensional case. If we partition the computational domain among an array of $m^2$ processors (see Figure 1.b), then processor $P_{i_0 j_0}$ will need to exchange data only with processors $P_{i_0 j}$ (j varying) and $P_{i j_0}$ (i varying). Thus, information will be exchanged among at most O(m) processors.

Note that differentiation in Fourier space and evaluation of non-linear terms in physical space require no communication among processors. Thus, the communication demand in the spectral calculation of differential operators is dictated by the following two one-dimensional transformations:

(2.4) Fast Fourier transforms;

(2.5) differentiation in Chebyshev transform space, according to (1.10).


Figure 1.b - Communications in a lattice of processors: $P_{i_0 j_0}$ exchanges data only with the processors on its own row and column.

    3. Solution of linear systems.

Hereafter, we will discuss two examples of the solution of linear systems arising from spectral approximations.

    3.1. Solving a Stokes problem via an influence matrix technique.

We consider the Kleiser-Schumann algorithm for solving problem (1.13) in the infinite slab $\Omega = R^2 \times (-1, 1)$, with g = 0 and u $2\pi$-periodic in the x and y directions (see [10] for the details). The basic idea is that (1.13) is equivalent to

(3.1)  $\alpha u - \nu \Delta u + \nabla p = G$, $\quad \Delta p = \operatorname{div} G$  in $\Omega$,
       $u = 0$, $\operatorname{div} u = 0$  on $\partial\Omega$;

this, in turn, is equivalent to

(3.2)  $\Delta p = \operatorname{div} G$  in $\Omega$, $\quad p = \lambda$  on $\partial\Omega$,
       $\alpha u - \nu \Delta u = G - \nabla p$  in $\Omega$, $\quad u = 0$  on $\partial\Omega$,


provided $\lambda$ is chosen in such a way that div u = 0 on $\partial\Omega$. If we project (3.2) on each Fourier mode in the x and y directions, we get a family of one-dimensional Helmholtz problems in the interval (-1, 1), where the unknowns are the Fourier coefficients of p and u along the chosen mode. The boundary values $\lambda_+$ and $\lambda_-$ for the Fourier coefficient of p are obtained by solving a 2×2 linear system, whose matrix - the influence matrix - is computed once and for all in a pre-processing stage.

Thus, one is reduced to solving Helmholtz problems of the form

(3.3)  $-w'' + \beta w = h$ for $-1 < x < 1$,

with boundary conditions at $x = \pm 1$.
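The influence-matrix logic can be sketched in a few lines (ours; second-order finite differences stand in for the paper's Chebyshev tau solver, and a hypothetical constraint $w'(\pm 1) = 0$ stands in for div u = 0 on the boundary). Since the constraint residuals depend affinely on the boundary values $(\lambda_-, \lambda_+)$, three solves of (3.3) determine the 2×2 influence matrix once and for all:

```python
import numpy as np

def helmholtz(beta, h, wl, wr, n=200):
    """-w'' + beta*w = h on (-1,1), w(-1)=wl, w(1)=wr (FD stand-in)."""
    x = np.linspace(-1.0, 1.0, n)
    dx = x[1] - x[0]
    A = (np.diag(np.full(n, 2 / dx**2 + beta))
         + np.diag(np.full(n - 1, -1 / dx**2), 1)
         + np.diag(np.full(n - 1, -1 / dx**2), -1))
    A[0, :] = 0.0; A[0, 0] = 1.0            # Dirichlet rows
    A[-1, :] = 0.0; A[-1, -1] = 1.0
    rhs = h(x); rhs[0], rhs[-1] = wl, wr
    return x, np.linalg.solve(A, rhs)

def constraint(x, w):
    """One-sided w' at the two walls (hypothetical stand-in constraint)."""
    dx = x[1] - x[0]
    return np.array([(w[1] - w[0]) / dx, (w[-1] - w[-2]) / dx])

beta, h = 5.0, lambda x: np.cos(np.pi * x)
zero = lambda x: np.zeros_like(x)
x0, w0 = helmholtz(beta, h, 0.0, 0.0)           # particular solution
r0 = constraint(x0, w0)
M = np.empty((2, 2))                            # influence matrix (pre-processing)
M[:, 0] = constraint(*helmholtz(beta, zero, 1.0, 0.0))
M[:, 1] = constraint(*helmholtz(beta, zero, 0.0, 1.0))
lam = np.linalg.solve(M, -r0)                   # the 2x2 system for the lambdas
x, w = helmholtz(beta, h, lam[0], lam[1])       # final solve
```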


    3.2. Solving an elliptic boundary value problem by an iterative technique.

Let us assume we want to solve the model problem

(3.6)  $-\Delta u = f$ in the cube $\Omega = (-1, 1)^3$,
       $u = 0$ on $\partial\Omega$,

by a Chebyshev collocation method. For a fixed N > 0, we define the Chebyshev grid $G_N = \{(x_i, y_j, z_k) : 0 \le i, j, k \le N\}$, with $x_i = \cos(i\pi/N)$ and similarly for $y_j$ and $z_k$. The collocation equations form a linear system $L_{sp} u = f$, which can be solved by a preconditioned Richardson iteration; a preconditioning matrix A is effective if


$\dfrac{\lambda_{max}(A^{-1}L_{sp})}{\lambda_{min}(A^{-1}L_{sp})} \approx 1$, as opposed to $\dfrac{\lambda_{max}(L_{sp})}{\lambda_{min}(L_{sp})} = O(N^4)$. (Here $\lambda_{max}$, resp. $\lambda_{min}$, denote the largest, resp. the smallest, eigenvalue of the indicated matrix.) This is achieved, for instance, if A is the matrix of a low order finite difference or finite element method for the Laplace operator on the Chebyshev grid $G_N$. Multilinear finite elements (Deville and Mund [19]) guarantee exceedingly good preconditioning properties.

The direct solution of the finite difference or finite element system at each Richardson iteration may be prohibitive for large problems. An approximate solution is usually enough for preconditioning purposes. Most of the algorithms proposed in the literature (see, e.g., [1], Chapter 5 for a review) are global sequential algorithms (say, an incomplete LU factorization).

Recently, Pietra and the author [2] have proposed solving approximately the trilinear finite element system by a small number of ADI iterations. They use an efficient ADI scheme for tensor product finite elements in dimension three introduced by Douglas [4]. The method can be easily extended to handle general variable coefficients. As usual, efficiency in an ADI scheme is gained by cycling the parameters. The ADI parameters can be automatically chosen in such a way that the cycle length $l_c(\varepsilon)$ needed to reduce the error by a factor $\varepsilon$ satisfies

$l_c(\varepsilon) \sim \log\left(\dfrac{\lambda_{max}}{\lambda_{min}}\right) = \log O(N^4) = 4 \log N.$

It follows that, for a fixed $\varepsilon$, $0 < \varepsilon < 1$, the number of ADI iterations per preconditioning step grows only like log N.

Johnson, Saad and Schultz [9] discuss highly efficient implementations of ADI methods on several parallel architectures.
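The overall iteration is then the classical preconditioned Richardson loop; a generic sketch (ours, with hypothetical operator arguments) shows where the two parallel ingredients of Sections 2 and 3 enter:

```python
import numpy as np

def richardson(L_apply, precond_solve, f, u0, iters=50, omega=1.0):
    """Preconditioned Richardson iteration: u <- u + omega * A^{-1}(f - L_sp u).

    L_apply evaluates the spectral operator (a residual calculation, step A);
    precond_solve applies the low-order preconditioner approximately,
    e.g. by a short cycle of ADI sweeps (step B).
    """
    u = u0.copy()
    for _ in range(iters):
        r = f - L_apply(u)                # spectral residual
        u = u + omega * precond_solve(r)  # approximate preconditioner solve
    return u
```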

3.3. Communication needs.

We have explored the structure of several spectral-type algorithms, pointing out the most significant features in view of their implementation on parallel architectures. We first stressed the tensor product structure of spectral methods; next we indicated the one-dimensional transformations which most frequently occur in these methods: they are given in (2.4), (2.5) and (3.5).

It is outside the scope of this paper to discuss in detail the implementation of these transformations on specific parallel architectures. Here, we simply recall the most suitable interconnection networks for each of these transformations, referring for a deeper analysis to classical books on parallel computers such as [8], or to review papers such as [13].

Fast Fourier Transforms play a fundamental role in spectral methods. The Perfect Shuffle interconnection network (Pease (1968), Stone (1971)) is the optimal communication scheme for this class of transforms.
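(A sketch of the permutation itself, ours: applying the perfect shuffle between butterfly stages realizes exactly the data movement of a radix-2 FFT, which is why this network matches the FFT's communication pattern.)

```python
import numpy as np

def perfect_shuffle(v):
    """Interleave the two halves of a vector of even length,
    like a perfect riffle shuffle of a deck of cards:
    [a0..a3, b0..b3] -> [a0, b0, a1, b1, a2, b2, a3, b3]."""
    n = v.size
    return np.stack((v[:n // 2], v[n // 2:]), axis=1).ravel()
```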

Differentiation in Chebyshev transform space essentially amounts to a matrix-vector multiplication, where the matrix is upper triangular Toeplitz (see (1.10)). Thus, it can be written as a recursion relation as follows:

$c_m \hat{u}_m^{(1)} = c_{m+2}\, \hat{u}_{m+2}^{(1)} + 2(m+1)\, \hat{u}_{m+1}, \qquad m = N-1, \ldots, 0;$

$\hat{u}_{N+1}^{(1)} = \hat{u}_N^{(1)} = 0.$
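In code (our sketch), the backward recursion reads as follows; as written it is sequential, which is precisely why the parallel formulations below (Cyclic Reduction, Cyclic Elimination) are of interest:

```python
import numpy as np

def cheb_deriv_coeffs(a):
    """Chebyshev coefficients of du/dx from the coefficients a of u,
    via the backward recursion c_m b_m = c_{m+2} b_{m+2} + 2 (m+1) a_{m+1},
    with b_{N+1} = b_N = 0 and c_0 = 2, c_m = 1 for m >= 1."""
    N = a.size - 1
    b = np.zeros(N + 2)                 # b[N] = b[N+1] = 0
    for m in range(N - 1, -1, -1):      # c_{m+2} = 1 throughout this range
        b[m] = b[m + 2] + 2.0 * (m + 1) * a[m + 1]
    b[0] *= 0.5                         # divide by c_0 = 2
    return b[:N + 1]

# check against d/dx T_2(x) = 4x = 4 T_1(x): a = (0, 0, 1)
assert np.allclose(cheb_deriv_coeffs(np.array([0.0, 0.0, 1.0])), [0.0, 4.0, 0.0])
```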

Cyclic Reduction (Golub-Hockney, 1965) or Cyclic Elimination (Heller, 1976) are the implementations of recursions usually suggested for parallel architectures. Several interconnection networks have been proposed for these transformations (see again [8], [13]).

The tridiagonal systems arising from tau methods or finite-order preconditioners can be efficiently solved on parallel machines by a variety of substructuring algorithms, which include Cyclic Reduction or Cyclic Elimination. Johnson, Saad and Schultz [9] discuss the implementation of ADI methods on the hypercube architecture. Often, it is advisable to invert the tridiagonal matrix once and for all in a preprocessing stage and then solve the linear systems by matrix-vector multiplication. In this case, the Nearest Neighbor Network provides the optimal communication scheme.

    It is clear from the previous discussion that several intercommunication paths should co-exist in order to allow an optimal implementation of spectral algorithms on parallel architectures. The union of the Perfect Shuffle Network with the Nearest Neighbor Network (PSNN) is an example of a multi-path scheme, quite appropriate for spectral methods. The PSNN was first proposed by Grosch [7] for an efficient parallel implementation of fast Poisson solvers.

    4. Domain decompositions in spectral methods.

    The parallel implementation of different domain decomposition techniques for general boundary value problems is discussed in the paper by A. Quarteroni in this volume; we refer to it for the details. Hereafter, we confine ourselves to some basic considerations about the use of a domain decomposition strategy with spectral methods.

Partitioning the domain is an absolute need if the geometry of the domain is complex, i.e., if it cannot be easily mapped into a Cartesian region. In this case, one breaks the domain into simple pieces, and sets up a separate scheme in each subdomain; suitable continuity conditions are enforced at the interfaces, usually by an iterative procedure. (We refer to [1], Chapter 13, for a review of the existing domain decomposition techniques for spectral methods.)

The same strategy can be applied, even on a simple geometry, with the primary purpose of splitting the computational effort over several processors. This «route to parallelism» - which is quite successful when the discretization scheme is of finite order - may contain a potential source of inefficiency if used in the context of spectral methods. Indeed, it leads to breaking the globality of the expansion, which - as we know - is a necessary ingredient in order to have high accuracy for regular solutions. We stress here one of the crucial differences between local, finite order approximations and spectral approximations to the same boundary value problem, produced by a domain decomposition technique. In the former case, the solution obtained at convergence of the iterative procedure coincides with the solution obtained by a single-domain method of the same type, which employs the union of the grids on the subdomains. In the latter case, the single-domain solution is a global polynomial function, whereas the final multi-domain solution is merely a piecewise polynomial function, with finite order smoothness at the interfaces. Although this does not prevent asymptotic spectral accuracy for the multi-domain solution, its actual accuracy may be severely degraded if compared to that of the single-domain solution defined by the same total number of degrees of freedom.

Let us illustrate the situation with a model problem, taken from [2]. Consider the Dirichlet problem for the Poisson equation in the square $(-1, 1)^2$, whose exact solution is $u(x, y) = \cos 2\pi x \cos 2\pi y$. We divide the domain into four equal squares, on each of which we set a Chebyshev collocation method, and we enforce $C^1$ continuity at the interfaces. The results are compared with those produced by a Chebyshev collocation method on the original square, which uses the same total number of unknowns. The relative $L^\infty$ errors are reported in Table 2.

Table 2. Relative maximum-norm errors for a Chebyshev collocation method (from [2]); $u(x, y) = \cos 2\pi x \cos 2\pi y$.

4 DOM, 4×4      .62 E0
1 DOM, 8×8      .35 E-1
4 DOM, 8×8      .12 E-2
1 DOM, 16×16    .11 E-6
4 DOM, 16×16    .49 E-10
1 DOM, 32×32    .38 E-14

Note the loss of four orders of magnitude in replacing the single domain with 16×16 nodes by the four domains, each with an 8×8 grid. Of course, if we have four processors and we can reach the theoretical speed-up of four in the domain decomposition technique, we can run four 16×16 subdomains in parallel at the cost of a single 16×16 domain on a single processor, and gain four orders of magnitude in accuracy. However, if we seek parallelism through the splitting techniques described in Sections 2 and 3, and we maintain a speed-up of four, we can run for the same cost a 32×32 grid on the single domain, yielding a superior accuracy, again by a factor of $10^{-4}$. Thus, it appears that it is better to keep the spectral expansion as global as possible, and look for parallelism at the level of the solution of the algebraic system originated by the discretization method.

We conclude by going back to domain decompositions in which a separate spectral scheme is set up on each subdomain.


[10] L. KLEISER, U. SCHUMANN, Treatment of incompressibility and boundary conditions in 3-D numerical spectral simulations of plane channel flows, in Proc. 3rd GAMM Conf. Numerical Methods in Fluid Mechanics (E. H. Hirschel, ed.), Vieweg Verlag, Braunschweig, 1980, 165-173.

[11] P. LECA, G. SACCHI-LANDRIANI, Parallélisation d'un algorithme de matrice d'influence pour la résolution des équations de Navier-Stokes par méthodes spectrales, La Recherche Aérospatiale, 6 (1987), 35-42.

[12] D. M. NOSENCHUCK, S. E. KRIST, T. A. ZANG, On multigrid methods for the Navier-Stokes Computer, paper presented at the 3rd Copper Mountain Conference on Multigrid Methods, Copper Mountain, Colorado, April 6-10, 1987.

[13] J. M. ORTEGA, R. G. VOIGT, Solution of partial differential equations on vector and parallel computers, SIAM Review, 27 (1985), 149-240.

[14] C. TEMPERTON, Self-sorting mixed-radix fast Fourier transforms, J. Comput. Phys., 52 (1983), 1-23.

[15] R. G. VOIGT, D. GOTTLIEB, M. Y. HUSSAINI (eds.), Spectral Methods for Partial Differential Equations, SIAM, Philadelphia, 1984.

