  • CAMBRIDGE MONOGRAPHS ON APPLIED AND COMPUTATIONAL MATHEMATICS

    Series Editors: M. ABLOWITZ, S. DAVIS, S. HOWISON, A. ISERLES, A. MAJDA, J. OCKENDON, P. OLVER

    21 Spectral Methods for Time-Dependent Problems

  • The Cambridge Monographs on Applied and Computational Mathematics reflects the crucial role of mathematical and computational techniques in contemporary science. The series publishes expositions on all aspects of applicable and numerical mathematics, with an emphasis on new developments in this fast-moving area of research.

    State-of-the-art methods and algorithms as well as modern mathematical descriptions of physical and mechanical ideas are presented in a manner suited to graduate research students and professionals alike. Sound pedagogical presentation is a prerequisite. It is intended that books in the series will serve to inform a new generation of researchers.

    Also in this series:
    1. A Practical Guide to Pseudospectral Methods, Bengt Fornberg
    2. Dynamical Systems and Numerical Analysis, A. M. Stuart and A. R. Humphries
    3. Level Set Methods and Fast Marching Methods, J. A. Sethian
    4. The Numerical Solution of Integral Equations of the Second Kind, Kendall E. Atkinson
    5. Orthogonal Rational Functions, Adhemar Bultheel, Pablo Gonzalez-Vera, Erik Hendriksen, and Olav Njastad
    6. The Theory of Composites, Graeme W. Milton
    7. Geometry and Topology for Mesh Generation, Herbert Edelsbrunner
    8. Schwarz-Christoffel Mapping, Tobin A. Driscoll and Lloyd N. Trefethen
    9. High-Order Methods for Incompressible Fluid Flow, M. O. Deville, P. F. Fischer, and E. H. Mund
    10. Practical Extrapolation Methods, Avram Sidi
    11. Generalized Riemann Problems in Computational Fluid Dynamics, Matania Ben-Artzi and Joseph Falcovitz
    12. Radial Basis Functions: Theory and Implementations, Martin D. Buhmann
    13. Iterative Krylov Methods for Large Linear Systems, Henk A. van der Vorst
    14. Simulating Hamiltonian Dynamics, Ben Leimkuhler and Sebastian Reich
    15. Collocation Methods for Volterra Integral and Related Functional Equations, Hermann Brunner
    16. Topology for Computing, Afra J. Zomorodian
    17. Scattered Data Approximation, Holger Wendland
    19. Matrix Preconditioning Techniques and Applications, Ke Chen
    22. The Mathematical Foundations of Mixing, Rob Sturman, Julio M. Ottino and Stephen Wiggins

  • Spectral Methods for Time-Dependent Problems

    JAN S. HESTHAVEN

    Brown University

    SIGAL GOTTLIEB

    University of Massachusetts, Dartmouth

    DAVID GOTTLIEB

    Brown University

  • For our children and grandchildren

  • Contents

    Introduction

    1 From local to global approximation
       1.1 Comparisons of finite difference schemes
       1.2 The Fourier spectral method: first glance

    2 Trigonometric polynomial approximation
       2.1 Trigonometric polynomial expansions
       2.2 Discrete trigonometric polynomials
       2.3 Approximation theory for smooth functions

    3 Fourier spectral methods
       3.1 Fourier–Galerkin methods
       3.2 Fourier–collocation methods
       3.3 Stability of the Fourier–Galerkin method
       3.4 Stability of the Fourier–collocation method for hyperbolic problems I
       3.5 Stability of the Fourier–collocation method for hyperbolic problems II
       3.6 Stability for parabolic equations
       3.7 Stability for nonlinear equations
       3.8 Further reading

    4 Orthogonal polynomials
       4.1 The general Sturm–Liouville problem
       4.2 Jacobi polynomials

    5 Polynomial expansions
       5.1 The continuous expansion
       5.2 Gauss quadrature for ultraspherical polynomials
       5.3 Discrete inner products and norms
       5.4 The discrete expansion

    6 Polynomial approximation theory for smooth functions
       6.1 The continuous expansion
       6.2 The discrete expansion

    7 Polynomial spectral methods
       7.1 Galerkin methods
       7.2 Tau methods
       7.3 Collocation methods
       7.4 Penalty method boundary conditions

    8 Stability of polynomial spectral methods
       8.1 The Galerkin approach
       8.2 The collocation approach
       8.3 Stability of penalty methods
       8.4 Stability theory for nonlinear equations
       8.5 Further reading

    9 Spectral methods for nonsmooth problems
       9.1 The Gibbs phenomenon
       9.2 Filters
       9.3 The resolution of the Gibbs phenomenon
       9.4 Linear equations with discontinuous solutions
       9.5 Further reading

    10 Discrete stability and time integration
       10.1 Stability of linear operators
       10.2 Standard time integration schemes
       10.3 Strong stability preserving methods
       10.4 Further reading

    11 Computational aspects
       11.1 Fast computation of interpolation and differentiation
       11.2 Computation of Gaussian quadrature points and weights
       11.3 Finite precision effects
       11.4 On the use of mappings

    12 Spectral methods on general grids
       12.1 Representing solutions and operators on general grids
       12.2 Penalty methods
       12.3 Discontinuous Galerkin methods
       12.4 References and further reading

    Appendix A Elements of convergence theory

    Appendix B A zoo of polynomials
       B.1 Legendre polynomials
       B.2 Chebyshev polynomials

    Bibliography
    Index

  • Introduction

    The purpose of this book is to collect, in one volume, all the ingredients necessary for the understanding of spectral methods for time-dependent problems, and, in particular, hyperbolic partial differential equations. It is intended as a graduate-level text, covering not only the basic concepts in spectral methods, but some of the modern developments as well. There are already several excellent books on spectral methods by authors who are well-known and active researchers in this field. This book is distinguished by the exclusive treatment of time-dependent problems, and so the derivation of spectral methods is influenced primarily by the research on finite-difference schemes, and less so by the finite-element methodology. Furthermore, this book is unique in its focus on the stability analysis of spectral methods, both for the semi-discrete and fully discrete cases. In the book we address advanced topics such as spectral methods for discontinuous problems and spectral methods on arbitrary grids, which are necessary for the implementation of pseudo-spectral methods on complex multi-dimensional domains.

    In Chapter 1, we demonstrate the benefits of high order methods using phase error analysis. Typical finite difference methods use a local stencil to compute the derivative at a given point; higher order methods are then obtained by using a wider stencil, i.e., more points. The Fourier spectral method is obtained by using all the points in the domain. In Chapter 2, we discuss the trigonometric polynomial approximations to smooth functions, and the associated approximation theory for both the continuous and the discrete case. In Chapter 3, we present Fourier spectral methods, using both the Galerkin and collocation approaches, and discuss their stability for both hyperbolic and parabolic equations. We also present ways of stabilizing these methods, through super viscosity or filtering.

    Chapter 4 features a discussion of families of orthogonal polynomials which are eigensolutions of a Sturm–Liouville problem. We focus on the Legendre and Chebyshev polynomials, which are suitable for representing functions on finite domains. In this chapter, we present the properties of Jacobi polynomials, and their associated recursion relations. Many useful formulas can be found in this chapter. In Chapter 5, we discuss the continuous and discrete polynomial expansions based on Jacobi polynomials; in particular, the Legendre and Chebyshev polynomials. We present the Gauss-type quadrature formulas, and the different points on which each is accurate. Finally, we discuss the connections between Lagrange interpolation and electrostatics. Chapter 6 presents the approximation theory for polynomial expansions of smooth functions using the ultraspherical polynomials. Both the continuous and discrete expansions are discussed. This discussion sets the stage for Chapter 7, in which we introduce polynomial spectral methods, useful for problems with non-periodic boundary conditions. We present the Galerkin, tau, and collocation approaches and give examples of the formulation of Chebyshev and Legendre spectral methods for a variety of problems. We also introduce the penalty method approach for dealing with boundary conditions. In Chapter 8 we analyze the stability properties of the methods discussed in Chapter 7.

    In the final chapters, we introduce some more advanced topics. In Chapter 9 we discuss the spectral approximations of non-smooth problems. We address the Gibbs phenomenon and its effect on the convergence rate of these approximations, and present methods which can, partially or completely, overcome the Gibbs phenomenon. We present a variety of filters, both for Fourier and polynomial methods, and an approximation theory for filters. Finally, we discuss the resolution of the Gibbs phenomenon using spectral reprojection methods. In Chapter 10, we turn to the issues of time discretization and fully discrete stability. We discuss the eigenvalue spectrum of each of the spectral spatial discretizations, which provides a necessary, but not sufficient, condition for stability. We proceed to the fully discrete analysis of the stability of the forward Euler time discretization for the Legendre collocation method. We then present some of the standard time integration methods, especially the Runge–Kutta methods. At the end of the chapter, we introduce the class of strong stability preserving methods and present some of the optimal schemes. In Chapter 11, we turn to the computational issues which arise when using spectral methods, such as the use of the fast Fourier transform for interpolation and differentiation, the efficient computation of the Gauss quadrature points and weights, and the effect of round-off errors on spectral methods. Finally, we address the use of mappings for treatment of non-standard intervals and for improving accuracy in the computation of higher order derivatives. In Chapter 12, we talk about the implementation of spectral methods on general grids. We discuss how the penalty method formulation enables the use of spectral methods on general grids in one dimension, and in complex domains in multiple dimensions, and illustrate this using both the Galerkin and collocation approaches. We also show how penalty methods allow us to easily generalize to complicated boundary conditions and to triangular meshes. The discontinuous Galerkin method is an alternative way of deriving these schemes, and penalty methods can thus be used to construct methods based on multiple spectral elements.

    Chapters 1, 2, 3, 5, 6, 7, 8 of the book comprise a complete first course in spectral methods, covering the motivation, derivation, approximation theory and stability analysis of both Fourier and polynomial spectral methods. Chapters 1, 2, and 3 can be used to introduce Fourier methods within a course on the numerical solution of partial differential equations. Chapters 9, 10, 11, and 12 address advanced topics and are thus suitable for an advanced course in spectral methods. However, depending on the focus of the course, many other combinations are possible.

    A good resource for use with this book is PseudoPack. PseudoPack Rio and PseudoPack 2000 are software libraries in Fortran 77 and Fortran 90 (respectively) for numerical differentiation by pseudospectral methods, created by Wai Sun Don and Bruno Costa. More information can be found at http://www.labma.ufrj.br/bcosta/pseudopack/main.html and http://www.labma.ufrj.br/bcosta/pseudopack2000/main.html.

    As the oldest author of this book, I (David Gottlieb) would like to take a paragraph or so to tell you my personal story of involvement in spectral methods. This is a personal narrative, and therefore may not be an accurate history of spectral methods. In 1973 I was an instructor at MIT, where I met Steve Orszag, who presented me with the problem of stability of polynomial methods for hyperbolic equations. Working on this, I became aware of the pioneering work of Orszag and his co-authors and of Kreiss and his co-authors on Fourier spectral methods. The work on polynomial spectral methods led to the book Numerical Analysis of Spectral Methods: Theory and Applications by Steve Orszag and myself, published by SIAM in 1977. At this stage, spectral methods enjoyed popularity among the practitioners, particularly in the meteorology and turbulence community. However, there were few theoretical results on these methods. The situation changed after the summer course I gave in 1979 in France. P. A. Raviart was a participant in this course, and his interest in spectral methods was sparked. When he returned to Paris he introduced his postdoctoral researchers, Claudio Canuto and Alfio Quarteroni, and his students, Yvon Maday and Christine Bernardi, to these methods. The work of this European group led to an explosion of spectral methods, both in theory and applications. After this point, the field became too extensive to further review it. Nowadays, I particularly enjoy the experience of receiving a paper on spectral methods which I do not understand. This is an indication of the maturity of the field.


    The following excellent books can be used to deepen one's understanding of many aspects of spectral methods. For a treatment of spectral methods for incompressible flows, the interested reader is referred to the classical book by C. Canuto, M. Y. Hussaini, A. Quarteroni and T. A. Zang, Spectral Methods: Fundamentals in Single Domains (2006), the more recent Spectral Methods for Incompressible Viscous Flow (2002) by R. Peyret and the modern text High-Order Methods for Incompressible Fluid Flow (2002) by M. Deville, P. F. Fischer, and E. Mund. The book Spectral/hp Element Methods for Computational Fluid Dynamics, by G. E. Karniadakis and S. J. Sherwin (2005), deals with many important practical aspects of spectral methods computations for large scale fluid dynamics applications. A comprehensive discussion of approximation theory may be found in Approximations Spectrales de Problèmes aux Limites Elliptiques (1992) by C. Bernardi and Y. Maday and in Polynomial Approximation of Differential Equations (1992) by D. Funaro. Many interesting results can be found in the book by B.-Y. Guo, Spectral Methods and their Applications (1998). For those wishing to implement spectral methods in Matlab, a good supplement to this book is Spectral Methods in Matlab (2000), by L. N. Trefethen.

    For the treatment of spectral methods as a limit of high order finite difference methods, see A Practical Guide to Pseudospectral Methods (1996) by B. Fornberg. For a discussion of spectral methods to solve boundary value and eigenvalue problems, as well as Hermite, Laguerre, rational Chebyshev, sinc, and spherical harmonic functions, see Chebyshev and Fourier Spectral Methods (2000) by J. P. Boyd.

    This text has as its foundation the work of many researchers who make up the vibrant spectral methods community. A complete bibliography of spectral methods is a book in and of itself. In our list of references we present only a partial list of those papers which have direct relevance to the text. This necessary process of selection meant that many excellent papers and books were excluded. For this, we apologize.

  • 1 From local to global approximation

    Spectral methods are global methods, where the computation at any given point depends not only on information at neighboring points, but on information from the entire domain. To understand the idea of a global method, we begin by considering local methods, and present the global Fourier method as a limit of local finite difference approximations of increasing orders of accuracy. We will introduce phase error analysis, and using this tool we will show the merits of high-order methods, and in particular, their limit: the Fourier method. The phase error analysis leads to the conclusion that high-order methods are beneficial for problems requiring well resolved fine details of the solution or long time integrations.

    Finite difference methods are obtained by approximating a function u(x) by a local polynomial interpolant. The derivatives of u(x) are then approximated by differentiating this local polynomial. In this context, local refers to the use of nearby grid points to approximate the function or its derivative at a given point.

    For slowly varying functions, the use of local polynomial interpolants based on a small number of interpolating grid points is very reasonable. Indeed, it seems to make little sense to include function values far away from the point of interest in approximating the derivative. However, using low-degree local polynomials to approximate solutions containing very significant spatial or temporal variation requires a very fine grid in order to accurately resolve the function. Clearly, the use of fine grids requires significant computational resources in simulations of interest to science and engineering. In the face of such limitations we seek alternative schemes that will allow coarser grids, and therefore fewer computational resources. Spectral methods are such methods; they use all available function values to construct the necessary approximations. Thus, they are global methods.



    Example 1.1 Consider the wave equation

    $$\frac{\partial u}{\partial t} = -2\pi \frac{\partial u}{\partial x}, \qquad 0 \le x \le 2\pi, \qquad (1.1)$$
    $$u(x, 0) = e^{\sin(x)},$$

    with periodic boundary conditions.

    The exact solution to Equation (1.1) is a right-moving wave of the form

    $$u(x, t) = e^{\sin(x - 2\pi t)},$$

    i.e., the initial condition is propagating with a speed $2\pi$. In the following, we compare three schemes, each of a different order of accuracy, for the solution of Equation (1.1) using the uniform grid

    $$x_j = j\Delta x = \frac{2\pi j}{N + 1}, \qquad j \in [0, \ldots, N]$$

    (where N is an even integer).

    Second-order finite difference scheme A quadratic local polynomial interpolant to u(x) in the neighborhood of $x_j$ is given by

    $$u(x) = \frac{1}{2\Delta x^2}(x - x_j)(x - x_{j+1})u_{j-1} - \frac{1}{\Delta x^2}(x - x_{j-1})(x - x_{j+1})u_j + \frac{1}{2\Delta x^2}(x - x_{j-1})(x - x_j)u_{j+1}. \qquad (1.2)$$

    Differentiating this formula yields a second-order centered-difference approximation to the derivative du/dx at the grid point $x_j$:

    $$\left.\frac{du}{dx}\right|_{x_j} = \frac{u_{j+1} - u_{j-1}}{2\Delta x}.$$

    High-order finite difference scheme Similarly, differentiating the interpolant based on the points $\{x_{j-2}, x_{j-1}, x_j, x_{j+1}, x_{j+2}\}$ yields the fourth-order centered-difference scheme

    $$\left.\frac{du}{dx}\right|_{x_j} = \frac{1}{12\Delta x}\left(u_{j-2} - 8u_{j-1} + 8u_{j+1} - u_{j+2}\right).$$

    Global scheme Using all the available grid points, we obtain a global scheme. For each point $x_j$ we use the interpolating polynomial based on the points $\{x_{j-k}, \ldots, x_{j+k}\}$ where $k = N/2$. The periodicity of the problem furnishes us with the needed information at any grid point. The derivative at the grid points is calculated using a matrix-vector product

    $$\left.\frac{du}{dx}\right|_{x_j} = \sum_{i=0}^{N} D_{ji} u_i.$$


    [Figure 1.1 shows, on a log–log scale, the $L^\infty$ error versus N (from $10^0$ to $10^3$) for the second-order finite difference scheme, the fourth-order finite difference scheme, and the spectral scheme, with the CFL restriction indicated.]


    [Figure 1.2 shows six panels of U(x, t) versus $x \in [0, 2\pi]$, $0 \le U \le 3$: N = 200 (left column) and N = 10 (right column) at t = 0.0, 100.0, and 200.0.]

    Figure 1.2 An illustration of the impact of using a global method for problems requiring long time integration. On the left we show the solution of Equation (1.1) computed using a second-order centered-difference scheme. On the right we show the same problem solved using a global method. The full line represents the computed solution, while the dashed line represents the exact solution.


    1.1 Comparisons of finite difference schemes

    The previous example illustrates that global methods are superior in performance to local methods, not only when very high spatial resolution is required but also when long time integration is important. In this section, we shall introduce the concept of phase error analysis in an attempt to clarify the observations made in the previous section. The analysis confirms that high-order and/or global methods are a better choice when very accurate solutions or long time integrations on coarse grids are required. It is clear that the computing needs of the future require both.

    1.1.1 Phase error analysis

    To analyze the phase error associated with a particular spatial approximation scheme, let's consider, once again, the linear wave problem

    $$\frac{\partial u}{\partial t} = -c \frac{\partial u}{\partial x}, \qquad 0 \le x \le 2\pi, \qquad (1.3)$$
    $$u(x, 0) = e^{ikx},$$

    with periodic boundary conditions, where $i = \sqrt{-1}$ and k is the wave number. The solution to Equation (1.3) is a travelling wave

    $$u(x, t) = e^{ik(x - ct)}, \qquad (1.4)$$

    with phase speed c. Once again, we use the equidistant grid

    $$x_j = j\Delta x = \frac{2\pi j}{N + 1}, \qquad j \in [0, \ldots, N].$$

    The 2m-order approximation of the derivative of a function f(x) is

    $$\left.\frac{df}{dx}\right|_{x_j} = \sum_{n=1}^{m} \alpha_n^m D_n f(x_j), \qquad (1.5)$$

    where

    $$D_n f(x_j) = \frac{f(x_j + n\Delta x) - f(x_j - n\Delta x)}{2n\Delta x} = \frac{f_{j+n} - f_{j-n}}{2n\Delta x}, \qquad (1.6)$$

    and the weights, $\alpha_n^m$, are

    $$\alpha_n^m = -2(-1)^n \frac{(m!)^2}{(m - n)!(m + n)!}. \qquad (1.7)$$


    In the semi-discrete version of Equation (1.3) we seek a vector $v = (v_0(t), \ldots, v_N(t))$ which satisfies

    $$\frac{dv_j}{dt} = -c \sum_{n=1}^{m} \alpha_n^m D_n v_j, \qquad (1.8)$$
    $$v_j(0) = e^{ikx_j}.$$

    We may interpret the grid vector, v, as a vector of grid point values of a trigonometric polynomial, v(x, t), with $v(x_j, t) = v_j(t)$, such that

    $$\frac{\partial v}{\partial t} = -c \sum_{n=1}^{m} \alpha_n^m D_n v(x, t), \qquad (1.9)$$
    $$v(x, 0) = e^{ikx}.$$

    If v(x, t) satisfies Equation (1.9), the solution to Equation (1.8) is given by $v(x_j, t)$. The solution to Equation (1.9) is

    $$v(x, t) = e^{ik(x - c_m(k)t)}, \qquad (1.10)$$

    where $c_m(k)$ is the numerical wave speed. The dependence of $c_m$ on the wave number k is known as the dispersion relation.

    The phase error $e_m(k)$ is defined as the leading term in the relative error between the actual solution u(x, t) and the approximate solution v(x, t):

    $$\left|\frac{u(x, t) - v(x, t)}{u(x, t)}\right| = \left|1 - e^{ik(c - c_m(k))t}\right| \simeq |k(c - c_m(k))t| = e_m(k).$$

    As there is no difference in the amplitude of the two solutions, the phase error is the dominant error, as is clearly seen in Figure 1.2.

    In the next section we will compare the phase errors of the schemes in Example 1.1. In particular, this analysis allows us to identify the most efficient scheme satisfying the phase accuracy requirement over a specified period of time.

    1.1.2 Finite-order finite difference schemes

    Applying phase error analysis to the second-order finite difference scheme introduced in Example 1.1, i.e.,

    $$\frac{\partial v(x, t)}{\partial t} = -c\,\frac{v(x + \Delta x, t) - v(x - \Delta x, t)}{2\Delta x},$$
    $$v(x, 0) = e^{ikx},$$

    we obtain the numerical phase speed

    $$c_1(k) = c\,\frac{\sin(k\Delta x)}{k\Delta x}.$$

    For $\Delta x \ll 1$,

    $$c_1(k) = c\left(1 - \frac{(k\Delta x)^2}{6} + O((k\Delta x)^4)\right),$$

    confirming the second-order accuracy of the scheme.

    For the fourth-order scheme considered in Example 1.1,

    $$\frac{\partial v(x, t)}{\partial t} = -\frac{c}{12\Delta x}\left(v(x - 2\Delta x, t) - 8v(x - \Delta x, t) + 8v(x + \Delta x, t) - v(x + 2\Delta x, t)\right),$$

    we obtain

    $$c_2(k) = c\,\frac{8\sin(k\Delta x) - \sin(2k\Delta x)}{6k\Delta x}.$$

    Again, for $\Delta x \ll 1$ we recover the approximation

    $$c_2(k) = c\left(1 - \frac{(k\Delta x)^4}{30} + O((k\Delta x)^6)\right),$$

    illustrating the expected fourth-order accuracy.

    illustrating the expected fourth-order accuracy.Denoting e1(k, t) as the phase error of the second-order scheme and e2(k, t)

    as the phase error of the fourth-order scheme, with the corresponding numericalwave speeds c1(k) and c2(k), we obtain

    e1(k, t) = kct1 sin(kx)kx

    , (1.11)e2(k, t) = kct

    1 8 sin(kx) sin(2kx)6kx .

    When considering wave phenomena, the critical issue is the number p of gridpoints needed to resolve a wave. Since the solution of Equation (1.3) has kwaves in the domain (0, 2 ), the number of grid points is given by

    p = N + 1k

    = 2kx

    .

    Note that it takes a minimum of two points per wavelength to uniquely specifya wave, so p has a theoretical minimum of 2.

    It is evident that the phase error is also a function of time. In fact, theimportant quantity is not the time elapsed, but rather the number of timesthe solution returns to itself under the assumption of periodicity. We denote thenumber of periods of the phenomenon by = kct/2 .


    Rewriting the phase error in terms of p and $\nu$ yields

    $$e_1(p, \nu) = 2\pi\nu\left|1 - \frac{\sin(2\pi p^{-1})}{2\pi p^{-1}}\right|, \qquad (1.12)$$
    $$e_2(p, \nu) = 2\pi\nu\left|1 - \frac{8\sin(2\pi p^{-1}) - \sin(4\pi p^{-1})}{12\pi p^{-1}}\right|.$$

    The leading order approximation to Equation (1.12) is

    $$e_1(p, \nu) \simeq \frac{\pi^3\nu}{3}\left(\frac{2}{p}\right)^2, \qquad (1.13)$$
    $$e_2(p, \nu) \simeq \frac{\pi^5\nu}{15}\left(\frac{2}{p}\right)^4,$$

    from which we immediately observe that the phase error is directly proportional to the number of periods $\nu$, i.e., the error grows linearly in time.

    We arrive at a more straightforward measure of the error of the scheme by introducing $p_m(\varepsilon_p, \nu)$ as a measure of the number of points per wavelength required to guarantee a phase error, $e_p \le \varepsilon_p$, after $\nu$ periods for a 2m-order scheme. Indeed, from Equation (1.13) we directly obtain the lower bounds

    $$p_1(\varepsilon_p, \nu) \ge 2\pi\sqrt{\frac{\pi\nu}{3\varepsilon_p}}, \qquad (1.14)$$
    $$p_2(\varepsilon_p, \nu) \ge 2\pi\sqrt[4]{\frac{\pi\nu}{15\varepsilon_p}},$$

    on $p_m$, required to ensure a specific error $\varepsilon_p$.

    It is immediately apparent that for long time integrations (large $\nu$), $p_2 \ll p_1$, justifying the use of high-order schemes. In the following examples, we will examine the required number of points per wavelength as a function of the desired accuracy.

    Example 1.2

    $\varepsilon_p = 0.1$ Consider the case in which the desired phase error is 10%. For this relatively large error,

    $$p_1 \simeq 20\sqrt{\nu}, \qquad p_2 \simeq 7\sqrt[4]{\nu}.$$

    We recall that the fourth-order scheme is twice as expensive as the second-order scheme, so not much is gained for short time integration. However, as $\nu$ increases the fourth-order scheme clearly becomes more attractive.

    $\varepsilon_p = 0.01$ When the desired phase error is within 1%, we have

    $$p_1 \simeq 64\sqrt{\nu}, \qquad p_2 \simeq 13\sqrt[4]{\nu}.$$

    Here we observe a significant advantage in using the fourth-order scheme, even for short time integration.

    $\varepsilon_p = 10^{-5}$ This approximately corresponds to the minimum error displayed in Figure 1.1. We obtain

    $$p_1 \simeq 643\sqrt{\nu}, \qquad p_2 \simeq 43\sqrt[4]{\nu},$$

    as is observed in Figure 1.1, which confirms that high-order methods are superior when high accuracy is required.
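The estimates in Example 1.2 can be sanity-checked by evaluating the exact phase errors (1.12) at the lower bounds (1.14). In the sketch below (plain Python; the function names are ad hoc, and the chosen values of $\varepsilon_p$ and $\nu$ are illustrative), the exact error at $p = p_m$ comes out at, and just below, the requested tolerance for the first two cases of the example:

```python
import math

def e1(p, nu):
    # exact phase error (1.12), second-order scheme
    x = 2 * math.pi / p
    return 2 * math.pi * nu * abs(1 - math.sin(x) / x)

def e2(p, nu):
    # exact phase error (1.12), fourth-order scheme
    x = 2 * math.pi / p
    return 2 * math.pi * nu * abs(1 - (8 * math.sin(x) - math.sin(2 * x)) / (6 * x))

def p1(eps, nu):
    # lower bound (1.14) on points per wavelength, second order
    return 2 * math.pi * math.sqrt(math.pi * nu / (3 * eps))

def p2(eps, nu):
    # lower bound (1.14), fourth order
    return 2 * math.pi * (math.pi * nu / (15 * eps)) ** 0.25

for eps in (0.1, 0.01):
    for nu in (1, 10, 100):
        print(eps, nu, e1(p1(eps, nu), nu), e2(p2(eps, nu), nu))
```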

    Sixth-order method As an illustration of the general trend in the behavior of the phase error, we give the bound on $p_3(\varepsilon_p, \nu)$ for the sixth-order centered-difference scheme as

    $$p_3(\varepsilon_p, \nu) \ge 2\pi\sqrt[6]{\frac{\pi\nu}{70\varepsilon_p}},$$

    for which the above special cases become

    $$p_3(0.1, \nu) \simeq 5\sqrt[6]{\nu}, \qquad p_3(0.01, \nu) \simeq 8\sqrt[6]{\nu}, \qquad p_3(10^{-5}, \nu) \simeq 26\sqrt[6]{\nu},$$

    confirming that when high accuracy is required, a high-order method is the optimal choice. Indeed, sixth-order schemes are the methods of choice for many modern turbulence calculations.

    While the number of points per wavelength gives us an indication of the merits of high-order schemes, the true measure of efficiency is the work needed to obtain a predefined bound on the phase error. We thus define a measure of work per wavelength when integrating to time t,

    $$W_m = 2m\,p_m\,\frac{t}{\Delta t}.$$

    This measure is the product of the number of function evaluations 2m, the points per wavelength $p_m = \frac{2\pi}{k\Delta x}$, and the total number of time steps $\frac{t}{\Delta t}$. Using $\nu = \frac{kct}{2\pi}$ we obtain

    $$W_m = 2m\,p_m\,\frac{t}{\Delta t} = 2m\,p_m\,\frac{2\pi\nu}{kc\,\Delta t} = 2m\,p_m\,\frac{2\pi\nu}{kc\,\Delta t}\,\frac{\Delta x}{\Delta x} = 2m\,p_m\,\frac{2\pi\nu}{k\,\mathrm{CFL}_m\,\Delta x} = 2m\,p_m\,\frac{p_m\,\nu}{\mathrm{CFL}_m},$$


    [Figure 1.3 shows two panels of the work $W_m$ versus $\nu \in [0, 1]$ for $W_1$, $W_2$, and $W_3$: the left panel for $\varepsilon_p = 0.1$, the right panel for $\varepsilon_p = 0.01$.]

    Figure 1.3 The growth of the work function, $W_m$, for various finite difference schemes is given as a function of time, $\nu$, in terms of periods. On the left we show the growth for a required phase error of $\varepsilon_p = 0.1$, while the right shows the result of a similar computation with $\varepsilon_p = 0.01$, i.e., a maximum phase error of less than 1%.

    where $\mathrm{CFL}_m = c\frac{\Delta t}{\Delta x}$ refers to the CFL bound for stability. We assume that the fourth-order Runge–Kutta method will be used for time discretization. For this method it can be shown that $\mathrm{CFL}_1 = 2.8$, $\mathrm{CFL}_2 = 2.1$, and $\mathrm{CFL}_3 = 1.75$. Thus, the estimated work for second, fourth, and sixth-order schemes is

    $$W_1 \simeq 30\,\frac{\nu^2}{\varepsilon_p}, \qquad W_2 \simeq 35\,\nu\sqrt{\frac{\nu}{\varepsilon_p}}, \qquad W_3 \simeq 48\,\nu\sqrt[3]{\frac{\nu}{\varepsilon_p}}. \qquad (1.15)$$

    In Figure 1.3 we illustrate the approximate work associated with the different schemes as a function of required accuracy and time. It is clear that even for short time integrations, high-order methods are the most appropriate choice when accuracy is the primary consideration. Moreover, it is evident that for problems exhibiting unsteady behavior and thus needing long time integrations, high-order methods are needed to minimize the work required for solving the problem.

    1.1.3 Infinite-order finite difference schemes

    In the previous section, we showed the merits of high-order finite difference methods for time-dependent problems. The natural question is, what happens as we take the order higher and higher? How can we construct an infinite-order scheme, and how does it perform?

    In the following we will show that the limit of finite difference schemes is the global method presented in Example 1.1. In analogy to Equation (1.5), the infinite-order method is given by

    $$\left.\frac{du}{dx}\right|_{x_j} = \sum_{n=1}^{\infty} \alpha_n \frac{u_{j+n} - u_{j-n}}{2n\Delta x}.$$

    To determine the values of $\alpha_n$, we consider the function $e^{ilx}$. The approximation formula should be exact for all such trigonometric polynomials. Thus, $\alpha_n$ should satisfy

    $$il e^{ilx} = \sum_{n=1}^{\infty} \alpha_n \frac{e^{i(x + n\Delta x)l} - e^{i(x - n\Delta x)l}}{2n\Delta x} = \sum_{n=1}^{\infty} \alpha_n \frac{e^{in\Delta x l} - e^{-in\Delta x l}}{2n\Delta x}\,e^{ilx} = \sum_{n=1}^{\infty} \alpha_n \frac{2i\sin(nl\Delta x)}{2n\Delta x}\,e^{ilx},$$

    so

    $$l = \sum_{n=1}^{\infty} \alpha_n \frac{\sin(nl\Delta x)}{n\Delta x}.$$

    We denote $l\Delta x = \theta$, to emphasize that $\alpha_n/n$ are the coefficients of the Fourier sine expansion of $\theta$,

    $$\theta = \sum_{n=1}^{\infty} \frac{\alpha_n}{n}\sin(n\theta),$$

    and are therefore given by $\alpha_n = 2(-1)^{n+1}$, $n \ge 1$. Extending this definition over the integers, we get

    $$\alpha_n = \begin{cases} 2(-1)^{n+1} & n \ne 0 \\ 0 & n = 0. \end{cases}$$
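The claim that $\alpha_n/n$ are the sine coefficients of $\theta$ can be checked numerically: partial sums of the series with $\alpha_n = 2(-1)^{n+1}$ converge (slowly, as expected for the Fourier series of the sawtooth-like periodic extension of $\theta$) to $\theta$ on $(-\pi, \pi)$. A small sketch, with illustrative values of $\theta$ and the truncation M:

```python
import math

def theta_series(theta, M):
    # partial sum of theta = sum_{n>=1} (alpha_n/n) sin(n*theta),
    # with alpha_n = 2*(-1)**(n+1)
    return sum(2 * (-1) ** (n + 1) / n * math.sin(n * theta)
               for n in range(1, M + 1))

theta = 1.0
for M in (10, 100, 1000):
    print(M, theta_series(theta, M))  # approaches 1.0 as M grows
```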

    Substituting $\Delta x = \frac{2\pi}{N + 1}$, we obtain

    $$\left.\frac{du}{dx}\right|_{x_j} = \frac{N + 1}{4\pi}\sum_{n=-\infty}^{\infty} \frac{\alpha_n}{n}\,u_{j+n}.$$

    As we assume that the function u(x, t) is $2\pi$-periodic, we have the identity

    $$u_{j+n} = u_{j+n+p(N+1)}, \qquad p = 0, \pm 1, \pm 2, \ldots$$


    Rearranging the summation in the approximation yields

    $$\left.\frac{du}{dx}\right|_{x_j} = \frac{N + 1}{4\pi}\sum_{n=-j}^{N-j}\left(\sum_{p=-\infty}^{\infty} \frac{\alpha_{n+p(N+1)}}{n + p(N + 1)}\right) u_{j+n} = \frac{1}{2\pi}\sum_{n=-j}^{N-j}(-1)^{n+1}\left(\sum_{p=-\infty}^{\infty} \frac{(-1)^{p(N+1)}}{p + n/(N + 1)}\right) u_{j+n} = \frac{1}{2\pi}\sum_{n=-j}^{N-j}(-1)^{n+1}\left(\sum_{p=-\infty}^{\infty} \frac{(-1)^{p}}{p + n/(N + 1)}\right) u_{j+n},$$

    where the last step uses the fact that N is even, so that N + 1 is odd. Using the identity $\sum_{k=-\infty}^{\infty} \frac{(-1)^k}{k + x} = \frac{\pi}{\sin(\pi x)}$, and the substitution $i = j + n$,

    $$\left.\frac{du}{dx}\right|_{x_j} = -\frac{1}{2}\sum_{n=-j}^{N-j} (-1)^{n}\left[\sin\left(\frac{\pi n}{N + 1}\right)\right]^{-1} u_{j+n} = \sum_{i=0}^{N} \frac{(-1)^{j+i}}{2}\left[\sin\left(\frac{\pi}{N + 1}(j - i)\right)\right]^{-1} u_i.$$

    Hence, we obtain the remarkable result that the infinite-order finite difference approximation of the spatial derivative of a periodic function can be exactly implemented through the use of the differentiation matrix, D, as we saw in Example 1.1. As we shall see in the next section, the exact same formulation arises from the application of Fourier spectral methods. As we shall also see, the number of points per wavelength for the global scheme attains the minimum of p = 2 for a well-resolved wave.

1.2 The Fourier spectral method: first glance

An alternative way of obtaining the global method is to use trigonometric polynomials to interpolate the function $f(x)$ at the points $x_l$,

$$f_N(x) = \sum_{l=0}^{N} f(x_l) h_l(x),$$

where the Lagrange trigonometric polynomials are

$$h_l(x) = \frac{1}{N+1} \frac{\sin\left(\frac{N+1}{2}(x - x_l)\right)}{\sin\left(\frac{1}{2}(x - x_l)\right)} = \frac{1}{N+1} \sum_{k=-N/2}^{N/2} e^{ik(x - x_l)}.$$

The interpolating polynomial is thus exact for all trigonometric polynomials of degree $N/2$.
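The equality of the two expressions for $h_l(x)$ is the classical Dirichlet kernel identity, and can be sanity-checked numerically (a NumPy sketch, not part of the text; the evaluation point $t = 0.7$ is an arbitrary choice away from the nodes):

```python
import numpy as np

N = 8                                    # even, as assumed in the text
t = 0.7                                  # any t that is not a multiple of 2*pi
k = np.arange(-N // 2, N // 2 + 1)

# exponential-sum form versus the sine-quotient form of h_l (with x - x_l = t)
lhs = np.sum(np.exp(1j * k * t)).real / (N + 1)
rhs = np.sin((N + 1) * t / 2) / ((N + 1) * np.sin(t / 2))
assert abs(lhs - rhs) < 1e-12
```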


In a Fourier spectral method applied to the partial differential equation

$$\frac{\partial u}{\partial t} = c \frac{\partial u}{\partial x}, \quad 0 \leq x \leq 2\pi,$$
$$u(x, 0) = e^{ikx},$$

with periodic boundary conditions, we seek a trigonometric polynomial of the form

$$v(x, t) = \sum_{l=0}^{N} v(x_l, t) h_l(x),$$

such that

$$\frac{\partial v}{\partial t} = c \frac{\partial v}{\partial x}, \qquad (1.16)$$

at the points $x = x_j$, $j = 0, \ldots, N$. The derivative of $v$ at the points $x_j$ is given by

$$\left.\frac{\partial v}{\partial x}\right|_{x_j} = \sum_{l=0}^{N} v(x_l, t)\, h_l'(x_j),$$

where

$$h_l'(x_j) = \frac{(-1)^{j+l}}{2} \left[\sin\left(\frac{\pi}{N+1}(j-l)\right)\right]^{-1}.$$

Thus,

$$\left.\frac{dv}{dt}\right|_{x_j} = c \sum_{l=0}^{N} \frac{(-1)^{j+l}}{2} \left[\sin\left(\frac{\pi}{N+1}(j-l)\right)\right]^{-1} v(x_l, t).$$

The initial condition $v(x, 0)$ is the interpolant of the function $e^{ikx}$,

$$v(x, 0) = \sum_{l=0}^{N} e^{ikx_l} h_l(x).$$

This is a Fourier collocation method, which is identical to the infinite-order finite difference scheme derived above.

It is important to note that since both sides of Equation (1.16) are trigonometric polynomials of degree $N/2$, and agree at $N + 1$ points, they must be identically equal, i.e.,

$$\frac{\partial v}{\partial t} = c \frac{\partial v}{\partial x} \quad \text{for all } x.$$

This is a fundamental argument in spectral methods, and will be seen frequently in the analysis.


Note that if $k \leq N/2$, the same argument implies that $v(x, 0) = e^{ikx}$, and therefore

$$v(x, t) = e^{ik(x-ct)}.$$

Thus, we require two points per wavelength ($N/k = p \geq 2$) to resolve the initial conditions, and thus to resolve the wave. The spectral method requires only the theoretical minimum number of points to resolve a wave. Let's think about how spectral methods behave when an insufficient number of points is given: in the case $N = 2k - 2$, the spectral approximation to the initial condition $e^{ikx}$ is uniformly zero. This example gives the unique flavor of spectral methods: when the number of points is insufficient to resolve the wave, the error does not decay. However, as soon as a sufficient number of points are used, we see infinite-order convergence.

Note also that, in contrast to finite difference schemes, the spatial discretization does not cause deterioration in terms of the phase error as time progresses. The only source contributing to phase error is the temporal discretization.

1.3 Further reading

The phase error analysis first appears in a paper by Kreiss and Oliger (1972), in which the limiting Fourier case was discussed as well. These topics have been further explored in the texts by Gustafsson, Kreiss, and Oliger (1995), and by Fornberg (1996).

2 Trigonometric polynomial approximation

The first spectral methods computations were simulations of homogeneous turbulence on periodic domains. For that type of problem, the natural choice of basis functions is the family of (periodic) trigonometric polynomials. In this chapter, we will discuss the behavior of these trigonometric polynomials when used to approximate smooth functions. We will consider the properties of both the continuous and discrete Fourier series, and come to an understanding of the factors determining the behavior of the approximating series.

We begin by discussing the classical approximation theory for the continuous case, and continue with the more modern theory for the discrete Fourier approximation.

For the sake of simplicity, we will consider functions of only one variable, $u(x)$, defined on $x \in [0, 2\pi]$. Also, we restrict ourselves in this chapter to functions having a continuous periodic extension, i.e., $u(x) \in C_p^0[0, 2\pi]$. In Chapter 9, we will discuss the trigonometric series approximation for functions which are non-periodic, or discontinuous but piecewise smooth. We shall see that although trigonometric series approximations of piecewise smooth functions converge very slowly, the approximations contain high-order information which is recoverable through postprocessing.

    2.1 Trigonometric polynomial expansions

The classical continuous series of trigonometric polynomials, the Fourier series $F[u]$ of a function $u(x) \in L^2[0, 2\pi]$, is given as

$$F[u] = a_0 + \sum_{n=1}^{\infty} a_n \cos(nx) + \sum_{n=1}^{\infty} b_n \sin(nx), \qquad (2.1)$$


where the expansion coefficients are

$$a_n = \frac{1}{c_n \pi} \int_0^{2\pi} u(x) \cos(nx)\, dx,$$

with the values

$$c_n = \begin{cases} 2 & n = 0, \\ 1 & n > 0, \end{cases}$$

and

$$b_n = \frac{1}{\pi} \int_0^{2\pi} u(x) \sin(nx)\, dx, \quad n > 0.$$

Alternatively, the Fourier series can be expressed in complex form

$$F[u] = \sum_{|n| < \infty} \hat{u}_n e^{inx}, \qquad (2.2)$$

with expansion coefficients

$$\hat{u}_n = \frac{1}{2\pi} \int_0^{2\pi} u(x) e^{-inx}\, dx = \begin{cases} a_0 & n = 0, \\ (a_n - i b_n)/2 & n > 0, \\ (a_{-n} + i b_{-n})/2 & n < 0. \end{cases} \qquad (2.3)$$

Remark The following special cases are of interest.

1. If $u(x)$ is a real function, the coefficients $a_n$ and $b_n$ are real numbers and, consequently, $\hat{u}_{-n} = \bar{\hat{u}}_n$. Thus, only half the coefficients are needed to describe the function.
2. If $u(x)$ is real and even, i.e., $u(x) = u(-x)$, then $b_n = 0$ for all values of $n$, so the Fourier series becomes a cosine series.
3. If $u(x)$ is real and odd, i.e., $u(x) = -u(-x)$, then $a_n = 0$ for all values of $n$, and the series reduces to a sine series.

For our purposes, the relevant question is how well the truncated Fourier series approximates the function. The truncated Fourier series

$$P_N u(x) = \sum_{|n| \leq N/2} \hat{u}_n e^{inx} \qquad (2.4)$$

is a projection to the finite dimensional space

$$\hat{B}_N = \operatorname{span}\{e^{inx} \mid |n| \leq N/2\}, \quad \dim(\hat{B}_N) = N + 1.$$

The approximation theory results for this series are classical.


Theorem 2.1 If the sum of squares of the Fourier coefficients is bounded,

$$\sum_{|n| < \infty} |\hat{u}_n|^2 < \infty, \qquad (2.5)$$

then the truncated series converges in the $L^2$ norm,

$$\|u - P_N u\|_{L^2[0,2\pi]} \to 0 \quad \text{as } N \to \infty.$$

If, moreover, the sum of the absolute values of the Fourier coefficients is bounded, $\sum_{|n| < \infty} |\hat{u}_n| < \infty$, then the truncated series converges uniformly, and

$$\|u - P_N u\|_{L^\infty[0,2\pi]} \leq \sum_{|n| > N/2} |\hat{u}_n|.$$

Thus, the error committed by replacing $u(x)$ with its $N$th-order Fourier series depends solely on how fast the expansion coefficients of $u(x)$ decay. This, in turn, depends on the regularity of $u(x)$ in $[0, 2\pi]$ and the periodicity of the function and its derivatives.

To appreciate this, let's consider a continuous function $u(x)$, with derivative $u'(x) \in L^2[0, 2\pi]$. Then, for $n \neq 0$, integration by parts yields

$$2\pi \hat{u}_n = \int_0^{2\pi} u(x) e^{-inx}\, dx = \frac{1}{in}\left(u(0) - u(2\pi)\right) + \frac{1}{in} \int_0^{2\pi} u'(x) e^{-inx}\, dx.$$

Clearly, then,

$$|\hat{u}_n| \propto \frac{1}{n}.$$

Repeating this line of argument, we have the following result for periodic smooth functions.


    Figure 2.1 (a) Continuous Fourier series approximation of Example 2.3 forincreasing resolution. (b) Pointwise error of the approximation for increasingresolution.

Theorem 2.2 If a function $u(x)$, its first $(m-1)$ derivatives, and their periodic extensions are all continuous, and if the $m$th derivative $u^{(m)}(x) \in L^2[0, 2\pi]$, then for all $n \neq 0$ the Fourier coefficients, $\hat{u}_n$, of $u(x)$ decay as

$$|\hat{u}_n| \propto \left(\frac{1}{n}\right)^m.$$

What happens if $u(x) \in C_p^\infty[0, 2\pi]$? In this case $\hat{u}_n$ decays faster than any negative power of $n$. This property is known as spectral convergence. It follows that the smoother the function, the faster the truncated series converges. Of course, this statement is asymptotic; as we showed in Chapter 1, we need at least two points per wavelength to reach the asymptotic range of convergence.

    Let us consider a few examples.

Example 2.3 Consider the $C_p^\infty[0, 2\pi]$ function

$$u(x) = \frac{3}{5 - 4\cos(x)}.$$

Its expansion coefficients are

$$\hat{u}_n = 2^{-|n|}.$$

As expected, the expansion coefficients decay faster than any algebraic order of $n$. In Figure 2.1 we plot the continuous Fourier series approximation of $u(x)$ and the pointwise error for increasing $N$.

This example clearly illustrates the fast convergence of the Fourier series and also that the convergence of the approximation is almost uniform. Note that we only observe the very fast convergence for $N > N_0 \simeq 16$.


    Figure 2.2 (a) Continuous Fourier series approximation of Example 2.4 forincreasing resolution. (b) Pointwise error of approximation for increasingresolution.

Example 2.4 The expansion coefficients of the function

$$u(x) = \sin\left(\frac{x}{2}\right)$$

are given by

$$\hat{u}_n = \frac{2}{\pi} \frac{1}{1 - 4n^2}.$$

Note that the derivative of $u(x)$ is not periodic, and integrating by parts twice we obtain quadratic decay in $n$. In Figure 2.2 we plot the continuous Fourier series approximation and the pointwise error for increasing $N$. As expected, we find quadratic convergence except near the endpoints, where it is only linear, indicating non-uniform convergence. The loss of order of convergence at the discontinuity points of the periodic extension, as well as the global reduction of order, is typical of Fourier series (and other global expansion) approximations of functions that are not sufficiently smooth.

2.1.1 Differentiation of the continuous expansion

When approximating a function $u(x)$ by the finite Fourier series $P_N u$, we can easily obtain the derivatives of $P_N u$ by simply differentiating the basis functions. The question is, are the derivatives of $P_N u$ good approximations to the derivatives of $u$?

If $u$ is a sufficiently smooth function, then one can differentiate the sum

$$P_N u(x) = \sum_{|n| \leq N/2} \hat{u}_n e^{inx},$$


term by term, to obtain

$$\frac{d^q}{dx^q} P_N u(x) = \sum_{|n| \leq N/2} \hat{u}_n \frac{d^q}{dx^q} e^{inx} = \sum_{|n| \leq N/2} (in)^q \hat{u}_n e^{inx}.$$

It follows that the projection and differentiation operators commute,

$$P_N \frac{d^q}{dx^q} u = \frac{d^q}{dx^q} P_N u.$$

This property implies that for any constant coefficient differentiation operator $L$, the truncation error

$$P_N L (I - P_N) u$$

vanishes. Thus, the Fourier approximation to the equation $u_t = Lu$ is exactly the projection of the analytic solution.

2.2 Discrete trigonometric polynomials

The continuous Fourier series method requires the evaluation of the coefficients

$$\hat{u}_n = \frac{1}{2\pi} \int_0^{2\pi} u(x) e^{-inx}\, dx. \qquad (2.6)$$

In general, these integrals cannot be computed analytically, and one resorts to the approximation of the Fourier integrals by using quadrature formulas, yielding the discrete Fourier coefficients. Quadrature formulas differ based on the exact position of the grid points, and the choice of an even or odd number of grid points results in slightly different schemes.

2.2.1 The even expansion

Define an equidistant grid, consisting of an even number $N$ of gridpoints $x_j \in [0, 2\pi)$, defined by

$$x_j = \frac{2\pi j}{N}, \quad j \in [0, \ldots, N-1].$$

The trapezoidal rule yields the discrete Fourier coefficients $\tilde{u}_n$, which approximate the continuous Fourier coefficients $\hat{u}_n$,

$$\tilde{u}_n = \frac{1}{N} \sum_{j=0}^{N-1} u(x_j) e^{-inx_j}. \qquad (2.7)$$

As the following theorem shows, the trapezoidal quadrature rule is a very natural approximation when trigonometric polynomials are involved.


Theorem 2.5 The quadrature formula

$$\frac{1}{2\pi} \int_0^{2\pi} f(x)\, dx = \frac{1}{N} \sum_{j=0}^{N-1} f(x_j)$$

is exact for any trigonometric polynomial $f(x) = e^{inx}$, $|n| < N$.

Proof: Given a function $f(x) = e^{inx}$,

$$\frac{1}{2\pi} \int_0^{2\pi} f(x)\, dx = \begin{cases} 1 & \text{if } n = 0, \\ 0 & \text{otherwise.} \end{cases}$$

On the other hand,

$$\frac{1}{N} \sum_{j=0}^{N-1} f(x_j) = \frac{1}{N} \sum_{j=0}^{N-1} e^{in(2\pi j/N)} = \frac{1}{N} \sum_{j=0}^{N-1} q^j,$$

where $q = e^{i\frac{2\pi n}{N}}$. If $n$ is an integer multiple of $N$, i.e., $n = mN$, then $q = 1$ and $\frac{1}{N}\sum_{j=0}^{N-1} f(x_j) = 1$. Otherwise, $\frac{1}{N}\sum_{j=0}^{N-1} f(x_j) = \frac{1}{N}\frac{q^N - 1}{q - 1} = 0$. Thus, the quadrature formula is exact for any function of the form $f(x) = e^{inx}$, $|n| < N$.

QED

The quadrature formula is exact for $f(x) \in \hat{B}_{2N-2}$, where $\hat{B}_N$ is the space of trigonometric polynomials of order $N$,

$$\hat{B}_N = \operatorname{span}\{e^{inx} \mid |n| \leq N/2\}.$$

Note that the quadrature formula remains valid also for

$$f(x) = \sin(Nx),$$

because $\sin(Nx_j) = 0$, but it is not valid for $f(x) = \cos(Nx)$.

Using the trapezoidal rule, the discrete Fourier coefficients become

$$\tilde{u}_n = \frac{1}{N \bar{c}_n} \sum_{j=0}^{N-1} u(x_j) e^{-inx_j}, \qquad (2.8)$$

where we introduce the coefficients

$$\bar{c}_n = \begin{cases} 2 & |n| = N/2, \\ 1 & |n| < N/2, \end{cases}$$

for ease of notation. These relations define a new projection of $u$,

$$I_N u(x) = \sum_{|n| \leq N/2} \tilde{u}_n e^{inx}. \qquad (2.9)$$


This is the complex discrete Fourier transform, based on an even number of quadrature points. Note that

$$\tilde{u}_{-N/2} = \tilde{u}_{N/2},$$

so that we have exactly $N$ independent Fourier coefficients, corresponding to the $N$ quadrature points. As a consequence, $I_N \sin(\frac{N}{2}x) = 0$, so the function $\sin(\frac{N}{2}x)$ is not represented in the expansion, Equation (2.9).

Onto which finite dimensional space does $I_N$ project? Certainly, the space does not include $\sin(\frac{N}{2}x)$, so it is not $\hat{B}_N$. The correct space is

$$\tilde{B}_N = \operatorname{span}\{\cos(nx),\ 0 \leq n \leq N/2\} \cup \{\sin(nx),\ 1 \leq n \leq N/2 - 1\},$$

which has dimension $\dim(\tilde{B}_N) = N$.

The particular definition of the discrete expansion coefficients introduced in Equation (2.8) has the intriguing consequence that the trigonometric polynomial $I_N u$ interpolates the function $u(x)$ at the quadrature nodes of the trapezoidal formula. Thus, $I_N$ is the interpolation operator, where the quadrature nodes are the interpolation points.

Theorem 2.6 Let the discrete Fourier transform be defined by Equations (2.8)–(2.9). For any periodic function, $u(x) \in C_p^0[0, 2\pi]$, we have

$$I_N u(x_j) = u(x_j), \quad x_j = \frac{2\pi j}{N}, \quad j = 0, \ldots, N-1.$$

Proof: Substituting Equation (2.8) into Equation (2.9) we obtain

$$I_N u(x) = \sum_{|n| \leq N/2} \left( \frac{1}{N \bar{c}_n} \sum_{j=0}^{N-1} u(x_j) e^{-inx_j} \right) e^{inx}.$$

Exchanging the order of summation yields

$$I_N u(x) = \sum_{j=0}^{N-1} u(x_j) g_j(x), \qquad (2.10)$$

where

$$g_j(x) = \sum_{|n| \leq N/2} \frac{1}{N \bar{c}_n} e^{in(x - x_j)} = \frac{1}{N} \sin\left[N \frac{x - x_j}{2}\right] \cot\left[\frac{x - x_j}{2}\right], \qquad (2.11)$$

by summing as a geometric series. It is easily verified that $g_j(x_i) = \delta_{ij}$, as is also evident from the examples of $g_j(x)$ for $N = 8$ shown in Figure 2.3.


Figure 2.3 The interpolation polynomial, $g_j(x)$, for $N = 8$ for various values of $j$.

We still need to show that $g_j(x) \in \tilde{B}_N$. Clearly, $g_j(x) \in \hat{B}_N$, as $g_j(x)$ is a trigonometric polynomial of degree $N/2$. However, since

$$\frac{1}{2} e^{i\frac{N}{2}x_j} = \frac{1}{2} e^{-i\frac{N}{2}x_j} = \frac{(-1)^j}{2},$$

and, by convention, $\tilde{u}_{-N/2} = \tilde{u}_{N/2}$, we do not get any contribution from the term $\sin(\frac{N}{2}x)$; hence $g_j(x) \in \tilde{B}_N$.

QED

The discrete Fourier series of a function has convergence properties very similar to those of the continuous Fourier series approximation. In particular, the discrete approximation is pointwise convergent for $C_p^1[0, 2\pi]$ functions and is convergent in the mean provided only that $u(x) \in L^2[0, 2\pi]$. Moreover, the continuous and discrete approximations share the same asymptotic behavior, in particular having a convergence rate faster than any algebraic order of $N^{-1}$ if $u(x) \in C_p^\infty[0, 2\pi]$. We shall return to the proof of these results in Section 2.3.2.

Let us at this point illustrate the behavior of the discrete Fourier series by applying it to the examples considered previously.

Example 2.7 Consider the $C_p^\infty[0, 2\pi]$ function

$$u(x) = \frac{3}{5 - 4\cos(x)}.$$

In Figure 2.4 we plot the discrete Fourier series approximation of $u$ and the pointwise error for increasing $N$. This example confirms the spectral convergence of the discrete Fourier series. We note in particular that the approximation error is of the same order as observed for the continuous Fourier series in Example 2.3. The appearance of spikes in the pointwise error, approaching zero in Figure 2.4, illustrates the interpolating nature of $I_N u(x)$, i.e., $I_N u(x_j) = u(x_j)$, as expected.


    Figure 2.4 (a) Discrete Fourier series approximation of Example 2.7 forincreasing resolution. (b) Pointwise error of approximation for increasingresolution.

Figure 2.5 (a) Discrete Fourier series approximation of Example 2.8 for increasing resolution. (b) Pointwise error of approximation for increasing resolution.

Example 2.8 Consider again the function

$$u(x) = \sin\left(\frac{x}{2}\right),$$

and recall that $u(x) \in C_p^0[0, 2\pi]$. In Figure 2.5 we show the discrete Fourier series approximation and the pointwise error for increasing $N$. As for the continuous Fourier series approximation, we recover a quadratic convergence rate away from the boundary points, at which it is only linear.

2.2.2 The odd expansion

How can this type of interpolation operator be defined for the space $\hat{B}_N$, containing an odd number of basis functions? To do so, we define a grid with an


    x /2p

Figure 2.6 The interpolation polynomial, $h_j(x)$, for $N = 8$ for various values of $j$.

odd number of grid points,

$$x_j = \frac{2\pi}{N+1} j, \quad j \in [0, \ldots, N], \qquad (2.12)$$

and use the trapezoidal rule

$$\tilde{u}_n = \frac{1}{N+1} \sum_{j=0}^{N} u(x_j) e^{-inx_j}, \qquad (2.13)$$

to obtain the interpolation operator

$$J_N u(x) = \sum_{|n| \leq N/2} \tilde{u}_n e^{inx}.$$

    Again, the quadrature formula is highly accurate:

Theorem 2.9 The quadrature formula

$$\frac{1}{2\pi} \int_0^{2\pi} f(x)\, dx = \frac{1}{N+1} \sum_{j=0}^{N} f(x_j)$$

is exact for any $f(x) = e^{inx}$, $|n| \leq N$, i.e., for all $f(x) \in \hat{B}_{2N}$.

The scheme may also be expressed through the use of a Lagrange interpolation polynomial,

$$J_N u(x) = \sum_{j=0}^{N} u(x_j) h_j(x),$$

where

$$h_j(x) = \frac{1}{N+1} \frac{\sin\left(\frac{N+1}{2}(x - x_j)\right)}{\sin\left(\frac{x - x_j}{2}\right)}. \qquad (2.14)$$

One easily shows that $h_j(x_l) = \delta_{jl}$ and that $h_j(x) \in \hat{B}_N$. Examples of $h_j(x)$ are shown in Figure 2.6 for $N = 8$.
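The exactness claimed in Theorem 2.9 can be checked directly on the odd grid (a NumPy sketch, not from the text):

```python
import numpy as np

N = 8                                     # N + 1 = 9 quadrature points
x = 2.0 * np.pi * np.arange(N + 1) / (N + 1)

for n in range(-N, N + 1):                # all of B_2N, i.e. |n| <= N
    quad = np.exp(1j * n * x).sum() / (N + 1)
    exact = 1.0 if n == 0 else 0.0        # (1/2pi) integral of e^{inx}
    assert abs(quad - exact) < 1e-12
```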


Historically, the early availability of the fast Fourier transform (FFT), which is highly efficient for $2^p$ points, has motivated the use of the even number of points approach. However, fast methods are now available for an odd as well as an even number of grid points.

2.2.3 A first look at the aliasing error

Let's consider the connection between the continuous Fourier series and the discrete Fourier series based on an even number of grid points. The conclusions of this discussion are equally valid for the case of an odd number of points.

Note that the discrete Fourier modes are based on the points $x_j$, for which the $(n + Nm)$th mode is indistinguishable from the $n$th mode,

$$e^{i(n+Nm)x_j} = e^{inx_j} e^{i2\pi mj} = e^{inx_j}.$$

This phenomenon is known as aliasing.

If the Fourier series converges pointwise, e.g., $u(x) \in C_p^1[0, 2\pi]$, the aliasing phenomenon implies that the relation between the two sets of expansion coefficients is

$$\bar{c}_n \tilde{u}_n = \hat{u}_n + \sum_{\substack{|m| < \infty \\ m \neq 0}} \hat{u}_{n+Nm}. \qquad (2.15)$$

In Figure 2.7 we illustrate this phenomenon for $N = 8$, and we observe that the $n = 10$ wave, as well as the $n = -6$ and the $n = 2$ waves, are all the same at the grid points.

In Section 2.3.2 we will show that the aliasing error

$$\|A_N u\|_{L^2[0,2\pi]} = \left\| \sum_{|n| \leq N/2} \left( \sum_{\substack{m=-\infty \\ m \neq 0}}^{\infty} \hat{u}_{n+Nm} \right) e^{inx} \right\|_{L^2[0,2\pi]} \qquad (2.16)$$

is of the same order as the error, $\|u - P_N u\|_{L^2[0,2\pi]}$, for smooth functions $u$. If the function is well approximated, the aliasing error is generally negligible, and the continuous Fourier series and the discrete Fourier series share similar approximation properties. However, for poorly resolved or nonsmooth problems, the situation is much more delicate.

2.2.4 Differentiation of the discrete expansions

To implement the Fourier collocation method, we require the derivatives of the discrete approximation. Once again, we consider the case of an even number of grid points. The two mathematically equivalent methods given in Equations (2.8)–(2.9) and Equation (2.10) for expressing the interpolant yield two


Figure 2.7 Illustration of aliasing. The three waves, $n = -6$, $n = 2$ and $n = 10$, are all interpreted as an $n = 2$ wave on an 8-point grid. Consequently, the $n = 2$ wave appears more energetic after the discrete Fourier transform than in the original signal.

computationally different ways to approximate the derivative of a function. In the following subsections, we assume that our function $u$ and all its derivatives are continuous and periodic on $[0, 2\pi]$.

Using expansion coefficients Given the values of the function $u(x)$ at the points $x_j$, differentiating the basis functions in the interpolant yields

$$\frac{d}{dx} I_N u(x) = \sum_{|n| \leq N/2} in\, \tilde{u}_n e^{inx}, \qquad (2.17)$$

where

$$\tilde{u}_n = \frac{1}{N \bar{c}_n} \sum_{j=0}^{N-1} u(x_j) e^{-inx_j}$$

are the coefficients of the interpolant $I_N u(x)$ given in Equations (2.8)–(2.9). Higher order derivatives can be obtained simply by further differentiating the basis functions.

Note that, unlike in the case of the continuous approximation, the derivative of the interpolant is not the interpolant of the derivative, i.e.,

$$I_N \frac{du}{dx} \neq I_N \frac{d}{dx} I_N u, \qquad (2.18)$$

unless $u(x) \in \tilde{B}_N$.


For example, if

$$u(x) = \sin\left(\frac{N}{2}x\right)$$

(i.e., $u(x)$ does not belong to $\tilde{B}_N$), then $I_N u \equiv 0$, and so $d(I_N u)/dx = 0$. On the other hand, $u'(x) = \frac{N}{2}\cos(\frac{N}{2}x)$, which is in $\tilde{B}_N$, and therefore $I_N \frac{du}{dx} = \frac{N}{2}\cos(\frac{N}{2}x) \neq I_N \frac{d}{dx} I_N u = 0$. If $u(x) \in \tilde{B}_N$, then $I_N u = u$, and therefore

$$I_N \frac{du}{dx} = I_N \frac{d}{dx} I_N u.$$

Likewise, if we consider the projection based on an odd number of points, we have

$$J_N \frac{du}{dx} \neq J_N \frac{d}{dx} J_N u,$$

except if $u \in \hat{B}_N$.

The procedure for differentiating using expansion coefficients can be described as follows: first, we transform the point values $u(x_j)$ in physical space into the coefficients $\tilde{u}_n$ in mode space. We then differentiate in mode space by multiplying $\tilde{u}_n$ by $in$, and return to physical space. Computationally, the cost of the method is the cost of two transformations, which can be done by a fast Fourier transform (FFT). The typical cost of an FFT is $O(N \log N)$. Notice that this procedure is a transformation from a finite dimensional space to a finite dimensional space, which indicates a matrix multiplication. In the next section, we give the explicit form of this matrix.
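The transform-based procedure is only a few lines with an FFT. A NumPy sketch (not the book's code; `numpy.fft.fftfreq` supplies the wavenumbers in FFT ordering, and the $\sin(\frac{N}{2}x)$ mode is zeroed since it is not represented in the expansion; the test function $e^{\sin x}$ is an arbitrary smooth periodic choice):

```python
import numpy as np

def fourier_diff(u):
    """Derivative of periodic grid data: FFT, multiply by in, inverse FFT."""
    N = len(u)                              # assumed even here
    n = np.fft.fftfreq(N, d=1.0 / N)        # wavenumbers in FFT ordering
    n[N // 2] = 0.0                         # drop the unrepresented sin(Nx/2) mode
    return np.fft.ifft(1j * n * np.fft.fft(u)).real

N = 32
x = 2.0 * np.pi * np.arange(N) / N
u = np.exp(np.sin(x))
# derivative of exp(sin x) is cos(x) exp(sin x); spectral accuracy for smooth u
assert np.max(np.abs(fourier_diff(u) - np.cos(x) * u)) < 1e-10
```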

The matrix method The use of the Lagrange interpolation polynomials yields

$$I_N u(x) = \sum_{j=0}^{N-1} u(x_j) g_j(x),$$

where

$$g_j(x) = \frac{1}{N} \sin\left[N \frac{x - x_j}{2}\right] \cot\left[\frac{x - x_j}{2}\right].$$

An approximation to the derivative of $u(x)$ at the points $x_l$ is then obtained by differentiating the interpolation directly,

$$\left.\frac{d}{dx} I_N u(x)\right|_{x_l} = \sum_{j=0}^{N-1} u(x_j) \left.\frac{d}{dx} g_j(x)\right|_{x_l} = \sum_{j=0}^{N-1} D_{lj}\, u(x_j).$$


The entries of the differentiation matrix are given by

$$D_{ij} = \left.\frac{d}{dx} g_j(x)\right|_{x_i} = \begin{cases} \frac{(-1)^{i+j}}{2} \cot\left[\frac{x_i - x_j}{2}\right] & i \neq j, \\ 0 & i = j. \end{cases} \qquad (2.19)$$

It is readily verified that $D$ is circulant and skew-symmetric.

The approximation of higher derivatives follows the exact same route as

taken for the first order derivative. The entries of the second order differentiation matrix $D^{(2)}$, based on an even number of grid points, are

$$D^{(2)}_{ij} = \left.\frac{d^2}{dx^2} g_j(x)\right|_{x_i} = \begin{cases} -\frac{(-1)^{i+j}}{2} \left[\sin\left(\frac{x_i - x_j}{2}\right)\right]^{-2} & i \neq j, \\ -\frac{N^2 + 2}{12} & i = j. \end{cases} \qquad (2.20)$$

It is interesting to note that, in the case of an even number of points, $D^{(2)}$ is not equal to the square of $D$, i.e.,

$$I_N \frac{d^2}{dx^2} I_N \neq \left(I_N \frac{d}{dx}\right)^2 I_N.$$

To illustrate this, consider the function $\cos(\frac{N}{2}x)$:

$$I_N \frac{d^2}{dx^2} I_N \cos\left(\frac{N}{2}x\right) = I_N \left[-\left(\frac{N}{2}\right)^2 \cos\left(\frac{N}{2}x\right)\right] = -\left(\frac{N}{2}\right)^2 \cos\left(\frac{N}{2}x\right);$$

on the other hand, since $I_N \frac{d}{dx} I_N \cos(\frac{N}{2}x) = 0$, the two are not the same.

The reason for this discrepancy is that the differentiation operator takes elements of $\tilde{B}_N$ out of $\tilde{B}_N$. In the above example, $\cos(\frac{N}{2}x)$ is in the space $\tilde{B}_N$, but its derivative is not. However, elements of $\hat{B}_N$, when differentiated, remain in $\hat{B}_N$, and thus

$$J_N \frac{d^2}{dx^2} J_N = \left(J_N \frac{d}{dx}\right)^2 J_N,$$

for the interpolation based on an odd number of grid points.

For the sake of completeness, we list the entries of the differentiation matrix $D$ for the interpolation based on an odd number of points,

$$D_{ij} = \begin{cases} \frac{(-1)^{i+j}}{2} \left[\sin\left(\frac{x_i - x_j}{2}\right)\right]^{-1} & i \neq j, \\ 0 & i = j, \end{cases}$$

which is the limit of the finite difference schemes as the order increases. Once again, $D$ is a circulant, skew-symmetric matrix. As mentioned above, in this


case we have

$$D^{(q)} = J_N \frac{d^q}{dx^q} J_N = D^q,$$

for all values of $q$.

In this method, we do not go through the mode space at all. The differentiation matrix takes us from physical space to physical space, and the act of differentiation is hidden in the matrix itself. The computational cost of the matrix method is the cost of a matrix-vector product, which is an $O(N^2)$ operation, rather than the cost of $O(N \log N)$ in the method using expansion coefficients. However, the efficiency of the FFT is machine dependent, and for small values of $N$ it may be faster to perform the matrix-vector product. Also, since the differentiation matrices are all circulant, one need only store one column of the operator, thereby reducing the memory usage to that of the FFT.

2.3 Approximation theory for smooth functions

We will now rigorously justify the behavior of the continuous and discrete Fourier approximations of the function $u$ and its derivatives. When using the Fourier approximation to discretize the spatial part of the equation

$$u_t = Lu,$$

where $L$ is a differential operator (e.g., the hyperbolic equation $u_t = a(x)u_x$), it is important that our approximation, both to $u$ and to $Lu$, be accurate. To establish consistency we need to consider not only the difference between $u$ and $P_N u$, but also the distance between $Lu$ and $L P_N u$, measured in an appropriate norm. This is critical, because the actual rate of convergence of a stable scheme is determined by the truncation error

$$P_N L (I - P_N) u.$$

The truncation error is thus determined by the behavior of the Fourier approximations not only of the function, but of its derivatives as well.

It is natural, therefore, to use the Sobolev $q$-norm, denoted by $\|\cdot\|_{H_p^q[0,2\pi]}$, which measures the smoothness of the derivatives as well as the function,

$$\|u\|^2_{H_p^q[0,2\pi]} = \sum_{m=0}^{q} \int_0^{2\pi} \left|u^{(m)}(x)\right|^2 dx.$$


The subscript $p$ indicates the fact that all our functions are periodic, for which the Sobolev norm can be written in mode space as

$$\|u\|^2_{H_p^q[0,2\pi]} = 2\pi \sum_{m=0}^{q} \sum_{|n| < \infty} |n|^{2m} |\hat{u}_n|^2 = 2\pi \sum_{|n| < \infty} \left( \sum_{m=0}^{q} |n|^{2m} \right) |\hat{u}_n|^2,$$

where the interchange of the summation is allowed provided $u(x)$ has sufficient smoothness, e.g., $u(x) \in C_p^q[0, 2\pi]$, $q > \frac{1}{2}$.

Since for $n \neq 0$,

$$(1 + n^{2q}) \leq \sum_{m=0}^{q} n^{2m} \leq (q+1)(1 + n^{2q}),$$

the norm $\|\cdot\|_{W_p^q[0,2\pi]}$ defined by

$$\|u\|_{W_p^q[0,2\pi]} = \left( \sum_{|n| < \infty} (1 + n^{2q}) |\hat{u}_n|^2 \right)^{1/2}$$

is equivalent to $\|\cdot\|_{H_p^q[0,2\pi]}$. It is interesting to note that one can easily define a norm $\|\cdot\|_{W_p^q[0,2\pi]}$ with noninteger values of $q$.

2.3.1 Results for the continuous expansion

Consider, first, the continuous Fourier series

$$P_{2N} u(x) = \sum_{|n| \leq N} \hat{u}_n e^{inx}.$$

We start with an $L^2$ estimate for the distance between $u$ and its trigonometric approximation $P_{2N}u$.

Theorem 2.10 For any $u(x) \in H_p^r[0, 2\pi]$, there exists a positive constant $C$, independent of $N$, such that

$$\|u - P_{2N}u\|_{L^2[0,2\pi]} \leq C N^{-q} \left\|u^{(q)}\right\|_{L^2[0,2\pi]},$$

provided $0 \leq q \leq r$.

Proof: By Parseval's identity,

$$\|u - P_{2N}u\|^2_{L^2[0,2\pi]} = 2\pi \sum_{|n| > N} |\hat{u}_n|^2.$$


We rewrite this summation:

$$\sum_{|n| > N} |\hat{u}_n|^2 = \sum_{|n| > N} \frac{1}{n^{2q}}\, n^{2q} |\hat{u}_n|^2 \leq N^{-2q} \sum_{|n| > N} n^{2q} |\hat{u}_n|^2 \leq N^{-2q} \sum_{|n| > 0} n^{2q} |\hat{u}_n|^2 = \frac{1}{2\pi} N^{-2q} \left\|u^{(q)}\right\|^2_{L^2[0,2\pi]}.$$

Putting this all together and taking the square root, we obtain our result.

QED

Note that the smoother the function, the larger the value of $q$, and therefore, the better the approximation. This is in contrast to finite difference or finite element approximations, where the rate of convergence is fixed, regardless of the smoothness of the function. This rate of convergence is referred to in the literature as spectral convergence.

If $u(x)$ is analytic, then

$$\left\|u^{(q)}\right\|_{L^2[0,2\pi]} \leq C q! \|u\|_{L^2[0,2\pi]},$$

and so

$$\|u - P_{2N}u\|_{L^2[0,2\pi]} \leq C N^{-q} \left\|u^{(q)}\right\|_{L^2[0,2\pi]} \leq C \frac{q!}{N^q} \|u\|_{L^2[0,2\pi]}.$$

Using Stirling's formula, $q! \approx q^q e^{-q}$, and assuming that $q \propto N$, we obtain

$$\|u - P_{2N}u\|_{L^2[0,2\pi]} \leq C \left(\frac{q}{N}\right)^q e^{-q} \|u\|_{L^2[0,2\pi]} \leq K e^{-cN} \|u\|_{L^2[0,2\pi]}.$$

Thus, for an analytic function, spectral convergence is, in fact, exponential convergence.

Since the Fourier method is used for computation of derivatives, we are particularly interested in estimating the convergence of both the function and its derivative. For this purpose, the Sobolev norm is appropriate.

Theorem 2.11 For any real $r$ and any real $q$ where $0 \leq q \leq r$, if $u(x) \in W_p^r[0, 2\pi]$, then there exists a positive constant $C$, independent of $N$, such that

$$\|u - P_{2N}u\|_{W_p^q[0,2\pi]} \leq \frac{C}{N^{r-q}} \|u\|_{W_p^r[0,2\pi]}.$$

Proof: Parseval's identity yields

$$\|u - P_{2N}u\|^2_{W_p^q[0,2\pi]} = 2\pi \sum_{|n| > N} (1 + |n|^{2q}) |\hat{u}_n|^2.$$


Since $|n| + 1 \geq N$, we obtain

$$(1 + |n|^{2q}) \leq (1 + |n|)^{2q} = (1 + |n|)^{2r} (1 + |n|)^{-2(r-q)} \leq \frac{(1 + |n|)^{2r}}{N^{2(r-q)}} \leq C(r) \frac{(1 + n^{2r})}{N^{2(r-q)}},$$

for any $0 \leq q \leq r$. This immediately yields

$$\|u - P_{2N}u\|^2_{W_p^q[0,2\pi]} \leq C \sum_{|n| > N} \frac{(1 + n^{2r})}{N^{2(r-q)}} |\hat{u}_n|^2 \leq C \frac{\|u\|^2_{W_p^r[0,2\pi]}}{N^{2(r-q)}}.$$

QED

A stricter measure of convergence may be obtained by looking at the pointwise error in the maximum norm.

Theorem 2.12 For any $q > 1/2$ and $u(x) \in C_p^q[0, 2\pi]$, there exists a positive constant $C$, independent of $N$, such that

$$\|u - P_{2N}u\|_{L^\infty} \leq C \frac{1}{N^{q - \frac{1}{2}}} \left\|u^{(q)}\right\|_{L^2[0,2\pi]}.$$

Proof: Provided $u(x) \in C_p^q[0, 2\pi]$, $q > 1/2$, we have for any $x \in [0, 2\pi]$,

$$|u - P_{2N}u| = \left| \sum_{|n| > N} \hat{u}_n e^{inx} \right| = \left| \sum_{|n| > N} n^q \hat{u}_n \frac{e^{inx}}{n^q} \right| \leq \left( \sum_{|n| > N} \frac{1}{n^{2q}} \right)^{1/2} \left( \sum_{|n| > N} n^{2q} |\hat{u}_n|^2 \right)^{1/2},$$

using the Cauchy–Schwarz inequality. The second term in the product above is bounded by the norm. The first term is the tail of a power series which converges for $2q > 1$. Thus, for $q > \frac{1}{2}$, we can bound the tail, and so we obtain

$$|u - P_{2N}u| \leq C N^{-(q - 1/2)} \left\|u^{(q)}\right\|_{L^2[0,2\pi]}.$$

QED

Again, we notice spectral accuracy in the maximum norm, with exponential accuracy for analytic functions. Finally, we are ready to use these results to bound the truncation error for the case of a constant coefficient differential operator.


    Theorem 2.13 Let L be a constant coefcient differential operator

    Lu =s

    j=1a j

    d judx j

    .

    For any real r and any real q where 0 q + s r , if u(x) Wrp[0, 2 ], thenthere exists a positive constant C, independent of N , such that

    Lu LPNuWqp [0,2 ] CN(rqs)uWrp[0,2 ].

    Proof: Using the denition of L,

    Lu LPNuWqp [0,2] =

    sj=1

    a jd judx j

    s

    j=1a j

    d jPNudx j

    Wqp [0,2]

    max0 js

    |a j |

    sj=1

    d j

    dx j(u PNu)

    Wqp [0,2 ]

    max0 js

    |a j |s

    j=1u PNuWq+sp [0,2 ]

    Cu PNuWq+sp [0,2 ]This last term is bounded in Theorem 2.7, and the result immediately follows.

    QED

2.3.2 Results for the discrete expansion

The approximation theory for the discrete expansion yields essentially the same
results as for the continuous expansion, though with more effort. The proofs for
the discrete expansion are based on the convergence results for the continuous
approximation, and on the fact that the Fourier coefficients of the discrete
approximation are close to those of the continuous approximation.

Recall that the interpolation operator associated with an even number of grid
points is given by

    I_{2N}u = \sum_{|n| \le N} \tilde{u}_n e^{inx},

with expansion coefficients

    \tilde{u}_n = \frac{1}{2N c_n} \sum_{j=0}^{2N-1} u(x_j) e^{-inx_j}, \qquad x_j = \frac{2\pi}{2N}\, j.
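These discrete coefficients are easy to compute directly. The Python sketch below is illustrative only (the text contains no code; the helper name is mine, and the normalization constants are assumed to be c_n = 1 except c_{±N} = 2, the usual convention for the even grid). It also previews the aliasing phenomenon analyzed in Lemma 2.14: on a grid of 2N points, the mode e^{i(1+2N)x} is indistinguishable from e^{ix}.

```python
import numpy as np

def discrete_coeffs(u_vals):
    """Discrete coefficients u~_n = (1/(2N c_n)) sum_j u(x_j) e^{-i n x_j},
    n = -N..N, on the even grid x_j = 2 pi j/(2N).
    Assumes c_n = 1 except c_{+-N} = 2 (standard even-grid convention)."""
    twoN = len(u_vals)
    N = twoN // 2
    j = np.arange(twoN)
    coeffs = {}
    for n in range(-N, N + 1):
        cn = 2.0 if abs(n) == N else 1.0
        coeffs[n] = np.sum(u_vals * np.exp(-1j * n * 2 * np.pi * j / twoN)) / (twoN * cn)
    return coeffs

N = 8
xj = 2 * np.pi * np.arange(2 * N) / (2 * N)
# u(x) = e^{i(1+2N)x} has the single continuous coefficient u_{1+2N} = 1,
# but on the grid it aliases onto mode n = 1
ut = discrete_coeffs(np.exp(1j * (1 + 2 * N) * xj))
assert np.isclose(ut[1], 1.0)   # c_1 u~_1 = u_1 + u_{1+2N} = 0 + 1
assert np.isclose(ut[0], 0.0)
```

For a smooth, well-resolved u the same routine reproduces the continuous coefficients up to rapidly decaying aliasing terms.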

2.3 Approximation theory for smooth functions

Rather than deriving the estimates of the approximation error directly, we shall
use the results obtained in the previous section and then estimate the
difference between the two expansions, which we recognize as the aliasing error.

The relationship between the discrete expansion coefficients \tilde{u}_n and the
continuous expansion coefficients \hat{u}_n is given in the following lemma.

Lemma 2.14 Consider u(x) ∈ W^q_p[0, 2π], where q > 1/2. For |n| \le N we have

    c_n \tilde{u}_n = \hat{u}_n + \sum_{m \ne 0} \hat{u}_{n+2Nm},

where the sum runs over all nonzero integers m.

Proof: Substituting the continuous Fourier expansion into the discrete expansion
yields

    c_n \tilde{u}_n = \frac{1}{2N} \sum_{j=0}^{2N-1} \sum_{l=-\infty}^{\infty} \hat{u}_l\, e^{i(l-n)x_j}.

To interchange the two summations we must ensure uniform convergence, i.e.,
\sum_l |\hat{u}_l| < \infty. This is satisfied, since

    \sum_{l} |\hat{u}_l| = \sum_{l} (1 + |l|)^q |\hat{u}_l| \frac{1}{(1 + |l|)^q}
        \le \Big( \sum_{l} (1 + |l|)^{2q} |\hat{u}_l|^2 \Big)^{1/2}
            \Big( \sum_{l} \frac{1}{(1 + |l|)^{2q}} \Big)^{1/2},

where the last expression follows from the Cauchy-Schwarz inequality. As
u(x) ∈ W^q_p[0, 2π] the first factor is clearly bounded. Furthermore, the
second factor converges provided q > 1/2, hence ensuring boundedness.

Interchanging the order of summation and using orthogonality of the exponential
function at the grid yields the desired result.

QED

As before, we first consider the behavior of the approximation in the L^2 norm.
We will first show that the bound on the aliasing error, A_N, in
Equation (2.16) is of the same order as the truncation error. The error caused
by truncating the continuous expansion is essentially the same as the error
produced by using the discrete coefficients rather than the continuous
coefficients.

Lemma 2.15 For any u(x) ∈ W^r_p[0, 2π], where r > 1/2, the aliasing error

    \|A_N u\|_{L^2[0,2\pi]} = \Big( \sum_{|n| \le N} |c_n \tilde{u}_n - \hat{u}_n|^2 \Big)^{1/2}
        \le C N^{-r} \|u^{(r)}\|_{L^2[0,2\pi]}.


Proof: From Lemma 2.14 we have

    |c_n \tilde{u}_n - \hat{u}_n|^2 = \Big| \sum_{m \ne 0} \hat{u}_{n+2Nm} \Big|^2.

To estimate this, we first note that

    \Big| \sum_{m \ne 0} \hat{u}_{n+2Nm} \Big|^2
        = \Big| \sum_{m \ne 0} |n + 2Nm|^r \hat{u}_{n+2Nm} \frac{1}{|n + 2Nm|^r} \Big|^2
        \le \Big( \sum_{m \ne 0} |n + 2Nm|^{2r} |\hat{u}_{n+2Nm}|^2 \Big)
            \Big( \sum_{m \ne 0} \frac{1}{|n + 2Nm|^{2r}} \Big),

using the Cauchy-Schwarz inequality.

Since |n| \le N, boundedness of the second factor is ensured by

    \sum_{m \ne 0} \frac{1}{|n + 2Nm|^{2r}} \le \frac{2}{N^{2r}} \sum_{m=1}^{\infty} \frac{1}{(2m-1)^{2r}} = C_1 N^{-2r},

provided r > 1/2. Here, the constant C_1 is a consequence of the fact that the
series converges, and it is independent of N.

Summing over n, we have

    \sum_{|n| \le N} \Big| \sum_{m \ne 0} \hat{u}_{n+2Nm} \Big|^2
        \le \sum_{|n| \le N} C_1 N^{-2r} \sum_{m \ne 0} |n + 2Nm|^{2r} |\hat{u}_{n+2Nm}|^2
        \le C_2 N^{-2r} \|u^{(r)}\|^2_{L^2[0,2\pi]}.

QED

We are now in a position to state the error estimate for the discrete
approximation.

Theorem 2.16 For any u(x) ∈ W^r_p[0, 2π] with r > 1/2, there exists a positive
constant C, independent of N, such that

    \|u - I_{2N}u\|_{L^2[0,2\pi]} \le C N^{-r} \|u^{(r)}\|_{L^2[0,2\pi]}.


Proof: Let us write the difference between the function and its discrete
approximation as

    \|u - I_{2N}u\|^2_{L^2[0,2\pi]} = \|(P_{2N} - I_{2N})u + (u - P_{2N}u)\|^2_{L^2[0,2\pi]}
        = \|(P_{2N} - I_{2N})u\|^2_{L^2[0,2\pi]} + \|u - P_{2N}u\|^2_{L^2[0,2\pi]},

where the cross terms vanish because (P_{2N} - I_{2N})u lies in the span of
{e^{inx}}_{|n| \le N} while u - P_{2N}u is orthogonal to that span.

Thus, the error has two components. The first one, which is the difference
between the continuous and discrete expansion coefficients, is the aliasing
error, which is bounded in Lemma 2.15. The second, which is the tail of the
series, is the truncation error, which is bounded by the result of
Theorem 2.10. The desired result follows from these error bounds.

QED

Theorem 2.16 confirms that the approximation errors of the continuous expansion
and the discrete expansion are of the same order, as long as u(x) has at least
half a derivative. Furthermore, the rate of convergence depends, in both cases,
only on the smoothness of the function being approximated.
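The spectral convergence predicted by Theorem 2.16 is easy to observe numerically. The sketch below is illustrative only (function and helper names are my own); it builds the trigonometric interpolant of a smooth periodic function from N equidistant samples, evaluates it on a fine grid by zero-padding the FFT spectrum, and checks that the maximum error drops by more than an order of magnitude with each doubling of N.

```python
import numpy as np

def interp_error(u_func, N, n_fine=512):
    """Max-norm error of the trigonometric interpolant built from N
    equidistant samples, measured on a fine grid via zero-padded FFT."""
    xj = 2 * np.pi * np.arange(N) / N
    uh = np.fft.fft(u_func(xj)) / N          # discrete Fourier coefficients
    # zero-pad the spectrum to evaluate the interpolant on a finer grid
    pad = np.zeros(n_fine, dtype=complex)
    pad[:N // 2] = uh[:N // 2]
    pad[-N // 2:] = uh[-N // 2:]
    u_int = np.fft.ifft(pad).real * n_fine
    xf = 2 * np.pi * np.arange(n_fine) / n_fine
    return np.max(np.abs(u_func(xf) - u_int))

f = lambda x: np.exp(np.sin(x))              # smooth, 2*pi-periodic
e8, e16, e32 = interp_error(f, 8), interp_error(f, 16), interp_error(f, 32)
assert e16 < e8 / 10 and e32 < e16 / 10      # faster-than-algebraic decay
```

For this analytic function the error in fact decays exponentially in N, consistent with the remarks following Theorem 2.12.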

The above results are in the L^2 norm. We can obtain essentially the same
information about the derivatives, using the Sobolev norms. First, we need to
obtain a Sobolev norm bound on the aliasing error.

Lemma 2.17 Let u(x) ∈ W^r_p[0, 2π], where r > 1/2. For any real q, for which
0 \le q \le r, the aliasing error

    \|A_N u\|_{W^q_p[0,2\pi]} = \Big( \sum_{|n| \le N} (1 + |n|)^{2q} |c_n \tilde{u}_n - \hat{u}_n|^2 \Big)^{1/2}
        \le C N^{-(r-q)} \|u\|_{W^r_p[0,2\pi]}.

Proof:

    \Big| \sum_{m \ne 0} \hat{u}_{n+2Nm} \Big|^2
        = \Big| \sum_{m \ne 0} (1 + |n + 2Nm|)^r \hat{u}_{n+2Nm} \frac{1}{(1 + |n + 2Nm|)^r} \Big|^2,

such that

    \Big| \sum_{m \ne 0} \hat{u}_{n+2Nm} \Big|^2
        \le \Big( \sum_{m \ne 0} (1 + |n + 2Nm|)^{2r} |\hat{u}_{n+2Nm}|^2 \Big)
            \Big( \sum_{m \ne 0} \frac{1}{(1 + |n + 2Nm|)^{2r}} \Big).


The second factor is, as before, bounded by

    \sum_{m \ne 0} \frac{1}{(1 + |n + 2Nm|)^{2r}} \le \frac{2}{N^{2r}} \sum_{m=1}^{\infty} \frac{1}{(2m-1)^{2r}} = C_1 N^{-2r},

provided r > 1/2 and |n| \le N.

Also, since (1 + |n|)^{2q} \le C_2 N^{2q} for |n| \le N, we recover

    \sum_{|n| \le N} (1 + |n|)^{2q} \Big| \sum_{m \ne 0} \hat{u}_{n+2Nm} \Big|^2
        \le \sum_{|n| \le N} C_1 C_2 N^{-2(r-q)} \sum_{m \ne 0} (1 + |n + 2Nm|)^{2r} |\hat{u}_{n+2Nm}|^2
        \le C_3 N^{-2(r-q)} \|u\|^2_{W^r_p[0,2\pi]}.

QED

With this bound on the aliasing error, and the truncation error bounded by
Theorem 2.11, we are now prepared to state the following.

Theorem 2.18 Let u(x) ∈ W^r_p[0, 2π], where r > 1/2. Then for any real q for
which 0 \le q \le r, there exists a positive constant, C, independent of N,
such that

    \|u - I_{2N}u\|_{W^q_p[0,2\pi]} \le C N^{-(r-q)} \|u\|_{W^r_p[0,2\pi]}.

The proof follows closely that of Theorem 2.16. As in the case of the
continuous expansion, we use this result to bound the truncation error.

Theorem 2.19 Let \mathcal{L} be a constant coefficient differential operator,

    \mathcal{L}u = \sum_{j=1}^{s} a_j \frac{d^j u}{dx^j}.

Then, for u(x) ∈ W^r_p[0, 2π] with r > 1/2 and 0 \le q + s \le r, there exists
a positive constant, C, independent of N, such that

    \|\mathcal{L}u - \mathcal{L}I_N u\|_{W^q_p[0,2\pi]} \le C N^{-(r-q-s)} \|u\|_{W^r_p[0,2\pi]}.

2.4 Further reading

The approximation theory for continuous Fourier expansions is classical and can
be found in many sources, e.g., the text by Canuto et al. (1988). Many of the
results on the discrete expansions and the aliasing errors are discussed by
Orszag (1972), Gottlieb et al. (1983), and Tadmor (1986), while the first
instance of realizing the connection between the discrete expansions and the
Lagrange form appears to be in Gottlieb and Turkel (1983).

3 Fourier spectral methods

We are now ready to formally present and analyze Fourier spectral methods for
the solution of partial differential equations. As in the previous chapter we
restrict ourselves to problems defined on [0, 2π] and assume that the
solutions, u(x), can be periodically extended. Furthermore, we assume that u(x)
and its derivatives are smooth enough to allow for any Fourier expansions which
may be required. The first two sections of this chapter feature the
Fourier-Galerkin and Fourier-collocation methods. The final section discusses
the stability of these methods.

3.1 Fourier-Galerkin methods

Consider the problem

    \frac{\partial u(x,t)}{\partial t} = \mathcal{L}u(x,t), \qquad x \in [0, 2\pi], \quad t \ge 0,

    u(x, 0) = g(x), \qquad x \in [0, 2\pi], \quad t = 0.

In the Fourier-Galerkin method, we seek solutions u_N(x,t) from the space
\hat{B}_N = span{e^{inx}}_{|n| \le N/2}, i.e.,

    u_N(x,t) = \sum_{|n| \le N/2} \hat{a}_n(t) e^{inx}.

Note that \hat{a}_n(t) are unknown coefficients which will be determined by the
method. In general, the coefficients \hat{a}_n(t) of the approximation are not
equal to the Fourier coefficients \hat{u}_n; only if we obtain the exact
solution of the problem will they be equal. In the Fourier-Galerkin method, the
coefficients \hat{a}_n(t) are



determined by the requirement that the residual

    R_N(x,t) = \frac{\partial u_N(x,t)}{\partial t} - \mathcal{L}u_N(x,t)

is orthogonal to \hat{B}_N.

If we express the residual in terms of the Fourier series,

    R_N(x,t) = \sum_{n=-\infty}^{\infty} \hat{R}_n(t) e^{inx},

the orthogonality requirement yields

    \hat{R}_n(t) = \frac{1}{2\pi} \int_0^{2\pi} R_N(x,t) e^{-inx}\, dx = 0, \qquad |n| \le \frac{N}{2}.

These are (N + 1) ordinary differential equations to determine the (N + 1)
unknowns, \hat{a}_n(t), and the corresponding initial conditions are

    u_N(x, 0) = \sum_{|n| \le N/2} \hat{a}_n(0) e^{inx}, \qquad
    \hat{a}_n(0) = \frac{1}{2\pi} \int_0^{2\pi} g(x) e^{-inx}\, dx.

The method is defined by the requirement that the orthogonal projection of the
residual onto the space \hat{B}_N is zero. If the residual is smooth enough,
this requirement implies that the residual itself is small. In particular, if
the residual itself lives in the space \hat{B}_N, the vanishing projection
forces the residual itself to be zero. This is a very special situation which
only occurs in a small number of cases, among them linear constant coefficient
problems.

Example 3.1 Consider the linear constant coefficient problem

    \frac{\partial u(x,t)}{\partial t} = c \frac{\partial u(x,t)}{\partial x} + \epsilon \frac{\partial^2 u(x,t)}{\partial x^2},

with the assumption that u(x, 0) ∈ C^\infty_p[0, 2π], and c and \epsilon are
constants.

We seek a trigonometric polynomial,

    u_N(x,t) = \sum_{|n| \le N/2} \hat{a}_n(t) e^{inx},

such that the residual

    R_N(x,t) = \frac{\partial u_N(x,t)}{\partial t} - c \frac{\partial u_N(x,t)}{\partial x} - \epsilon \frac{\partial^2 u_N(x,t)}{\partial x^2}

is orthogonal to \hat{B}_N.

Recall that

    \frac{\partial}{\partial x} u_N(x,t) = \sum_{|n| \le N/2} (in)\, \hat{a}_n(t) e^{inx},

so that the residual is

    R_N(x,t) = \sum_{|n| \le N/2} \Big( \frac{d\hat{a}_n(t)}{dt} - cin\, \hat{a}_n(t) + \epsilon n^2 \hat{a}_n(t) \Big) e^{inx}.

Projecting the residual onto \hat{B}_N and setting the projection to zero, we
obtain

    \frac{d\hat{a}_n(t)}{dt} = (cin - \epsilon n^2)\, \hat{a}_n(t), \qquad |n| \le \frac{N}{2}.
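Because the projected equations decouple mode by mode, they can be integrated exactly. The following Python sketch is illustrative only (the text contains no code; function names are mine): it advances the coefficients for u(x, 0) = sin(x) and checks the result against the exact PDE solution e^{-\epsilon t} sin(x + ct).

```python
import numpy as np

def galerkin_constant_coeff(a0_hat, c, eps, t):
    """Exact solution of d a_n/dt = (i c n - eps n^2) a_n for each mode:
    a_n(t) = a_n(0) exp((i c n - eps n^2) t)."""
    return {n: a * np.exp((1j * c * n - eps * n**2) * t)
            for n, a in a0_hat.items()}

def eval_modes(a_hat, x):
    """Evaluate u_N(x) = sum_n a_n e^{inx}."""
    return sum(a * np.exp(1j * n * x) for n, a in a_hat.items())

# u(x,0) = sin(x):  a_1(0) = 1/(2i), a_{-1}(0) = -1/(2i)
a0 = {1: 1 / 2j, -1: -1 / 2j}
c, eps, t = 1.0, 0.1, 0.5
x = np.linspace(0.0, 2 * np.pi, 7)
uN = eval_modes(galerkin_constant_coeff(a0, c, eps, t), x).real
assert np.allclose(uN, np.exp(-eps * t) * np.sin(x + c * t))
```

The agreement is exact (to rounding), reflecting the observation below that for constant coefficient problems the Galerkin solution is the projection of the true solution.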

Observe that in this case, R_N(x,t) ∈ \hat{B}_N, and therefore setting its
projection onto \hat{B}_N to zero is equivalent to setting the residual itself
equal to zero. The constant coefficient operator \mathcal{L} has the property
P_N \mathcal{L} = \mathcal{L} P_N, and so the truncation error

    P_N \mathcal{L} (I - P_N) u = 0.

In this case, the approximation coefficients \hat{a}_n(t) are, in fact, equal
to the Fourier coefficients \hat{u}_n, and so the approximate solution is the
projection of the true solution, P_N u(x,t) = u_N(x,t).

Example 3.2 Next, consider the linear, variable coefficient problem

    \frac{\partial u(x,t)}{\partial t} = \sin(x) \frac{\partial u(x,t)}{\partial x},

with smooth initial conditions.

We seek solutions in the form of a trigonometric polynomial

    u_N(x,t) = \sum_{|n| \le N/2} \hat{a}_n(t) e^{inx},                      (3.1)

and require that the residual

    R_N(x,t) = \frac{\partial u_N(x,t)}{\partial t} - \sin(x) \frac{\partial u_N(x,t)}{\partial x}

is orthogonal to \hat{B}_N.

The residual is

    R_N(x,t) = \sum_{|n| \le N/2} \Big( \frac{d\hat{a}_n(t)}{dt} - \frac{e^{ix} - e^{-ix}}{2i} (in)\, \hat{a}_n(t) \Big) e^{inx}.

To simplify the expression of R_N(x,t) we define
\hat{a}_{-(N/2+1)}(t) = \hat{a}_{N/2+1}(t) = 0, and the residual becomes

    R_N(x,t) = \sum_{|n| \le N/2} \frac{d\hat{a}_n(t)}{dt} e^{inx}
        - \frac{1}{2} \sum_{|n| \le N/2} n\, e^{i(n+1)x} \hat{a}_n(t)
        + \frac{1}{2} \sum_{|n| \le N/2} n\, e^{i(n-1)x} \hat{a}_n(t)

      = \sum_{|n| \le N/2} \frac{d\hat{a}_n(t)}{dt} e^{inx}
        - \frac{1}{2} \sum_{|n| \le N/2} (n-1) e^{inx} \hat{a}_{n-1}(t)
        + \frac{1}{2} \sum_{|n| \le N/2} (n+1) e^{inx} \hat{a}_{n+1}(t)
        - \frac{N}{4} \Big( e^{i\frac{N+2}{2}x} \hat{a}_{N/2}(t) + e^{-i\frac{N+2}{2}x} \hat{a}_{-N/2}(t) \Big)

      = \sum_{|n| \le N/2} \Big( \frac{d\hat{a}_n(t)}{dt} - \frac{n-1}{2} \hat{a}_{n-1}(t) + \frac{n+1}{2} \hat{a}_{n+1}(t) \Big) e^{inx}
        - \frac{N}{4} \Big( e^{i\frac{N+2}{2}x} \hat{a}_{N/2}(t) + e^{-i\frac{N+2}{2}x} \hat{a}_{-N/2}(t) \Big).

The last two terms in this expression are not in the space \hat{B}_N, and so
the residual R_N(x,t) does not lie solely in \hat{B}_N, and the truncation
error will not be identically zero. Since the last two terms are not in
\hat{B}_N, projecting R_N(x,t) to zero in \hat{B}_N yields

    \frac{d\hat{a}_n(t)}{dt} - \frac{n-1}{2} \hat{a}_{n-1}(t) + \frac{n+1}{2} \hat{a}_{n+1}(t) = 0,

with \hat{a}_{-(N/2+1)}(t) = \hat{a}_{N/2+1}(t) = 0.
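The projected system above is linear and tridiagonal in the coefficients. A small Python sketch (illustrative only; the index convention n = -N/2..N/2 stored at index n + N/2 and the helper names are mine) builds the system matrix and checks one case worked by hand:

```python
import numpy as np

def galerkin_matrix(N):
    """Matrix A of the Galerkin system d a/dt = A a for u_t = sin(x) u_x,
    i.e. da_n/dt = (n-1)/2 a_{n-1} - (n+1)/2 a_{n+1}, with the convention
    a_{-(N/2+1)} = a_{N/2+1} = 0."""
    M = N // 2
    A = np.zeros((N + 1, N + 1))
    for n in range(-M, M + 1):
        if abs(n - 1) <= M:           # coupling to a_{n-1}
            A[n + M, n - 1 + M] = (n - 1) / 2
        if abs(n + 1) <= M:           # coupling to a_{n+1}
            A[n + M, n + 1 + M] = -(n + 1) / 2
    return A

# check against a case done by hand: for u = e^{ix} (a_1 = 1),
# sin(x) u_x = (e^{2ix} - 1)/2, i.e. da_0/dt = -1/2 and da_2/dt = 1/2
N = 8
A = galerkin_matrix(N)
a = np.zeros(N + 1, dtype=complex)
a[1 + N // 2] = 1.0
da = A @ a
assert np.isclose(da[0 + N // 2], -0.5)
assert np.isclose(da[2 + N // 2], 0.5)
```

The matrix can then be handed to any ODE integrator to evolve the coefficients in time.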

In these two examples we illustrated the fact that the Fourier-Galerkin method
involves solving the equations in mode space rather than in physical space.
Thus, for each problem we need to derive the equations for the expansion
coefficients of the numerical solution. While this was relatively easy for the
particular variable coefficient case considered in the previous example, it may
be more complicated in general, as seen in the next example.

Example 3.3 Consider the nonlinear problem

    \frac{\partial u(x,t)}{\partial t} = u(x,t) \frac{\partial u(x,t)}{\partial x},

with smooth, periodic initial conditions. The solution of such a problem
develops discontinuities, which may lead to stability problems; however, the
construction of the Fourier-Galerkin method is not affected by this.


As usual, we seek a solution of the form

    u_N(x,t) = \sum_{|n| \le N/2} \hat{a}_n(t) e^{inx},

and require that the residual

    R_N(x,t) = \frac{\partial u_N(x,t)}{\partial t} - u_N(x,t) \frac{\partial u_N(x,t)}{\partial x}

be orthogonal to \hat{B}_N.

The second term is

    u_N(x,t) \frac{\partial}{\partial x} u_N(x,t)
        = \sum_{|l| \le N/2} \sum_{|k| \le N/2} \hat{a}_l(t) (ik)\, \hat{a}_k(t) e^{i(l+k)x}
        = \sum_{|k| \le N/2} \sum_{n=-N/2+k}^{N/2+k} (ik)\, \hat{a}_{n-k}(t) \hat{a}_k(t) e^{inx}.

As a result of the nonlinearity, the residual R_N(x,t) ∈ \hat{B}_{2N}, and not
\hat{B}_N. Projecting this onto \hat{B}_N and setting the projection equal to
zero, we obtain a set of (N + 1) ODEs,

    \frac{d\hat{a}_n(t)}{dt} = \sum_{|k| \le N/2} ik\, \hat{a}_{n-k}(t) \hat{a}_k(t), \qquad |n| \le \frac{N}{2}.

In this example, we obtained the Fourier-Galerkin equations with relative ease
due to the fact that the nonlinearity was only quadratic. Whereas quadratic
nonlinearity is quite common in the equations of mathematical physics, there
are many cases in which the nonlinearity is of a more complicated form, and the
derivation of the Fourier-Galerkin equations may become untenable, as in the
next example.

Example 3.4 Consider the strongly nonlinear problem

    \frac{\partial u(x,t)}{\partial t} = e^{u(x,t)} \frac{\partial u(x,t)}{\partial x},

where the initial conditions are, as usual, smooth and periodic.

We seek a numerical solution of the form

    u_N(x,t) = \sum_{|n| \le N/2} \hat{a}_n(t) e^{inx},

and require that the residual

    R_N(x,t) = \frac{\partial u_N(x,t)}{\partial t} - e^{u_N(x,t)} \frac{\partial}{\partial x} u_N(x,t)

is orthogonal to \hat{B}_N. It is exceedingly difficult to obtain the analytic
form of the resulting system of ODEs. Hence, it is untenable to formulate the
Fourier-Galerkin scheme.

To summarize, the Fourier-Galerkin method is very efficient for linear,
constant coefficient problems, but tends to become complicated for variable
coefficient and nonlinear problems. The main drawback of the method is the need
to derive and solve a different system of governing ODEs for each problem. This
derivation may prove very difficult, and even impossible.

3.2 Fourier-collocation methods

We can circumvent the need for evaluating the inner products, which caused us
such difficulty in the previous section, by using quadrature formulas to
approximate the integrals. This amounts to using the interpolation operator I_N
instead of the orthogonal projection operator P_N, and is called the
Fourier-collocation method. This is also known as the pseudospectral method.

When forming the Fourier-Galerkin method we require that the orthogonal
projection of the residual onto \hat{B}_N vanishes. To form the
Fourier-collocation method we require, instead, that the residual vanishes
identically on some set of grid points y_j. We refer to this grid as the
collocation grid, and note that this grid need not be the same as the
interpolation grid which we have been using up to now, comprising the
points x_j.

In the following, we deal with approximations based on the interpolation grid

    x_j = \frac{2\pi}{N}\, j, \qquad j \in [0, \ldots, N-1],

where N is even. However, the discussion holds true for approximations based on
an odd number of points as well.

We assume that the solution, u(x,t) ∈ L^2[0, 2π], is periodic and consider,
once again, the general problem

    \frac{\partial u(x,t)}{\partial t} = \mathcal{L}u(x,t), \qquad x \in [0, 2\pi], \quad t \ge 0,

    u(x, 0) = g(x), \qquad x \in [0, 2\pi], \quad t = 0.

In the Fourier-collocation method we seek solutions,

    u_N \in \tilde{B}_N = span{cos(nx), 0 \le n \le N/2} ∪ {sin(nx), 1 \le n \le N/2 - 1},


of the form

    u_N(x,t) = \sum_{|n| \le N/2} \hat{a}_n(t) e^{inx}.

This trigonometric polynomial can also be expressed as

    u_N(x,t) = \sum_{j=0}^{N-1} u_N(x_j, t)\, g_j(x),

where g_j(x) is the Lagrange interpolation polynomial for an even number of
points.

Now the difference between the Fourier-Galerkin and the Fourier-collocation
method appears: we require that the residual

    R_N(x,t) = \frac{\partial u_N(x,t)}{\partial t} - \mathcal{L}u_N(x,t)

vanish at the grid points, y_j, i.e.,

    R_N(y_j, t) = 0, \qquad j \in [0, \ldots, N-1].                          (3.2)

This yields N equations to determine the N point values, u_N(x_j, t), of the
numerical solution. In other words, the pseudospectral approximation u_N
satisfies the equation

    \frac{\partial u_N(x,t)}{\partial t} - I_N \mathcal{L} u_N(x,t) = 0.

Next, we will revisit some of the examples for the Fourier-Galerkin method. For
simplicity, the collocation grid will be the same as the interpolation grid,
except when stated explicitly.

Example 3.5 Consider first the linear constant coefficient problem

    \frac{\partial u(x,t)}{\partial t} = c \frac{\partial u(x,t)}{\partial x} + \epsilon \frac{\partial^2 u(x,t)}{\partial x^2},

with the assumption that u(x,t) ∈ C^\infty_p[0, 2π], and c and \epsilon are
constants.

We seek solutions of the form

    u_N(x,t) = \sum_{|n| \le N/2} \hat{a}_n(t) e^{inx} = \sum_{j=0}^{N-1} u_N(x_j, t)\, g_j(x),     (3.3)

such that the residual

    R_N(x,t) = \frac{\partial u_N(x,t)}{\partial t} - c \frac{\partial}{\partial x} u_N(x,t) - \epsilon \frac{\partial^2}{\partial x^2} u_N(x,t)

vanishes at a specified set of grid points, y_j. In this case we choose
y_j = x_j. This results in N ordinary differential equations for u_N(x_j, t),

    \frac{d u_N(x_j, t)}{dt} = c\, I_N \frac{\partial}{\partial x} I_N u_N(x_j, t) + \epsilon\, I_N \frac{\partial^2}{\partial x^2} I_N u_N(x_j, t)
        = \sum_{k=0}^{N-1} \big( c D^{(1)}_{jk} + \epsilon D^{(2)}_{jk} \big) u_N(x_k, t),

where D^{(1)} and D^{(2)} are the differentiation matrices. Consequently, the
scheme consists of solving the ODEs only at the grid points. Note that in the
case of a variable coefficient c = c(x), the derivation above remains the same,
except that c is replaced by c(x_k).
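A concrete sketch of this scheme in Python (illustrative only, with my own function names): the entries of D^{(1)} below follow the standard cotangent formula for the even grid, and taking D^{(2)} = (D^{(1)})^2 is one common choice, though on the even grid it is not identical to the true second-derivative matrix on the highest mode.

```python
import numpy as np

def fourier_diff_matrix(N):
    """First-order Fourier differentiation matrix on x_j = 2 pi j / N,
    standard even-grid formula D_jk = (-1)^{j+k}/2 * cot((x_j - x_k)/2)."""
    x = 2 * np.pi * np.arange(N) / N
    D = np.zeros((N, N))
    for j in range(N):
        for k in range(N):
            if j != k:
                D[j, k] = 0.5 * (-1) ** (j + k) / np.tan((x[j] - x[k]) / 2)
    return D

def collocation_rhs(u, c, eps, D1):
    """du_N(x_j,t)/dt = c (D1 u)_j + eps ((D1)^2 u)_j."""
    return c * (D1 @ u) + eps * (D1 @ (D1 @ u))

N = 16
x = 2 * np.pi * np.arange(N) / N
D1 = fourier_diff_matrix(N)
# differentiation is exact for trigonometric polynomials resolved by the grid
assert np.allclose(D1 @ np.sin(x), np.cos(x))
assert np.allclose(collocation_rhs(np.sin(x), 1.0, 0.1, D1),
                   np.cos(x) - 0.1 * np.sin(x))
```

The resulting ODE system for the grid values can then be advanced with any standard time integrator.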

In the collocation method the only use we make of the Fourier approximation is
in obtaining the derivatives of the numerical approximation in physical space.
As we mentioned in the last chapter, this can be done in two mathematically
identical, though computationally different, ways. One way uses the Fourier
series and possibly a fast Fourier transform (FFT), while the other employs
direct matrix-vector multiplication.
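The FFT route can be sketched in a few lines (illustrative only; this assumes NumPy's transform conventions, with the wavenumber ordering supplied by np.fft.fftfreq):

```python
import numpy as np

def fft_derivative(u):
    """Spectral derivative of periodic grid values on [0, 2 pi):
    transform, multiply by ik, transform back."""
    N = len(u)
    k = np.fft.fftfreq(N, d=1.0 / N)   # integer wavenumbers 0..N/2-1, -N/2..-1
    return np.fft.ifft(1j * k * np.fft.fft(u)).real

N = 16
x = 2 * np.pi * np.arange(N) / N
assert np.allclose(fft_derivative(np.sin(2 * x)), 2 * np.cos(2 * x))
```

For resolved modes this agrees with the matrix-vector product to rounding error, at O(N log N) instead of O(N^2) cost.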

If we require that the residual vanishes at a set of grid points, y_j, which is
different from x_j, we get N equations of the form

    \frac{d u_N(y_j, t)}{dt} = c \sum_{i=0}^{N-1} u_N(x_i, t) \left. \frac{d g_i}{dx} \right|_{y_j}
        + \epsilon \sum_{i=0}^{N-1} u_N(x_i, t) \left. \frac{d^2 g_i}{dx^2} \right|_{y_j},

where, as in the previous chapter, g_i are the interpolating functions based on
the points x_j.

In the simple linear case above, the formulation of the collocation method is
straightforward. However, the Galerkin method was not complicated for this
simple linear case either. It is in the realm of nonlinear problems that the
advantage of the collocation method becomes apparent.

Example 3.6 Consider the nonlinear problem

    \frac{\partial u(x,t)}{\partial t} = u(x,t) \frac{\partial u(x,t)}{\partial x},

where the initial conditions are given and the solution and all its derivatives
are smooth and periodic over the time interval of interest.

We construct the residual

    R_N(x,t) = \frac{\partial u_N(x,t)}{\partial t} - u_N(x,t) \frac{\partial u_N(x,t)}{\partial x},


where, as before, u_N is a trigonometric polynomial of degree N. The residual
is required to vanish at the grid points x_j, j = 0, \ldots, N-1, leading to

    \frac{d u_N(x_j, t)}{dt} - u_N(x_j, t) \left. \frac{\partial u_N(x,t)}{\partial x} \right|_{x=x_j} = 0,

i.e.,

    \frac{d u_N(x_j, t)}{dt} - u_N(x_j, t) \sum_{k=0}^{N-1} D_{jk}\, u_N(x_k, t) = 0.

Note that obtaining the equations is equally simple for a nonlinear problem as
for a constant coefficient linear problem. This is in marked contrast to the
Galerkin case.
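The pointwise equations above translate almost verbatim into code. A minimal Python sketch (illustrative only; function names are mine, and the derivative is taken here through the FFT rather than the matrix D) checks the right-hand side on u = sin(x), for which u u_x = (1/2) sin(2x):

```python
import numpy as np

def spectral_derivative(u):
    """Fourier derivative of periodic grid values on [0, 2 pi) via FFT."""
    N = len(u)
    k = np.fft.fftfreq(N, d=1.0 / N)
    return np.fft.ifft(1j * k * np.fft.fft(u)).real

def burgers_rhs(u):
    # du_N(x_j,t)/dt = u_N(x_j,t) * (du_N/dx)(x_j): derivative in Fourier
    # space, nonlinear product evaluated pointwise in physical space
    return u * spectral_derivative(u)

N = 16
x = 2 * np.pi * np.arange(N) / N
assert np.allclose(burgers_rhs(np.sin(x)), 0.5 * np.sin(2 * x))
```

The product u_j (Du)_j is computed entirely in physical space, which is exactly why no mode-coupling formula has to be derived by hand.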

Finally, we revisit the problem where the nonlinearity was so strong that
formulating the Fourier-Galerkin method was untenable.

Example 3.7 Consider the strongly nonlinear problem

    \frac{\partial u(x,t)}{\partial t} = e^{u(x,t)} \frac{\partial u(x,t)}{\partial x}.

We seek solutions of the form

    u_N(x,t) = \sum_{|n| \le N/2} \hat{a}_n(t) e^{inx} = \sum_{j=0}^{N-1} u_N(x_j, t)\, g_j(x),

by requiring that the residual

    R_N(x,t) = \frac{\partial u_N(x,t)}{\partial t} - e^{u_N(x,t)} \frac{\partial u_N(x,t)}{\partial x}

vanishes at the grid points, x_j:

    R_N(x_j, t) = \frac{d u_N(x_j, t)}{dt} - e^{u_N(x_j, t)} \left. \frac{\partial u_N(x,t)}{\partial x} \right|_{x=x_j} = 0.

    Thus, the application of the Fouriercollocation method is easy even forproblems where the FourierGalerkin method fails. This is due to the fact thatwe can easily evaluate the nonlinear function, F(u), in terms of the point valuesof u(x), while it may be very hard, and in some cases impossible, to expressthe Fourier coefcients of F(u) in terms of the expansion coefcients of u(x).In other words: projection is hard, interpolation is easy.


3.3 Stability of the Fourier-Galerkin method

In Chapter 2, we analyzed the truncation error of the Fourier series. This was
done for the continuous approximation and for the discrete approximation, which
are mainly relevant to the Galerkin and collocation methods, respectively. In
this section and the next, we discuss the stability of Fourier spectral
methods. This, too, will be done in two parts: the continuous case, i.e., the
Galerkin method, is addressed in this section; and the discrete case, i.e., the
collocation method, in the next. The analysis is for the semidiscrete form,
where only the spatial components are discretized.

We consider the initial boundary value problem

    \frac{\partial u}{\partial t} = \mathcal{L}u,                            (3.4)

with proper initial data. The solution u(x,t) is in some Hilbert space with the
scalar product (·, ·), e.g., L^2[0, 2π].

A special case of a well posed problem is that in which \mathcal{L} is
semi-bounded in the Hilbert space scalar product, i.e.,
\mathcal{L} + \mathcal{L}^* \le 2\alpha I for some constant \alpha. In this
special case, the Fourier-Galerkin method is stable.

First, let us show that Equation (3.4) with a semi-bounded operator
\mathcal{L} is well posed. To show this, we estimate the derivative of the norm
by considering

    \frac{d}{dt}\|u\|^2 = \frac{d}{dt}(u, u)
        = \Big( \frac{du}{dt}, u \Big) + \Big( u, \frac{du}{dt} \Big)         (3.5)
        = (\mathcal{L}u, u) + (u, \mathcal{L}u)
        = (u, \mathcal{L}^* u) + (u, \mathcal{L}u)
        = (u, (\mathcal{L} + \mathcal{L}^*) u).

Since \mathcal{L} + \mathcal{L}^* \le 2\alpha I, we have
\frac{d}{dt}\|u\|^2 \le 2\alpha \|u\|^2 and so
\frac{d}{dt}\|u\| \le \alpha \|u\|, which means that the norm is bounded,

    \|u(t)\| \le e^{\alpha t} \|u(0)\|,

and the problem is well posed.

In the following, we consider two specific examples of semi-bounded operators
in L^2.

Example 3.8 Consider the operator

    \mathcal{L} = a(x) \frac{\partial}{\partial x},

operating on the Hilbert space L^2[0, 2π], where a(x) is a real periodic
function with a periodic bounded derivative. The adjoint operator is obtained
by integration


by parts:

    (\mathcal{L}u, v)_{L^2[0,2\pi]} = \int_0^{2\pi} a(x) \frac{\partial u}{\partial x}\, \bar{v}\, dx
        = -\int_0^{2\pi} u\, \frac{\partial}{\partial x}\big( a(x) \bar{v} \big)\, dx
        = \Big( u, \Big[ -a(x) \frac{\partial}{\partial x} - \frac{da(x)}{dx} \Big] v \Big)_{L^2[0,2\pi]}.

Thus,

    \mathcal{L}^* = -\frac{\partial}{\partial x}\, a(x) I = -a(x) \frac{\partial}{\partial x} - a'(x) I.

This means that

    \mathcal{L} + \mathcal{L}^* = -a'(x) I,

and since the derivative of a(x) is bounded, -a'(x) \le 2\alpha, we have

    \mathcal{L} + \mathcal{L}^* \le 2\alpha I.

Example 3.9 Consider the operator

    \mathcal{L} = \frac{\partial}{\partial x}\, b(x) \frac{\partial}{\partial x},

where, once again, \mathcal{L} operates on the Hilbert space L^2[0, 2π] and
b(x) is a real periodic nonnegative function with a periodic bounded
derivative.

As above,

    (u, \mathcal{L}u)_{L^2[0,2\pi]} = \Big( u, \frac{\partial}{\partial x} b(x) \frac{\partial}{\partial x} u \Big)_{L^2[0,2\pi]}
        = -\Big( b(x) \frac{\partial}{\partial x} u, \frac{\partial}{\partial x} u \Big)_{L^2[0,2\pi]}.

Thus,

    (u, (\mathcal{L} + \mathcal{L}^*) u)_{L^2[0,2\pi]} = -2 \Big( b(x) \frac{\partial}{\partial x} u, \frac{\partial}{\partial x} u \Big)_{L^2[0,2\pi]} \le 0.

The nice thing about the Fourier-Galerkin method is that it is stable provided
only that the operator is semi-bounded.
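This can be checked concretely for the operator of Example 3.8 with a(x) = sin(x), whose Galerkin system was derived in Example 3.2: semi-boundedness with 2α = max|a'(x)| = 1 should reappear as a bound on the symmetric part of the coefficient matrix. A Python sketch (illustrative only; helper names are mine):

```python
import numpy as np

def galerkin_matrix(N):
    """Galerkin system d a/dt = A a for u_t = sin(x) u_x (Example 3.2),
    modes n = -N/2..N/2 stored at index n + N/2."""
    M = N // 2
    A = np.zeros((N + 1, N + 1))
    for n in range(-M, M + 1):
        if abs(n - 1) <= M:
            A[n + M, n - 1 + M] = (n - 1) / 2
        if abs(n + 1) <= M:
            A[n + M, n + 1 + M] = -(n + 1) / 2
    return A

# semi-boundedness: eigenvalues of A + A^T should not exceed 2*alpha = 1,
# so the coefficient energy can grow at most like e^{t/2}
A = galerkin_matrix(16)
eigs = np.linalg.eigvalsh(A + A.T)
assert eigs.max() <= 1.0 + 1e-12
```

Here A + A^T turns out to be tridiagonal with zero diagonal and off-diagonal entries -1/2, so its spectrum lies strictly inside (-1, 1), mirroring the continuous bound L + L* = -a'(x) I ≤ 2αI.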

Theorem 3.10 Given the problem \partial u/\partial t = \mathcal{L}u, where the operator \mathcal{L} is


Recommended