  • 149 PDE 7.2 Partial Differential Equations (PDE)

    7.2 Partial Differential Equations (PDE)

    PDE overview. Examples of PDEs:

    Laplace's equation

    important in many fields of science:

    electromagnetism, astronomy, fluid dynamics

    describes the behaviour of electric, gravitational, and fluid potentials

    the general theory of solutions to Laplace's equation is called potential theory

    in the study of heat conduction, the Laplace equation is the steady-state heat equation


    Maxwell's equations: a set of four partial differential equations relating the electric and magnetic fields

    they describe the properties of the electric and magnetic fields and relate them to their sources, charge density and current density

    Navier-Stokes equations: fluid dynamics (dependencies between pressure, velocity of fluid particles and fluid viscosity)

    Equations of linear elasticity: vibrations in elastic materials with given properties, under compression and stretching

    Schrödinger equation: quantum mechanics; describes how the quantum state of a physical system changes in time. It is as central to quantum mechanics as Newton's laws are to classical mechanics

    Einstein field equations: a set of ten equations in Einstein's theory of general relativity; they describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy.


    7.3 2nd order PDEs

    We consider here only the single-equation case. In many practical situations 2nd order PDEs occur, for example:

    Heat equation: u_t = u_xx

    Wave equation: u_tt = u_xx

    Laplace's equation: u_xx + u_yy = 0.

    A general second order PDE has the (canonical) form

    a u_xx + b u_xy + c u_yy + d u_x + e u_y + f u + g = 0.

    Assuming not all of a, b and c are zero, the type depends on the discriminant b² − 4ac:
    b² − 4ac > 0: hyperbolic equation, typical representative the wave equation;
    b² − 4ac = 0: parabolic equation, typical representative the heat equation;
    b² − 4ac < 0: elliptic equation, typical representative the Poisson equation.
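The classification above can be sketched in code; this is a minimal illustration (the function name `classify_pde` is mine, not from the notes):

```python
# A minimal sketch: classify the 2nd order PDE
# a*u_xx + b*u_xy + c*u_yy + (lower-order terms) = 0 by its discriminant.

def classify_pde(a, b, c):
    """Return the type of the PDE with leading coefficients a, b, c."""
    d = b * b - 4 * a * c
    if d > 0:
        return "hyperbolic"
    if d == 0:
        return "parabolic"
    return "elliptic"

# Wave equation u_tt = u_xx, written as u_xx - u_tt = 0: a = 1, c = -1
print(classify_pde(1, 0, -1))  # hyperbolic
# Heat equation u_t = u_xx, i.e. u_xx - u_t = 0: a = 1, b = c = 0
print(classify_pde(1, 0, 0))   # parabolic
# Laplace's equation u_xx + u_yy = 0: a = c = 1
print(classify_pde(1, 0, 1))   # elliptic
```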


    If the coefficients change in time, an equation can change its type

    In a system of equations, each equation can be of a different type

    Of course, the problem can be non-linear or of higher order as well

    In general,

    Hyperbolic PDEs describe time-dependent conservative physical processes, like wave propagation

    Parabolic PDEs describe time-dependent dissipative physical processes, like diffusion, which evolve towards some fixed point

    Elliptic PDEs describe systems that have reached a fixed point and are therefore independent of time


    7.4 Time-independent PDEs

    7.4.1 Finite Difference Method (FDM)

    A discrete mesh is placed over the solution region

    Derivatives are replaced by finite difference approximations

    Example. Consider the Poisson equation in 2D:

    −u_xx − u_yy = f,  0 ≤ x ≤ 1,  0 ≤ y ≤ 1,          (19)

    with boundary values as on the figure on the left:


    [Figure: the unit square domain with its boundary values 0 and 1 (left), and the same square with the discrete mesh nodes marked (right)]

    Define the discrete nodes as on the figure on the right

    The inner nodes, where the computations are carried out, are defined by

    (x_i, y_j) = (ih, jh),  i, j = 1, ..., n

    (in our case n = 2 and h = 1/(n+1) = 1/3)


    Consider here the case f = 0. Replacing the 2nd order derivatives with standard 2nd order central differences at the mesh points, we get

    (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² + (u_{i,j+1} − 2u_{i,j} + u_{i,j−1})/h² = 0,  i, j = 1, ..., n,

    where u_{i,j} is the approximation of the real solution u = u(x_i, y_j) at the point (x_i, y_j), and is a boundary value if i or j is 0 or n+1. As a result we get:

    4u_{1,1} − u_{0,1} − u_{2,1} − u_{1,0} − u_{1,2} = 0
    4u_{2,1} − u_{1,1} − u_{3,1} − u_{2,0} − u_{2,2} = 0
    4u_{1,2} − u_{0,2} − u_{2,2} − u_{1,1} − u_{1,3} = 0
    4u_{2,2} − u_{1,2} − u_{3,2} − u_{2,1} − u_{2,3} = 0.

    In matrix form:


    In matrix form:

    Ax = [  4  −1  −1   0 ] [ u_{1,1} ]   [ u_{0,1} + u_{1,0} ]   [ 0 ]
         [ −1   4   0  −1 ] [ u_{2,1} ] = [ u_{3,1} + u_{2,0} ] = [ 0 ] = b.
         [ −1   0   4  −1 ] [ u_{1,2} ]   [ u_{0,2} + u_{1,3} ]   [ 1 ]
         [  0  −1  −1   4 ] [ u_{2,2} ]   [ u_{3,2} + u_{2,3} ]   [ 1 ]

    This positive definite system can be solved directly with Cholesky factorisation (Gauss elimination for a symmetric matrix, where the factorisation A = LᵀL is found) or iteratively. The exact solution of the problem is:

    x = (u_{1,1}, u_{2,1}, u_{1,2}, u_{2,2})ᵀ = (0.125, 0.125, 0.375, 0.375)ᵀ.
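The 4 × 4 model system can be checked numerically. A sketch assuming NumPy is available: factorise A = LLᵀ with `numpy.linalg.cholesky` (NumPy returns the lower triangular factor) and solve by forward and back substitution:

```python
# Verify the 4x4 example: build A and b, solve via Cholesky factorisation,
# and compare with the stated exact solution.
import numpy as np

A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
b = np.array([0., 0., 1., 1.])

L = np.linalg.cholesky(A)    # A = L L^T, L lower triangular
y = np.linalg.solve(L, b)    # forward substitution: L y = b
x = np.linalg.solve(L.T, y)  # back substitution:    L^T x = y

print(x)  # [0.125 0.125 0.375 0.375]
```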


    In the general case the n² × n² Laplace matrix has the form:

    A = [  B  −I   0  ...   0 ]
        [ −I   B  −I  ...  ...]
        [  0  −I   B  ...   0 ]                        (20)
        [ ... ... ... ...  −I ]
        [  0  ...  0  −I    B ]

    where the n × n matrix B is of the form:

    B = [  4  −1   0  ...   0 ]
        [ −1   4  −1  ...  ...]
        [  0  −1   4  ...   0 ]
        [ ... ... ... ...  −1 ]
        [  0  ...  0  −1    4 ].

    This means that most of the elements of the matrix A are zero. What are such matrices called? A is a sparse matrix
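The block structure (20) can be assembled with Kronecker products. A sketch in NumPy, dense here for clarity; in practice one would use a sparse storage scheme (see section 7.5). The helper name `laplace_matrix` is illustrative:

```python
# Assemble the n^2 x n^2 Laplace matrix (20): A = I (x) B - C (x) I,
# where B = tridiag(-1, 4, -1) and C marks the block off-diagonals.
import numpy as np

def laplace_matrix(n):
    I = np.eye(n)
    B = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiag(-1, 4, -1)
    C = np.eye(n, k=1) + np.eye(n, k=-1)                  # block off-diagonal pattern
    return np.kron(I, B) - np.kron(C, I)

# For n = 2 this reproduces the 4x4 matrix of the model problem:
print(laplace_matrix(2))
```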


    7.4.2 Finite Element Method (FEM)

    Consider as an example the Poisson equation

    −Δu(x) = f(x),  x ∈ Ω,
     u(x) = g(x),  x ∈ ∂Ω,

    where the Laplacian Δ is defined by

    (Δu)(x) = (∂²u/∂x² + ∂²u/∂y²)(x),  x = (x, y)ᵀ.

    In the Finite Element Method the region Ω is divided into finite elements.


    A region divided into finite elements: [figure]


    Consider the unit square.

    The problem in variational formulation: find u_h ∈ V_h such that

    a(u_h, v) = (f, v)  ∀ v ∈ V_h,                     (21)

    where in the case of the Poisson equation

    a(u, v) = ∫_Ω ∇u · ∇v dx


    and (u, v) = ∫_Ω u(x) v(x) dx. The gradient of a scalar function f(x, y) is defined by:

    ∇f = (∂f/∂x, ∂f/∂y).

    In FEM, equation (21) needs to be satisfied on a set of test functions φ_i = φ_i(x), which are defined such that

    φ_i(x) = 1 if x = x_i,  φ_i(x) = 0 if x = x_j, j ≠ i,

    and it is demanded that (21) is satisfied for each φ_i (i = 1, ..., N).

    As a result, a system of linear equations is obtained


    The matrix is identical to the matrix from (20)!

    Benefits of FEM over finite difference schemes:

    more flexibility in choosing the discretisation

    existence of thorough mathematical constructs for proofs of convergence and error estimates

    7.5 Sparse matrix storage schemes

    As we saw, different discretisation schemes give systems with similar matrix structures

    (In addition to FDM and FEM, other discretisation schemes such as the Finite Volume Method are often used, but we do not consider them here)

    In each case, the result is a system of linear equations with a sparse matrix.

    How to store sparse matrices?


    7.5.1 Triple storage format

    For an n × m matrix A, each nonzero is stored with 3 values: the integers i and j and (in most applications) the real matrix element a_ij ⇒ three arrays:

    indi(1:nz), indj(1:nz), vals(1:nz)

    of length nz, the number of nonzeroes of the matrix A

    Advantages of the scheme:

    Easy to refer to a particular element

    Freedom to choose the order of the elements

    Disadvantages:

    Nontrivial to find, for example, all nonzeroes of a particular row or column and their positions
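A sketch of the triple (coordinate) scheme in Python for the 4 × 4 model matrix, here with 0-based indexing. It also shows the stated disadvantage: finding one row requires scanning all nz entries:

```python
# Triple storage: three parallel arrays indi, indj, vals of length nz.
import numpy as np

A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])

indi, indj = np.nonzero(A)   # row and column indices of the nonzeroes
vals = A[indi, indj]
nz = len(vals)               # number of nonzeroes of A

print(nz)  # 12
# Finding all nonzeroes of row 2 means scanning every stored entry:
row2 = [(int(j), float(v)) for i, j, v in zip(indi, indj, vals) if i == 2]
print(row2)  # [(0, -1.0), (2, 4.0), (3, -1.0)]
```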


    7.5.2 Column-major storage format

    For each column k of the matrix A, a vector row_ind gives the row numbers i for which a_ik ≠ 0.

    To store the whole matrix, the nonzeroes of each column are

    appended to a 1-dimensional array row_ind(1:nz),

    and an array cptr(1:M) is introduced, referring to the start of each column in row_ind.

    row_ind(1:nz), cptr(1:M), vals(1:nz)

    Advantages:

    Easy to find the nonzeroes of a matrix column together with their positions


    Disadvantages:

    Algorithms become more difficult to read

    Difficult to find nonzeroes in a particular row

    7.5.3 Row-major storage format

    For each row k of the matrix A, a vector col_ind gives the column numbers j for which a_kj ≠ 0.

    To store the whole matrix, the nonzeroes of each row are

    appended to a 1-dimensional array col_ind(1:nz),

    and an array rptr(1:N) is introduced, referring to the start of each row in col_ind.

    col_ind(1:nz), rptr(1:N), vals(1:nz)


    Advantages:

    Easy to find the nonzeroes of a matrix row together with their positions

    Disadvantages:

    Algorithms become more difficult to read.

    Difficult to find nonzeroes in a particular column.
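A sketch of the row-major scheme (essentially CSR format) for the model matrix, with 0-based indexing. Note one common variant, used here: rptr gets N+1 entries so that the slice rptr[i]:rptr[i+1] delimits row i:

```python
# Row-major (CSR-like) storage: col_ind, vals of length nz, plus row pointers.
import numpy as np

A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])

col_ind, vals, rptr = [], [], [0]
for row in A:
    for j, v in enumerate(row):
        if v != 0:
            col_ind.append(j)
            vals.append(float(v))
    rptr.append(len(vals))   # start of the next row in col_ind/vals

# All nonzeroes of row i are the contiguous slice rptr[i]:rptr[i+1]:
i = 2
print(col_ind[rptr[i]:rptr[i+1]])  # [0, 2, 3]
print(vals[rptr[i]:rptr[i+1]])     # [-1.0, 4.0, -1.0]
```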

    7.5.4 Combined schemes

    The triple format is enhanced with cols(1:nz), cptr(1:M), rows(1:nz), rptr(1:N). Here cols and rows refer to the corresponding matrix A values in the triple format. E.g., to access row-major type structures, one has to index through rows(1:nz)

    Advantages:


    All operations easy to perform

    Disadvantages:

    More memory needed.

    Element access always goes through an extra level of indexing


    8 Iterative methods

    8.1 Problem setup

    Iterative methods for solving systems of linear equations with sparse matrices

    Consider the system of linear equations

    Ax = b,                                            (22)

    where the N × N matrix A

    is sparse:

    the number of elements for which A_ij ≠ 0 is O(N).

    Typical example: Poisson equation discretisation on an n × n mesh (N = n · n)

    on average 5 nonzeroes per row of A


    For direct methods, like LU-factorisation:

    memory consumption (together with fill-in): O(N²) = O(n⁴)

    flops: 2/3 N³ + O(N²) = O(n⁶)

    Banded matrix LU-decomposition:

    memory consumption (together with fill-in): O(N·L) = O(n³), where L is the bandwidth

    flops: 2/3 N·L² + O(N·L) = O(n⁴)


    8.2 Jacobi Method

    An iterative method for solving (22).

    With a given initial approximation x^(0), the approximate solutions x^(k), k = 1, 2, 3, ... of the real solution x of (22) are calculated as follows:

    the i-th component of x^(k+1), x_i^(k+1), is obtained by taking from (22) only the i-th row:

    A_{i,1} x_1 + ... + A_{i,i} x_i + ... + A_{i,N} x_N = b_i;

    solving this with respect to x_i, an iterative scheme is obtained:

    x_i^(k+1) = (1/A_{i,i}) (b_i − Σ_{j≠i} A_{i,j} x_j^(k))          (23)


    The calculations are in essence parallel with respect to i: there is no dependence on the other components x_j^(k+1), j ≠ i. As the iteration stopping criterion one can take, for example:

    ||x^(k+1) − x^(k)|| < ε  or  k+1 ≥ kmax,           (24)

    where

    ε is a given error tolerance

    kmax is the maximal number of iterations
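Scheme (23) with stopping criterion (24) can be sketched as follows (a NumPy illustration on the 4 × 4 model problem; the vectorised update is equivalent to looping over i):

```python
# Jacobi iteration: each new component uses only the previous iterate.
import numpy as np

def jacobi(A, b, x0, eps=1e-10, kmax=1000):
    x = x0.copy()
    D = np.diag(A)                        # diagonal entries A_{i,i}
    for k in range(kmax):
        # b - A@x + D*x equals b_i - sum_{j != i} A_{i,j} x_j for every i
        x_new = (b - A @ x + D * x) / D
        if np.linalg.norm(x_new - x) < eps:   # criterion (24)
            return x_new, k + 1
        x = x_new
    return x, kmax

A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
b = np.array([0., 0., 1., 1.])
x, its = jacobi(A, b, np.zeros(4))
print(x)  # converges to [0.125 0.125 0.375 0.375]
```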

    memory consumption (no fill-in):

    N_{A≠0}, the number of nonzeroes of the matrix A

    Number of iterations to reduce ||x^(k) − x||_2 < ε ||x^(0) − x||_2:

    #IT ≈ (2 ln(1/ε) / π²) (n+1)² = O(n²)


    flops/iteration ≈ 10 N = O(n²)  ⇒

    #IT · flops/iteration = C n⁴ + O(n³) = O(n⁴).

    The coefficient C in front of n⁴ is:

    C ≈ (2 ln(1/ε) / π²) · 10 ≈ 2 ln(1/ε)

    Is this good or bad? It is not very good at all... We need some better methods, because

    for LU-decomposition (banded matrices) we had C = 2/3


    8.3 Conjugate Gradient Method (CG)

    Calculate r^(0) = b − A x^(0) with given starting vector x^(0)
    for i = 1, 2, ...
        solve M z^(i−1) = r^(i−1)            # we assume here that M = I for now
        ρ_{i−1} = (r^(i−1))ᵀ z^(i−1)
        if i == 1
            p^(1) = z^(0)
        else
            β_{i−1} = ρ_{i−1} / ρ_{i−2}
            p^(i) = z^(i−1) + β_{i−1} p^(i−1)
        endif
        q^(i) = A p^(i);  α_i = ρ_{i−1} / ((p^(i))ᵀ q^(i))
        x^(i) = x^(i−1) + α_i p^(i);  r^(i) = r^(i−1) − α_i q^(i)
        check convergence; continue if needed
    end
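A runnable sketch of the algorithm with M = I (no preconditioning), using the same variable names, tested on the 4 × 4 model problem:

```python
# Conjugate Gradient method, unpreconditioned (M = I).
import numpy as np

def cg(A, b, x0, eps=1e-12, itmax=100):
    x = x0.copy()
    r = b - A @ x                 # r^(0)
    rho_old = None
    p = None
    for i in range(itmax):
        z = r                     # solve M z = r with M = I
        rho = r @ z
        if i == 0:
            p = z.copy()
        else:
            beta = rho / rho_old
            p = z + beta * p
        q = A @ p
        alpha = rho / (p @ q)
        x = x + alpha * p
        r = r - alpha * q
        rho_old = rho
        if np.linalg.norm(r) < eps:   # check convergence
            break
    return x

A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
b = np.array([0., 0., 1., 1.])
print(cg(A, b, np.zeros(4)))  # -> approximately [0.125 0.125 0.375 0.375]
```

For a symmetric positive definite N × N matrix, CG converges in at most N iterations in exact arithmetic, so this 4 × 4 example finishes in at most 4 steps.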


    memory consumption (no fill-in):

    N_{A≠0} + O(N) = O(n²),

    where N_{A≠0} is the number of nonzeroes of A

    Number of iterations to achieve ||x^(k) − x||_2 < ε ||x^(0) − x||_2:

    #IT ≈ (ln(1/ε) / 2) √κ₂(A) = O(n)

    flops/iteration ≈ 24 N = O(n²)  ⇒

    #IT · flops/iteration = C n³ + O(n²) = O(n³),


    where C ≈ 12 ln(1/ε) √κ₂(A) / n.

    ⇒ C depends on the condition number of A! This paves the way for the preconditioning technique


    8.4 Preconditioning

    Idea: replace Ax = b with the system M⁻¹Ax = M⁻¹b. Apply CG to

    Bx = c,                                            (25)

    where B = M⁻¹A and c = M⁻¹b. But how to choose M? The preconditioner M = Mᵀ is to be chosen such that

    (i) the problem Mz = r is easy to solve

    (ii) the matrix B is better conditioned than A, meaning that κ₂(B) < κ₂(A)


    Then

    #IT(25) = O(√κ₂(B)) < O(√κ₂(A)) = #IT(22)

    but

    flops/iteration (25) = flops/iteration (22) + cost of (i) > flops/iteration (22)

    ⇒ we need to make a compromise!

    (The extreme cases are M = I and M = A)

    Preconditioned Conjugate Gradient (PCG) Method:

    obtained by taking M ≠ I in the previous algorithm


    8.5 Preconditioner examples

    Diagonal Scaling (or Jacobi method)

    M = diag(A)

    (i) flops/iteration = N

    (ii) κ₂(B) = κ₂(A)

    Is this good? No large improvement is to be expected
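This can be seen directly on the model problem: there diag(A) = 4I, so B = M⁻¹A = A/4 and the condition number does not change at all. A quick NumPy check (`numpy.linalg.cond` uses the 2-norm by default):

```python
# Diagonal scaling on the 4x4 model matrix: kappa_2(B) equals kappa_2(A).
import numpy as np

A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
M = np.diag(np.diag(A))       # M = diag(A) = 4 I here
B = np.linalg.solve(M, A)     # B = M^{-1} A

print(np.linalg.cond(A))      # kappa_2(A)
print(np.linalg.cond(B))      # identical: no improvement from diagonal scaling
```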


    Incomplete LU-factorisation

    M = L̃Ũ,

    where L̃ and Ũ are approximations to the actual factors L and U of the LU-decomposition:

    nonzeroes in L̃_ij and Ũ_ij only where A_ij ≠ 0 (i.e. fill-in is ignored in the LU-factorisation algorithm)

    (i) flops/iteration = O(N)

    (ii) κ₂(B) < κ₂(A)

    How good is this preconditioner? At least some improvement is expected!

    κ₂(B) = O(n²)


    Gauss-Seidel method

    do k=1,2,...
      do i=1,...,n

        x_i^(k+1) = (1/A_{i,i}) (b_i − Σ_{j=1}^{i−1} A_{i,j} x_j^(k+1) − Σ_{j=i+1}^{n} A_{i,j} x_j^(k))      (26)

      enddo
    enddo

    Note that in a real implementation the method is done like:

    do k=1,2,...
      do i=1,...,n

        x_i = (1/A_{i,i}) (b_i − Σ_{j≠i} A_{i,j} x_j)              (27)

      enddo
    enddo

    Do you see a problem with this preconditioner (with the PCG method)? The preconditioner is not symmetric, which makes CG fail to converge!
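The in-place update (27) can be sketched as follows; unlike Jacobi, component i immediately uses the already-updated components j < i (a NumPy illustration on the model problem):

```python
# Gauss-Seidel iteration: the in-place form (27).
import numpy as np

def gauss_seidel(A, b, x0, kmax=100):
    x = x0.copy()
    n = len(b)
    for k in range(kmax):
        for i in range(n):
            # sum over j != i, using the newest available values of x
            s = A[i, :] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
b = np.array([0., 0., 1., 1.])
print(gauss_seidel(A, b, np.zeros(4)))  # -> approximately [0.125 0.125 0.375 0.375]
```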


    Symmetric Gauss-Seidel method

    To get a symmetric preconditioner, a backward sweep is added after the forward sweep:

    do k=1,2,...
      do i=1,...,n
        x_i = (1/A_{i,i}) (b_i − Σ_{j≠i} A_{i,j} x_j)
      enddo
      do i=n,...,1
        x_i = (1/A_{i,i}) (b_i − Σ_{j≠i} A_{i,j} x_j)
      enddo
    enddo
