PH36010: Numerical Methods - Portfolio

Mr. Benjamen P. Reed (110108461)

[email protected]

IMPACS, Aberystwyth University

November 13, 2013

Abstract

A number of numerical methods were investigated, reproduced and tested rigorously using FORTRAN 90. Topics covered include: Linear & Cubic Spline Interpolation; Numerical Root-Finding; Numerical Integration; Discretized Fourier Transforms; and computational evaluation of coupled ordinary differential equation systems. Various algorithms were discussed and were compared against each other to determine their functionality and shortfalls. Full FORTRAN code is available in the appendices.

Table of Contents

1. Introduction
2. Linear & Cubic Spline Interpolation
   2.1 Linear Interpolation
   2.2 Cubic Spline Interpolation
   2.3 Linear & Cubic Spline Interpolation of a Real Dataset
3. Root-Finding
   3.1 Bisection Method of Root Finding
   3.2 Newton’s Method of Root Finding
   3.3 Comparison of Newton’s Method against Bisection Method
4. Numerical Integration
   4.1 Riemann-Method Integration
   4.2 Trapezium Integration Method
   4.3 Simpson’s 1/3 Rule Integration Method
   4.4 Comparison of Numerical Integration Methods
5. Numerical Fourier Analysis: The Fast Fourier Transform
   5.1 Application of the Fast Fourier Transform Method
   5.2 The Square Wave and the Sawtooth Wave
6. Numerical Solutions to Ordinary Differential Equations
   6.1 Euler’s Method
   6.2 Modified Euler’s Method
   6.3 Comparison of Euler’s Method & Modified Euler’s Method
7. Numerical Solutions to Coupled Differential Equations
   7.1 Fourth-Order Runge-Kutta Method
   7.2 Modified Euler’s Method & Runge-Kutta for Coupled ODEs
8. Conclusion
Acknowledgements
References
Appendices
   A Linear & Cubic Spline Interpolation
   B Root Finding Methods
   C Numerical Integration
   D Fast Fourier Transform
   E Ordinary Differential Equations

1. Introduction

Numerical analysis is a branch of mathematics focusing on algorithms (i.e. numerical methods) that use numerical approximation. This involves producing a sequence of approximations by repeating the same procedure over many iterations. By comparison, analytical analysis is used to understand the mechanism and physical effects of a system through a model problem (e.g. Newton’s equations of motion). Numerical methods are useful when obtaining an exact answer analytically is not possible, or would be too time-consuming. However, it is important to remember that any solution a numerical method provides is merely an approximation and may not account for other solutions to the problem; this is due to the convergent nature of the methods employed. Nevertheless, these methods can be iterated as many times as necessary until the required tolerance is reached.

Programming languages are often very useful for implementing numerical analysis; one such language is FORTRAN (FORmula TRANslation). FORTRAN is a compiled, imperative programming language designed for numeric computation in science and engineering. It has been used for many applications, including modelling fluid dynamics, numerical weather prediction, and computational chemistry. It is particularly well suited to handling large amounts of data in arrays of varying dimensions; for example, it can work with 6-dimensional arrays, so long as there is enough storage available (a large 6-dimensional array could require several terabytes of space). FORTRAN can, however, also be used for simpler numerical methods, some of which are explained and compared in this report.

2. Linear & Cubic Spline Interpolation

2.1 Linear Interpolation

When presented with a discrete set of data points (xi, fi), the simplest method of approximating the underlying function (and hence any intermediate value within the given dataset) is to construct a straight line between each pair of data points (xi, xi+1). Linear interpolation is easy to implement and simple to understand; however, if the number of data points in the original dataset is too few, the method is susceptible to producing an inaccurate function. This is analogous to aliasing when sampling a signal at too low a sampling frequency. The resulting function may look nothing like the original, hence it is important that a sufficient number of data points is available to produce an accurate approximation.

To linearly interpolate between two points in a data set, the following approximation function is used…

f(x) = f_i + [(x − x_i) / (x_{i+1} − x_i)] (f_{i+1} − f_i) + Δf(x)   (1)

…where Δf(x) is the error function, given by…

Δf(x) = (γ/8)(x_{i+1} − x_i)^2   (2)

Here γ can be shown to be the second derivative of the function f(x), so long as the function is smooth and continuous. Using this information, it was possible to create a FORTRAN program capable of interpolating between each pair of points in a given dataset. By applying equations 1 and 2 to each pair in turn, the program could find the intermediate (i.e. half-way) value of y between them, and the error on the calculated value. The full code of this program can be found in appendix A.
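To make the procedure concrete, a minimal sketch of the half-way interpolation step is given below. The sample arrays and the program name are invented for illustration; this is not the appendix A code.

```fortran
program linear_interp_demo
   implicit none
   integer, parameter :: n = 5
   real :: x(n), f(n), xm, fm
   integer :: i

   ! Hypothetical sample data (not the stellar dataset of section 2.3)
   x = (/ 0.0, 0.1, 0.2, 0.3, 0.4 /)
   f = (/ 1.0, 1.2, 0.9, 0.7, 1.1 /)

   ! Interpolate at the half-way point of each interval using equation 1
   do i = 1, n-1
      xm = 0.5*(x(i) + x(i+1))
      fm = f(i) + (xm - x(i))/(x(i+1) - x(i)) * (f(i+1) - f(i))
      print '(a,f6.3,a,f6.3)', 'x = ', xm, '   f(x) = ', fm
   end do
end program linear_interp_demo
```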

2.2 Cubic Spline Interpolation

Generally, a spline is a method of piecewise polynomial interpolation that produces a smooth curve through every point of a function or dataset whilst adhering to certain continuity conditions. These curves are connected together to form a larger curve that represents the entire function or dataset. Higher-order splines are more accurate but can be unnecessarily complicated, especially if the function describing a dataset is simple. For this reason, cubic splines (third-order polynomials) are the most popular way of producing a spline interpolant without investing a large amount of time designing and coding an appropriate program (Judd, 1998). The cubic spline takes the form…

S_i(x) = a_i(x − x_i)^3 + b_i(x − x_i)^2 + c_i(x − x_i) + d_i   (3)

This expression is modified using the condition Si(xi) = yi to equate di to yi and hence produce another expression for yi+1…

y_i = d_i   (4)

y_{i+1} = a_i(x_{i+1} − x_i)^3 + b_i(x_{i+1} − x_i)^2 + c_i(x_{i+1} − x_i) + y_i   (5)

For a cubic spline of n+1 data points, there are n splines, for which there are four unknown constants a, b, c and d for each spline. Hence for a dataset of n+1 points, there are 4n unknown constants. Between each set of data points (xi and xi+1), there is a different cubic polynomial Si. The interval between each pair of points is denoted as hi and then the second derivative of Si (now denoted as σ) is calculated. Using σi and σi+1, the values of ai and bi can be found and then substituted back into the spline expressions (3, 4, 5) to find ci.

a_i = (σ_{i+1} − σ_i) / (6h_i)

b_i = σ_i / 2

c_i = (y_{i+1} − y_i) / h_i − h_i(2σ_i + σ_{i+1}) / 6

By combining the first derivative of equation 5 with the first derivative over the previous interval, a system of n−1 linear equations is obtained. This tridiagonal system is solved using Gaussian elimination to ascertain the spline coefficients required to construct the individual curves between each pair of data points (Pollock, 1998).
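As an illustration of that elimination step, the sketch below solves a small tridiagonal system with the Thomas algorithm, a specialised form of the Gaussian elimination described above. The 4×4 test system and the routine name solve_tridiag are assumed for demonstration and are not taken from appendix A.

```fortran
program tridiag_demo
   implicit none
   integer, parameter :: n = 4
   real :: sub(n), diag(n), sup(n), rhs(n), sigma(n)

   ! Illustrative 4x4 tridiagonal system (not spline data from appendix A)
   sub  = (/ 0.0, 1.0, 1.0, 1.0 /)      ! sub(1) unused
   diag = (/ 4.0, 4.0, 4.0, 4.0 /)
   sup  = (/ 1.0, 1.0, 1.0, 0.0 /)      ! sup(n) unused
   rhs  = (/ 6.0, 12.0, 18.0, 19.0 /)

   call solve_tridiag(n, sub, diag, sup, rhs, sigma)
   print *, 'solution: ', sigma

contains

   subroutine solve_tridiag(n, sub, diag, sup, rhs, x)
      ! Gaussian elimination specialised to a tridiagonal matrix (Thomas algorithm)
      integer, intent(in) :: n
      real, intent(in)    :: sub(n), diag(n), sup(n), rhs(n)
      real, intent(out)   :: x(n)
      real :: d(n), r(n), m
      integer :: i
      d = diag
      r = rhs
      do i = 2, n                      ! forward elimination
         m = sub(i) / d(i-1)
         d(i) = d(i) - m*sup(i-1)
         r(i) = r(i) - m*r(i-1)
      end do
      x(n) = r(n) / d(n)               ! back substitution
      do i = n-1, 1, -1
         x(i) = (r(i) - sup(i)*x(i+1)) / d(i)
      end do
   end subroutine solve_tridiag

end program tridiag_demo
```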

2.3 Linear & Cubic Spline Interpolation of a Real Dataset

A program was written in FORTRAN to conduct both linear and cubic spline interpolation. The program was tested using a real set of data not produced by a predetermined function. The data provided in table 1 of appendix A correspond to the apparent magnitude of a star over a period of one second at 0.1-second intervals. The program imported the data into separate arrays and then interpolated at intervals of 0.05 seconds using both the linear and cubic spline methods. The resulting outputs were written to a comma-separated values (CSV) file and then loaded into gnuplot to produce a graph comparing the two methods (figure 1).

For the linear sections of the data, the two methods appear to conform quite closely. The difference between the methods becomes apparent at the minima and maxima of the dataset. The linear interpolant does not compensate for the curves in the data and thus ‘cuts the corner’, giving rise to large inaccuracies. The cubic spline, on the other hand, deals with the changing gradient close to the critical points relatively well, although due to the small number of interpolation points it too shows evidence of truncation in these areas. For demonstration purposes, a second spline curve is shown, constructed using the built-in ‘cspline’ smoothing function of gnuplot. It shows that, with enough interpolation points, a very smooth curve can be produced with no truncation at the critical points.

3. Root Finding

Quite often in mathematics and physics, it is useful to determine the roots of a function, i.e. the values of x such that f(x) = 0. This is a common boundary condition in physical systems, but it is often not possible to solve such nonlinear root-finding problems analytically. Numerical methods are useful in this situation, and there is a multitude of iterative root-finding algorithms that can be utilised to approximate the solution. An algorithm is chosen depending on the situation and its convergence rate (i.e. how rapidly the algorithm converges to the solution within a given tolerance). Two common, and relatively simple, root-finding algorithms are the ‘bisection method’ and ‘Newton’s method’ (Kopecky, 2007). Both are explained in the following sections.

3.1 Bisection Method of Root Finding

The bisection, or interval-halving, method is one of the simplest and most robust methods for finding roots of a one-dimensional continuous function. It employs a basic convergence scheme that is easy to program, requiring the user to input an acceptable pair of boundaries and a desired
tolerance. The process begins with two starting points, an upper bound xb and a lower bound xa that lie on either side of the root. An intermediate point xc is calculated using the equation…

x_c = (x_a + x_b) / 2   (6)

The values of f(x) are determined for xa, xb and xc. If the sign of f(x) changes between xa and xb, then a root exists in the interval, as expected if the initial interval is chosen carefully. The main mechanic of the bisection method is to change the value of xa or xb depending on which subinterval the root lies within. If f(xa) × f(xc) < 0, then the root lies between xa and xc, and hence xc is set as the new xb. Conversely, if f(xa) × f(xc) > 0, then the root lies between xc and xb, and hence xc is set as the new xa. This process serves to reduce the size of the target interval, and subsequent iterations shrink it further until the desired tolerance is achieved, described by the equation…

error = (x_b − x_a) / 2^n   (7)

…where n is the iteration number. Once the desired tolerance is reached, the next value of xc is usually taken as the approximate root of the function. This method is also useful if the function in question is not smooth; because it does not rely on derivatives, it does not exploit the curvature of the function, so it can find roots of discontinuous functions as well. One disadvantage of this algorithm, and many like it, is that it demands a level of informed guesswork with the initial input. Knowledge of the target function’s profile is required to provide the algorithm with sensible initial interval bounds, so it is common practice to plot the function first using a graphing program to gain some approximate knowledge of where the roots should be (Kopecky, 2007).
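A minimal sketch of the interval-halving loop is shown below. The test function f(x) = x² − 2 (root at √2) and the bracket [1, 2] are illustrative choices, not the tests of section 3.3 or the appendix B program.

```fortran
program bisection_demo
   implicit none
   real :: xa, xb, xc, tol
   integer :: n

   xa = 1.0          ! lower bound (must bracket the root)
   xb = 2.0          ! upper bound
   tol = 1.0e-5
   n = 0

   do while ( (xb - xa)/2.0 > tol )
      xc = (xa + xb)/2.0
      if ( f(xa)*f(xc) < 0.0 ) then
         xb = xc        ! root lies in [xa, xc]
      else
         xa = xc        ! root lies in [xc, xb]
      end if
      n = n + 1
   end do

   print '(a,f10.6,a,i3,a)', 'root ~ ', (xa + xb)/2.0, ' after ', n, ' iterations'

contains

   real function f(x)
      real, intent(in) :: x
      f = x*x - 2.0     ! illustrative test function, root at sqrt(2)
   end function f

end program bisection_demo
```

Because the bracket is halved on every pass, the iteration count needed for a given tolerance follows directly from equation 7.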

3.2 Newton’s Method of Root Finding

Newton’s method of root finding (a.k.a. the Newton-Raphson method) is a very popular algorithm due to its high convergence rate and accuracy. It employs the principle of successive linearisation, whereby a difficult-to-solve non-linear curve is solved through a succession of linear, tangential approximations that converge on the root of the non-linear curve. Another advantage of the Newton method is that only one input is required: the initial x-position, x1. The algorithm calculates the derivative of the curve at that point, producing a tangential linear equation f1(x), and the next estimate follows from the iteration…

x_{n+1} = x_n − f(x_n) / f′(x_n)   (8)

…which can be solved for f1(x) = 0 (i.e. the x-axis intercept). The x-intercept of the tangent is assigned as the new calculation point x2, and the derivative of the function is taken at this point to produce the new tangential linear equation f2(x). The process repeats until the required tolerance is reached (equation 7). This derivative-based process gives rise to extremely rapid convergence, even if the required tolerance is very tight. The rapid convergence comes at a cost, however. The non-linear function must be differentiable, and the derivative must be supplied, which becomes a problem if the function in question is difficult to differentiate or is discontinuous. Another issue is that global convergence is likely to fail in many cases. For example, if the iterative method reaches a local minimum or maximum (i.e. where f′(xn) = 0), then the algorithm cannot compute the next step, as the tangent line will never intersect the x-axis. In other situations, the method may not converge on the desired root; instead a
tangential equation may intersect the x-axis at a value far away from the desired root. Clearly, the Newton method’s success depends critically on the initial guess from the user (Kopecky, 2007).
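The corresponding sketch for Newton’s method is given below, again using the illustrative function f(x) = x² − 2 with its analytic derivative supplied by hand; it follows the iteration of equation 8 but is not the appendix B code.

```fortran
program newton_demo
   implicit none
   real :: x, x_new, tol
   integer :: n

   x = 1.0            ! initial guess supplied by the user
   tol = 1.0e-5
   n = 0

   do
      x_new = x - f(x)/fprime(x)     ! equation 8
      n = n + 1
      if (abs(x_new - x) < tol) exit
      x = x_new
   end do

   print '(a,f10.6,a,i3,a)', 'root ~ ', x_new, ' after ', n, ' iterations'

contains

   real function f(x)
      real, intent(in) :: x
      f = x*x - 2.0        ! illustrative test function
   end function f

   real function fprime(x)
      real, intent(in) :: x
      fprime = 2.0*x       ! its analytic derivative, supplied by the user
   end function fprime

end program newton_demo
```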

3.3 Comparison of Newton’s Method against Bisection Method

Two programs were written, employing the bisection method and Newton’s method respectively. The code for these programs can be found in appendix B. Two functions were used to test the effectiveness of the two methods. For all of the following tests, a tolerance of 10^-5 was used. The first function was…

f(x) = 3x + sin(x) − e^x   (9)

…which is a non-periodic function, resembling an inverted parabola, with real roots at 0.360421702 and 1.890029729. Concentrating on the smaller root, the bisection method was applied using 0.1 and 1.0 as the lower and upper bounds respectively. The bisection method converged on a value of 0.367187500 in 7 iterations. For the same function, Newton’s method was tested using an initial value of 1.0; Newton’s method converged on a value of 0.360421687 in 4 iterations. Clearly, Newton’s method converged more quickly and was more accurate in its final value. The second function used for testing was…

f(x) = cos(x)   (10)

…which, it should be noted, is a periodic function with an infinite number of x-intercepts. The first positive intercept of the function occurs at 1.57079632, so the inputs were chosen to try to converge on this value. For the bisection method, input values of 0.1 and 2.0 were chosen for the lower and upper bounds respectively; the bisection method converged on a value of 1.57324219 in 9 iterations. For Newton’s method, an initial value of 0.1 was chosen; Newton’s method converged on a value of 10.9955740 in 4 iterations. The failure of Newton’s method to converge on the desired value illustrates the aforementioned issue of choosing an initial x-value close to a stationary point (here, near a maximum of cos(x), where the derivative is very small). Such an issue will be quite common in highly periodic functions, such as the trigonometric functions sine and cosine. Despite finishing on a different root, Newton’s method still demonstrates its rapid convergence and high accuracy, given that 10.9955740 is itself a root of cos(x) (7π/2).

4 Numerical Integration

Standard integration between limits can be used to calculate the area or volume contained by a function, and this type of calculus is used in many fields of science for various applications. For example, integrating the force on a test charge along the path it takes through an electric field gives the electrostatic work done on that test charge. Often, the integrand in these situations is complex and can require an extended amount of effort to evaluate analytically. Once again, numerical analysis offers a solution: splitting the area bound by a function into small rectangular strips and accumulating the area of each strip iteratively gives an approximation to the integral between the limits. In this section, three such methods are discussed: Riemann, trapezium, and Simpson.

4.1 Riemann-Method Integration

The Riemann integration method is the simplest of the methods to be discussed and rests on the second fundamental theorem of calculus, which states that a definite integral can be calculated using any one of its infinitely many antiderivatives (Tan, 2010). The upshot of this is that the area under the integrand between the limits can be approximated
if it is split into an appropriate number of rectangular strips n, and hence summed. For example, if an integral with limits is presented…

∫_{xmin}^{xmax} f(x) dx   (11)

…the approximation using rectangular strips can be written as…

Σ_{i=1}^{n−1} f(x_i) δx   (12)

…where δx is defined as…

δx = (b − a) / n ,   x_i = a + iδx   (13)

…where b and a are the upper and lower limits of integration respectively. The Riemann method can be altered to evaluate each strip at its lower bound, upper bound, or middle. The change is subtle but can have a noticeable effect on the convergence rate of the integration. The upper, lower, and middle Riemann methods all produce a degree of error called the ‘excess’, which is the area of a strip or of the function that is missed by the iterative algorithm (figure 2). This further demonstrates the approximate nature of the method (Tan, 2010).
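A minimal sketch of a lower-bound Riemann sum is shown below, applied for concreteness to the cosine integrand used later in section 4.4; the strip count of 1000 is an arbitrary illustrative choice, and the code is not the appendix C program.

```fortran
program riemann_demo
   implicit none
   integer :: i, n
   real :: a, b, dx, total

   a = 1.0
   b = 5.0
   n = 1000                 ! number of rectangular strips
   dx = (b - a)/real(n)

   total = 0.0
   do i = 0, n-1
      total = total + cos(a + real(i)*dx)*dx    ! lower-bound Riemann strips
   end do

   print '(a,f10.6)', 'Riemann approximation of the integral of cos(x) on [1,5]: ', total
end program riemann_demo
```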

4.2 Trapezium Integration Method

The trapezium method of integration is a modification of the Riemann method which, instead of rectangular strips, uses trapeziums of width δx whose parallel sides are formed by the values f(xi) and f(xi+1) joined with a straight line. The use of trapeziums reduces the error of the Riemann method due to the excess at the ends of the strips (see figure 3), but does not remove it altogether. The following expression is used to calculate an integral using the trapezium method (Wilding, 2013).

Figure 2 - Riemann integration performed with the interval defined between the (a) lower bounds, (b) middle of the strips, and (c) the upper bounds. Note how much excess changes depending on the limits used. (Source: http://en.wikipedia.org/wiki/Riemann_sum)


Figure 3 - Trapezium integration with reduced excess. (Source: http://en.wikipedia.org/wiki/Riemann_sum)

∫_a^b f(x) dx ≈ (δx/2) [ f(a) + 2 Σ_{i=1}^{n−1} f(a + iδx) + f(b) ]   (14)
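A sketch of equation 14 in code form is given below, again using the cosine integrand of section 4.4 as an assumed test case rather than the appendix C implementation.

```fortran
program trapezium_demo
   implicit none
   integer :: i, n
   real :: a, b, dx, total

   a = 1.0
   b = 5.0
   n = 1000                 ! number of trapezium strips
   dx = (b - a)/real(n)

   ! Equation 14: end points weighted once, interior points twice
   total = cos(a) + cos(b)
   do i = 1, n-1
      total = total + 2.0*cos(a + real(i)*dx)
   end do
   total = total*dx/2.0

   print '(a,f10.6)', 'Trapezium approximation: ', total
end program trapezium_demo
```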

4.3 Simpson’s 1/3 Rule Integration Method

Simpson’s 1/3 method of numerical integration approximates the integrand with a quadratic equation, which is then elementary to integrate. The integrand is split into an even number of strips of width δx, determined using equation 13. The following expression is used in Simpson’s 1/3 rule to evaluate the integral (Wilding, 2013).

∫_a^b f(x) dx ≈ (δx/3) [ f(a) + 2 Σ_{i=2}^{n/2} f(x_{2i−2}) + 4 Σ_{i=1}^{n/2} f(x_{2i−1}) + f(b) ]   (15)
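The corresponding sketch for Simpson’s 1/3 rule (equation 15) follows; the even strip count n = 10 is chosen to mirror the observation in section 4.4 that Simpson’s rule converges with very few strips. This is an illustration, not the appendix C code.

```fortran
program simpson_demo
   implicit none
   integer :: i, n
   real :: a, b, dx, total

   a = 1.0
   b = 5.0
   n = 10                   ! number of strips (must be even)
   dx = (b - a)/real(n)

   ! Equation 15: weights 1, 4, 2, 4, ..., 4, 1
   total = cos(a) + cos(b)
   do i = 1, n-1, 2
      total = total + 4.0*cos(a + real(i)*dx)    ! odd-indexed points
   end do
   do i = 2, n-2, 2
      total = total + 2.0*cos(a + real(i)*dx)    ! even-indexed interior points
   end do
   total = total*dx/3.0

   print '(a,f10.6)', "Simpson's 1/3 approximation: ", total
end program simpson_demo
```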

4.4 Comparison of Numerical Integration Methods

To determine the accuracy and usefulness of the discussed methods, a FORTRAN program for each method was coded (appendix C) and tested using a specific function with defined limits.

∫_1^5 f(x) dx = ∫_1^5 cos(x) dx   (16)

The actual solution to this integral, determined analytically, is −1.800395, but the methods discussed only approximate it, and their precision is dependent on the number of strips that the user inputs. Naturally, the larger the number of strips, the smaller the strip widths will be, and hence the more precise the approximation the program will provide. The precision of the methods can be characterised by their ‘residuals’, defined as the difference between the computed value and the analytical value; obviously, the smaller the residual, the greater the precision.

residual = I_approx − I_actual   (17)

As figure 4 shows, Simpson’s method has by far the most rapid convergence on the correct analytical value, achieving a near-zero residual with only 10 strips. The other methods are still able to reach agreement on the approximate value of equation 16, but they require at least 1000 integration strips to provide appreciable precision.

5 Numerical Fourier Analysis: The Fast Fourier Transform

Fourier transformation is used to convert functions in the temporal or spatial domain into the frequency domain, so that the individual harmonics of a constructed wave pattern can be determined. The inverse is also valid: multi-frequency functions can be reconstructed from their respective Fourier coefficients. One method of performing Fourier analysis numerically uses an iterative procedure called the Fast Fourier Transform (FFT). The FFT is a discretised version of the standard Fourier transform, making it well suited to calculation by a computer program. The forward FFT is given by…

F_k = Σ_{j=0}^{N−1} e^{2πikj/N} f_j   (18)

…and the inverse FFT is thus…

f_k = (1/N) Σ_{j=0}^{N−1} e^{−2πikj/N} F_j   (19)

…where Fk and fk are the transformed values and fj and Fj are the original samples. The FFT is a more efficient implementation of the discrete Fourier transform, in which a Fourier
transform of length N can be formed as the sum of two discrete Fourier transforms of length N/2; one holds the odd-numbered points and the other holds the even-numbered points. The FFT requires that the size N of the dataset be a power of two, i.e. N = 2^m where m is an integer. The FFT algorithm first re-orders the data into bit-reversed order; another section then executes m times, evaluating transforms of length 2, 4, 8, 16, …, N. After the summations have been decomposed m times, the individual data points are added up in pairs. The resulting output of the FFT is an array of the real and imaginary components of each Fourier-transformed data point, which can then be plotted to produce a visual representation of the frequencies contained within a function (Wilding & Flikkema, 2013).
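For illustration, the sketch below evaluates equation 18 directly for a sampled sine wave. Note that this is the naive O(N²) discrete transform, not the bit-reversed O(N log N) FFT of appendix D; the sample size N = 64 and the pure sine input are assumed for demonstration.

```fortran
program dft_demo
   implicit none
   integer, parameter :: n = 64            ! power of two, as the FFT also requires
   real, parameter :: pi = 3.14159265
   complex :: f(0:n-1), ft(0:n-1)
   integer :: j, k

   ! Sample one cycle of a sine wave over [0, 2*pi)
   do j = 0, n-1
      f(j) = cmplx( sin(2.0*pi*real(j)/real(n)), 0.0 )
   end do

   ! Direct evaluation of equation 18 (O(N^2); the FFT re-orders and
   ! halves these sums recursively to reach O(N log N))
   do k = 0, n-1
      ft(k) = (0.0, 0.0)
      do j = 0, n-1
         ft(k) = ft(k) + exp( cmplx(0.0, 2.0*pi*real(k)*real(j)/real(n)) )*f(j)
      end do
   end do

   ! The magnitude spectrum should peak at k = 1 (with a mirror at k = n-1)
   do k = 0, 7
      print '(a,i3,a,f8.3)', 'k = ', k, '   |F_k| = ', abs(ft(k))
   end do
end program dft_demo
```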

5.1 Application of the Fast Fourier Transform Method

Appendix D outlines the FORTRAN code written to perform Fast Fourier Transforms on a variety of waveforms. It was sensible to test this code on a simple periodic function, such as sine or cosine, in order to confirm its functionality. A single peak was expected at a frequency of 1 Hz for a periodic function defined between zero and 2π (figure 5).

Having established that the FFT code works for simple single-frequency functions, the next step was to test it against multi-frequency functions. The chosen function for this test was…

f(x) = sin(x) + 4cos(8x) + sin(10x)cos(15x)   (20)

…and applying the FFT to this function, a Fourier transform containing several peaks representing multiple frequencies was constructed (figure 6).

Figure 5: Fourier transform of a single-frequency sine or cosine wave.

5.2 The Square Wave and the Sawtooth Wave

Both the square wave and the sawtooth wave can be constructed by superposing an integer number of harmonics on top of one another until the required level of approximation is reached (or, for the perfect case, an infinite number). For a square wave, the construction…

f_k = Σ_{j=0}^{N/2−1} a_j cos(2πjk/N) + Σ_{j=1}^{N/2−1} b_j sin(2πjk/N)   (21)

…is required, where aj and bj are Fourier coefficients dependent on the real and imaginary parts of Fj respectively. For a sawtooth wave, the construction…

f(x) = 2x,         when 0 ≤ x ≤ 0.5
f(x) = 2(x − 1),   when 0.5 < x ≤ 1   (22)

…is required. Both the square and sawtooth waves demanded adjustments to the original FFT code, the results of which are shown in figures 7 and 8. Both waves employ the same construction of harmonics but converge on different waveform profiles. In the Fourier transform, only the frequencies of the constructing waves are seen, and thus the Fourier transforms of the square and sawtooth waves contain peaks at the same harmonic frequencies. Each successive level of harmonics contributes less and less to the construction of a Fourier-synthesised function, and hence the intensity of the higher frequencies is lower than that of the base frequencies (Ehrlich, 2002).

Figure 6: Fourier transform of the multi-frequency function in equation 20.

Figure 7: Fourier construction of the square wave.

6 Numerical Solutions to Ordinary Differential Equations

An ordinary differential equation is a relationship between an independent variable x, a dependent variable f(x), and any number of derivatives of f(x) with respect to x. Partial differential equations, on the other hand, deal with more than one independent variable. Equation 23 is an example of an ordinary differential equation (Stroud & Booth, 2007).

x dy/dx = y^2 + 1   (23)

Differential equations represent dynamic relationships and thus occur frequently in scientific problems. Whilst such an equation can often be solved analytically, it is frequently easier to use numerical analysis to solve it and provide usable data. One situation in which differential equations arise is radioactive decay: the rate of emission is determined by the amount of radioactive material left in a sample, such that…

dN/dt = −αN   (24)

…where N is the remaining amount of radioactive material and α is the decay constant. There are two useful numerical methods available to solve this equation: Euler’s method, and modified Euler’s method.

6.1 Euler’s Method

Euler’s method is an iterative algorithm for solving differential equations, based on a truncated Taylor series. It advances a single first derivative at each step, so it can only be applied directly to first-order differential equations; higher-order equations must first be rewritten as systems of first-order equations. If the explicit solution of the differential equation is not known, then the gradient at any point along the curve can only be approximated by a tangential linear equation. Each new approximated coordinate is determined by…

n_1 = n_0 + δt f(n_0, t_0)   (25)

This works well at short range, but as the function’s derivative changes, the initial tangent line begins to diverge from the true solution. This is rectified by taking derivatives of the function at regular intervals and connecting the tangent lines at these points. The disadvantage of this method is that, despite the ‘corrections’ to the tangent gradients, the approximated ‘curve’ will always diverge away from the actual function. This means that the step size needs to be very small to give a worthwhile approximation (Süli, 2013).
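A minimal sketch of Euler’s method applied to the decay equation (24) is shown below; the decay constant, step size and number of steps are illustrative values rather than those used in appendix E.

```fortran
program euler_demo
   implicit none
   real :: n, t, dt, alpha
   integer :: i, nsteps

   alpha = 1.0          ! decay constant (illustrative value)
   n = 1.0              ! initial amount of material
   t = 0.0
   dt = 0.1             ! step size; smaller steps reduce the divergence
   nsteps = 50

   do i = 1, nsteps
      n = n + dt*(-alpha*n)    ! equation 25 with f(n,t) = -alpha*n
      t = t + dt
   end do

   print '(a,f6.2,a,f8.5,a,f8.5)', 't = ', t, '   Euler N = ', n, '   exact N = ', exp(-alpha*t)
end program euler_demo
```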

6.2 Modified Euler’s Method

As the name suggests, the modified Euler method has been modified in an attempt to correct the divergence that occurs between the actual function and the approximation in the simple Euler method. Whereas the simple Euler method only evaluates the gradient at the start of each fixed-width step, the modified Euler method uses the gradient at the midpoint of the step, which lies closer to the original curve; this midpoint gradient is then used to take the full step, and the next iteration is performed. The midpoints tmid and nmid are calculated using the following expressions (Süli, 2013).

t_mid = t_0 + δt/2   (26)

n_mid = n_0 + (δt/2) f(n_0, t_0)   (27)

n_1 = n_0 + δt f(n_mid, t_mid)   (28)
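The same decay problem stepped with the modified Euler method (equations 26-28) might look like the sketch below; again the parameter values are illustrative and the code is not the appendix E program.

```fortran
program modified_euler_demo
   implicit none
   real :: n, nmid, t, dt, alpha
   integer :: i, nsteps

   alpha = 1.0          ! decay constant (illustrative value)
   n = 1.0              ! initial amount of material
   t = 0.0
   dt = 0.1
   nsteps = 50

   do i = 1, nsteps
      nmid = n + 0.5*dt*(-alpha*n)    ! equation 27: half step to the midpoint
      n    = n + dt*(-alpha*nmid)     ! equation 28: full step using the midpoint slope
      t    = t + dt                   ! (equation 26 gives the midpoint time, unused
                                      !  here because the decay rate is time-independent)
   end do

   print '(a,f6.2,a,f8.5,a,f8.5)', 't = ', t, '   modified Euler N = ', n, '   exact N = ', exp(-alpha*t)
end program modified_euler_demo
```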

6.3 Comparison of Euler’s Method & Modified Euler’s Method

To compare the two methods, two programs were written in FORTRAN to solve the decay equation (equation 24) and produce data that could then be compared on a plot (appendix E). Figure 9 shows the computed values for each method alongside a third set of data representing the analytical solution. The aforementioned divergence of the simple Euler method is apparent: as the time steps iterate, the calculated values for the radioactive decay lie further and further from the actual data. In contrast, the modified Euler method follows the actual dataset so closely that there is almost no noticeable difference between the two. It can thus be concluded that, in general, the modified Euler method is far more accurate.

Figure 9: Radioactive decay of equation 24 computed with the simple and modified Euler methods, compared with the analytical solution.

7 Numerical Solutions to Coupled Differential Equations

Following on from ordinary differential equations, the next logical step is to discuss coupled differential equations. Differential equations that describe a coupled system have two or more dependent variables that all rely on the same independent variable. Carrying on from the radioactive decay example detailed in the last section, it is a common situation for one
radioactive substance to decay into another radioactive substance, and so on until a stable isotope is reached (e.g. Thorium-232 eventually decays into Lead-208 through a series of unstable isotopes). A simple three-isotope system can be formulated using the following equations.

dX/dt = −αX   (29)

dY/dt = αX − βY   (30)

In these equations, X denotes the amount of the initial radioactive isotope (for example thorium) and Y is the amount of the primary decay product, which would be radium. The secondary decay product, radon, can be defined as Z, and the amount of this product can be written as Z(t) = 1 − X(t) − Y(t), where t is the time elapsed. Initially, X = 1 and Y = Z = 0, but as the thorium decays, the amounts of Y and Z increase at differing rates. The decay functions can be evaluated using the modified Euler’s method; however, in the case of coupled equations, the global error of the modified Euler method becomes noticeable. Fortunately there is an alternative in the form of the fourth-order Runge-Kutta method.

7.1 Fourth-Order Runge-Kutta Method

The fourth-order Runge-Kutta method is another iterative algorithm that can solve ordinary differential equations and is widely considered to be one of the most accurate ways of doing so. At each iterative step it evaluates the derivative at four points: one at the start of the step, two at the midpoint, and one at the end (Press et al., 2007).

F_1 = δt f(y_i, t_i)   (31a)

F_2 = δt f(y_i + F_1/2, t_i + δt/2)   (31b)

F_3 = δt f(y_i + F_2/2, t_i + δt/2)   (31c)

F_4 = δt f(y_i + F_3, t_i + δt)   (31d)

y_{i+1} = y_i + (1/6)(F_1 + 2F_2 + 2F_3 + F_4)   (31e)
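A sketch of a single-variable Runge-Kutta integration of the decay equation (24), following equations 31a-e, is given below; for the coupled system of section 7.2 the same four-stage update is simply applied to X and Y together. The parameter values and the function name rate are assumed for illustration.

```fortran
program rk4_decay_demo
   implicit none
   real :: y, t, dt, f1, f2, f3, f4
   real, parameter :: alpha = 1.0     ! decay constant (illustrative value)
   integer :: i, nsteps

   y = 1.0            ! initial amount of material
   t = 0.0
   dt = 0.1
   nsteps = 50

   do i = 1, nsteps
      f1 = dt*rate(y,          t)              ! equation 31a
      f2 = dt*rate(y + 0.5*f1, t + 0.5*dt)     ! equation 31b
      f3 = dt*rate(y + 0.5*f2, t + 0.5*dt)     ! equation 31c
      f4 = dt*rate(y + f3,     t + dt)         ! equation 31d
      y  = y + (f1 + 2.0*f2 + 2.0*f3 + f4)/6.0 ! equation 31e
      t  = t + dt
   end do

   print '(a,f6.2,a,f8.5,a,f8.5)', 't = ', t, '   RK4 y = ', y, '   exact y = ', exp(-alpha*t)

contains

   real function rate(y, t)
      real, intent(in) :: y, t
      rate = -alpha*y      ! dN/dt = -alpha*N (equation 24); t unused here
   end function rate

end program rk4_decay_demo
```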

7.2 Modified Euler’s Method and Runge-Kutta for Coupled Radioactive Decay

A new program was written in FORTRAN for the Runge-Kutta method (appendix E) using the equations in section 7.1 (equations 31a-e), and the code for the modified Euler’s method was imported. Both methods were used to solve the coupled radioactive decay equations, and a plot was made showing the amount of each substance over time for the two methods (figure 10). The Runge-Kutta method shows less divergence than the modified Euler method, which suggests that it is much more adept at dealing with global error from equation to equation during the evaluation process. It appears that any initial divergence error in the modified Euler method is compounded, leading to a less accurate representation.

8. Conclusion

A portfolio detailing the use of the FORTRAN programming language to perform numerical analysis has been written, along with several usable sets of code that can be used in future experiments or analyses.

The differences between methods within the same mathematical topic were compared for functionality, ease of use, and ease of data analysis. The code written for these numerical methods can be found in the appendices, with comments detailing how it works.

Acknowledgements

The author would like to thank Dr. Martin Wilding and Dr. Edwin Flikkema for their tutoring on the subject of numerical methods, and Mr. Paddy Dixon for coding expertise. The author would also like to extend a heartfelt thanks to their fellow students for much needed help with FORTRAN coding.

References

Ehrlich, R., 2002. Fourier Synthesis: Superposition of Waves. [pdf] Mich. State. University. Available at: <http://www.physnet.org/modules/pdf_modules/m352.pdf> [Accessed 29 January 2014]

Heath, M.T., 2002. Scientific Computing: An Introductory Survey; Chapter 8 Numerical Integration and Differentiation. [pdf] Department of Computer Science. University of Illinois. Available at: <http://www.cs.illinois.edu/~heath/scicomp/notes/chap08.pdf> [Accessed 28 January 2014]

Judd, K.L. 1998. Numerical Methods in Economics. Chapter 6.9 Approximation Methods; pp. 225. MIT Press.

Kopecky, K. 2007. Root-Finding Methods. [pdf] Department of Economics. University of Western Ontario. Available at: <http://www.karenkopecky.net/Teaching/eco613614/Notes_RootFindingMethods.pdf> [Accessed 9 January 2014]

Pollock, D.S.G. 1998. Smoothing with Cubic Splines. [pdf] Queen Mary & Westfield College. The University of London. Available at: <r.789695.n4.nabble.com/file/n905996/SPLINES.PDF> [Accessed 8 January 2014]

Press, W.H., Teukolsky, S.A., Vetterling, W.T. & Flannery, B.P. 2007. Numerical Recipes: The Art of Scientific Computing. 3rd ed. Cambridge: Cambridge Press.

Stroud, K. A. & Booth, D. J., 2007. Engineering Mathematics. 6th ed. Basingstoke: Palgrave Macmillan, Limited.

Süli, E. 2013. Numerical Solution of Ordinary Differential Equations. [pdf] Mathematical Institute, University of Oxford. Available at: <http://people.maths.ox.ac.uk/suli/nsodes.pdf> [Accessed 28 January 2014]

Tan, S.T. 2010. Calculus. Belmont, CA: Brooks/Cole in assoc. with Cengage Learning.

Wilding, M.C. & Flikkema, E. 2013. The Fast Fourier Transform. [pdf] Institute of Mathematics, Physics & Computer Science, Aberystwyth University. Available through: Aberystwyth Blackboard System [Accessed 29 January 2014]

Wilding, M.C. 2013. Numerical Integration. [pdf] Institute of Mathematics, Physics & Computer Science, Aberystwyth University. Available through: Aberystwyth Blackboard System [Accessed 28 January 2014]

