MAT641 - Numerical Analysis
(Master course) - D. Samy MZIOU
Review: Calculus, Linear Algebra, Numerical Analysis
MATLAB software will be used intensively in this course
There will be regular homework assignments, usually
computational, but with lots of freedom. Submit the solutions
on time (preferably early), preferably as a PDF (give a LaTeX
editor a try!). Always submit your code.
Syllabus:
TOPIC 1
Introduction to Numerical Methods
Errors and Numbers representation
Lecture 1: Introduction to Numerical Methods
What is numerical analysis?
What are NUMERICAL METHODS?
Why do we need them?
What is numerical analysis?
• Wrong definition:
Numerical analysis is the study of rounding errors.
• Trefethen's definition:
Numerical analysis is the study of algorithms for the problems of continuous mathematics.
• Atkinson's definition:
Numerical analysis is the area of mathematics and computer science that creates, analyzes, and implements algorithms for solving numerically the problems of continuous mathematics.
Numerical analysis - Scholarpedia
Numerical Methods:
Algorithmic methods used to obtain numerical solutions of a continuous mathematical problem. Although there are many kinds of numerical methods, they have one common characteristic: they invariably involve large numbers of tedious arithmetic calculations. It is little wonder that with the development of fast, efficient digital computers, the role of numerical methods in engineering problem solving has increased dramatically in recent years.
Why do we need them?
1. No analytical solution exists.
2. An analytical solution is difficult (sometimes impossible) to obtain, or is not practical.
3. Graphical methods are imprecise and useless in dimensions higher than 3.
• Find the intersection of
y1 = 2x + 3
y2 = x + 2
• Find the intersection of
y1 = x
y2 = cos(x)
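The second intersection above has no closed form, but it is easy to locate numerically. Below is a minimal sketch in Python (the course itself uses MATLAB); the helper name `bisect` and the tolerance are illustrative choices, not part of the slides.

```python
# Locating the intersection of y1 = x and y2 = cos(x) numerically,
# i.e. the root of f(x) = cos(x) - x. Illustrative Python sketch.
import math

def bisect(f, a, b, tol=1e-12):
    """Halve [a, b] until the root of f is bracketed within tol."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

root = bisect(lambda x: math.cos(x) - x, 0.0, 1.0)
print(root)  # ≈ 0.739085
```

At the returned point, x and cos(x) agree to machine precision, which is exactly the kind of answer an analytical approach cannot deliver here.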
Analytical vs. Numerical Methods
What do we need? Basic needs in numerical methods:
• Practical:
Can be computed in a reasonable amount of time.
• Accurate:
• A good approximation to the true value,
• Information about the approximation error (bounds, error order, …).
What is a “good” numerical method?
• How good is our approximation? (Error Analysis)
• How efficient is our method? (Algorithm design, Convergence rate)
• Does our method always work? (Convergence)
Outlines of the Course
• Taylor Theorem
• Number Representation
• Solution of nonlinear Equations
• Interpolation
• Numerical Differentiation
• Numerical Integration
• Solution of linear Equations
• Least Squares curve fitting
• Solution of ordinary differential equations
• Solution of Partial differential equations
Solution of Nonlinear Equations
• Some simple equations can be solved analytically:
x^2 - 4x + 3 = 0
Analytic solution: x = [4 ± sqrt(4^2 - 4(1)(3))] / 2(1), roots: x = 1 and x = 3
• Many other equations have no analytic solution:
x^9 - 2x^2 + 5 = 0
x = e^(-x)
Methods for Solving Nonlinear Equations
o Bisection Method
o Newton-Raphson Method
o Secant Method
o Brent’s method
o Aitken’s method & Muller method
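As a preview of the second method on this list, here is a Newton-Raphson sketch in Python (the course uses MATLAB) applied to the slide's equation x = e^(-x); the helper name `newton` and the iteration limits are illustrative choices.

```python
# Newton-Raphson iteration x_{k+1} = x_k - f(x_k)/f'(x_k), sketched
# for the equation x = e^(-x), which has no analytic solution.
import math

def newton(f, fprime, x0, tol=1e-12, maxit=50):
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x - math.exp(-x)
fp = lambda x: 1 + math.exp(-x)
root = newton(f, fp, x0=0.5)
print(root)  # ≈ 0.567143, satisfies x = e^(-x)
```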
Solution of Systems of Linear Equations
x1 + x2 = 3
x1 + 2 x2 = 5
We can solve it as:
x1 = 3 - x2, (3 - x2) + 2 x2 = 5, so x2 = 2, x1 = 1
What to do if we have 1000 equations in 1000 unknowns?
Cramer's Rule is Not Practical
Cramer's Rule can be used to solve the system:
x1 = det([3 1; 5 2]) / det([1 1; 1 2]) = 1
x2 = det([1 3; 1 5]) / det([1 1; 1 2]) = 2
But Cramer's Rule is not practical for large problems.
To solve N equations with N unknowns, we need (N+1)(N-1)N! multiplications.
To solve a 30 by 30 system, about 2.38 × 10^35 multiplications are needed.
A supercomputer needs more than 10^20 years to compute this.
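The operation count quoted above can be checked directly. A small Python sketch (illustrative; the course uses MATLAB):

```python
# Checking the slide's operation count: solving N equations by
# Cramer's Rule costs about (N+1)(N-1)N! multiplications.
import math

def cramer_mults(n):
    return (n + 1) * (n - 1) * math.factorial(n)

print(f"{cramer_mults(30):.2e}")  # ≈ 2.38e+35 multiplications for a 30x30 system
```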
Methods for Solving Systems of Linear Equations
o Naive Gaussian Elimination
o Gaussian Elimination with Scaled Partial Pivoting
o Algorithm for Tri-diagonal Equations
o Jacobi, Gauss-Seidel & SOR methods
o Conjugate gradient method
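A minimal sketch of the first method on this list, naive Gaussian elimination, applied to the 2x2 system from the previous slide. Python is used for illustration (the course uses MATLAB); the function name and the absence of pivoting are simplifications.

```python
# Naive Gaussian elimination (no pivoting), applied to the system
# x1 + x2 = 3, x1 + 2*x2 = 5 from the slide.
def gauss_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    # forward elimination: zero out entries below the diagonal
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

print(gauss_solve([[1.0, 1.0], [1.0, 2.0]], [3.0, 5.0]))  # [1.0, 2.0]
```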
Curve Fitting
• Given a set of data:
• Select a curve that best fits the data. One choice is to find the curve so that the sum of the square of the error is minimized.
x 0 1 2
y 0.5 10.3 21.3
Interpolation
• Given a set of data:
• Find a polynomial P(x) whose graph passes through all tabulated points.
P(x_i) = y_i for every (x_i, y_i) in the table
xi 0 1 2
yi 0.5 10.3 15.3
Methods for Curve Fitting
o Least Squares
o Linear Regression
o Nonlinear Least Squares Problems
o Interpolation
o Newton Polynomial Interpolation
o Lagrange Interpolation
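As a sketch of the least-squares idea above, here is a straight-line fit y = a + b·x to the table from the curve-fitting slide, solving the two normal equations directly. Python is used for illustration (the course uses MATLAB); the helper name is an arbitrary choice.

```python
# Least-squares line y = a + b*x fitted to the slide's data
# (x = 0, 1, 2; y = 0.5, 10.3, 21.3) via the normal equations.
def lsq_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

a, b = lsq_line([0, 1, 2], [0.5, 10.3, 21.3])
print(a, b)  # intercept ≈ 0.3, slope ≈ 10.4
```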
Integration
• Some functions can be integrated analytically:
∫ from 1 to 3 of x dx = x^2/2 evaluated from 1 to 3 = 9/2 - 1/2 = 4
• But many functions have no analytical solution:
∫ from 0 to a of e^(-x^2) dx = ?
Methods for Numerical Integration
o Upper and Lower Sums
o Trapezoid Method
o Romberg Method
o Gauss Quadrature
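A sketch of the trapezoid method above, applied to the integral from the previous slide that has no closed form (taking a = 1). Python illustration (the course uses MATLAB); the number of subintervals is an arbitrary choice.

```python
# Composite trapezoid rule for ∫_0^1 exp(-x^2) dx.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

approx = trapezoid(lambda x: math.exp(-x * x), 0.0, 1.0, 1000)
print(approx)  # ≈ 0.746824, which equals (sqrt(pi)/2) * erf(1)
```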
Solution of Ordinary Differential Equations
A solution to the differential equation:
x''(t) + 3 x'(t) + 3 x(t) = 0
x(0) = 1; x'(0) = 0
is a function x(t) that satisfies the equations.
* Analytical solutions are available for special cases only.
Solution of Partial Differential Equations
Partial differential equations are more difficult to solve than ordinary differential equations:
∂²u/∂x² + 2 ∂²u/∂t² = 0
u(0, t) = u(1, t) = 0, u(x, 0) = sin(x)
Methods for ODEs
o Implicit and Explicit Euler schemes
o Taylor and Runge-Kutta methods
o Multistep methods
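As a sketch of the first scheme above, here is explicit Euler applied to the ODE from the earlier slide, x'' + 3x' + 3x = 0, x(0) = 1, x'(0) = 0, rewritten as a first-order system in (x, v = x'). Python illustration (the course uses MATLAB); the step size and function name are arbitrary choices.

```python
# Explicit Euler for x'' + 3x' + 3x = 0 as the system x' = v, v' = -3v - 3x.
def euler_2nd_order(h, t_end):
    x, v = 1.0, 0.0          # initial conditions x(0) = 1, x'(0) = 0
    t = 0.0
    while t < t_end - 1e-12:
        # advance both components simultaneously using the old values
        x, v = x + h * v, v + h * (-3.0 * v - 3.0 * x)
        t += h
    return x

print(euler_2nd_order(h=1e-4, t_end=1.0))  # ≈ 0.439
```

The exact solution is x(t) = e^(-1.5 t) (cos(√3/2 · t) + √3 sin(√3/2 · t)), so at t = 1 the value is about 0.43896; Euler with h = 1e-4 lands within about 1e-4 of it.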
Lecture 2
Number Representation and Accuracy
Number Representation
Normalized Floating Point Representation
Significant Digits
Accuracy and Precision
Rounding and Chopping
Representing Real Numbers
• You are familiar with the decimal system:
• Decimal system: Base = 10, Digits (0, 1, …, 9)
• Standard representation:
312.45 = 3×10^2 + 1×10^1 + 2×10^0 + 4×10^-1 + 5×10^-2
(sign, integral part, fraction part)
Normalized Floating Point Representation
• Normalized floating point representation:
±d.f1 f2 f3 f4 × 10^(±n),  d ≠ 0,  n: signed exponent
(sign, mantissa, exponent)
• Scientific notation: exactly one non-zero digit appears before the decimal point.
• Advantage: efficient in representing very small or very large numbers.
Binary System
• Binary system: Base = 2, Digits {0, 1}
±1.f1 f2 f3 f4 × 2^(±n)
(sign, mantissa, signed exponent)
(1.101)2 = 1×2^0 + 1×2^-1 + 0×2^-2 + 1×2^-3 = (1.625)10
Fact
• Numbers that have a finite expansion in one numbering system may have an infinite expansion in another numbering system:
(1.1)10 = (1.000110011001100…)2
• You can never represent 1.1 exactly in the binary system.
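This fact is easy to observe on any machine. A small Python illustration (the course uses MATLAB, where the same effect appears):

```python
# 1.1 has no finite binary expansion, so the stored double is only the
# nearest machine number; Decimal and float.hex expose the exact value.
from decimal import Decimal

print(Decimal(1.1))      # 1.100000000000000088817841970012523233890533447265625
print(0.1 + 0.2 == 0.3)  # False: each side rounds to a different machine number
print((1.1).hex())       # 0x1.199999999999ap+0, the repeating binary pattern
```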
IEEE 754 Floating-Point Standard
• Single precision (32-bit representation):
1-bit Sign + 8-bit Exponent + 23-bit Fraction
S | Exponent (8 bits) | Fraction (23 bits)
S bit: sign bit. 0 for positive, 1 for negative.
Exponent: 8 bits, bias of +127.
Fraction: 23 bits.
Normalized: Value = (-1)^S × 1.fraction × 2^(exp - 127)
Example: 1995 = 1.1111001011 × 2^10, exponent 10 = (1010)2
IEEE 754 Floating-Point Standard
• Double precision (64-bit representation):
1-bit Sign + 11-bit Exponent + 52-bit Fraction
S | Exponent (11 bits) | Fraction (52 bits)
Exponent: 11 bits, bias of +1023.
Fraction: 52 bits.
(Continued)
IEEE 754 FLOATING POINT REPRESENTATION
• Exponents of all 0's and 1's are reserved for special numbers.
• Zero is a special value denoted with an exponent field of zero and a mantissa field of zero, and we could have +0 and -0.
• +∞ and -∞ are denoted with an exponent of all 1's and a mantissa field of all 0's.
• NaN (Not-a-number) is denoted with an exponent of all 1's and a non-zero mantissa field.
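The three fields of the single-precision format can be inspected directly, here for the slide's example 1995 = 1.1111001011 × 2^10. A Python sketch (illustrative; the variable names are arbitrary):

```python
# Unpacking the IEEE 754 single-precision fields (1 sign bit,
# 8 exponent bits with bias 127, 23 fraction bits) of 1995.0.
import struct

bits = struct.unpack(">I", struct.pack(">f", 1995.0))[0]
sign = bits >> 31
exponent = (bits >> 23) & 0xFF
fraction = bits & 0x7FFFFF

print(sign, exponent - 127, bin(fraction))  # 0 10 0b11110010110000000000000
```

The unbiased exponent is 10 and the fraction bits are 1111001011 followed by zeros, matching the normalized form on the earlier slide.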
Significant Digits
Significant digits are those digits that can be used with confidence.
Single precision: 7 significant digits
Range: 1.175494… × 10^-38 to 3.402823… × 10^38
Double precision: 15 significant digits
Range: 2.2250738… × 10^-308 to 1.7976931… × 10^308
Larger exponent → wider range of numbers
Longer mantissa → higher precision
Remarks
• Some numbers cannot be represented exactly in machine representation.
• Machine numbers cannot represent all real numbers (which are infinitely many), i.e., only a limited range of quantities may be represented.
Number too large → overflow
Number too small (too close to 0) → underflow
• Numbers that can be exactly represented are called machine numbers.
• Machine representation is not unique; that is why we "normalize" the representation.
• The difference between machine numbers is not uniform.
• The sum of two machine numbers is not necessarily a machine number.
Calculator Example
• Suppose you want to compute:
3.578 × 2.139
using a calculator with two-digit fractions:
3.57 × 2.13 = 7.60
True answer: 7.653342…
Significant Digits - Example
48.9 has three significant digits.
Accuracy and Precision
Accuracy is related to the closeness to the true value.
Precision is related to the closeness to other estimated values.
Rounding and Chopping
Only a finite number of quantities may be represented (round-off or chopping errors)
• Rounding: Replace the number by the nearest
machine number.
• Chopping: Throw all extra digits.
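The two policies can be contrasted in a few lines. A Python sketch (illustrative; the helper names are arbitrary, and `chop` as written assumes non-negative x):

```python
# Rounding keeps the nearest n-digit number; chopping simply discards
# the extra digits. Illustrated for n = 2 digits after the decimal point.
import math

def chop(x, n):
    scale = 10 ** n
    return math.floor(x * scale) / scale   # drop all extra digits (x >= 0)

def round_n(x, n):
    return round(x, n)                     # keep the nearest n-digit value

x = 7.6531
print(chop(x, 2), round_n(x, 2))  # 7.65 7.65
y = 7.659
print(chop(y, 2), round_n(y, 2))  # 7.65 7.66
```

Chopping always errs toward zero, while rounding splits the error symmetrically; this is why rounding gives the smaller worst-case error (half a unit in the last digit).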
Rounding and Chopping
There are discrete points on the number line that can be represented by our computer.
How about the space between them?
Errors and Significant Digits
• I paid $10 for 7 oranges. What is unit price of each orange?
• $1.428571429 (that is the exact output from my computer !!)
• Is there any difference between $1.428571429 and $1.4?
• Is there any difference between $1.4 and $1.40?
Significant figures, or digits
• The significant digits of a number are those that can be used with confidence.
• They correspond to the number of certain digits plus one estimated digit.
• x = 3.5 (2 significant digits) 3.45 ≤ x < 3.55
• x = 0.51234 (5 significant digits)
0.512335 ≤ x < 0.512345
Error Definitions - True Error
Can be computed if the true value is known:
Absolute True Error:
E_t = |true value - approximation|
Absolute Relative Error:
ε_t = |true value - approximation| / |true value|
Absolute Percent Relative Error:
ε_t = |true value - approximation| / |true value| × 100%
Error Definitions - Estimated Error
When the true value is not known:
Estimated Absolute Error:
E_a = |current estimate - previous estimate|
Estimated Absolute Relative Error:
ε_a = |current estimate - previous estimate| / |current estimate|
Estimated Absolute Percent Relative Error:
ε_a = |current estimate - previous estimate| / |current estimate| × 100%
Notation
We say that the estimate is correct to n decimal digits if:
|Error| ≤ 10^-n
We say that the estimate is correct to n decimal digits rounded if:
|Error| ≤ ½ × 10^-n
Summary
Number Representation
Numbers that have a finite expansion in one numbering system may have an infinite expansion in another numbering system.
Normalized Floating Point Representation
Efficient in representing very small or very large numbers.
The difference between machine numbers is not uniform.
Representation error depends on the number of bits used in the mantissa.
Lectures 3-4
Taylor Theorem
Motivation
Taylor Theorem
Examples
Motivation
• We can easily compute expressions like:
(3x^2 + 10) / (2(x - 4))
• But, how do you compute sqrt(1.4), sin(0.6)?
Can we use the definition to compute sin(0.6), measuring the sides a and b of a right triangle with angle 0.6 and taking sin(0.6) = a/b? Is this a practical way?
• Remark: In this course, all angles are assumed to be in radians unless you are told otherwise.
Taylor Series
The Taylor series expansion of f(x) about a:
Taylor = f(a) + f'(a)(x - a) + f''(a)/2! (x - a)^2 + f'''(a)/3! (x - a)^3 + …
or, in a condensed writing:
Taylor = Σ (k = 0 to ∞) [1/k! f^(k)(a) (x - a)^k]
If the series converges, the sum exists and we can write:
f(x) = Σ (k = 0 to ∞) [1/k! f^(k)(a) (x - a)^k]
Maclaurin Series
Maclaurin series is a special case of Taylor series with the center of expansion a = 0.
The Maclaurin series expansion of f(x):
f(0) + f'(0) x + f''(0)/2! x^2 + f'''(0)/3! x^3 + …
If the series converges, we can write:
f(x) = Σ (k = 0 to ∞) [1/k! f^(k)(0) x^k]
Maclaurin Series – Example 1
Obtain the Maclaurin series expansion of f(x) = e^x:
f(x) = e^x,  f(0) = 1
f'(x) = e^x,  f'(0) = 1
f''(x) = e^x,  f''(0) = 1
f^(k)(x) = e^x,  f^(k)(0) = 1 for all k
e^x = Σ (k = 0 to ∞) x^k/k! = 1 + x + x^2/2! + x^3/3! + …
The series converges for all x.
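The partial sums of this series can be compared against the library exponential to watch the convergence. A Python sketch (illustrative; the course uses MATLAB, and the helper name is arbitrary):

```python
# Partial sums of the Maclaurin series e^x = Σ x^k / k!,
# accumulating each term from the previous one to avoid factorials.
import math

def exp_series(x, n_terms):
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= x / (k + 1)   # x^k/k!  ->  x^(k+1)/(k+1)!
    return total

for n in (2, 4, 8, 12):
    print(n, exp_series(1.0, n), abs(exp_series(1.0, n) - math.e))
```

At x = 1, each extra pair of terms shrinks the error by orders of magnitude, in line with the factorial in the denominator.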
Taylor Series – Example 1
[Figure: the partial sums 1, 1 + x, and 1 + x + 0.5x^2 plotted against exp(x) on [-1, 1]]
Maclaurin Series – Example 2
Obtain the Maclaurin series expansion of f(x) = sin(x):
f(x) = sin(x),  f(0) = 0
f'(x) = cos(x),  f'(0) = 1
f''(x) = -sin(x),  f''(0) = 0
f'''(x) = -cos(x),  f'''(0) = -1
sin(x) = Σ (k = 0 to ∞) (-1)^k x^(2k+1)/(2k+1)! = x - x^3/3! + x^5/5! - x^7/7! + …
The series converges for all x.
[Figure: the partial sums x, x - x^3/3!, and x - x^3/3! + x^5/5! plotted against sin(x) on [-4, 4]]
Maclaurin Series – Example 3
Obtain the Maclaurin series expansion of f(x) = cos(x):
f(x) = cos(x),  f(0) = 1
f'(x) = -sin(x),  f'(0) = 0
f''(x) = -cos(x),  f''(0) = -1
f'''(x) = sin(x),  f'''(0) = 0
cos(x) = Σ (k = 0 to ∞) (-1)^k x^(2k)/(2k)! = 1 - x^2/2! + x^4/4! - x^6/6! + …
The series converges for all x.
Maclaurin Series – Example 4
Obtain the Maclaurin series expansion of f(x) = 1/(1 - x):
f(x) = 1/(1 - x),  f(0) = 1
f'(x) = 1/(1 - x)^2,  f'(0) = 1
f''(x) = 2/(1 - x)^3,  f''(0) = 2
f'''(x) = 6/(1 - x)^4,  f'''(0) = 6
Maclaurin series expansion of 1/(1 - x):
1/(1 - x) = 1 + x + x^2 + x^3 + …
The series converges for |x| < 1.
Example 4 - Remarks
• Can we apply the series for x ≥ 1?
• How many terms are needed to get a good approximation?
These questions will be answered using Taylor's Theorem.
Taylor Series – Example 5
Obtain the Taylor series expansion of f(x) = 1/x at a = 1:
f(x) = 1/x,  f(1) = 1
f'(x) = -1/x^2,  f'(1) = -1
f''(x) = 2/x^3,  f''(1) = 2
f'''(x) = -6/x^4,  f'''(1) = -6
Taylor series expansion (a = 1):
1/x = 1 - (x - 1) + (x - 1)^2 - (x - 1)^3 + …
Taylor Series – Example 6
Obtain the Taylor series expansion of f(x) = ln(x) at a = 1:
f(x) = ln(x),  f(1) = 0
f'(x) = 1/x,  f'(1) = 1
f''(x) = -1/x^2,  f''(1) = -1
f'''(x) = 2/x^3,  f'''(1) = 2
Taylor series expansion:
ln(x) = (x - 1) - (x - 1)^2/2 + (x - 1)^3/3 - …
Convergence of Taylor Series
• The Taylor series converges fast (few terms are needed) when x is near the point of expansion. If |x-a| is large then more terms are needed to get a good approximation.
Taylor's Theorem
If a function f(x) possesses derivatives of orders 1, 2, …, (n + 1) on an interval containing a and x, then the value of f(x) is given by:
f(x) = Σ (k = 0 to n) [f^(k)(a)/k! (x - a)^k] + R_n
where the sum is the truncated Taylor series ((n + 1) terms) and R_n is the remainder:
R_n = f^(n+1)(ξ)/(n + 1)! (x - a)^(n+1),  ξ between a and x.
Taylor's Theorem
We can apply Taylor's theorem for:
f(x) = 1/(1 - x) with the point of expansion a = 0, if |x| < 1.
If x = 1, then the function and its derivatives are not defined; Taylor's Theorem is not applicable.
Error Term
To get an idea about the approximation error, we can derive an upper bound on the remainder:
|R_n| ≤ max|f^(n+1)(ξ)| / (n + 1)! · |x - a|^(n+1)
for all values of ξ between a and x.
Error Term - Example
How large is the error if we replace f(x) = e^x by the first 4 terms (n = 3) of its Taylor series expansion at a = 0 when x = 0.2?
f^(k)(x) = e^x ≤ e^0.2 on [0, 0.2] for all k ≥ 1
R_n = f^(n+1)(ξ)/(n + 1)! (x - a)^(n+1)
|R_3| ≤ e^0.2 (0.2)^4 / 4! ≈ 8.14 × 10^-5
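The bound above can be checked against the actual truncation error. A Python sketch (illustrative; the course uses MATLAB):

```python
# Replacing e^x by the first 4 terms of its Maclaurin series at x = 0.2:
# the actual error must sit below the bound e^{0.2} * (0.2)^4 / 4!.
import math

x = 0.2
four_terms = 1 + x + x**2 / 2 + x**3 / 6
actual_error = abs(math.exp(x) - four_terms)
bound = math.exp(x) * x**4 / math.factorial(4)

print(actual_error, bound)  # actual ≈ 6.9e-05, bound ≈ 8.1e-05
```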
Alternative Form of Taylor's Theorem
Let f(x) have derivatives of orders 1, 2, …, (n + 1) on an interval containing x and x + h. Then:
f(x + h) = Σ (k = 0 to n) [f^(k)(x)/k! h^k] + R_n    (h: step size)
R_n = f^(n+1)(ξ)/(n + 1)! h^(n+1),  ξ between x and x + h.
Taylor's Theorem – Alternative Forms
f(x) = Σ (k = 0 to n) [f^(k)(a)/k! (x - a)^k] + f^(n+1)(ξ)/(n + 1)! (x - a)^(n+1),
where ξ is between a and x.
With a = x and x = x + h:
f(x + h) = Σ (k = 0 to n) [f^(k)(x)/k! h^k] + f^(n+1)(ξ)/(n + 1)! h^(n+1),
where ξ is between x and x + h.
Mean Value Theorem
If f(x) is a continuous function on a closed interval [a, b] and its derivative is defined on the open interval (a, b), then there exists ξ in (a, b) such that
f'(ξ) = (f(b) - f(a)) / (b - a)
Proof: Use Taylor's Theorem for n = 0, x = a, h = b - a:
f(b) = f(a) + f'(ξ)(b - a)
Alternating Series Theorem
Consider the alternating series:
S = a1 - a2 + a3 - a4 + …
If a1 ≥ a2 ≥ a3 ≥ a4 ≥ … and lim (n → ∞) a_n = 0, then the series converges and
|S - S_n| ≤ a_(n+1)
where
S_n: partial sum (sum of the first n terms)
a_(n+1): first omitted term
Alternating Series – Example
sin(1) can be computed using:
sin(1) = 1 - 1/3! + 1/5! - 1/7! + …
This is a convergent alternating series, since the terms decrease and lim (n → ∞) a_n = 0. Then:
sin(1) ≈ 1 - 1/3! + 1/5!,
with error no larger than the first omitted term, 1/7!.
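The alternating-series bound for sin(1) can be verified numerically. A Python sketch (illustrative; the course uses MATLAB):

```python
# Truncating sin(1) = 1 - 1/3! + 1/5! - ... after three terms:
# the error is no larger than the first omitted term, 1/7!.
import math

partial = 1 - 1 / math.factorial(3) + 1 / math.factorial(5)
error = abs(math.sin(1) - partial)
bound = 1 / math.factorial(7)

print(error, bound)  # error ≈ 1.96e-04, bound ≈ 1.98e-04
```

Here the bound is nearly tight, which is typical when the omitted terms shrink very fast.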
Example 7
Obtain the Taylor series expansion of f(x) = e^(2x+1) at a = 0.5 (the center of expansion).
How large can the error be when (n + 1) terms are used to approximate e^(2x+1) with x = 1?
Example 7 – Taylor Series
Obtain the Taylor series expansion of f(x) = e^(2x+1) at a = 0.5:
f(x) = e^(2x+1),  f(0.5) = e^2
f'(x) = 2 e^(2x+1),  f'(0.5) = 2 e^2
f''(x) = 4 e^(2x+1),  f''(0.5) = 4 e^2
f^(k)(x) = 2^k e^(2x+1),  f^(k)(0.5) = 2^k e^2
e^(2x+1) = e^2 + 2 e^2 (x - 0.5) + 4 e^2 (x - 0.5)^2/2! + … + 2^k e^2 (x - 0.5)^k/k! + …
= Σ (k = 0 to ∞) [2^k e^2 (x - 0.5)^k / k!]
Example 7 – Error Term
f^(k)(x) = 2^k e^(2x+1)
|Error| = |R_n| = |f^(n+1)(ξ)| / (n + 1)! · (1 - 0.5)^(n+1)
= 2^(n+1) e^(2ξ+1) (0.5)^(n+1) / (n + 1)!,  ξ in [0.5, 1]
≤ 2^(n+1) e^3 (0.5)^(n+1) / (n + 1)! = e^3 / (n + 1)!
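The derivation above can be checked numerically for a particular n. A Python sketch (illustrative; the course uses MATLAB, and the helper name is arbitrary):

```python
# Example 7 check: the Taylor series of f(x) = e^{2x+1} about a = 0.5
# has f^(k)(0.5) = 2^k e^2; at x = 1 the truncation error after
# (n+1) terms must sit below e^3 / (n+1)!.
import math

def taylor_e2xp1(x, n, a=0.5):
    return sum(2**k * math.exp(2 * a + 1) * (x - a)**k / math.factorial(k)
               for k in range(n + 1))

x, n = 1.0, 5
actual = abs(math.exp(2 * x + 1) - taylor_e2xp1(x, n))
bound = math.exp(3) / math.factorial(n + 1)

print(actual, bound)  # actual ≈ 0.0119, bound ≈ 0.0279
```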
• TOTAL NUMERICAL ERROR
The total numerical error is the sum of the truncation and round-off errors. In general, the only way to minimize round-off errors is to increase the number of significant figures of the computer. Further, round-off error will increase due to subtractive cancellation or due to an increase in the number of computations in an analysis.
In contrast, the truncation error can be reduced by decreasing the step size. But because a decrease in step size can lead to subtractive cancellation or to an increase in the number of computations, the truncation errors are decreased as the round-off errors are increased.
The strategy for decreasing one component of the total error thus leads to an increase of the other component. In a computation, we could conceivably decrease the step size to minimize truncation errors, only to discover that in doing so the round-off error begins to dominate the solution and the total error grows!