Root Finding
886307 1
What is Computer Science? Computer science is a discipline that spans theory and
practice.
It requires thinking both in abstract terms and in concrete terms.
The practical side of computing can be seen everywhere.
Computer science also has strong connections to other disciplines.
Source: https://www.cs.mtu.edu/~john/whatiscs.html
What is Computer Science? Computer Science is practiced by mathematicians,
scientists and engineers.
Mathematics, the origins of Computer Science, provides reason and logic.
Science provides the methodology for learning and refinement.
Engineering provides the techniques for building hardware and software.
Numerical Computing/Analysis is the study of algorithms that use
numerical approximation (as opposed to general symbolic manipulations) for the problems of mathematical analysis.
Root Finding Given a real-valued function f of one variable (say x),
the idea is to find an x such that:
f(x) = 0
Root Finding Examples Find real x such that:
1) x^2 - 4x + 3 = 0
2) tanh x - 3 = 0
3) cos x - 2 = 0
Observations on 1, 2 and 3? (two are trick questions: tanh x and cos x are bounded, so equations 2 and 3 have no real solutions)
Root Finding Examples A single-variable equation f = f(x):
f(x) = x + 1 => root f(x*) = 0 => x* = 0 - 1 = -1
A nonlinear single-variable equation:
f(x) = x^2 + 3x + 2 => root (x+2)(x+1) = 0 => x* = -1, -2
Requirements For An Algorithmic Approach Idea: find a sequence x1, x2, x3, x4, ...
so that for some N, xN is "close" to a root,
i.e. |f(xN)| is smaller than some tolerance.
Requirements For Such a Root-Finding Scheme Initial guess: x1
Relationship between xn+1 and xn and possibly xn-1, xn-2 , xn-3, …
When to stop the successive guesses?
Some Alternative Methods Bracketing Methods (approximate the root by an interval that contains it):
Graphical Method
Incremental Search Method
Bisection
False-Position Method
Open Methods (approximate the root by a point, following the slope):
Newton's Method
Avoiding derivatives in Newton's method (the Secant Method)
Intermediate value theorem When you have two points connected by a continuous
curve:
one point below the line
the other point above the line
... then there will be at least one place where the curve crosses the line!
The intermediate value theorem tells us that if a continuous function is positive at one end of an interval and is negative at the other end of the interval then there is a root somewhere in the interval.
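This sign test is all the bracketing methods below rely on. A minimal Python sketch (function name is illustrative; the course's own code uses MATLAB):

```python
import math

def brackets_root(f, a, b):
    """Intermediate value theorem test: if f is continuous on [a, b] and
    f(a) and f(b) have opposite signs, at least one root lies in (a, b)."""
    return f(a) * f(b) < 0

# x^2 - 4x + 3 has roots at 1 and 3; the interval [0, 2] brackets the root at 1.
print(brackets_root(lambda x: x*x - 4*x + 3, 0.0, 2.0))   # True
# cos(x) - 2 is negative everywhere, so no interval can bracket a root.
print(brackets_root(lambda x: math.cos(x) - 2, 0.0, 2.0))  # False
```

Note the converse does not hold: a false result does not prove there is no root (see the theorems below).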
Theorem An equation f(x) = 0, where f(x) is a real
continuous function, has at least one root
between xl and xu if f(xl) f(xu) < 0.
(Figure: f(x) crossing the x-axis between xl and xu)
Theorem If function f(x) in f(x)=0 does not change sign between
two points, roots may still exist between the two points.
Theorem If the function f(x) in f(x)=0 does not change sign between two
points, there may not be any roots between the two points.
Theorem If the function f(x) in f(x)=0 changes sign between
two points, more than one root may exist between
the two points.
Incremental Search Method If f(x1) and f(x2) have opposite signs, there is at least
one root in the interval (x1, x2).
If the interval is small enough, it is likely to contain a single root.
The zeroes (roots) of f(x) can be detected by evaluating the function in steps of size dx and looking for a change in sign.
Write a program
Example Use incremental search with dx = 0.2 to bracket the
smallest positive zero of f(x) = x^3 - 10x^2 + 5
clear all; close all; clc;
f = 'x.^3 - 2*x.^2 + 5*x - 10';
f = inline(f);
a = 0;      % lower boundary
b = 3;      % upper boundary
dx = 0.1;
x = [a:dx:b];
fx = f(x);
[x; fx]'
%plot(x, fx);
disp('Press Enter to continue')
pause;
886307 Numerical Computing: 2/2558 20
flag = true;
if f(a)*f(b) < 0
    i = 1;
    c = a + i*dx;
    while (flag)
        [sign(f(a)) sign(f(c))]
        if (sign(f(a)) ~= sign(f(c)))
            [i c f(c)]
            disp 'found root';
            flag = false;
        end
        i = i + 1;
        c = a + i*dx;
    end
elseif f(a)*f(b) == 0
    disp 'root is at the boundary';
else
    disp 'could not detect root';
end
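The same scan can be sketched in Python (an illustrative translation, not the course's MATLAB; the function and step match the earlier example f(x) = x^3 - 10x^2 + 5 with dx = 0.2):

```python
def incremental_search(f, a, b, dx):
    """Scan [a, b] in steps of dx and return the first subinterval
    (x1, x2) on which f changes sign, or None if no change is found."""
    n = int(round((b - a) / dx))
    for i in range(n):
        x1, x2 = a + i * dx, a + (i + 1) * dx
        if f(x1) * f(x2) < 0:      # sign change => a root lies in (x1, x2)
            return x1, x2
    return None

f = lambda x: x**3 - 10*x**2 + 5
print(incremental_search(f, 0.0, 1.0, 0.2))   # brackets the root near 0.73 in (0.6, 0.8)
```

Stepping by index (a + i*dx) rather than accumulating x + dx avoids floating-point drift at the interval end.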
Assignment-01 Modify Incremental Search code for detecting all roots
found on the given list and specifying location of each root.
Algorithm for Bisection Method
Step 1 Choose xl and xu as two guesses for the root such
that f(xl) f(xu) < 0, or in other words, f(x) changes sign between xl and xu.
Step 2 Estimate the root, xm, of the equation f(x) = 0 as the
mid-point between xl and xu:
xm = (xl + xu) / 2
Step 3 Now check the following:
If f(xl) f(xm) < 0, then the root lies between xl and xm; then xl = xl, xu = xm.
If f(xl) f(xm) > 0, then the root lies between xm and xu; then xl = xm, xu = xu.
If f(xl) f(xm) = 0, then the root is xm.
Stop the algorithm if this is true.
Step 4 Find the new estimate of the root
xm = (xl + xu) / 2
and the absolute relative approximate error
|εa| = |(xm_new - xm_old) / xm_new| × 100
where
xm_new = current estimate of root
xm_old = previous estimate of root
Step 5 Check if the absolute relative approximate error is less
than the prespecified tolerance, or if the maximum number of iterations is reached.
If yes, stop the algorithm.
If no, go to Step 2 using the new upper and lower guesses from Step 3.
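Steps 1-5 can be collected into a short Python sketch (an illustration, not the course's MATLAB code; the tolerance es is an absolute relative percentage and its value here is arbitrary):

```python
def bisection(f, xl, xu, es=1e-7, max_iter=100):
    """Bisection per Steps 1-5: halve [xl, xu] until the absolute relative
    approximate error (in percent) drops below es or max_iter is reached."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xm_old = None
    xm = (xl + xu) / 2.0
    for _ in range(max_iter):
        xm = (xl + xu) / 2.0                 # Step 2: midpoint estimate
        if f(xl) * f(xm) < 0:                # Step 3: root lies in [xl, xm]
            xu = xm
        elif f(xl) * f(xm) > 0:              # root lies in [xm, xu]
            xl = xm
        else:
            return xm                        # f(xm) == 0: exact root
        if xm_old is not None:               # Step 4: relative error in %
            ea = abs((xm - xm_old) / xm) * 100
            if ea < es:                      # Step 5: tolerance reached
                break
        xm_old = xm
    return xm

root = bisection(lambda x: x**3 - 10*x**2 + 5, 0.0, 1.0)
print(root)   # smallest positive zero, near 0.7346
```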
Example Use the Bisection method to find the root of f(x) = x^3 - 10x^2 + 5
within [0, 1]
clear all; close all; clc;
f = 'x.^3-10*x.^2+5';
f = inline(f);
a = 0;      % lower boundary
b = 1;      % upper boundary
dx = 0.1;
x = [a:dx:b];
fx = f(x);
[x; fx]'
%plot(x, fx);
disp('Press Enter to continue')
pause;
flag = true;
delta = 0.000000001;
if f(a)*f(b) < 0
    i = 0;
    while (flag)
        i = i + 1;
        m = (a+b)/2
        if f(m) == 0 %|| (b-a)/(2.^i) < delta
            disp 'found root';
            flag = false;
        elseif f(a)*f(m) > 0
            a = m;
        else
            b = m;
        end
        [i a b m f(m)]
    end
elseif f(a)*f(b) == 0
    disp 'root is at the boundary';
else
    disp 'could not detect root of non-linear equation';
end
Assignment-02 Modify Bisection code for detecting all roots found on
the given list and specifying location of each root.
Advantages Always convergent
The root bracket gets halved with each iteration - guaranteed.
Drawbacks
Slow convergence
Drawbacks (continued) If one of the initial guesses is close to the root, the
convergence is slower
Drawbacks (continued) If a function f(x) is such that it just touches the x-axis,
for example f(x) = x^2,
it will be unable to find the lower and upper guesses.
Bisection Convergence Rate Every time we split the interval we reduce the search
interval by a factor of two, i.e.
|b_k - a_k| = |b_0 - a_0| / 2^k
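Rearranging this relation tells you in advance how many halvings are needed to shrink the bracket below a tolerance tol (a small Python sketch; the function name is illustrative):

```python
import math

def bisection_iterations(a0, b0, tol):
    """Smallest k with |b0 - a0| / 2^k <= tol, from |b_k - a_k| = |b0 - a0| / 2^k."""
    return math.ceil(math.log2(abs(b0 - a0) / tol))

print(bisection_iterations(0.0, 1.0, 1e-6))   # 20 halvings shrink [0, 1] below 1e-6
```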
False-Position Method of Solving a Nonlinear Equation
Major: All Engineering Majors
Authors: Duc Nguyen
http://numericalmethods.eng.usf.edu Numerical Methods for STEM undergraduates
False-Position Method or Regula Falsi Method The convergence of the Bisection
method is slow.
We assume that f(a) and f(b) have opposite signs.
The Bisection method uses the midpoint of the interval [a, b] as the next iterate.
A better approximation is obtained if we find the point (c, 0) where the secant line L joining the points (a, f(a)) and (b, f(b)) crosses the x-axis.
Introduction
We seek a root of
f(x) = 0  (1)
on an interval [xL, xU] with
f(xL) * f(xU) < 0  (2)
In the Bisection method, the next estimate is the midpoint
xr = (xL + xU) / 2  (3)
Figure 1 False-Position Method: the secant through (xL, f(xL)) and (xU, f(xU)) crosses the x-axis at xr, near the exact root of f(x) = 0.
False-Position Method
Based on two similar triangles, shown in Figure 1, one gets:
f(xL) / (xr - xL) = f(xU) / (xr - xU)  (4)
The signs on both sides of Eq. (4) are consistent, since:
f(xL) < 0; xr - xL > 0
f(xU) > 0; xr - xU < 0
From Eq. (4), one obtains
f(xL)(xr - xU) = f(xU)(xr - xL)
The above equation can be solved to obtain the next predicted root xr, as
xr = (xU f(xL) - xL f(xU)) / (f(xL) - f(xU))  (5)
Step-By-Step False-Position Algorithms
1. Choose xL and xU as two guesses for the root such that f(xL) f(xU) < 0.
2. Estimate the root:
   xm = (xU f(xL) - xL f(xU)) / (f(xL) - f(xU))
3. Now check the following:
(a) If f(xL) f(xm) < 0, then the root lies between xL and xm; then xL = xL and xU = xm.
(b) If f(xL) f(xm) > 0, then the root lies between xm and xU; then xL = xm and xU = xU.
(c) If f(xL) f(xm) = 0, then the root is xm. Stop the algorithm if this is true.
4. Find the new estimate of the root:
   xm = (xU f(xL) - xL f(xU)) / (f(xL) - f(xU))
   Find the absolute relative approximate error as
   |εa| = |(xm_new - xm_old) / xm_new| × 100
   where
   xm_new = estimated root from present iteration
   xm_old = estimated root from previous iteration
5. If |εa| > εs (say εs = 0.001 %), then go to step 3; else stop the algorithm.
Notes: The False-Position and Bisection algorithms are quite similar. The only difference is the formula used to calculate the new estimate of the root xm, shown in steps #2 and 4!
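The steps above can be sketched in Python (illustrative only; note that the only change from the bisection sketch is the formula for xm, exactly as the note says):

```python
def false_position(f, xl, xu, es=1e-7, max_iter=100):
    """False-position: same bracketing logic as bisection, but the new
    estimate is where the secant through the bracket endpoints crosses zero."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xm_old = None
    for _ in range(max_iter):
        # Eq. (5): x-intercept of the line through (xl, f(xl)) and (xu, f(xu))
        xm = (xu * f(xl) - xl * f(xu)) / (f(xl) - f(xu))
        if f(xl) * f(xm) < 0:
            xu = xm
        elif f(xl) * f(xm) > 0:
            xl = xm
        else:
            return xm
        if xm_old is not None and abs((xm - xm_old) / xm) * 100 < es:
            break
        xm_old = xm
    return xm

root = false_position(lambda x: x**3 - 10*x**2 + 5, 0.0, 1.0)
print(root)   # same root as bisection, near 0.7346, in fewer iterations
```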
Assignment-03 Write False Position code for detecting all roots found
on the given list and specifying location of each root.
Newton-Raphson Method Slope Method for Finding Roots
Basic idea: follow the slope f'(x) = dy/dx down to the x-axis:
Slope = f'(xi) = (f(xi) - 0) / (xi - xi+1)
(Figure: y = f(x) with successive tangent steps from xi (A) to xi+1 (B) to xi+2 (C))
Newton-Raphson Method
xi+1 = xi - f(xi) / f'(xi)
Figure: Geometrical illustration of the Newton-Raphson method; the tangent at (xi, f(xi)) crosses the x-axis at xi+1.
Derivation
From the right triangle ABC in the figure:
tan(α) = AB/AC
f'(xi) = (f(xi) - 0) / (xi - xi+1)
which rearranges to
xi+1 = xi - f(xi) / f'(xi)
Figure: Derivation of the Newton-Raphson method.
Algorithm for Newton-Raphson Method
Step 1
Evaluate f'(x) symbolically.
Step 2
Use an initial guess of the root, xi, to estimate the new value of the root, xi+1, as
xi+1 = xi - f(xi) / f'(xi)
Step 3
Find the absolute relative approximate error |εa| as
|εa| = |(xi+1 - xi) / xi+1| × 100
Step 4 Compare the absolute relative approximate error with
the pre-specified relative error tolerance εs.
If |εa| > εs, go to Step 2 using the new estimate of the root; otherwise, stop the algorithm.
Also, check if the number of iterations has exceeded the maximum number of iterations allowed. If so, one needs to terminate the algorithm and notify the user.
Example 1 You are working for ‘DOWN THE TOILET COMPANY’ that
makes floats for ABC commodes. The floating ball has a specific gravity of 0.6 and has a radius of 5.5 cm. You are asked to find the depth to which the ball is submerged when floating in water.
Floating ball problem.
Example 1 Cont.
The equation that gives the depth x in meters to which the ball is submerged under water is given by
f(x) = x^3 - 0.165x^2 + 3.993×10^-4
Use the Newton's method of finding roots of equations to find
a) the depth 'x' to which the ball is submerged under water. Conduct three
iterations to estimate the root of the above equation.
b) The absolute relative approximate error at the end of each iteration, and
c) The number of significant digits at least correct at the end of each
iteration.
Example 1 Cont.
Solution
To aid in the understanding of how this method works to find the root of an equation, the graph of f(x) = x^3 - 0.165x^2 + 3.993×10^-4 is shown below.
Figure: Graph of the function f(x).
Example 1 Cont.
Solve for f'(x):
f(x) = x^3 - 0.165x^2 + 3.993×10^-4
f'(x) = 3x^2 - 0.33x
Let us assume the initial guess of the root of f(x) = 0 is x0 = 0.05 m.
This is a reasonable guess (discuss why x = 0 and x = 0.11 m are not good choices) as the extreme values of the depth x would be 0 and the diameter (0.11 m) of the ball.
Example 1 Cont.
Iteration 1
The estimate of the root is
x1 = x0 - f(x0)/f'(x0)
   = 0.05 - (0.05^3 - 0.165(0.05)^2 + 3.993×10^-4) / (3(0.05)^2 - 0.33(0.05))
   = 0.05 - (1.118×10^-4) / (-9×10^-3)
   = 0.05 - (-0.01242)
   = 0.06242
Example 1 Cont.
Figure 5 Estimate of the root for the first iteration.
Example 1 Cont.
The absolute relative approximate error |εa| at the end of Iteration 1 is
|εa| = |(x1 - x0)/x1| × 100 = |(0.06242 - 0.05)/0.06242| × 100 = 19.90%
The number of significant digits at least correct is 0, as you need an
absolute relative approximate error of 5% or less for at least one
significant digit to be correct in your result.
Example 1 Cont.
Iteration 2
The estimate of the root is
x2 = x1 - f(x1)/f'(x1)
   = 0.06242 - (0.06242^3 - 0.165(0.06242)^2 + 3.993×10^-4) / (3(0.06242)^2 - 0.33(0.06242))
   = 0.06242 - (-3.97781×10^-7) / (-8.90973×10^-3)
   = 0.06242 - 4.4646×10^-5
   = 0.06238
Example 1 Cont.
Figure 6 Estimate of the root for Iteration 2.
Example 1 Cont.
The absolute relative approximate error |εa| at the end of Iteration 2 is
|εa| = |(x2 - x1)/x2| × 100 = |(0.06238 - 0.06242)/0.06238| × 100 = 0.0716%
The maximum value of m for which |εa| ≤ 0.5×10^(2-m) is 2.844.
Hence, the number of significant digits at least correct in the answer is 2.
Example 1 Cont.
Iteration 3
The estimate of the root is
x3 = x2 - f(x2)/f'(x2)
   = 0.06238 - (0.06238^3 - 0.165(0.06238)^2 + 3.993×10^-4) / (3(0.06238)^2 - 0.33(0.06238))
   = 0.06238 - (-4.44×10^-11) / (-8.91171×10^-3)
   = 0.06238 - 4.9822×10^-9
   = 0.06238
Example 1 Cont.
Figure: Estimate of the root for Iteration 3.
Example 1 Cont.
The absolute relative approximate error |εa| at the end of Iteration 3 is
|εa| = |(x3 - x2)/x3| × 100 = |(0.06238 - 0.06238)/0.06238| × 100 = 0%
The number of significant digits at least correct is 4, as only 4
significant digits are carried through all the calculations.
Example
Find the root of the equation 2e^x + x - 4 = 0.
Solution: Let f(x) = 2e^x + x - 4, so f'(x) = 2e^x + 1.
From the iteration formula
x_{r+1} = x_r - f(x_r) / f'(x_r)
Let x0 = 0:
f(x0) = 2e^0 + 0 - 4 = -2
f'(x0) = 2e^0 + 1 = 3
x1 = 0 - (-2/3) = 0.67
f(x1) = f(0.67) = 0.578, f'(x1) = 4.908
x2 = 0.67 - (0.578/4.908) = 0.5522
f(x2) = 0.02545, f'(x2) = 4.473
x3 = 0.552 - (0.02545/4.473) = 0.54631
f(x3) = 4.82 × 10^-5, f'(x3) = 4.454
x4 = 0.54631 - (4.82×10^-5/4.454) = 0.5462992
…
Algorithm for finding the root of f(x) = 0 by the Newton-Raphson method
Input : x0 = initial value
        ε = error tolerance
        N = maximum number of iterations
        δ = function-value tolerance
Output : the root of f(x) = 0
Algorithm: r = 0
Do
    if r < N then
        x_{r+1} = x_r - f(x_r) / f'(x_r)
        r = r + 1
    else error( Out of Range )
Until |x_r - x_{r-1}| < ε and |f(x_r)| < δ
Return x_r
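The algorithm above translates directly into Python (a sketch; ε and δ are rendered as eps and delta, and the example equation is the one just solved):

```python
import math

def newton_raphson(f, fprime, x0, eps=1e-10, delta=1e-10, N=50):
    """Newton-Raphson per the algorithm above: iterate x_{r+1} = x_r - f(x_r)/f'(x_r)
    until both |x_{r+1} - x_r| < eps and |f(x_{r+1})| < delta, or fail after N steps."""
    x = x0
    for _ in range(N):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < eps and abs(f(x_new)) < delta:
            return x_new
        x = x_new
    raise RuntimeError("Out of Range: no convergence within N iterations")

f  = lambda x: 2 * math.exp(x) + x - 4
fp = lambda x: 2 * math.exp(x) + 1
root = newton_raphson(f, fp, 0.0)
print(root)   # close to the hand iteration's 0.5462992
```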
Advantages and Drawbacks of the Newton-Raphson Method
Advantages
Converges fast (quadratic convergence), if it converges.
We will show that the rate of convergence is much faster than the bisection method.
However – as always, there is a catch. The method uses a local linear approximation, which clearly breaks down near a turning point.
Small f'(xn) makes the linear model very flat and will send the search far away …
Requires only one guess
Drawbacks
1. Divergence at inflection points
Selection of the initial guess or an iteration value of the root that is
close to the inflection point of the function f(x) may start diverging
away from the root in the Newton-Raphson method.
For example, to find the root of the equation f(x) = (x - 1)^3 + 0.512 = 0,
the Newton-Raphson method reduces to
xi+1 = xi - ((xi - 1)^3 + 0.512) / (3(xi - 1)^2)
Table 1 shows the iterated values of the root of the equation.
The root starts to diverge at Iteration 6 because the previous estimate
of 0.92589 is close to the inflection point of f(x) at x = 1.
Eventually, after 12 more iterations, the root converges to the exact
value of x = 0.2.
Drawbacks – Inflection Points
Table 1 Divergence near inflection point for f(x) = (x - 1)^3 + 0.512.
Iteration Number    xi
0      5.0000
1      3.6560
2      2.7465
3      2.1084
4      1.6000
5      0.92589
6    -30.119
7    -19.746
...
18     0.2000
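The excursion in Table 1 is easy to reproduce (a Python sketch; the iteration count is extended a little past 18 so the sequence has fully settled):

```python
# Newton-Raphson on f(x) = (x - 1)^3 + 0.512, starting at x0 = 5 as in Table 1.
f  = lambda x: (x - 1)**3 + 0.512
fp = lambda x: 3 * (x - 1)**2
x, history = 5.0, [5.0]
for _ in range(25):
    x = x - f(x) / fp(x)
    history.append(x)
# history[1] is about 3.6560; history[6] has shot off below -30 (driven by the
# near-zero slope at the inflection point x = 1); yet the sequence finally
# settles at the exact root 0.2.
```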
2. Division by zero
For the equation
f(x) = x^3 - 0.03x^2 + 2.4×10^-6 = 0
the Newton-Raphson method reduces to
xi+1 = xi - (xi^3 - 0.03xi^2 + 2.4×10^-6) / (3xi^2 - 0.06xi)
For x0 = 0 or x0 = 0.02, the denominator will equal zero.
Figure: Pitfall of division by zero or a near-zero number.
3. Oscillations near local maximum and minimum
Results obtained from the Newton-Raphson method may
oscillate about a local maximum or minimum without
converging on a root, converging on the local maximum or
minimum instead.
Eventually, it may lead to division by a number close to zero
and may diverge.
For example, the equation f(x) = x^2 + 2 = 0 has no real roots.
Drawbacks – Oscillations near local maximum and minimum
Figure: Oscillations around the local minimum for f(x) = x^2 + 2.
Table 3 Oscillations near local maxima and minima in the Newton-Raphson method.
Iteration Number    xi          f(xi)     |εa| %
0    -1.0000      3.00      -
1     0.5         2.25      300.00
2    -1.75        5.063     128.571
3    -0.30357     2.092     476.47
4     3.1423     11.874     109.66
5     1.2529      3.570     150.80
6    -0.17166     2.029     829.88
7     5.7395     34.942     102.99
8     2.6955      9.266     112.93
9     0.97678     2.954     175.96
4. Root Jumping
In some cases where the function f(x) is oscillating and has a number of
roots, one may choose an initial guess close to a root. However, the
guesses may jump and converge to some other root.
For example, for f(x) = sin x = 0, choose x0 = 2.4π = 7.539822.
It will converge to x = 0 instead of x = 2π = 6.2831853.
Figure: Root jumping from intended location of root for f(x) = sin x = 0.
11/5/2012 886307 84
Secant Method Roots of a Nonlinear Equation
The Newton-Raphson algorithm is based on the evaluation of the derivative.
The Newton-Raphson algorithm requires the evaluation of two functions per iteration, f(x) and f'(x).
It is desirable to have a method that converges as fast as Newton's method yet involves only evaluations of f(x) and not of f'(x).
Secant Method
Newton's Method:
xi+1 = xi - f(xi) / f'(xi)
Approximate the derivative by the finite difference through the last two iterates:
f'(xi) ≈ (f(xi) - f(xi-1)) / (xi - xi-1)
Substituting this into Newton's formula gives the Secant Method:
xi+1 = xi - f(xi)(xi - xi-1) / (f(xi) - f(xi-1))
Figure: the secant line through (xi-1, f(xi-1)) and (xi, f(xi)) crosses the x-axis at xi+1.
Geometric Similar Triangles
From the similar triangles ABE and DCE in the figure:
AB/AE = DC/DE
f(xi) / (xi - xi+1) = f(xi-1) / (xi-1 - xi+1)
On rearranging, this gives the same secant update:
xi+1 = xi - f(xi)(xi - xi-1) / (f(xi) - f(xi-1))
Algorithm for Secant Method
Step 1
Calculate the next estimate of the root from two initial guesses xi-1 and xi:
xi+1 = xi - f(xi)(xi - xi-1) / (f(xi) - f(xi-1))
Find the absolute relative approximate error:
|εa| = |(xi+1 - xi) / xi+1| × 100
Step 2
Find if the absolute relative approximate error is greater than the prespecified relative error tolerance.
If so, go back to Step 1; else stop the algorithm.
Also check if the number of iterations has exceeded the maximum number of iterations.
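The two steps above fit in a few lines of Python (an illustrative sketch; the tolerance value is arbitrary):

```python
def secant(f, x_prev, x_curr, es=1e-8, max_iter=50):
    """Secant method: Newton's update with f'(xi) replaced by the finite
    difference through the last two iterates (no derivative needed)."""
    for _ in range(max_iter):
        x_next = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
        ea = abs((x_next - x_curr) / x_next) * 100   # relative error in percent
        x_prev, x_curr = x_curr, x_next
        if ea < es:
            break
    return x_curr

# Root of f(x) = 2.5 - 1/x (i.e. the inverse of 2.5) from guesses 0.1 and 0.6:
root = secant(lambda x: 2.5 - 1/x, 0.1, 0.6)
print(root)   # converges to 0.4
```

Unlike bisection and false position, the two guesses need not bracket the root.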
Example
To find the inverse of a number 'a', one can use the equation
f(x) = a - 1/x = 0
where x is the inverse of 'a'.
Solution
Use the Secant method of finding roots of equations to:
Find the inverse of a = 2.5. Conduct three iterations to estimate the root of the above equation.
Find the absolute relative approximate error at the end of each iteration, and
The number of significant digits at least correct at the end of each iteration.
Figure: Graph of the function f(x) = a - 1/x.
Substituting f(x) = a - 1/x into the secant formula
xi+1 = xi - f(xi)(xi - xi-1) / (f(xi) - f(xi-1))
gives
xi+1 = xi - (a - 1/xi)(xi - xi-1) / ((a - 1/xi) - (a - 1/xi-1))
which simplifies to the division-free update
xi+1 = xi - xi-1(a xi - 1)
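Because the divisions cancel, this update computes a reciprocal using only multiplication and subtraction (a Python sketch of the simplified formula; the function name is illustrative):

```python
def reciprocal_by_secant(a, x_prev, x_curr, n_iter=10):
    """Iterate xi+1 = xi - xi-1 * (a*xi - 1); no division anywhere."""
    for _ in range(n_iter):
        x_prev, x_curr = x_curr, x_curr - x_prev * (a * x_curr - 1)
    return x_curr

print(reciprocal_by_secant(2.5, 0.1, 0.6))   # converges to 1/2.5 = 0.4
```

The first update reproduces Iteration #1 below: 0.6 - 0.1(2.5(0.6) - 1) = 0.55.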
Iteration #1
With initial guesses x-1 = 0.1 and x0 = 0.6:
x1 = x0 - x-1(a x0 - 1) = 0.6 - 0.1(2.5(0.6) - 1) = 0.55
|εa| = |(x1 - x0)/x1| × 100 = 9.091%
Figure: the secant line through the first two guesses and the new guess x1.
Iteration #2
With x0 = 0.6 and x1 = 0.55:
x2 = x1 - x0(a x1 - 1) = 0.55 - 0.6(2.5(0.55) - 1) = 0.325
|εa| = |(x2 - x1)/x2| × 100 = 69.231%
Figure: the secant line through x0 and x1 and the new guess x2.
Iteration #3
With x1 = 0.55 and x2 = 0.325:
x3 = x2 - x1(a x2 - 1) = 0.325 - 0.55(2.5(0.325) - 1) = 0.428
|εa| = |(x3 - x2)/x3| × 100 = 24.0876%
Figure: the secant line through x1 and x2 and the new guess x3.
Advantages
Converges fast, if it converges
Requires two guesses that do not need to bracket the root
Summary We have looked at four ways to find the root of a single-valued,
single-parameter function.
We considered a robust but "slow" bisection method, and then the "faster" but less robust Newton's and secant methods.