L23 Numerical Methods part 3
• Project
• Homework
• Review
• Steepest Descent Algorithm
• Summary
• Test 4 results
H22 ans
x lower = 0.00000, x upper = 2.00000
Trial points at xl + (1/3)I and xu - (1/3)I, where I = xu - xl

Iteration        xl        xl+(1/3)I   xu-(1/3)I      xu      Interval I   Optimal
1          x   0.00000     0.66667     1.33333     2.00000     2.00000     0.66667
           f   4.00000     0.44444    11.11111    36.00000                 0.44444
2          x   0.00000     0.44444     0.88889     1.33333     1.33333     0.44444
           f   4.00000     0.04938     2.41975    11.11111                 0.04938
3          x   0.00000     0.29630     0.59259     0.88889     0.88889     0.59259
           f   4.00000     0.66392     0.13717     2.41975                 0.13717
4          x   0.29630     0.49383     0.69136     0.88889     0.59259     0.49383
           f   0.66392     0.00061     0.58589     2.41975                 0.00061
f(x) = 16x² - 16x + 4
optimum solution: 0.444    min value: 0.0494    interval of uncertainty: 0.889    number of fcn evals: 6
H22 cont’d
After n iterations (N = 2n function evaluations) the interval of uncertainty is

    I_new = (2/3)^n · I_old

e.g., for 3 iterations, or 6 function evaluations:

    I_new = (2/3)^(6/2) · I_old = (2/3)^3 (2) = (0.296)(2) = 0.592

Fractional reduction:  FR = I_new / I_old = (2/3)^(N/2)

Number of function evaluations needed for a given FR:

    N = 2 ln(FR) / ln(2/3)

e.g., for FR = 0.001:  N = 2 ln(0.001) / ln(2/3) = 34.06; rounding up to a whole number of 2-evaluation iterations gives N = 36
For iterations #2 on, the interval is reduced to 0.889/1.333 = 67% of the previous interval, at the cost of 2 function evaluations per iteration. Keep this in mind if we create a measure of efficiency.
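A minimal Python sketch of the interval-thirds (alternate equal interval) search tabulated above; f(x) = 16x² - 16x + 4 and the bracket [0, 2] are taken from the H22 numbers, and the function and variable names are illustrative only:

```python
# Interval-thirds region elimination: two new trial points per iteration,
# and one third of the bracket is discarded each time.
def thirds_search(f, xl, xu, iterations=4):
    for k in range(1, iterations + 1):
        I = xu - xl
        xa = xl + I / 3.0          # trial point at xl + (1/3) I
        xb = xu - I / 3.0          # trial point at xu - (1/3) I
        fa, fb = f(xa), f(xb)
        if fa < fb:                # for a unimodal f, the min cannot lie in [xb, xu]
            xu = xb
        else:                      # otherwise it cannot lie in [xl, xa]
            xl = xa
        print(k, round(xa, 5), round(xb, 5), round(min(fa, fb), 5))
    return xl, xu

f = lambda x: 16 * x**2 - 16 * x + 4
thirds_search(f, 0.0, 2.0)   # reproduces the trial points 0.66667, 0.44444, ... above
```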
H22
Golden Section Region Elimination Search for locating the min of f(x)
x lower = 0.00000    1-τ = 0.38197
x upper = 2.00000    τ   = 0.61803

Iteration        xl      xl+(1-τ)I     xl+τI        xu      Interval I   Optimal
1          x   0.00000     0.76393     1.23607     2.00000     2.00000     0.76393
           f   4.00000     1.11456     8.66874    36.00000                 1.11456
2          x   0.00000     0.47214     0.76393     1.23607     1.23607     0.47214
           f   4.00000     0.01242     1.11456     8.66874                 0.01242
3          x   0.00000     0.29180     0.47214     0.76393     0.76393     0.47214
           f   4.00000     0.69358     0.01242     1.11456                 0.01242
4          x   0.29180     0.47214     0.58359     0.76393     0.47214     0.47214
           f   0.69358     0.01242     0.11180     1.11456                 0.01242
5          x   0.29180     0.40325     0.47214     0.58359     0.29180     0.47214
           f   0.69358     0.14976     0.01242     0.11180                 0.01242
6          x   0.40325     0.47214     0.51471     0.58359     0.18034     0.51471
           f   0.14976     0.01242     0.00346     0.11180                 0.00346
H22
After N function evaluations the interval of uncertainty is

    I_new = (0.618)^(N-1) · I_old

e.g., for 6 function evaluations (6 iterations in the table above):

    I_new = (0.618)^(6-1) · I_old = (0.618)^5 (2) = (0.090)(2) = 0.180

Fractional reduction:  FR = I_new / I_old = (0.618)^(N-1)

Number of function evaluations needed for a given FR:

    N = 1 + ln(FR) / ln(0.618)

e.g., for FR = 0.001:  N = 1 + ln(0.001) / ln(0.618) = 1 + 14.35 = 15.35, so use N = 16
optimum solution: 0.472    min value: 0.0124    interval of uncertainty: 0.764    number of fcn evals: 5
For iterations #2 on, the interval is reduced to 61.8% of the interval I for the cost of only 1 function evaluation. If we create a measure of efficiency (interval reduction per function evaluation), Golden Section is the best of these search algorithms.
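For comparison, a minimal Python sketch of the golden-section search from the table above, under the same assumed f(x) = 16x² - 16x + 4 on [0, 2]; after the first iteration each pass reuses one interior point, so only one new function evaluation is needed per iteration:

```python
# Golden-section region elimination: the surviving interior point is reused,
# so each iteration after the first costs a single function evaluation.
def golden_section(f, xl, xu, iterations=6):
    tau = 0.61803
    xa = xl + (1 - tau) * (xu - xl)   # interior point at xl + (1-tau) I
    xb = xl + tau * (xu - xl)         # interior point at xl + tau I
    fa, fb = f(xa), f(xb)
    for k in range(1, iterations + 1):
        print(k, round(xa, 5), round(xb, 5), round(xu - xl, 5))
        if fa < fb:                    # min is in [xl, xb]; reuse xa as the new xb
            xu, xb, fb = xb, xa, fa
            xa = xl + (1 - tau) * (xu - xl)
            fa = f(xa)
        else:                          # min is in [xa, xu]; reuse xb as the new xa
            xl, xa, fa = xa, xb, fb
            xb = xl + tau * (xu - xl)
            fb = f(xb)
    return xl, xu

f = lambda x: 16 * x**2 - 16 * x + 4
golden_section(f, 0.0, 2.0)   # reproduces the trial points 0.76393, 1.23607, ... above
```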
1. Find a direction, then
2. Find the best step size α
3. Repeat steps 1 and 2 'til "done"
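These three steps are the skeleton shared by the descent methods that follow; a minimal sketch, with the direction, step-size, and stopping rules left as plug-ins (x is assumed to be a NumPy array, and all names here are illustrative):

```python
import numpy as np

# Generic descent loop: plug in a direction rule and a step-size rule.
def descent(x, direction_rule, step_size_rule, done):
    x = np.asarray(x, dtype=float)
    while not done(x):
        d = direction_rule(x)         # 1. find a direction
        alpha = step_size_rule(x, d)  # 2. find the best step size alpha
        x = x + alpha * d             # 3. update, and repeat 'til "done"
    return x
```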
[Figure 10.4: Unimodal function f().]
Unimodal functions (in the "local" region) are either:
• monotonic increasing then monotonic decreasing, or
• monotonic decreasing then monotonic increasing
Review: Step Size Methods
• "Analytical": search direction = (-) gradient (i.e., a line search); find α where f'(α) = 0 and f''(α) ≥ 0
• Region elimination ("interval reducing"): equal interval, alternate equal interval, Golden Section
• Others: Newton-Raphson, successive quadratic interpolation
Successive Alternate Equal Interval
Assume the bounding phase has found xl and xu. Place two trial points inside the bracket:

    xa = xl + (1/3) I    and    xb = xu - (1/3) I,    where I = xu - xl

The min can be on either side of the better trial point, but for sure it's not in the region beyond the worse trial point, so that third of the interval is discarded. Point values… not a line.
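For example, with the H22 bracket above (xl = 0, xu = 2, so I = 2), the trial points land at xa = 0 + 2/3 = 0.66667 and xb = 2 - 2/3 = 1.33333, exactly the first-iteration points in the earlier table.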
[Figure 10.9: Graphic of a section partition.]
Golden section
Choose τ so that the same interior point can be reused at every iteration, i.e., left side = right side of the partition:

    (1 - τ) I = τ (τ I)    ⇒    τ² + τ - 1 = 0

Quadratic formula, τ = [-b ± √(b² - 4ac)] / (2a):

    τ(1,2) = [-1 ± √(1 - 4(1)(-1))] / (2(1)) = (-1 ± √5)/2 = (-1 ± 2.236)/2 = 0.618, -1.618

Only the positive root makes sense, so τ = 0.618.
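As a quick check: τ² = 0.618² = 0.382 = 1 - τ, matching the 1-τ = 0.38197 used in the table above; this is exactly what lets one interior point be carried over to the next iteration, so only one new function evaluation is needed per iteration.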
Descent Algorithm?

Recall c = ∇f(x), the gradient at the current point. For a small step α > 0 along a direction d, f(x + αd) ≈ f(x) + α cᵀd, so f decreases only if

    cᵀd = ||c|| ||d|| cos(θ) < 0,    i.e., cos(θ) < 0

If we let d = -c, then θ = 180° and cos(180°) = -1, so

    cᵀd = -cᵀc = -||c||² < 0

Descent is guaranteed! This is the steepest descent direction:

    d = -c = -∇f(x)
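For example, with the Ex 10.4 starting point below, c = (2, -2) and d = -c = (-2, 2), so cᵀd = (2)(-2) + (-2)(2) = -8 = -||c||² < 0: a descent direction.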
How does it work?
Steepest descent algorithm
“Modified” Steepest-Descent Algorithm
Step 0. Select convergence (stopping) parameters: ε > 0 for ||c|| (the gradient), δ > 0 for the change in f (Excel's "convergence" setting), and kmax; set the iteration counter k = 0.
Step 1. Estimate a starting point x(0).
Step 2. Calculate the gradient c(k) = ∇f(x(k)).
Step 3. Calculate ||c(k)||; if ||c(k)|| ≤ ε, stop; otherwise continue.
Step 4. Set d(k) = -c(k).
Step 5. Find the optimal step size α* along d(k).
Step 6. Update the design: x(k+1) = x(k) + α* d(k); calculate f(x(k+1)); set k = k + 1.
Step 7. If k > kmax or the change in f is less than δ, stop; otherwise go to Step 2.
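A minimal Python sketch of these steps, assuming the gradient is supplied and using a simple backtracking rule in place of the exact line search (the slides use Solver for Step 5); all names here are illustrative:

```python
import numpy as np

def steepest_descent(f, grad, x0, eps=1e-6, delta=1e-12, k_max=100):
    x = np.asarray(x0, dtype=float)            # Step 1: starting point x(0)
    f_old = f(x)
    for k in range(k_max):                     # Step 7: iteration cap kmax
        c = grad(x)                            # Step 2: gradient c(k)
        if np.linalg.norm(c) <= eps:           # Step 3: ||c|| test
            break
        d = -c                                 # Step 4: steepest-descent direction
        alpha = 1.0                            # Step 5: crude backtracking stand-in for alpha*
        while f(x + alpha * d) >= f_old and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d                      # Step 6: update the design
        f_new = f(x)
        if abs(f_old - f_new) < delta:         # Step 7: change-in-f test
            f_old = f_new
            break
        f_old = f_new
    return x, f_old

# Example: the Ex 10.4 function f = (x1 - x2)^2 from x(0) = (1, 0)
f = lambda x: x[0]**2 + x[1]**2 - 2 * x[0] * x[1]
grad = lambda x: np.array([2 * x[0] - 2 * x[1], 2 * x[1] - 2 * x[0]])
x_star, f_star = steepest_descent(f, grad, [1.0, 0.0])
print(x_star, f_star)    # lands on x1 = x2 (f = 0), here (0.5, 0.5)
```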
Ex 10.4
Iteration      0           1           2           3           4
x1             1         -19        -133.548    -133.386     3845.02
x2             0          20         134.5484    134.3864   -3844.02
c1             2         -78        -536.193    -535.545    15378.08
c2            -2          78         536.1935    535.5455   -15378.1
||c||          2.8        110.3      758.3       757.4      21747.9
d1            -2          78         536.1935    535.5455   -15378.1
d2             2         -78        -536.193    -535.545    15378.08
α             10          -1.46857     0.000302    7.428699     0.25
xnew1        -19        -133.548    -133.386     3845.02        0.500002
xnew2         20         134.5484    134.3864   -3844.02        0.499998
f(x)        1521       71875.86    71702.24    59121329         1.34E-11
f(x) = x1² + x2² - 2 x1 x2

c = ∇f(x) = (2x1 - 2x2,  2x2 - 2x1)

At x(0) = (1, 0):  c = (2(1) - 2(0),  2(0) - 2(1)) = (2, -2)
Use Solver to find α*
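As a cross-check on Solver, the step size can also be found analytically for this f: along d(0) = (-2, 2) from x(0) = (1, 0), x(α) = (1 - 2α, 2α), so f(α) = (1 - 2α - 2α)² = (1 - 4α)². Setting f'(α) = -8(1 - 4α) = 0 (with f''(α) = 32 > 0) gives α* = 0.25, matching the α = 0.25 and xnew = (0.5, 0.5) in the table below.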
Ex 10.4 (cont'd)

Iteration      0           1           2           3           4
x1             1           0.5         0.5         0.5         0.499999
x2             0           0.5         0.5         0.5         0.500001
c1             2           1.22E-08    8.39E-08    8.38E-08   -2.4E-06
c2            -2          -1.2E-08    -8.4E-08    -8.4E-08     2.41E-06
||c||          2.8         0.0         0.0         0.0         0.0
d1            -2          -1.2E-08    -8.4E-08    -8.4E-08     2.41E-06
d2             2           1.22E-08    8.39E-08    8.38E-08   -2.4E-06
α              0.25       -1.46857     0.000302    7.428699    0.25
xnew1          0.5         0.5         0.5         0.499999    0.5
xnew2          0.5         0.5         0.5         0.500001    0.5
f(x)           0           1.83E-15    1.78E-15    1.45E-12    0
f(x) = x1² + x2² - 2 x1 x2,   c = ∇f(x) = (2x1 - 2x2, 2x2 - 2x1);  at x(0) = (1, 0), c = (2, -2)
||c|| = 0: Done!
H22 Prob 10.52
Let’s use SteepDescentTemplate.xls to set up 10.52 and solve.
Summary
• Step size methods: analytical, region elimination
• Golden Section is very efficient
• Algorithms include stopping criteria (||c||, ∆f)
• Steepest descent algorithm:
  - Convergence is assured
  - Lots of function evals (in the line search)
  - Each iteration is independent of previous moves (i.e., totally "local")
  - Successive iterations slow down… may stall