Nonlinear Programming
Elimination Method
Fibonacci Method, Golden Section Method
Chapters: 5.7 & 5.8
(Engineering Optimization by S S Rao)
Satpathi DK, BITS-Hyderabad Campus
Unimodal function: a function that has only one peak (maximum) or one valley (minimum) in a given interval.
Interval of uncertainty:
Initially the interval of uncertainty is [a, b], i.e. the optimum is known only to lie somewhere in [a, b]. Then, after two experiments [finding the function values at two interior points x_1 < x_2], the unimodality assumption lets us discard part of the interval, and the interval of uncertainty reduces to [a, x_2] or [x_1, b].
Measure of effectiveness:
Let L_0 be the initial interval of uncertainty and L_N be the interval of uncertainty after N experiments. The measure of effectiveness is defined as the reduction ratio

  \frac{L_N}{L_0}.
Fibonacci Method
This method is used to find the minimum of a function of one variable.
Method: The method uses the Fibonacci sequence {F_n}, i.e.

  F_0 = 1 = F_1, \qquad F_n = F_{n-1} + F_{n-2}, \quad n > 1,

which yields the sequence 1, 1, 2, 3, 5, 8, 13, 21, ...
Procedure: Let L_0 be the initial interval of uncertainty, defined by a \le x \le b, and let n be the number of experiments to be conducted. Define

  L_2^* = \frac{F_{n-2}}{F_n} L_0    ...(1)

and place the first two experiments at points x_1 and x_2, which are located at a distance of L_2^* from each end of L_0. This gives
  x_1 = a + L_2^* = a + \frac{F_{n-2}}{F_n} L_0,

  x_2 = b - L_2^* = b - \frac{F_{n-2}}{F_n} L_0 = a + \frac{F_n - F_{n-2}}{F_n} L_0 = a + \frac{F_{n-1}}{F_n} L_0.    ...(2)

(This shows that b - x_2 = x_1 - a, i.e. x_1 and x_2 are placed symmetrically in L_0.)

[Figure: the interval L_0 with end points a, b and the first two experiments x_1, x_2 marked.]
Discard part of the interval by using the unimodality assumption.
Depending on whether f(x_1) < f(x_2) or f(x_1) > f(x_2), the interval of uncertainty after two experiments is

  L_2 = [a, x_2] \text{ or } [x_1, b],

of length x_2 - a or b - x_1 respectively; by the symmetry of eq. (2) these are equal, so

  L_2 = L_0 - L_2^* = L_0 - \frac{F_{n-2}}{F_n} L_0 = \frac{F_{n-1}}{F_n} L_0,

with one experiment left in it.
This experiment will be at a distance of

  L_2^* = \frac{F_{n-2}}{F_n} L_0 = \frac{F_{n-2}}{F_{n-1}} L_2

from one end and

  L_2 - L_2^* = \frac{F_{n-1}}{F_n} L_0 - \frac{F_{n-2}}{F_n} L_0 = \frac{F_{n-3}}{F_n} L_0 = \frac{F_{n-3}}{F_{n-1}} L_2

from the other end. Now place the third experiment in L_2 so that the current two experiments are located at a distance of

  L_3^* = \frac{F_{n-3}}{F_n} L_0 = \frac{F_{n-3}}{F_{n-1}} L_2

from each end of the interval L_2.
Again the unimodality property will allow us to reduce the interval of uncertainty to L_3, given by

  L_3 = L_2 - L_3^* = L_2 - \frac{F_{n-3}}{F_{n-1}} L_2 = \frac{F_{n-2}}{F_{n-1}} L_2 = \frac{F_{n-2}}{F_n} L_0.

This process of discarding a certain interval and placing a new experiment in the remaining interval can be continued, so that the location of the j-th experiment and the interval of uncertainty at the end of j experiments are given by
  L_j^* = \frac{F_{n-j}}{F_{n-(j-2)}} L_{j-1},

  L_j = \frac{F_{n-(j-1)}}{F_n} L_0,

and for j = n we have

  \frac{L_n}{L_0} = \frac{F_1}{F_n} = \frac{1}{F_n}.

The ratio L_n/L_0 will permit us to determine n, the required number of experiments, to achieve any desired accuracy in locating the optimum point.
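For instance (an illustration of mine, not part of the slides), a few lines of Python can pick the smallest n whose Fibonacci number meets a desired reduction ratio L_n/L_0; a final interval no longer than 10% of L_0 needs n = 6, since F_6 = 13 is the first Fibonacci number of at least 10.

```python
def required_experiments(ratio):
    """Smallest n with 1/F_n <= ratio, using F_0 = F_1 = 1."""
    f_prev, f_curr, n = 1, 1, 1          # F_0, F_1
    while 1.0 / f_curr > ratio:
        f_prev, f_curr = f_curr, f_prev + f_curr
        n += 1
    return n

print(required_experiments(0.1))  # -> 6, since F_6 = 13 >= 10
```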
Position of the final experiment: taking j = n in the placement formula L_j^* = \frac{F_{n-j}}{F_{n-(j-2)}} L_{j-1} gives

  \frac{L_n^*}{L_{n-1}} = \frac{F_0}{F_2} = \frac{1}{2} \quad \text{for all } n.

Thus, after conducting n - 1 experiments and discarding the appropriate interval in each step, the remaining interval will contain one experiment at its middle point, and the n-th experiment is also to be placed at the centre of the present interval of uncertainty. That is, the position of the n-th experiment will be the same as that of the (n-1)-th one, and this is true for whatever value we choose for n.
Since no new information can be gained by placing the n-th experiment exactly at the same location as the (n-1)-th experiment, we place the n-th experiment very close to the remaining valid experiment. This enables us to obtain the final interval of uncertainty to within

  \frac{1}{2} L_{n-1}.
Limitations of the method:
1. The initial interval of uncertainty, in which the optimum lies, has to be known.
2. The function being optimized has to be unimodal (a unimodal function is one that has only one peak (maximum) or valley (minimum) in a given interval) in the initial interval of uncertainty.
3. The exact optimum cannot be located by this method. Only an interval, known as the final interval of uncertainty, will be known. The final interval of uncertainty can be made as small as desired by using more computations.
4. The number of function evaluations to be used in the search, or the resolution required, has to be specified beforehand.
Q. Find the minimum of f(x) = x^2 - 2x, 0 \le x \le 1.5, with n = 4.

Here L_0 = [0, 1.5] is the original interval of uncertainty, so

  L_2^* = \frac{F_{n-2}}{F_n} L_0 = \frac{F_2}{F_4} L_0 = \frac{2}{5}(1.5) = 0.6.

Thus the positions of the first two experiments are

  x_1 = a + L_2^* = 0.6, \qquad x_2 = b - L_2^* = 1.5 - 0.6 = 0.9,

with f(x_1) = -0.84 and f(x_2) = -0.99.

[Figure: the interval 0 \le x \le 1.5 with x_1 = 0.6 (f = -0.84) and x_2 = 0.9 (f = -0.99) marked.]
Since f(x1) > f(x2) we reject [0,x1]
The third experiment is placed at

  x_3 = b - (x_2 - x_1) = 1.5 - (0.9 - 0.6) = 1.2,

with f(x_3) = -0.96.

[Figure: the interval 0.6 \le x \le 1.5 with x_2 = 0.9 (f = -0.99) and x_3 = 1.2 (f = -0.96) marked.]
Since f(x3) > f(x2), we reject [1.2, 1.5].
Now the interval of uncertainty is [0.6, 1.2].
Note that the point 0.9 is the middle point, i.e. it is at equal distance from both end points. So we place the fourth experiment very close to it: take x_4 = 0.95, giving f(x_4) = -0.9975.
Since f(0.9) > f(0.95), the new interval of uncertainty L_4 is [0.9, 1.2], and

  \frac{L_4}{L_0} = \frac{0.3}{1.5} = \frac{1}{5} = 0.2.
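As a sketch of the whole procedure (my own illustration, not code from Rao), the following Python implements the Fibonacci search described above. The eps argument is the small displacement used for the n-th experiment, and the symmetric update x_new = a + b - x_old is the placement rule of eq. (2) carried forward. Run on this example it reproduces the hand computation, returning the final interval [0.9, 1.2].

```python
def fibonacci_search(f, a, b, n, eps=0.05):
    """Fibonacci search for the minimum of a unimodal f on [a, b],
    using n experiments (function evaluations), n >= 3.
    eps is the small displacement of the n-th experiment."""
    F = [1, 1]                                   # F_0 = F_1 = 1
    for _ in range(2, n + 1):
        F.append(F[-1] + F[-2])

    x1 = a + F[n - 2] / F[n] * (b - a)           # eq. (2)
    x2 = b - F[n - 2] / F[n] * (b - a)
    f1, f2 = f(x1), f(x2)

    for _ in range(n - 3):                       # experiments 3 .. n-1
        if f1 > f2:                              # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + b - x1                      # symmetric new point
            f2 = f(x2)
        else:                                    # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + b - x2
            f1 = f(x1)

    # After n-1 experiments the surviving point xm sits at the centre
    # of the current interval; place the n-th experiment at xm + eps.
    if f1 > f2:
        a, xm, fm = x1, x2, f2
    else:
        b, xm, fm = x2, x1, f1
    return (xm, b) if fm > f(xm + eps) else (a, xm + eps)

# The worked example: f(x) = x^2 - 2x on [0, 1.5] with n = 4
print(fibonacci_search(lambda x: x * x - 2 * x, 0.0, 1.5, 4))  # (0.9, 1.2)
```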
Golden Section Method
The golden section method is the same as the Fibonacci method, except that in the Fibonacci method the total number of experiments to be conducted has to be specified before beginning the calculation, whereas this is not required in the golden section method.
In the Fibonacci method, the location of the first two experiments is determined by the total number of experiments, n. In the golden section method we start with the assumption that we are going to conduct a large number of experiments. Of course, the total number of experiments can be decided during the computation.
Procedure: The procedure is the same as in the Fibonacci method, except that the location of the first two experiments is defined by

  L_2^* = \lim_{n \to \infty} \frac{F_{n-2}}{F_n} L_0 = \lim_{n \to \infty} \frac{F_{n-2}}{F_{n-1}} \cdot \frac{F_{n-1}}{F_n} L_0 \approx 0.382 L_0.

The desired accuracy can be specified to stop the procedure.
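A minimal sketch (mine, not from the slides) of the golden section search using the limiting ratio 0.382 from above; the hypothetical tol argument is the desired final length of the interval of uncertainty, illustrating how the stopping accuracy replaces a pre-specified n.

```python
GR = 0.381966  # limiting ratio F_{n-2}/F_n from the slide

def golden_section_search(f, a, b, tol=1e-5):
    """Golden section search for the minimum of a unimodal f on [a, b]."""
    x1 = a + GR * (b - a)
    x2 = b - GR * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 > f2:                  # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = b - GR * (b - a)
            f2 = f(x2)
        else:                        # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + GR * (b - a)
            f1 = f(x1)
    return 0.5 * (a + b)

print(golden_section_search(lambda x: x * x - 2 * x, 0.0, 1.5))  # ~ 1.0
```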
Definition:
A function f(X) = f(x_1, x_2, \ldots, x_n) of n variables is said to be convex if for each pair of points X, Y on the graph, the line segment joining these two points lies entirely above or on the graph:

  f((1 - \lambda) X + \lambda Y) \le (1 - \lambda) f(X) + \lambda f(Y), \qquad 0 \le \lambda \le 1.

f is called concave (strictly concave) if -f is convex (strictly convex).

Convexity test for a function of one variable:

  convex if \frac{d^2 f}{dx^2} \ge 0, \qquad concave if \frac{d^2 f}{dx^2} \le 0.
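As a quick check (my example, not from the slides), the function f(x) = x^2 - 2x from the Fibonacci example passes the one-variable test everywhere, which is why it is unimodal on [0, 1.5]:

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 - 2*x                 # objective from the Fibonacci example
fpp = sp.diff(f, x, 2)         # second derivative
print(fpp, fpp >= 0)           # 2 True -> convex on the whole line
```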
Convexity test for functions of 2 variables:

  quantity               | convex | strictly convex | concave | strictly concave
  f_xx f_yy - (f_xy)^2   | >= 0   | > 0             | >= 0    | > 0
  f_xx                   | >= 0   | > 0             | <= 0    | < 0
  f_yy                   | >= 0   | > 0             | <= 0    | < 0
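As an illustration (again mine, not from the slides), the table can be applied mechanically; here it classifies the two quadratics that appear in the gradient-method examples later in this deck. For quadratics the second derivatives are constant, so the local test is global.

```python
def classify_quadratic(fxx, fyy, fxy):
    """Apply the 2-variable convexity test from the table above."""
    det = fxx * fyy - fxy ** 2
    if det > 0 and fxx > 0 and fyy > 0:
        return "strictly convex"
    if det > 0 and fxx < 0 and fyy < 0:
        return "strictly concave"
    if det >= 0 and fxx >= 0 and fyy >= 0:
        return "convex"
    if det >= 0 and fxx <= 0 and fyy <= 0:
        return "concave"
    return "neither"

# f = 2*x1*x2 + 2*x2 - x1^2 - 2*x2^2   (steepest ascent example)
print(classify_quadratic(fxx=-2, fyy=-4, fxy=2))   # strictly concave
# f = x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2 (steepest descent example)
print(classify_quadratic(fxx=4, fyy=2, fxy=2))     # strictly convex
```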
When is a locally optimal solution also globally optimal?
For minimization problems, a local minimum is also a global minimum when:
The objective function is convex.
The feasible region is convex.
Local Maximum Property
A local max of a concave function on a convex feasible
region is also a global max.
Strict concavity implies that the global optimum is unique.
Given this, the following NLPs can be solved:
Maximization problems with a concave objective function and linear constraints.
Nonlinear Programming
Steepest Ascent Method, Steepest Descent Method, Conjugate Gradient Method
Chapters: 19.1.1, 19.1.2 (H. A. Taha)
& 6.11 (Engineering Optimization by S S Rao)
Gradient of a function
The gradient of a function f of n variables is the n-component vector

  \nabla f = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \right)^T.

The gradient has a very important property. If we move along the gradient direction from any point in n-dimensional space, the function value increases at the fastest rate. Hence the gradient direction is called the direction of steepest ascent. Unfortunately, the direction of steepest ascent is a local property and not a global one.
Since the gradient vector represents the direction of steepest ascent, the negative of the gradient vector denotes the direction of steepest descent. Thus any method that makes use of the gradient vector can be expected to find the minimum point faster than one that does not make use of the gradient vector.
Theorem: The gradient vector represents the direction of steepest ascent.
Theorem: The maximum rate of change of f at any point X is equal to the magnitude of the gradient vector at the same point.
Termination of a gradient method occurs at a point where the gradient vector becomes null. This is only a necessary condition for optimality. Optimality cannot be verified unless it is known a priori that f(X) is concave or convex.
Steepest Ascent Method:
Suppose f(X) is to be maximized. Let X^0 be the initial point from which the procedure starts, and define \nabla f(X^k) as the gradient of f at the point X^k. The idea is to determine a particular path p along which \partial f / \partial p is maximized at a given point. This result is achieved if successive points X^k and X^{k+1} are selected such that

  X^{k+1} = X^k + r^k \nabla f(X^k),

where r^k is the optimal step size at X^k.
The step size r^k is determined such that the next point X^{k+1} leads to the largest improvement in f. This is equivalent to determining r = r^k that maximizes the function

  h(r) = f\big(X^k + r \nabla f(X^k)\big).

The proposed procedure terminates when two successive trial points X^k and X^{k+1} are approximately equal. This is equivalent to having

  r^k \nabla f(X^k) \approx 0.

Because r^k \ne 0, the necessary condition \nabla f(X^k) = 0 is satisfied at X^k.
Q. Max f(X) = 2 x_1 x_2 + 2 x_2 - x_1^2 - 2 x_2^2.

  \frac{\partial f}{\partial x_1} = 2 x_2 - 2 x_1, \qquad \frac{\partial f}{\partial x_2} = 2 x_1 + 2 - 4 x_2,

  \frac{\partial^2 f}{\partial x_1^2} = -2 < 0, \qquad \frac{\partial^2 f}{\partial x_2^2} = -4 < 0,

  \frac{\partial^2 f}{\partial x_1^2} \frac{\partial^2 f}{\partial x_2^2} - \left( \frac{\partial^2 f}{\partial x_1 \partial x_2} \right)^2 = 8 - 4 = 4 > 0,

so f(x_1, x_2) is strictly concave.
Suppose that X^0 = (0, 0). Then \nabla f(0, 0) = (0, 2).
  X^1 = X^0 + r^0 \nabla f(0, 0) = (0, 0) + r^0 (0, 2) = (0, 2 r^0),

  h(r^0) = f(0, 2 r^0) = 4 r^0 - 8 (r^0)^2, \qquad \frac{dh}{dr^0} = 4 - 16 r^0 = 0 \Rightarrow r^0 = \frac{1}{4},

so X^1 = (0, 1/2), with \nabla f(X^1) = (1, 0). Then

  X^2 = X^1 + r^1 \nabla f(X^1) = (0, 1/2) + r^1 (1, 0) = (r^1, 1/2),

  h(r^1) = f(r^1, 1/2) = \frac{1}{2} + r^1 - (r^1)^2, \qquad \frac{dh}{dr^1} = 1 - 2 r^1 = 0 \Rightarrow r^1 = \frac{1}{2},

so X^2 = (1/2, 1/2).
By continuing this process, the subsequent trial solutions would be

  \left(\frac{1}{2}, \frac{3}{4}\right), \left(\frac{3}{4}, \frac{3}{4}\right), \left(\frac{3}{4}, \frac{7}{8}\right), \left(\frac{7}{8}, \frac{7}{8}\right), \ldots

Because these points are converging to X^* = (1, 1), this solution is optimal, as \nabla f(1, 1) = (0, 0).
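A compact sketch of the steepest ascent iteration for this example (my code, not Taha's). Because f is quadratic, the optimal step maximizing h(r) has the closed form r = -(g^T g)/(g^T H g) with H the constant Hessian, which reproduces r^0 = 1/4 and r^1 = 1/2 from above.

```python
import numpy as np

def grad(x):
    """Gradient of f = 2*x1*x2 + 2*x2 - x1^2 - 2*x2^2."""
    return np.array([2 * x[1] - 2 * x[0],
                     2 * x[0] + 2 - 4 * x[1]])

H = np.array([[-2.0, 2.0],    # constant Hessian, negative definite
              [2.0, -4.0]])

def steepest_ascent(x, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # necessary condition: grad = 0
            break
        # Exact line search: h(r) is quadratic in r, maximized at
        # r = -(g.g) / (g.H.g) since H is negative definite.
        r = -g.dot(g) / g.dot(H).dot(g)
        x = x + r * g
    return x

print(steepest_ascent(np.array([0.0, 0.0])))  # -> approx (1, 1)
```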
Steepest Descent Method
The use of the negative of the gradient vector as a direction of minimization was first made by Cauchy in 1847. In this method we start from an initial trial point X^0 and iteratively move along the steepest descent directions until the optimum point is found. The steepest descent method can be summarized by the following steps:
1. Start with an arbitrary initial point X^0.
2. Find the search direction S^k as S^k = -\nabla f(X^k).
3. Determine the optimal step length r^k in the direction S^k and set

  X^{k+1} = X^k + r^k S^k = X^k - r^k \nabla f(X^k).

4. Test the new point X^{k+1} for optimality: stop when \nabla f(X^{k+1}) = 0.
Q. Min f(X) = x_1 - x_2 + 2 x_1^2 + 2 x_1 x_2 + x_2^2.

  \frac{\partial f}{\partial x_1} = 1 + 4 x_1 + 2 x_2, \qquad \frac{\partial f}{\partial x_2} = -1 + 2 x_1 + 2 x_2,

  \frac{\partial^2 f}{\partial x_1^2} = 4 > 0, \qquad \frac{\partial^2 f}{\partial x_2^2} = 2 > 0,

  \frac{\partial^2 f}{\partial x_1^2} \frac{\partial^2 f}{\partial x_2^2} - \left( \frac{\partial^2 f}{\partial x_1 \partial x_2} \right)^2 = 8 - 4 = 4 > 0,

so f(x_1, x_2) is strictly convex.
Suppose that X^0 = (0, 0). Then \nabla f(0, 0) = (1, -1).
  X^1 = X^0 - r^1 \nabla f(0, 0) = (0, 0) - r^1 (1, -1) = (-r^1, r^1),

  h(r^1) = f(-r^1, r^1) = (r^1)^2 - 2 r^1, \qquad \frac{dh}{dr^1} = 2 r^1 - 2 = 0 \Rightarrow r^1 = 1,

so X^1 = (-1, 1), with \nabla f(X^1) = (-1, -1). Then

  X^2 = X^1 - r^2 \nabla f(X^1) = (-1, 1) - r^2 (-1, -1) = (-1 + r^2, 1 + r^2),

  h(r^2) = f(X^2) = 5 (r^2)^2 - 2 r^2 - 1, \qquad \frac{dh}{dr^2} = 10 r^2 - 2 = 0 \Rightarrow r^2 = \frac{1}{5},

so X^2 = \left(-\frac{4}{5}, \frac{6}{5}\right).
By continuing this process, the subsequent trial solutions would be

  (-1, 1.4), \ldots

converging to the optimum X^* = (-1, 1.5).
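A companion sketch for steepest descent (mine, not code from Rao), following steps 1 to 4 above. Minimizing the quadratic h(r) gives the closed-form step r = (g^T g)/(g^T H g), which reproduces r^1 = 1 and r^2 = 1/5 from the hand computation.

```python
import numpy as np

def grad(x):
    """Gradient of f = x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2."""
    return np.array([1 + 4 * x[0] + 2 * x[1],
                     -1 + 2 * x[0] + 2 * x[1]])

H = np.array([[4.0, 2.0],     # constant Hessian, positive definite
              [2.0, 2.0]])

def steepest_descent(x, tol=1e-8, max_iter=200):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # step 4: optimality test
            break
        r = g.dot(g) / g.dot(H).dot(g)       # exact minimizer of h(r)
        x = x - r * g                        # step 3: X_{k+1} = X_k - r_k grad
    return x

print(steepest_descent(np.array([0.0, 0.0])))  # -> approx (-1, 1.5)
```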