+ All Categories
Home > Documents > Optimization Nonlinear programming: One dimensional minimization methods.

Optimization Nonlinear programming: One dimensional minimization methods.

Date post: 18-Dec-2015
Category:
Upload: jeffrey-haynes
View: 477 times
Download: 24 times
Share this document with a friend
Popular Tags:
144
Optimization Nonlinear programming: One dimensional minimization methods
Transcript
Page 1: Optimization Nonlinear programming: One dimensional minimization methods.

Optimization

Nonlinear programming:One dimensional minimization methods

Page 2: Optimization Nonlinear programming: One dimensional minimization methods.

Introduction

The basic philosophy of most of the numerical methods of optimization is to produce a sequence of improved approximations to the optimum according to the following scheme:

1. Start with an initial trial point Xi

2. Find a suitable direction Si (i=1 to start with) which points in the general direction of the optimum

3. Find an appropriate step length i* for movement along the direction Si

4. Obtain the new approximation Xi+1 as

5. Test whether Xi+1 is optimum. If Xi+1 is optimum, stop the procedure. Otherwise set a new i=i+1 and repeat step (2) onward.

iiii SXX *1

Page 3: Optimization Nonlinear programming: One dimensional minimization methods.

Iterative Process of Optimization

Page 4: Optimization Nonlinear programming: One dimensional minimization methods.

Introduction

• The iterative procedure indicated is valid for unconstrained as well as constrained optimization problems.

• If f(X) is the objective function to be minimized, the problem of determining i* reduces to finding the value i = i* that minimizes f (Xi+1) = f (Xi+ i Si) = f (i ) for fixed values of Xi

and Si.

• Since f becomes a function of one variable i only, the methods of finding i* in the previous slide are called one-dimensional minimization methods.

Page 5: Optimization Nonlinear programming: One dimensional minimization methods.

One dimensional minimization methods

• Analytical methods (differential calculus methods)

• Numerical methods– Elimination methods

• Unrestricted search

• Exhaustive search

• Dichotomous search

• Fibonacci method

• Golden section method

– Interpolation methods• Requiring no derivatives (quadratic)

• Requiring derivatives– Cubic

– Direct root

» Newton

» Quasi-Newton

» Secant

Page 6: Optimization Nonlinear programming: One dimensional minimization methods.

One dimensional minimization methods

Differential calculus methods:

• Analytical method

• Applicable to continuous, twice differentiable functions

• Calculation of the numerical value of the objective function is virtually the last step of the process

• The optimal value of the objective function is calculated after determining the optimal values of the decision variables

Page 7: Optimization Nonlinear programming: One dimensional minimization methods.

One dimensional minimization methods

Numerical methods:

• The values of the objective function are first found at various combinations of the decision variables

• Conclusions are then drawn regarding the optimal solution

• Elimination methods can be used for the minimization of even discontinuous functions

• The quadratic and cubic interpolation methods involve polynomial approximations to the given function

• The direct root methods are root finding methods that can be considered to be equivalent to quadratic interpolation

Page 8: Optimization Nonlinear programming: One dimensional minimization methods.

Unimodal function

• A unimodal function is one that has only one peak (maximum) or valley (minimum) in a given interval

• Thus a function of one variable is said to be unimodal if, given that two values of the variable are on the same side of the optimum, the one nearer the optimum gives the better functional value (i.e., the smaller value in the case of a minimization problem). This can be stated mathematically as follows:

A function f (x) is unimodal if– x1 < x2 < x* implies that f (x2) < f (x1) and

– x2 > x1 > x* implies that f (x1) < f (x2) where x* is the minimum point

Page 9: Optimization Nonlinear programming: One dimensional minimization methods.

Unimodal function

• Examples of unimodal functions:

• Thus, a unimodal function can be a nondifferentiable or even a discontinuous function

• If a function is known to be unimodal in a given range, the interval in which the minimum lies can be narrowed down provided that the function values are known at two different values in the range.

Page 10: Optimization Nonlinear programming: One dimensional minimization methods.

Unimodal function• For example, consider the normalized interval [0,1] and two function evaluations

within the interval as shown:

• There are three possible outcomes:

– f1 < f2

– f1 > f2

– f1 = f2

Page 11: Optimization Nonlinear programming: One dimensional minimization methods.

Unimodal function

• If the outcome is f1 < f2, the minimizing x can not lie to the right of x2

• Thus, that part of the interval [x2,1] can be discarded and a new small interval of uncertainty, [0, x2] results as shown in the figure

Page 12: Optimization Nonlinear programming: One dimensional minimization methods.

Unimodal function

• If the outcome is f (x1) > f (x2) , the interval [0, x1] can be discarded to obtain a new smaller interval of uncertainty, [x1, 1].

Page 13: Optimization Nonlinear programming: One dimensional minimization methods.

Unimodal function

• If f1 = f2 , intervals [0, x1] and [x2,1] can both be discarded to obtain the new interval of uncertainty as [x1,x2]

Page 14: Optimization Nonlinear programming: One dimensional minimization methods.

Unimodal function

• Furthermore, if one of the experiments (function evaluations in the elimination method) remains within the new interval, as will be the situation in Figs (a) and (b), only one other experiment need be placed within the new interval in order that the process be repeated.

• In Fig (c), two more experiments are to be placed in the new interval in order to find a reduced interval of uncertainty.

Page 15: Optimization Nonlinear programming: One dimensional minimization methods.

Unimodal function

• The assumption of unimodality is made in all the elimination techniques

• If a function is known to be multimodal (i.e., having several valleys or peaks), the range of the function can be subdivided into several parts and the function treated as a unimodal function in each part.

Page 16: Optimization Nonlinear programming: One dimensional minimization methods.

Elimination methods

In most practical problems, the optimum solution is known to lie within restricted ranges of the design variables. In some cases, this range is not known, and hence the search has to be made with no restrictions on the values of the variables.

UNRESTRICTED SEARCH

• Search with fixed step size

• Search with accelerated step size

Page 17: Optimization Nonlinear programming: One dimensional minimization methods.

Unrestricted Search

Search with fixed step size• The most elementary approach for such a problem is to use a

fixed step size and move from an initial guess point in a favorable direction (positive or negative).

• The step size used must be small in relation to the final accuracy desired.

• Simple to implement

• Not efficient in many cases

Page 18: Optimization Nonlinear programming: One dimensional minimization methods.

Unrestricted Search

Search with fixed step size1. Start with an initial guess point, say, x1

2. Find f1 = f (x1)

3. Assuming a step size s, find x2=x1+s

4. Find f2 = f (x2)

5. If f2 < f1, and if the problem is one of minimization, the assumption of unimodality indicates that the desired minimum can not lie at x < x1. Hence the search can be continued further along points x3, x4,….using the unimodality assumption while testing each pair of experiments. This procedure is continued until a point, xi=x1+(i-1)s, shows an increase in the function value.

Page 19: Optimization Nonlinear programming: One dimensional minimization methods.

Unrestricted Search

Search with fixed step size (cont’d)

6. The search is terminated at xi, and either xi or xi-1 can be taken as the optimum point

7. Originally, if f1 < f2 , the search should be carried in the reverse direction at points x-2, x-3,…., where x-j=x1- ( j-1 )s

8. If f2=f1 , the desired minimum lies in between x1 and x2, and the minimum point can be taken as either x1 or x2.

9. If it happens that both f2 and f-2 are greater than f1, it implies that the desired minimum will lie in the double interval

x-2 < x < x2

Page 20: Optimization Nonlinear programming: One dimensional minimization methods.

Unrestricted Search

Search with accelerated step size • Although the search with a fixed step size appears to be very

simple, its major limitation comes because of the unrestricted nature of the region in which the minimum can lie.

• For example, if the minimum point for a particular function happens to be xopt=50,000 and in the absence of knowledge about the location of the minimum, if x1 and s are chosen as 0.0 and 0.1, respectively, we have to evaluate the function 5,000,001 times to find the minimum point. This involves a large amount of computational work.

Page 21: Optimization Nonlinear programming: One dimensional minimization methods.

Unrestricted Search

Search with accelerated step size (cont’d)• An obvious improvement can be achieved by increasing the

step size gradually until the minimum point is bracketed.

• A simple method consists of doubling the step size as long as the move results in an improvement of the objective function.

• One possibility is to reduce the step length after bracketing the optimum in ( xi-1, xi). By starting either from xi-1 or xi, the basic procedure can be applied with a reduced step size. This procedure can be repeated until the bracketed interval becomes sufficiently small.

Page 22: Optimization Nonlinear programming: One dimensional minimization methods.

Example Find the minimum of f = x (x-1.5) by starting from 0.0 with an initial step size of

0.05.

Solution:

The function value at x1 is f1=0.0. If we try to start moving in the negative x direction, we find that x-2=-0.05 and f-2=0.0775. Since f-2>f1, the assumption of unimodality indicates that the minimum can not lie toward the left of x-2. Thus, we start moving in the positive x direction and obtain the following results:

i Value of s xi=x1+s fi = f (xi) Is fi > fi-1

1 - 0.0 0.0 -

2 0.05 0.05 -0.0725 No

3 0.10 0.10 -0.140 No

4 0.20 0.20 -0.260 No

5 0.40 0.40 -0.440 No

6 0.8 0.80 -0.560 No

7 1.60 1.60 +0.160 Yes

Page 23: Optimization Nonlinear programming: One dimensional minimization methods.

Example Solution:

From these results, the optimum point can be seen to be xopt x6=0.8.

In this case, the points x6 and x7 do not really bracket the minimum point but provide information about it.

If a better approximation to the minimum is desired, the procedure can be restarted from x5 with a smaller step size.

Page 24: Optimization Nonlinear programming: One dimensional minimization methods.

Exhaustive search

• The exhaustive search method can be used to solve problems where the interval in which the optimum is known to lie is finite.

• Let xs and xf denote, respectively, the starting and final points of the interval of uncertainty.

• The exhaustive search method consists of evaluating the objective function at a predetermined number of equally spaced points in the interval (xs, xf), and reducing the interval of uncertainty using the assumption of unimodality.

Page 25: Optimization Nonlinear programming: One dimensional minimization methods.

Exhaustive search

• Suppose that a function is defined on the interval (xs, xf), and let it be evaluated at eight equally spaced interior points x1 to x8. The function value appears as:

• Thus, the minimum must lie, according to the assumption of unimodality, between points x5 and x7. Thus the interval (x5,x7) can be considered as the final interval of uncertainty.

Page 26: Optimization Nonlinear programming: One dimensional minimization methods.

Exhaustive search

• In general, if the function is evaluated at n equally spaced points in the original interval of uncertainty of length L0= xf - xs, and if the optimum value of the function (among the n function values) turns out to be at point xj, the final interval of uncertainty is given by:

• The final interval of uncertainty obtainable for different number of trials in the exhaustive search method is given below:

011 1

2L

nxxL jjn

Number

of trials 2 3 4 5 6 … n

Ln/L0 2/3 2/4 2/5 2/6 2/7 … 2/(n+1)

Page 27: Optimization Nonlinear programming: One dimensional minimization methods.

Exhaustive search

• Since the function is evaluated at all n points simultaneously, this method can be called a simultaneous search method.

• This method is relatively inefficient compared to the sequential search methods discussed next, where the information gained from the initial trials is used in placing the subsequent experiments.

Page 28: Optimization Nonlinear programming: One dimensional minimization methods.

Example

Find the minimum of f = x(x-1.5) in the interval (0.0,1.0) to within 10 % of the exact value.

Solution: If the middle point of the final interval of uncertainty is taken as the approximate optimum point, the maximum deviation could be 1/(n+1) times the initial interval of uncertainty. Thus, to find the optimum within 10% of the exact value, we should have

9or 10

1

1

1

n

n

Page 29: Optimization Nonlinear programming: One dimensional minimization methods.

Example

By taking n = 9, the following function values can be calculated:

Since f7 = f8 , the assumption of unimodality gives the final interval of uncertainty as L9= (0.7,0.8). By taking the middle point of L9 (i.e., 0.75) as an approximation to the optimum point, we find that it is in fact, the true optimum point.

i 1 2 3 4 5 6 7 8 9

xi 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9

fi=f(xi) -0.14 -0.26

-0.36 -0.44 -0.50 -0.54 -0.56 -0.56 -0.54

Page 30: Optimization Nonlinear programming: One dimensional minimization methods.

Dichotomous search

• The exhaustive search method is a simultaneous search method in which all the experiments are conducted before any judgement is made regarding the location of the optimum point.

• The dichotomous search method , as well as the Fibonacci and the golden section methods discussed in subsequent sections, are sequential search methods in which the result of any experiment influences the location of the subsequent experiment.

• In the dichotomous search, two experiments are placed as close as possible at the center of the interval of uncertainty.

• Based on the relative values of the objective function at the two points, almost half of the interval of uncertainty is eliminated.

Page 31: Optimization Nonlinear programming: One dimensional minimization methods.

Dichotomous search

• Let the positions of the two experiments be given by:

where is a small positive number chosen such that the two experiments give significantly different results.

22

22

02

01

Lx

Lx

Page 32: Optimization Nonlinear programming: One dimensional minimization methods.

Dichotomous Search

• Then the new interval of uncertainty is given by (L0/2+/2).

• The building block of dichotomous search consists of conducting a pair of experiments at the center of the current interval of uncertainty.

• The next pair of experiments is, therefore, conducted at the center of the remaining interval of uncertainty.

• This results in the reduction of the interval of uncertainty by nearly a factor of two.

Page 33: Optimization Nonlinear programming: One dimensional minimization methods.

Dichotomous Search• The intervals of uncertainty at the ends of different pairs of

experiments are given in the following table.

• In general, the final interval of uncertainty after conducting n experiments (n even) is given by:

Number of

experiments 2 4 6

Final interval of uncertainty (L0+ )/2

2242

1 0

L

222

1 0

L

2/2/0

2

11

2 nnn

LL

Page 34: Optimization Nonlinear programming: One dimensional minimization methods.

Dichotomous Search

Example: Find the minimum of f = x(x-1.5) in the interval (0.0,1.0) to within 10% of the exact value.

Solution: The ratio of final to initial intervals of uncertainty is given by:

where is a small quantity, say 0.001, and n is the number of experiments. If the middle point of the final interval is taken as the optimum point, the requirement can be stated as:

2/0

2/0 2

11

2

1nn

n

LL

L

5

1

2

11

2

1

..

10

1

2

1

2/0

2/

0

nn

n

L

ei

L

L

Page 35: Optimization Nonlinear programming: One dimensional minimization methods.

Dichotomous Search

Solution: Since = 0.001 and L0 = 1.0, we have

Since n has to be even, this inequality gives the minimum admissable value of n as 6. The search is made as follows: The first two experiments are made at:

0.5199

9992or

5000

995

2

1

1000

999

..

5

1

2

11

1000

1

2

1

n/22/

2/2/

n

nn

ei

5005.00005.05.022

4995.00005.05.022

02

01

Lx

Lx

Page 36: Optimization Nonlinear programming: One dimensional minimization methods.

Dichotomous Search

with the function values given by:

Since f2 < f1, the new interval of uncertainty will be (0.4995,1.0). The second pair of experiments is conducted at :

which gives the function values as:

50025.0)9995.0(5005.0)(

49975.0)0005.1(4995.0)(

22

11

xff

xff

75025.00005.0)2

4995.00.14995.0(

74925.00005.0)2

4995.00.14995.0(

4

3

x

x

5624994375.0)74975.0(75025.0)(

5624994375.0)75075.0(74925.0)(

44

33

xff

xff

Page 37: Optimization Nonlinear programming: One dimensional minimization methods.

Dichotomous Search

Since f3 > f4 , we delete (0.4995,x3) and obtain the new interval of uncertainty as:

(x3,1.0)=(0.74925,1.0)

The final set of experiments will be conducted at:

which gives the function values as:

875125.00005.0)2

74925.00.174925.0(

874125.00005.0)2

74925.00.174925.0(

4

3

x

x

5468437342.0)624875.0(875125.0)(

5470929844.0)625875.0(874125.0)(

66

55

xff

xff

Page 38: Optimization Nonlinear programming: One dimensional minimization methods.

Dichotomous Search

Since f5 < f6 , the new interval of uncertainty is given by (x3, x6) (0.74925,0.875125). The middle point of this interval can be taken as optimum, and hence:

5586327148.0

8121875.0

opt

opt

f

x

Page 39: Optimization Nonlinear programming: One dimensional minimization methods.

Interval halving method

In the interval halving method, exactly one half of the current interval of uncertainty is deleted in every stage. It requires three experiments in the first stage and two experiments in each subsequent stage.

The procedure can be described by the following steps:

1. Divide the initial interval of uncertainty L0 = [a,b] into four equal parts and label the middle point x0 and the quarter-interval points x1 and x2.

2. Evaluate the function f(x) at the three interior points to obtain f1 = f(x1), f0 = f(x0) and f2 = f(x2).

Page 40: Optimization Nonlinear programming: One dimensional minimization methods.

Interval halving method (cont’d)

3. (a) If f1 < f0 < f2 as shown in the figure, delete the interval ( x0,b), label x1 and x0 as the new x0 and b, respectively, and go to step 4.

Page 41: Optimization Nonlinear programming: One dimensional minimization methods.

Interval halving method (cont’d)

3. (b) If f2 < f0 < f1 as shown in the figure, delete the interval ( a, x0), label x2 and x0 as the new x0 and a, respectively, and go to step 4.

Page 42: Optimization Nonlinear programming: One dimensional minimization methods.

Interval halving method (cont’d)

3. (c) If f0 < f1 and f0 < f2 as shown in the figure, delete both the intervals ( a, x1), and ( x2 ,b), label x1 and x2 as the new a and b, respectively, and go to step 4.

Page 43: Optimization Nonlinear programming: One dimensional minimization methods.

Interval halving method (cont’d)

4. Test whether the new interval of uncertainty, L = b - a, satisfies the convergence criterion L ϵ where ϵ

is a small quantity. If the convergence criterion is satisfied, stop the procedure. Otherwise, set the new L0

= L and go to step 1.

Remarks

1. In this method, the function value at the middle point of the interval of uncertainty, f0, will be available in all the stages except the first stage.

Page 44: Optimization Nonlinear programming: One dimensional minimization methods.

Interval halving method (cont’d)

Remarks

2. The interval of uncertainty remaining at the end of n experiments ( n 3 and odd) is given by

0

2/)1(

2

1LL

n

n

Page 45: Optimization Nonlinear programming: One dimensional minimization methods.

Example

Find the minimum of f = x (x-1.5) in the interval (0.0,1.0) to within 10% of the exact value.

Solution: If the middle point of the final interval of uncertainty is taken as the optimum point, the specified accuracy can be achieved if:

Since L0=1, Eq. (E1) gives

(E1) 52

1or

102

1 00

2/)1(

0 LL

LL

n

n

(E2) 52or 5

1

2

1 1)/2-(n2/)1(

n

Page 46: Optimization Nonlinear programming: One dimensional minimization methods.

Example

Solution: Since n has to be odd, inequality (E2) gives the minimum permissable value of n as 7. With this value of n=7, the search is conducted as follows. The first three experiments are placed at one-fourth points of the interval L0=[a=0, b=1] as

Since f1 > f0 > f2, we delete the interval (a,x0) = (0.0,0.5), label x2 and x0 as the new x0 and a so that a=0.5, x0=0.75, and b=1.0. By dividing the new interval of uncertainty, L3=(0.5,1.0) into four equal parts, we obtain:

5625.0)75.0(75.0,75.0

5000.0)0.1(50.0,50.0

3125.0)25.1(25.0,25.0

02

00

11

fx

fx

fx

546875.0)625.0(875.0,875.0

562500.0)750.0(750.0,750.0

546875.0)875.0(625.0,625.0

22

00

11

fx

fx

fx

Page 47: Optimization Nonlinear programming: One dimensional minimization methods.

Example

Solution: Since f1 > f0 and f2 > f0, we delete both the intervals (a,x1) and (x2,b), and label x1, x0 and x2 as the new a,x0, and b, respectively. Thus, the new interval of uncertainty will be L5=(0.625,0.875). Next, this interval is divided into four equal parts to obtain:

Again we note that f1 > f0 and f2>f0, and hence we delete both the intervals (a,x1) and (x2,b) to obtain the new interval of uncertainty as L7=(0.6875,0.8125). By taking the middle point of this interval (L7) as optimum, we obtain:

This solution happens to be the exact solution in this case.

558594.0)6875.0(8125.0,8125.0

5625.0)75.0(75.0,75.0

558594.0)8125.0(6875.0,6875.0

02

00

11

fx

fx

fx

5625.0and,75.0 optopt fx

Page 48: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

As stated earlier, the Fibonacci method can be used to find the minimum of a function of one variable even if the function is not continuous. The limitations of the method are:

• The initial interval of uncertainty, in which the optimum lies, has to be known.

• The function being optimized has to be unimodal in the initial interval of uncertainty.

Page 49: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

The limitations of the method (cont’d):

• The exact optimum cannot be located in this method. Only an interval known as the final interval of uncertainty will be known. The final interval of uncertainty can be made as small as desired by using more computations.

• The number of function evaluations to be used in the search or the resolution required has to be specified before hand.

Page 50: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

This method makes use of the sequence of Fibonacci numbers, {Fn}, for placing the experiments. These numbers are defined as:

which yield the sequence 1,1,2,3,5,8,13,21,34,55,89,...

,4,3,2,

1

21

10

nFFF

FF

nnn

Page 51: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Procedure:

Let L0 be the initial interval of uncertainty defined by a x b and n be the total number of experiments to be conducted. Define

and place the first two experiments at points x1 and x2, which are located at a distance of L2

* from each end of L0.

02*

2 LF

FL

n

n

Page 52: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Procedure:

This gives

Discard part of the interval by using the unimodality assumption. Then there remains a smaller interval of uncertainty L2 given by:

01

02*

22

02*

21

LF

FaL

F

FbLbx

LF

FaLax

n

n

n

n

n

n

012

0*202 1 L

F

F

F

FLLLL

n

n

n

n

Page 53: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Procedure: The only experiment left in will be at a distance of

from one end and

from the other end. Now place the third experiment in the interval L2 so that the current two experiments are located at a distance of:

21

20

2*2 L

F

FL

F

FL

n

n

n

n

21

30

3*22 L

F

FL

F

FLL

n

n

n

n

21

30

3*3 L

F

FL

F

FL

n

n

n

n

Page 54: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Procedure:• This process of discarding a certain interval and placing a new

experiment in the remaining interval can be continued, so that the location of the jth experiment and the interval of uncertainty at the end of j experiments are, respectively, given by:

0

)1(

1)2(

*

LF

FL

LF

FL

n

jnj

jjn

jnj

Page 55: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Procedure:

• The ratio of the interval of uncertainty remaining after conducting j of the n predetermined experiments to the initial interval of uncertainty becomes:

and for j = n, we obtain

n

jnj

F

F

L

L )1(

0

nn

n

FF

F

L

L 11

0

Page 56: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

• The ratio Ln/L0 will permit us to determine n, the required number of experiments, to achieve any desired accuracy in locating the optimum point.Table gives the reduction ratio in the interval of uncertainty obtainable for different number of experiments.

Page 57: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Position of the final experiment:• In this method, the last experiment has to be placed with some

care. Equation

gives

• Thus, after conducting n-1 experiments and discarding the appropriate interval in each step, the remaining interval will contain one experiment precisely at its middle point.

1)2(

*

jjn

jnj L

F

FL

n allfor 2

1

2

0

1

*

F

F

L

L

n

n

Page 58: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Position of the final experiment:• However, the final experiment, namely, the nth

experiment, is also to be placed at the center of the present interval of uncertainty.

• That is, the position of the nth experiment will be the same as that of ( n-1)th experiment, and this is true for whatever value we choose for n.

• Since no new information can be gained by placing the nth experiment exactly at the same location as that of the (n-1)th experiment, we place the nth experiment very close to the remaining valid experiment, as in the case of the dichotomous search method.

Page 59: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Example: Minimize f(x)=0.65-[0.75/(1+x2)]-0.65 x tan-1(1/x) in the interval [0,3]

by the Fibonacci method using n=6.

Solution: Here n=6 and L0=3.0, which yield:

Thus, the positions of the first two experiments are given by x1=1.153846 and x2=3.0-1.153846=1.846154 with f1=f(x1)=-0.207270 and f2=f(x2)=-0.115843. Since f1 is less than f2, we can delete the interval [x2,3] by using the unimodality assumption.

153846.1)0.3(13

5* 0

22 L

F

FL

n

n

Page 60: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Solution:

Page 61: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Solution:

The third experiment is placed at x3=0+ (x2-x1)=1.846154-1.153846=0.692308, with the corresponding function value of f3=-0.291364. Since f1 is greater than f3, we can delete the interval [x1,x2]

Page 62: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Solution:

The next experiment is located at x4=0+ (x1-x3)=1.153846-0.692308=0.461538, with f4=-0.309811. Noting that f4 is less than f3, we can delete the interval [x3,x1]

Page 63: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Solution:

The location of the next experiment can be obtained as x5=0+ (x3-x4)=0.692308-0.461538=0.230770, with the corresponding objective function value of f5=-0.263678. Since f4 is less than f3, we can delete the interval [0,x5]

Page 64: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Solution:

The final experiment is positioned at x6=x5+ (x3-x4)=0.230770+(0.692308-0.461538)=0.461540 with f6=-0.309810. (Note that, theoretically, the value of x6 should be same as that of x4; however,it is slightly different from x4 due to the round off error). Since f6 > f4 , we delete the interval [x6, x3] and obtain the final interval of uncertainty as L6 = [x5, x6]=[0.230770,0.461540].

Page 65: Optimization Nonlinear programming: One dimensional minimization methods.

Fibonacci method

Solution:

The ratio of the final to the initial interval of uncertainty is

This value can be compared with

which states that if n experiments (n=6) are planned, a resolution no finer than 1/Fn= 1/F6=1/13=0.076923 can be expected from the method.

076923.00.3

230770.0461540.0

0

6

L

L

nn

n

FF

F

L

L 11

0

Page 66: Optimization Nonlinear programming: One dimensional minimization methods.

Golden Section Method

• The golden section method is same as the Fibonacci method except that in the Fibonacci method, the total number of experiments to be conducted has to be specified before beginning the calculation, whereas this is not required in the golden section method.

Page 67: Optimization Nonlinear programming: One dimensional minimization methods.

Golden Section Method

• In the Fibonacci method, the location of the first two experiments is determined by the total number of experiments, n.

• In the golden section method, we start with the assumption that we are going to conduct a large number of experiments.

• Of course, the total number of experiments can be decided during the computation.

Page 68: Optimization Nonlinear programming: One dimensional minimization methods.

Golden Section Method

• The intervals of uncertainty remaining at the end of different number of experiments can be computed as follows:

• This result can be generalized to obtain

0

2

1

01

1

20

23

01

2

lim

limlim

lim

LF

F

LF

F

F

FL

F

FL

LF

FL

N

N

N

N

N

N

N

NN

N

N

N

N

N

0

1

1lim LF

FL

k

N

N

Nk

Page 69: Optimization Nonlinear programming: One dimensional minimization methods.

Golden Section Method

Using the relation:

We obtain, after dividing both sides by FN-1,

By defining a ratio as

21 NNN FFF

1

limN

N

N F

F

1

2

1

1

N

N

N

N

F

F

F

F

Page 70: Optimization Nonlinear programming: One dimensional minimization methods.

Golden Section Method

The equation

can be expressed as:

that is:

012

1

2

1

1

N

N

N

N

F

F

F

F

11

Page 71: Optimization Nonlinear programming: One dimensional minimization methods.

Golden Section Method This gives the root =1.618, and hence the equation

yields:

In the equation

the ratios FN-2/FN-1 and FN-1/FN have been taken to be same for large values of N. The validity of this assumption can be seen from the table:

01

0

1

)618.0(1

LLL k

k

k

0

1

1lim LF

FL

k

N

N

Nk

0

2

13 lim L

F

FL

N

N

N

Value of N 2 3 4 5 6 7 8 9 10

Ratio FN-1/FN 0.5 0.667 0.6 0.625 0.6156 0.619 0.6177 0.6181 0.6184 0.618

Page 72: Optimization Nonlinear programming: One dimensional minimization methods.

Golden Section Method The ratio has a historical background. Ancient Greek architects

believed that a building having the sides d and b satisfying the relation

will be having the most pleasing properties. It is also found in Euclid’s geometry that the division of a line segment into two unequal parts so that the ratio of the whole to the larger part is equal to the ratio of the larger to the smaller, being known as the golden section, or golden mean-thus the term golden section method.

b

d

d

bd

Page 73: Optimization Nonlinear programming: One dimensional minimization methods.

Comparison of elimination methods

• The efficiency of an elimination method can be measured in terms of the ratio of the final and the initial intervals of uncertainty, Ln/L0

• The values of this ratio achieved in various methods for a specified number of experiments (n=5 and n=10) are compared in the Table below:

• It can be seen that the Fibonacci method is the most efficient method, followed by the golden section method, in reducing the interval of uncertainty.

Page 74: Optimization Nonlinear programming: One dimensional minimization methods.

Comparison of elimination methods

• A similar observation can be made by considering the number of experiments (or function evaluations) needed to achieve a specified accuracy in various methods.

• The results are compared in the Table below for maximum permissable errors of 0.1 and 0.01.

• It can be seen that to achieve any specified accuracy, the Fibonacci method requires the least number of experiments, followed by the golden section method.

Page 75: Optimization Nonlinear programming: One dimensional minimization methods.

Interpolation methods• The interpolation methods were originally developed as one

dimensional searches within multivariable optimization techniques, and are generally more efficient than Fibonacci-type approaches.

• The aim of all the one-dimensional minimization methods is to find *, the smallest nonnegative value of , for which the function

attains a local minimum.

)()( SX ff

Page 76: Optimization Nonlinear programming: One dimensional minimization methods.

Interpolation methods• Hence if the original function f (X) is expressible as an explicit function of

xi (i=1,2,…,n), we can readily write the expression for f () = f (X + S ) for any specified vector S, set

and solve the above equation to find * in terms of X and S.

• However, in many practical problems, the function f ( ) can not be expressed explicitly in terms of . In such cases, the interpolation methods can be used to find the value of *.

0)( d

df

Page 77: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

• The quadratic interpolation method uses the function values only; hence it is useful to find the minimizing step (*)

of functions f (X) for which the partial derivatives with respect to the variables xi are not available or difficult to

compute.

• This method finds the minimizing step length * in three stages:

– In the first stage, the S vector is normalized so that a step length of = 1 is acceptable.

– In the second stage, the function f () is approximated by a quadratic function h() and the minimum, , of h() is found. If

this is not sufficiently close to the true minimum *, a third stage is used.

– In this stage, a new quadratic function is used to approximate f (), and a new value of is found. This

procedure is continued until a that is sufficiently close to * is found.

*~

2)( cbah

*~*

~

Page 78: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

• Stage 1: In this stage, the S vector is normalized as follows: Find =max|si|,

where si is the ith component of S and divide each component of S by . Another

method of normalization is to find =(s12+ s2

2+ …+sn2 )1/2 and divide each

component of S by .

• Stage 2: Let

be the quadratic function used for approximating the function f (). It is worth noting at this point that a quadratic is the lowest-order polynomial for which a finite minimum can exist.

2)( cbah

Page 79: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

• Stage 2 cont’d: Let

that is,

The sufficiency condition for the minimum of h () is that

that is,

c > 0

02

cbd

dh

c

b

2

~*

0*~

2

2

d

hd

Page 80: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 2 cont’d:

• To evaluate the constants a, b, and c in the Equation

we need to evaluate the function f () at three points.

• Let =A, =B, and =C be the points at which the function f () is

evaluated and let fA, fB and fC be the corresponding function values,

that is,

2)( cbah

2

2

2

cCbCaf

cBbBaf

cAbAaf

C

B

A

Page 81: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 2 cont’d:

• The solution of

gives

2

2

2

cCbCaf

cBbBaf

cAbAaf

C

B

A

))()((

)()()(

))()((

)()()(

))()((

)()()(

222222

ACCBBA

BAfACfCBfc

ACCBBA

BAfACfCBfb

ACCBBA

ABABfCACAfBCBCfa

CBA

CBA

CBA

Page 82: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 2 cont’d:

• From equations

the minimum of h () can be obtained as:

provided that c is positive.

c

b

2

~*

))()((

)()()(

))()((

)()()( 222222

ACCBBA

BAfACfCBfc

ACCBBA

BAfACfCBfb

CBA

CBA

)]()()([2

)()()(

2

~ 222222*

BAfACfCBf

BAfACfCBf

c

b

CBA

CBA

Page 83: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation MethodStage 2 cont’d:

• To start with, for simplicity, the points, A, B and C can be chosen as 0, t, and 2t,

respectively, where t is a preselected trial step length.

• By this procedure, we can save one function evaluation since f A=f (=0) is generally

known from the previous iteration (of a multivariable search).

• For this case, the equations reduce to:

provided that

22

22

34

t

fffc

t

fffb

fa

BAC

CAB

A

tfff

fff

ACB

CAB

224

34*

02

22

t

fffc BAC

Page 84: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 2 cont’d:

• The inequality

can be satisfied if

i.e., the function value fB should be

smaller than the average value of

fA and fC as shown in figure.

BCA f

ff

2

02

22

t

fffc BAC

Page 85: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 2 cont’d:

• The following procedure can be used not only to satisfy the inequality

but also to ensure that the minimum lies in the interval 0 < < 2t.

1. Assuming that fA = f (=0) and the initial step size t0 are known, evaluate

the function f at =t0 and obtain f1=f (=t0 ).

BCA f

ff

2

*~ *

~

Page 86: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 2 cont’d:

B

CA fff

2

*~

Page 87: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 2 cont’d:2. If f1 > fA is realized as shown in figure, set fC = f1 and evaluate the function f at

= t0 /2 and using the equation

with t= t0 / 2.

*~

tfff

fff

ACB

CAB

224

34*

Page 88: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 2 cont’d:3. If f1 ≤ fA is realized as shown in figures, set fB = f1 and evaluate the function f at

= 2 t0 to find f2=f (= 2 t0 ). This may result in any of the equations shown in the figure.

Page 89: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 2 cont’d:4. If f2 turns out to be greater than f1 as shown in the figures, set fC= f2 and

compute according to the equation below with t= t0.

5. If f2 turns out to be smaller than f1, set new f1= f2 and t= 2t0 and repeat steps 2 to 4 until we are able to find .

*~

tfff

fff

ACB

CAB

224

34*

*~

Page 90: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 3: The found in Stage 2 is the minimum of the approximating quadratic h() and we have to make sure that this is sufficiently close to the true minimum * of f () before taking * = . Several tests are possible to ascertain this.

• One possible test is to compare with and consider a

sufficiently close good approximation if they differ not more than by a small amount. This criterion can be stated as:

*~

*~

*~

*)~

(f *)~

(h *~

1*)

~(

*)~

(*)~

(

f

fh

Page 91: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 3 cont’d:

Another possible test is to examine whether df /d is close to zero at . Since the derivatives of f are not used in this method, we can use a finite-difference formula for df /d and use the criterion:

to stop the procedure. 1 and 2 are small numbers to be specified depending on the accuracy desired.

*~

2*

~2

*)~

*~

(*)~

*~

(

ff

Page 92: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 3 cont’d: If the convergence criteria stated in equations

are not satisfied, a new quadratic function

is used to approximate the function f ().

• To evaluate the constants a’, b’ and c’, the three best function values of the current f A=f (=0), f B=f (=t0), f C=f (=2t0), and are to be used.

• This process of trying to fit another polynomial to obtain a better approximation to is known as refitting the polynomial.

2*

~2

*)~

*~

(*)~

*~

(

ff

1*)

~(

*)~

(*)~

(

f

fh

2)( cbah

*)~

(~ ff

*~

Page 93: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 3 cont’d: For refitting the quadratic, we consider all possible situations and select the best three points of the present A, B, and C, and . There are four possibilities. The best three points to be used in refitting in each case are given in the table.

*~

Page 94: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation MethodStage 3 cont’d:

Page 95: Optimization Nonlinear programming: One dimensional minimization methods.

Quadratic Interpolation Method

Stage 3 cont’d: A new value of is computed by using the general formula:

If this does not satisfy the convergence criteria stated in

A new quadratic has to be refitted according to the scheme outlined in the table.

*~

)]()()([2

)()()(

2

~ 222222*

BAfACfCBf

BAfACfCBf

c

b

CBA

CBA

*~

1*)

~(

*)~

(*)~

(

f

fh2

*~

2

*)~

*~

(*)~

*~

(

ff

Page 96: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

• The cubic interpolation method finds the minimizing step length in four stages. It makes use of the derivative of the function f:

• The first stage normalizes the S vector so that a step size =1 is acceptable.

• The second stage establishes bounds on *, and the third stage finds the value of * by approximating f () by a cubic polynomial h ().

• If the found in stage 3 does not satisfy the prescribed convergence criteria, the cubic polynomial is refitted in the fourth stage.

)()()( SXSSX

ffd

d

d

dff T

*~

*~

Page 97: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

• Stage 1: Calculate =max|si|, where |si | is the absolute value of

the ith component of S and divide each component of S by .

Another method of normalization is to find =(s12+ s2

2+ …+sn2 )1/2 . and

divide each component of S by .

• Stage 2:To establish lower and upper bounds on the optimal step

size , we need to find two points A and B at which the slope df /

d has different signs. We know that at = 0,

since S is presumed to be a direction of descent.(In this case, the

direction between the steepest descent and S will be less than 90.

*~

*~

00

(X)S fd

df T

Page 98: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

• Stage 2 cont’d: Hence to start with, we can take A=0 and try

to find a point =B at which the slope df / d is positive. Point B

can be taken as the first value out of t0, 2t0, 4t0, 8t0…at which f’

is nonnegative, where t0 is a preassigned initial step size. It then

follows that * is bounded in the interval A ≤ * ≤ B.

Page 99: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

• Stage 3: If the cubic equation

is used to approximate the function f () between points A and B,

we need to find the values f A=f (=A), f A’=df/d (=A), f B=f

(=B), f B’=df /d (=B) in order to evaluate the constants, a,b,c,

and d in the above equation. By assuming that A ≠0, we can derive

a general formula for . From the above equation, we have:

32)( dcbah

2

2

32

32

32

32

dBcBbf

dAcAbf

dBcBbBaf

dAcAbAaf

B

A

B

A

*

~

Page 100: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

• Stage 3 cont’d: The equation

can be solved to find the constants as:

2

2

32

32

32

32

dBcBbf

dAcAbf

dBcBbBaf

dAcAbAaf

B

A

B

A

BABA

BABA

BAA

ffAB

ff

ffZfAfBZBAc

ABZfAfBdAcAbAfa

)3(Z

where)2(B)-3(A

1dand])[(

B)-(A

1

)2(B)-(A

1bwith

22

222

32

Page 101: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

• Stage 3 cont’d: The necessary condition for the minimum

of h() given by the equation

is that

32)( dcbah

d

bdcc

dcbd

dh

3

)3(*

~

isthat

032

2/12

2

Page 102: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

• Stage 3 cont’d: The application of the sufficiency

condition for the minimum of h() leads to the relation:

0*~

62*

~2

2

dcd

hd

Page 103: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

• Stage 3 cont’d: By substituting the expressions for b,c, and d given

by the equations

into

d

bdcc

3

)3(*

~ 2/12

)2(B)-3(A

1d

])[(B)-(A

1and)2(

B)-(A

1b

2

222

2

BA

BABA

ffZ

fAfBZBAcABZfAfB

0*~

62*

~2

2

dcd

hd

Page 104: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

Stage 3 cont’d: We obtain:

0)(2

)23)((2

))(2)((2

)(

where

)(2

*~

22

2/12

BA

ABA

ABA

BA

BA

A

ffAB

ZfZfZfAB

QZfffZAB

ffZQ

ABZff

QZfA

Page 105: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

Stage 3 cont’d: By specializing all the equations below:

BABA

BABA

BAA

ffAB

ff

ffZfAfBZBAc

ABZfAfBdAcAbAfa

)3(Z

where)2(B)-3(A

1dand])[(

B)-(A

1

)2(B)-(A

1bwith

22

222

32

d

bdcc

dcbd

dh

3

)3(*

~

isthat

032

2/12

2

0*~

62*

~2

2

dcd

hd 0)(2

)23)((2

))(2)((2

)(

where

)(2

*~

22

2/12

BA

ABA

ABA

BA

BA

A

ffAB

ZfZfZfAB

QZfffZAB

ffZQ

ABZff

QZfA

Page 106: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

Stage 3 cont’d:

For the case where A=0, we obtain:

BA

BA

BABA

A

BAAAA

ffB

ffZ

ffZZff

QZfB

ffZB

dfZB

cfbfa

)(3

0)(Q 2

*~

)2(3

1 )(

1

2/12

2

Page 107: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

Stage 3 cont’d: The two values of in the equations:

correspond to the two possibilities for the vanishing of h’() [i.e., at a maximum of

h() and at a minimum]. To avoid imaginary values of Q, we should ensure the

satisfaction of the condition

in equation

Zff

QZfB

ABZff

QZfA

BA

A

BA

A

2*

~

)(2

*~

*~

02 BA ffZ

2/12 )( BA ffZQ

Page 108: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

Stage 3 cont’d:

This inequality is satisfied automatically since A and B are

selected such that f’A <0 and f’B ≥0. Furthermore, the sufficiency

condition when A=0 requires that Q > 0, which is already

satisfied. Now, we compute using

and proceed to the next stage.

*~

Zff

QZfB

BA

A

2*

Page 109: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

Stage 4: The value of found in stage 3 is the true minimum of h() and may not

be close to the minimum of f (). Hence the following convergence criteria

can be used before choosing

where 1 and 2 are small numbers whose values depend on the

accuracy desired.

*~

*

*~

2*~

*~

1*)

~(

*)~

(*)~

(

fd

df

f

fh

TS

Page 110: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

Stage 4: The criterion

can be stated in nondimensional form as

If the criteria in the above equation and the equation

are not satisfied, a new cubic equation can be used to approximate f () as follows:

2*~

*~

fd

df TS

2

*~

f

f

S

S

1*)

~(

*)~

(*)~

(

f

fh

32)( dcbah

Page 111: Optimization Nonlinear programming: One dimensional minimization methods.

Cubic Interpolation Method

Stage 4 cont’d: The constants a’, b’, c’ and d’ can be evaluated by using the function

and derivative values at the best two points out of the three points currently available: A,

B, and . Now the general formula given by the equation:

is to be used for finding the optimal step size . If , the new points A and B

are taken as and B, respectively; otherwise if , the new points A and B

are taken as A and and equations

and

are again used to test the convergence of . If convergence is achieved, is

taken as * and the procedure is stopped. Otherwise, the entire procedure is repeated

until the desired convergence is achieved.

)(2

*~

ABZff

QZfA

BA

A

*~

*~

0*)~

( f

*~

0*)~

( f

*~

1*)

~(

*)~

(*)~

(

f

fh2

*~

f

f

S

S

*~

*~

Page 112: Optimization Nonlinear programming: One dimensional minimization methods.

Example

Find the minimum of

By the cubic interpolation method.

Solution: Since this problem has not arisen during a multivariable optimization process, we can skip stage 1. We take A=0, and find that:

To find B at which df/d is nonnegative, we start with t0=0.4 and evaluate the derivative at t0, 2t0, 4t0,……This gives

5205 35 f

02020155)0(0

24

Ad

df

Page 113: Optimization Nonlinear programming: One dimensional minimization methods.

Example

This gives:

Thus, we find that:

A=0.0, fA = 5.0, f’A = -20.0,

B=3.2, fB = 113.0, f’B = 350.688,

A < * <B

688.3500.20)2.3(15)2.3(5)2.38(

632.250.20)6.1(15)6.1(5)6.14(

552.270.20)8.0(15)8.0(5)8.02(

272.220.20)4.0(15)4.0(5)4.0(

240

240

240

240

tf

tf

tf

tf

Page 114: Optimization Nonlinear programming: One dimensional minimization methods.

Example

Iteration I: To find the value of , and to test the convergence criteria, we first compute Z and Q as:

*~

1.84*~

:have we value,negative thediscardingBy

0.1396-or 88.1459.176350.68820.0-

244.0229.58820.0-3.2*

~

Hence

0.244)]688.350)(0.20(588.229[

588.229688.3500.202.3

)0.1130.5(3

2/12

Q

Z

Page 115: Optimization Nonlinear programming: One dimensional minimization methods.

Example

Convergence criterion: If is close to the true minimum *, then

should be approximately zero. Since

Since this is not small, we go to the next iteration or refitting. As ,

we take A= and

*~

ddff /*)~

(*)~

(

20155 24 f

0.1320)84.1(15)84.1(5*)~

( 24 f

0*)~

( f

*~

BA

ffB

ff

ff

BB

AA

A

*~

688.350 ,0.113 ,2.3

0.13 ,70.41 1.84,A

Thus,

70.415)84.1(20)84.1(5)84.1(*)~

( 35

Page 116: Optimization Nonlinear programming: One dimensional minimization methods.

Example Iteration 2

BA

ffB

ff

f

f

f

CriteriaeConvergenc

Q

Z

BB

AA

B

*

35.5,90.42,05.2

00.13,70.41 1.84,A

Thus,

90.420.5)05.2(0.20)05.2(0.5)05.2(

and ]0*)~

( [as 2.05*~

Bwith iteration next the togo welarge, is value thisSince

35.50.20)05.2(0.15)05.2(0.5*)~

(

05.2)84.12.3(624.6688.35013.0-

5.673.312-13.0-1.84*

~

Hence,

5.67)]688.350)(0.13()312.3[(

312.3688.35000.13)84.120.3(

)0.1137.41(3

35

24

2/12

Page 117: Optimization Nonlinear programming: One dimensional minimization methods.

Example Iteration 3

Convergence criterion:

Assuming that this value is close to zero, we can stop the iterative process and take

0086.2)84.105.2(98.1835.513.0-

61.129.4913.0-1.84*

~

Hence

61.12)]35.5)(0.13()49.9[(

49.935.513)84.105.2(

)90.427.41(3

2/12

Q

Z

855.00.20)0086.2(0.15)0086.2(0.5*)~

( 24 f

0086.2*~

*

Page 118: Optimization Nonlinear programming: One dimensional minimization methods.

Direct root methods

The necessary condition for f () to have a minimum of * is that

Three root finding methods will be considered here:

• Newton method

• Quasi-Newton method

• Secant methods

0)( f

Page 119: Optimization Nonlinear programming: One dimensional minimization methods.

Newton method

Consider the quadratic approximation of the function f () at = i using the Taylor’s series expansion:

By setting the derivative of this equation equal to zero for the minimum of f (), we obtain:

If i denotes an approximation to the minimum of f (), the above equation can be rearranged to obtain an improved approximation as:

2))((2

1))(()()( iiiii ffff

0))(()()( iii fff

Page 120: Optimization Nonlinear programming: One dimensional minimization methods.

Newton method

Thus, the Newton method is equivalent to using a quadratic approximation for the function f () and applying the necessary conditions.

The iterative process given by the above equation can be assumed to have converged when the derivative, f’(i+1) is close to zero:

)(

)(1

i

iii f

f

quantity small a is where

)( 1

if

Page 121: Optimization Nonlinear programming: One dimensional minimization methods.

Newton method

• FIGURE 5.18a sayfa 318

Page 122: Optimization Nonlinear programming: One dimensional minimization methods.

Newton method

• If the starting point for the iterative process is not close to the true solution *, the Newton iterative process may diverge as illustrated:

Page 123: Optimization Nonlinear programming: One dimensional minimization methods.

Newton method

Remarks:

• The Newton method was originally developed by Newton for solving nonlinear equations and later refined by Raphson, and hence the method is also known as Newton-Raphson method in the literature of numerical analysis.

• The method requires both the first- and second-order derivatives of f ().

Page 124: Optimization Nonlinear programming: One dimensional minimization methods.

Example

Find the minimum of the function:

Using the Newton-Raphson method with the starting point 1=0.1. Use =0.01 in the equation

for checking the convergence.

1

tan65.01

75.065.0)( 1

2

f

quantity small a is where

)( 1

if

Page 125: Optimization Nonlinear programming: One dimensional minimization methods.

Example

Solution: The first and second derivatives of the function f () are given by :

Iteration 1

1=0.1, f (1) = -0.188197, f’ (1) = -0.744832, f’’ (1)=2.68659

Convergence check: | f (2)| =|-0.138230| >

32

2

222

2

32

2

1222

)1(

2.38.2

1

65.0

)1(

)1(65.0

)1(

)31(5.1)(

1tan65.0

1

65.0

)1(

5.1)(

f

f

377241.0)(

)(

1

112

f

f

Page 126: Optimization Nonlinear programming: One dimensional minimization methods.

Example Solution cont’d:

Iteration 2

f (2 ) = -0.303279, f’ (2) = -0.138230, f’’ (2) = 1.57296

Convergence check: | f’(3)| =|-0.0179078| >

Iteration 3

f (3 ) = -0.309881, f’ (3) = -0.0179078, f’’ (3) = 1.17126

Convergence check: | f’(4)| =|-0.0005033| <

Since the process has converged, the optimum solution is taken as * 4=0.480409

465119.0)(

)(

2

223

f

f

480409.0)(

)(

3

334

f

f

Page 127: Optimization Nonlinear programming: One dimensional minimization methods.

Quasi-Newton Method If the function minimized f () is not available in closed form or is

difficult to differentiate, the derivatives f’ () and f’’ () in the equation

can be approximated by the finite difference formula as:

where is a small step size.

)(

)(1

i

iii f

f

2

)()(2)()(

2

)()()(

iiii

iii

ffff

fff

Page 128: Optimization Nonlinear programming: One dimensional minimization methods.

Quasi-Newton Method Substitution of:

into

leads to

2

)()(2)()(

2

)()()(

iiii

iii

ffff

fff

)(

)(1

i

iii f

f

)]()(2)([2

)]()([1

iii

iiii fff

ff

Page 129: Optimization Nonlinear programming: One dimensional minimization methods.

Quasi-Newton Method

This iterative process is known as the quasi-Newton method. To test the convergence of the iterative process, the following criterion can be used:

|f'(λi+1)| = |[f(λi+1 + Δλ) - f(λi+1 - Δλ)] / (2Δλ)| ≤ ε

where a central difference formula has been used for evaluating the derivative of f and ε is a small quantity.

Remarks: The equation

λi+1 = λi - Δλ [f(λi + Δλ) - f(λi - Δλ)] / (2 [f(λi + Δλ) - 2 f(λi) + f(λi - Δλ)])

requires the evaluation of the function at the points λi + Δλ and λi - Δλ in addition to λi in each iteration.

Page 130: Optimization Nonlinear programming: One dimensional minimization methods.

Example

Find the minimum of the function

f(λ) = 0.65 - 0.75 / (1 + λ²) - 0.65 λ tan⁻¹(1/λ)

using the quasi-Newton method with the starting point λ1 = 0.1 and the step size Δλ = 0.01 in the central difference formulas. Use ε = 0.01 in the criterion

|f'(λi+1)| = |[f(λi+1 + Δλ) - f(λi+1 - Δλ)] / (2Δλ)| ≤ ε

for checking the convergence.

Page 131: Optimization Nonlinear programming: One dimensional minimization methods.

Example

Solution: Iteration 1

λ1 = 0.1, Δλ = 0.01, f(λ1) = -0.188197, f(λ1 + Δλ) = -0.195512, f(λ1 - Δλ) = -0.180615

λ2 = λ1 - Δλ [f(λ1 + Δλ) - f(λ1 - Δλ)] / (2 [f(λ1 + Δλ) - 2 f(λ1) + f(λ1 - Δλ)]) = 0.377882

Convergence check:

f'(λ2) ≈ [f(λ2 + Δλ) - f(λ2 - Δλ)] / (2Δλ) = -0.137300, |f'(λ2)| > ε

Page 132: Optimization Nonlinear programming: One dimensional minimization methods.

Example

Solution: Iteration 2

f(λ2) = -0.303368, f(λ2 + Δλ) = -0.304662, f(λ2 - Δλ) = -0.301916

λ3 = λ2 - Δλ [f(λ2 + Δλ) - f(λ2 - Δλ)] / (2 [f(λ2 + Δλ) - 2 f(λ2) + f(λ2 - Δλ)]) = 0.465390

Convergence check:

f'(λ3) ≈ [f(λ3 + Δλ) - f(λ3 - Δλ)] / (2Δλ) = -0.017700, |f'(λ3)| > ε

Page 133: Optimization Nonlinear programming: One dimensional minimization methods.

Example

Solution: Iteration 3

f(λ3) = -0.309885, f(λ3 + Δλ) = -0.310004, f(λ3 - Δλ) = -0.309650

λ4 = λ3 - Δλ [f(λ3 + Δλ) - f(λ3 - Δλ)] / (2 [f(λ3 + Δλ) - 2 f(λ3) + f(λ3 - Δλ)]) = 0.480600

Convergence check:

f'(λ4) ≈ [f(λ4 + Δλ) - f(λ4 - Δλ)] / (2Δλ) = -0.000350, |f'(λ4)| < ε

Since the process has converged, we take the optimum solution as λ* ≈ λ4 = 0.480600.
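A minimal Python sketch of this derivative-free iteration (not part of the original slides) is shown below; quasi_newton_1d is an illustrative name, and the objective f(λ) is the example function used throughout this section.

import math

def f(lam):
    # Example objective: f(lam) = 0.65 - 0.75/(1 + lam^2) - 0.65*lam*atan(1/lam)
    return 0.65 - 0.75 / (1 + lam**2) - 0.65 * lam * math.atan(1 / lam)

def quasi_newton_1d(f, lam, dlam=0.01, eps=0.01, max_iter=50):
    """Newton-type update with f' and f'' replaced by central difference estimates."""
    for _ in range(max_iter):
        fp, f0, fm = f(lam + dlam), f(lam), f(lam - dlam)
        lam_new = lam - dlam * (fp - fm) / (2.0 * (fp - 2.0 * f0 + fm))
        # convergence test: central difference estimate of f'(lam_{i+1})
        deriv = (f(lam_new + dlam) - f(lam_new - dlam)) / (2.0 * dlam)
        if abs(deriv) <= eps:
            return lam_new
        lam = lam_new
    return lam

print(quasi_newton_1d(f, lam=0.1))   # expected to land near 0.48, as in the iterations above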

Page 134: Optimization Nonlinear programming: One dimensional minimization methods.

Secant method

• The secant method uses an equation similar to

f'(λ) ≈ f'(λi) + f''(λi)(λ - λi) = 0

in the form

f'(λ) ≈ f'(λi) + s(λ - λi) = 0

where s is the slope of the line connecting the two points (A, f'(A)) and (B, f'(B)), and A and B denote two different approximations to the correct solution λ*. The slope s can be expressed as

s = [f'(B) - f'(A)] / (B - A)

Page 135: Optimization Nonlinear programming: One dimensional minimization methods.

Secant method

The equation

f'(λ) ≈ f'(λi) + s(λ - λi) = 0

approximates the function f'(λ) between A and B as a linear equation (secant), and hence the solution of the above equation gives the new approximation to the root of f'(λ) as

λi+1 = λi - f'(λi) / s = A - f'(A)(B - A) / [f'(B) - f'(A)]

The iterative process given by the above equation is known as the secant method. Since the slope of the secant approaches the second derivative of f(λ) at A as B approaches A, the secant method can also be considered a quasi-Newton method.

Page 136: Optimization Nonlinear programming: One dimensional minimization methods.

Secant method

Page 137: Optimization Nonlinear programming: One dimensional minimization methods.

Secant method

• It can also be considered as a form of elimination technique, since part of the interval, (A, λi+1) in the figure, is eliminated in every iteration.

The iterative process can be implemented by using the following step-by-step procedure:

1. Set λ1 = A = 0 and evaluate f'(A). The value of f'(A) will be negative. Assume an initial trial step length t0.

2. Evaluate f'(t0).

3. If f'(t0) < 0, set A = λi = t0, f'(A) = f'(t0), new t0 = 2t0, and go to step 2.

4. If f'(t0) ≥ 0, set B = t0, f'(B) = f'(t0), and go to step 5.

5. Find the new approximate solution of the problem as:

λi+1 = A - f'(A)(B - A) / [f'(B) - f'(A)]

Page 138: Optimization Nonlinear programming: One dimensional minimization methods.

Secant method

6. Test for convergence:

|f'(λi+1)| ≤ ε

where ε is a small quantity. If this condition is satisfied, take λ* ≈ λi+1 and stop the procedure. Otherwise, go to step 7.

7. If f'(λi+1) ≥ 0, set new B = λi+1, f'(B) = f'(λi+1), i = i + 1, and go to step 5.

8. If f'(λi+1) < 0, set new A = λi+1, f'(A) = f'(λi+1), i = i + 1, and go to step 5.
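A minimal Python sketch of this step-by-step procedure (not part of the original slides) follows; secant_min and df are illustrative names, df is the derivative f'(λ) supplied as a callable, and the routine assumes, as in step 1, that f'(0) < 0.

def secant_min(df, t0=0.1, eps=0.01, max_iter=100):
    """Bracket the root of f' by doubling the trial step, then refine with secant updates."""
    A, fA = 0.0, df(0.0)        # step 1: lam_1 = A = 0; f'(A) is assumed negative
    t, ft = t0, df(t0)          # step 2
    while ft < 0:               # step 3: f'(t0) < 0, so double the trial step
        A, fA = t, ft
        t *= 2.0
        ft = df(t)
    B, fB = t, ft               # step 4: f'(t0) >= 0
    lam = A
    for _ in range(max_iter):
        lam = A - fA * (B - A) / (fB - fA)   # step 5: secant update
        flam = df(lam)
        if abs(flam) <= eps:    # step 6: convergence test |f'(lam_{i+1})| <= eps
            return lam
        if flam >= 0.0:         # step 7: replace B
            B, fB = lam, flam
        else:                   # step 8: replace A
            A, fA = lam, flam
    return lam

Note that for the example function used in this section, f'(0) has to be evaluated as the limit -0.65(π/2) ≈ -1.02102, since the closed-form expression 0.65 tan⁻¹(1/λ) is undefined at λ = 0.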

Page 139: Optimization Nonlinear programming: One dimensional minimization methods.

Secant method

Remarks:

1. The secant method is identical to assuming a linear equation for f'(λ). This implies that the original function, f(λ), is approximated by a quadratic equation.

2. In some cases we may encounter a situation where the function f'(λ) varies very slowly with λ. This situation can be identified by noticing that the point B remains unaltered for several consecutive refits. Once such a situation is suspected, the convergence process can be improved by taking the next value of λi+1 as (A + B)/2 instead of computing it from

λi+1 = A - f'(A)(B - A) / [f'(B) - f'(A)]

Page 140: Optimization Nonlinear programming: One dimensional minimization methods.

Secant method: Example

Find the minimum of the function

f(λ) = 0.65 - 0.75 / (1 + λ²) - 0.65 λ tan⁻¹(1/λ)

using the secant method with an initial step size of t0 = 0.1, λ1 = 0.0, and ε = 0.01.

Solution

λ1 = A = 0.0, t0 = 0.1, f'(A) = -1.02102, B = A + t0 = 0.1, f'(B) = -0.744832

Since f'(B) < 0, we set new A = 0.1, f'(A) = -0.744832, t0 = 2(0.1) = 0.2, B = λ1 + t0 = 0.2, and compute f'(B) = -0.490343. Since f'(B) < 0, we set new A = 0.2, f'(A) = -0.490343, t0 = 2(0.2) = 0.4, B = λ1 + t0 = 0.4, and compute f'(B) = -0.103652. Since f'(B) < 0, we set new A = 0.4, f'(A) = -0.103652, t0 = 2(0.4) = 0.8, B = λ1 + t0 = 0.8, and compute f'(B) = +0.180800. Since f'(B) > 0, we proceed to find λ2.

Page 141: Optimization Nonlinear programming: One dimensional minimization methods.

Secant method: Iteration 1

Since A = λ1 = 0.4, f'(A) = -0.103652, B = 0.8, f'(B) = +0.180800, we compute

λ2 = A - f'(A)(B - A) / [f'(B) - f'(A)] = 0.545757

Convergence check: |f'(λ2)| = |+0.0105789| > ε

Iteration 2

Since f'(λ2) = +0.0105789 > 0, we set new A = 0.4, f'(A) = -0.103652, B = λ2 = 0.545757, f'(B) = f'(λ2) = 0.0105789, and compute

λ3 = A - f'(A)(B - A) / [f'(B) - f'(A)] = 0.490632

Convergence check: |f'(λ3)| = |-0.00151235| < ε

Since the process has converged, the optimum solution is given by λ* ≈ λ3 = 0.490632.

Page 142: Optimization Nonlinear programming: One dimensional minimization methods.

Practical Considerations

Sometimes the direct root methods, such as the Newton, quasi-Newton, and secant methods, or the interpolation methods, such as the quadratic and cubic interpolation methods, may:

• be very slow to converge,
• diverge, or
• predict the minimum of the function f(λ) outside the initial interval of uncertainty, especially when the interpolating polynomial is not representative of the variation of the function being minimized.

In such cases, we can use the Fibonacci or the golden section method to find the minimum.

Page 143: Optimization Nonlinear programming: One dimensional minimization methods.

Practical Considerations

In some problems, it might prove to be more efficient to combine several techniques. For example:

• The unrestricted search with an accelerated step size can be used to bracket the minimum and then the Fibonacci or the golden section method can be used to find the optimum point.

• In some cases, the Fibonacci or the golden section method can be used in conjunction with an interpolation method.

Page 144: Optimization Nonlinear programming: One dimensional minimization methods.

Comparison of methods

• The Fibonacci method is the most efficient elimination technique in finding the minimum of a function if the initial interval of uncertainty is known.

• In the absence of the initial interval of uncertainty, the quadratic interpolation method or the quasi-Newton method is expected to be more efficient when the derivatives of the function are not available.

• When the first derivatives of the function being minimized are available, the cubic interpolation method or the secant method is expected to be very efficient.

• On the other hand, if both the first and the second derivatives of the function are available, the Newton method will be the most efficient one in finding the optimal step length, λ*.

