
ch_9_1_2


Transcript
  • 7/29/2019 ch_9_1_2

    1/46

    Page 1

ENGINEERING OPTIMIZATION: Methods and Applications

    A. Ravindran, K. M. Ragsdell, G. V. Reklaitis

    Book Review


    Page 2

    Chapter 9: Direction Generation Methods

    Based on Linearization

    Part 1: Ferhat Dikbiyik

Part 2: Mohammad F. Habib

    Review Session

    July 30, 2010


    Page 3

The linearization-based algorithms in Ch. 8 use LP solution techniques to specify the sequence of intermediate solution points: the linearized subproblem at the current point x^(t) is updated, and the exact location of the next iterate x^(t+1) is determined by LP.

The linearized subproblem cannot be expected to give a very good estimate of either the boundaries of the feasible region or the contours of the objective function.


    Page 4

Good Direction Search

Rather than relying on the admittedly inaccurate linearization to define the precise location of a point, it is more realistic to use the linear approximations only to determine a locally good direction for search.


    Page 5

    Outline

    9.1 Method of Feasible Directions

9.2 Simplex Extensions for Linearly Constrained Problems

    9.3 Generalized Reduced Gradient Method

    9.4 Design Application


    Page 6

    9.1 Method of Feasible Directions

    G. Zoutendijk

Methods of Feasible

    Directions,

    Elsevier, Amsterdam,

    1960


    Page 7

    Preliminaries

    Source: Dr. Muhammad Al-Slamah, Industrial Engineering, KFUPM


    Page 8

    Preliminaries

    Source: Dr. Muhammad Al-Slamah, Industrial Engineering, KFUPM


    Page 9

    9.1 Method of Feasible Directions

Suppose we have a starting point that satisfies all constraints, and suppose that a certain subset of these constraints is binding at that point.


    Page 10

    9.1 Method of Feasible Directions

Suppose x̄ is a feasible point, and define x = x̄ + d for some direction d. The first-order Taylor approximation of f(x) is given by

f(x) ≅ f(x̄) + ∇f(x̄)ᵀ(x − x̄)

In order for f(x) < f(x̄), we have to have

∇f(x̄)ᵀ d < 0

A direction d satisfying this relationship is called a descent direction.


    Page 11

    9.1 Method of Feasible Directions

    Source: Dr. Muhammad Al-Slamah, Industrial Engineering, KFUPM

This relationship dictates that the angle between d and ∇f(x̄) be greater than 90° and less than 270°.


    Page 12

    9.1 Method of Feasible Directions

The first-order Taylor approximation for a constraint g(x) ≥ 0 is

g(x) ≅ g(x̄) + ∇g(x̄)ᵀ(x − x̄)

and with the assumption g(x̄) = 0 (because it is binding), in order for x to be feasible we must have

∇g(x̄)ᵀ d ≥ 0

Any direction d satisfying this relationship is called a feasible direction.


    Page 13

    9.1 Method of Feasible Directions

    Source: Dr. Muhammad Al-Slamah, Industrial Engineering, KFUPM

This relationship dictates that the angle between d and ∇g(x̄) be between 0° and 90°.


    Page 14

    9.1 Method of Feasible Directions

    Source: Dr. Muhammad Al-Slamah, Industrial Engineering, KFUPM

In order for x to solve the inequality-constrained problem, the direction d has to be both a descent direction and a feasible direction.
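Both requirements are simple sign tests on gradient inner products. A minimal sketch (the gradients and direction below are illustrative, not from the book's example):

```python
import numpy as np

def is_descent(grad_f, d):
    # Descent: the directional derivative of f along d must be negative.
    return float(np.dot(grad_f, d)) < 0.0

def is_feasible_direction(binding_grads, d):
    # Feasible (for binding constraints g_j(x) >= 0): to first order,
    # moving along d must not decrease any binding constraint.
    return all(float(np.dot(g, d)) >= 0.0 for g in binding_grads)

# Illustrative gradients at some feasible point x.
grad_f = np.array([2.0, 1.0])        # objective gradient
binding = [np.array([1.0, 0.0])]     # one binding constraint gradient
d = np.array([-1.0, 1.0])

print(is_descent(grad_f, d))               # True:  (2,1).(-1,1) = -1 < 0
print(is_feasible_direction(binding, d))   # False: (1,0).(-1,1) = -1 < 0
```

Here d improves the objective but would leave the feasible region, so the method must keep searching for a direction passing both tests.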


    Page 15

    9.1 Method of Feasible Directions

Zoutendijk's basic idea is, at each stage of the iteration, to determine a vector d that will be both a feasible direction and a descent direction. This is accomplished numerically by finding a normalized direction vector d and a scalar parameter θ > 0 such that

∇f(x)ᵀ d ≤ −θ and ∇g_j(x)ᵀ d ≥ θ for each binding constraint j

and θ is as large as possible.


    Page 16

    9.1 Method of Feasible Directions

    Source: Dr. Muhammad Al-Slamah, Industrial Engineering, KFUPM


    Page 17

    9.1.1 Basic Algorithm

The active constraint set is defined as

I(x, ε) = { j : 0 ≤ g_j(x) ≤ ε }

for some small ε > 0.
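With the constraint values g_j(x) ≥ 0 held in a vector, the ε-active set is one comparison per constraint. A minimal sketch (function name and sample values are illustrative):

```python
def active_set(g_values, eps):
    # Indices j with 0 <= g_j(x) <= eps: constraints binding, or nearly
    # binding, at the current feasible point.
    return [j for j, gj in enumerate(g_values) if 0.0 <= gj <= eps]

# Illustrative constraint values at a feasible point.
print(active_set([0.0, 0.05, 2.3], eps=0.1))   # [0, 1]
```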


    Page 18

    9.1.1 Basic Algorithm

Step 1. Solve the linear programming problem

Maximize θ
Subject to ∇f(x^(t))ᵀ d ≤ −θ
∇g_j(x^(t))ᵀ d ≥ θ for j ∈ I(x^(t), ε)
−1 ≤ d_i ≤ 1, i = 1, …, N

Label the solution d^(t) and θ^(t).


    Page 19

    9.1.1 Basic Algorithm

Step 2. If θ^(t) ≤ 0, the iteration terminates, since no further improvement is possible. Otherwise, determine

α_max = max{ α : g_j(x^(t) + α d^(t)) ≥ 0 for all j }

If no finite α_max exists, set α_max = ∞.


    Page 20

    9.1.1 Basic Algorithm

Step 3. Find α^(t) such that

f(x^(t) + α^(t) d^(t)) = min { f(x^(t) + α d^(t)) : 0 ≤ α ≤ α_max }

Set x^(t+1) = x^(t) + α^(t) d^(t) and continue.
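Steps 1-3 can be strung together in a short sketch. It uses SciPy's linprog for the direction-finding LP and a crude grid line search; the quadratic objective, the single constraint, and all names are illustrative assumptions, not the book's example.

```python
import numpy as np
from scipy.optimize import linprog

def grad(fun, x, h=1e-6):
    # Forward-difference gradient (adequate for a sketch).
    f0 = fun(x)
    return np.array([(fun(x + h * e) - f0) / h for e in np.eye(len(x))])

def direction_lp(gf, active_grads):
    # Step 1: maximize theta s.t. gf.d <= -theta, gj.d >= theta, -1 <= d <= 1.
    n = len(gf)
    c = np.zeros(n + 1); c[-1] = -1.0        # variables [d, theta]; min -theta
    A_ub = [np.append(gf, 1.0)]              # gf.d + theta <= 0
    A_ub += [np.append(-gj, 1.0) for gj in active_grads]  # -gj.d + theta <= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.zeros(len(A_ub)),
                  bounds=[(-1, 1)] * n + [(None, None)])
    return res.x[:n], res.x[-1]

def feasible_directions(fun, cons, x, eps=1e-3, iters=20):
    for _ in range(iters):
        act = [grad(g, x) for g in cons if 0.0 <= g(x) <= eps]
        d, theta = direction_lp(grad(fun, x), act)
        if theta <= 1e-6:                    # Step 2: no improving direction
            break
        # Step 2 (cont.): largest step (capped at 2.0 here) keeping all
        # constraints satisfied.
        a_max = max(a for a in np.linspace(0.0, 2.0, 201)
                    if all(g(x + a * d) >= 0.0 for g in cons))
        # Step 3: grid line search over [0, a_max], then move.
        grid = np.linspace(0.0, a_max, 201)
        x = x + grid[np.argmin([fun(x + a * d) for a in grid])] * d
    return x

f  = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2   # illustrative objective
g1 = lambda x: 2.0 - x[0] - x[1]                   # constraint g1(x) >= 0
x_star = feasible_directions(f, [g1], np.array([0.0, 0.0]))
print(np.round(x_star, 2))                         # converges near [1, 1]
```

The constrained minimum of this illustrative problem lies at (1, 1) on the boundary x1 + x2 = 2; the sketch stops when the direction LP reports θ ≈ 0, i.e. when no direction is simultaneously descent and feasible.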


    Page 21

    Example 9.1

Here g_1 is the only binding constraint, so it alone enters the active set.


    Page 22

    Example 9.1

We must search along the ray to find the point at which the boundary of the feasible region is intersected.


    Page 23

    Example 9.1

Since the constraint value remains positive for all α ≥ 0, that constraint is not violated as α is increased. To determine the point at which the boundary will be intersected, we solve for the limiting α. Finally, we search over this range of α to determine the optimum of the objective along the ray.


    Page 24

    Example 9.1


    Page 25

    9.1.2 Active Constraint Sets and Jamming

    Example 9.2


    Page 26

    9.1.2 Active Constraint Sets and Jamming

The active constraint set used in the basic form of the feasible direction algorithm, namely the set of ε-binding constraints I(x, ε), can not only slow down the progress of the iterations but also lead to convergence to points that are not Kuhn-Tucker points.

This type of false convergence is known as jamming.


    Page 27

9.1.2.1 ε-Perturbation Method

1. At the iteration point x^(t) and with given ε, define I(x^(t), ε) and carry out step 1 of the basic algorithm.

2. Modify step 2 with the following: If θ^(t) ≤ ε, set ε = ε/2 and continue. However, if θ^(t) > ε, proceed with the line search of the basic method. If θ^(t) = 0, then a Kuhn-Tucker point has been found.

With this modification, it is efficient to set ε rather loosely initially, so as to include the constraints in a larger neighborhood of the point x^(t). Then, as the iterations proceed, the size of the neighborhood will be reduced only when it is found to be necessary.


    Page 28

9.1.2.2 Topkis-Veinott Variant

This approach simply dispenses with the active constraint concept altogether and redefines the direction-finding subproblem so that every constraint appears, offset by its current value:

Maximize θ
Subject to ∇f(x^(t))ᵀ d ≤ −θ
g_j(x^(t)) + ∇g_j(x^(t))ᵀ d ≥ θ for all j
−1 ≤ d_i ≤ 1

If constraint j is loose at x^(t), then the selection of d is less affected by constraint j, because the positive constraint value will counterbalance the effect of the gradient term. This ensures that no sudden changes are introduced in the search direction.
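This direction-finding LP can be sketched directly: every constraint enters, offset by its current value, so loose constraints barely restrict d. SciPy's linprog does the work; the data at the bottom (one binding constraint, one loose) is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linprog

def topkis_veinott_direction(grad_f, g_vals, g_grads):
    # Maximize theta s.t.  grad_f.d <= -theta,
    #                      g_j(x) + grad_g_j.d >= theta  (ALL j),
    #                      -1 <= d_i <= 1.
    n = len(grad_f)
    c = np.zeros(n + 1); c[-1] = -1.0                 # min -theta
    A = [np.append(grad_f, 1.0)]; b = [0.0]           # grad_f.d + theta <= 0
    for gj, gr in zip(g_vals, g_grads):
        A.append(np.append(-gr, 1.0)); b.append(gj)   # -grad_g.d + theta <= g_j
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(-1, 1)] * n + [(None, None)])
    return res.x[:n], res.x[-1]

# Illustrative point: constraint 0 is binding (value 0), constraint 1 loose.
d, theta = topkis_veinott_direction(
    grad_f=np.array([-4.0, -4.0]),
    g_vals=[0.0, 5.0],
    g_grads=[np.array([1.0, 0.0]), np.array([0.0, 1.0])])
print(round(theta, 2))   # 1.0: only the binding constraint limits d
```

The loose constraint (value 5) contributes the slack term g_j(x) to its row, so it barely affects the solution, exactly the smoothing effect described above.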


    Page 29

9.2 Simplex Extensions for Linearly Constrained Problems

At a given point, the number of directions that are both descent and feasible directions is generally infinite.

In the case of linear programs, the generation of search directions was simplified by changing one variable at a time; feasibility was ensured by checking sign restrictions, and descent was ensured by selecting a variable with a negative relative-cost coefficient.


    Page 30

    9.2.1 Convex Simplex Method

Consider constraints with M rows and x with N components. Given a feasible point, the x variables are partitioned into two sets:

the basic variables (an M-vector), which are all positive

the nonbasic variables (an (N − M)-vector), which are all zero


    Page 31

    9.2.1 Convex Simplex Method


    Page 32

    9.2.1 Convex Simplex Method

The relative-cost coefficients are computed for the nonbasic variables. The nonbasic variable to enter is selected by finding the one whose relative-cost coefficient is most negative.

The basic variable to leave the basis is selected using the minimum-ratio rule. That is, we find the index r that minimizes the ratio of each basic variable's value to the corresponding positive element of the entering variable's updated column.
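The two selection rules are a few lines each. A minimal sketch with illustrative numbers (the column passed to the ratio test stands for the entering variable's updated column):

```python
import numpy as np

def entering_variable(rel_costs):
    # Nonbasic variable with the most negative relative-cost coefficient;
    # None means the current point passes the optimality test.
    j = int(np.argmin(rel_costs))
    return j if rel_costs[j] < 0.0 else None

def leaving_variable(basic_values, column):
    # Minimum-ratio rule: the basic variable that first reaches zero as the
    # entering variable increases (only rows with a positive column entry).
    ratios = [(xb / a, r) for r, (xb, a) in
              enumerate(zip(basic_values, column)) if a > 0.0]
    return min(ratios)[1] if ratios else None   # None: step is unbounded

print(entering_variable(np.array([0.5, -2.0, -0.3])))       # 1
print(leaving_variable([4.0, 6.0, 3.0], [2.0, 1.0, -1.0]))  # 0 (ratio 4/2)
```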


    Page 33

    9.2.1 Convex Simplex Method

The new feasible solution has the entering variable at its minimum-ratio value and all other nonbasic variables zero. At this point, the basic and nonbasic variables are relabeled. Since an exchange will have occurred, the basis will be redefined; the basis matrix is recomputed, and another cycle of iterations is begun.


    Page 34

    9.2.1 Convex Simplex Method

The same algorithm applies to the linearized form of a nonlinear objective function, with the relative-cost factor computed from the gradient of the objective.


    Page 35

    Example 9.4


    Page 36

    Example 9.4


    Page 37

    Example 9.4

    The relative-cost factor:


    Page 38

    Example 9.4

The nonbasic variable to enter is the one with the most negative relative-cost factor; the basic variable to leave is determined by the minimum-ratio rule.


    Page 39

    Example 9.4

The new point is thus obtained. A line search between the current point and this new point is now required to locate the minimum of the objective. Note that the remaining nonbasic variable stays at 0, while the other variables change along the search direction.


    Page 40

    Example 9.4


    Page 41

    Convex Simplex Algorithm


    Page 42

    Convex Simplex Algorithm


    Page 43

    9.2.2 Reduced Gradient Method

The nonbasic variable direction vector is defined componentwise as

d̄_i = −c̃_i if c̃_i ≤ 0
d̄_i = −x̄_i c̃_i if c̃_i > 0

where c̃_i is the relative-cost (reduced gradient) component and x̄_i the nonbasic variable value. This definition ensures that when d̄_i = 0 for all i, the Kuhn-Tucker conditions are satisfied.
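Assuming the componentwise rule d_i = −c̃_i when c̃_i ≤ 0 and d_i = −x̄_i c̃_i otherwise (so a nonbasic variable already at zero is never pushed negative), the direction vector is a single elementwise select. The sample values are illustrative:

```python
import numpy as np

def nonbasic_direction(c_tilde, x_bar):
    # d_i = -c_tilde_i          if c_tilde_i <= 0  (increase x_i: descent)
    # d_i = -x_bar_i*c_tilde_i  if c_tilde_i >  0  (decrease x_i, scaled by
    #                                               its value, so x_i = 0
    #                                               gives d_i = 0)
    return np.where(c_tilde <= 0.0, -c_tilde, -x_bar * c_tilde)

c_tilde = np.array([-1.5, 2.0, 0.0])   # illustrative relative costs
x_bar   = np.array([ 0.0, 3.0, 1.0])   # illustrative nonbasic values
print(nonbasic_direction(c_tilde, x_bar))   # components 1.5, -6.0, 0.0
```

When every component comes out zero, no nonbasic variable can be moved profitably, which is the Kuhn-Tucker test stated above.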


    Page 44

    9.2.2 Reduced Gradient Method

In the first case, the limiting value of the step is given by the minimum ratio taken over the basic variables that are driven toward zero. If all of the basic variables increase or stay constant, then set this limit to ∞.

In the second case, the limit is determined by the nonbasic variables that are being decreased toward zero. If all of the nonbasic direction components are nonnegative, then set this limit to ∞.


    Page 45

    Reduced Gradient Algorithm
