
Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Page 1: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Process Optimization

Page 2: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Tier I: Mathematical Methods of Optimization

Section 2:

Linear Programming

Page 3: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Linear Programming (LP)

• Linear programming (linear optimization) is the area of optimization problems with linear objective functions and constraints

Example:

minimize: f(x) = 6x1 + 5x2 + 2x3 + 7x4

subject to: 2x1 + 8x3 + x4 ≥ 20

x1 – 5x2 – 2x3 + 3x4 = -5
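As a quick numeric cross-check (an addition to the slides, assuming the usual non-negativity bounds x ≥ 0, which the example leaves implicit), the same LP can be handed to SciPy:

```python
# A minimal sketch: the example LP above, solved with SciPy's linprog.
from scipy.optimize import linprog

c = [6, 5, 2, 7]              # coefficients of f(x)
A_ub = [[-2, 0, -8, -1]]      # 2x1 + 8x3 + x4 >= 20, flipped into <= form
b_ub = [-20]
A_eq = [[1, -5, -2, 3]]       # x1 - 5x2 - 2x3 + 3x4 = -5
b_eq = [-5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x, res.fun)
```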

Page 4: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Linear Programming con’t

• None of the variables are multiplied by another variable, raised to a power, or used in a nonlinear function

• Because the objective function and constraints are linear, they are convex. Thus, if an optimal solution to an LP problem is found, it is the global optimum

Page 5: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

LP Standard Form

• LP Standard form:

minimize: f = cx

subject to: Ax = b

xi ≥ 0; i = 1, …, n

where c is called the cost vector (1 by n), x is the vector of variables (n by 1), A is the coefficient matrix (m by n), and b is an m by 1 vector of given constants.

Page 6: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Standard Form Basics

• For a maximization problem, we can transform using:

max(f(x)) → min(–f(x))

• For inequality constraints, use “slack” variables:

2x1 + 3x2 ≤ 5 → 2x1 + 3x2 + s1 = 5, where s1 ≥ 0

Page 7: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Using Slack Variables

When we transform the equation

2x1 + 3x2 ≤ 5 to 2x1 + 3x2 + s1 = 5

If the left-hand side (LHS) (2x1 + 3x2) is less than the right-hand side (RHS) (5), then s1 will take a positive value to make the equality true. The nearer the value of the LHS is to the RHS, the smaller the value of s1 is. If the LHS is equal to the RHS, s1 = 0. s1 cannot be negative because the LHS cannot be greater than the RHS.

Page 8: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Standard Form Example

Example:

Write in Standard Form:

maximize: f = x1 + x2

subject to: 2x1 + 3x2 ≤ 6

x1 + 7x2 ≥ 4

x1 + x2 = 3

x1 ≥ 0, x2 ≥ 0

Define slack variables x3 ≥ 0 & x4 ≥ 0

Page 9: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example Problem Rewritten

The problem now can be written:

minimize: g = –x1 – x2

subject to: 2x1 + 3x2 + x3 = 6

x1 + 7x2 – x4 = 4

x1 + x2 = 3

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0

c = (–1  –1  0  0)

A = | 2  3  1   0 |
    | 1  7  0  –1 |
    | 1  1  0   0 |

b = ( 6  4  3 )ᵀ

Page 10: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Linear Algebra Review

• The next few slides review several concepts from linear algebra that are the basis of the methods used to solve linear optimization problems

Page 11: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Vectors & Linear Independence

• Vectors
– A k-vector is a row or column array of k numbers. It has a dimension of k.

• Linear Independence (LI)
– A collection of vectors a1, a2, …, ak, each of dimension n, is called linearly independent if the equation

λ1a1 + λ2a2 + … + λkak = 0

holds only when λj = 0 for j = 1, 2, …, k

Page 12: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Linear Independence con’t

• In other words, a set of vectors is linearly independent if one vector cannot be written as a linear combination of the other vectors.

• The maximum number of LI vectors in an n-dimensional space is n.

Page 13: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Linear Independence con’t

For example, in a 2-dimensional space:

The vectors x1 = (4, 5)ᵀ & x2 = (20, 25)ᵀ are not linearly independent because x2 = 5x1.

The vectors x1 = (0, 2)ᵀ & x2 = (3, 1)ᵀ are LI because there is no constant you can multiply one by to get the other.
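Numerically, linear independence can be checked with a matrix rank computation (an addition to the slides): stack the vectors as columns, and the rank equals the number of linearly independent vectors.

```python
import numpy as np

dependent = np.column_stack(([4, 5], [20, 25]))    # x2 = 5*x1
independent = np.column_stack(([0, 2], [3, 1]))
print(np.linalg.matrix_rank(dependent))      # 1 -> not linearly independent
print(np.linalg.matrix_rank(independent))    # 2 -> linearly independent
```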

Page 14: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Spanning Sets

• A set of vectors a1, a2, …, ak in an n-dimensional space is said to span the space if any other vector in the space can be written as a linear combination of the vectors

• In other words, for any vector b, there must exist scalars λ1, λ2, …, λk such that

b = λ1a1 + λ2a2 + … + λkak

Page 15: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Bases

• A set of vectors is said to be a basis for an n-dimensional space if:

1. The vectors span the space

2. If any of the vectors are removed, the set will no longer span the space

• A basis of an n-dimensional space must have exactly n vectors

• There may be many different bases for a given space

Page 16: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Bases con’t

• An example of a basis is the set of coordinate axes of a graph. For a 2-D graph, you cannot remove one of the axes and still form any line with just the remaining axis.

• Or, you cannot have three axes in a 2-D plot because you can always represent the third using the other two.

Page 17: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Systems of Equations

• Linear Algebra can be used to solve a system of equations

Example:

2x1 + 4x2 = 8  &  3x1 – 2x2 = 11

This can be written as an augmented matrix:

[A, b] = | 2   4 |  8 |
         | 3  –2 | 11 |

Page 18: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Systems of Equations con’t

• Row operations may be performed on the matrix without changing the result

• Valid row operations include the following:
– Multiplying a row by a constant
– Interchanging two rows
– Adding one row to another

Page 19: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Solving SOE’s

• In the previous example, we want to change the A matrix to be upper triangular

| 2   4 |  8 |
| 3  –2 | 11 |

multiply the top row by ½:

| 1   2 |  4 |
| 3  –2 | 11 |

add –3 times the top row to the bottom row:

| 1   2 |  4 |
| 0  –8 | –1 |

Page 20: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Solving SOE’s con’t

multiply the bottom row by –1/8:

| 1  2 | 4   |
| 0  1 | 1/8 |

• From the upper triangular augmented matrix, the bottom row reads 0·x1 + x2 = 1/8, so x2 = 1/8, and the top row, x1 + 2x2 = 4, gives x1:

x1 = 4 – 2·(1/8) = 15/4
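This elimination is exactly what a linear-algebra routine does internally; a one-line NumPy check of the worked example (an addition to the slides):

```python
import numpy as np

A = np.array([[2.0, 4.0], [3.0, -2.0]])
b = np.array([8.0, 11.0])
print(np.linalg.solve(A, b))   # [3.75, 0.125], i.e. x1 = 15/4, x2 = 1/8
```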

Page 21: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Matrix Inversion

• The inverse of a matrix can be found by using row operations

Example:

A = |  2   1  1 |
    | –1   2  1 |
    |  1  –1  2 |

Form the augmented matrix (A, I):

|  2   1  1 | 1  0  0 |
| –1   2  1 | 0  1  0 |
|  1  –1  2 | 0  0  1 |

Transform to (I, A–1) using row operations:

| 1  0  0 |  5/12  –3/12  –1/12 |
| 0  1  0 |  3/12   3/12  –3/12 |
| 0  0  1 | –1/12   3/12   5/12 |
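The result can be verified numerically (an addition to the slides):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0], [-1.0, 2.0, 1.0], [1.0, -1.0, 2.0]])
A_inv = np.linalg.inv(A)
print(np.round(12 * A_inv))                 # [[5,-3,-1],[3,3,-3],[-1,3,5]]
print(np.allclose(A @ A_inv, np.eye(3)))    # True
```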

Page 22: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Optimization Equations

• We have seen that the constraints can be written in the form Ax = b.

• We should have more variables than equations so that we have some degrees of freedom to optimize.
– If the number of equations is greater than or equal to the number of variables, the values of the variables are already specified.

Page 23: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

General Solution to SOE’s

• Given a system of equations in the form Ax = b
– Assume m (the number of equations) < n (the number of variables): an underspecified system

• We can split the system into (n – m) independent variables and m dependent variables. The values of the dependent variables will depend on the values we choose for the independent variables.

Page 24: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

General Solution con’t

• We call the dependent variables the basic variables because their A-matrix coefficients will form a basis. The independent variables will be called the nonbasic variables.

• By changing the variables in the basis, we can change bases. It will be shown that this allows examining different possible optimum points.

Page 25: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

General Solution con’t

Separate the A matrix in the following way:

x1a1 + x2a2 + … + xnan = b

Or,

(x1a1 + … + xmam) + (xm+1am+1 + … + xnan) = b

Page 26: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

General Solution con’t

Define matrices B & N as the following:

where B is a m by m matrix, N is a m by (n-m) matrix, & aj is the jth column of the A matrix

• B is called the “basic matrix” and N is called the “nonbasic matrix”

B = [a1 a2 … am]    N = [am+1 am+2 … an]

Page 27: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

General Solution con’t

• The B matrix contains the columns of the A-matrix that correspond to the x-variables that are in the basis. Order must be maintained.
– So, if x4 is the second variable of the basis, a4 must be the second column of the B-matrix

• The N matrix is just the columns of the A-matrix that are left over.

Page 28: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

General Solution con’t

Similarly, define

xB = (x1 x2 … xm)ᵀ  &  xN = (xm+1 xm+2 … xn)ᵀ

We will see later how to determine which variables to put into the basis. This is an important step to examine all possible optimal solutions.

Page 29: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

General Solution con’t

Now, we have

BxB + NxN = b

Multiply both sides by B–1:

x = (xB, xN) = (B–1(b – NxN), xN)

So,

xB = B–1b – B–1NxN

Page 30: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Basic Solution

• We can choose any values for the (n – m) variables in xN and then solve for the remaining m variables in xB

• If we choose xN = 0, then xB = B–1b. This is called a “basic solution” to the system

Basic Solution:  x = (xB, xN) = (B–1b, 0)
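A sketch of computing one basic solution for the standard-form example from Page 9 (the choice of basic variables here is illustrative, not from the slides):

```python
# Basic solution: pick m basic columns, set x_N = 0, and solve B x_B = b.
import numpy as np

A = np.array([[2.0, 3.0, 1.0, 0.0],
              [1.0, 7.0, 0.0, -1.0],
              [1.0, 1.0, 0.0, 0.0]])
b = np.array([6.0, 4.0, 3.0])

basis = [0, 1, 2]                      # columns of x1, x2, x3 (arbitrary pick)
x_B = np.linalg.solve(A[:, basis], b)
print(x_B)    # x3 comes out negative: this basic solution is not feasible
```

Note that the x3 component is negative, which motivates the next slide: not every basic solution satisfies the non-negativity constraints.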

Page 31: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Basic Feasible Solutions

Now we have a solution to Ax = b. But that was just one of two sets of constraints for the optimization problem. The other was: xi ≥ 0, i = 1, …, n (non-negativity)

• A basic feasible solution (BFS) is a basic solution where every x is non-negative

A BFS satisfies all of the constraints of the optimization problem

Page 32: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Extreme Points

• A point is called an extreme point (EP) if it cannot be represented as a strict (0 < λ < 1) convex combination of two other feasible points.

• Remember: a convex combination of two points is a line between them.

• So, an EP cannot be on a line of two other feasible points.

Page 33: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Extreme Points (Graphical)

• Given a feasible region, an extreme point cannot lie on a line between two other feasible points (it must be on a corner)

• In an n-dimensional space, an extreme point is located at the intersection of n constraints

[Figure: a feasible region whose corner points are extreme points; points on the edges or in the interior are not extreme points]

Page 34: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Optimum & Extreme Points

• We have a maximization problem, so we want to go as far in the direction of the c (objective function) vector as we can

• Can we determine anything about the location of the optimum point?

[Figure: a feasible region with a starting point and the objective vector c showing the direction of improvement]

Page 35: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Optimum & Extreme Points

• If we start on a line, we can move along that line in the direction of the objective function until we get to a corner

• In fact, for any c vector, the optimum point will always be on a corner

[Figure: moving along an edge of the feasible region in the direction of c until a corner is reached]

Page 36: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Basic Feasible Solutions

• In an n-dimensional space, a BFS is formed by the intersection of n equations.

• In 2-D:

[Figure: Constraint 1 and Constraint 2 intersecting at a basic feasible solution]

• But, we just saw that an extreme point is also a corner point. So, a BFS corresponds to an EP.

Page 37: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Tying it Together

• We just saw that a basic feasible solution corresponds to an extreme point.

• This is very important because for LP problems, the optimum point is always at an extreme point.

• Thus, if we can solve for all of the BFS’s (EP’s), we can compare them to find the optimum.

Unfortunately, this takes too much time.

Page 38: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Simplex Method Introduction

• The simplex method is the most common method for solving LP problems.

• It works by finding a BFS; determining whether it is optimal; and if it isn’t, it moves to a “better” BFS until the optimal is reached.

• This way, we don’t have to calculate every solution.

Page 39: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Simplex Method Algebra

Recall:

Objective Function:  f = cx = cBxB + cNxN

xB = B–1b – B–1NxN = B–1b – Σj∈N (B–1aj)xj

(the sum is over all nonbasic variables)

Substitute xB into the objective function:

f = cB(B–1b – Σj∈N (B–1aj)xj) + Σj∈N cjxj

Page 40: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Simplex Method Algebra

Multiply through and collect the xj terms:

f = cBB–1b + Σj∈N (cj – cBB–1aj)xj

f = cBB–1b + Σj∈N (cj – zj)xj

where zj = cBB–1aj

Page 41: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Simplex Method Equations

So, minimize

f = cBB–1b + Σj∈N (cj – zj)xj

Subject to:  xB = B–1b – Σj∈N (B–1aj)xj

If (cj – zj) ≥ 0 for all j ∈ N, then the current BFS is optimal for a minimization problem. Because, if it were < 0 for some j, that nonbasic variable, xj, could enter the basis and reduce the objective function.

Page 42: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Entering Variables

• A nonbasic variable may enter the basis and replace one of the basic variables

• Since xN = 0, and we have non-negativity constraints, the entering variable must increase in value.

• The entering variable’s value will increase, reducing the objective function, until a constraint is reached.

Page 43: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Entering Variable Equation

• The equation to determine which variable enters is:

cj – zj = cj – cBB–1aj

Calculate it for all nonbasic indices j.

• For a minimization problem, choose the index j for which cj – zj is the most negative
– If cj – zj ≥ 0 for all j, the solution is optimal

• For a maximization problem, choose the index j for which cj – zj is the most positive
– If cj – zj ≤ 0 for all j, the solution is optimal

Page 44: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Leaving Variables

• As the value of the entering variable increases, the value of at least one basic variable will usually decrease
– If not, the problem is called “unbounded” and the value of the minimum objective function is –∞

• The variable whose value reaches zero first will be the variable that leaves the basis

Page 45: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Entering & Leaving Variables

• Example: x1 is entering the basis while x2, x3 & x4 are the current basic variables

As soon as x2 reaches zero, we must stop because of the non-negativity constraints. But now x2 = 0, so it is a nonbasic variable, and x1 > 0, so it is a basic variable. So, x2 leaves the basis & x1 enters the basis.

[Figure: the values of x2, x3 & x4 plotted against the increasing entering variable x1; x2 reaches zero first]

Page 46: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Leaving Variable Equation

• Let j be the index of the variable that is entering the basis and i* be the index of the variable that is leaving the basis

i* = argmin i { (B–1b)i / (B–1aj)i : (B–1aj)i > 0 }

Meaning, for every index i that is in the basis and has (B–1aj)i > 0, calculate (B–1b)i / (B–1aj)i. The index of the value that is the minimum is the index of the leaving variable.

Page 47: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Leaving Variable Equation

The previous expression is obtained from the equation

xB = B–1b – (B–1aj)xj ≥ 0

which applies when a constraint is reached.

Page 48: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

The Example Revisited

• x2, x3, & x4 start out at (B–1b)i ; (i = 2, 3, 4) and have slopes of –(B–1aj)i ; (i = 2, 3, 4), where j = 1 because 1 is the index of the entering variable (x1)

• Thus, the distance we can go before a basic variable reaches zero is (B–1b)i / (B–1a1)i for (B–1a1)i > 0. But, if (B–1a1)i < 0 (like x3), it won’t ever reach zero.

[Figure: the same plot of x2, x3 & x4 versus x1; x3 has a positive slope and never reaches zero]

Page 49: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

The Example Revisited

• We can also see how, if none of the variables decreased, we could keep increasing x1 and improving the objective function without ever reaching a constraint
– This gives an unbounded solution

[Figure: the same plot with no basic variable decreasing as x1 grows]

Page 50: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example Problem

Minimize f = -x1 – x2

Subject to: x1 + x2 ≤ 5

2x1 – x2 ≤ 4

x1 ≤ 3 ; x1, x2 ≥ 0

Insert slack variables x3, x4, & x5.

Given: the starting basis is x1, x2, & x3.

Page 51: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example

Minimize f = -x1 – x2

Subject to: x1 + x2 + x3 = 5

2x1 – x2 + x4 = 4

x1 + x5 = 3

x1, x2, x3, x4, x5 ≥ 0

A = | 1   1  1  0  0 |
    | 2  –1  0  1  0 |
    | 1   0  0  0  1 |

b = ( 5  4  3 )ᵀ    c = (–1  –1  0  0  0)

Page 52: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example

1st Iteration:

B = [a1 a2 a3] = | 1   1  1 |        B–1 = | 0   0   1 |
                 | 2  –1  0 |              | 0  –1   2 |
                 | 1   0  0 |              | 1   1  –3 |

xB = (x1, x2, x3)ᵀ = B–1b = B–1(5  4  3)ᵀ = (3  2  0)ᵀ

f = cBxB = (–1  –1  0)·(3  2  0)ᵀ = –5

Page 53: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example

Now, check optimality

x4:  c4 – z4 = c4 – cBB–1a4 = 0 – (–1  –1  0)·(0  –1  1)ᵀ = 0 – 1 = –1 < 0

x5:  c5 – z5 = c5 – cBB–1a5 = 0 – (–1  –1  0)·(1  2  –3)ᵀ = 0 – (–3) = 3 > 0

(here B–1a4 = (0  –1  1)ᵀ and B–1a5 = (1  2  –3)ᵀ)

Page 54: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example

So, x4 enters the basis since its optimality indicator is < 0.

B–1b = (3  2  0)ᵀ:  (B–1b)1 = 3,  (B–1b)2 = 2,  &  (B–1b)3 = 0

B–1aj = B–1a4 = | 0   0   1 |  | 0 |     |  0 |
                | 0  –1   2 | ·| 1 |  =  | –1 |
                | 1   1  –3 |  | 0 |     |  1 |

Page 55: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example

(B–1b)1 = 3    (B–1b)2 = 2    (B–1b)3 = 0
(B–1a4)1 = 0   (B–1a4)2 = –1  (B–1a4)3 = 1

Only (B–1a4)3 > 0, so:

i* = argmin i { (B–1b)i / (B–1a4)i : (B–1a4)i > 0 } = argmin { NA, NA, 0/1 } = 3

So, x3 is the leaving variable.

Page 56: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example

2nd Iteration:

B = [a1 a2 a4] = | 1   1  0 |        B–1 = | 0  0   1 |
                 | 2  –1  1 |              | 1  0  –1 |
                 | 1   0  0 |              | 1  1  –3 |

xB = (x1, x2, x4)ᵀ = B–1b = B–1(5  4  3)ᵀ = (3  2  0)ᵀ

f = cBxB = (–1  –1  0)·(3  2  0)ᵀ = –5

a4 has been substituted for a3

Page 57: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example

Optimality Indicators:

x3:  c3 – z3 = c3 – cBB–1a3 = 0 – (–1  –1  0)·(0  1  1)ᵀ = 0 – (–1) = 1 ≥ 0

x5:  c5 – z5 = c5 – cBB–1a5 = 0 – (–1  –1  0)·(1  –1  –3)ᵀ = 0 – 0 = 0 ≥ 0

Page 58: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example Solution

All of the optimality indicators are ≥ 0, so this is the optimal solution.

So,

xB* = (x1, x2, x4)ᵀ = (3  2  0)ᵀ    xN* = (x3, x5)ᵀ = (0  0)ᵀ    f* = –5

Page 59: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Simplex Algorithm Steps

1. With the chosen basis, get B and solve xB = B-1b and f = cBxB.

2. Calculate cj – zj for all nonbasic variables, j.

– For a min. problem, if all cj – zj’s are ≥ 0, the current solution is optimal. If not, choose the index with the most negative cj – zj.

– For a max. problem, if all cj – zj’s are ≤ 0, the current solution is optimal. If not, choose the index with the most positive cj – zj.

Page 60: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Simplex Algorithm Steps

3. Choose the leaving variable using the minimum-ratio equation:

i* = argmin i { (B–1b)i / (B–1aj)i : (B–1aj)i > 0 }

– If all (B–1aj)i’s are ≤ 0, then the solution is unbounded

4. Let xj enter the basis and xi* leave the basis. Obtain the new B matrix and start again with step 1.
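Collecting steps 1–4 into code, a minimal dense-algebra sketch (illustrative only: no anti-cycling rule, and it assumes a starting feasible basis is supplied):

```python
import numpy as np

def simplex(A, b, c, basis):
    """Revised simplex for: min cx s.t. Ax = b, x >= 0, given a feasible basis."""
    m, n = A.shape
    while True:
        B_inv = np.linalg.inv(A[:, basis])
        x_B = B_inv @ b                                # step 1: x_B = B^-1 b
        y = c[basis] @ B_inv                           # multipliers c_B B^-1
        nonbasic = [j for j in range(n) if j not in basis]
        reduced = {j: c[j] - y @ A[:, j] for j in nonbasic}  # step 2: c_j - z_j
        j = min(reduced, key=reduced.get)
        if reduced[j] >= 0:                            # optimal for a min problem
            x = np.zeros(n)
            x[basis] = x_B
            return x, c @ x
        d = B_inv @ A[:, j]                            # step 3: ratio test
        if np.all(d <= 0):
            raise ValueError("problem is unbounded")
        ratios = np.where(d > 0, x_B / np.where(d > 0, d, 1.0), np.inf)
        basis[int(np.argmin(ratios))] = j              # step 4: pivot

# The worked example from the earlier slides, starting from basis (x1, x2, x3):
A = np.array([[1.0, 1.0, 1.0, 0.0, 0.0],
              [2.0, -1.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0, 1.0]])
b = np.array([5.0, 4.0, 3.0])
c = np.array([-1.0, -1.0, 0.0, 0.0, 0.0])
print(simplex(A, b, c, [0, 1, 2]))   # x = (3, 2, 0, 0, 0), f = -5
```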

Page 61: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Choosing A Starting Basis

• In the example, we were given a starting basis. How do we come up with one on our own?

• Case #1: a max (or min) problem with
1. Ax ≤ b (all ≤ inequalities) and
2. all entries of the b vector ≥ 0.

Insert slack variables into the constraint equations and use the resulting identity matrix as the starting basis.

Page 62: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Choosing A Starting Basis

Let s = the vector of slack variables. The problem will become:

max (or min) f = cx + 0s

Subject to:  Ax + Is = b,  x ≥ 0,  s ≥ 0

where I = the identity matrix:

| 1  0  …  0 |
| 0  1  …  0 |
| ⋮  ⋮  ⋱  ⋮ |
| 0  0  …  1 |

Page 63: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Choosing A Starting Basis

Choose the slack variables to be the starting basis.

The starting basis matrix (B) is the coefficients of the slack variables. This happens to be the identity matrix: B = I.

We can see that the starting basis is feasible (xB ≥ 0):

xB = B–1b = I–1b = Ib = b ≥ 0

Page 64: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example Problem #2

Minimize -x1 – 3x2

Subject to 2x1 + 3x2 ≤ 6

-x1 + x2 ≤ 1,  x1, x2 ≥ 0

Insert slack variables:

2x1 + 3x2 + x3 = 6

-x1 + x2 + x4 = 1

x1, x2, x3, x4 ≥ 0

A = |  2  3  1  0 |
    | –1  1  0  1 |

b = ( 6  1 )ᵀ    c = (–1  –3  0  0)

The coefficient columns of x3 & x4 form an identity matrix.
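Before stepping through the iterations by hand, the answer can be cross-checked numerically (an addition to the slides):

```python
from scipy.optimize import linprog

res = linprog(c=[-1, -3], A_ub=[[2, 3], [-1, 1]], b_ub=[6, 1], bounds=(0, None))
print(res.x, res.fun)    # x = (0.6, 1.6), f* = -5.4
```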

Page 65: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example #2

Use the slacks as the starting basis:

B = [a3 a4] = | 1  0 |      B–1 = | 1  0 |
              | 0  1 |            | 0  1 |

xB = (x3, x4)ᵀ = B–1b = (6  1)ᵀ  &  f = cBxB = (0  0)·(6  1)ᵀ = 0

Page 66: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example #2

Optimality Indicators:

j=1:  c1 – z1 = c1 – cBB–1a1 = –1 – (0  0)·(2  –1)ᵀ = –1

j=2:  c2 – z2 = c2 – cBB–1a2 = –3 – (0  0)·(3  1)ᵀ = –3

c2 - z2 is the most negative, so x2 enters the basis

Page 67: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example #2

B–1b = (6  1)ᵀ:  (B–1b)3 = 6  &  (B–1b)4 = 1

B–1aj = B–1a2 = | 1  0 |  | 3 |     | 3 |
                | 0  1 | ·| 1 |  =  | 1 |

x2 is entering the basis

Page 68: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example #2

(B–1b)3 = 6    (B–1b)4 = 1
(B–1a2)3 = 3   (B–1a2)4 = 1

i* = argmin i { (B–1b)i / (B–1a2)i : (B–1a2)i > 0 } = argmin { 6/3, 1/1 } = 4

The minimum ratio is 1/1. So, x4 is the leaving variable.

Page 69: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example #2

2nd Iteration:

B = [a3 a2] = | 1  3 |      B–1 = | 1  –3 |
              | 0  1 |            | 0   1 |

xB = (x3, x2)ᵀ = B–1b = B–1(6  1)ᵀ = (3  1)ᵀ

f = cBxB = (0  –3)·(3  1)ᵀ = –3

x2 replaced x4

Page 70: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example #2

Optimality Indicators:

j=1:  c1 – z1 = c1 – cBB–1a1 = –1 – (0  –3)·(5  –1)ᵀ = –1 – 3 = –4

j=4:  c4 – z4 = c4 – cBB–1a4 = 0 – (0  –3)·(–3  1)ᵀ = 0 – (–3) = 3

(here B–1a1 = (5  –1)ᵀ and B–1a4 = (–3  1)ᵀ)

So, x1 enters the basis.

Page 71: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example #2

Leaving Variable:

(B–1b)3 = 3    (B–1b)2 = 1
(B–1a1)3 = 5   (B–1a1)2 = –1

Only (B–1a1)3 > 0, so:

i* = argmin i { (B–1b)i / (B–1a1)i : (B–1a1)i > 0 } = argmin { 3/5, NA } = 3

So, x3 leaves the basis and x1 replaces it.

Page 72: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example #2

3rd Iteration:

B = [a1 a2] = |  2  3 |      B–1 = | 0.2  –0.6 |
              | –1  1 |            | 0.2   0.4 |

xB = (x1, x2)ᵀ = B–1b = B–1(6  1)ᵀ = (0.6  1.6)ᵀ

f = cBxB = (–1  –3)·(0.6  1.6)ᵀ = –5.4

Page 73: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example #2

Optimality Indicators:

j=3:  c3 – z3 = c3 – cBB–1a3 = 0 – (–1  –3)·(0.2  0.2)ᵀ = 0 – (–0.8) = 0.8

j=4:  c4 – z4 = c4 – cBB–1a4 = 0 – (–1  –3)·(–0.6  0.4)ᵀ = 0 – (–0.6) = 0.6

Both cj – zj’s are ≥ 0, so the current solution is optimal:

x* = (0.6  1.6  0  0)ᵀ    f* = –5.4

Page 74: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example #2

This graph shows the path taken.

The dashed lines are perpendicular to the cost vector, c.

[Figure: the simplex path (0, 0) → (0, 1) → (0.6, 1.6) in the (x1, x2) plane, with dashed lines perpendicular to the cost vector c]

Page 75: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Example #2

• Since we were minimizing, we went in the opposite direction as the cost vector

[Figure: the same path, which moves opposite to the direction of increasing c]

Page 76: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

More on Starting Bases

• Case #2: a max (or min) problem with:
1. Ax ≥ b (at least some ≥ constraints) &
2. all entries of the b vector ≥ 0

Add slacks to make the problem become

Ax – Is = b,  x, s ≥ 0

We cannot do the same trick as before because now we would have a negative identity matrix as our B matrix.

Page 77: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Case #2 con’t

• 2-Phase Method:

Introduce “artificial variables” (y) where needed to get an identity matrix. If all constraints were ≥, the problem will become:

Ax – Is + Iy = b,  x, s, y ≥ 0

Page 78: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Artificial Variables

• Artificial variables are not real variables.

• We use them only to get a starting basis, so we must get rid of them.

• To get rid of them, we solve an extra optimization problem before we start solving the regular problem.

Page 79: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

2-Phase Method

Phase 1:

Solve the following LP starting with B = I and xB = y = b:

Minimize Σi yi

Subject to: Ax – Is + Iy = b,  x, s, y ≥ 0

If y ≠ 0 at the optimum, stop – the problem is infeasible. If y = 0, then use the current basis and continue on to phase 2.

Page 80: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

2-Phase Method con’t

Phase 2:

Using the objective function from the original problem, change the c vector and continue solving using the current basis.

Minimize (or Maximize) cx

Subject to: Ax – Is = b,  x, s ≥ 0

Page 81: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Artificial vs. Slack Variables

• Slack variables are real variables that may be positive in an optimum solution, meaning that their constraint is a strict (< or >) inequality at the optimum.

• Artificial variables are not real variables. They are only inserted to give us a starting basis to begin the simplex method. They must become zero to have a feasible solution to the original problem.

Page 82: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Artificial Variable Example 1

• Consider the constraints:

x1 + 2x2 ≥ 4

-3x1 + 4x2 ≥ 5

2x1 + x2 ≤ 6,  x1, x2 ≥ 0

• Introduce slack variables: x1 + 2x2 – x3 = 4

-3x1 + 4x2 – x4 = 5

2x1 + x2 + x5 = 6

Page 83: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 1

We can see that we cannot get an identity matrix in the coefficients and positive numbers on the right-hand side. We need to add artificial variables:

x1 + 2x2 – x3 + y1 = 4

-3x1 + 4x2 – x4 + y2 = 5

2x1 + x2 + x5 = 6

Page 84: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 1

Now we have an identity matrix, made up of the coefficient columns of y1, y2, & x5.

We would solve the problem with the objective of minimizing y1 + y2 to get rid of the artificial variables, then use whatever basis we had and continue solving, using the original objective function.

Page 85: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Artificial Variable Example 2

• Consider the constraints:

x1 + 2x2 – 5x3 ≥ -4

3x1 – x2 + 3x3 ≤ 2

-x1 + x2 + x3 = -1,  x1, x2, x3 ≥ 0

• Introduce slack variables:

x1 + 2x2 – 5x3 – x4 = -4

3x1 – x2 + 3x3 + x5 = 2

-x1 + x2 + x3 = -1

Page 86: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 2

We don’t have to add an artificial variable to the 1st constraint if we multiply by -1.

When we multiply the last constraint by -1 and add an artificial variable, we have:

-x1 – 2x2 + 5x3 + x4 = 4

3x1 – x2 + 3x3 + x5 = 2

x1 – x2 – x3 + y1 = 1

x1, x2, x3, x4, x5, y1 ≥ 0

Page 87: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Constraint Manipulation

So, after adding slacks, we must make the right-hand side numbers positive. Then we add artificial variables if we need to.
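That preprocessing rule is mechanical enough to code; a small helper sketch (the function name is illustrative):

```python
import numpy as np

def make_rhs_nonnegative(A, b):
    """Multiply by -1 every row whose right-hand side is negative."""
    A, b = A.copy(), b.copy()
    neg = b < 0
    A[neg] *= -1
    b[neg] *= -1
    return A, b

# Rows [A | b] of AV Example 2 after adding the slacks x4 and x5:
A = np.array([[1.0, 2.0, -5.0, -1.0, 0.0],
              [3.0, -1.0, 3.0, 0.0, 1.0],
              [-1.0, 1.0, 1.0, 0.0, 0.0]])
b = np.array([-4.0, 2.0, -1.0])
print(make_rhs_nonnegative(A, b))    # the 1st and 3rd rows are negated
```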

Page 88: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Artificial Variable Example 3

• Consider the problem:

Maximize -x1 + 8x2

Subject to: x1 + x2 ≥ 1

-x1 + 6x2 ≤ 3

x2 ≤ 2,  x1, x2 ≥ 0

Page 89: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

Insert slacks:

x1 + x2 – x3 = 1

-x1 + 6x2 + x4 = 3

x2 + x5 = 2

So, we need an artificial variable in the 1st constraint.

Page 90: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

Insert artificial variable:

x1 + x2 – x3 + y1 = 1

-x1 + 6x2 + x4 = 3

x2 + x5 = 2

Page 91: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

So, Phase 1 is: Minimize y1

Subject to: x1 + x2 – x3 + y1 = 1

-x1 + 6x2 + x4 = 3

x2 + x5 = 2

Our starting basis is: y1, x4, & x5.

Page 92: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

b = ( 1  3  2 )ᵀ    c = (0  0  0  0  0  1)

(the variable order is x1, x2, x3, x4, x5, y1)

B = [a6 a4 a5] = | 1  0  0 |  = I,  so  B–1 = I
                 | 0  1  0 |
                 | 0  0  1 |

xB = (y1, x4, x5)ᵀ = B–1b = (1  3  2)ᵀ  &  f = cBxB = (1  0  0)·(1  3  2)ᵀ = 1

Page 93: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

Optimality Indicators:

j=1:  c1 – z1 = c1 – cBB–1a1 = 0 – (1  0  0)·(1  –1  0)ᵀ = 0 – 1 = –1

j=2:  c2 – z2 = c2 – cBB–1a2 = 0 – (1  0  0)·(1  6  1)ᵀ = 0 – 1 = –1

j=3:  c3 – z3 = c3 – cBB–1a3 = 0 – (1  0  0)·(–1  0  0)ᵀ = 0 – (–1) = 1

Page 94: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

It’s a tie between x1 & x2 – pick x1 to enter the basis.

B–1b = (1  3  2)ᵀ:  (B–1b)6 = 1,  (B–1b)4 = 3,  &  (B–1b)5 = 2

B–1aj = B–1a1 = (1  –1  0)ᵀ

x1 is entering the basis.

Page 95: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

(B–1b)6 = 1    (B–1b)4 = 3    (B–1b)5 = 2
(B–1a1)6 = 1   (B–1a1)4 = –1  (B–1a1)5 = 0

Only (B–1a1)6 > 0, so:

i* = argmin i { (B–1b)i / (B–1a1)i : (B–1a1)i > 0 } = argmin { 1/1, NA, NA } = 6

So, x1 replaces y1 in the basis.

Page 96: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

B = [a1 a4 a5] = |  1  0  0 |      B–1 = | 1  0  0 |
                 | –1  1  0 |            | 1  1  0 |
                 |  0  0  1 |            | 0  0  1 |

xB = (x1, x4, x5)ᵀ = B–1b = B–1(1  3  2)ᵀ = (1  4  2)ᵀ

f = cBxB = (0  0  0)·(1  4  2)ᵀ = 0

Page 97: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

Optimality Indicators:

j=2:  c2 – z2 = c2 – cBB–1a2 = 0 – (0  0  0)·(1  7  1)ᵀ = 0

j=3:  c3 – z3 = c3 – cBB–1a3 = 0 – (0  0  0)·(–1  –1  0)ᵀ = 0

j=6:  c6 – z6 = c6 – cBB–1a6 = 1 – (0  0  0)·(1  1  0)ᵀ = 1

Page 98: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

All of the optimality indicators are ≥ 0, so this is an optimum solution.

So, we keep this basis and change the objective function to the original one:

Maximize –x1 + 8x2

Our basis is still x1, x4, & x5.

Page 99: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

Back to original problem:

c = (–1  8  0  0  0)

B = |  1  0  0 |      B–1 = | 1  0  0 |
    | –1  1  0 |            | 1  1  0 |
    |  0  0  1 |            | 0  0  1 |

xB = (x1, x4, x5)ᵀ = B–1b = (1  4  2)ᵀ

f = cBxB = (–1  0  0)·(1  4  2)ᵀ = –1

The basis remains the same.

Page 100: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

Optimality Indicators:

j=2:  c2 – z2 = c2 – cBB–1a2 = 8 – (–1  0  0)·(1  7  1)ᵀ = 8 – (–1) = 9

j=3:  c3 – z3 = c3 – cBB–1a3 = 0 – (–1  0  0)·(–1  –1  0)ᵀ = 0 – 1 = –1

Since we are maximizing now, we want the most positive. So, x2 enters the basis.

Page 101: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

B–1b = (1  4  2)ᵀ:  (B–1b)1 = 1,  (B–1b)4 = 4,  &  (B–1b)5 = 2

B–1aj = B–1a2 = | 1  0  0 |  | 1 |     | 1 |
                | 1  1  0 | ·| 6 |  =  | 7 |
                | 0  0  1 |  | 1 |     | 1 |

x2 is entering the basis.

Page 102: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

(B–1b)1 = 1    (B–1b)4 = 4    (B–1b)5 = 2
(B–1a2)1 = 1   (B–1a2)4 = 7   (B–1a2)5 = 1

i* = argmin i { (B–1b)i / (B–1a2)i : (B–1a2)i > 0 } = argmin { 1/1, 4/7, 2/1 } = 4

The minimum ratio is 4/7, so x4 leaves the basis.

Page 103: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

B = [a1 a2 a5] = |  1  1  0 |      B–1 = |  0.857  –0.143  0 |
                 | –1  6  0 |            |  0.143   0.143  0 |
                 |  0  1  1 |            | –0.143  –0.143  1 |

xB = (x1, x2, x5)ᵀ = B–1b = B–1(1  3  2)ᵀ = (0.429  0.571  1.429)ᵀ

f = cBxB = (–1  8  0)·(0.429  0.571  1.429)ᵀ = 4.143

Page 104: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

AV Example 3

Optimality Indicators:

j=3:  c3 – z3 = c3 – cBB–1a3 = 0 – (–1  8  0)·(–0.857  –0.143  0.143)ᵀ = 0 – (–0.286) = 0.286

j=4:  c4 – z4 = c4 – cBB–1a4 = 0 – (–1  8  0)·(–0.143  0.143  –0.143)ᵀ = 0 – 1.286 = –1.286

Since we are maximizing, c3 – z3 > 0 means this solution is not yet optimal: x3 enters the basis. Only (B–1a3)5 = 0.143 > 0, so the ratio test gives 1.429/0.143 = 10 and x5 leaves the basis.

Page 105: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Artificial Variable Example 3

One more iteration with the basis (x1, x2, x3) gives xB = B–1b = (9  2  10)ᵀ, and now all of the optimality indicators are ≤ 0 (c4 – z4 = –1 and c5 – z5 = –2), so this is the optimum solution:

x* = (x1, x2, x3, x4, x5)ᵀ = (9  2  10  0  0)ᵀ    f* = 7
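This optimum can be confirmed independently (an addition to the slides; linprog minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

res = linprog(c=[1, -8],                        # maximize -x1 + 8x2
              A_ub=[[-1, -1], [-1, 6], [0, 1]], # x1 + x2 >= 1 flipped to <=
              b_ub=[-1, 3, 2],
              bounds=(0, None))
print(res.x, -res.fun)                          # x = (9, 2), f* = 7
```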

Page 106: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

KKT Conditions

• The Karush-Kuhn-Tucker (KKT) Conditions can be used to see optimality graphically

• We will use them more in nonlinear programming later, but we can use a simplified version here

Page 107: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

KKT Conditions for LP

• Change the constraints so that they are all ≥ constraints.

• The optimum point is the point where the gradient of the objective function lies within the cone formed by the vectors normal to the intersecting constraints.

Page 108: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

KKT Conditions

• Reminder:
– The gradient (∇f) of a function f with n variables is calculated:

∇f = ( ∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn )

Example:  f(x) = 3x1² + 5x2x3

∇f = ( 6x1, 5x3, 5x2 )

Page 109: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

KKT Constraints Example

• Example: In the example problem #2, we had the problem:

Minimize f = -x1 – 3x2

Subject to: 2x1 + 3x2 ≤ 6

-x1 + x2 ≤ 1

x1, x2 ≥ 0

The gradient of the cost function, f = –x1 – 3x2, is: ∇f = c = (–1, –3)

Page 110: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

KKT Example

Previously, we saw that this problem looks like:

[Figure: the feasible region bounded by Constraint 1 and Constraint 2, with extreme points at (0, 0), (0, 1), (3/5, 8/5), and (3, 0)]

Page 111: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

KKT Example

Change the constraints to all ≥:

g1: -2x1 – 3x2 ≥ -6

g2: x1 – x2 ≥ -1

g3: x1 ≥ 0

g4: x2 ≥ 0

Page 112: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

KKT Example

The gradients of the four constraints (counting the non-negativity constraints), g1, …, g4, are:

∇g1 = (–2, –3)    ∇g2 = (1, –1)    ∇g3 = (1, 0)    ∇g4 = (0, 1)

Page 113: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

KKT Example

The graph of the problem with the normals of the constraints becomes:

[Figure: the feasible region with the constraint normals ∇g1, ∇g2, ∇g3, & ∇g4 drawn at the extreme points (0, 0), (0, 1), (3/5, 8/5), and (3, 0)]

The gradient corresponding to each constraint (∇gi) is perpendicular to constraint i.

Page 114: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

KKT Example

c = (-1, -3) looks like this:

[Figure: an arrow pointing down and to the left]

So, whichever cone this vector fits into corresponds to the optimal extreme point.

Page 115: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

KKT Example

[Figure: the cone test at each extreme point: the cone at (3/5, 8/5) fits c (“It Fits!”), while the cones at (0, 0), (0, 1), and (3, 0) don’t fit]

Page 116: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

KKT Example

• So, we get the same optimal point as when we used the simplex method

• This can also be done for problems with three variables in a 3-D space

• With four or more variables, visualization is not possible and it is necessary to use the mathematical definition

Page 117: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Mathematical Definition of KKT Conditions for LP

Given an LP minimization problem, modify the constraints so that we have:

gi(x) ≥ 0,  i = 1, …, m

where gi(x) is linear constraint equation i. The bi that was on the right side of the inequality sign is moved to the left side and included in gi.

Page 118: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Mathematical Definition of KKT Conditions for LP

If there exists a solution for x* & the λi’s for the conditions below, then x* is the global optimum:

Equation 1:  ∇f = c = λ1∇g1(x*) + … + λm∇gm(x*)

Equation 2:  λi gi(x*) = 0,  i = 1, …, m

Equation 3:  gi(x*) ≥ 0,  i = 1, …, m

Equation 4:  λi ≥ 0,  i = 1, …, m

Page 119: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Explanation of Equation 1

• Equation 1 mathematically states that the objective function vector must lie inside the cone formed by the vectors normal to the active constraints at the optimal point

Page 120: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Explanation of Equation 2

• Equation 2 forces λi to be zero for all of the inactive constraints – called the “complementary slackness” condition
– If the constraint is active, gi(x*) = 0, so λi may be positive and ∇gi will be part of the cone in Equation 1.
– If the constraint is inactive, gi(x*) > 0, so λi must be zero. ∇gi will not be included in the cone in Equation 1 because it will be multiplied by zero.

Page 121: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Explanation of Equations 3 & 4

• Equation 3 ensures that x* is feasible

• Equation 4 ensures that the direction of the cone is correct.
– If the λi’s were negative, the cone would be in the opposite direction. So, this equation prevents that from happening.
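For an LP, Equation 1 reduces to a small linear solve at a candidate extreme point. A numeric sketch for Example #2's optimum (3/5, 8/5), where g1 and g2 are the active constraints (an addition to the slides):

```python
import numpy as np

# Columns are the gradients of the active constraints g1 and g2.
G = np.array([[-2.0, 1.0],
              [-3.0, -1.0]])
c = np.array([-1.0, -3.0])     # gradient of f = -x1 - 3x2

lam = np.linalg.solve(G, c)    # Equation 1: c = lam1*grad(g1) + lam2*grad(g2)
print(lam)                     # [0.8, 0.6]: both >= 0, so the point is optimal
```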

Page 122: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

KKT Conditions Summary

• The KKT Conditions are not very useful in solving for optimal points, but they can be used to check for optimality and they help us visualize optimality

• We will use them frequently when we deal with nonlinear optimization problems in the next section

Page 123: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Automated LP Solvers

• There are many software programs available that will solve LP problems numerically

• Microsoft Excel is one program that solves LP problems
– To see the default Excel examples for optimization problems, search for and open the file “solvsamp.xls” (it should be included in a standard installation of Microsoft Office)

Page 124: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

Let’s solve the first example problem in this chapter with Excel

The problem was:

Minimize f = -x1 – x2

Subject to: x1 + x2 ≤ 5

2x1 – x2 ≤ 4

x1 ≤ 3 ; x1, x2 ≥ 0

Page 125: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

Here is the Excel spreadsheet with the necessary data:

x1: 0    x2: 0

                        value           limit
Objective Function:     =-A2-B2
Constraint 1:           =A2+B2          5
Constraint 2:           =2*A2-B2        4
Constraint 3:           =A2             3

In the spreadsheet, A2 is the cell reference for x1 & B2 is the reference for x2

Page 126: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

You can see that under the “value” heading for the constraints & objective function, we simply use the given functions to calculate the value of the function


Page 127: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

On the right side of the constraints, in the “limit” column, we write the value of the “bi” for that constraint

Obviously, the objective function doesn’t have a limit


Page 128: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

So, the spreadsheet looks like this:

Page 129: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

• Now, we need to use the Excel solver feature

• Look under the Tools menu for “Solver”
– If it is not there, go to “Add-Ins” under Tools and select the Solver Add-In

Page 130: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

The Solver toolbox should look something like this:

Page 131: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

• This is a minimization problem, so select “Min” and set the target cell as the objective function value

• The variables are x1 & x2, so in the “By Changing Cells” box, select A2 & B2

Page 132: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

• Now add the constraints:
– For the “Cell Reference,” use the value of the constraint function and for the “Constraint,” use the number in the Limit column
– The constraints are all ≤, so make sure that “<=” is showing between the Cell Reference and Constraint boxes

Page 133: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

• Now, the Solver window should look like this:

Page 134: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

• Finally, click the Options button

• All of the variables are specified as being positive, so check the “Assume Non-Negative” box

Page 135: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

• Since this is an LP problem, check the “Assume Linear Model” box

• Finally, the default tolerance of 5% is usually much too large. Unless the problem is very difficult, a tolerance of 1% or even 0.1% is usually fine

Page 136: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

• Click “Solve” and the Solver Results box should appear

• Under “Reports,” select the Answer Report and click OK

• A new worksheet that contains the Answer Report is added to the file

Page 137: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

• The spreadsheet with the optimum values should look like this:

Page 138: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel LP Example #1

• The values for x1 & x2 are the same as when we solved the problem using the simplex method

• Also, if you look under the Answer Report, you can see that all of the slack variables are equal to zero, which is also what we obtained with the simplex method

Page 139: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel Example #2

Let’s solve one more LP problem with Excel:

Maximize 5x1 – 2x2 + x3

Subject to: 2x1 + 4x2 + x3 ≤ 6

2x1 + x2 + 3x3 ≥ 2

x1, x2 ≥ 0

x3 unrestricted in sign
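A numeric cross-check of this problem before setting it up in Excel (an addition to the slides; note the free bounds on x3):

```python
from scipy.optimize import linprog

res = linprog(c=[-5, 2, -1],                    # negate to minimize
              A_ub=[[2, 4, 1], [-2, -1, -3]],   # the >= 2 row flipped to <=
              b_ub=[6, -2],
              bounds=[(0, None), (0, None), (None, None)])
print(res.x, -res.fun)                          # x = (4, 0, -2), f* = 18
```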

Page 140: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel Example #2

Entering the equations into the spreadsheet should give:

x1: 0    x2: 0    x3: 0

                        Value:                  Limit:
Objective Function:     =5*A2-2*B2+C2
Constraint 1:           =2*A2+4*B2+C2           6
Constraint 2:           =2*A2+B2+3*C2           2

Page 141: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel Example #2

Unlike last time, not all of the variables are specified as non-negative, so we cannot use the “Assume Non-negative” option for all of the variables.

So we have to manually specify x1 & x2 to be non-negative by adding two more constraints

Page 142: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel Example #2

Now, the formulas in the spreadsheet should look like this:

x1: 0    x2: 0    x3: 0

                        Value:                  Limit:
Objective Function:     =5*A2-2*B2+C2
Constraint 1:           =2*A2+4*B2+C2           6
Constraint 2:           =2*A2+B2+3*C2           2
Constraint 3:           =A2                     0
Constraint 4:           =B2                     0

Page 143: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel Example #2

Now, open the solver toolbox and specify:

• The Target Cell,

• The range of variable cells,

• Maximization problem

• The constraints – The first is ≤ and the rest are ≥ constraints.

Page 144: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel Example #2

Click the Options button and check the “Assume Linear Model” box.

Remember, since x3 is unrestricted in sign, do not check the “Assume Non-negative” box

You can reduce the tolerance if you would like

Page 145: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel Example #2

The Solver window should look like this:

Page 146: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel Example #2

After solving, the spreadsheet should look like this:

Page 147: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Excel Example #2

• Notice that because x3 was unrestricted in sign, it was able to have a negative value and this improved the solution

• To see how much of a difference this made in the solution, re-solve the problem with the “Assume Non-negative” option selected

Page 148: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

Solving LP Problems With Excel

• From these examples, you can see that Excel can be an efficient tool to use for solving LP optimization problems

• The method for solving the problems that was outlined here is obviously just one way and the user should feel free to experiment to find their own style

Page 149: Tier I: Mathematical Methods of Optimization Section 2: Linear Programming.

References

• Bazaraa, Mokhtar; Jarvis, John; & Sherali, Hanif. Linear Programming and Network Flows.

• Edgar, Thomas; Himmelblau, David; & Lasdon, Leon. Optimization of Chemical Processes, 2nd Ed.

• El-Halwagi, Mahmoud. Pollution Prevention Through Process Integration.

