8/20/2019 Notes 3 Print
Notes 3: From Geometry to Simplex Method
IND E 513
IND E 513 Notes 3 Slide 1
Polyhedra in Standard Form
Definition
A polyhedron in standard form is given by {x ∈ R^n | Ax = b, x ≥ 0}, where A is an m × n matrix and b is a vector in R^m.
There are m equality constraints in n non-negative variables.
There is no loss of generality (WLOG) in assuming the rows of A are linearly independent for a non-empty standard form polyhedron (page 57). We make this assumption throughout; in particular, m ≤ n.
How can we tell whether a vector is a basic solution?
Theorem
A vector y ∈ R^n is a basic solution if and only if Ay = b and there exist indices B(1), ..., B(m) such that
1. the columns A_B(1), A_B(2), ..., A_B(m) are linearly independent;
2. if i ≠ B(1), B(2), ..., B(m), then y_i = 0.
This also hints at a procedure for constructing basic solutions.
Constructing a Basic Solution for Standard Form Polyhedra
1. Choose m linearly independent columns A_B(1), A_B(2), ..., A_B(m).
2. Let x_i = 0 for all i ≠ B(1), B(2), ..., B(m).
3. Solve the system of m equations

       A_B(1) x_B(1) + A_B(2) x_B(2) + ... + A_B(m) x_B(m) = b

   for the m unknowns x_B(1), ..., x_B(m).

Example

    [ 1 1 2 1 0 0 0 ]       [  8 ]
    [ 0 1 6 0 1 0 0 ] x  =  [ 12 ]
    [ 1 0 0 0 0 1 0 ]       [  4 ]
    [ 0 1 0 0 0 0 1 ]       [  6 ]
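The three steps above translate directly into code. A minimal sketch with NumPy, applied to the example (the choice of basic columns A_1, A_2, A_3, A_7 is one valid choice among several):

```python
import numpy as np

# Constraint data from the example: A is 4 x 7, b is in R^4.
A = np.array([
    [1, 1, 2, 1, 0, 0, 0],
    [0, 1, 6, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0, 0, 1],
], dtype=float)
b = np.array([8.0, 12, 4, 6])

def basic_solution(A, b, basic_idx):
    """Steps 1-3: pick m independent columns, zero the rest, solve B x_B = b."""
    B = A[:, basic_idx]                    # m x m basis matrix
    if np.linalg.matrix_rank(B) < len(basic_idx):
        raise ValueError("chosen columns are not linearly independent")
    x = np.zeros(A.shape[1])
    x[basic_idx] = np.linalg.solve(B, b)   # x_B = B^{-1} b
    return x

# Basic columns A_1, A_2, A_3, A_7 (0-based indices 0, 1, 2, 6).
x = basic_solution(A, b, [0, 1, 2, 6])
print(x)   # a basic solution; here it is also feasible since x >= 0
```

Note that a basic solution produced this way need not be feasible: nothing in the construction forces x_B ≥ 0.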
Terminology
If x is a basic solution:
- A_B(1), A_B(2), ..., A_B(m) are called basic columns. They are linearly independent and form a basis for R^m.
- x_B(1), x_B(2), ..., x_B(m) are called basic variables; the remaining variables are nonbasic.
- By arranging the m basic columns next to each other, we obtain an m × m matrix B called a basis matrix. B is invertible:

      B = [ A_B(1)  A_B(2)  ···  A_B(m) ],    x_B = (x_B(1), ..., x_B(m)),    x_B = B^{-1} b.
Visualizing Standard Form Polyhedra
Can we visualize and draw standard form polyhedra for n ≥ 3? Yes: if n − m = 2, the feasible region can be drawn as a two-dimensional region defined by n linear inequality constraints.

Example
x_1 + x_2 + x_3 = 1,  x_1, x_2, x_3 ≥ 0
Existence of Extreme Points
Back to general polyhedra {x ∈ R^n | Ax ≥ b}. Does a polyhedron always have an extreme point, i.e., a basic feasible solution, i.e., a vertex?
Of course not! Example: a halfspace in R^n for n > 1.

Definition
A polyhedron P ⊂ R^n contains a line if there exist a vector x ∈ P and a nonzero vector d ∈ R^n such that x + λd ∈ P for all scalars λ.
Theorem
Suppose that the polyhedron P = {x ∈ R^n | a_i' x ≥ b_i, i = 1, 2, ..., m} is nonempty. Then the following are equivalent:
1. P has at least one extreme point.
2. P does not contain a line.
3. There exist n vectors out of a_1, a_2, ..., a_m which are linearly independent.
Corollary
Every nonempty bounded polyhedron and every nonempty polyhedron in standard form has at least one extreme point.
Partial Proof of the Above Theorem
Proof that (1) ⇒ (3): If P has an extreme point x, then x is also a basic feasible solution. Hence there exist n constraints that are active at x, and the corresponding vectors a_i are linearly independent.

Proof that (3) ⇒ (2): by contradiction. Suppose (WLOG) that the vectors a_1, a_2, ..., a_n are linearly independent and yet P contains some line x + λd with d ≠ 0. Then a_i'(x + λd) ≥ b_i for all i and all scalars λ. This implies that a_i' d = 0 for all i; in particular,

    a_i' d = 0,   i = 1, ..., n.

But this implies that d = 0, because d is orthogonal to the n linearly independent vectors a_1, ..., a_n, which span R^n. This contradicts d ≠ 0.

Proof that (2) ⇒ (1): (page 63).
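Condition (3) is easy to test numerically: stack the constraint vectors a_i as the rows of a matrix and check whether its rank is n. A small sketch (the unit-square example is made up for illustration; the halfspace is the one from the previous slide):

```python
import numpy as np

def has_extreme_point(a_rows):
    """By condition (3) of the theorem: P = {x : a_i'x >= b_i} has an
    extreme point iff n of the a_i are linearly independent, i.e. the
    matrix of constraint vectors has rank n."""
    n = a_rows.shape[1]
    return np.linalg.matrix_rank(a_rows) == n

# A single halfspace in R^2 (contains a line, hence no extreme point):
halfspace = np.array([[1.0, 1.0]])
# The unit square 0 <= x <= 1 in R^2, written as a_i'x >= b_i:
box = np.array([[1.0, 0], [0, 1], [-1, 0], [0, -1]])

print(has_extreme_point(halfspace))  # False
print(has_extreme_point(box))        # True
```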
Optimality of Extreme Points
Theorem
Consider the linear programming problem of minimizing c'x over a polyhedron P. Suppose that P has at least one extreme point and that there exists an optimal solution. Then there exists an optimal solution that is an extreme point of P.

Proof: Let Q be the set of optimal solutions (assumed to be non-empty). Suppose P = {x ∈ R^n | Ax ≥ b} and let v be the optimal value of the cost c'x. Then Q = {x ∈ R^n | Ax ≥ b, c'x = v}, which is also a polyhedron. Since Q ⊂ P and P contains no line (it has an extreme point), Q contains no line and hence has at least one extreme point. Let x* be an extreme point of Q.
Claim: x* is also an extreme point of P.
Optimality of Extreme Points contd.
Proof of Claim: by contradiction. Suppose x* is not an extreme point of P. Then there exist y, z ∈ P with y ≠ x*, z ≠ x*, and some λ ∈ [0, 1] such that x* = λy + (1 − λ)z. Then v = c'x* = λc'y + (1 − λ)c'z. Furthermore, since v is the optimal cost, c'y ≥ v and c'z ≥ v. This implies that c'y = v and c'z = v, and therefore y ∈ Q, z ∈ Q. However, this contradicts the fact that x* is an extreme point of Q.
To conclude, x* is optimal for min{c'x : x ∈ P} and is an extreme point of P.
Optimality of Extreme Points contd.
Theorem
Consider the linear programming problem of minimizing c'x over a polyhedron P. Suppose that P has at least one extreme point. Then either the optimal cost is equal to −∞, or there exists an extreme point which is optimal.

This implies that if P has an extreme point and the optimal cost is finite, then the problem has an extreme point optimal solution. Since a non-empty standard form polyhedron always has an extreme point, it has an extreme point optimal solution whenever the optimal cost is finite.

Corollary
Consider the linear programming problem of minimizing c'x over a nonempty polyhedron P. Then either the optimal cost is −∞ or there exists an optimal solution.
Degeneracy
Definition
A basic solution x ∈ R^n is said to be degenerate if more than n of the constraints are active at x; otherwise x is said to be non-degenerate.
Example
Suppose P ⊂ R^3 is given by

    x_1 + x_2 + 2x_3 ≤ 8
          x_2 + 6x_3 ≤ 12
    x_1              ≤ 4
          x_2        ≤ 6
    x_1, x_2, x_3 ≥ 0

(4, 0, 2) is a degenerate basic feasible solution.
(2, 6, 0) is a non-degenerate basic feasible solution.
[Figure: C is a degenerate basic feasible solution, D is a degenerate basic solution (infeasible), E is a nondegenerate basic feasible solution.]
Degeneracy in Standard Form Polyhedra
Definition
A basic solution x of a standard form polyhedron is degenerate if more than n − m of the components of x are zero.
Example
    [ 1 1 2 1 0 0 0 ]       [  8 ]
    [ 0 1 6 0 1 0 0 ] x  =  [ 12 ]
    [ 1 0 0 0 0 1 0 ]       [  4 ]
    [ 0 1 0 0 0 0 1 ]       [  6 ]

n = 7, m = 4. A_1, A_2, A_3, A_7 are linearly independent. So set x_4, x_5, x_6 to zero and solve the system Ax = b to get x = (4, 0, 2, 0, 0, 0, 6), a degenerate basic feasible solution.
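The zero-count test in the definition is easy to automate. A sketch for the two basic feasible solutions of this example (the slack values for (2, 6, 0) are filled in from the inequalities on the earlier slide):

```python
import numpy as np

def is_degenerate(x, m):
    """A basic solution of a standard form problem with m equality
    constraints is degenerate iff more than n - m components are zero."""
    n = len(x)
    return int(np.sum(np.isclose(x, 0.0))) > n - m

# (4, 0, 2) in standard form with slacks: four zeros.
x_deg = np.array([4.0, 0, 2, 0, 0, 0, 6])
# (2, 6, 0) in standard form: slacks (0, 6, 2, 0), exactly n - m = 3 zeros.
x_nondeg = np.array([2.0, 6, 0, 0, 6, 2, 0])

print(is_degenerate(x_deg, m=4))     # True  (4 zeros > 3)
print(is_degenerate(x_nondeg, m=4))  # False (3 zeros, not more than 3)
```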
Optimality Conditions
Feasible Directions:
Many optimization algorithms move from one feasible point to another until they find an optimal solution. At a point x ∈ P, we contemplate moving away from x in the direction of a vector d ∈ R^n. We should consider only those choices of d that do not take us outside the feasible region.

Definition
Let x be an element of polyhedron P. A vector d ∈ R^n is said to be a feasible direction at x if there exists a positive scalar θ for which x + θd ∈ P.
Basic Directions for Standard Form Problems
Throughout this section, we will focus on standard form problems:

    min c'x   subject to   Ax = b, x ≥ 0.

Let x be a basic feasible solution, and let B(1), ..., B(m) be the indices of the basic variables. Let B = [A_B(1), ..., A_B(m)] be the corresponding basis matrix. Thus x_i = 0 for i ≠ B(1), ..., B(m), while x_B = (x_B(1), ..., x_B(m)) is given by x_B = B^{-1} b.

We consider the possibility of moving away from x, to x + θd, by selecting a nonbasic variable x_j (which is currently at zero level) and increasing it to a positive value θ, while keeping the remaining nonbasic variables at zero. Thus d_j = 1, and d_i = 0 for every nonbasic index i other than j. The vector of basic variables x_B changes to x_B + θd_B, where d_B = (d_B(1), ..., d_B(m)).
We have Ax = b, and we also want A(x + θd) = b for θ > 0 (why?). Thus we must have Ad = 0:

    0 = Ad = Σ_{i=1}^{n} A_i d_i = Σ_{i=1}^{m} A_B(i) d_B(i) + A_j = B d_B + A_j
    ⇒ d_B = −B^{-1} A_j.

The direction d we just constructed is called the j-th basic direction. We ensured that the equality constraints hold. How about non-negativity? We need to worry only about the basic variables (why?).
1. If x is nondegenerate, x_B > 0, and so x_B + θd_B ≥ 0 for sufficiently small θ.
2. There are some complications if x is degenerate (see the picture below).
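The derivation d_B = −B^{-1}A_j maps directly onto code. A sketch for the running example, with basic columns A_1, A_2, A_3, A_7 and the nonbasic index j = 4 (0-based 3) chosen for illustration:

```python
import numpy as np

# Same A as in the running example.
A = np.array([
    [1, 1, 2, 1, 0, 0, 0],
    [0, 1, 6, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0, 0, 1],
], dtype=float)
basic_idx = [0, 1, 2, 6]          # basic columns A_1, A_2, A_3, A_7

def basic_direction(A, basic_idx, j):
    """j-th basic direction: d_j = 1, other nonbasic components zero,
    and d_B = -B^{-1} A_j, so that A d = 0."""
    d = np.zeros(A.shape[1])
    d[j] = 1.0
    d[basic_idx] = -np.linalg.solve(A[:, basic_idx], A[:, j])
    return d

d = basic_direction(A, basic_idx, j=3)   # increase the nonbasic variable x_4
print(np.allclose(A @ d, 0))             # True: equality constraints preserved
```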
[Figure: a standard form example with points E, F, G and the hyperplanes x_1 = 0, x_2 = 0, x_3 = 0, x_4 = 0, x_5 = 0, illustrating basic directions at a degenerate basic feasible solution.]
Change in Cost Along Basic Directions
Rate of cost change along a basic direction d: c'(x + d) − c'x = c'd.

    c'd = Σ_{i=1}^{n} c_i d_i = Σ_{i=1}^{m} c_B(i) d_B(i) + c_j = c_B' d_B + c_j = c_j − c_B' B^{-1} A_j,

where c_B = (c_B(1), ..., c_B(m)).

Definition
Let x be a basic solution, let B be an associated basis matrix, and let c_B be the vector of costs of the basic variables. For each j, we define the reduced cost c̄_j of the variable x_j according to the formula

    c̄_j = c_j − c_B' B^{-1} A_j.
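The formula vectorizes over all variables at once: the row vector of reduced costs is c' − c_B' B^{-1} A. A sketch for the running example; the cost vector c here is made up purely for illustration:

```python
import numpy as np

A = np.array([
    [1, 1, 2, 1, 0, 0, 0],
    [0, 1, 6, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0, 0, 1],
], dtype=float)
basic_idx = [0, 1, 2, 6]
c = np.array([-10.0, -12, -12, 0, 0, 0, 0])   # hypothetical cost vector

def reduced_costs(A, c, basic_idx):
    """c_bar' = c' - c_B' B^{-1} A, computed for all variables at once
    by solving B' y = c_B rather than inverting B."""
    y = np.linalg.solve(A[:, basic_idx].T, c[basic_idx])
    return c - A.T @ y

c_bar = reduced_costs(A, c, basic_idx)
print(c_bar[basic_idx])   # reduced costs of the basic variables: all zero
```

Solving the transposed system B'y = c_B instead of forming B^{-1} is the standard numerically preferable choice.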
Reduced Costs of Basic Variables Are Zero
Let x_B(i) be a basic variable. Since B = [A_B(1), ..., A_B(m)], we have B^{-1}[A_B(1), ..., A_B(m)] = I, where I is the m × m identity matrix. In particular, B^{-1} A_B(i) is the i-th column of I, denoted e_i. Then

    c̄_B(i) = c_B(i) − c_B' B^{-1} A_B(i) = c_B(i) − c_B' e_i = c_B(i) − c_B(i) = 0.
Optimality Conditions
Theorem
Consider a basic feasible solution x associated with a basis matrix B, and let c̄ be the corresponding vector of reduced costs.
1. If c̄ ≥ 0, then x is optimal.
2. If x is optimal and nondegenerate, then c̄ ≥ 0.
Proof of Optimality Conditions
Proof of the first part.
Let y be an arbitrary feasible solution. We show that c'y ≥ c'x, i.e., c'(y − x) ≥ 0. Let d = y − x; we need to show c'd ≥ 0.
We must have Ad = 0 (why?). Therefore

    B d_B + Σ_{i∈N} A_i d_i = 0,

where N is the set of indices of the nonbasic variables corresponding to B. This yields d_B = −Σ_{i∈N} B^{-1} A_i d_i, and hence

    c'd = c_B' d_B + Σ_{i∈N} c_i d_i = Σ_{i∈N} (c_i − c_B' B^{-1} A_i) d_i = Σ_{i∈N} c̄_i d_i.

Now, for any i ∈ N, x_i = 0, and y_i ≥ 0 for all i, in particular for i ∈ N. Thus d_i = y_i − x_i ≥ 0 for i ∈ N. In addition, c̄_i ≥ 0 for all i, in particular for i ∈ N. Thus c'd = Σ_{i∈N} c̄_i d_i ≥ 0.
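The key identity in this proof, c'd = Σ_{i∈N} c̄_i d_i, holds for any basis and any cost vector, and can be sanity-checked numerically on the running example. The cost vector c and the second feasible point y below are made up for illustration (with this c, x is not optimal since some c̄_i < 0, but the identity still holds):

```python
import numpy as np

A = np.array([
    [1, 1, 2, 1, 0, 0, 0],
    [0, 1, 6, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0, 0, 1],
], dtype=float)
b = np.array([8.0, 12, 4, 6])
basic_idx = [0, 1, 2, 6]
nonbasic_idx = [3, 4, 5]                      # the set N
c = np.array([-10.0, -12, -12, 0, 0, 0, 0])   # hypothetical costs

# Basic feasible solution x for this basis: x = (4, 0, 2, 0, 0, 0, 6).
x = np.zeros(7)
x[basic_idx] = np.linalg.solve(A[:, basic_idx], b)

# Another feasible point y: take (x_1, x_2, x_3) = (1, 1, 1), fill in slacks.
y = np.concatenate([[1.0, 1, 1], b - A[:, :3] @ [1.0, 1, 1]])
assert np.all(y >= 0) and np.allclose(A @ y, b)

# Reduced costs for this basis: c_bar' = c' - c_B' B^{-1} A.
c_bar = c - A.T @ np.linalg.solve(A[:, basic_idx].T, c[basic_idx])

d = y - x                                     # satisfies A d = 0
lhs = c @ d                                   # c'd
rhs = c_bar[nonbasic_idx] @ d[nonbasic_idx]   # sum over i in N of c_bar_i d_i
print(np.isclose(lhs, rhs))                   # True
```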
Proof of Optimality Conditions contd.
Proof of the second part (by contrapositive).
Suppose x is a nondegenerate basic feasible solution and that c̄_j < 0 for some j. Since x is nondegenerate, moving from x along the j-th basic direction stays feasible for sufficiently small θ > 0 and changes the cost at rate c̄_j < 0, so x is not optimal.