
The Primal-Dual Method for Approximation Algorithms

Jochen Konemann

www.math.uwaterloo.ca/~jochen

Department of Combinatorics & Optimization

University of Waterloo


What is the primal-dual method?

Originally proposed by Dantzig, Ford, and Fulkerson in 1956 as an alternative method for solving linear programs exactly.

The method did not survive as an exact LP technique ... but: a revised version of it has become immensely popular for solving combinatorial optimization problems.

Examples: Dijkstra's shortest-path algorithm, Ford and Fulkerson's network-flow algorithm, Edmonds' non-bipartite matching method, Kuhn's assignment algorithm, ...

Main feature: Reduce weighted optimization problems to easier unweighted ones.


What is the primal-dual method?

All of the previous problems are in P. Can we extend this method to NP-hard problems?

Yes! Bar-Yehuda and Even used it in 1981 in their approximation algorithm for vertex cover.

Goemans and Williamson formalized this approach in 1992. The result is a general toolkit for developing approximation algorithms for NP-hard optimization problems.

The last 10 years have seen literally hundreds of papers that use the primal-dual framework.

Goals of this class

Introduce primal-dual technique for approximation algorithms

Provide you with all knowledge to use it yourself

Show some sophisticated examples that impress you. :-)


A roadmap for this talk

Part I: Motivation and an introduction to LP-duality
Part II: Primal-dual: First steps
Part III: Steiner trees
Part IV: Facility location


Linear Programming: Intro

Foundations of linear programming:
George Dantzig: the simplex method in 1947
John v. Neumann: duality theory in the same year

Dantzig worked in a military planning unit during WWII. He solved problems in troop logistics and supplies. The term "programming" is of military origin.

Today: Many real-world optimization problems can be formulated as linear programs. Just a few examples:
1. Airline crew scheduling
2. Telecommunication network design
3. Oil refining and blending
4. Portfolio selection
5. ...


"If one would take statistics about which mathematical problem is using up most of the computer time in the world, then ... the answer would be linear programming!"

László Lovász

... for those of you who did not know that OR people are bad programmers.

Integer Programming: Basics

An ILP instance consists of the following elements:

1. Goal: minimize or maximize

2. Variables: x1, ..., xn

3. Objective function: ∑_{j=1}^{n} cj · xj

4. Constraints: for 1 ≤ i ≤ m,

∑_{j=1}^{n} aij · xj  op  bi,

where op ∈ {≤, =, ≥}.

5. Feasible solution: {xj}_{1≤j≤n} is feasible if it is integral and satisfies all constraints.
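The elements above can be written down concretely. The following is a minimal sketch with a made-up two-variable instance (the costs, coefficients, and right-hand sides are illustration data, not from the slides), solved by brute force over a small box:

```python
import operator

# minimize 3*x1 + 2*x2  s.t.  x1 + x2 >= 2,  x1 - x2 <= 1,  x integral, x >= 0
ops = {'<=': operator.le, '=': operator.eq, '>=': operator.ge}

c = [3, 2]
constraints = [([1, 1], '>=', 2),
               ([1, -1], '<=', 1)]

def feasible(x):
    # Integrality and nonnegativity, plus every "sum_j aij*xj op bi" row.
    return (all(isinstance(v, int) and v >= 0 for v in x) and
            all(ops[op](sum(a * v for a, v in zip(row, x)), b)
                for row, op, b in constraints))

def objective(x):
    return sum(cj * xj for cj, xj in zip(c, x))

# Brute force over a small box; enough for this toy instance.
candidates = [(i, j) for i in range(4) for j in range(4)]
best = min((x for x in candidates if feasible(x)), key=objective)
print(best, objective(best))   # (0, 2) 4
```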


An example: Vertex Cover

Goal: Find a minimum subset C of the vertices such that

e ∩ C ≠ ∅

for all edges e.

Here's a vertex cover of size 6.


An ILP for Vertex Cover

Variables: For each node i we have a variable xi. We want

xi = 1 if node i is in the vertex cover, and xi = 0 otherwise.

Variables like this are called indicator variables.

Constraints: Each edge needs to be covered. Example: edge (0, 5) gives

x0 + x5 ≥ 1

Objective function: Minimize the cardinality of the vertex cover:

minimize ∑_{j=0}^{9} xj


In summary: The vertex cover ILP

minimize ∑_v xv
s.t. xu + xv ≥ 1 for all edges (u, v)
xu ∈ {0, 1} for all vertices u

Goal: Compute a feasible solution of minimum cost.

Can we do this efficiently?

LP Relaxations

... no :-( Solving integer programs to optimality is NP-hard!

What can we do?


Two options

1. Use exact methods. Solve the integer program using exact techniques (e.g., branch-and-bound, branch-and-cut, ...).

2. Use the LP relaxation. Weaken the integrality constraints:

minimize ∑_v xv
s.t. xu + xv ≥ 1 for all edges (u, v)
0 ≤ xu ≤ 1 for all vertices u
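To see what relaxing buys on the running example, here is a minimal brute-force sketch. The edge list is an assumed concretization of the Petersen graph from the slides (outer 5-cycle, inner pentagram, and the spokes (0,5), (1,6), ...):

```python
from itertools import combinations

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),
         (5, 7), (7, 9), (9, 6), (6, 8), (8, 5),
         (0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]
n = 10

def is_cover(C):
    return all(u in C or v in C for u, v in edges)

# Option 1 in miniature: brute-force the smallest integral vertex cover.
ip_opt = min(k for k in range(n + 1)
             if any(is_cover(set(C)) for C in combinations(range(n), k)))

# Option 2: the LP relaxation admits x_v = 1/2 for every v, which is
# feasible (each edge constraint reads 1/2 + 1/2 >= 1) and has value 5.
lp_feasible_value = n * 0.5

print(ip_opt, lp_feasible_value)   # 6 5.0
```

The fractional solution undercuts the integral optimum, which is exactly the integrality-gap phenomenon discussed later.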

Two questions

1. Can we solve linear programs efficiently?

2. LP solutions can be fractional. Can we still use this to obtaingood integral solutions?


Solving LPs

Simplex

Developed by Dantzig in 1947

Very efficient in practice

Exponential worst-case running time

Ellipsoid Method

Developed by Khachiyan in 1979

Worst-case running time is polynomial!

Can deal with an exponential number of constraints in some cases

Very slow in practice


Solving LPs (2)

Interior Point Method

Developed by Karmarkar in 1984

Worst-case running time is polynomial

Practical implementations with good performance are available

Simplex is still the method of choice in many state-of-the-art LP solvers.

... we will not go into the details of solving LPs in this class.


Back to vertex cover

minimize ∑_v xv
s.t. xu + xv ≥ 1 for all edges (u, v)
0 ≤ xu ≤ 1 for all vertices u

Q: Is this the best vertex cover for our example?

A: It turns out that the answer is yes.


Valid inequalities

Let {xv}v be a feasible solution to the vertex-cover LP.

It has to satisfy

xu + xv ≥ 1

for all edges (u, v).

Observation: x also has to satisfy any nonnegative linear combination {yu,v}_{(u,v)} of the constraints!

∑_{(u,v)} yu,v · (xu + xv) ≥ ∑_{(u,v)} yu,v

Example: y0,5 = y1,6 = y2,7 = y3,8 = y4,9 = 1:

(x0 + x5) + (x1 + x6) + (x2 + x7) + (x3 + x8) + (x4 + x9) ≥ 5


A sense of duality

A lower bound on the cardinality of any feasible vertex cover:

∑_v xv = (x0 + x5) + (x1 + x6) + (x2 + x7) + (x3 + x8) + (x4 + x9) ≥ 5

What is special about this linear combination?

The resulting left-hand side is the LP objective function!

Duality: Derive a valid inequality

∑_v xv ≥ ∑_v βv · xv ≥ γ

with

1. βv ≤ 1 for all nodes v, and

2. γ as large as possible.

The dual LP

Surprise: We can formulate this as a linear program!

maximize γ = ∑_{(u,v)} yu,v
s.t. ∑_{u: (u,v)∈E} yu,v ≤ 1 for all vertices v
yu,v ≥ 0 for all edges (u, v)

Weak duality

Our argument shows ...

Weak Duality (Vertex Cover)

Let x be a feasible fractional vertex cover and let y be a feasible solution to the dual LP. We must have

∑_{(u,v)} yu,v ≤ ∑_v xv.

Weak duality – general case

(Primal)
minimize ∑_{j=1}^{n} cj · xj
s.t. ∑_{j=1}^{n} aij · xj ≥ bi for all 1 ≤ i ≤ m
x ≥ 0

(Dual)
maximize ∑_{i=1}^{m} bi · yi
s.t. ∑_{i=1}^{m} aij · yi ≤ cj for all 1 ≤ j ≤ n
y ≥ 0

Weak Duality

Let x and y be feasible primal and dual solutions. We must have

∑_i bi · yi ≤ ∑_j cj · xj.
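A numeric sanity check of the statement, on the vertex-cover LP of a triangle (an assumed toy instance, not one from the slides; all costs cj and right-hand sides bi are 1):

```python
edges = [(0, 1), (1, 2), (2, 0)]

x = [0.5, 0.5, 0.5]          # primal feasible: each edge gets 0.5 + 0.5 >= 1
y = {e: 0.5 for e in edges}  # dual feasible: each vertex lies on two edges,
                             # so its dual constraint sums to 0.5 + 0.5 <= 1

primal_value = sum(x)            # sum_j cj * xj
dual_value = sum(y.values())     # sum_i bi * yi
assert dual_value <= primal_value
print(primal_value, dual_value)  # 1.5 1.5 -- the bound is tight here
```

That the two values coincide is no accident: by strong duality (next slide), optimal primal and dual values always agree.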

Strong duality

(Primal) and (Dual) as on the previous slide.

Strong Duality

Suppose that (Primal) and (Dual) are feasible and have solutions of finite value. Then there must be a pair x∗ and y∗ of feasible primal and dual solutions such that

∑_i bi · y∗_i = ∑_j cj · x∗_j.

Complementary Slackness

(Primal) and (Dual) as on the previous slide.

Complementary Slackness

Suppose that (Primal) and (Dual) have optimal solutions x∗ and y∗ of finite value. We then must have

1. x∗_j > 0 implies ∑_{i=1}^{m} aij · y∗_i = cj, and

2. y∗_i > 0 implies ∑_{j=1}^{n} aij · x∗_j = bi.

Conversely, feasible solutions x∗ and y∗ satisfying 1. and 2. are both optimal.
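Both conditions can be checked mechanically on the triangle instance from the weak-duality discussion (an assumed toy example): the half-integral pair below is optimal for both (P) and (D), and every relevant constraint is tight.

```python
edges = [(0, 1), (1, 2), (2, 0)]
x = {v: 0.5 for v in range(3)}   # optimal primal (value 1.5)
y = {e: 0.5 for e in edges}      # optimal dual (value 1.5)

for v in x:        # 1. x_v > 0  =>  the dual constraint of v is tight
    if x[v] > 0:
        assert sum(y[e] for e in edges if v in e) == 1.0
for (u, w) in edges:  # 2. y_e > 0  =>  the primal constraint of e is tight
    if y[(u, w)] > 0:
        assert x[u] + x[w] == 1.0
print("complementary slackness verified")
```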


Integrality gap

[Figure: the Petersen graph with every vertex assigned the fractional value .5]

Can we use weak duality to prove that there is no vertex cover of size less than 6?

No! The picture shows a feasible primal solution of value 5.

Quiz: Why does this show that weak duality cannot be used to derive a lower bound of 6?

Answer: Weak duality tells us that no nonnegative linear combination of edge inequalities

∑_{(u,v)} yu,v · (xu + xv) ≥ ∑_{(u,v)} yu,v

can have a right-hand side bigger than 5.


Integrality gap

For a vertex-cover instance I, use the following notation:

opt_IP(I): size of a smallest integral vertex cover for I
opt_LP(I): size of a smallest fractional vertex cover for I

Integrality gap of the vertex-cover LP:

max_I opt_IP(I) / opt_LP(I).

Our example: The integrality gap of the vertex-cover LP is at least 6/5.

There are instances showing that the integrality gap is close to 2.
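The complete graphs K_n (even n) make the "close to 2" claim concrete; this family is an assumed example, since the slides do not name one. Here x_v = 1/2 is LP-optimal, because summing the constraints of a perfect matching already forces value at least n/2:

```python
from itertools import combinations

def vc_ip_opt(n, edges):
    # Smallest integral vertex cover, by brute force.
    return min(k for k in range(n + 1)
               if any(all(u in set(C) or v in set(C) for u, v in edges)
                      for C in combinations(range(n), k)))

results = {}
for n in [4, 6, 8]:
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
    ip = vc_ip_opt(n, edges)   # n - 1: two uncovered vertices would share an edge
    lp = n / 2                 # the all-1/2 fractional cover
    results[n] = (ip, lp)
    print(n, ip, lp, ip / lp)  # the ratio (n-1)/(n/2) tends to 2
```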


Primal-dual technique: Main ideas

Construct an integral feasible primal solution x and a feasible dual solution y at the same time.

Show that

∑_j cj · xj ≤ α · ∑_i bi · yi

for some α.

Prove a worst-case upper bound on α.

Result: For every instance we compute ...
1. an integral and feasible primal solution x, and
2. a proof that its value is within a factor of α of the best possible solution.

Vertex-cover: A PD-Algorithm

1: yu,v ← 0 for all edges (u, v)
2: C ← ∅
3: while C is not a vertex cover do
4:   Choose an uncovered edge (u, v)
5:   Increase yu,v until ∑_{w: (w,z)∈E} yw,z = 1 for some z ∈ {u, v}
6:   C ← C ∪ {z}
7: end while
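The pseudocode above can be turned into a short program. This is a sketch for the unweighted case, with the same assumed Petersen edge list as before; the primal and dual values produced depend on the order in which uncovered edges are picked, so they need not match the 8 and 5 reported on the next slide.

```python
def pd_vertex_cover(n, edges):
    y = {e: 0.0 for e in edges}   # dual: one variable per edge, starts at 0
    load = [0.0] * n              # load[z] = sum of y over edges incident to z
    C = set()                     # primal: the cover under construction
    for (u, v) in edges:          # "choose an uncovered edge": scan order
        if u in C or v in C:
            continue
        # Increase y_uv until an endpoint's dual constraint becomes tight.
        delta = min(1.0 - load[u], 1.0 - load[v])
        y[(u, v)] += delta
        load[u] += delta
        load[v] += delta
        for z in (u, v):          # add every endpoint that just went tight
            if load[z] >= 1.0:
                C.add(z)
    return C, y

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),
         (5, 7), (7, 9), (9, 6), (6, 8), (8, 5),
         (0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]
C, y = pd_vertex_cover(10, edges)
assert all(u in C or v in C for (u, v) in edges)   # C is a vertex cover
assert len(C) <= 2 * sum(y.values())               # the factor-2 guarantee
print(len(C), sum(y.values()))
```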


Petersen graph example

[Figure: the algorithm run on the Petersen graph, with the primal cover and the dual edges marked]

Primal solution: 8
Dual solution: 5

Q: How bad can the ratio between primal and dual solution get?

A: Not too bad. The primal solution has at most two vertices per dual edge. Hence, we always have

|C| ≤ 2 · ∑_e ye


Approximation algorithms

The previous algorithm for vertex cover is a 2-approximation algorithm. We also say: the performance guarantee of the algorithm is 2.

Formally: The algorithm returns a solution that has size at most twice the optimum, for all instances!

Q: Given a problem and an LP with IP/LP gap α > 1, can there be a primal-dual approximation algorithm using this LP with performance guarantee less than α?

A: No! We cannot find a good dual lower bound for an IP/LP-gap instance.

Conjecture [Vazirani]

Given a combinatorial optimization problem together with its LP formulation with gap α ≥ 1, there exists a primal-dual α-approximation algorithm for the given problem.


How to hit sets...

Let us define a general problem called Hitting Set.

Input: A ground set E and m subsets Ti ⊆ E for 1 ≤ i ≤ m. We also have a cost ce for all e ∈ E.

Goal: Find H ⊆ E of minimum total cost such that

H ∩ Ti ≠ ∅

for all 1 ≤ i ≤ m.

Q: How can we formulate vertex cover as a hitting-set problem?

A: The ground set E is the set of all vertices. Each edge (u, v) corresponds to a set {u, v} that needs to be hit.


Q: Give a hitting-set formulation for shortest s,t-path in undirected graphs!

A: The ground set E is the set of all edges. The sets to hit are all s,t-cuts. Convince yourself that the shortest s,t-path is the minimum-cost solution!


Hitting Set: Primal LP

We have a variable for each element:

xe = 1 if e is in the hitting set H, and xe = 0 otherwise.

Objective function: ∑_{e∈E} ce · xe

Constraints: ∑_{e∈Ti} xe ≥ 1 for all 1 ≤ i ≤ m

Hitting Set: Primal LP

minimize ∑_{e∈E} ce · xe
s.t. ∑_{e∈Ti} xe ≥ 1 for all 1 ≤ i ≤ m
xe ≥ 0 for all e ∈ E


Hitting Set: Dual LP

We have a variable yi for each primal set-constraint

∑_{e∈Ti} xe ≥ 1

Recall: We want to find a linear combination of primal constraints that lower-bounds the value of the primal objective function

∑_{e∈E} ce · xe.

We get one constraint for each element e ∈ E:

∑_{Ti: e∈Ti} yi ≤ ce

Objective function: maximize ∑_{i=1}^{m} yi

Hitting Set: Primal and Dual

(P)
minimize ∑_{e∈E} ce · xe
s.t. ∑_{e∈Ti} xe ≥ 1 for all 1 ≤ i ≤ m
xe ≥ 0 for all e ∈ E

(D)
maximize ∑_{i=1}^{m} yi
s.t. ∑_{Ti: e∈Ti} yi ≤ ce for all e ∈ E
yi ≥ 0 for all 1 ≤ i ≤ m


Hitting Set: Primal-Dual algorithm

We want to use the LP formulation and its dual to develop a primal-dual algorithm.

Ideas, suggestions, ... anybody?

Recall from the vertex-cover PD algorithm: Keep a feasible dual solution y and include vertex v into the cover only if

∑_{u: (u,v)∈E} yu,v = 1

Idea: Let H be the current hitting set, y the corresponding dual, and let Ti be a set that has not been hit. Increase yi until

∑_{Tj: e∈Tj} yj = ce

for some e ∈ Ti. Include e into H.
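A direct sketch of this idea; the set family and costs below are made-up illustration data:

```python
def pd_hitting_set(cost, sets):
    y = [0.0] * len(sets)                 # one dual variable per set T_i
    paid = {e: 0.0 for e in cost}         # sum of y_i over sets containing e
    H = set()
    for i, T in enumerate(sets):
        if H & T:
            continue                      # T_i is already hit
        # Increase y_i until the dual constraint of some e in T_i is tight.
        delta = min(cost[e] - paid[e] for e in T)
        y[i] += delta
        tight = None
        for e in T:
            paid[e] += delta
            if abs(paid[e] - cost[e]) < 1e-9:
                tight = e                 # e's dual constraint is now tight
        H.add(tight)
    return H, y

cost = {'a': 2.0, 'b': 1.0, 'c': 3.0}
sets = [{'a', 'b'}, {'b', 'c'}, {'a', 'c'}]
H, y = pd_hitting_set(cost, sets)
assert all(H & T for T in sets)                       # H hits every set
f = max(len([T for T in sets if e in T]) for e in cost)
assert sum(cost[e] for e in H) <= f * sum(y)          # the f-approx bound
print(sorted(H), y)
```

The final assertion anticipates the analysis on the next slide: the cost of H is at most the maximum overlap |Ti ∩ H| times the dual value.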


Analysis

Let H be the final feasible hitting set and y the corresponding feasible dual solution.

∑_{e∈H} ce = ∑_{e∈H} ∑_{Ti: e∈Ti} yi

since ∑_{Ti: e∈Ti} yi = ce when e is included into H. Exchanging the order of summation,

∑_{e∈H} ce = ∑_{i=1}^{m} |Ti ∩ H| · yi

Vertex cover: The Ti's correspond to edges. Hence |Ti ∩ H| ≤ 2!


Set Cover

Problem input: Elements U and subsets S1, ..., Sn of U.
Goal: Select the smallest number of sets that cover U.

Theorem [Feige]

There is no ((1 − ε) · ln n)-approximation for the set-cover problem for any ε > 0 unless NP = P.

Q: But what if each element occurs in at most f sets?

A: The hitting-set analysis gives an f-approximation, since each element e ∈ U corresponds to a set Te in the hitting-set instance, and

|Te ∩ H| ≤ f

for all e ∈ U.

Part III: Primal-dual algorithm for Steiner trees

Steiner trees: Intro

The roots of the problem can be traced back to Gauss: he mentions the problem in a letter to Schumacher.

Input:
An undirected graph G = (V, E)
Terminals R ⊆ V
Steiner nodes V \ R
Edge costs ce ≥ 0 for all e ∈ E

Goal: Compute a min-cost tree T in G that spans all nodes in R.


Steiner trees: Example

[Figure: an instance with terminal and Steiner nodes, and a feasible Steiner tree]


Steiner trees: Primal LP

Variables: xe = 1 if e is in the Steiner tree, 0 otherwise.

Constraints? Steiner cuts: subsets of the nodes that contain some but not all terminals.

Any feasible Steiner tree must contain at least one of the red edges!


Steiner trees: Primal LP

Steiner cuts: U = {U ⊆ V : U ∩ R ≠ ∅, R ⊈ U}

δ(U): the set of edges with exactly one endpoint in U

One constraint for each Steiner cut U ∈ U:

∑_{e∈δ(U)} xe ≥ 1

Objective function: minimize ∑_{e∈E} ce · xe

Steiner trees: Primal LP

minimize ∑_{e∈E} ce · xe
s.t. ∑_{e∈δ(U)} xe ≥ 1 for all U ∈ U
xe ≥ 0 for all e ∈ E


Steiner trees: Dual LP

Variables: yU for each Steiner cut U ∈ U

Constraints? One for each edge:

∑_{U: e∈δ(U)} yU ≤ ce

Objective function: maximize ∑_{U∈U} yU

Steiner trees: Primal and dual LPs

(P)
minimize ∑_{e∈E} ce · xe
s.t. ∑_{e∈δ(U)} xe ≥ 1 for all U ∈ U
xe ≥ 0 for all e ∈ E

(D)
maximize ∑_{U∈U} yU
s.t. ∑_{U: e∈δ(U)} yU ≤ ce for all e ∈ E
yU ≥ 0 for all U ∈ U


Steiner trees: Moats

We can think of yU as a moat of radius yU around U.

Example: Nodes u and v, edge (u, v) with cost 4.

[Figure: the moats y{u} = y{v} grow from radius 0 to radius 2]


Steiner trees: Algorithm

The algorithm always keeps:
An infeasible forest F
A feasible dual solution y

The connected components of F are Steiner cuts.

Increase the duals corresponding to the connected components of F. Stop increasing as soon as

∑_{e∈P} ∑_{U: e∈δ(U)} yU = ∑_{e∈P} ce

for some path P connecting terminals in different connected components.

At this point add the edges of P to F.
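An event-driven sketch of the moat-growing loop, simplified under two stated assumptions: we work on the metric closure of the terminals (so every merging path is a single edge) and every moat is always active. The instance data is made up.

```python
def moat_growing(terminals, c):
    comp = {t: t for t in terminals}           # tiny union-find
    def find(x):
        while comp[x] != x:
            x = comp[x]
        return x
    y_total = 0.0                              # sum of all dual variables
    F = []                                     # edges added to the forest
    load = {e: 0.0 for e in c}                 # dual load accumulated on e
    while True:
        active = {find(x) for x in terminals}
        if len(active) == 1:
            return F, y_total
        # Next event: some inter-component edge becomes tight.  Both of
        # its moats grow toward it, so its load increases at rate 2.
        dt, e = min(((c[f] - load[f]) / 2.0, f) for f in c
                    if find(f[0]) != find(f[1]))
        for f in load:                         # every inter-component edge
            if find(f[0]) != find(f[1]):       # gains load from its two moats
                load[f] += 2.0 * dt
        y_total += len(active) * dt            # each active moat grew by dt
        F.append(e)
        comp[find(e[0])] = find(e[1])          # merge the two components

terminals = ['a', 'b', 'c']
c = {('a', 'b'): 2.0, ('b', 'c'): 2.0, ('a', 'c'): 3.0}
F, y_total = moat_growing(terminals, c)
cost = sum(c[e] for e in F)
assert cost <= (2 - 2 / len(terminals)) * y_total  # the (2 - 2/|R|) bound
print(F, cost, y_total)
```

On this triangle the bound of the coming analysis is tight: the tree costs 4 while the dual collects 3, and (2 − 2/3) · 3 = 4.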

An example

[Figure: several snapshots of the moat-growing algorithm on an example instance]


Steiner trees: Analysis

Notion of time:
The algorithm starts at time 0.
Duals grow by one in one time unit.

Let U be a connected component of the forest F at time t, and suppose that y is the current dual solution.

Let TU be the tree inside U. We want to show:

c(TU) ≤ 2 · ∑_{S⊆U} yS − 2t


Steiner tree:Analysis

Want to show:c(TU ) ≤ 2

S⊆U

yS − 2t (I)

for all connected components U of forest F at time t. By inductionon time t!

[t = 0] Clear since F = ∅.

[t ≥ 0] Suppose (I) holds at time t. Let t′ > t and nomerger in time interval [t, t′].LHS of (I) changes by: 0RHS of (I) changes by: 2(t′ − t)− 2(t′− t) = 0

– p.52/70


Steiner trees: Analysis

We want to show:

c(TU) ≤ 2 · ∑_{S⊆U} yS − 2t   (I)

for all connected components U of the forest F at time t, by induction on time t!

[t ≥ 0] Suppose (I) holds at time t, and suppose the algorithm merges two components U′ and U″ along a path P at time t.

Main observation: Path P costs at most 2t! (P is tight, and only the two moat systems around U′ and U″, each of total radius at most t, pay for it.)

c(T_{U′∪U″}) = c(T_{U′}) + c(T_{U″}) + c(P)
             ≤ 2 ∑_{S⊆U′∪U″} yS − 4t + 2t
             = 2 ∑_{S⊆U′∪U″} yS − 2t


Steiner trees: Wrapping up

Suppose the algorithm finishes at time t∗ with tree T∗.

The previous proof shows

c(T∗) ≤ 2 · ∑_{U∈U} yU − 2t∗   (*)

At any time 0 ≤ t ≤ t∗ at most |R| moats grow. Hence,

∑_{U∈U} yU ≤ |R| · t∗

Together with (*) we obtain

c(T∗) ≤ (2 − 2/|R|) · ∑_{U∈U} yU.


Prominent open problem

The Steiner tree problem becomes harder when we work indirected graphs.Q: Do you see why?A: We can formulate any set-cover instance as a directedSteiner-tree problem.

Known: O(i · (i− 1)|R|1/i) approximation in time O(|V |i|R|2i)for any i.Open: Find a polynomial-time O(log |V |) approximation!

– p.55/70
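The reduction behind the answer can be sketched directly: add a root, one node per set, and one terminal per element; covering an element corresponds to reaching its terminal. The function and instance below are hypothetical illustrations, not from the slides:

```python
def set_cover_to_directed_steiner(universe, sets, weights):
    """Build a directed Steiner tree instance from a set-cover instance.
    Arcs: root -> each set node (cost = the set's weight),
          set node -> each element it covers (cost 0).
    Terminals are the element nodes; a min-cost arborescence rooted at
    'root' reaching all terminals corresponds to a min-weight cover."""
    arcs = {}
    for name, S in sets.items():
        arcs[("root", name)] = weights[name]
        for e in S:
            arcs[(name, e)] = 0
    return arcs, "root", list(universe)

universe = {1, 2, 3}
sets = {"S1": {1, 2}, "S2": {2, 3}, "S3": {3}}
weights = {"S1": 4, "S2": 3, "S3": 1}
arcs, root, terminals = set_cover_to_directed_steiner(universe, sets, weights)
# Buying the arcs into S1 and S2 (cost 7) reaches all terminals,
# exactly as the cover {S1, S2} covers all of {1, 2, 3}.
```

Since an o(log |V|) approximation for directed Steiner tree would give one for set cover, a logarithmic ratio is the best one can hope for in polynomial time.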

Part IV: Primal-dual algorithm for facility location

– p.56/70

Facility Location: Intro

Central problem in operations research since the 1960s

Input:
    A graph G = (V,E),
    Facilities F ⊆ V,
    Clients C ⊆ V,
    Metric cost function c_e on edges,
    Opening costs f_i for all i ∈ F

Goal:
    Open a subset F' ⊆ F of facilities
    Assign each client j to an open facility i(j) ∈ F'
    Minimize total opening cost + total assignment cost:

        ∑_{i∈F'} f_i + ∑_{j∈C} c_{i(j),j}

– p.57/70
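For tiny instances the objective can be sanity-checked by brute force over facility subsets. A sketch (the instance data is made up for illustration):

```python
from itertools import chain, combinations

def facility_location_opt(f, c):
    """Exact optimum by enumerating all nonempty facility subsets.
    f[i]: opening cost of facility i; c[i][j]: cost to assign client j
    to facility i.  Exponential in |F| -- only for checking tiny cases."""
    facilities = list(f)
    clients = {j for i in c for j in c[i]}
    best = float("inf")
    subsets = chain.from_iterable(
        combinations(facilities, k) for k in range(1, len(facilities) + 1))
    for S in subsets:
        cost = sum(f[i] for i in S)                       # opening cost
        cost += sum(min(c[i][j] for i in S) for j in clients)  # assignment
        best = min(best, cost)
    return best

f = {"i1": 3, "i2": 5}
c = {"i1": {"j1": 1, "j2": 2, "j3": 4},
     "i2": {"j1": 4, "j2": 2, "j3": 1}}
print(facility_location_opt(f, c))  # → 10 (open only i1)
```

Note that each client is always assigned to its nearest open facility, so the only real decision is which subset to open.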

Facility Location: Example

[Figure: a small example instance showing facilities, clients, and edge costs. Opening the chosen facilities costs 8; assigning each client to its nearest open facility costs 13 more. Total cost: 8 + 13 = 21.]

– p.58/70

Facility Location: LP

Variables:
    For i ∈ F: y_i = 1 if i is open, y_i = 0 otherwise,
    For i ∈ F and j ∈ C: x_ij = 1 if client j is assigned to facility i, x_ij = 0 otherwise

LP formulation due to Balinski in 1966:

    minimize    ∑_{i∈F} f_i y_i + ∑_{i∈F, j∈C} c_ij x_ij
    s.t.        ∑_{i∈F} x_ij ≥ 1    ∀ j ∈ C
                x_ij ≤ y_i          ∀ i ∈ F, j ∈ C
                x, y ≥ 0

– p.59/70

Facility Location: Dual

Variable α_j for the connectivity constraint ∑_{i∈F} x_ij ≥ 1 of j ∈ C.
    Interpretation: Amount client j is willing to pay in total

Variable β_ij for the constraint x_ij ≤ y_i.
    Interpretation: Client j's share of facility i's opening cost

Objective function: maximize ∑_{j∈C} α_j

Constraints:
    [Budget-limit] For all clients j ∈ C and facilities i ∈ F:

        α_j ≤ c_ij + β_ij

    [Facility-price] For all facilities i ∈ F:

        ∑_{j∈C} β_ij ≤ f_i

– p.60/70

Primal and dual LP's

    minimize    ∑_{i∈F} f_i y_i + ∑_{i∈F, j∈C} c_ij x_ij
    s.t.        ∑_{i∈F} x_ij ≥ 1    ∀ j ∈ C
                x_ij ≤ y_i          ∀ i ∈ F, j ∈ C
                x, y ≥ 0

    maximize    ∑_{j∈C} α_j
    s.t.        α_j ≤ c_ij + β_ij    ∀ i ∈ F, j ∈ C
                ∑_{j∈C} β_ij ≤ f_i   ∀ i ∈ F
                α, β ≥ 0

– p.61/70
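Weak LP duality already ties the two programs together: any feasible dual value lower-bounds any feasible primal value. A quick numeric check on a hypothetical one-facility instance (all data below is made up for illustration):

```python
def primal_cost(f, c, x, y):
    """Objective of a feasible primal solution (x, y may be fractional)."""
    return (sum(f[i] * y[i] for i in f)
            + sum(c[i][j] * x[i][j] for i in c for j in c[i]))

def dual_value(alpha):
    """Objective of a feasible dual solution."""
    return sum(alpha.values())

f = {"i1": 2}
c = {"i1": {"j1": 1, "j2": 1}}
# Feasible primal: open i1 and assign both clients to it.
x = {"i1": {"j1": 1, "j2": 1}}
y = {"i1": 1}
# Feasible dual: alpha_j = 2 with beta_{i1,j} = 1 for both clients:
# budget-limit 2 <= 1 + 1 holds, facility-price 1 + 1 <= 2 holds.
alpha = {"j1": 2, "j2": 2}
# Weak duality: the dual value never exceeds the primal value.
print(dual_value(alpha), primal_cost(f, c, x, y))  # → 4 4
```

Here the two values coincide, so this particular primal/dual pair is in fact optimal.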

Properties of optimal solutions

Let (x*, y*) and (α*, β*) be primal and dual optimum feasible solutions.

Consequences from complementary slackness:

    Open facilities are fully paid for:

        y*_i > 0  ⟹  ∑_j β*_ij = f_i

    If client j pays for facility i then j is connected to i:

        β*_ij > 0  ⟹  x*_ij = y*_i

    If client j is assigned to facility i then α*_j can be split into connection cost and facility cost:

        x*_ij > 0  ⟹  α*_j = c_ij + β*_ij

– p.62/70
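The three implications can be verified mechanically on a small optimal pair. The instance below (one facility of cost 2, two clients at distance 1, with α_j = 2 and β_ij = 1) is a hand-built example where primal and dual objectives coincide, so both solutions are optimal:

```python
# Hypothetical optimal primal/dual pair for a tiny instance.
f = {"i1": 2}
c = {"i1": {"j1": 1, "j2": 1}}
x = {("i1", "j1"): 1, ("i1", "j2"): 1}
y = {"i1": 1}
alpha = {"j1": 2, "j2": 2}
beta = {("i1", "j1"): 1, ("i1", "j2"): 1}

# Open facilities are fully paid for:  y_i > 0  =>  sum_j beta_ij = f_i
for i in f:
    if y[i] > 0:
        assert sum(beta[(i, j)] for j in alpha) == f[i]

# Paying clients are connected:  beta_ij > 0  =>  x_ij = y_i
for (i, j), b in beta.items():
    if b > 0:
        assert x[(i, j)] == y[i]

# Assignments split alpha:  x_ij > 0  =>  alpha_j = c_ij + beta_ij
for (i, j), v in x.items():
    if v > 0:
        assert alpha[j] == c[i][j] + beta[(i, j)]

print("all complementary slackness conditions hold")
```

The primal-dual algorithm on the following slides enforces relaxed versions of exactly these conditions instead of solving the LPs.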

Facility Location: PD - First shot

Start with empty facility set T and α_j = 0 for all j ∈ C

Grow payments α_j for clients j that are not connected yet

When α_j ≥ c_ij for some facility i, start growing β_ij so that α_j ≤ c_ij + β_ij remains satisfied

When ∑_j β_ij = f_i for some i ∈ F, declare facility i open

    Connect all unconnected clients j with α_j ≥ c_ij to i

    Let i_j be the facility that j connects to first

Don't grow α_j for clients that are connected!

Great: In the end every client j ∈ C is connected to some open facility!

Problem: The set of facilities is too expensive. Each client potentially has many open facilities around it!

– p.63/70
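The growing phase above can be simulated event by event: advance time until either some α_j becomes tight for a facility or some facility becomes fully paid for. A sketch under the stated rules (function and instance names are illustrative; c is assumed to have an entry for every facility-client pair):

```python
def pd_first_shot(f, c):
    """Event-driven simulation of the naive growing phase.
    f[i]: opening cost; c[(i, j)]: connection cost."""
    clients = {j for (_, j) in c}
    facilities = set(f)
    alpha = {j: 0.0 for j in clients}
    beta = {e: 0.0 for e in c}
    open_fac, connected = set(), {}
    while len(connected) < len(clients):
        active = [j for j in clients if j not in connected]
        dt = float("inf")
        for i in facilities:
            if i not in open_fac:
                payers = [j for j in active if alpha[j] >= c[(i, j)]]
                if payers:  # time until facility i is fully paid for
                    paid = sum(beta[(i, j)] for j in clients)
                    dt = min(dt, (f[i] - paid) / len(payers))
            for j in active:  # time until alpha_j becomes tight for i
                if alpha[j] < c[(i, j)]:
                    dt = min(dt, c[(i, j)] - alpha[j])
        for j in active:
            alpha[j] += dt
        for i in facilities:
            if i not in open_fac:
                for j in active:  # surplus flows into beta_ij
                    if alpha[j] > c[(i, j)]:
                        beta[(i, j)] = alpha[j] - c[(i, j)]
                if sum(beta[(i, j)] for j in clients) >= f[i] - 1e-9:
                    open_fac.add(i)
        for j in active:  # connect tight clients to open facilities
            for i in open_fac:
                if alpha[j] >= c[(i, j)]:
                    connected[j] = i  # i_j: first facility j connects to
                    break
    return alpha, open_fac, connected

f = {"i1": 2.0}
c = {("i1", "j1"): 1.0, ("i1", "j2"): 1.0}
alpha, open_fac, connected = pd_first_shot(f, c)
print(alpha, open_fac)  # both alphas reach 2.0 and i1 opens at t = 2
```

On this instance the dual value 4 equals the primal cost 2 + 1 + 1, so the naive phase happens to be optimal; the cleanup on the next slides handles instances where it opens too many facilities.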

Facility Location: An Example

[Figure: the example instance from before. Running the naive algorithm on it opens more facilities than necessary, so the facility set becomes too expensive.]

– p.64/70

Facility Location: PD - Cleanup

T: Set of facilities from previous (naive) algorithm

For each client j let i_j ∈ T be the facility it first connects to: i_j is the connecting witness of j

Goal of cleanup: Construct a set T* ⊆ T such that for all j ∈ C there is at most one φ_j ∈ T* with α_j > c_{φ_j,j}

Then split α_j = α^c_j + α^f_j. For j ∈ C where φ_j exists:

    α^c_j = c_{φ_j,j}
    α^f_j = α_j − c_{φ_j,j} = β_{φ_j,j}

For j ∈ C where φ_j does not exist:

    α^c_j = α_j
    α^f_j = 0

– p.65/70

Facility Location: Analysis

For all i ∈ T* we have

    ∑_{j: φ_j = i} α^f_j = f_i

from the way we open facilities in the first phase.

For j ∈ C where φ_j is defined: α^c_j = c_{φ_j,j}

For j ∈ C where φ_j is not defined, want to show that j can connect to some facility in T* at distance at most 3·α_j.

Consequence:

    ∑_{i∈T*} f_i + ∑_{j∈C} c_{φ_j,j} ≤ ∑_j (α^f_j + 3·α^c_j) ≤ 3 · ∑_j α_j

A 3-approximation!

– p.66/70

How to clean-up?

Goals: Construct T* ⊆ T such that
    1. For each j ∈ C there is at most one i ∈ T* with α_j > c_ij
    2. For each j ∈ C there is some i ∈ T* with c_ij ≤ 3·α_j

Construct auxiliary graph on facilities:

    Add edge (i1, i2) if ∃ j ∈ C with

        α_j > max{c_{i1,j}, c_{i2,j}}.

Pick a maximal independent set T* in this graph

– p.67/70
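The auxiliary-graph construction and the greedy choice of a maximal independent set fit in a few lines. A sketch with hypothetical data (names and costs are made up for illustration):

```python
def cleanup(T, alpha, c):
    """Build the auxiliary conflict graph on the opened facilities T and
    return a greedy maximal independent set T_star.
    alpha[j]: dual payment of client j; c[(i, j)]: connection cost."""
    T = sorted(T)
    clients = {j for (_, j) in c}
    conflict = {i: set() for i in T}
    for a in range(len(T)):
        for b in range(a + 1, len(T)):
            i1, i2 = T[a], T[b]
            # Edge (i1, i2) if some client pays toward both facilities
            if any(alpha[j] > max(c[(i1, j)], c[(i2, j)]) for j in clients):
                conflict[i1].add(i2)
                conflict[i2].add(i1)
    T_star = []
    for i in T:  # greedy scan yields a maximal independent set
        if all(i2 not in conflict[i] for i2 in T_star):
            T_star.append(i)
    return T_star

# Hypothetical data: client j1 pays toward both i1 and i2 (conflict),
# client j2 pays only toward i3.
T = ["i1", "i2", "i3"]
alpha = {"j1": 5, "j2": 4}
c = {("i1", "j1"): 1, ("i2", "j1"): 2, ("i3", "j1"): 9,
     ("i1", "j2"): 9, ("i2", "j2"): 9, ("i3", "j2"): 1}
print(cleanup(T, alpha, c))  # → ['i1', 'i3']
```

Independence gives goal 1 directly (no client pays toward two surviving facilities), and maximality is what drives the factor-3 distance argument on the next slide.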

Properties of T*

Look at a client j ∈ C and two facilities i1, i2 ∈ T*:

    Cannot have both

        α_j > c_{i1,j}  and  α_j > c_{i2,j}

    (equivalently, β_{i1,j} > 0 and β_{i2,j} > 0): otherwise the auxiliary graph contains the edge (i1, i2), contradicting the independence of T*.

Assume j ∈ C got connected to i_j in the growing phase ... but i_j ∈ T \ T*!

[Figure: client j with its witness i_j; a client j′ pays for both i_j and some i ∈ T*; each of the three hops j–i_j, i_j–j′, j′–i costs at most α_j.]

Since T* is maximal, there must exist i ∈ T* and j′ ∈ C with α_{j′} > max{c_{i,j′}, c_{i_j,j′}}.

Observation: α_{j′} stops growing when j is connected to i_j.

Hence α_{j′} ≤ α_j, and by the triangle inequality

    c_{i,j} ≤ c_{i_j,j} + c_{i_j,j′} + c_{i,j′} ≤ α_j + α_{j′} + α_{j′} ≤ 3·α_j.

– p.68/70

Goals revisited

Goals: Construct T* ⊆ T such that
    1. For each j ∈ C there is at most one φ_j ∈ T* with α_j > c_{φ_j,j}
    2. For each j ∈ C there is some i ∈ T* with c_ij ≤ 3·α_j

If φ_j ∈ T* exists: α^f_j = α_j − c_{φ_j,j} = β_{φ_j,j}, and notice that

    ∑_j α^f_j = ∑_{i∈T*} f_i

Let α^c_j = α_j − α^f_j. Can assign j to a facility at distance at most 3·α^c_j.

Total solution cost bounded by

    ∑_j (α^f_j + 3·α^c_j) ≤ 3 · ∑_j α_j

– p.69/70

Facility Location: State of the art

This is not the only approach for facility location!
    1. Direct LP-rounding
    2. Local search
    3. Greedy algorithms combined with dual fitting

Current best is greedy: Mahdian et al. achieve an approximation ratio of 1.52

Lower bound on the achievable ratio unless P = NP: 1.48

No LP-based algorithm is known for capacitated facility location with strict capacities

– p.70/70