
A Parallel Approximation Algorithm for Positive Linear Programming

Michael Luby*        Noam Nisan†

*International Computer Science Institute and UC Berkeley. Research supported in part by NSF Grant CCR-9016468 and grant No. 89-00312 from the United States-Israel Binational Science Foundation (BSF), Jerusalem, Israel.

†Hebrew University, Jerusalem, Israel. Supported by USA-Israel BSF 89-00126 and by a Wolfson research award. Research partially done while visiting the International Computer Science Institute.

Abstract

We introduce a fast parallel approximation algorithm for the positive linear programming optimization problem, i.e. the special case of the linear programming optimization problem where the input constraint matrix and constraint vector consist entirely of positive entries. The algorithm is elementary, and has a simple parallel implementation that runs in polylog time using a linear number of processors.

1 Introduction

The positive linear programming optimization problem (hereafter referred to as the positive problem) is the special case of the linear programming optimization problem where the input constraint matrix and constraint vector consist entirely of non-negative entries. We introduce an algorithm that takes as input the description of a problem and an error parameter ε and produces both a primal feasible solution and a dual feasible solution, where the values of these two solutions are within a multiplicative factor of 1 + ε of each other. Because the optimal values for the primal and dual problems are equal, this implies that the primal and dual feasible solutions produced by the algorithm have a value within ε (with respect to relative error) of an optimal feasible solution.

Let N be the number of non-zero coefficients associated with an instance of the problem. Our algorithm can be implemented on a parallel machine using O(N) processors with a running time polynomial in log(N)/ε. The algorithm is elementary and has a simple parallel implementation. Note that the problem of approximating the value of a general linear program to within a constant factor is P-complete. This can be shown by a reduction from the circuit value problem to a linear programming problem where all coefficients are small constants and the linear programming problem has exactly one feasible solution with value either 0 or 1 depending upon the answer to the circuit value problem (see, e.g., [5, Ja Ja]).

Previously, [8, Plotkin, Shmoys, Tardos] have developed fast sequential algorithms for both the primal and dual versions of the positive problem, which they call fractional packing and covering problems (as well as for some generalizations of this problem). They introduce algorithms that are much simpler and far superior in terms of running times than known algorithms for the general linear programming optimization problem. However, the algorithms in [8] do not have fast parallel implementations. The algorithm we introduce is competitive with their algorithms in terms of running times when implemented sequentially.

We first introduce an elementary (but unimplementable) continuous algorithm that produces optimal primal and dual feasible solutions to the problem, and based on this we describe a fast parallel approximation algorithm. We use ideas that were previously employed in similar contexts by [1, Berger, Rompel, Shor], [8, Plotkin, Shmoys, Tardos] and [2, Chazelle, Friedman]. We use the general idea also used in [1] of incrementing the values of many variables in parallel, and we use the general idea also used in [8] and [2] of changing the weight function on the constraints by an amount exponential in the change in the variables. The overall method we introduce is novel in several respects, including the details of how to increment the values of the variables at each iteration (this choice reflects a carefully chosen tradeoff between the overall running time of the algorithm and the quality of the solution it produces) and the way to normalize the solution output at the end.

Positive linear programs are strong enough to represent several combinatorial problems. The first example is matching in a bipartite graph. From matching theory we know that relaxing the {0, 1} program that defines the largest matching to a linear program does not change the optimal value. This program is positive and thus our algorithm can be used to approximate the size of the largest matching in a bipartite graph. This essentially matches the results of [3, Cohen], except that we don't know how to get the matching itself without using some of Cohen's techniques.

The second example is that of set cover. In this case it is known that relaxing the 0–1 program that defines the minimum set cover to a linear program can decrease the optimum by at most a factor of log(Δ), where Δ is the maximum degree in the set system. This program is positive and thus our algorithm can be used to approximate the size of the set cover to within a factor of (1 + ε) · log(Δ). This is essentially optimal (up to NP-completeness [7, Lund, Yannakakis]) and matches results of [1, Berger, Rompel, Shor].* In this case finding the set cover itself is also possible using, e.g., ideas found in [9, Raghavan].

*Our results are a slight improvement over those in [1] in the sense that our multiplicative constant is 1 + ε, where ε is an input parameter, whereas their multiplicative constant is fixed to something like 2.

2 The problem

Throughout this paper, n is the number of variables and m is the number of constraints (not including constraints of the form z_i ≥ 0). We use i to index variables, and whenever it is unspecified we assume that i ranges over all of [n]. We use j to index the non-zero constraints, and whenever it is unspecified we assume that j ranges over all of [m]. Unless otherwise specified, if x = (x_1, ..., x_n) then sum(x) is defined as Σ_i x_i. (In a few places, sum(x) is defined as a weighted sum.)

We consider linear problems in the following standard form:

The Primal Problem: The objective is to find z = (z_1, ..., z_n) that minimizes

sum(z) = Σ_i d_i · z_i

subject to the following constraints:

● For all i, z_i ≥ 0.

● For all j, Σ_i c_{i,j} · z_i ≥ b_j.

We say z is primal feasible if z satisfies all the constraints. Let opt(z) be an optimal solution, i.e. opt(z) is a primal feasible solution such that

sum(opt(z)) = min {sum(z) : z is primal feasible}.

We consider also the dual of such problems:

The Dual Problem: The objective is to find q = (q_1, ..., q_m) that maximizes

sum(q) = Σ_j b_j · q_j

subject to the following constraints:

● For all j, q_j ≥ 0.

● For all i, Σ_j c_{i,j} · q_j ≤ d_i.

We say q is dual feasible if q satisfies all the constraints. Let opt(q) be an optimal solution, i.e. opt(q) is a dual feasible solution such that

sum(opt(q)) = max {sum(q) : q is dual feasible}.

Description of Positive Problem: We say that a linear program is positive if all the coefficients are non-negative, i.e.

● For all i, d_i > 0.

● For all j, b_j > 0.

● For all i and j, c_{i,j} ≥ 0.

(It can easily be seen that restricting d_i and b_j to be strictly positive instead of non-negative causes no loss of generality.)

We can look at the primal problem as trying to put weights on the z_i's such that each j is covered with weight at least b_j, where each unit of weight on z_i puts c_{i,j} weight on each j. Similarly, we can look at the dual case as trying to put weights on the q_j's such that we pack at most d_i weight into each i. For these reasons a positive linear program in the primal form is sometimes called a fractional covering problem, and in the dual case a fractional packing problem.
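To make the covering and packing views concrete, here is a minimal Python sketch (ours, with a made-up toy instance; none of these names come from the paper) that checks primal covering and dual packing feasibility directly from the definitions above:

    # Hypothetical toy instance (not from the paper): n = 2 variables, m = 2 constraints.
    d = [1.0, 1.0]            # objective weights d_i
    b = [1.0, 2.0]            # constraint bounds b_j
    c = [[1.0, 0.5],          # c[i][j]: weight a unit of z_i puts on constraint j
         [0.0, 2.0]]

    def is_primal_feasible(z, tol=1e-9):
        # Every j must be covered with weight at least b_j.
        return all(sum(c[i][j] * z[i] for i in range(len(z))) >= b[j] - tol
                   for j in range(len(b)))

    def is_dual_feasible(q, tol=1e-9):
        # At most d_i weight may be packed into each i.
        return all(sum(c[i][j] * q[j] for j in range(len(q))) <= d[i] + tol
                   for i in range(len(d)))

    z = [1.0, 0.75]           # covers j=0 with 1 >= 1 and j=1 with 0.5 + 1.5 >= 2
    q = [0.5, 0.5]            # packs 0.5 + 0.25 <= 1 into i=0 and 1 <= 1 into i=1
    print(is_primal_feasible(z), is_dual_feasible(q))               # True True
    print(sum(d[i] * z[i] for i in range(2)),                       # 1.75
          sum(b[j] * q[j] for j in range(2)))                       # 1.5

Here sum(z) = 1.75 and sum(q) = 1.5 bracket the common optimum, as LP duality guarantees.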

In this paper we develop an approximation algorithm for both these problems with the following properties. On input ε > 0 and the description of the problem, the algorithm produces a primal feasible solution z and a dual feasible solution q such that

sum(z) ≤ sum(q) · (1 + ε).

The algorithm consists of

O(log(n) · log(m/ε)/ε⁴)

iterations. Each iteration can be executed in parallel using O(N) processors in time O(log(N)) on an EREW PRAM, where N is the number of entries (i, j) where c_{i,j} > 0.

2.1 A special form

In the appendix we show that without loss of generality we may assume that the linear program is in the following special form:

Input to special form: For all (i, j), the input is a_{i,j}, such that either a_{i,j} = 0 or 1 ≥ a_{i,j} ≥ 1/ψ, where ψ = m²/ε².

Special Form Primal Problem: The objective is to find x = (x_1, ..., x_n) that minimizes

sum(x) = Σ_i x_i

subject to the following constraints:

● For all i, x_i ≥ 0.

● For all j, Σ_i a_{i,j} · x_i ≥ 1.

Special Form Dual Problem: The objective is to find q = (q_1, ..., q_m) that maximizes

sum(q) = Σ_j q_j

subject to the following constraints:

● For all j, q_j ≥ 0.

● For all i, Σ_j a_{i,j} · q_j ≤ 1.

2.2 The algorithm

Given a problem instance in special form, the algorithm we develop below has the following properties. Let a_{i,j} be the coefficients for the input problem, and let

τ = min {sum(z) : z is primal feasible} = max {sum(q) : q is dual feasible}.

On input ε > 0 and the a_{i,j}, the output is a primal feasible solution

z = (z_1, ..., z_n)

and a dual feasible solution

q = (q_1, ..., q_m)

such that

sum(z) ≤ sum(q) · (1 + ε).

Since sum(z) ≥ τ ≥ sum(q), this immediately implies that sum(z) ≤ τ · (1 + ε) and that sum(q) ≥ τ/(1 + ε).

Notation: The values of all variables are defined in terms of x = (x_1, ..., x_n). For each j, define α_j = Σ_i a_{i,j} · x_i, α_min = min_j {α_j} and y_j = exp(−α_j). For all i, define D_i = Σ_j a_{i,j} · y_j and D_max = max_i {D_i}.

Note that α_j is the number of times j is covered with respect to x and α_min is the minimal coverage over all j. Thus x/α_min is a primal feasible solution as long as α_min > 0. Notice also that y/D_max is a dual feasible solution as long as D_max > 0. Intuitively, the algorithm starts with x = 0 and proceeds by increasing x in a direction that tries to increase the coverage on those j that are covered the least by x so far, by a voting process that works as follows: y_j is a weight for j which drops off exponentially as j is covered more and more, and this is the weight by which j votes to have its coverage increased; the value D_i is the total weighted vote that x_i receives to increase its value. The indices i for which D_i is close to D_max are all incremented.
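In code, these quantities are direct to compute. The sketch below (ours, assuming a dense coefficient matrix a[i][j]; the later sketches reuse it) evaluates α, y, D and their extremes at a point x:

    import math

    def coverage_stats(a, x):
        # a[i][j]: special form coefficients, x: current primal point.
        n, m = len(a), len(a[0])
        alpha = [sum(a[i][j] * x[i] for i in range(n)) for j in range(m)]  # coverage of j
        y = [math.exp(-alpha[j]) for j in range(m)]                        # weight of j
        D = [sum(a[i][j] * y[j] for j in range(m)) for i in range(n)]      # vote for i
        return alpha, min(alpha), y, D, max(D)

Dividing x by α_min (once it is positive), or y by D_max, yields exactly the primal and dual feasible points described above.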

The parallel algorithm we present below can be viewed as a parallel discretization of the following simple (but unimplementable) continuous algorithm. Based on the analysis of the parallel algorithm given below, it is not hard to see that the continuous algorithm produces optimal primal and dual solutions.

Continuous Algorithm: The algorithm moves along a line x(t) that is parametrized by t ≥ 0 and that lies within the positive orthant of ℝⁿ. It starts at the origin at time zero, i.e., x(0) = 0. At time t, let B(t) = {i : D_i(t) = D_max(t)}, i.e., B(t) is the set of i with D_i maximum at time t. The direction of the line x(t) at time t is defined by the solution to the |B(t)| × |B(t)| linear system of equations expressing the fact that, for all i ∈ B(t), all D_i(t) should decrease at exactly the same rate (and thus remain equal) as t increases. Thus, x(t) moves along a line with a well-defined partial derivative until the time t′ when there is a j ∉ B(t′) such that D_j(t′) = D_max(t′). At time t′ the index j is added to B(t′) and, for t ≥ t′, x(t) continues along the curve defined by the augmented B(t′). The optimal primal solution is lim_{t→∞} x(t)/α_min(t) and the optimal dual solution is y(t′)/D_max(t′) where t′ is chosen so that

sum(y(t′))/D_max(t′) = sup_t {sum(y(t))/D_max(t)}.

We now describe the parallel (and implementable) version of this continuous algorithm. The input is a description of the problem together with an error parameter ε. We let ε_0 = ε_1 = ε_2 = ε_3 = ε_4 = ε/5. We use the convention that x_i is the value assigned to i at the beginning of an iteration within a phase, x′_i is the value after the end of the iteration, and x*_i is the value after termination of the final phase. This same convention applies to all the other variables, i.e. D_max, y_j, α_j, α_min, etc.

Initially, for all i, set x_i := 0. Each phase is indexed by an integer k and the index of the next phase is one smaller than the index of the previous phase. The index of the start phase is the smallest positive integer k_s such that the initial value of D_max satisfies

(1 + ε_0)^{k_s} ≤ D_max < (1 + ε_0)^{k_s+1}.

At the beginning of an iteration in phase k, let

B = {i : D_i ≥ (1 + ε_0)^k}.

The algorithm maintains the property that, for all i ∈ B, (1 + ε_0)^k ≤ D_i < (1 + ε_0)^{k+1}. The final phase executed is the first phase where sum(y) ≤ 1/m^{1/ε_4} is true at the end of the phase.

An iteration within phase k works as follows. For any ◇ ∈ {=, <, >, ≤, ≥} and λ ≥ 0, let

S^◇(λ) = {j : λ · Σ_{i∈B} a_{i,j} ◇ ε_1}.

Let

D_i^≤(λ) = Σ_{j ∈ S^≤(λ)} a_{i,j} · y_j  and  D_i^≥(λ) = Σ_{j ∈ S^≥(λ)} a_{i,j} · y_j.

If B = ∅ then the phase ends and the next smaller indexed phase begins. Otherwise, compute λ_0 so that

(a) For at least a fraction 1 − ε_2 of the i ∈ B, D_i^≤(λ_0) ≥ (1 − ε_3) · D_i.

(b) For at least a fraction ε_2 of the i ∈ B, D_i^≥(λ_0) ≥ ε_3 · D_i.

Finally, for all i ∈ B, increase x_i by λ_0.

Primal Feasible Solution Output: For all i, z_i = x*_i/α*_min.

Consider the iteration of the algorithm when sum(y)/D_max is maximum among all iterations of the algorithm. Let ŷ = (ŷ_1, ..., ŷ_m) be y from this iteration and let D̂_max be D_max from this iteration.

Dual Feasible Solution Output: For all j, q_j = ŷ_j/D̂_max.
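Putting phases, iterations, and outputs together, the procedure can be rendered as the following sequential sketch (our reconstruction of the description above, not the authors' code; compute_lambda0 is the subroutine of Section 2.4 below, coverage_stats is the earlier sketch, and plain loops stand in for the parallel primitives):

    def positive_lp_approx(a, eps):
        # a[i][j]: special form coefficients. Returns (z, q) with
        # sum(z) <= sum(q) * (1 + eps), per the analysis below.
        n, m = len(a), len(a[0])
        e0 = e1 = e2 = e3 = e4 = eps / 5.0
        x = [0.0] * n
        alpha, amin, y, D, Dmax = coverage_stats(a, x)
        k = 1
        while (1 + e0) ** (k + 1) <= Dmax:       # find the start phase index k_s
            k += 1
        best_ratio, yhat, Dhat = -1.0, None, None
        while sum(y) > m ** (-1.0 / e4):         # final-phase termination test
            while True:                          # iterations within phase k
                B = [i for i in range(n) if D[i] >= (1 + e0) ** k]
                if not B:
                    break                        # phase ends; move to phase k - 1
                lam = compute_lambda0(a, y, D, B, e1, e2, e3)
                for i in B:
                    x[i] += lam
                alpha, amin, y, D, Dmax = coverage_stats(a, x)
                if sum(y) / Dmax > best_ratio:   # remember the best dual iterate
                    best_ratio, yhat, Dhat = sum(y) / Dmax, y[:], Dmax
            k -= 1
        z = [xi / amin for xi in x]              # primal output: x*/alpha*_min
        q = [yj / Dhat for yj in yhat]           # dual output: y_hat/D_hat_max
        return z, q

Everything inside an iteration is a constant number of scans over the nonzero entries, which is what the parallel implementation performs with O(N) processors in O(log N) time.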

2.3 Running time analysis

Caveat: In the analysis given below, for simplicity we are slightly inaccurate in the following sense. We say, for example, that (1 + ε_0) · (1 + ε_1) = (1 + ε_0 + ε_1) and 1/(1 − ε_0) = 1 + ε_0. Each time we do this we introduce a small error, but we only do this a constant number of times overall.

We first determine an upper bound on the number of phases. After the initialization, D_max ≤ m. This is because a_{i,j} ≤ 1 for all (i, j) and because initially y_j = 1 for all j. Let k_s be the index of the first phase. To guarantee that D_max < (1 + ε_0)^{k_s+1} at the start of this phase, it is sufficient that k_s satisfy

(1 + ε_0)^{k_s+1} ≥ m,

and this is true when

k_s = O(log(m)/ε).

Note that D_max ≥ sum(y)/(nψ), because each j contributes at least y_j/ψ to Σ_i D_i. Thus, if D_max ≤ 1/(nψ · m^{1/ε_4}) then sum(y) ≤ 1/m^{1/ε_4}. The algorithm never reaches phase −k if (1 + ε_0)^{−k} ≤ 1/(nψ · m^{1/ε_4}). Thus, the last phase occurs before phase −k_f where

k_f = O(log(nψ · m^{1/ε_4})/ε).

Thus, the total number of phases is

O(log(nm/ε)/ε²).

We now analyze the number of iterations per phase. At least a fraction ε_2 of the i ∈ B have the property that D_i^≥(λ_0) ≥ ε_3 · D_i. For each j ∈ S^≥(λ_0), y_j goes down by at least a factor of exp(−ε_1) ≈ 1 − ε_1. Let D′_i be the value of D_i at the end of the iteration. From the above, a fraction of at least ε_2 of the i ∈ B have the property that

D′_i ≤ (1 − ε_1 · ε_3) · D_i.

Note that after the value of D_i drops by a factor of at least 1 − ε_0, i is removed from B. From this it follows that the number of iterations during phase k is at most

O(log(|B_k|)/ε²),

where B_k is the set B at the beginning of phase k. Since |B_k| ≤ n for all k, it follows that the total number of iterations overall is

O(log(n) · log(m/ε)/ε⁴).

2.4 Computing λ_0

The most difficult part in each iteration is the computation of λ_0. This can be done as follows. For each j ∈ [m], compute

v̂_j := Σ_{i∈B} a_{i,j}

and find a permutation π of [m] that satisfies

v̂_{π(1)} ≤ ··· ≤ v̂_{π(m)}

by sorting the set

{(v̂_j, j) : j ∈ [m]}.

(Below, S^◇ and D_i^◇ are evaluated at v̂-values: taking λ = ε_1/ν gives S^◇(λ) = {j : v̂_j ◇ ν}, so we write S^◇(ν) and D̂_i^◇(ν) for brevity.) For fixed j, let j− be the smallest index and j+ the largest index for which

v̂_{π(j−)} = v̂_{π(j)} = v̂_{π(j+)}.

From these definitions, it follows that

● S^=(v̂_{π(j)}) = {π(j−), ..., π(j+)}.

● S^<(v̂_{π(j)}) = {π(1), ..., π(j− − 1)}.

● S^>(v̂_{π(j)}) = {π(j+ + 1), ..., π(m)}.

For each i ∈ B, for each j ∈ [m], compute

● D̂_i^≤(v̂_{π(j)}) := Σ_{k≤j+} a_{i,π(k)} · y_{π(k)}

● D̂_i^≥(v̂_{π(j)}) := Σ_{k≥j−} a_{i,π(k)} · y_{π(k)}

using a parallel prefix computation. Note that

● D̂_i^≤(v̂_{π(1)}) ≤ ··· ≤ D̂_i^≤(v̂_{π(m)}).

● D̂_i^≥(v̂_{π(1)}) ≥ ··· ≥ D̂_i^≥(v̂_{π(m)}).

Then, for each i ∈ B, compute an index v(i) ∈ [m] for which

● D̂_i^≤(v̂_{π(v(i))}) ≥ (1 − ε_3) · D_i

● D̂_i^≥(v̂_{π(v(i))}) ≥ ε_3 · D_i

using binary search on j. There is such an index v(i) because, for all i and j,

D̂_i^≤(v̂_{π(j)}) + D̂_i^≥(v̂_{π(j)}) ≥ D_i.

Then, find an ordering (b_1, ..., b_{|B|}) of B that satisfies

v̂_{π(v(b_1))} ≥ ··· ≥ v̂_{π(v(b_{|B|}))}

by sorting the set

{v̂_{π(v(i))} : i ∈ B}.

Note that, for all 1 ≤ z < z′ ≤ |B|, v̂_{π(v(b_z))} ≥ v̂_{π(v(b_{z′}))}. Compute

z_0 := ⌈ε_2 · |B|⌉.

Finally, set

λ_0 := ε_1/v̂_{π(v(b_{z_0}))}.

It is not hard to verify that λ_0 satisfies the conditions specified in the description of the algorithm.

Each iteration consists of a constant number of parallel prefix, parallel sorting, and parallel binary search operations. By the results described in [6, Ladner, Fischer] and [4, Cole], each iteration can be executed in parallel using O(N) processors in time O(log(N)) on an EREW PRAM.
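Sequentially, this computation might look as follows (our sketch: sorting and running sums stand in for parallel sorting and parallel prefix, the binary searches become scans, and ties among equal v̂-values, which the paper handles via j− and j+, are ignored for brevity):

    import math

    def compute_lambda0(a, y, D, B, e1, e2, e3):
        m = len(y)
        vhat = [sum(a[i][j] for i in B) for j in range(m)]  # v_j = sum over i in B of a_{i,j}
        order = sorted(range(m), key=lambda j: vhat[j])     # permutation pi, v ascending
        thresholds = []
        for i in B:
            prefix, vals = 0.0, []
            for j in order:                                 # parallel prefix, sequentially
                prefix += a[i][j] * y[j]
                vals.append(prefix)
            for p, j in enumerate(order):                   # binary search, as a scan
                D_le = vals[p]                              # D_i at most this v-threshold
                D_ge = D[i] - (vals[p - 1] if p > 0 else 0.0)  # remaining suffix sum
                # a position always exists because D_le + D_ge >= D_i
                if D_le >= (1 - e3) * D[i] and D_ge >= e3 * D[i]:
                    thresholds.append(vhat[j])              # the v-value chosen for i
                    break
        thresholds.sort(reverse=True)
        z0 = max(0, math.ceil(e2 * len(B)) - 1)             # the eps_2 quantile
        return e1 / thresholds[z0]                          # lambda_0 = eps_1 / v-threshold

With λ_0 = ε_1/ν for the selected threshold ν, every j with v̂_j ≥ ν has α_j raised by at least ε_1, so at least an ε_2 fraction of the i ∈ B satisfy condition (b), while the (1 − ε_2) fraction with smaller thresholds satisfy condition (a).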

2.5 Feasibility

Primal: We prove that the solution z output by the algorithm is a primal feasible solution. Because sum(y*) < 1 at termination, α*_min ≥ ln(1/sum(y*)) > 0. Since z_i = x*_i/α*_min it follows that each j is covered at least α*_j/α*_min ≥ 1 times at the end of the algorithm. This implies that z is a primal feasible solution.

Dual: Because D̂_max is the maximum over all i of D̂_i, it follows that each i is covered a total of D̂_i/D̂_max ≤ 1 with respect to q. It follows that q is a dual feasible solution.

2.6 Optimality

Let θ = ε_0 + ε_1 + ε_2 + ε_3.

Lemma 1:

sum(y′) ≤ sum(y) − λ_0 · D_max · |B| · (1 − θ).

PROOF: Let

r_j = y_j − y′_j = y_j · (1 − exp(−λ_0 · Σ_{i∈B} a_{i,j})).

We want to show that

Σ_j r_j ≥ λ_0 · D_max · |B| · (1 − θ).

Note that

1 − exp(−λ_0 · Σ_{i∈B} a_{i,j}) ≥ λ_0 · Σ_{i∈B} a_{i,j} · (1 − ε_1)  for all j ∈ S^≤(λ_0).

This inequality is because of the following: for all 0 ≤ Φ ≤ 1, 1 − exp(−Φ) ≥ Φ · (1 − Φ), and j ∈ S^≤(λ_0) implies that λ_0 · Σ_{i∈B} a_{i,j} ≤ ε_1. Thus,

Σ_j r_j ≥ λ_0 · (1 − ε_1) · Σ_{i∈B} D_i^≤(λ_0).

For at least a fraction (1 − ε_2) of the i ∈ B, D_i^≤(λ_0) ≥ (1 − ε_3) · D_i. Since D_i ≥ D_max/(1 + ε_0), it follows this sum is at least

(1 − ε_2) · (1 − ε_3) · D_max · |B| / (1 + ε_0).

Putting all this together yields the lemma. ■

Lemma 2:

sum(x) ≤ sum(q) · ln(m/sum(y)) / (1 − θ).

PROOF: From Lemma 1 it follows that

sum(y′) ≤ sum(y) · (1 − λ_0 · D_max · |B| · (1 − θ)/sum(y))
        ≤ sum(y) · exp(−λ_0 · D_max · |B| · (1 − θ)/sum(y))
        ≤ sum(y) · exp(−λ_0 · |B| · (1 − θ)/sum(q)).

The last inequality is because

sum(q) ≥ sum(y)/D_max.

Because sum(x) increases by exactly λ_0 · |B| in each iteration and because initially sum(y) = m, it follows that at any point in time,

sum(y) ≤ m · exp(−sum(x) · (1 − θ)/sum(q)).

This implies that

sum(x) ≤ sum(q) · ln(m/sum(y)) / (1 − θ). ■

Theorem: sum(z) ≤ sum(q) · (1 + ε).

PROOF: From Lemma 2 and the definition of z it follows that

sum(z) ≤ sum(q) · ln(m/sum(y*)) / (α*_min · (1 − θ)).

Since ln(1/sum(y*)) ≤ α*_min it follows that

sum(z) ≤ sum(q) · (1 + θ) · ln(m/sum(y*)) / ln(1/sum(y*)).

Since

sum(y*) ≤ 1/m^{1/ε_4}

at the termination of the algorithm, the theorem follows. ■
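For concreteness, the arithmetic behind the last step can be expanded as follows (our expansion, using the caveat of Section 2.3 and ε_0 = ··· = ε_4 = ε/5):

    \frac{\ln(m/\mathrm{sum}(y^*))}{\ln(1/\mathrm{sum}(y^*))}
      \;=\; 1 + \frac{\ln m}{\ln(1/\mathrm{sum}(y^*))}
      \;\le\; 1 + \varepsilon_4
      \qquad\text{when } \mathrm{sum}(y^*) \le 1/m^{1/\varepsilon_4},

    (1+\theta)(1+\varepsilon_4) \;\approx\; 1 + \theta + \varepsilon_4 \;=\; 1 + \varepsilon,
      \qquad\text{since } \theta + \varepsilon_4 = \tfrac{4\varepsilon}{5} + \tfrac{\varepsilon}{5}.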

3 Acknowledgments

The authors would like to thank Richard Karp, Seffi Naor, Serge Plotkin and Eva Tardos for pointing out relationships between this work and previous work. The first author would like to thank Chu-Cheow Lim for discussions which clarified the description of the implementation.

References

[1] Berger, B., Rompel, J., Shor, P., "Efficient NC Algorithms for Set Cover with Applications to Learning and Geometry", 30th Annual Symposium on Foundations of Computer Science, pp. 54–59, 1989.

[2] Chazelle, B., Friedman, J., "A Deterministic View of Random Sampling and Its Use in Geometry", Princeton Technical Report, September 1988. A preliminary version appears in FOCS 1988.

[3] Cohen, E., "Approximate max flow on small depth networks", FOCS 1992, pp. 648–658.

[4] Cole, R., "Parallel merge sort", SIAM J. Comput., 17(4), pp. 770–785, 1988.

[5] Ja Ja, J., An Introduction to Parallel Algorithms, Addison-Wesley, 1992.

[6] Ladner, R., Fischer, M., "Parallel prefix computation", JACM, 27(4), pp. 831–838, 1980.

[7] Lund, C., Yannakakis, M., "On the Hardness of Approximating Minimization Problems", preprint.

[8] Plotkin, S., Shmoys, D., Tardos, E., "Fast Approximation Algorithms for Fractional Packing and Covering Problems", Stanford Technical Report No. STAN-CS-92-1419, February 1992.

[9] Raghavan, P., "Probabilistic construction of deterministic algorithms: approximating packing integer programs", JCSS, Vol. 37, pp. 130–143, October 1988.

Appendix: Transforming to special form

Given an instance of a positive problem, we first perform a normalization step to eliminate the weights d_i and the constraint bounds b_j.

Normalized problem:

● For all (i, j), let c′_{i,j} = c_{i,j}/(d_i · b_j).

● For all i, define variable z′_i = d_i · z_i.

The objective is to find z′ = (z′_1, ..., z′_n) that minimizes sum(z′) subject to the following constraints:

● For all i, z′_i ≥ 0.

● For all j, Σ_i c′_{i,j} · z′_i ≥ 1.

There is a one-to-one correspondence between primal feasible solutions z′ to the normalized problem and primal feasible solutions z to the primal positive problem with the property that sum(z) = sum(z′).
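As code, the normalization step is one division per coefficient (our sketch, dense representation):

    def normalize(c, d, b):
        # c[i][j], d[i] > 0, b[j] > 0 from the positive problem; returns c'.
        n, m = len(c), len(c[0])
        return [[c[i][j] / (d[i] * b[j]) for j in range(m)] for i in range(n)]

A normalized solution z′ maps back via z_i = z′_i/d_i with sum(z) = sum(z′), which is the stated correspondence.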

The next step in transforming the problem to the special form is to limit the range of the coefficients c′_{i,j}. This step will introduce an error of at most ε. This turns out to be important for the analysis of the approximation algorithm we develop. For all j, let

β_j = max_i {c′_{i,j}},

and let

β = min_j {β_j}.

Fact: m/β ≥ sum(opt(z′)) ≥ 1/β.

The transformation consists of forming the new set of coefficients c″_{i,j} as follows.

● For all (i, j), if c′_{i,j} > mβ/ε then c″_{i,j} = mβ/ε.

● For all (i, j), if c′_{i,j} < εβ/m then c″_{i,j} = 0.

● For all (i, j), if εβ/m ≤ c′_{i,j} ≤ mβ/ε then c″_{i,j} = c′_{i,j}.

The objective is to find z″ = (z″_1, ..., z″_n) that minimizes sum(z″) subject to the following constraints:

● For all i, z″_i ≥ 0.

● For all j, Σ_i c″_{i,j} · z″_i ≥ 1.

Let

t = max_{i,j} {c″_{i,j}}

and let

b = min_{i,j} {c″_{i,j} : c″_{i,j} > 0}.

Fix

ψ = m²/ε².

The main point of this transformation is that the ratio t/b is not too large, i.e. t/b ≤ ψ.

Lemma A:

(1) Any primal feasible solution z″ to the transformed problem is also a primal feasible solution to the normalized problem.

(2) Let opt(z″) be an optimal solution to the transformed problem. Then,

sum(opt(z″)) ≤ sum(opt(z)) · (1 + ε).

PROOF (of part (1)): This follows immediately because, for all (i, j), c″_{i,j} ≤ c′_{i,j}. Thus, if z″ covers each j at least once with respect to c″, then z″ covers each j at least once with respect to c′.

(of part (2)): Let opt(z′) be an optimal solution to the normalized problem, i.e. sum(opt(z′)) = sum(opt(z)). Let τ = sum(opt(z′)). For all j, let m_j ∈ [n] be an index such that c′_{m_j,j} is maximal, i.e. for all i, c′_{m_j,j} ≥ c′_{i,j}. Let

I = {m_j : j ∈ [m]}.

Define z″ as follows.

● For all i ∉ I, let z″_i = opt(z′)_i.

● For all i ∈ I, let z″_i = opt(z′)_i + ε·τ/m.

Let |I| denote the cardinality of I. Since |I| ≤ m, it is easy to see that

sum(z″) ≤ τ · (1 + ε).

We now verify that z″ is a primal feasible solution to the transformed problem. The only concern with respect to feasibility is that some j is no longer covered at least once by z″, and the only reason this can happen is because of the lowering of the coefficients from c′ to c″. We show that this loss in coverage is compensated for by the increase in z″ above opt(z′).

● Suppose j is such that for at least one index i it is the case that c′_{i,j} > mβ/ε. By definition of m_j and c″_{m_j,j}, it follows that c″_{m_j,j} = mβ/ε. Since m_j ∈ I, z″_{m_j} ≥ ε·τ/m. By the Fact above, τ ≥ 1/β, and thus the coverage of j is at least z″_{m_j} · c″_{m_j,j} ≥ 1.

● Suppose j is such that for all indices i it is the case that c′_{i,j} ≤ mβ/ε. The decrease in the coverage of j is caused by the pairs (i, j) where c′_{i,j} < εβ/m. Because each such c″_{i,j} is set to 0 and because sum(opt(z′)) = τ, the total loss in coverage at j is at most ε·β·τ/m. On the other hand, by definition of β, m_j and c″, c″_{m_j,j} ≥ β, and z″_{m_j} is larger than opt(z′)_{m_j} by ε·τ/m. Thus, the coverage of j increases by at least ε·β·τ/m. ■

For convenience, we normalize the largest coefficient as follows.

Special form:

● For all (i, j), let a_{i,j} = c″_{i,j}/t.

● For all i, define variable x_i = z″_i · t.

The objective is to find x = (x_1, ..., x_n) that minimizes sum(x) subject to the following constraints:

● For all i, x_i ≥ 0.

● For all j, Σ_i a_{i,j} · x_i ≥ 1.

Note that max_{i,j} {a_{i,j}} = 1 and min_{i,j} {a_{i,j} : a_{i,j} > 0} ≥ 1/ψ.

There is a one-to-one correspondence between primal feasible solutions x to the special form problem and primal feasible solutions z″ to the transformed problem with the property that sum(z″) = sum(x)/t.
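The range limiting and the final scaling can be sketched together as follows (ours; the two thresholds mβ/ε and εβ/m follow the reconstruction of the transformation above):

    def to_special_form(cp, eps):
        # cp[i][j]: normalized coefficients c'. Returns (a, t) with a_{i,j} = c''_{i,j}/t.
        n, m = len(cp), len(cp[0])
        beta = min(max(cp[i][j] for i in range(n)) for j in range(m))
        hi, lo = m * beta / eps, eps * beta / m
        cpp = [[min(cp[i][j], hi) if cp[i][j] >= lo else 0.0 for j in range(m)]
               for i in range(n)]
        t = max(max(row) for row in cpp)
        # After scaling, max a_{i,j} = 1 and every nonzero a_{i,j} >= 1/psi, psi = (m/eps)^2.
        return [[cpp[i][j] / t for j in range(m)] for i in range(n)], t

A special form solution x then maps back via z″_i = x_i/t.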

Lemma B is the culmination of the above development.

Lemma B: Let x be a primal feasible solution to a special form problem derived as described above with error parameter ε from a primal positive problem. A primal feasible solution z to the primal positive problem can be easily derived from x with the property that

sum(z)/sum(opt(z)) ≤ (1 + ε) · sum(x)/sum(opt(x)).

A similar transformation is possible to preserve the quality of the dual positive problem, but it is omitted from this paper due to lack of space.
