Priority Models
Sashka Davis
University of California, San Diego
June 1, 2003
2
Goal
Define priority models, which are a formal framework of greedy algorithms
Develop a technique for proving lower bounds for priority algorithms
3
The big picture
Divide and Conquer
Greedy
Dynamic Programming
Hill-climbing
Polynomial Time Algorithms
Can we build a formal model for the different algorithmic design paradigms?
Can we evaluate the limitations of each technique?
Can we classify the kinds of problems on which the different heuristics perform well?
Are the known algorithms optimal or can they be improved?
4
Greedy heuristics
Priority algorithms are a formal model for greedy algorithms.
[Diagram: the class of PRIORITY ALGORITHMS, with FIXED and ADAPTIVE subclasses; ShortPath]
5
Common structure of greedy algorithms
They sort items (edges, intervals, etc.)
Consider each item once and either add it to the solution or throw it away
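This shared structure can be captured in a small generic template. A minimal sketch, assuming caller-supplied `priority` and `accept` callbacks (both names are illustrative, not part of the formal model):

```python
def greedy(items, priority, accept):
    """Generic greedy template: sort the items once by priority, then make
    one irrevocable accept/reject decision per item."""
    solution = []
    for item in sorted(items, key=priority):
        if accept(solution, item):
            solution.append(item)  # the decision is never revisited
    return solution

# Example: select pairwise-disjoint intervals, longest first.
def disjoint(sol, iv):
    return all(iv[1] <= a or iv[0] >= b for a, b in sol)

chosen = greedy([(0, 3), (2, 5), (6, 8)], lambda iv: iv[0] - iv[1], disjoint)
```

The sort key `iv[0] - iv[1]` is the negated length, so longer intervals come first.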
6
Interval scheduling on a single machine
Instance:
Set of intervals I = (i1, i2, …, in), where ij = [rj, dj]
Problem: schedule intervals on a single machine
Solution: S ⊆ I
Objective function: maximize Σij∈S (dj − rj)
7
A simple solution (LPT)
Longest Processing Time algorithm
Input: I = (i1, i2, …, in)
1. Initialize S ← ∅
2. Sort the intervals in decreasing order of length (dj − rj)
3. while (I is not empty)
   Let ik be the next interval in the sorted order
   If ik can be scheduled, then S ← S ∪ {ik}
   I ← I \ {ik}
4. Output S
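A runnable sketch of LPT under the conventions above, representing each interval as an (r, d) pair; the disjointness test used for "can be scheduled" is an assumption of this sketch:

```python
def lpt(intervals):
    """Longest Processing Time: a fixed priority algorithm for interval
    scheduling on a single machine. intervals: a list of (r, d) pairs."""
    schedule = []
    # Fixed priority: decreasing length d - r, decided before any input is seen.
    for r, d in sorted(intervals, key=lambda iv: iv[0] - iv[1]):
        # Irrevocable decision: accept iff the interval fits on the machine.
        if all(d <= r2 or r >= d2 for r2, d2 in schedule):
            schedule.append((r, d))
    return schedule
```

For example, on `[(0, 4), (3, 5), (5, 6)]` the length-4 interval is taken first and blocks (3, 5), while (5, 6) still fits.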
8
LPT is a 3-approximation
LPT sorts the intervals in decreasing order according to their length
Claim: 3 · LPT ≥ OPT
[Figure: an interval [ri, di] scheduled by LPT, overlapping three OPT intervals]
9
The minimum cost spanning tree problem
Instance:
Edge weighted graph
Problem:
Find a tree of edges that spans V
Objective function
Minimize the cost of the tree
G = (V, E); w: E → R
T ⊆ E
min Σe∈T w(e)
10
A solution for MST problem
Kruskal’s algorithm
Input: (G = (V, E), w: E → R)
1. Initialize an empty solution T
2. L = list of edges sorted in increasing order according to their weight
3. while (L is not empty)
   e = next edge in L
   Add e to T as long as T remains a forest, and remove e from L
4. Output T
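A minimal runnable sketch of Kruskal’s algorithm; the union-find structure used to test the forest condition is an implementation choice, not part of the slide:

```python
def kruskal(n, edges):
    """Kruskal's MST algorithm: scan edges in increasing weight order and
    keep an edge iff it joins two different components (T stays a forest).
    Vertices are 0..n-1; edges are (weight, u, v) triples."""
    parent = list(range(n))

    def find(x):  # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):  # fixed order: increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:               # accepting the edge keeps T acyclic
            parent[ru] = rv
            tree.append((w, u, v))
    return tree
```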
11
Another solution to the MST problem
Prim’s algorithm
Input G=(V,E), w: E →R
1. Initialize an empty tree T ← ∅; S ← ∅
2. Pick a vertex u; S = {u}
3. for (i = 1 to |V| − 1)
   (u, v) = the minimum-weight edge in the cut (S, V − S)
   S ← S ∪ {v}; T ← T ∪ {(u, v)}
4. Output T
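Prim’s rule, take the cheapest edge crossing the cut (S, V − S), is commonly implemented with a heap. A sketch, assuming an adjacency-list representation `{u: [(weight, v), ...]}`:

```python
import heapq

def prim(graph, start):
    """Prim's MST algorithm: an adaptive rule, since the next edge chosen
    depends on the set S of vertices already reached."""
    visited = {start}
    frontier = list(graph[start])   # edges leaving S, keyed by weight
    heapq.heapify(frontier)
    tree = []
    while frontier:
        w, v = heapq.heappop(frontier)
        if v in visited:
            continue                # edge no longer crosses the cut
        visited.add(v)
        tree.append((w, v))
        for edge in graph[v]:
            heapq.heappush(frontier, edge)
    return tree
```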
12
Classification of the example algorithms
[Diagram: PRIORITY ALGORITHMS; Kruskal and LPT are FIXED, Prim is ADAPTIVE]
13
Talk outline
1. History of priority algorithms
2. Priority algorithm framework for scheduling problems
3. Priority algorithms for facility location
4. General framework of priority algorithms
5. Future research
14
Results [BNR02]
Defined fixed and adaptive priority algorithms
Proved that fixed priority algorithms are less powerful than adaptive priority algorithms
Considered variety of scheduling problems and proved many non-trivial upper and lower bounds
15
Results [AB02]
Proved tight bounds on performance of
Adaptive priority algorithms for facility location in arbitrary spaces
Fixed priority algorithms for uniform metric facility location
Adaptive priority algorithms for set cover
16
Results [DI02]
Defined a general model of priority algorithms
Proved a strong separation between the classes of fixed and adaptive priority algorithms
Proved a separation between the class of memoryless adaptive priority algorithms and adaptive priority algorithms with memory
Proved a tight bound on the performance of adaptive priority algorithms for the weighted vertex cover problem
17
Talk outline
1. History of priority algorithms
2. Priority algorithm framework for scheduling problems [BNR02]
3. Priority algorithms for facility location
4. General framework of priority algorithms
5. Future research
18
The defining characteristics of greedy algorithms [BNR02]
The order in which data items are considered is determined by a priority function, which orders all possible data items
The algorithm sees one input at a time
Decision made for each data item is irrevocable
19
Priority models
Difference: How the next data item is chosen
Fixed Priority Algorithms
Adaptive Priority Algorithms
20
Fixed priority algorithms [BNR02]
Input: a set of jobs S
Ordering: Determine without looking at S a total ordering of all possible jobs
while (S is not empty)
   Jnext = the next job in S according to the order above
   Decision: make an irrevocable decision for Jnext
   S ← S \ {Jnext}
21
Adaptive priority algorithms [BNR02]
Input: a set of jobs S
while (S is not empty)
   Ordering: determine, without looking at S, a total ordering of all possible jobs
   Jnext = the next job in S according to the order above
   Decision: make an irrevocable decision for Jnext
   S ← S \ {Jnext}
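The two templates differ only in when the ordering is computed. A minimal sketch of the contrast; `order` and `decide` are illustrative callbacks, not part of the [BNR02] formalism:

```python
def fixed_priority(items, order, decide):
    """Fixed model: a single total ordering, chosen before any item is seen."""
    solution = []
    for item in sorted(items, key=order):
        decide(solution, item)      # irrevocable decision per item
    return solution

def adaptive_priority(items, order, decide):
    """Adaptive model: the ordering may be recomputed after every decision,
    so it can depend on the partial solution built so far."""
    pending, solution = list(items), []
    while pending:
        item = min(pending, key=lambda x: order(solution, x))
        decide(solution, item)      # still irrevocable
        pending.remove(item)
    return solution
```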
22
Separation between fixed and adaptive priority algorithms [BNR02]
Theorem: No fixed priority algorithm can achieve an approximation ratio better than 3 for the interval scheduling problem on a multiple-machine configuration
Theorem: CHAIN-2 is an adaptive priority algorithm achieving an approximation ratio of 2 for interval scheduling on a two-machine configuration
23
Online algorithms
1. Must service each request before the next request is received
2. Several alternatives in servicing each request
3. Online cost is determined by the options selected
24
Connection between online and priority algorithms
Similarities: the instance is viewed one input at a time; the decision is irrevocable
Difference: the order in which the data items are considered
25
Competitive analysis of online algorithms
t = 1; I = ∅
Round t:
   Adversary picks a data item xt; I ← I ∪ {xt}
   Algorithm makes a decision σt for xt: A ← A ∪ {(xt, σt)}
   Adversary chooses whether to end the game; if not, the next round begins: t ← t + 1
Adversary picks a solution B for I offline
Algorithm is awarded payoff value(A) / value(B)
26
Fixed priority game
Adversary selects a finite set of data items S0; I ← ∅; t ← 1
Algorithm picks a total order on S0
Adversary restricts the remaining data items: S1 ⊆ S0
Round t:
   Let xt ∈ St be the next data item in the order
   Algorithm makes a decision σt for xt: A ← A ∪ {(xt, σt)}
   Adversary restricts the set: St+1 ⊆ St − {xt}; I ← I ∪ {xt}
   Adversary chooses whether to end the game; if not, the next round begins: t ← t + 1
Adversary picks a solution B for I
Algorithm is awarded payoff value(A) / value(B)
27
Example lower bound [BNR02]
Theorem 1: No priority algorithm can achieve an approximation ratio better than 3 for the interval scheduling problem with proportional profit on a single machine configuration
28
Proof of Theorem 1
Adversary’s move
Algorithm’s move: Algorithm selects an ordering
Let i be the interval with highest priority
[Figure: the adversary’s initial set of intervals; the construction uses parameters q and e]
29
Adversary’s strategy
If the Algorithm decides not to schedule i, then during the next round the Adversary removes all remaining intervals and schedules interval i itself
[Figure: the instance, showing interval i together with intervals j and k]
30
Adversary’s strategy
If i = … and the Algorithm schedules i, then during the next round the Adversary restricts the sequence:
[Figure: the restricted instance, showing intervals i, j, and k]
31
Adversary’s strategy
If i = … and the Algorithm schedules i, then during the next round the Adversary restricts the sequence:
[Figure: the restricted instance, showing intervals i and m]
32
Conclusion
The Adversary can pick (q, e) so that its advantage is arbitrarily close to 3
No priority algorithm (fixed or adaptive) can achieve an approximation ratio better than 3
LPT achieves an approximation ratio of 3
Therefore LPT is optimal within the class of priority algorithms
33
Talk outline
1. History of priority algorithms
2. Priority algorithm framework for scheduling problems
3. Priority algorithms for facility location [AB02]
4. General framework of priority algorithms
5. Future research
34
[AB02] work on priority algorithms
[AB02] proved lower bounds on the performance of adaptive and fixed priority algorithms for the facility location problem in metric and arbitrary spaces, and for the set cover problem
35
[AB02] result
Theorem: No adaptive priority algorithm can achieve an approximation ratio better than log(n) for facility location in arbitrary spaces
36
Adaptive priority game
Adversary selects a finite set of data items S0; I ← ∅; t ← 1
Round t:
   Algorithm picks a data item xt ∈ St and a decision σt for xt: A ← A ∪ {(xt, σt)}
   Adversary restricts the set: St+1 ⊆ St − {xt}; I ← I ∪ {xt}
   Adversary chooses whether to end the game; if not, the next round begins: t ← t + 1
Adversary picks a solution B for I
Algorithm is awarded payoff value(A) / value(B)
37
Facility location problem
Instance: a set of cities and a set of facilities
The set of cities is C = {1, 2, …, n}
Each facility fi has an opening cost cost(fi) and connection costs for each city: {ci1, ci2, …, cin}
Problem: open a collection of facilities S such that each city is connected to at least one facility
Objective function: minimize the opening and connection costs, min (Σfi∈S cost(fi) + Σj∈C minfi∈S cij)
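The objective can be evaluated with a small helper; the representation (a dict mapping each facility to its list of per-city connection costs) is an assumption of this sketch:

```python
def fl_cost(open_facilities, opening_cost, conn):
    """Objective value of a facility location solution: total opening cost
    plus, for each city j, its cheapest connection min over open f of conn[f][j]."""
    n_cities = len(next(iter(conn.values())))
    total = sum(opening_cost[f] for f in open_facilities)
    for j in range(n_cities):
        total += min(conn[f][j] for f in open_facilities)
    return total

# Two facilities, two cities: each facility is cheap for one city.
opening = {"a": 4, "b": 4}
conn = {"a": [1, 9], "b": [9, 1]}
```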
38
Adversary presents the instance:
Cities: C = {1, 2, …, n}, where n = 2^k
Facilities: each facility has opening cost n; city connection costs are 1 or ∞; each facility covers exactly n/2 cities, where cover(fj) = {i | i ∈ C, cji = 1}
Cu denotes the set of cities not yet covered by the solution of the Algorithm
39
Adversary’s strategy
At the beginning of each round t, the Adversary chooses St to consist of facilities f such that f ∈ St iff |cover(f) ∩ Cu| = n/2^t
The number of uncovered cities |Cu| is n/2^(t−1)
Two facilities are complementary if together they cover all cities in C. For every round t, St consists of complementary facilities
40
The game
[Figure: the uncovered cities Cu shrink each round]
41
End of the game
Either the Algorithm has opened log(n) facilities, or it has failed to produce a valid solution
The cost of the Algorithm’s solution is n·log(n) + n
The Adversary opens two complementary facilities, incurring total cost 2n + n
42
Conclusion
The Adversary has a winning strategy
No adaptive priority algorithm can achieve an approximation ratio better than log(n)
43
Talk outline
1. History of priority algorithms
2. Priority algorithm framework for scheduling problems
3. Priority algorithms for facility location
4. General framework of priority algorithms [DI02]
5. Future research
44
Fixed priority algorithms [DI02]
Input: an instance I = {γ1, γ2, …, γd} of data items from a universe Γ
Output: a solution S = {(γi, σi) | i = 1, …, d}
1. Determine an ordering function π: Γ → R+ ∪ {∞}
2. Order I according to π
3. Repeat
   Let the next data item in the ordering be γt
   Make a decision σt
   Update the partial solution S
   until (decisions are made for all data items)
4. Output S = {(γi, σi) | 1 ≤ i ≤ d}
45
Adaptive priority algorithms [DI02]
Input: an instance I = {γ1, γ2, …, γd} of data items from a universe Γ
Output: a solution S = {(γi, σi) | i = 1, …, d}
1. Initialization: U ← Γ; S ← ∅; t ← 1
2. Repeat
   Determine an ordering function πt
   Pick the highest priority data item γt ∈ U according to πt
   Make an irrevocable decision σt
   Update: U ← U \ {γt}; S ← S ∪ {(γt, σt)}; t ← t + 1
   until (decisions are made for all data items)
3. Output S = {(γi, σi) | 1 ≤ i ≤ d}
46
Strong separation between fixed and adaptive priority algorithms
Theorem: No fixed priority algorithm can achieve any constant approximation ratio for the ShortPath problem
Dijkstra’s algorithm for the single-source shortest path problem solves the ShortPath problem exactly.
47
The ShortPath problem
Instance: Given an edge-weighted directed graph G=(V,E) and two nodes s and t
Problem: Find a directed tree of edges, rooted at s
Objective function: Minimize the combined weight of the edges on the path from s to t
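As noted on the previous slide, Dijkstra’s algorithm solves ShortPath exactly. A sketch of the s-to-t cost computation, assuming non-negative weights and an adjacency-list representation `{u: [(weight, v), ...]}`:

```python
import heapq

def shortest_path_cost(graph, s, t):
    """Dijkstra's algorithm as an adaptive rule: the next vertex settled is
    always the closest unfinished one, so the order depends on prior choices."""
    dist = {s: 0}
    queue = [(0, s)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == t:
            return d                # t is settled: d is the optimal cost
        if d > dist.get(u, float("inf")):
            continue                # stale queue entry
        for w, v in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(queue, (d + w, v))
    return float("inf")             # t unreachable from s
```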
48
Adversary’s strategy
[Figure: a directed graph on nodes s, a, b, t with edges u(k), w(k), x(1), v(1), y(1), z(1); the label in parentheses is the edge weight]
S0 = {u, v, w, x, y, z}
49
Algorithm selects an order on S0
If y precedes z in the order, then the Adversary presents: S1 = {u, x, y, z}
[Figure: the restricted graph, keeping edges u(k), x(1), y(1), z(1)]
51
Adversary’s strategy
Wait until the Algorithm considers edge y(1)
y(1) will be considered before z(1)
The Adversary can remove data items not yet considered
52
Case 1: y(1) is taken
[Figure: the graph with edges u(k), x(1), y(1), z(1)]
The Algorithm constructs the path {u, y}, at cost k + 1
The Adversary outputs the solution {x, z}, at cost 2
53
Case 2: y(1) is rejected
The Algorithm has failed to construct a path. The Adversary outputs a solution {u,y} and wins the game.
[Figure: the graph with edges u(k), x(1), y(1), z(1)]
54
The outcome of the game:
Either the Algorithm fails to output a solution, or it achieves an approximation ratio of (k+1)/2
The Adversary can set k arbitrarily large, and thus can force the Algorithm into an arbitrarily large approximation ratio
55
Conclusion
No fixed priority algorithm can achieve any constant approximation ratio
Dijkstra’s algorithm for the SSSP can be classified as an adaptive priority algorithm; it solves the ShortPath problem exactly
56
Future work
Improve upper and lower bounds for Priority algorithms
Define extended models of priority algorithms
Beyond greedy algorithms
57
Close the gaps
Metric Steiner Tree problem:
The known 2-approximation belongs to the class of fixed priority algorithms
The current lower bound for adaptive priority algorithms is 1.18, for a space with distances {1, 2}
Can we close the gap? Can we prove a lower bound for metric spaces with arbitrary distances?
58
Priority algorithms for other problems
What kind of lower bounds can we prove for the weighted independent set problem?
What kind of lower bounds can we prove for graph coloring?
59
Extended priority models
Global information: Suppose the algorithm knows the length of the instance ( |V| or |E| or number of jobs, etc.)
There are ‘greedy algorithms’ that use this information
What kinds of lower bounds can we prove for this model?
60
More extensions of the model
Local information - information encoded in a single data item
What if the algorithm is allowed to see the neighborhood of the current vertex?
Or the two highest priority jobs, in case of job scheduling problems?
What kinds of lower bounds can we prove for this model?
61
Beyond greedy
Define similar framework for backtracking and dynamic programming algorithms
What are the limits of these techniques?
62
Defining characteristics of backtracking algorithms
Backtracking algorithms build a depth-first search pruning tree
Leaves of the search tree are solutions
Children of an internal node represent the choices for a given data item
63
Fixed backtracking algorithms
[Figure: a search tree whose levels correspond to data items 1, 2, …, n]
64
Fixed backtracking algorithms
The algorithm orders the universe of data items
The decision is irrevocable: the algorithm commits to a set of options for each data item
We want to relate the quality of the solution to the fraction of leaves inspected