Page 1: Algorithm Design Methods (II)

Fall 2003

CSE, POSTECH

Page 2: Quick Sort

Quicksort can be seen as a variation of mergesort in which the "front" and "back" parts are defined by value (relative to a pivot) rather than by position.

Page 3: Quicksort Algorithm

Partition anArray into two non-empty parts.
– Pick any value in the array as the pivot.
– small = the elements in anArray < pivot
– large = the elements in anArray > pivot
– Place the pivot in either part, so as to make sure neither part is empty.

Sort small and large by recursively calling QuickSort.

You could use merge to combine them, but because the elements in small are smaller than the elements in large, simply concatenate small and large and put the result into anArray.
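A minimal runnable sketch of this scheme (the class and method names are mine, not from the slides; I keep the pivot between small and large rather than placing it inside one part, which guarantees both recursive calls shrink):

import java.util.ArrayList;
import java.util.List;

public class QuickSortSketch {
    // Partition into small (< pivot) and large (>= pivot), sort each, then concatenate.
    static List<Integer> quickSort(List<Integer> anArray) {
        if (anArray.size() <= 1) return anArray;     // base case: already sorted
        int pivot = anArray.get(0);                  // pick any value as the pivot
        List<Integer> small = new ArrayList<>();
        List<Integer> large = new ArrayList<>();
        for (int i = 1; i < anArray.size(); i++) {
            int x = anArray.get(i);
            if (x < pivot) small.add(x); else large.add(x);
        }
        List<Integer> sorted = new ArrayList<>(quickSort(small));
        sorted.add(pivot);                           // no merge needed: small < pivot <= large
        sorted.addAll(quickSort(large));
        return sorted;
    }

    public static void main(String[] args) {
        System.out.println(quickSort(List.of(5, 3, 8, 1, 9, 2)));  // [1, 2, 3, 5, 8, 9]
    }
}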

Page 4: Quicksort: Complexity Analysis

Like mergesort, a single invocation of quicksort on an array of size p has complexity O(p):
– p comparisons = 2*p accesses
– 2*p moves (copying) = 4*p accesses

Best case: every pivot chosen by quicksort partitions the array into two equal-sized parts. In this case quicksort has the same big-O complexity as mergesort: O(n log n).
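In the best case the cost obeys the standard divide-and-conquer recurrence (a sketch of the reasoning, not shown on the slides):

$$T(n) = 2\,T(n/2) + O(n) \;\Rightarrow\; T(n) = O(n \log n)$$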

Page 5: Quicksort: Complexity Analysis

Worst case: the pivot chosen is the largest or smallest value in the array. Partition creates one part of size 1 (containing only the pivot) and another of size p-1.

(Diagram: the recursion tree degenerates into a chain of subproblem sizes n, n-1, n-2, …, 1, each level splitting off a part of size 1.)

Page 6: Quicksort: Complexity Analysis

Worst case: there are n-1 invocations of quicksort (not counting base cases), with arrays of size p = n, n-1, n-2, …, 2.

Since each of these does O(p) work, the total number of accesses is O(n) + O(n-1) + … + O(1) = O(n^2).
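Summing these per-invocation costs makes the quadratic bound explicit:

$$\sum_{p=2}^{n} O(p) \;=\; O\!\left(\frac{n(n+1)}{2} - 1\right) \;=\; O(n^2)$$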

Ironically, the worst case occurs when the list is sorted (or nearly sorted)!

Page 7: Quicksort: Complexity Analysis

The average case must lie between the best case, O(n log n), and the worst case, O(n^2).

Analysis yields a complex recurrence relation. The average-case number of comparisons turns out to be approximately 1.386*n*log n – 2.846*n.
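For reference, the recurrence alluded to above is the standard average-case one (assuming each element is equally likely to be chosen as the pivot; this detail is not on the slides), whose solution is about 2n ln n ≈ 1.386 n log2 n comparisons:

$$C(n) = (n-1) + \frac{2}{n}\sum_{k=0}^{n-1} C(k), \qquad C(n) \approx 2n \ln n$$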

Therefore the average-case time complexity is O(n log n).

Page 8: Quicksort: Complexity Analysis

Best case: O(n log n)
Worst case: O(n^2)
Average case: O(n log n)

Note that quicksort is inferior to insertion sort and merge sort if the list is sorted, nearly sorted, or reverse sorted.

Page 9: Dynamic Programming

Sequence of decisions

Problem state

Principle of optimality

Page 10: Sequence of Decisions

As in the greedy method, the solution to a problem is viewed as the result of a sequence of decisions.

Unlike the greedy method, decisions are not made in a greedy manner.

Examine the decision sequence to see whether an optimal decision sequence contains optimal decision subsequences.

Page 11: Example: Matrix Chain Product

Consider M1 X M2, where M1 is n x m and M2 is m x q. Then the total number of scalar multiplications is n x m x q.

When there is a matrix chain product such as MCP(n) = M1 X M2 X M3 X … X Mn, find an order of the matrix products that results in the least number of scalar multiplications.

Example: M1 X M2 X M3 X M4 – Possible sequences:
(M1 X (M2 X (M3 X M4))),
((M1 X M2) X (M3 X M4)),
(((M1 X M2) X M3) X M4),
((M1 X (M2 X M3)) X M4),
(M1 X ((M2 X M3) X M4))
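A worked example, with dimensions chosen here for illustration (they are not from the slides): let M1 be 10 x 20, M2 be 20 x 5, and M3 be 5 x 30. Then

$$((M_1 M_2) M_3):\; 10\cdot 20\cdot 5 + 10\cdot 5\cdot 30 = 2500 \qquad (M_1 (M_2 M_3)):\; 20\cdot 5\cdot 30 + 10\cdot 20\cdot 30 = 9000$$

so the evaluation order changes the cost more than threefold even for a chain of three matrices.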

Page 12: Matrix Chain Product

Decision-based space decomposition
– Compute M1 X M2 first, then combine the result with the others, or
– Compute M2…Mn first, then combine with M1

Subspace for M1 X M2
– Compute M3…Mn

Subspace for M2…Mn
– Compute M3…Mn first, then combine with M2, or
– Compute M2 X M3 first, then combine with the others

Recursively break down the space by making one decision at a time. The final decision is based on the number of computations required (in a recursive implementation). But the subspaces repeat.

Use a bottom-up approach to avoid the repeated computation.

Page 13: Matrix Chain Product

Non-recursive computation of MCP
– Compute (M1:M2), (M2:M3), …, (Mn-1:Mn)
– Compute (M1:M3), (M2:M4), …, (Mn-2:Mn) using the lower-level results.
  Ex: (M1:M3) = min( (M1 X (M2:M3)), ((M1:M2) X M3) )
– Compute (M1:M4), (M2:M5), …, (Mn-3:Mn)
– …
– Finally compute (M1:Mn) = min( (M1 X (M2:Mn)), ((M1:M2) X (M3:Mn)), …, ((M1:Mn-1) X Mn) )

Time complexity
– The pass for chains of length k computes n-k entries, each minimizing over k candidate splits: O((n-k) X k)
– Summing over k = 1, …, n-1 gives total time complexity O(n^3)

Space complexity: O(n^2)
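A minimal bottom-up sketch of the table computation above (the encoding is mine: matrix Mi has dimensions dims[i-1] x dims[i], and cost[i][j] plays the role of (Mi:Mj)):

public class MatrixChainSketch {
    // cost[i][j] = least number of scalar multiplications for Mi x ... x Mj
    static int minMultiplications(int[] dims) {
        int n = dims.length - 1;                 // number of matrices in the chain
        int[][] cost = new int[n + 1][n + 1];    // cost[i][i] = 0: a single matrix costs nothing
        for (int len = 2; len <= n; len++) {     // the pass for chains of length len, as above
            for (int i = 1; i + len - 1 <= n; i++) {
                int j = i + len - 1;
                cost[i][j] = Integer.MAX_VALUE;
                for (int k = i; k < j; k++) {    // split as (Mi:Mk) X (Mk+1:Mj)
                    int c = cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j];
                    cost[i][j] = Math.min(cost[i][j], c);
                }
            }
        }
        return cost[1][n];
    }

    public static void main(String[] args) {
        // M1: 10x20, M2: 20x5, M3: 5x30 -- the worked example from the previous slide
        System.out.println(minMultiplications(new int[]{10, 20, 5, 30}));  // 2500
    }
}

The three nested loops over O(n^2) table entries, with an O(n) minimization per entry, are where the O(n^3) time bound comes from.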

Page 14: Backtracking

A systematic way to search for the solution to a problem. Begin by defining a solution space for the problem.
– Example: Rat in a maze => solution space: all possible paths to the destination
– 0/1 Knapsack => solution space: all possible 0/1 combinations

Organize the solution space so that it can be searched easily.
– Graph or tree

Search the space in a depth-first manner, beginning at the start node.

When there is no more space to search in the depth-first direction, backtrack to the most recent live node and expand from there.

The search terminates when the destination is reached or there are no more live nodes.
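A minimal sketch of this depth-first scheme on the 0/1 knapsack instance of the next slide (the class and method names are mine):

public class KnapsackBacktrack {
    static int n = 3, c = 15;                        // the instance from the example that follows
    static int[] w = {8, 6, 9}, p = {5, 4, 6};
    static int best = 0;

    // Depth-first search over xi in {1, 0}; an infeasible node is a dead end, so backtrack.
    static void search(int i, int weight, int profit) {
        if (weight > c) return;                      // capacity violated: backtrack
        if (i == n) { best = Math.max(best, profit); return; }
        search(i + 1, weight + w[i], profit + p[i]); // try xi = 1 first, as in the example
        search(i + 1, weight, profit);               // then xi = 0
    }

    public static void main(String[] args) {
        search(0, 0, 0);
        System.out.println(best);                    // 10: the optimal profit
    }
}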

Page 15: Backtracking: Example

0/1 Knapsack
– n = 3, c = 15, w = [8, 6, 9], p = [5, 4, 6]

Search space
– (0,-,-), …, (0,0,0), (0,0,1), (0,1,0), …, (1,1,1)

The search runs from (1,-,-) toward (0,0,0). (1,1,1) violates the capacity constraint, so backtrack to (1,1,-) and choose (1,1,0), which satisfies the constraints: a feasible solution with profit 9.

Continue the search using the other live nodes. (0,1,1) is another feasible solution with more profit: it is the optimal solution, with profit 10.

Page 16: Time Complexity

Exhaustive search: O(2^n)

Can speed up the search for an optimal solution by introducing a bounding function, which tests "whether a newly reached node can lead to a solution better than the best found so far."

The space needed for the search is only the path information from the start node to the current expansion node: O(length of the longest path).

Page 17: Branch and Bound

Another systematic way to search for the solution to a problem. Each live node becomes the E-node (expansion node) exactly once. When a node becomes an E-node, all new nodes that can be reached using a single move are generated.

Generated nodes that cannot possibly lead to an (optimal) feasible solution are discarded. The remaining nodes are added to the list of live nodes, and then one node from the list is selected to become the next E-node. The expansion process continues until either the answer is found or the list of live nodes becomes empty.

Selection method from the list of live nodes
– First-in first-out (BFS)
– Least cost or maximum profit
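A minimal FIFO branch-and-bound sketch for the instance on the next slide (names are mine; for brevity it discards children only on feasibility, whereas a full implementation would also discard nodes that cannot beat the best profit found so far):

import java.util.ArrayDeque;
import java.util.Queue;

public class KnapsackFifoBB {
    // Each node is (level, weight, profit); the queue holds the live nodes.
    static int solve(int[] w, int[] p, int c) {
        int n = w.length, best = 0;
        Queue<int[]> live = new ArrayDeque<>();
        live.add(new int[]{0, 0, 0});                    // root: no decisions made yet
        while (!live.isEmpty()) {
            int[] e = live.poll();                       // FIFO selection of the next E-node
            int i = e[0], weight = e[1], profit = e[2];
            if (i == n) { best = Math.max(best, profit); continue; }
            if (weight + w[i] <= c)                      // child xi = 1: kept only if feasible
                live.add(new int[]{i + 1, weight + w[i], profit + p[i]});
            live.add(new int[]{i + 1, weight, profit});  // child xi = 0: always feasible
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(solve(new int[]{8, 9, 6}, new int[]{5, 6, 4}, 15));  // 10
    }
}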

Page 18: Branch and Bound: Example

0/1 Knapsack
– n = 3, c = 15, w = [8, 9, 6], p = [5, 6, 4]

Search space
– (1,-,-), …, (0,0,0), (0,0,1), (0,1,0), …, (1,1,1)

The search starts from (-,-,-) and expands
– (1,-,-), (0,-,-): both are live nodes

Select (1,-,-) by FIFO selection and expand
– (1,1,-), (1,0,-): (1,1,-) is infeasible (bounded), but (1,0,-) is feasible

Select (0,-,-) by FIFO selection and expand
– (0,1,-), (0,0,-): both feasible

Select (1,0,-) by FIFO selection and expand
– (1,0,1), (1,0,0): (1,0,0) is feasible but has less profit
– (1,0,1) is a feasible solution with profit 9

Page 19: Branch and Bound: Example

0/1 Knapsack (continued)
– n = 3, c = 15, w = [8, 9, 6], p = [5, 6, 4]

Select (0,1,-) by FIFO selection and expand
– (0,1,1), (0,1,0): (0,1,1) is feasible with more profit (10)

Select (0,0,-) by FIFO selection and expand
– (0,0,1): feasible, but with less profit

The optimal solution is (0,1,1), with profit 10.

Page 20: Time Complexity

Worst case O(2^n), as for exhaustive search; the actual cost depends on the bounding function.

Page 21: Supplementary slides on dynamic programming

Page 22: Principle of Optimality

An optimal solution satisfies the following property: no matter what the first decision is, the remaining decisions are optimal with respect to the state that results from this decision.

Dynamic programming may be used only when the principle of optimality holds.

Page 23: 0/1 Knapsack Problem

Suppose that decisions are made in the order x1, x2, x3, …, xn.

Let x1=a1, x2=a2, …, xn=an be an optimal solution.

If a1 = 0, then following the first decision the state is (2,c).

a2, a3, …, an must be an optimal solution to the knapsack instance given by the state (2,c).

Page 24: x1 = a1 = 0

That is, a2, …, an must maximize Σ(i=2…n) pi xi subject to Σ(i=2…n) wi xi <= c and xi = 0 or 1 for all i.

If not, this instance has a better solution b2, b3, …, bn with Σ(i=2…n) pi bi > Σ(i=2…n) pi ai.

Page 25: x1 = a1 = 0

x1=a1, x2=b2, x3=b3, …, xn=bn

is a better solution to the original instance than is x1=a1, x2=a2, x3=a3, …, xn=an.

So x1=a1, x2=a2, x3=a3, …, xn=an cannot be an optimal solution … a contradiction with the assumption that it is optimal.

Page 26: x1 = a1 = 1

Next, consider the case a1 = 1. Following the first decision the state is (2, c-w1).

a2, a3, …, an must be an optimal solution to the knapsack instance given by the state (2, c-w1).

Page 27: x1 = a1 = 1

That is, a2, …, an must maximize Σ(i=2…n) pi xi subject to Σ(i=2…n) wi xi <= c - w1 and xi = 0 or 1 for all i.

If not, this instance has a better solution b2, b3, …, bn with Σ(i=2…n) pi bi > Σ(i=2…n) pi ai.

Page 28: x1 = a1 = 1

x1=a1, x2=b2, x3=b3, …, xn=bn

is a better solution to the original instance than is x1=a1, x2=a2, x3=a3, …, xn=an.

So x1=a1, x2=a2, x3=a3, …, xn=an cannot be an optimal solution … a contradiction with the assumption that it is optimal.

Page 29: 0/1 Knapsack Problem

Therefore, no matter what the first decision is, the remaining decisions are optimal with respect to the state that results from this decision.

The principle of optimality holds, and dynamic programming may be applied.

Page 30: Dynamic Programming Recurrence

f(n,y) is the value of the optimal solution to the knapsack instance defined by the state (n,y).
– Only item n is available.
– Available capacity is y.

If wn <= y, f(n,y) = pn.

If wn > y, f(n,y) = 0.

Page 31: Dynamic Programming Recurrence

Suppose that i < n. f(i,y) is the value of the optimal solution to the knapsack instance defined by the state (i,y).
– Items i through n are available.
– Available capacity is y.

Suppose that in the optimal solution for the state (i,y), the first decision is to set xi = 0.

From the principle of optimality, it follows that f(i,y) = f(i+1,y).

Page 32: Dynamic Programming Recurrence

The only other possibility for the first decision is xi = 1.

The case xi = 1 can arise only when y >= wi. From the principle of optimality, it follows that f(i,y) = f(i+1, y-wi) + pi.

Combining the two cases, we get
– f(i,y) = f(i+1,y) if y < wi
– f(i,y) = max{ f(i+1,y), f(i+1, y-wi) + pi } if y >= wi

Page 33: Recursive Code

/** @return f(i,y) for the knapsack recurrence above **/
private static int f(int i, int y)
{
   // base case: only item n remains
   if (i == n) return (y < w[n]) ? 0 : p[n];
   // item i does not fit: the only choice is xi = 0
   if (y < w[i]) return f(i+1, y);
   // otherwise take the better of leaving item i out and putting it in
   return Math.max(f(i+1, y), f(i+1, y - w[i]) + p[i]);
}

Page 34: Recursion Tree

Page 35: Time Complexity

Let t(n) be the time required when n items are available.

t(0) = t(1) = a, where a is a constant.

When n > 1, t(n) <= 2t(n-1) + b, where b is a constant.

So t(n) = O(2^n). Solving dynamic programming recurrences recursively can be hazardous to run time.
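Unrolling the recurrence shows where the exponential comes from:

$$t(n) \le 2\,t(n-1) + b \le 4\,t(n-2) + 3b \le \cdots \le 2^{n-1}\,t(1) + (2^{n-1} - 1)\,b = O(2^n)$$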

Page 36: Reducing Run Time

Page 37: Time Complexity

Level i of the recursion tree has up to 2^(i-1) nodes. At each such node an f(i,y) is computed. Several nodes may compute the same f(i,y). We can save time by not recomputing already-computed f(i,y)s: save computed f(i,y)s in a dictionary.
– The key is the (i,y) pair.
– f(i,y) is computed recursively only when (i,y) is not in the dictionary.
– Otherwise, the dictionary value is used.
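A sketch of this dictionary idea applied to the recursive code of Page 33 (the HashMap key encoding and the scaffolding are mine):

import java.util.HashMap;
import java.util.Map;

public class MemoizedKnapsack {
    static int n;
    static int[] w, p;                                  // 1-indexed, as in the recursive code
    static Map<Long, Integer> memo = new HashMap<>();   // the dictionary, keyed on (i,y)

    /** @return f(i,y), computing each distinct (i,y) at most once **/
    static int f(int i, int y) {
        long key = ((long) i << 32) | y;                // pack (i,y) into one key (assumes y >= 0)
        Integer cached = memo.get(key);
        if (cached != null) return cached;              // (i,y) already in the dictionary: reuse it
        int result;
        if (i == n) result = (y < w[n]) ? 0 : p[n];
        else if (y < w[i]) result = f(i + 1, y);
        else result = Math.max(f(i + 1, y), f(i + 1, y - w[i]) + p[i]);
        memo.put(key, result);
        return result;
    }

    public static void main(String[] args) {
        n = 3; w = new int[]{0, 8, 6, 9}; p = new int[]{0, 5, 4, 6};  // index 0 unused
        System.out.println(f(1, 15));                   // 10
    }
}

With the dictionary, each distinct (i,y) pair is solved at most once, so for integer capacities the running time drops from O(2^n) to O(n*c).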

