Lecture 2: General Problem-Solving Methods
Page 1: Lecture 2: General Problem-Solving Methods. Greedy Method Divide-and-Conquer Backtracking Dynamic Programming Graph Traversal Linear Programming Reduction.

Lecture 2:

General Problem-Solving Methods

Page 2:

Greedy Method

Divide-and-Conquer

Backtracking

Dynamic Programming

Graph Traversal

Linear Programming

Reduction Method

Page 3:

Greedy Method

The greedy method consists of an iteration of the following computations:

selection procedure - chooses the next item to add to the list of locally optimal solutions to subproblems.

feasibility check - tests whether the current set of choices satisfies some locally optimal criterion.

solution check - when a complete set is obtained, it is checked to verify that it constitutes a solution to the original problem.

A question that should come to mind is: What is meant by locally optimal?

This term refers to the amount of work necessary to determine a solution or to measure the level to which a criterion is met. If the computation leading to an optimal solution or the evaluation of a criterion does not involve an exhaustive search then that activity may be considered local.
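The slides give no code, but the iteration of selection, feasibility, and solution checks can be sketched with the classic greedy coin-change problem (an illustrative example, not from the lecture):

```python
def greedy_coin_change(amount, denominations):
    """Greedy sketch: selection / feasibility / solution check."""
    coins = []
    for coin in sorted(denominations, reverse=True):  # selection: largest coin first
        while amount >= coin:                         # feasibility: doesn't overshoot
            coins.append(coin)
            amount -= coin
    return coins if amount == 0 else None             # solution check

print(greedy_coin_change(63, [25, 10, 5, 1]))  # [25, 25, 10, 1, 1, 1]
```

Each choice is "local": the algorithm never searches ahead, which is why greedy methods are fast but only sometimes optimal.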

Page 4:

Minimum Spanning Trees

[Figure: a weighted undirected graph on vertices A-G, with the edge weights listed below.]

The minimum spanning tree problem is to find the minimum weight tree embedded in a weighted graph that includes all the vertices.

Weighted graph data representations

edge list: AB 1, AE 2, BC 1, BD 2, BE 5, BF 2, BG 2, CG 4, DE 3, DG 1, EF 1, FG 2

matrix:

      A  B  C  D  E  F  G
  A   -  1  -  -  2  -  -
  B   1  -  1  2  5  2  2
  C   -  1  -  -  -  -  4
  D   -  2  -  -  3  -  1
  E   2  5  -  3  -  1  -
  F   -  2  -  -  1  -  2
  G   -  2  4  1  -  2  -

Which data representation would you use in an implementation of a minimum spanning tree algorithm? Why?
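One greedy answer to the minimum spanning tree problem is Prim's algorithm (not named on the slide; this is an illustrative sketch that builds an adjacency list from the edge list above and uses a heap for the selection step):

```python
import heapq

# Edge list from the slide, as (weight, endpoint, endpoint) triples
edges = [(1, 'A', 'B'), (2, 'A', 'E'), (1, 'B', 'C'), (2, 'B', 'D'),
         (5, 'B', 'E'), (2, 'B', 'F'), (2, 'B', 'G'), (4, 'C', 'G'),
         (3, 'D', 'E'), (1, 'D', 'G'), (1, 'E', 'F'), (2, 'F', 'G')]

adj = {}
for w, u, v in edges:
    adj.setdefault(u, []).append((w, v))
    adj.setdefault(v, []).append((w, u))

def prim(adj, start):
    # Greedy selection: always take the cheapest edge leaving the tree.
    # Feasibility: the edge must reach a vertex not already in the tree.
    in_tree = {start}
    heap = list(adj[start])
    heapq.heapify(heap)
    total = 0
    while heap and len(in_tree) < len(adj):
        w, v = heapq.heappop(heap)
        if v in in_tree:
            continue
        in_tree.add(v)
        total += w
        for edge in adj[v]:
            if edge[1] not in in_tree:
                heapq.heappush(heap, edge)
    return total

print(prim(adj, 'A'))  # 8
```

The adjacency-list form suits sparse graphs like this one; the matrix form makes the "is there an edge (u,v)?" test O(1) but scans O(|V|) entries per vertex.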

Page 5:

Divide-and-Conquer

As its name implies this method involves dividing a problem into smaller problems that can be more easily solved. While the specifics can vary from one application to another, divide-and-conquer always includes the following three steps in some form:

Divide - Typically this step involves splitting one problem into two subproblems, each approximately half the size of the original problem.

Conquer - The divide step is repeated (usually recursively) until individual problem sizes are small enough to be solved (conquered) directly.

Recombine - The solution to the original problem is obtained by combining all the solutions to the sub-problems.

Divide and Conquer is not applicable to every problem class. Even when D&C works it may not produce an efficient solution.
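As a concrete illustration of the three steps, here is a minimal mergesort (a standard divide-and-conquer example; the function name and details are our own, not from the slides):

```python
def merge_sort(items):
    # Conquer directly: a list of size 0 or 1 is already sorted
    if len(items) <= 1:
        return items
    # Divide: split into two halves of approximately equal size
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    # Recombine: merge the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5]))  # [1, 2, 5, 5, 9]
```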

Page 6:

Quicksort Example

 i  j    1  2  3  4  5  6  7  8  9 10 11 12 13 14

 2  1    6  9  8  5  7  4  6  3  5  4  7  6  0  5
 4  2    6  5  8  9  7  4  6  3  5  4  7  6  0  5
 6  3    6  5  4  9  7  8  6  3  5  4  7  6  0  5
 8  4    6  5  4  3  7  8  6  9  5  4  7  6  0  5
 9  5    6  5  4  3  5  8  6  9  7  4  7  6  0  5
10  6    6  5  4  3  5  4  6  9  7  8  7  6  0  5
13  7    6  5  4  3  5  4  0  9  7  8  7  6  6  5
14  8    6  5  4  3  5  4  0  5  7  8  7  6  6  9
    8    5  5  4  3  5  4  0  6  7  8  7  6  6  6

The pivot value is 6 (the first item). Index i scans for the next item smaller than the pivot, index j marks the boundary of the small-item region, and the items at positions i and j are swapped. In the final row the pivot is swapped into position 8, leaving positions 1-7 and 9-14 as the new sublists for the next pass of quicksort.
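The trace above can be sketched as a quicksort that, like the slide, uses the first element of each sublist as the pivot (a sketch; the slide's exact index scheme may differ slightly):

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort, first element of each sublist as pivot."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[lo]
    j = lo                        # a[lo+1..j] holds the items < pivot
    for i in range(lo + 1, hi + 1):
        if a[i] < pivot:          # i scans for the next small item
            j += 1
            a[i], a[j] = a[j], a[i]
    a[lo], a[j] = a[j], a[lo]     # swap pivot between the two sublists
    quicksort(a, lo, j - 1)       # new sublists for the next pass
    quicksort(a, j + 1, hi)
    return a

data = [6, 9, 8, 5, 7, 4, 6, 3, 5, 4, 7, 6, 0, 5]
print(quicksort(list(data)))
```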

Page 7:

Backtracking

Backtracking is used to solve problems in which a feasible solution is needed rather than an optimal one, such as the solution to a maze or the arrangement of squares in the 15-puzzle. Typically, the solution to a backtracking problem is a sequence of items (or objects) chosen from a set of alternatives that satisfy some criterion.

A backtracking algorithm is a scheme for solving a series of sub-problems each of which may have multiple possible solutions and where the solution chosen for one sub-problem can affect the possible solutions of later sub-problems.

To solve the overall problem, we find a solution to the first sub-problem and then attempt to recursively solve the other sub-problems based on this first solution. If we cannot, or we want all possible solutions, we backtrack and try the next possible solution to the first sub-problem and so on.

Backtracking terminates when there are no more solutions to the first sub-problem or a solution to the overall problem is found.

http://dictionary.die.net/backtracking

Page 8:

N-Queens Problem

A classic backtracking algorithm is the solution to the N-Queens problem. In this problem you are to place N queens (chess pieces) on an NxN chessboard in such a way that no two queens are directly attacking one another. That is, no two queens share the same row, column, or diagonal on the board.

Backtracking Approach - Version 1: Until all queens are placed, choose the first available location and put the next queen in this position. If queens remain to be placed and no space is left, backtrack (by removing the last queen placed and placing it in the next available position).
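Version 1 can be sketched recursively: place one queen per row, and return to the caller (backtrack) when no column in the current row is safe. This is an illustrative implementation, not code from the slides:

```python
def solve_queens(n, placed=()):
    """placed[r] is the column of the queen in row r."""
    row = len(placed)
    if row == n:
        return list(placed)      # solution check: all queens placed
    for col in range(n):         # try the next available position
        # feasible if no earlier queen shares this column or a diagonal
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            result = solve_queens(n, placed + (col,))
            if result:
                return result
    return None                  # dead end: caller backtracks

print(solve_queens(4))  # [1, 3, 0, 2]
```

Rows need not be checked explicitly because the scheme places exactly one queen per row.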

Page 9:

Dynamic Programming

In mathematics and computer science, dynamic programming is a method of solving complex problems by breaking them down into simpler steps. It is applicable to problems that exhibit the properties of overlapping subproblems and optimal substructure.

Overlapping subproblems means that the space of subproblems is small; that is, any recursive algorithm solving the problem will solve the same subproblems over and over rather than generating new subproblems. Dynamic programming takes advantage of this fact and solves each subproblem only once.

Optimal substructure means that the solution to a given optimization problem can be obtained by combining optimal solutions to its subproblems. Consequently, the first step towards devising a dynamic programming solution is to check whether the problem exhibits optimal substructure.
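The overlapping-subproblems property is easiest to see with Fibonacci numbers (a standard illustration, not from the slides): naive recursion recomputes the same values exponentially many times, while memoization solves each subproblem once.

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # cache makes each subproblem solved only once
def fib(n):
    # Without the cache, fib(n) recomputes fib(k) exponentially often;
    # with it, the n + 1 distinct subproblems give linear total work.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025
```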

Page 10:

The Binomial Coefficient

(a+b)^0 = 1

(a+b)^1 = a + b

(a+b)^2 = a^2 + 2ab + b^2

(a+b)^3 = a^3 + 3a^2b + 3ab^2 + b^3

(a+b)^4 = a^4 + 4a^3b + 6a^2b^2 + 4ab^3 + b^4

The binomial theorem gives a closed-form expression for the coefficient of any term in the expansion of a binomial raised to the nth power.

Pascal's triangle:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1

The binomial theorem:

(a+b)^n = sum for k = 0 to n of C(n,k) a^(n-k) b^k

where the binomial coefficient C(n,k) is given by

C(n,k) = n! / (k! (n-k)!)    for 0 <= k <= n

and satisfies the recurrence

C(n,k) = C(n-1,k-1) + C(n-1,k)    for 0 < k < n
C(n,k) = 1                        for k = 0 or k = n

The binomial coefficient is also the number of combinations of n items taken k at a time, sometimes called n-choose-k.
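Computing C(n,k) directly from the recurrence has overlapping subproblems, so it is a natural dynamic-programming exercise: fill Pascal's triangle row by row, solving each subproblem once (an illustrative sketch, not from the slides).

```python
def binomial(n, k):
    """C(n, k) via the recurrence C(n,k) = C(n-1,k-1) + C(n-1,k),
    filling Pascal's triangle bottom-up one row at a time."""
    row = [1]                                      # row for n = 0
    for _ in range(n):
        # each interior entry is the sum of the two entries above it
        row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]
    return row[k]

print(binomial(5, 2))  # 10, matching row n = 5 of the triangle above
```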

Page 11:

Graph Traversal

Graph traversal refers to the problem of visiting all the nodes in a graph in a particular manner. Graph traversal can be used as a problem-solving method when a problem state can be represented as a graph and its solution can be represented as a path in the graph.

When the graph is a tree it can represent the problem space for a wide variety of combinatorial problems. In these cases the solution is usually at one of the leaf-nodes of the tree or is the path to a particular leaf-node.

Techniques such as branch-and-bound can be used to reduce the number of operations required to search the graph or tree problem space by eliminating infeasible or unpromising branches.
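Solving a problem as a path in a graph can be sketched with breadth-first traversal (an illustrative example; the "maze" graph here is our own, not from the slides):

```python
from collections import deque

def bfs_path(adj, start, goal):
    """Breadth-first traversal that records the path taken to each node,
    so the answer to the problem is itself a path in the graph."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                      # first path found is shortest
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None                              # goal unreachable

maze = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E']}
print(bfs_path(maze, 'A', 'E'))  # ['A', 'B', 'D', 'E']
```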

Page 12:

Traveling Salesperson with Branch-and-Bound

[Figure: a directed graph on cities A-H.]

Distance matrix (row = from, column = to):

      A   B   C   D   E   F   G   H
  A   -   5   7   6   4  10   8   9
  B   8   -  14   9   3   4   6   2
  C   7   9   -  11  10   9   5   7
  D  16   6   8   -   5   7   7   9
  E   1   3   2   5   -   8   6   7
  F  12   8   5   3   2   -  10  13
  G   9   5   7   9   6   3   -   4
  H   3   9   6   8   5   7   9   -

In the most general case the distance between each pair of cities is a positive value with dist(A,B) ≠ dist(B,A). In the matrix representation, the main-diagonal values are omitted (i.e. dist(A,A) = 0).

Page 13:

Linear Programming (LP)

Linear programming (LP) is a mathematical method for determining a way to achieve the best outcome in a given mathematical model for some list of requirements represented as linear equations.

More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Linear programs are problems that can be expressed in canonical form:

Maximize: c^T x

Subject to: Ax <= b, x >= 0

where x represents the vector of variables (to be determined), c and b are vectors of known coefficients, and A is a known matrix of coefficients. The expression to be maximized or minimized is called the objective function (c^T x). The inequalities Ax <= b and x >= 0 are the constraints.

http://en.wikipedia.org/wiki/Linear_programming
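For a bounded LP the optimum lies at a vertex of the feasible polytope, so a tiny two-variable instance can be solved by enumerating the intersections of constraint boundaries (a toy illustration only; real solvers such as the simplex method walk between vertices instead of enumerating them all):

```python
from itertools import combinations

def solve_2d_lp(c, A, b):
    """Maximize c.x subject to Ax <= b and x >= 0, for two variables,
    by brute-force vertex enumeration (illustrative, not practical)."""
    rows = A + [[-1, 0], [0, -1]]    # fold x >= 0 into the constraint list
    rhs = b + [0, 0]
    best = None
    for (a1, b1), (a2, b2) in combinations(zip(rows, rhs), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if det == 0:
            continue                 # parallel boundary lines: no vertex
        x = (b1 * a2[1] - a1[1] * b2) / det   # Cramer's rule
        y = (a1[0] * b2 - b1 * a2[0]) / det
        if all(r[0] * x + r[1] * y <= s + 1e-9 for r, s in zip(rows, rhs)):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, (x, y))
    return best   # (objective value, (x, y)), or None if infeasible

# maximize 3x + 2y subject to x + y <= 4 and x <= 3 (and x, y >= 0)
print(solve_2d_lp([3, 2], [[1, 1], [1, 0]], [4, 3]))  # (11.0, (3.0, 1.0))
```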

Page 14:

The simplex method is a method for solving problems in linear programming. This method, invented by George Dantzig in 1947, tests adjacent vertices of the feasible set (which is a polytope) in sequence so that at each new vertex the objective function improves or is unchanged. The simplex method is very efficient in practice, generally taking 2m to 3m iterations at most (where m is the number of equality constraints), and converging in expected polynomial time for certain distributions of random inputs. However, its worst-case complexity is exponential.

The Simplex Method

[Figure: a polytope representing the feasible solution set; each facet represents a limiting constraint, and the simplex method moves along the surface from vertex to vertex toward an optimal point.]

Page 15:

Reduction Method

In computability theory and computational complexity theory, a reduction is a transformation of one problem into another problem. Depending on the kind of transformation used, reductions can be used to define complexity classes on a set of problems.

Intuitively, problem A is reducible to problem B if solutions to B exist and give solutions to A whenever A has solutions. Thus, solving A cannot be harder than solving B. We write A ≤ B, usually with a subscript on the ≤ to indicate the type of reduction being used.
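The idea can be sketched with a small example of our own (not from the slides): element distinctness (problem A) reduces to sorting (problem B), so detecting duplicates is no harder than sorting plus a cheap transformation.

```python
def sort_list(xs):
    """'Problem B': a solver we already have."""
    return sorted(xs)

def has_duplicates(xs):
    """'Problem A' reduced to B: call the B-solver, then read off the
    A-answer with one linear scan (equal items end up adjacent)."""
    s = sort_list(xs)
    return any(s[i] == s[i + 1] for i in range(len(s) - 1))

print(has_duplicates([3, 1, 4, 1, 5]))  # True
print(has_duplicates([3, 1, 4, 5]))     # False
```

Since the extra work is linear, A <= B here under polynomial-time (indeed linear-time) reduction.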

Page 16:

Using Reduction to Show that Vertex Cover is NP-Complete

3-SATISFIABILITY (3SAT) - Instance: A set U of variables and a collection C of clauses over U such that each clause c in C has size exactly 3. Question: Is there a truth assignment for U satisfying C?

VERTEX COVER - Instance: An undirected graph G = (V, E) and an integer K. Question: Is there a vertex cover of size K or less for G, i.e., a subset V' of V with |V'| <= K such that every edge has at least one endpoint in V'?

Claim: VERTEX COVER is NP-complete

Proof: Cook proved in 1971 that 3SAT is NP-complete. Next, we know that VERTEX COVER is in NP because we could verify any proposed solution in polynomial time with a simple O(n^2) examination of all the edges for endpoint inclusion in the given vertex cover.
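The polynomial-time verifier mentioned above can be sketched directly (an illustrative implementation; the example graph is our own):

```python
def is_vertex_cover(edges, cover, k):
    """Verifier placing VERTEX COVER in NP: check that the certificate
    has size at most k and that every edge has an endpoint in it.
    One pass over the edges, so the check is clearly polynomial."""
    cover = set(cover)
    return len(cover) <= k and all(u in cover or v in cover
                                   for u, v in edges)

path = [('A', 'B'), ('B', 'C'), ('C', 'D')]
print(is_vertex_cover(path, ['B', 'C'], 2))  # True
print(is_vertex_cover(path, ['A', 'D'], 2))  # False: edge B-C is uncovered
```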

