Page 1: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 3

Chapter Overview: Search

Motivation
Objectives
Search as Problem-Solving
  problem formulation
  problem types
Uninformed Search
  breadth-first
  depth-first
  uniform-cost search
  depth-limited search
  iterative deepening
  bi-directional search
Informed Search
  best-first search
  search with heuristics
  memory-bounded search
  iterative improvement search
Constraint Satisfaction
Important Concepts and Terms
Chapter Summary

Page 2: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 6

Examples

getting from home to Cal Poly
  start: home on Clearview Lane
  goal: Cal Poly CSC Dept.
  operators: move one block, turn

loading a moving truck
  start: apartment full of boxes and furniture
  goal: empty apartment, all boxes and furniture in the truck
  operators: select item, carry item from apartment to truck, load item

getting settled
  start: items randomly distributed over the place
  goal: satisfactory arrangement of items
  operators: select item, move item

Page 3: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 8

Motivation

search strategies are important methods for many approaches to problem-solving
the use of search requires an abstract formulation of the problem and the available steps to construct solutions
search algorithms are the basis for many optimization and planning methods

Page 4: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 9

Objectives

formulate appropriate problems as search tasks
  states, initial state, goal state, successor functions (operators), cost
know the fundamental search strategies and algorithms
  uninformed search: breadth-first, depth-first, uniform-cost, iterative deepening, bi-directional
  informed search: best-first (greedy, A*), heuristics, memory-bounded, iterative improvement
evaluate the suitability of a search strategy for a problem
  completeness, time & space complexity, optimality

Page 5: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 11

Problem-Solving Agents

agents whose task it is to solve a particular problem

goal formulation
  what is the goal state?
  what are important characteristics of the goal state?
  how does the agent know that it has reached the goal?
  are there several possible goal states?
    are they equal, or are some preferable?

problem formulation
  what are the possible states of the world relevant for solving the problem?
  what information is accessible to the agent?
  how can the agent progress from state to state?

Page 6: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 12

Problem Formulation

formal specification for the task of the agent
  goal specification
  states of the world
  actions of the agent
identify the type of the problem
  what knowledge does the agent have about the state of the world and the consequences of its own actions?
  does the execution of the task require up-to-date information?
    if so, sensing is necessary during execution

Page 7: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 14

Well-Defined Problems

problems with a readily available formal specification

initial state
  starting point from which the agent sets out
actions (operators, successor functions)
  describe the set of possible actions
state space
  set of all states reachable from the initial state by any sequence of actions
path
  sequence of actions leading from one state in the state space to another
goal test
  determines if a given state is the goal state

Page 8: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 15

Well-Defined Problems (cont.)

solution
  path from the initial state to a goal state
search cost
  time and memory required to calculate a solution
path cost
  determines the expenses of the agent for executing the actions in a path
  sum of the costs of the individual actions in a path
total cost
  sum of search cost and path cost
  overall cost for finding a solution

Page 9: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 16

Selecting States and Actions

states describe distinguishable stages during the problem-solving process
  dependent on the task and domain
actions move the agent from one state to another by applying an operator to a state
  dependent on states, capabilities of the agent, and properties of the environment
the choice of suitable states and operators can make the difference between a problem that can or cannot be solved (in principle, or in practice)

Page 10: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 17

Example: From Home to Cal Poly

states
  locations
    obvious: the buildings that contain your home and the Cal Poly CSC Dept.
    more difficult: intermediate states (blocks, street corners, sidewalks, entryways, ...), continuous transitions
  agent-centric states: moving, turning, resting, ...
operators
  depend on the choice of states, e.g. move_one_block
abstraction is necessary to omit irrelevant details
  valid: the abstract solution can be expanded into a detailed version
  useful: easier to solve than the detailed version

Page 11: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 18

Example Problems

toy problems
  vacuum world, 8-puzzle, 8-queens, cryptarithmetic, vacuum agent, missionaries and cannibals
real-world problems
  route finding, touring problems, traveling salesperson, VLSI layout, robot navigation, assembly sequencing, Web search

Page 12: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 19

Simple Vacuum World

states
  two locations, each dirty or clean
initial state
  any legitimate state
successor function (operators)
  left, right, suck
goal test
  all squares clean
path cost
  one unit per action

Properties: discrete locations, discrete dirt (binary), deterministic

Page 13: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 20

More Complex Vacuum Agent

states
  configuration of the room: dimensions, obstacles, dirtiness
initial state
  locations of agent and dirt
successor function (operators)
  move, turn, suck
goal test
  all squares clean
path cost
  one unit per action

Properties: discrete locations, discrete dirt, deterministic; d * 2^n states for dirt degree d and n locations

Page 14: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 21

8-Puzzle

states
  locations of the tiles (including the blank tile)
initial state
  any legitimate configuration
successor function (operators)
  move tile (alternatively: move blank)
goal test
  tiles arranged in the specified goal configuration
path cost
  one unit per move

Properties: abstraction leads to discrete configurations, discrete moves, deterministic; 9!/2 = 181,440 reachable states

Page 15: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 22

8-Queens

incremental formulation
  states
    arrangement of up to 8 queens on the board
  initial state
    empty board
  successor function (operators)
    add a queen to any square
  goal test
    all queens on the board, no queen attacked
  path cost
    irrelevant (all solutions equally valid)
  Properties: 3 * 10^14 possible sequences; can be reduced to 2,057

complete-state formulation
  states
    arrangement of 8 queens on the board
  initial state
    all 8 queens on the board
  successor function (operators)
    move a queen to a different square
  goal test
    no queen attacked
  path cost
    irrelevant (all solutions equally valid)
  Properties: good strategies can reduce the number of possible sequences considerably

Page 16: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 23

8-Queens Refined

simple formulations may lead to very high search costs
  64 squares, 8 queens ==> 64^8 possible sequences
more refined formulations trim the search space, but may introduce other constraints
  place queens only on "unattacked" squares
    much more efficient
    may not lead to a solution, depending on the initial moves
  move an attacked queen to another square in the same column, if possible to an "unattacked" square
    much more efficient

Page 17: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 24

Cryptarithmetic

states
  puzzle with letters and digits
initial state
  only letters present
successor function (operators)
  replace all occurrences of a letter by a digit not used yet
goal test
  only digits in the puzzle, and the calculation is correct
path cost
  irrelevant (all solutions are equally valid)

Page 18: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 25

Missionaries and Cannibals

states
  number of missionaries, cannibals, and boats on the banks of a river
  illegal states: missionaries are outnumbered by cannibals on either bank
initial state
  all missionaries, cannibals, and boats are on one bank
successor function (operators)
  transport a set of up to two participants to the other bank:
  {1 missionary} | {1 cannibal} | {2 missionaries} | {2 cannibals} | {1 missionary and 1 cannibal}
goal test
  nobody left on the initial river bank
path cost
  number of crossings

also known as "goats and cabbage", "wolves and sheep", etc.

Page 19: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 26

Route Finding

states
  locations
initial state
  starting point
successor function (operators)
  move from one location to another
goal test
  arrive at a certain location
path cost
  may be quite complex: money, time, travel comfort, scenery, ...

Page 20: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 27

Traveling Salesperson

states
  locations / cities
  illegal states: each city may be visited only once
  visited cities must be kept as state information
initial state
  starting point, no cities visited
successor function (operators)
  move from one location to another
goal test
  all locations visited, agent back at the initial location
path cost
  distance between locations

Page 21: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 28

VLSI Layout

states
  positions of components and wires on a chip
initial state
  incremental: no components placed
  complete-state: all components placed (e.g. randomly, manually)
successor function (operators)
  incremental: place component, route wire
  complete-state: move component, move wire
goal test
  all components placed, components connected as specified
path cost
  may be complex: distance, capacity, number of connections per component

Page 22: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 29

Robot Navigation

states
  locations, positions of actuators
initial state
  start position (dependent on the task)
successor function (operators)
  movement, actions of actuators
goal test
  task-dependent
path cost
  may be very complex: distance, energy consumption

Page 23: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 30

Assembly Sequencing

states
  locations of components
initial state
  no components assembled
successor function (operators)
  place component
goal test
  system fully assembled
path cost
  number of moves

Page 24: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 31

Searching for Solutions

traversal of the search space
  from the initial state to a goal state
  legal sequence of actions as defined by the successor function (operators)
general procedure
  check for goal state
  expand the current state
    determine the set of reachable states
    return "failure" if the set is empty
  select one from the set of reachable states
  move to the selected state
a search tree is generated
  nodes are added as more states are visited

Page 25: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 32

Search Terminology

search tree
  generated as the search space is traversed
  the search space itself is not necessarily a tree; frequently it is a graph
  the tree specifies possible paths through the search space
expansion of nodes
  as states are explored, the corresponding nodes are expanded by applying the successor function
    this generates a new set of (child) nodes
  the fringe (frontier) is the set of nodes not yet visited
    newly generated nodes are added to the fringe
search strategy
  determines the selection of the next node to be expanded
  can be achieved by ordering the nodes in the fringe
    e.g. queue (FIFO), stack (LIFO), "best" node w.r.t. some measure (cost)

Page 26: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 33

Example: Graph Search

[Figure: example search graph with nodes S 3, A 4, B 2, C 2, D 3, E 1, G 0 and numeric edge labels]

the graph describes the search (state) space
  each node in the graph represents one state in the search space
    e.g. a city to be visited in a routing or touring problem
  this graph has additional information
    names and properties for the states (e.g. S, 3)
    links between nodes, specified by the successor function
    properties for links (distance, cost, name, ...)

Page 27: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 34

Graph and Tree

[Figure: the example search graph and the search tree obtained by fully expanding it from the initial state S; the same state names and edge costs appear repeatedly in the tree]

the tree is generated by traversing the graph
  the same node in the graph may appear repeatedly in the tree
  the arrangement of the tree depends on the traversal strategy (search method)
the initial state becomes the root node of the tree
in the fully expanded tree, the goal states are the leaf nodes
cycles in graphs may result in infinite branches

Page 28: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 38

General Tree Search Algorithm

function TREE-SEARCH(problem, fringe) returns solution
  fringe := INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node := REMOVE-FIRST(fringe)
    if GOAL-TEST[problem] applied to STATE[node] succeeds then return SOLUTION(node)
    fringe := INSERT-ALL(EXPAND(node, problem), fringe)

generate the node from the initial state of the problem
repeat
  return failure if there are no more nodes in the fringe
  examine the current node; if it is a goal, return the solution
  expand the current node, and add the new nodes to the fringe
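
For illustration, a minimal Python sketch of the TREE-SEARCH skeleton above. The Node class and the problem interface (initial_state, goal_test, successors) are assumptions made for this sketch, not part of the slides; the ordering of the fringe (FIFO, LIFO, priority) is what turns it into a concrete search strategy.

from collections import deque

class Node:
    # search-tree node: a state plus bookkeeping (parent link, action, path cost)
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state, self.parent = state, parent
        self.action, self.path_cost = action, path_cost

def expand(node, problem):
    # apply the successor function: (action, resulting state, step cost) triples
    return [Node(s, node, a, node.path_cost + c)
            for a, s, c in problem.successors(node.state)]

def tree_search(problem, fringe, remove_first):
    # generic TREE-SEARCH: remove_first decides which fringe node is expanded next
    fringe.append(Node(problem.initial_state))
    while fringe:
        node = remove_first(fringe)
        if problem.goal_test(node.state):
            return node                           # SOLUTION(node)
        fringe.extend(expand(node, problem))      # INSERT-ALL
    return None                                   # failure

def breadth_first(problem):   # FIFO queue
    return tree_search(problem, deque(), lambda f: f.popleft())

def depth_first(problem):     # LIFO stack
    return tree_search(problem, deque(), lambda f: f.pop())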

Page 29: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 40

Evaluation Criteria

completeness
  if there is a solution, will it be found?
time complexity
  how long does it take to find the solution?
  does not include the time to perform actions
space complexity
  memory required for the search
optimality
  will the best solution be found?

main factors for complexity considerations: branching factor b, depth d of the shallowest goal node, maximum path length m

Page 30: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 41

Search Cost and Path Cost

the search cost indicates how expensive it is to generate a solution
  time complexity (e.g. number of nodes generated) is usually the main factor
  sometimes space complexity (memory usage) is considered as well
the path cost indicates how expensive it is to execute the solution found in the search
  distinct from the search cost, but often related
the total cost is the sum of search cost and path cost

Page 31: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 42

Selection of a Search Strategy

most of the effort is often spent on the selection of an appropriate search strategy for a given problem
  uninformed search (blind search)
    number of steps and path cost are unknown
    the agent knows when it reaches a goal
  informed search (heuristic search)
    the agent has background information about the problem: a map, costs of actions

Page 32: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 43

Search Strategies

Uninformed Search
  breadth-first
  depth-first
  uniform-cost search
  depth-limited search
  iterative deepening
  bi-directional search
  constraint satisfaction

Informed Search
  best-first search
  search with heuristics
  memory-bounded search
  iterative improvement search

Page 33: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 44

Breadth-First

all the nodes reachable from the current node are explored first
  achieved by the TREE-SEARCH method by appending newly generated nodes at the end of the search queue

function BREADTH-FIRST-SEARCH(problem) returns solution
  return TREE-SEARCH(problem, FIFO-QUEUE())

b  branching factor
d  depth of the tree
Time Complexity    b^(d+1)
Space Complexity   b^(d+1)
Completeness       yes (for finite b)
Optimality         yes (if all step costs are equal)
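
A small self-contained Python sketch of breadth-first search over an explicit successor relation; the graph at the end is hypothetical and only serves to make the example runnable.

from collections import deque

def breadth_first_search(start, goal_test, successors):
    # FIFO fringe: the shallowest unexpanded node is always chosen next,
    # so the shallowest goal is found first
    fringe = deque([[start]])            # fringe of paths, each ending in the node to expand
    while fringe:
        path = fringe.popleft()
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):
            fringe.append(path + [nxt])
    return None                          # failure: fringe exhausted

# hypothetical graph for illustration only
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['D', 'E'],
         'C': ['G'], 'D': ['E', 'G'], 'E': ['G'], 'G': []}
print(breadth_first_search('S', lambda s: s == 'G', graph.get))   # ['S', 'A', 'C', 'G']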

Page 34: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 45

Breadth-First Snapshot 1
[Figure: search tree snapshot, visible nodes 1-3; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [] + [2,3]

Page 35: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 46

Breadth-First Snapshot 2
[Figure: search tree snapshot, visible nodes 1-5; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [3] + [4,5]

Page 36: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 47

Breadth-First Snapshot 3
[Figure: search tree snapshot, visible nodes 1-7; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [4,5] + [6,7]

Page 37: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 48

Breadth-First Snapshot 4
[Figure: search tree snapshot, visible nodes 1-9; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [5,6,7] + [8,9]

Page 38: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 49

Breadth-First Snapshot 5
[Figure: search tree snapshot, visible nodes 1-11; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [6,7,8,9] + [10,11]

Page 39: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 50

Breadth-First Snapshot 6
[Figure: search tree snapshot, visible nodes 1-13; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [7,8,9,10,11] + [12,13]

Page 40: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 51

Breadth-First Snapshot 7
[Figure: search tree snapshot, visible nodes 1-15; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [8,9,10,11,12,13] + [14,15]

Page 41: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 52

Breadth-First Snapshot 8
[Figure: search tree snapshot, visible nodes 1-17; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [9,10,11,12,13,14,15] + [16,17]

Page 42: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 53

Breadth-First Snapshot 9
[Figure: search tree snapshot, visible nodes 1-19; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [10,11,12,13,14,15,16,17] + [18,19]

Page 43: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 54

Breadth-First Snapshot 10
[Figure: search tree snapshot, visible nodes 1-21; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [11,12,13,14,15,16,17,18,19] + [20,21]

Page 44: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 55

Breadth-First Snapshot 11
[Figure: search tree snapshot, visible nodes 1-23; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [12, 13, 14, 15, 16, 17, 18, 19, 20, 21] + [22,23]

Page 45: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 56

Breadth-First Snapshot 12
[Figure: search tree snapshot, visible nodes 1-25; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [13,14,15,16,17,18,19,20,21,22,23] + [24,25]

Note: The goal node is "visible" here, but we cannot perform the goal test yet.

Page 46: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 57

Breadth-First Snapshot 13
[Figure: search tree snapshot, visible nodes 1-27; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [14,15,16,17,18,19,20,21,22,23,24,25] + [26,27]

Page 47: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 58

Breadth-First Snapshot 14
[Figure: search tree snapshot, visible nodes 1-29; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [15,16,17,18,19,20,21,22,23,24,25,26,27] + [28,29]

Page 48: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 59

Breadth-First Snapshot 15
[Figure: search tree snapshot, visible nodes 1-31; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29] + [30,31]

Page 49: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 60

Breadth-First Snapshot 16
[Figure: search tree snapshot, visible nodes 1-31; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]

Page 50: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 61

Breadth-First Snapshot 17
[Figure: search tree snapshot, visible nodes 1-31; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [18,19,20,21,22,23,24,25,26,27,28,29,30,31]

Page 51: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 62

Breadth-First Snapshot 18
[Figure: search tree snapshot, visible nodes 1-31; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [19,20,21,22,23,24,25,26,27,28,29,30,31]

Page 52: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 63

Breadth-First Snapshot 19
[Figure: search tree snapshot, visible nodes 1-31; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [20,21,22,23,24,25,26,27,28,29,30,31]

Page 53: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 64

Breadth-First Snapshot 20
[Figure: search tree snapshot, visible nodes 1-31; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [21,22,23,24,25,26,27,28,29,30,31]

Page 54: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 65

Breadth-First Snapshot 21
[Figure: search tree snapshot, visible nodes 1-31; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [22,23,24,25,26,27,28,29,30,31]

Page 55: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 66

Breadth-First Snapshot 22
[Figure: search tree snapshot, visible nodes 1-31; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [23,24,25,26,27,28,29,30,31]

Page 56: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 67

Breadth-First Snapshot 23
[Figure: search tree snapshot, visible nodes 1-31; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [24,25,26,27,28,29,30,31]

Page 57: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 68

Breadth-First Snapshot 24
[Figure: search tree snapshot, visible nodes 1-31; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [25,26,27,28,29,30,31]

Note: The goal test is positive for this node, and a solution is found in 24 steps.

Page 58: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 69

Uniform-Cost Search

the nodes with the lowest path cost are explored first
  similar to BREADTH-FIRST, but with an evaluation of the cost for each reachable node
  g(n) = path cost(n) = sum of the individual edge costs to reach the current node

function UNIFORM-COST-SEARCH(problem) returns solution
  return TREE-SEARCH(problem, COST-FN, FIFO-QUEUE())

b   branching factor
C*  cost of the optimal solution
e   minimum cost per action
Time Complexity    b^(C*/e)
Space Complexity   b^(C*/e)
Completeness       yes (finite b, step costs >= e > 0)
Optimality         yes
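
A sketch of uniform-cost search in Python, using a priority queue ordered by the accumulated path cost g(n); the weighted graph at the end is hypothetical.

import heapq

def uniform_cost_search(start, goal_test, successors):
    # fringe ordered by g(n); applying the goal test when a node is removed
    # from the fringe (not when it is generated) preserves optimality
    fringe = [(0, start, [start])]        # (g, state, path)
    best_g = {}
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return g, path
        if state in best_g and best_g[state] <= g:
            continue                      # this state was already reached more cheaply
        best_g[state] = g
        for nxt, step_cost in successors(state):
            heapq.heappush(fringe, (g + step_cost, nxt, path + [nxt]))
    return None

# hypothetical weighted graph for illustration only
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 9)], 'B': [('G', 3)], 'G': []}
print(uniform_cost_search('S', lambda s: s == 'G', graph.get))    # (6, ['S', 'A', 'B', 'G'])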

Page 59: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 70

Uniform-Cost Snapshot
[Figure: search tree snapshot with edge costs; legend: Initial, Visited, Fringe, Current, Visible, Goal; fringe entries are shown as node(path cost)]

Fringe: [27(10), 4(11), 25(12), 26(12), 14(13), 24(13), 20(14), 15(16), 21(18)] + [22(16), 23(15)]

Page 60: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 71

Uniform-Cost Fringe Trace

1. [1(0)]
2. [3(3), 2(4)]
3. [2(4), 6(5), 7(7)]
4. [6(5), 5(6), 7(7), 4(11)]
5. [5(6), 7(7), 13(8), 12(9), 4(11)]
6. [7(7), 13(8), 12(9), 10(10), 11(10), 4(11)]
7. [13(8), 12(9), 10(10), 11(10), 4(11), 14(13), 15(16)]
8. [12(9), 10(10), 11(10), 27(10), 4(11), 26(12), 14(13), 15(16)]
9. [10(10), 11(10), 27(10), 4(11), 26(12), 25(12), 14(13), 24(13), 15(16)]
10. [11(10), 27(10), 4(11), 25(12), 26(12), 14(13), 24(13), 20(14), 15(16), 21(18)]
11. [27(10), 4(11), 25(12), 26(12), 14(13), 24(13), 20(14), 23(15), 15(16), 22(16), 21(18)]
12. [4(11), 25(12), 26(12), 14(13), 24(13), 20(14), 23(15), 15(16), 23(16), 21(18)]
13. [25(12), 26(12), 14(13), 24(13), 8(13), 20(14), 23(15), 15(16), 23(16), 9(16), 21(18)]
14. [26(12), 14(13), 24(13), 8(13), 20(14), 23(15), 15(16), 23(16), 9(16), 21(18)]
15. [14(13), 24(13), 8(13), 20(14), 23(15), 15(16), 23(16), 9(16), 21(18)]
16. [24(13), 8(13), 20(14), 23(15), 15(16), 23(16), 9(16), 29(16), 21(18), 28(21)]

Goal reached!

Notation: bold + yellow: current node; white: old fringe node; green + italics: new fringe node.
Assumption: new nodes with the same cost as existing nodes are added after the existing node.

Page 61: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 72

Breadth-First vs. Uniform-Cost

breadth-first always expands the shallowest node
  only optimal if all step costs are equal
uniform-cost considers the overall path cost
  optimal for any (reasonable) cost function: step costs non-zero and positive
  gets bogged down in trees with many fruitless, short branches: low path cost, but no goal node
both are complete for non-extreme problems
  finite number of branches
  strictly positive cost function

Page 62: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 73

Depth-First

continues exploring newly generated nodes first
  achieved by the TREE-SEARCH method by appending newly generated nodes at the beginning of the search queue
  utilizes a Last-In, First-Out (LIFO) queue, or stack

function DEPTH-FIRST-SEARCH(problem) returns solution
  return TREE-SEARCH(problem, LIFO-QUEUE())

b  branching factor
m  maximum path length
Time Complexity    b^m
Space Complexity   b*m
Completeness       no (for infinite branch length)
Optimality         no
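
A sketch of depth-first search with an explicit stack (LIFO fringe); like the plain TREE-SEARCH variant on the slide, it does not detect repeated states, so the hypothetical example graph is kept acyclic.

def depth_first_search(start, goal_test, successors):
    # LIFO fringe: the most recently generated node is expanded next
    fringe = [[start]]                   # stack of paths
    while fringe:
        path = fringe.pop()
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):
            fringe.append(path + [nxt])
    return None

# hypothetical acyclic graph for illustration only
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['G'], 'C': ['G'], 'G': []}
print(depth_first_search('S', lambda s: s == 'G', graph.get))     # ['S', 'B', 'G']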

Page 63: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 74

Depth-First Snapshot
[Figure: search tree snapshot, visible nodes 1-31; legend: Initial, Visited, Fringe, Current, Visible, Goal]

Fringe: [3] + [22,23]

Page 64: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 75

Depth-First vs. Breadth-First

depth-first goes off into one branch until it reaches a leaf node
  not good if the goal is on another branch
  neither complete nor optimal
  uses much less space than breadth-first: far fewer visited nodes to keep track of, smaller fringe
breadth-first is more careful by checking all alternatives
  complete and optimal (under most circumstances)
  very memory-intensive

Page 65: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 76

Backtracking Search

variation of depth-first search
  only one successor node is generated at a time
    even better space complexity: O(m) instead of O(b*m)
  even more memory can be saved by incrementally modifying the current state, instead of creating a new one
    only possible if the modifications can be undone; this is referred to as backtracking
  frequently used in planning and theorem proving

Page 66: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 77

Depth-Limited Search

similar to depth-first, but with a depth limit
  overcomes problems with infinite paths
  sometimes a depth limit can be inferred or estimated from the problem description
    in other cases, a good depth limit is only known once the problem is solved
  based on the TREE-SEARCH method
  must keep track of the depth

function DEPTH-LIMITED-SEARCH(problem, depth-limit) returns solution
  return TREE-SEARCH(problem, depth-limit, LIFO-QUEUE())

b  branching factor
l  depth limit
Time Complexity    b^l
Space Complexity   b*l
Completeness       no (the goal may lie beyond l, or branches may be infinite)
Optimality         no
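
A recursive Python sketch of depth-limited search; it distinguishes a "cutoff" result (the limit was hit, so deeper solutions may still exist) from plain failure, which the iterative deepening method on the next slide relies on. The function and constant names are assumptions for this sketch.

CUTOFF = 'cutoff'

def depth_limited_search(state, goal_test, successors, limit):
    # depth-first search that refuses to expand nodes below the depth limit
    if goal_test(state):
        return [state]
    if limit == 0:
        return CUTOFF                     # limit reached, result inconclusive
    cutoff_occurred = False
    for nxt in successors(state):
        result = depth_limited_search(nxt, goal_test, successors, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [state] + result       # prepend the current state to the solution path
    return CUTOFF if cutoff_occurred else None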

Page 67: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 78

Iterative Deepening

applies DEPTH-LIMITED search with increasing depth limits
  combines the advantages of the BREADTH-FIRST and DEPTH-FIRST methods
  many states are expanded multiple times
    this doesn't really matter, because the number of those nodes is small
  in practice, one of the best uninformed search methods for large search spaces of unknown depth

function ITERATIVE-DEEPENING-SEARCH(problem) returns solution
  for depth := 0 to unlimited do
    result := DEPTH-LIMITED-SEARCH(problem, depth)
    if result != cutoff then return result

b  branching factor
d  tree depth
Time Complexity    b^d
Space Complexity   b*d
Completeness       yes
Optimality         yes (if all step costs are identical)
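
A sketch of iterative deepening in Python: run a depth-limited search with limits 0, 1, 2, ... until something other than "cutoff" comes back. A compact depth-limited helper is repeated here so the block runs on its own; names are assumptions for this sketch.

from itertools import count

CUTOFF = 'cutoff'

def dls(state, goal_test, successors, limit):
    # depth-limited DFS returning a path, CUTOFF, or None (failure)
    if goal_test(state):
        return [state]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for nxt in successors(state):
        result = dls(nxt, goal_test, successors, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return CUTOFF if cutoff_occurred else None

def iterative_deepening_search(start, goal_test, successors):
    for depth in count():                 # depth limits 0, 1, 2, ...
        result = dls(start, goal_test, successors, depth)
        if result != CUTOFF:
            return result                 # a solution path, or None if the space is exhausted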

Page 68: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 79

Bi-directional Search

search simultaneously from two directions
  forward from the initial state and backward from the goal state
may lead to substantial savings if it is applicable
has severe limitations
  predecessors must be generated, which is not always possible
  the search must be coordinated between the two searches
  one search must keep all nodes in memory

b  branching factor
d  tree depth
Time Complexity    b^(d/2)
Space Complexity   b^(d/2)
Completeness       yes (b finite, breadth-first in both directions)
Optimality         yes (all step costs identical, breadth-first in both directions)

Page 69: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 80

Improving Search Methods

make algorithms more efficient
  avoiding repeated states
  utilizing memory efficiently
use additional knowledge about the problem
  properties ("shape") of the search space
    more interesting areas are investigated first
  pruning of irrelevant areas
    areas that are guaranteed not to contain a solution can be discarded

Page 70: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 81

Avoiding Repeated States

in many approaches, states may be expanded multiple times
  e.g. iterative deepening, problems with reversible actions
eliminating repeated states may yield an exponential reduction in search cost
  e.g. some n-queens strategies
    place the queen in the left-most non-threatening column
  e.g. a rectangular grid
    4^d leaves, but only about 2d^2 distinct states

Page 71: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 82

Informed Search

relies on additional knowledge about the problem or domain
  frequently expressed through heuristics ("rules of thumb")
  used to distinguish more promising paths towards a goal
    may be misled, depending on the quality of the heuristic
in general, performs much better than uninformed search
  but frequently still exponential in time and space for realistic problems

Page 72: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 83

Best-First Search

relies on an evaluation function that gives an indication of how useful it would be to expand a node
  family of search methods with various evaluation functions
  the function usually gives an estimate of the distance to the goal
  often referred to as heuristics in this context
the node with the lowest value is expanded first
  the name is a little misleading: the node with the lowest value for the evaluation function is not necessarily on an optimal path to a goal
  if we really knew which node is the best, there would be no need to search

function BEST-FIRST-SEARCH(problem, EVAL-FN) returns solution
  fringe := queue with nodes ordered by EVAL-FN
  return TREE-SEARCH(problem, fringe)

Page 73: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 84

Greedy Best-First Search

minimizes the estimated cost to a goal
  expand the node that seems to be closest to a goal
  utilizes a heuristic function as the evaluation function
    f(n) = h(n) = estimated cost from the current node to a goal
    heuristic functions are problem-specific
    often the straight-line distance for route-finding and similar problems
often better than depth-first, although the worst-case time complexity is the same and the space complexity is worse

Completeness       no
Time Complexity    b^m
Space Complexity   b^m
Optimality         no

b: branching factor, m: maximum depth of the search tree

function GREEDY-SEARCH(problem) returns solution
  return BEST-FIRST-SEARCH(problem, h)
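
A sketch of greedy best-first search in Python: a best-first search whose fringe is ordered only by the heuristic h(n). The visited-set bookkeeping is a small addition to keep the sketch from revisiting states; the graph and heuristic would be supplied by the problem.

import heapq

def greedy_best_first_search(start, goal_test, successors, h):
    # fringe ordered by h(n) alone: always expand the node that looks closest to a goal
    fringe = [(h(start), start, [start])]
    visited = set()
    while fringe:
        _, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in successors(state):
            heapq.heappush(fringe, (h(nxt), nxt, path + [nxt]))
    return None                           # failure: fringe exhausted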

Page 74: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 85

Greedy Best-First Search Snapshot
[Figure: search tree snapshot with heuristic values at the nodes; legend: Initial, Visited, Fringe, Current, Visible, Goal; fringe entries are shown as node(heuristic value)]

Fringe: [13(4), 7(6), 8(7)] + [24(0), 25(1)]

Page 75: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 86

A* Search

combines greedy and uniform-cost search to find the (estimated) cheapest path through the current node
  f(n) = g(n) + h(n) = path cost + estimated cost to the goal
  the heuristic must be admissible
    it never overestimates the cost to reach the goal
very good search method, but with complexity problems

function A*-SEARCH(problem) returns solution
  return BEST-FIRST-SEARCH(problem, g+h)

Completeness       yes
Time Complexity    b^d
Space Complexity   b^d
Optimality         yes

b: branching factor, d: depth of the solution
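
A compact A* sketch in Python with f(n) = g(n) + h(n); as stated on the slide, the heuristic is assumed to be admissible. The weighted graph and heuristic values below are hypothetical.

import heapq

def a_star_search(start, goal_test, successors, h):
    # fringe ordered by f = g + h; with an admissible h, the first goal node
    # removed from the fringe lies on an optimal path
    fringe = [(h(start), 0, start, [start])]      # (f, g, state, path)
    best_g = {}
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return g, path
        if state in best_g and best_g[state] <= g:
            continue
        best_g[state] = g
        for nxt, step_cost in successors(state):
            g2 = g + step_cost
            heapq.heappush(fringe, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# hypothetical weighted graph and admissible heuristic for illustration only
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 9)], 'B': [('G', 3)], 'G': []}
h = {'S': 5, 'A': 4, 'B': 2, 'G': 0}.get
print(a_star_search('S', lambda s: s == 'G', graph.get, h))       # (6, ['S', 'A', 'B', 'G'])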

Page 76: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 87

A* Snapshot
[Figure: search tree snapshot with edge costs, heuristic values, and f-costs; legend: Initial, Visited, Fringe, Current, Visible, Goal; fringe entries are shown as node(sum of the g and h components)]

Fringe: [2(4+7), 13(3+2+3+4), 7(3+4+6)] + [24(3+2+4+4+0), 25(3+2+4+3+1)]

Page 77: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 88

A* Snapshot with all f-Costs
[Figure: the same search tree with edge costs, heuristic values, and the f-cost shown for every visible node]

Page 78: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 89

A* Properties

the value of f never decreases along any path starting from the initial node
  also known as monotonicity of the function
  almost all admissible heuristics show monotonicity
    those that don't can be modified through minor changes
this property can be used to draw contours
  regions where the f-cost is below a certain threshold
  with uniform-cost search (h = 0), the contours are circular
  the better the heuristic h, the narrower the contour around the optimal path

Page 79: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 90

A* Snapshot with Contour f=11

[Figure: the A* search tree with edge costs, heuristic values, and f-costs, and the contour enclosing the nodes whose f-cost is at most 11]

Page 80: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 91

A* Snapshot with Contour f=13

[Figure: the A* search tree with edge costs, heuristic values, and f-costs, and the contour enclosing the nodes whose f-cost is at most 13]

Page 81: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 92

Optimality of A*

A* will find the optimal solution
  the first solution found is the optimal one
A* is optimally efficient
  no other algorithm is guaranteed to expand fewer nodes than A*
A* is not always "the best" algorithm
  optimality refers to the expansion of nodes; other criteria might be more relevant
  it generates and keeps all nodes in memory
    improved in variations of A*

Page 82: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 93

Complexity of A*

the number of nodes within the goal contour search space is still exponential in the length of the solution
  better than other algorithms, but still problematic
frequently, space complexity is more severe than time complexity
  A* keeps all generated nodes in memory

Page 83: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 94

Memory-Bounded Search

search algorithms that try to conserve memory
most are modifications of A*
  iterative deepening A* (IDA*)
  simplified memory-bounded A* (SMA*)

Page 84: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 95

Iterative Deepening A* (IDA*)

explores paths within a given contour (f-cost limit) in a depth-first manner
  this saves memory, because depth-first search keeps only the current path in memory
  but it results in repeated computation of earlier contours, since it doesn't remember its history
was the "best" search algorithm for many practical problems for some time
does have problems with difficult domains
  contours differ only slightly between states
  the algorithm frequently switches back and forth
    similar to disk thrashing in (old) operating systems

Page 85: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 96

Recursive Best-First Search

similar to best-first search, but with lower space requirements
  O(b*d) instead of O(b^m)
it keeps track of the best alternative to the current path
  the best f-value of the paths explored so far from predecessors of the current node
  if it needs to re-explore parts of the search space, it knows the best candidate path
  still may lead to multiple re-explorations

Page 86: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 97

Simplified Memory-Bounded A* (SMA*)

uses all available memory for the search
  drops nodes from the queue when it runs out of space
    those with the highest f-costs are dropped first
  avoids re-computation of already explored areas
    keeps information about the best path of a "forgotten" subtree in its ancestor
complete if there is enough memory for the shortest solution path
often better than A* and IDA*
  but some problems are still too tough
  trade-off between time and space requirements

Page 87: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 98

Heuristics for Searching

for many tasks, a good heuristic is the key to finding a solution
  prune the search space
  move towards the goal
relaxed problems
  fewer restrictions on the successor function (operators)
  the exact solution of the relaxed problem may be a good heuristic for the original problem

Page 88: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 99

8-Puzzle Heuristics

level of difficulty
  around 20 steps for a typical solution
  branching factor is about 3
  exhaustive search to depth 20 would examine about 3^20 ≈ 3.5 * 10^9 states
  but there are only 9!/2 = 181,440 distinct reachable states (arrangements of the 9 squares)
candidates for heuristic functions
  number of tiles in the wrong position
  sum of the distances of the tiles from their goal positions (city block or Manhattan distance)
generation of heuristics
  possible from formal specifications
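
Python sketches of the two candidate heuristics above for the 8-puzzle; encoding the state as a tuple of 9 entries in row-major order, with 0 for the blank, is an assumption made for this example.

def misplaced_tiles(state, goal):
    # h1: number of tiles (not counting the blank) that are not in their goal position
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal):
    # h2: sum of the city-block distances of each tile from its goal position
    pos = {tile: divmod(i, 3) for i, tile in enumerate(state)}
    total = 0
    for i, tile in enumerate(goal):
        if tile != 0:
            goal_row, goal_col = divmod(i, 3)
            row, col = pos[tile]
            total += abs(row - goal_row) + abs(col - goal_col)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)
print(misplaced_tiles(state, goal), manhattan_distance(state, goal))   # 2 2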

Page 89: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 100

Local Search and Optimization

for some problem classes, it is sufficient to find a solution
  the path to the solution is not relevant
memory requirements can be dramatically relaxed by modifying the current state
  all previous states can be discarded
  since only information about the current state is kept, such methods are called local

Page 90: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 101

Iterative Improvement Search

for some problems, the state description provides all the information required for a solution
  path costs become irrelevant
  a global maximum or minimum corresponds to the optimal solution
iterative improvement algorithms start with some configuration and try modifications to improve the quality
  8-queens: number of un-attacked queens
  VLSI layout: total wire length
analogy: the state space as a landscape with hills and valleys

Page 91: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 102

Hill-Climbing Search

continually moves uphill
  i.e. in the direction of increasing values of the evaluation function
  gradient descent search is a variation that moves downhill
very simple strategy with low space requirements
  stores only the current state and its evaluation, no search tree
problems
  local maxima: the algorithm can't go higher, but is not at a satisfactory solution
  plateaus: areas where the evaluation function is flat
  ridges: the search may oscillate and progress only slowly
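
A minimal hill-climbing sketch in Python: move to the best neighbor until no neighbor improves the evaluation, stopping at the first local maximum (which illustrates the limitation noted above). The value/neighbors interface and the toy example are assumptions.

def hill_climbing(state, value, neighbors):
    # greedy local search: keep only the current state and its evaluation
    current, current_value = state, value(state)
    while True:
        best, best_value = None, current_value
        for n in neighbors(current):
            v = value(n)
            if v > best_value:
                best, best_value = n, v
        if best is None:
            return current, current_value     # local maximum (not necessarily global)
        current, current_value = best, best_value

# toy example: maximize -(x - 3)^2 over the integers, stepping by +/- 1
print(hill_climbing(0, lambda x: -(x - 3) ** 2, lambda x: [x - 1, x + 1]))   # (3, 0)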

Page 92: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 103

Simulated Annealing

similar to hill-climbing, but some downhill movement is allowed
  a random move is made instead of the best move
  depends on two parameters: ∆E, the energy (evaluation) difference between moves, and T, the temperature
  the temperature is slowly lowered, making bad moves less likely
analogy to annealing
  gradual cooling of a liquid until it freezes
will find the global optimum if the temperature is lowered slowly enough
applied to routing and scheduling problems
  VLSI layout, scheduling
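
A sketch of simulated annealing consistent with the description above: improving moves are always accepted, worse moves with probability exp(∆E/T), and T is lowered by a geometric cooling schedule. The schedule parameters and the toy example are assumptions for this sketch.

import math
import random

def simulated_annealing(state, value, random_neighbor, t0=1.0, cooling=0.995, t_min=1e-3):
    current, current_value = state, value(state)
    t = t0
    while t > t_min:
        candidate = random_neighbor(current)
        delta_e = value(candidate) - current_value    # > 0 means an uphill (better) move
        # always accept improvements; accept worse moves with probability exp(delta_e / t)
        if delta_e > 0 or random.random() < math.exp(delta_e / t):
            current, current_value = candidate, current_value + delta_e
        t *= cooling                                  # gradually lower the temperature
    return current, current_value

# toy example: maximize -(x - 3)^2 with random +/- 1 moves
print(simulated_annealing(0, lambda x: -(x - 3) ** 2,
                          lambda x: x + random.choice([-1, 1])))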

Page 93: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 104

Local Beam Search

variation of beam search, a path-based method that looks at several paths "around" the current one
  keeps k states in memory, instead of only one
  information between the states can be shared
    the search moves to the most promising areas
stochastic local beam search
  selects the k successor states randomly, with a probability determined by the evaluation function

Page 94: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 105

Genetic Algorithms (GAs)

variation of stochastic beam search
  successor states are generated as variations of two parent states, not just one
  corresponds to natural selection with sexual reproduction

Page 95: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 106

GA Terminology

population
  set of k randomly generated states
generation
  the population at a point in time
  usually, propagation is synchronized for the whole population
individual
  one element from the population
  described as a string over a finite alphabet (binary, ACGT, letters, digits)
    consistent for the whole population
fitness function
  the evaluation function, in search terminology
  higher values lead to better chances for reproduction

Page 96: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 107

GA Principles

reproduction
  the state descriptions of the two parents are split at the crossover point
    determined in advance, often randomly chosen; must be the same for both parents
  one part is combined with the other part of the other parent
  one or both of the descendants may be added to the population
  compatible state descriptions should assure viable descendants
    depends on the choice of the representation
    the descendants may not have a high fitness value
mutation
  each individual may be subject to random modifications in its state description
  usually applied with a low probability
schema
  useful components of a solution can be preserved across generations

Page 97: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 108

GA Applications

often used for optimization problems
  circuit layout, system design, scheduling
termination
  "good enough" solution found
  no significant improvements over several generations
  time limit

Page 98: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 109

Constraint Satisfaction

relies on additional structural properties of the problem
  may depend on the representation of the problem
the problem is defined through a set of variables and a set of domains
  variables can take possible values as specified by the problem
  constraints describe allowable combinations of values for a subset of the variables
state in a CSP
  defined by an assignment of values to some or all variables
solution to a CSP
  must assign values to all variables
  must satisfy all constraints
  solutions may be ranked according to an objective function

Page 99: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 110

CSP Approach

the goal test is decomposed into a set of constraints on variables
  the search checks for violation of constraints before new nodes are generated
    it must backtrack if constraints are violated
  forward-checking looks ahead to detect unsolvability
    based on the current values of the constraint variables

Page 100: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 111

CSP Example: Map Coloring

color a map with three colors so that adjacent countries have different colors

[Figure: map with regions A, B, C, D, E, F, G]

variables: A, B, C, D, E, F, G
values: {red, green, blue}
constraints: "no neighboring regions have the same color"
legal combinations for A, B: {(red, green), (red, blue), (green, red), (green, blue), (blue, red), (blue, green)}

Page 101: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 112

Constraint Graph

visual representation of a CSP
  variables are nodes
  arcs are constraints

[Figure: constraint graph for the map coloring example, with nodes A through G]

the map coloring example represented as a constraint graph

Page 102: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 113

Benefits of CSP

standardized representation pattern
  variables with assigned values
  constraints on the values
allows the use of generic heuristics
  no domain knowledge is required

Page 103: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 114

CSP as Incremental Search Problem

initial state
  all (or at least some) variables unassigned
successor function
  assign a value to an unassigned variable
  must not conflict with previously assigned variables
goal test
  all variables have values assigned
  no conflicts possible (they are not allowed in the successor function)
path cost
  e.g. a constant for each step
  may be problem-specific

Page 104: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 115

CSPs and Search

in principle, any search algorithm can be used to solve a CSP
  awful branching factor: n*d for n variables with d values at the top level, (n-1)*d at the next level, etc.
  not very efficient, since it neglects some CSP properties
    commutativity: the order in which values are assigned to variables is irrelevant, since the outcome is the same

Page 105: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 116

Backtracking Search for CSPs

a variation of depth-first search that is often used for CSPs
  values are chosen for one variable at a time
  if no legal values are left, the algorithm backs up and changes a previous assignment
very easy to implement
  initial state, successor function, and goal test are standardized
not very efficient
  can be improved by trying to select more suitable unassigned variables first (a sketch follows below)
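
A sketch of backtracking search for a CSP in Python, applied to the map-coloring formulation from the earlier slide; the adjacency relation below is hypothetical (the actual map appears only as a figure), and variables are simply picked in a fixed order rather than with the heuristics on the next slide.

def backtracking_search(variables, domains, consistent, assignment=None):
    # assign one variable at a time; back up when a variable has no legal value left
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                      # all variables assigned without conflicts
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(consistent(var, value, other, assignment[other]) for other in assignment):
            assignment[var] = value
            result = backtracking_search(variables, domains, consistent, assignment)
            if result is not None:
                return result
            del assignment[var]                # undo the assignment and try the next value
    return None                                # triggers backtracking in the caller

# map coloring with a hypothetical adjacency relation, for illustration only
neighbors = {'A': {'B', 'C'}, 'B': {'A', 'C', 'D'}, 'C': {'A', 'B', 'D', 'E'},
             'D': {'B', 'C', 'E', 'F'}, 'E': {'C', 'D', 'F', 'G'},
             'F': {'D', 'E', 'G'}, 'G': {'E', 'F'}}
colors = ['red', 'green', 'blue']

def different_if_adjacent(v1, x1, v2, x2):
    return x1 != x2 or v2 not in neighbors[v1]

print(backtracking_search(list(neighbors), {v: colors for v in neighbors},
                          different_if_adjacent))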

Page 106: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 117

Heuristics for CSP

most-constrained variable (minimum remaining values, "fail-first")
  the variable with the fewest possible values is selected
  tends to minimize the branching factor
most-constraining variable
  the variable with the largest number of constraints on other unassigned variables is selected
least-constraining value
  for a selected variable, choose the value that leaves more freedom for future choices

Page 107: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 118

Analyzing Constraints

forward checking
  when a value X is assigned to a variable, inconsistent values are eliminated for all variables connected to X
  identifies "dead" branches of the tree before they are visited
constraint propagation
  analyzes interdependencies between variable assignments via arc consistency
    an arc between X and Y is consistent if for every possible value x of X, there is some value y of Y that is consistent with x
  more powerful than forward checking, but still reasonably efficient
  does not, however, reveal every possible inconsistency

Page 108: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 119

Local Search and CSP

local search (iterative improvement) is frequently used for constraint satisfaction problems
  values are assigned to all variables
  modification operators move the configuration towards a solution
often called heuristic repair methods
  they repair inconsistencies in the current configuration
simple strategy: min-conflicts
  minimizes the number of conflicts with other variables
  solves many problems very quickly
    e.g. the million-queens problem in less than 50 steps
  can be run as an online algorithm
    use the current state as the new initial state

Page 109: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 120

Analyzing Problem Structures

some problem properties can be derived from the structure of the respective constraint graph
  isolated sub-problems
    no connections to other parts of the graph
    can be solved independently, e.g. "islands" in map-coloring problems
    dividing a problem into independent sub-problems reduces complexity tremendously
      ideally from exponential to polynomial or even linear
  trees
    if the constraint graph is a tree, the CSP can be solved in time linear in the number of variables
    sometimes solutions can be found by reducing a general graph to a tree
      nodes are removed or collapsed

Page 110: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 121

CSP Example: Map Coloring (cont.)

applying the most-constrained-variable heuristic

[Figure: the map with regions A through G being colored step by step]

Page 111: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 122

8-Queens with Min-Conflicts

start with one queen in each column
  usually several conflicts
calculate the number of conflicts for each possible position of a selected queen
move the queen to the position with the fewest conflicts (a sketch follows below)
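
A Python sketch of the min-conflicts repair strategy for n-queens: keep one queen per column, repeatedly pick a conflicted queen at random, and move it within its column to the row with the fewest conflicts. Picking the conflicted queen at random, breaking ties by the lowest row, and the step limit are simplifying assumptions.

import random

def conflicts(rows, col, row):
    # number of queens attacking square (col, row); the queen in column c is at row rows[c]
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n=8, max_steps=10000):
    rows = [random.randrange(n) for _ in range(n)]     # one queen per column
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
        if not conflicted:
            return rows                                # no queen is attacked: solution found
        col = random.choice(conflicted)
        # move the selected queen to the row with the fewest conflicts in its column
        rows[col] = min(range(n), key=lambda r: conflicts(rows, col, r))
    return None                                        # no solution within max_steps

print(min_conflicts())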

Page 112: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 123

8-Queens Example 1

[Figure: 8-queens board showing the number of conflicts for each candidate square of the selected queen]

Page 113: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 124

8-Queens Example 1

[Figure: the board after the first move, with updated conflict counts]

Page 114: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 125

8-Queens Example 1 (cont.)

a solution is found in 4 steps
the min-conflicts heuristic uses additional heuristics to select the "best" queen to move
  e.g. try to move out of the corners
  similar to the least-constraining-value heuristic

Page 115: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 126

8-Queens Example 2

[Figure: 8-queens board showing the number of conflicts for each candidate square of the selected queen]

Page 116: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 127

8-Queens Example 2

[Figure: the board after the next move, with updated conflict counts]

Page 117: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 128

8-Queens Example 2

[Figure: the board after a further move, with updated conflict counts]

Page 118: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 133

CSP Properties

discrete variables over finite domains
  relatively simple, e.g. map coloring, 8-queens
  the number of variable assignments can be d^n (domain size d, n variables)
    exponential time complexity in the worst case
  in practice, generic CSP algorithms can solve problems much larger than regular search algorithms can
more complex problems may require the use of a constraint language
  it is not possible to enumerate all combinations of values
  e.g. job scheduling ("precedes", "duration")
related problems are studied in the field of Operations Research
  often continuous domains, e.g. linear programming

Page 119: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 136

Important Concepts and Terms

agent, A* search, best-first search, bi-directional search, breadth-first search, completeness, constraint satisfaction, depth-first search, depth-limited search, general search algorithm, genetic algorithm, goal, goal test function, greedy best-first search, heuristics, initial state, iterative deepening search, iterative improvement, local search, memory-bounded search, operator, optimality, path, path cost function, problem, recursive best-first search, search, space complexity, state, state space, time complexity, uniform-cost search

Page 120: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 137

Chapter Summary

tasks can often be formulated as search problems
  initial state, successor function (operators), goal test, path cost
various search methods systematically comb the search space
  uninformed search: breadth-first, depth-first, and variations
  informed search: best-first, A*, iterative improvement
the choice of good heuristics can improve the search dramatically
  task-dependent

Page 121: Chapter Overview Search

© 2000-2008 Franz Kurfess Search 138

