ITCS 3153: Artificial Intelligence
Lecture 3: Uninformed Searches
(mostly copied from Berkeley)
Outline
Problem Solving Agents
• Restricted form of general agent
Problem Types
• Fully vs. partially observable, deterministic vs. stochastic
Problem Formulation
• State space, initial state, successor function, goal test, path cost
Example Problems
Basic Search Algorithms
Problem Solving Agents
Restricted form of general agent:

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal, initially null
          problem, a problem definition

  state <- UPDATE-STATE(state, percept)
  if seq is empty then
    goal <- FORMULATE-GOAL(state)
    problem <- FORMULATE-PROBLEM(state, goal)
    seq <- SEARCH(problem)
  action <- RECOMMENDATION(seq, state)
  seq <- REMAINDER(seq, state)
  return action

Note: This is offline problem solving; the solution is found with "eyes closed." Online problem solving involves acting without complete knowledge.
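The control loop above can be sketched in Python. This is a minimal illustration, not the slides' implementation: the formulate/search callables are stand-in stubs, and UPDATE-STATE is trivial.

```python
# Sketch of SIMPLE-PROBLEM-SOLVING-AGENT: search once ("eyes closed"),
# then replay the stored action sequence one action per percept.
class SimpleProblemSolvingAgent:
    def __init__(self, formulate_goal, formulate_problem, search):
        self.seq = []                        # action sequence, initially empty
        self.state = None                    # description of current world state
        self.formulate_goal = formulate_goal
        self.formulate_problem = formulate_problem
        self.search = search

    def __call__(self, percept):
        self.state = percept                 # UPDATE-STATE, trivially
        if not self.seq:                     # plan only when out of actions
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq = self.search(problem)
        return self.seq.pop(0)               # RECOMMENDATION + REMAINDER

# Stub domain (hypothetical): searching always yields the plan [Right, Suck].
agent = SimpleProblemSolvingAgent(
    formulate_goal=lambda s: "clean",
    formulate_problem=lambda s, g: (s, g),
    search=lambda p: ["Right", "Suck"])
```

Note the offline character: the second percept is ignored for planning, since the agent is still executing the sequence it computed on the first call.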
Example: Romania
On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest.
Formulate Goal:
• be in Bucharest
Formulate Problem:
• states: various cities
• actions: drive between cities
Find Solution:
• Sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Example: Romania
Problem Types
Deterministic, fully observable → single-state problem
• Agent knows exactly what state it will be in; solution is a sequence
Non-observable → conformant problem
• Agent may have no idea where it is; solution (if any) is a sequence
Non-deterministic and/or partially observable → contingency problem
• Percepts provide new information about current state
• Solution is a tree or policy
• Often interleave search and execution
Unknown state space → exploration problem ("online")
Example: Vacuum World
Single-state, start in #5.
• Solution??
Example: Vacuum World
Single-state, start in #5.
• Solution: [Right, Suck]
Conformant, start in {1,2,3,4,5,6,7,8}
• E.g., Right goes to {2,4,6,8}
• Solution??
Example: Vacuum World
Single-state, start in #5.
• Solution: [Right, Suck]
Conformant, start in {1,2,3,4,5,6,7,8}
• E.g., Right goes to {2,4,6,8}
• Solution: [Right, Suck, Left, Suck]
Contingency, start in #5
• Murphy's Law: Suck can dirty a clean carpet
• Local sensing: dirt, location only
• Solution??
Example: Vacuum World
Single-state, start in #5.
• Solution: [Right, Suck]
Conformant, start in {1,2,3,4,5,6,7,8}
• E.g., Right goes to {2,4,6,8}
• Solution: [Right, Suck, Left, Suck]
Contingency, start in #5
• Murphy's Law: Suck can dirty a clean carpet
• Local sensing: dirt, location only
• Solution: [Right, if dirt then Suck]
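The conformant case can be checked by tracking a belief state: the set of state numbers the agent might be in. The 1–8 encoding below is an assumed numbering (state = robot location plus set of dirty squares), chosen so that it matches the slide's facts: Right maps {1,…,8} to {2,4,6,8}, and [Right, Suck] solves the single-state problem from #5.

```python
# Conformant vacuum world as belief-state tracking. The 1..8 numbering is
# an assumption consistent with the slide, not taken from it.
STATES = {
    1: ('A', frozenset('AB')), 2: ('B', frozenset('AB')),
    3: ('A', frozenset('A')),  4: ('B', frozenset('A')),
    5: ('A', frozenset('B')),  6: ('B', frozenset('B')),
    7: ('A', frozenset()),     8: ('B', frozenset()),
}
NUMBER = {s: n for n, s in STATES.items()}

def step(state, action):
    loc, dirt = state
    if action == 'Left':  return ('A', dirt)
    if action == 'Right': return ('B', dirt)
    if action == 'Suck':  return (loc, dirt - {loc})
    return state                             # NoOp

def predict(belief, action):
    """Image of a belief state (a set of state numbers) under an action."""
    return {NUMBER[step(STATES[n], action)] for n in belief}

belief = set(range(1, 9))                    # no idea where we are
for a in ['Right', 'Suck', 'Left', 'Suck']:  # the conformant solution
    belief = predict(belief, a)
```

After the four actions the belief state contains only clean-world states (#7 and #8 in this numbering), which is what makes [Right, Suck, Left, Suck] a conformant solution.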
Single-state problem formulation
A problem is defined by four items:
Initial state
• E.g., "at Arad"
Successor function S(x) = set of action-state pairs
• E.g., S(Arad) = {<Arad→Zerind, Zerind>, <Arad→Sibiu, Sibiu>, …}
Goal test, can be
• Explicit, e.g., "at Bucharest"
• Implicit, e.g., NoDirt(x)
Path cost (additive)
• E.g., a sum of distances, number of actions executed, etc.
• c(x,a,y) is the step cost, assumed to be non-negative
A solution is a sequence of actions leading from the initial state to a goal state
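The four items can be written down concretely for the Romania problem. A sketch, using the standard road distances for this fragment of the map:

```python
# The Romania touring problem as the four items of a problem formulation.
ROADS = {  # city -> {neighbor: distance in km}
    'Arad': {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97, 'Craiova': 146},
    'Pitesti': {'Rimnicu Vilcea': 97, 'Bucharest': 101, 'Craiova': 138},
}

initial_state = 'Arad'

def successors(x):
    """S(x): set of <action, state> pairs."""
    return {('go ' + city, city) for city in ROADS.get(x, {})}

def goal_test(x):                  # explicit goal test: "at Bucharest"
    return x == 'Bucharest'

def step_cost(x, action, y):       # c(x, a, y), non-negative
    return ROADS[x][y]

def path_cost(path):               # additive: sum of step costs
    return sum(step_cost(x, None, y) for x, y in zip(path, path[1:]))
```

For the slide's example solution, path_cost(['Arad', 'Sibiu', 'Fagaras', 'Bucharest']) sums 140 + 99 + 211 = 450.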
State Space
Real world is absurdly complex → state space must be abstracted for problem solving
• (Abstract) state = set of real states
• (Abstract) action = complex combination of real actions, e.g., "Arad→Zerind" represents a complex set of possible routes, detours, rest stops, etc.
• (Abstract) solution = set of real paths that are solutions in the real world
Each abstract action should be "easier" than the original problem!
Example: Vacuum World state space graph
States? Actions? Goal test? Path cost?
Example: Vacuum World state space graph
States:
• Integer dirt and robot locations (ignore dirt amounts)
Actions:
• Left, Right, Suck, NoOp
Goal test:
• No dirt
Path cost:
• 1 per action (0 for NoOp)
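This formulation can be written out directly. A sketch of the two-square world; the choice of state #5 as "robot in A, dirt in B only" is an assumption, picked so that the earlier [Right, Suck] solution works.

```python
# The two-square vacuum world: states are (robot location, dirty squares),
# actions are Left/Right/Suck/NoOp, goal test is "no dirt", and path cost
# is 1 per action (0 for NoOp).
ACTIONS = ['Left', 'Right', 'Suck', 'NoOp']

def result(state, action):
    loc, dirt = state
    if action == 'Left':  return ('A', dirt)
    if action == 'Right': return ('B', dirt)
    if action == 'Suck':  return (loc, dirt - {loc})
    return state                          # NoOp

def goal_test(state):
    return not state[1]                   # no dirt anywhere

def step_cost(action):
    return 0 if action == 'NoOp' else 1

# Assumed reading of state #5: robot in A, dirt in B only.
state = ('A', frozenset('B'))
for a in ['Right', 'Suck']:
    state = result(state, a)
```

Executing [Right, Suck] from that state reaches a goal state at total path cost 2.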
Other Examples
Eight puzzle
Robotic Assembly
States? Actions? Goal test? Path cost?
Tree Search Algorithms
Basic idea:
• Offline, simulated exploration of state space by generating successors of already explored states (AKA expanding states)

function TREE-SEARCH(problem, strategy) returns a solution, or failure
  initialize the search tree using the initial state of problem
  loop do
    if there are no more candidates for expansion then return failure
    choose a leaf node for expansion according to strategy
    if the node contains a goal state then return the corresponding solution
    else expand the node and add the resulting nodes to the search tree
  end
Implementation: states vs. nodes
State
• (Representation of) a physical configuration
Node
• Data structure constituting part of a search tree
  – Includes parent, children, depth, path cost g(x)
States do not have parents, children, depth, or path cost!
Implementation: general tree search

function TREE-SEARCH(problem, fringe) returns a solution, or failure
  fringe <- INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if fringe is empty then return false
    node <- REMOVE-FRONT(fringe)
    if GOAL-TEST[problem] applied to STATE(node) succeeds return node
    fringe <- INSERTALL(EXPAND(node, problem), fringe)

function EXPAND(node, problem) returns a set of states
  successors <- the empty set
  for each action, result in SUCCESSOR-FN[problem](STATE[node]) do
    s <- a new NODE
    PARENT-NODE[s] <- node; ACTION[s] <- action; STATE(s) <- result
    PATH-COST[s] <- PATH-COST[node] + STEP-COST(node, action, s)
    DEPTH[s] <- DEPTH[node] + 1
    add s to successors
  return successors

Find the error on this page!
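The pair of functions above translates directly to Python. A sketch, not the slides' exact API: the problem is a plain dict-of-dicts road graph, the strategy is whatever remove_front does to the fringe list, and failure is signalled with None. Note the types here: EXPAND builds and returns nodes, and the path is recovered by following parent links.

```python
from collections import namedtuple

# Node carries exactly the fields the slide lists: parent, depth,
# path cost g(x), plus the state and the action that produced it.
Node = namedtuple('Node', 'state parent action path_cost depth')

GRAPH = {  # illustrative road-map fragment: city -> {neighbor: cost}
    'Arad': {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Zerind': {'Arad': 75, 'Oradea': 71},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97},
    'Timisoara': {'Arad': 118, 'Lugoj': 111},
}

def expand(node, graph):
    """Build one successor node per action-state pair of node's state."""
    return [Node(state=s, parent=node, action='go ' + s,
                 path_cost=node.path_cost + cost, depth=node.depth + 1)
            for s, cost in graph.get(node.state, {}).items()]

def tree_search(graph, start, goal, remove_front):
    fringe = [Node(start, None, None, 0, 0)]
    while fringe:
        node = remove_front(fringe)          # strategy picks the leaf
        if node.state == goal:
            path = []                        # walk parent links back to root
            while node is not None:
                path.append(node.state)
                node = node.parent
            return path[::-1]
        fringe.extend(expand(node, graph))   # INSERTALL at the back
    return None                              # failure

# FIFO remove-front turns this into breadth-first search.
bfs_path = tree_search(GRAPH, 'Arad', 'Bucharest', lambda fringe: fringe.pop(0))
```

Passing a different remove_front (e.g. fringe.pop() for LIFO) changes the strategy without touching the search loop.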
Search strategies
A strategy is defined by picking the order of node expansion
Strategies are evaluated along the following dimensions:
• Completeness – does it always find a solution if one exists?
• Time complexity – number of nodes generated/expanded
• Space complexity – maximum nodes in memory
• Optimality – does it always find a least-cost solution?
Time and space complexity are measured in terms of:
• b – maximum branching factor of the search tree
• d – depth of the least-cost solution
• m – maximum depth of the state space (may be infinite)
Uninformed Search Strategies
Uninformed strategies use only the information available in the problem definition
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search
Breadth-first search
Expand shallowest unexpanded node
Implementation:
• Fringe is a FIFO queue, i.e., new successors go at end
• Execute first few expansions of Arad to Bucharest using breadth-first search
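A compact sketch of breadth-first search with an explicit FIFO fringe, on an illustrative fragment of the Romania map. The fringe holds whole paths rather than nodes to keep the example short.

```python
from collections import deque

ROADS = {  # illustrative road-map fragment: city -> neighbors
    'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
    'Zerind': ['Arad', 'Oradea'],
    'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti', 'Craiova'],
    'Timisoara': ['Arad', 'Lugoj'],
}

def breadth_first_search(start, goal):
    fringe = deque([[start]])                # FIFO queue of paths
    while fringe:
        path = fringe.popleft()              # shallowest unexpanded node
        if path[-1] == goal:
            return path
        for city in ROADS.get(path[-1], []):
            fringe.append(path + [city])     # new successors go at the end
    return None
```

From Arad it finds the fewest-action route Arad, Sibiu, Fagaras, Bucharest: the shallowest goal node, though not the cheapest route by distance.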
Properties of breadth-first search
Complete?? Yes (if b is finite)
Time?? 1 + b + b^2 + … + b^d + b(b^d − 1) = O(b^(d+1)), i.e., exponential in d
Space?? O(b^(d+1)) (keeps every node in memory)
Optimal?? Yes, if cost = 1 per step; not optimal in general
Space is the big problem; can easily generate nodes at 10 MB/s, so 24 hrs = 860 GB!
Uniform-cost search
Expand least-cost unexpanded node
Implementation:
• Fringe = queue ordered by path cost
Equivalent to breadth-first if…
Complete?? If step cost ≥ ε
Time?? # of nodes with g ≤ cost of optimal solution: O(b^(C/ε)), where C is the cost of the optimal solution
Space?? # of nodes with g ≤ cost of optimal solution: O(b^(C/ε))
Optimal?? Yes – nodes expanded in increasing order of g(n)
• Execute first few expansions of Arad to Bucharest using uniform-cost search
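A sketch of uniform-cost search using a binary heap as the cost-ordered fringe, on the same illustrative map fragment with the standard road distances. A counter breaks ties between equal-cost entries so the heap never compares paths.

```python
import heapq

ROADS = {  # illustrative road-map fragment: city -> {neighbor: distance}
    'Arad': {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97, 'Craiova': 146},
    'Pitesti': {'Rimnicu Vilcea': 97, 'Bucharest': 101, 'Craiova': 138},
}

def uniform_cost_search(start, goal):
    counter = 0                              # tie-breaker for equal g
    fringe = [(0, counter, [start])]         # (g, tie, path), a min-heap
    while fringe:
        g, _, path = heapq.heappop(fringe)   # least-cost unexpanded node
        if path[-1] == goal:
            return path, g                   # optimal when popped
        for city, cost in ROADS.get(path[-1], {}).items():
            counter += 1
            heapq.heappush(fringe, (g + cost, counter, path + [city]))
    return None
```

Unlike breadth-first search, it returns the cheapest route Arad, Sibiu, Rimnicu Vilcea, Pitesti, Bucharest at cost 140 + 80 + 97 + 101 = 418, rather than the shallower 450 km route through Fagaras.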
Depth-first search
Expand deepest unexpanded node
Implementation:
• Fringe = LIFO queue, i.e., a stack
• Execute first few expansions of Arad to Bucharest using depth-first search
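A sketch with an explicit stack as the LIFO fringe. Pure depth-first tree search would loop forever on a road map (Arad ↔ Sibiu ↔ Arad …), so this version applies the modification the lecture mentions: skip states already on the current path.

```python
ROADS = {  # illustrative road-map fragment: city -> neighbors
    'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
    'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti', 'Craiova'],
    'Pitesti': ['Rimnicu Vilcea', 'Bucharest', 'Craiova'],
}

def depth_first_search(start, goal):
    stack = [[start]]                        # LIFO fringe: a plain list
    while stack:
        path = stack.pop()                   # deepest unexpanded node
        if path[-1] == goal:
            return path
        for city in ROADS.get(path[-1], []):
            if city not in path:             # avoid repeated states on path
                stack.append(path + [city])
    return None
```

The route it returns depends entirely on successor ordering, and nothing guarantees it is the cheapest or the shallowest one.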
Depth-first search
Complete??
Time??
Space??
Optimal??
Depth-first search
Complete??
• No: fails in infinite-depth spaces, spaces with loops
• Can be modified to avoid repeated states along path → complete in finite spaces
Time??
• O(b^m): terrible if m is much larger than d, but if solutions are dense, may be much faster than breadth-first
Space??
• O(bm), i.e., linear space!
Optimal??
• No
Depth-limited search
Depth-first search with depth limit l
• i.e., nodes at depth l have no successors

function DEPTH-LIMITED-SEARCH(problem, limit) returns soln/fail/cutoff
  return RECURSIVE-DLS(MAKE-NODE(INITIAL-STATE[problem]), problem, limit)

function RECURSIVE-DLS(node, problem, limit) returns soln/fail/cutoff
  cutoff-occurred? <- false
  if GOAL-TEST[problem](STATE[node]) then return node
  else if DEPTH[node] = limit then return cutoff
  else for each successor in EXPAND(node, problem) do
    result <- RECURSIVE-DLS(successor, problem, limit)
    if result = cutoff then cutoff-occurred? <- true
    else if result ≠ failure then return result
  if cutoff-occurred? then return cutoff else return failure
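RECURSIVE-DLS maps almost line for line onto a Python function. A sketch over an illustrative road-map fragment; the three outcomes are a path (solution), the string 'cutoff', and None (failure).

```python
ROADS = {  # illustrative road-map fragment: city -> neighbors
    'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
    'Zerind': ['Arad', 'Oradea'],
    'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti', 'Craiova'],
}

def depth_limited_search(start, goal, limit):
    return recursive_dls([start], goal, limit)

def recursive_dls(path, goal, limit):
    node = path[-1]
    if node == goal:                         # goal test first
        return path
    if len(path) - 1 == limit:               # nodes at depth l: no successors
        return 'cutoff'
    cutoff_occurred = False
    for successor in ROADS.get(node, []):
        result = recursive_dls(path + [successor], goal, limit)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:             # a solution: pass it up
            return result
    return 'cutoff' if cutoff_occurred else None
```

With limit 2 the goal lies beyond the horizon and the search reports a cutoff; with limit 3 it finds the depth-3 route through Sibiu and Fagaras.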
Depth-limited search
Complete??
Time??
Space??
Optimal??
Iterative deepening search

function ITERATIVE-DEEPENING-SEARCH(problem) returns a solution
  inputs: problem, a problem
  for depth <- 0 to ∞ do
    result <- DEPTH-LIMITED-SEARCH(problem, depth)
    if result ≠ cutoff then return result
  end
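The driver loop is a few lines around any depth-limited search. A self-contained sketch (with a compact recursive DLS inside, over the same illustrative map fragment); the max_depth bound stands in for the pseudocode's infinity.

```python
ROADS = {  # illustrative road-map fragment: city -> neighbors
    'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
    'Zerind': ['Arad', 'Oradea'],
    'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti', 'Craiova'],
}

def dls(path, goal, limit):
    """Depth-limited search: a path, 'cutoff', or None (failure)."""
    if path[-1] == goal:
        return path
    if limit == 0:
        return 'cutoff'
    cutoff = False
    for nxt in ROADS.get(path[-1], []):
        result = dls(path + [nxt], goal, limit - 1)
        if result == 'cutoff':
            cutoff = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff else None

def iterative_deepening_search(start, goal, max_depth=50):
    for depth in range(max_depth + 1):       # "for depth <- 0 to infinity"
        result = dls([start], goal, depth)
        if result != 'cutoff':               # solution or definite failure
            return result
    return None
```

Because the limit grows one level at a time, the first solution returned is a shallowest one, which is why iterative deepening inherits breadth-first's optimality for unit step costs while keeping depth-first's linear space.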
Properties of iterative deepening
Complete??
Time??
Space??
Optimal??
Properties of iterative deepening
Complete??
• Yes
Time??
• (d+1)b^0 + db^1 + (d−1)b^2 + … + b^d = O(b^d)
Space??
• O(bd)
Optimal??
• If step cost = 1
• Can be modified to explore uniform-cost tree
Summary
All tree searching techniques are more alike than different
Breadth-first has space issues, and possibly optimality issues
Uniform-cost has space issues
Depth-first has time and optimality issues, and possibly completeness issues
Depth-limited search has optimality and completeness issues
Iterative deepening is the best uninformed search we have explored
Next class we study informed searches