CS 8520: Artificial Intelligence
Search 2
Paula Matuszek
Fall, 2008
Slides based on Hwee Tou Ng, aima.eecs.berkeley.edu/slides-ppt, which are in turn based on Russell, aima.eecs.berkeley.edu/slides-pdf.
Paula Matuszek, CSC 8520, Fall 2008. Based in part on aima.eecs.berkeley.edu/slides-ppt and www.cs.umbc.edu/671/fall03/slides/c5-6_inf_search.ppt 2
Search strategies
• A search strategy is defined by picking the order of node expansion (e.g., breadth-first, depth-first)
• Strategies are evaluated along the following dimensions:
  – completeness: does it always find a solution if one exists?
  – time complexity: number of nodes generated
  – space complexity: maximum number of nodes in memory
  – optimality: does it always find a least-cost solution?
• Time and space complexity are measured in terms of:
  – b: maximum branching factor of the search tree
  – d: depth of the least-cost solution
  – m: maximum depth of the state space (may be infinite)
Uninformed search strategies
• Uninformed search strategies use only the information available in the problem definition
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search
Implementation: general tree search
Breadth-first search
• Expand the shallowest unexpanded node
• Implementation:
  – fringe is a FIFO queue, i.e., new successors go at the end
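As a concrete illustration, the FIFO-queue idea can be sketched in a few lines of Python. The toy graph and goal test below are invented for the example; they are not from the slides.

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Return a path from start to goal, expanding shallowest nodes first."""
    frontier = deque([[start]])          # FIFO queue of paths; new successors go at the end
    while frontier:
        path = frontier.popleft()        # shallowest unexpanded node
        node = path[-1]
        if node == goal:
            return path
        for child in successors(node):
            frontier.append(path + [child])
    return None

# Hypothetical toy graph for illustration
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': [], 'F': []}
```

Because the queue is FIFO, all depth-1 nodes are expanded before any depth-2 node, which is exactly the shallowest-first behavior described above.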
Properties of breadth-first search
• Complete? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
• Space? O(b^(d+1)) (keeps every node in memory)
• Optimal? Yes (if cost = 1 per step)
• Space is the bigger problem (more than time)
Uniform-cost search
• Expand the least-cost unexpanded node
• Implementation:
  – fringe = queue ordered by path cost
• Equivalent to breadth-first if step costs are all equal
• Complete? Yes, if step cost ≥ ε (otherwise it can loop)
• Time and space? O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution and ε is the smallest step cost
  – Can be much worse than breadth-first if there are many small steps not on the optimal path
• Optimal? Yes – nodes are expanded in increasing order of g(n)
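A minimal sketch of the cost-ordered fringe, using a priority queue. The `roads` graph is a made-up example in which the direct edge is dearer than a two-step route, so the least-cost path wins.

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """Expand the least-cost unexpanded node; successors(n) yields (child, step_cost)."""
    frontier = [(0, start, [start])]     # priority queue ordered by path cost g(n)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for child, step in successors(node):
            heapq.heappush(frontier, (cost + step, child, path + [child]))
    return None

# Hypothetical weighted graph: the direct edge A-C costs more than going via B
roads = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
```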
Uniform Cost Search
Depth-first search
• Expand the deepest unexpanded node
• Implementation:
  – fringe = LIFO queue (a stack), i.e., put successors at the front
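The stack-based fringe can be sketched as below. The toy graph and the `limit` guard against infinite descent are invented for this example.

```python
def depth_first_search(start, goal, successors, limit=50):
    """Expand the deepest unexpanded node; the frontier is a stack (LIFO)."""
    frontier = [[start]]                 # stack of paths; successors go on top
    while frontier:
        path = frontier.pop()            # deepest unexpanded node
        node = path[-1]
        if node == goal:
            return path
        if len(path) < limit:            # crude guard against infinite descent
            for child in successors(node):
                frontier.append(path + [child])
    return None

# Hypothetical toy graph for illustration
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
```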
Properties of depth-first search
• Complete? No: fails in infinite-depth spaces and in spaces with loops
  – Modified to avoid repeated states along the path, it is complete in finite spaces
• Time? O(b^m): terrible if m is much larger than d
  – but if solutions are dense, it may be much faster than breadth-first
• Space? O(bm), i.e., linear space!
• Optimal? No
Depth-limited search
• = depth-first search with depth limit l
• Nodes at depth l have no successors
• Solves problem of infinite depth
• Incomplete
• Recursive implementation:
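The recursive implementation can be sketched as follows. The `'cutoff'` sentinel (distinguishing "limit reached" from plain failure) and the toy graph are illustrative choices, not the slide's exact pseudocode.

```python
def depth_limited_search(node, goal, successors, limit):
    """Recursive depth-first search that treats nodes at depth `limit` as leaves.
    Returns a path, 'cutoff' if the limit was hit, or None on plain failure."""
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'
    cutoff_occurred = False
    for child in successors(node):
        result = depth_limited_search(child, goal, successors, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True       # remember that deeper nodes were skipped
        elif result is not None:
            return [node] + result
    return 'cutoff' if cutoff_occurred else None

# Hypothetical toy chain: the goal sits at depth 2
graph = {'A': ['B'], 'B': ['C'], 'C': []}
```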
Iterative deepening search
• Repeated depth-limited search, incrementing the limit l until a solution is found or failure occurs
• Repeats the earlier levels at each new depth, so somewhat inefficient – but this never much more than doubles the cost
• No longer incomplete
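Putting the two ideas together, a minimal sketch with an inlined depth-limited search (the toy graph and `max_depth` cap are invented for the example):

```python
def iterative_deepening_search(start, goal, successors, max_depth=20):
    """Run depth-limited search with limit l = 0, 1, 2, ... until a solution appears."""
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for child in successors(node):
            result = dls(child, limit - 1)
            if result is not None:
                return [node] + result
        return None

    for limit in range(max_depth + 1):   # each pass repeats the shallower levels
        result = dls(start, limit)
        if result is not None:
            return result
    return None

# Hypothetical toy graph for illustration
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
```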
Iterative deepening search: l = 0, 1, 2, 3
Properties of iterative deepening
• Complete? Yes
• Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d)
• Space? O(bd)
• Optimal? Yes, if step cost = 1
Summary of Algorithms for Uninformed Search

Criterion   Breadth-First   Uniform-Cost    Depth-First   Depth-Limited   Iterative Deepening
Complete?   Yes             Yes             No            No              Yes
Time?       O(b^(d+1))      O(b^⌈C*/ε⌉)     O(b^m)        O(b^l)          O(b^d)
Space?      O(b^(d+1))      O(b^⌈C*/ε⌉)     O(bm)         O(bl)           O(bd)
Optimal?    Yes             Yes             No            No              Yes
A Caution: Repeated States
• Failure to detect repeated states can turn a linear problem into an exponential one, or even an infinite one.
  – For example, the 8-puzzle
  – Simple repeat: the empty square simply moves back and forth
  – More complex repeats are also possible
• Save a list of expanded states – the closed list.
• Add a new state to the fringe only if it is not in the closed list.
Graph search
Summary: Uninformed Search
• Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored
• There is a variety of uninformed search strategies
• Iterative deepening search uses only linear space and not much more time than the other uninformed algorithms: it is the usual choice
Informed search algorithms
Slides derived in part from
www.cs.berkeley.edu/~russell/slides/chapter04a.pdf, converted to powerpoint by Min-Yen Kan, National University of Singapore,
and from www.cs.umbc.edu/671/fall03/slides/c5-6_inf_search.ppt, Marie DesJardins, University of Maryland Baltimore County.
Review: Tree search
• A search strategy is defined by picking the order of node expansion
Heuristic Search
• Uninformed search is generic; the choice of node to expand depends only on the shape of the tree and the strategy for node expansion.
• Sometimes domain knowledge can help us make a better decision.
• For the Romania problem, eyeballing it results in looking at certain cities first because they "look closer" to where we are going.
• If that domain knowledge can be captured in a heuristic, search performance can be improved by using that heuristic.
• This gives us an informed search strategy.
So What's A Heuristic?
Webster's Revised Unabridged Dictionary (1913):
  Heuristic \Heu*ris"tic\, a. [Gr. ? to discover.] Serving to discover or find out.
The Free On-line Dictionary of Computing (15Feb98):
  heuristic 1. <programming> A rule of thumb, simplification, or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood. Unlike algorithms, heuristics do not guarantee feasible solutions and are often used with no theoretical guarantee. 2. <algorithm> approximation algorithm.
From WordNet (r) 1.6:
  heuristic adj 1: (computer science) relating to or using a heuristic rule 2: of or relating to a general formulation that serves to guide investigation [ant: algorithmic] n: a commonsense rule (or set of rules) intended to increase the probability of solving some problem [syn: heuristic rule, heuristic program]
Heuristics
• For search, "heuristic" has a very specific meaning:
  – All domain knowledge used in the search is encoded in the heuristic function h.
• Examples:
  – Missionaries and Cannibals: number of people on the starting river bank
  – 8-puzzle: number of tiles out of place
  – 8-puzzle: sum of distances of tiles from their goal positions
  – Romania: straight-line distance from a city to Bucharest
• In general:
  – h(n) ≥ 0 for all nodes n
  – h(n) = 0 implies that n is a goal node
  – h(n) = ∞ implies that n is a dead end from which a goal cannot be reached
• h is some estimate of how desirable a move is, or how close it gets us to the goal
Best-first search
• Order the nodes on the nodes list by increasing value of an evaluation function, f(n), that incorporates domain-specific information in some way.
• This is a generic way of referring to the class of informed methods.
Best-first search
• Idea: use an evaluation function f(n) for each node
  – an estimate of "desirability"
  – expand the most desirable unexpanded node
• Implementation:
  – order the nodes in the fringe in decreasing order of desirability
• Special cases:
  – greedy best-first search
  – A* search
Romania with step costs in km
Greedy best-first search
• Evaluation function f(n) = h(n) (heuristic)
  – an estimate of the cost from n to the goal
• e.g., hSLD(n) = straight-line distance from n to Bucharest
• Greedy best-first search expands the node that appears to be closest to the goal
Greedy best-first search example
Properties of greedy best-first search
• Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt
• Time? O(b^m), but a good heuristic can give dramatic improvement
• Space? O(b^m) – keeps all nodes in memory
• Optimal? No
• Remember: time and space complexity are measured in terms of
  – b: maximum branching factor of the search tree
  – d: depth of the least-cost solution
  – m: maximum depth of the state space (may be infinite)
A* search
• Idea: avoid expanding paths that are already expensive
• Evaluation function f(n) = g(n) + h(n)
  – g(n) = cost so far to reach n
  – h(n) = estimated cost from n to the goal
  – f(n) = estimated total cost of the path through n to the goal
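A compact sketch of the idea: the fringe is ordered by f = g + h. The weighted graph `edges` and the heuristic table are invented for the example (the heuristic values are chosen to be admissible for this graph).

```python
import heapq

def a_star(start, goal, successors, h):
    """Best-first search on f(n) = g(n) + h(n); successors yields (child, step_cost)."""
    frontier = [(h(start), 0, start, [start])]   # priority queue ordered by f = g + h
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step in successors(node):
            g2 = g + step
            if g2 < best_g.get(child, float('inf')):   # cheaper route to child found
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), g2, child, path + [child]))
    return None

# Hypothetical toy map with a made-up admissible heuristic
edges = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1)], 'C': []}
h = {'A': 2, 'B': 1, 'C': 0}.get
```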
A* search example
Admissible heuristics
• A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
• An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.
• This means that we won't ignore a better path because we think its cost is too high. (If we underestimate a cost, we will learn the truth when we explore that path.)
• Example: hSLD(n) (never overestimates the actual road distance)
Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance (i.e., number of squares each tile is from its desired location)
• h1(S) = ?
• h2(S) = ?
Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance (i.e., number of squares each tile is from its desired location)
• h1(S) = 8
• h2(S) = 3+1+2+2+2+3+3+2 = 18
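Both heuristics are easy to compute. The sketch below assumes the standard AIMA start state for S (7 2 4 / 5 _ 6 / 8 3 1, with 0 marking the blank), which yields the values above.

```python
def misplaced_tiles(state, goal):
    """h1: count tiles (ignoring the blank, 0) not where the goal puts them."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """h2: sum over tiles of horizontal plus vertical distance to the goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                     # the blank does not count
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

# Boards as row-major 9-tuples; 0 marks the blank (standard AIMA example state)
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)
```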
Properties of A*
(assuming h(n) is admissible)
• Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
• Time? Exponential in [relative error in h × length of solution]
• Space? Keeps all nodes in memory
• Optimal? Yes – A* cannot expand nodes with the next f value (f_{i+1}) until all nodes with value f_i are finished.
Some observations on A*
• Perfect heuristic: if h(n) = h*(n) for all n, then only nodes on the optimal solution path will be expanded, so no extra work is performed.
• Null heuristic: if h(n) = 0 for all n, then it is an admissible heuristic and A* behaves like uniform-cost search.
• Better heuristic: if h1(n) < h2(n) ≤ h*(n) for all non-goal nodes, then h2 is a better heuristic than h1
  – If A1* uses h1 and A2* uses h2, then every node expanded by A2* is also expanded by A1*.
  – In other words, A1* expands at least as many nodes as A2*.
  – We say that A2* is better informed than A1*, or that A2* dominates A1*.
• The closer h is to h*, the fewer extra nodes will be expanded
What's a good heuristic? How do we find one?
• If h1(n) < h2(n) ≤ h*(n) for all n, then both are admissible and h2 is better than (dominates) h1.
• Relaxing the problem: remove constraints to create a (much) easier problem; use the solution cost of this problem as the heuristic function
• Combining heuristics: take the max of several admissible heuristics: the result is still admissible, and it's better!
• Identify good features, then use a learning algorithm to find a heuristic function: may lose admissibility
Relaxed problems
• A problem with fewer restrictions on the actions is called a relaxed problem
• The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem
• If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution
• If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution
Some Examples of Heuristics?
• 8-puzzle?
• Mapquest driving directions?
• Minesweeper?
• Crossword puzzle?
• Making a medical diagnosis?
• ??
Local search algorithms
• In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution
• State space = set of "complete" configurations
• Find a configuration satisfying the constraints, e.g., n-queens
• In such cases, we can use local search algorithms
• Keep a single "current" state and try to improve it
Example: n-queens
• Put n queens on an n × n board with no two queens on the same row, column, or diagonal
Hill-climbing search
• If there exists a successor s of the current state n such that
  – h(s) < h(n), and
  – h(s) ≤ h(t) for all successors t of n,
  then move from n to s; otherwise, halt at n.
• Looks one step ahead to determine whether any successor is better than the current state; if so, moves to the best successor.
• Similar to greedy search in that it uses h, but does not allow backtracking or jumping to an alternative path, since it doesn't "remember" where it has been.
• Not complete, since the search will terminate at "local minima," "plateaus," and "ridges."
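The loop above can be sketched as follows, here minimizing h on an invented one-dimensional landscape:

```python
def hill_climb(state, h, neighbors):
    """Move to the best successor as long as it strictly improves h; halt otherwise."""
    while True:
        best = min(neighbors(state), key=h)   # best one-step-ahead successor
        if h(best) >= h(state):               # no successor improves: stop here
            return state
        state = best

# Invented 1-D landscape: minimize h(x) = (x - 3)^2 by stepping left or right
h = lambda x: (x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
```

On this smooth landscape the climb reaches the global minimum; the slides' point is precisely that on bumpier landscapes the same loop halts at the first local minimum it meets.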
Hill-climbing search
• "Like climbing Everest in thick fog with amnesia"
Hill climbing example

[Figure: hill climbing on the 8-puzzle with f(n) = -(number of tiles out of place). A sequence of boards runs from the start state (2 8 3 / 1 6 4 / 7 _ 5) to the goal (1 2 3 / 8 _ 4 / 7 6 5), with scores improving from h = -5 to h = 0; sibling states scoring -3, -4, and -5 are shown but not taken.]
Drawbacks of hill climbing
• Problems:
  – Local maxima: peaks that aren't the highest point in the space
  – Plateaus: the space has a broad flat region that gives the search algorithm no direction (random walk)
  – Ridges: flat like a plateau, but with dropoffs to the sides; steps to the north, east, south, and west may go down, but a step to the northwest may go up.
• Remedy:
  – Random restart.
• Some problem spaces are great for hill climbing and others are terrible.
Hill-climbing search
• Problem: depending on the initial state, the search can get stuck in local maxima
Example of a local maximum

[Figure: an 8-puzzle local maximum built from the state 1 2 5 / 7 4 / 8 6 3. The current state scores h = -3, but all of its successors score h = -4, so hill climbing halts even though the goal (h = 0) is reachable.]
Simulated annealing
• Simulated annealing (SA) exploits an analogy between the way in which a metal cools and freezes into a minimum-energy crystalline structure (the annealing process) and the search for a minimum [or maximum] in a more general system.
• SA can avoid becoming trapped at local minima.
• SA uses a random search that accepts changes that increase the objective function f, as well as some that decrease it.
• SA uses a control parameter T, which by analogy with the original application is known as the system "temperature."
• T starts out high and gradually decreases toward 0.
Simulated annealing search
• Idea: escape local maxima by allowing some "bad" moves, but gradually decrease their frequency
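A minimal sketch of that schedule, minimizing an invented smooth objective. The constants `t0`, `cooling`, and `steps` are arbitrary choices for the example, and the seed is fixed only to make the run repeatable; a real application would face a rugged landscape.

```python
import math
import random

def simulated_annealing(state, energy, neighbor, t0=1.0, cooling=0.995, steps=2000):
    """Accept all downhill moves; accept uphill moves with probability e^(-delta/T),
    where the temperature T starts at t0 and gradually decays toward 0."""
    random.seed(0)                        # fixed seed so the example run is repeatable
    t, best = t0, state
    for _ in range(steps):
        nxt = neighbor(state)
        delta = energy(nxt) - energy(state)
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = nxt                   # occasionally accept a "bad" (uphill) move
        if energy(state) < energy(best):
            best = state                  # remember the best state seen so far
        t *= cooling                      # bad moves become rarer as T falls
    return best

# Invented smooth objective with its minimum at x = 3
energy = lambda x: (x - 3.0) ** 2
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
```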
Properties of simulated annealing search
• One can prove: if T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1
• Widely used in VLSI layout, airline scheduling, etc.
Local beam search
• Keep track of k states rather than just one
• Start with k randomly generated states
• At each iteration, generate all the successors of all k states
• If any one is a goal state, stop; otherwise select the k best successors from the complete list and repeat.
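The four steps above can be sketched as follows; the integer landscape, the heuristic h = |x|, and all parameters are invented for the example.

```python
import heapq
import random

def local_beam_search(k, h, random_state, neighbors, goal, max_iter=100):
    """Keep the k best states; each round pools all successors and keeps the k best."""
    states = [random_state() for _ in range(k)]       # k randomly generated states
    for _ in range(max_iter):
        if any(goal(s) for s in states):
            return next(s for s in states if goal(s))
        pool = [n for s in states for n in neighbors(s)]
        states = heapq.nsmallest(k, pool, key=h)      # k best from the complete list
    return min(states, key=h)

# Invented integer landscape: reach x = 0, guided by h(x) = |x|
random.seed(1)
found = local_beam_search(
    k=3,
    h=abs,
    random_state=lambda: random.randint(-20, 20),
    neighbors=lambda x: [x - 1, x + 1],
    goal=lambda x: x == 0,
)
```

Note that the k survivors are chosen from the single pooled list, so information flows between the parallel climbs; this is what distinguishes local beam search from k independent hill climbs.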
Summary: Informed search
• Best-first search is general search where the minimum-cost nodes (according to some measure) are expanded first.
• Greedy search uses the minimal estimated cost h(n) to the goal state as its measure. This reduces search time, but the algorithm is neither complete nor optimal.
• A* search combines uniform-cost search and greedy search: f(n) = g(n) + h(n). A* handles state repetitions, and h(n) never overestimates.
  – A* is complete, optimal, and optimally efficient, but its space complexity is still bad.
  – The time complexity depends on the quality of the heuristic function.
• Local search techniques are useful when you don't care about the path, only the result. Examples include:
  – Hill-climbing
  – Simulated annealing
  – Local beam search
Search in an Adversarial Environment
• Iterative deepening and A* are useful for single-agent search problems
• What if there are TWO agents?
• Goals in conflict: adversarial search
• Especially common in AI:
  – Goals in direct conflict
  – i.e., GAMES.
Games vs. search problems
• "Unpredictable" opponent: the solution is a strategy specifying a move for every possible opponent reply
• Time limits: unlikely to find the goal, must approximate
• Efficiency matters a lot
• HARD.
• In AI, games are typically "zero sum": one player wins exactly as much as the other player loses.
Types of games

                  Deterministic                           Chance
Perfect info      Chess, Checkers, Othello, Tic-Tac-Toe   Monopoly, Backgammon
Imperfect info                                            Bridge, Poker, Scrabble
Tic-Tac-Toe
• Tic-tac-toe is one of the classic AI examples. Let's play some.
• A tic-tac-toe game:
  – http://www.ourvirtualmall.com/tictac.htm
• Try it, at various levels of difficulty.
  – What kind of strategy are you using?
  – What kind does the computer seem to be using?
  – Did you win? Lose?
Problem Definition
• Formally define a two-person game as:
• Two players, called MAX and MIN.
  – They alternate moves
  – At the end of the game the winner is rewarded and the loser penalized.
• The game has:
  – Initial state: board position and which player goes first
  – Successor function: returns (move, state) pairs
    • all legal moves from the current state
    • the resulting state
  – Terminal test
  – Utility function for terminal states
• The initial state plus the legal moves define the game tree.
Tic Tac Toe Game tree
Optimal Strategies
• An optimal strategy is a sequence of moves leading to a desired goal state.
• MAX's strategy is affected by MIN's play.
• So MAX needs a strategy that yields the best possible payoff, assuming optimal play on MIN's part.
• This is determined by looking at the MINIMAX value of each node in the game tree.
Minimax
• Perfect play for deterministic games
• Idea: choose the move to the position with the highest minimax value = best achievable payoff against best play
• E.g., a 2-ply game:
Minimax algorithm
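The recursive value computation can be sketched as below. The 2-ply tree is invented for the example (its leaf values mirror the classic AIMA illustration), and states with no successors are treated as terminal.

```python
def minimax(state, player, successors, utility):
    """Return the minimax value of `state`; MAX picks the max, MIN picks the min."""
    moves = successors(state)
    if not moves:                        # terminal state: score it
        return utility(state)
    values = [minimax(s, 'MIN' if player == 'MAX' else 'MAX', successors, utility)
              for s in moves]
    return max(values) if player == 'MAX' else min(values)

# Hypothetical 2-ply game tree; leaves carry utilities (for MAX)
tree = {'root': ['A', 'B', 'C'],
        'A': ['a1', 'a2', 'a3'], 'B': ['b1', 'b2', 'b3'], 'C': ['c1', 'c2', 'c3']}
leaves = {'a1': 3, 'a2': 12, 'a3': 8, 'b1': 2, 'b2': 4, 'b3': 6,
          'c1': 14, 'c2': 5, 'c3': 2}
```

At the root MAX sees the three MIN values min(3,12,8) = 3, min(2,4,6) = 2, and min(14,5,2) = 2, and takes the largest.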
Properties of minimax
• Complete? Yes (if the tree is finite)
• Optimal? Yes (against an optimal opponent)
• Time complexity? O(b^m)
• Space complexity? O(bm) (depth-first exploration)
• For chess, b ≈ 35 and m ≈ 100 for "reasonable" games, so an exact solution is completely infeasible
• Even tic-tac-toe is much too complex to diagram here, although it's small enough to implement.
Pruning the Search
• "If you have an idea that is surely bad, don't take the time to see how truly awful it is." – Pat Winston
• Minimax is exponential in the number of moves; not feasible in real life
• But we can PRUNE some branches.
• Alpha-beta pruning:
  – If it is clear that a branch can't improve on the value we already have, stop the analysis.
α-β pruning example

(This sequence of six slides stepped through α-β pruning on an example game tree; the diagrams were images and are not reproduced here.)
Properties of α-β
• Pruning does not affect the final result.
• Good move ordering improves the effectiveness of pruning.
• With "perfect ordering," time complexity = O(b^(m/2)), which doubles the depth of search that can be carried out for a given level of resources.
• A simple example of the value of reasoning about which computations are relevant (a form of metareasoning).
Why is it called α-β?
• α is the value of the best (i.e., highest-value) choice found so far at any choice point along the path for MAX.
• If v is worse than α, MAX will avoid it, so we can prune that branch.
• β is defined similarly for MIN.
The α-β algorithm

(The algorithm pseudocode on these two slides was an image and is not reproduced here.)
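Since the algorithm itself did not survive conversion, here is a minimal Python sketch of α-β pruning over an explicit game tree (leaves are utilities, internal nodes are lists of children); the sample tree is my own assumption, not from the slides:

```python
# Alpha-beta pruning: minimax that skips branches which provably
# cannot affect the final choice.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):   # terminal node: return utility
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # β cutoff: MIN will never allow this
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:            # α cutoff: MAX will never allow this
                break
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))   # 3, same as minimax
```

On this tree the second and third MIN nodes are cut off early: once MAX has secured 3, any MIN node that can already force a value ≤ 3 need not be explored further.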
"Informed" Search• Alpha-Beta still not feasible for large game
spaces.
• Can we improve on performance with domain knowledge?
• Yes -- if we have a useful heuristic for evaluating game states.
• Conceptually analogous to A* for single-agent search.
Resource limits
Suppose we have 100 secs and can explore 10^4 nodes/sec: that allows 10^6 nodes per move.
Standard approach:
• cutoff test: e.g., a depth limit (perhaps with quiescence search added)
• evaluation function = estimated desirability of a position
Evaluation function
• An evaluation function (or static evaluator) is used to evaluate the "goodness" of a game position.
– Contrast with heuristic search, where the evaluation function was a non-negative estimate of the cost from the start node to a goal, passing through the given node.
• The zero-sum assumption allows us to use a single evaluation function to describe the goodness of a board with respect to both players.
– f(n) >> 0: position n is good for me and bad for you
– f(n) << 0: position n is bad for me and good for you
– f(n) near 0: position n is neutral
– f(n) = +infinity: win for me
– f(n) = -infinity: win for you
DesJardins: www.cs.umbc.edu/671/fall03/slides/c8-9_games.ppt
Evaluation function examples
• Example of an evaluation function for Tic-Tac-Toe:
f(n) = [# of 3-lengths open for me] - [# of 3-lengths open for you]
where a 3-length is a complete row, column, or diagonal
• Alan Turing's function for chess:
– f(n) = w(n)/b(n), where w(n) = sum of the point values of white's pieces and b(n) = sum of black's
• Most evaluation functions are specified as a weighted sum of position features:
f(n) = w1*feat1(n) + w2*feat2(n) + ... + wk*featk(n)
• Example features for chess are piece count, piece placement, squares controlled, etc.
• Deep Blue (which beat Garry Kasparov in 1997) had over 8000 features in its evaluation function.
DesJardins: www.cs.umbc.edu/671/fall03/slides/c8-9_games.ppt
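The Tic-Tac-Toe function above can be sketched in Python; the board representation and helper names are my own, not from the slides:

```python
# f(n) = [# of 3-lengths open for me] - [# of 3-lengths open for you].
# The board is a 9-element list holding 'X', 'O', or None.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def open_lines(board, player):
    """Count 3-lengths not yet blocked by the opponent."""
    opponent = 'O' if player == 'X' else 'X'
    return sum(1 for line in LINES
               if all(board[i] != opponent for i in line))

def evaluate(board, me):
    you = 'O' if me == 'X' else 'X'
    return open_lines(board, me) - open_lines(board, you)

# X in the center keeps all 8 lines open for X while leaving O only
# the 4 lines that avoid the center square: f = 8 - 4 = 4.
board = [None] * 9
board[4] = 'X'
print(evaluate(board, 'X'))    # 4
```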
Cutting off search
MinimaxCutoff is identical to MinimaxValue except:
1. Terminal? is replaced by Cutoff?
2. Utility is replaced by Eval
Does it work in practice? For chess, b^m ≈ 10^6 when b = 35 and m = 4, so our 10^6-node budget buys only 4-ply lookahead, and a 4-ply lookahead is a hopeless chess player!
– 4-ply ≈ human novice
– 8-ply ≈ typical PC, human master
– 12-ply ≈ Deep Blue, Kasparov
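The MinimaxCutoff idea can be sketched as minimax with a depth limit and Eval in place of Utility. The toy countdown game in the demo is purely illustrative; `succ` and `ev` stand in for a real game's successor and evaluation functions:

```python
# MinimaxCutoff sketch: Terminal? becomes Cutoff? (a depth limit),
# Utility becomes Eval.

def minimax_cutoff(state, depth, maximizing, successors, evaluate, limit):
    if depth == limit or not successors(state):   # Cutoff? test
        return evaluate(state)                    # Eval instead of Utility
    values = [minimax_cutoff(s, depth + 1, not maximizing,
                             successors, evaluate, limit)
              for _, s in successors(state)]
    return max(values) if maximizing else min(values)

# Toy game: state = a number, moves subtract 1 or 2, eval = the number.
succ = lambda n: [(take, n - take) for take in (1, 2) if take <= n]
ev = lambda n: n
print(minimax_cutoff(5, 0, True, succ, ev, 2))    # 2
```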
Deterministic games in practice
• Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used a precomputed endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 444 billion positions.
• Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searched 200 million positions per second, used very sophisticated evaluation, and undisclosed methods for extending some lines of search up to 40 ply.
• Othello: human champions refuse to compete against computers, which are too good.
• Go: computers are just beginning to be good enough to play human champions. In Go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves, but MoGo also uses a recently developed search method, Monte Carlo Tree Search.
Summary
• Games are fun to work on!
• They illustrate several important points about AI:
• perfection is unattainable, so we must approximate
• it's a good idea to think about what to think about
Search Summary
• For uninformed search, there are tradeoffs between time and space complexity, with iterative deepening often the best choice.
• For non-adversarial informed search, A* is usually the best choice; the better the heuristic, the better the performance.
• For adversarial search, minimax with alpha-beta pruning is optimal where feasible.
• Adding an evaluation-function-based cutoff increases the range of feasibility.
• The better we can capture domain knowledge in the heuristic and evaluation functions, the better we can do.