Page 1: genetic ALGORITHM

Artificial Intelligence 2006 28 Aug. 2006

Đặng Xuân Hà <dxha at hau1.edu.vn> 1

Artificial Intelligence (Trí tuệ nhân tạo)

Chapter 4. Informed search

Đặng Xuân Hà, DSE, FIT, HAU1

Office phone: 8276346; Ext.:132

Office location: Room 317, Administration building.

Email: dxha at hau1.edu.vn; dangxuanha at gmail.com

Website: http://www.hau1.edu.vn/it/dxha


Review: Tree search

A search strategy is defined by picking the order of node expansion


Review: Uninformed tree search

Breadth-first search: shallowest node first

Depth-first search: deepest node first

Uniform-cost search: least-cost node first

Depth-limited search: depth-first search with a depth limit

Iterative deepening search: depth-limited search with an increasing limit.


Review: Exercise

[Figure: a weighted search tree over nodes A–O with edge costs between 5 and 25]

Initial state: A; goal states: I, N. Find the expansion sequence for each tree-search algorithm: (1) BFS; (2) DFS; (3) uniform-cost; (4) depth-limited (l = 2); (5) iterative deepening.

Solution: (1) ABCDEFGHI; (2) ABEKLFCGMN; (3) ACDBGEN; (4) ABEFCGDHI; (5) A; ABCD; ABEKLFCGMN


Informed search

Best-first search: (greedy) best-first search; A* search

Heuristics

Local search: hill climbing; simulated annealing; genetic algorithms


Best-first search

Idea: use an evaluation function f(n) for each node, an estimate of its "desirability", and expand the most desirable unexpanded node

Implementation: order the nodes in the fringe in decreasing order of desirability

Special cases: greedy best-first search; A* search
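The fringe ordering above is naturally implemented with a priority queue. Below is a minimal sketch (not from the slides) using Python's `heapq`, a min-heap: lower f means more desirable, which matches ordering the fringe by decreasing desirability. The function names and the tiny number-line problem are illustrative.

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: always expand the fringe node with the
    lowest f value. successors(state) yields next states; f(state) is
    the evaluation function."""
    fringe = [(f(start), start)]          # priority queue ordered by f
    came_from = {start: None}
    while fringe:
        _, state = heapq.heappop(fringe)
        if goal_test(state):
            # reconstruct the path back to the start
            path = []
            while state is not None:
                path.append(state)
                state = came_from[state]
            return path[::-1]
        for s in successors(state):
            if s not in came_from:
                came_from[s] = state
                heapq.heappush(fringe, (f(s), s))
    return None
```

For example, searching the integers for 5 from 0 with successors n ± 1 and f(n) = |5 - n| expands straight toward the goal.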


Romania with step costs in km


Greedy best-first search

Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to goal

e.g., hSLD(n) = straight-line distance from n to Bucharest

Greedy best-first search expands the node that appears to be closest to goal
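As a concrete sketch, here is greedy best-first search on a small subset of the Romania map. The road costs and hSLD values are quoted from memory of the AIMA figures and may be slightly off, so treat the data as illustrative.

```python
import heapq

# Straight-line distance to Bucharest and a subset of the Romania roads
# (figures quoted from memory of the AIMA map; treat as illustrative).
H_SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}
ROADS = {('Arad', 'Sibiu'): 140, ('Arad', 'Timisoara'): 118,
         ('Arad', 'Zerind'): 75, ('Sibiu', 'Fagaras'): 99,
         ('Sibiu', 'Rimnicu Vilcea'): 80, ('Fagaras', 'Bucharest'): 211,
         ('Rimnicu Vilcea', 'Pitesti'): 97, ('Pitesti', 'Bucharest'): 101}

def neighbors(city):
    for (a, b), cost in ROADS.items():
        if city in (a, b):
            yield (b if city == a else a), cost

def greedy_best_first(start, goal):
    """Expand the city that *appears* closest to the goal: f(n) = h(n)."""
    fringe = [(H_SLD[start], [start])]
    visited = set()
    while fringe:
        _, path = heapq.heappop(fringe)
        city = path[-1]
        if city == goal:
            return path
        if city in visited:
            continue
        visited.add(city)
        for nxt, _cost in neighbors(city):   # step cost is ignored by greedy search
            if nxt not in visited:
                heapq.heappush(fringe, (H_SLD[nxt], path + [nxt]))
    return None
```

From Arad, greedy search commits to Fagaras (h = 176) and returns the 450 km route via Fagaras, even though a cheaper route through Rimnicu Vilcea exists, so it is not optimal.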


Greedy best-first search example



Properties of greedy best-first search

Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt

Time? O(b^m), but a good heuristic can give dramatic improvement

Space? O(b^m) – keeps all nodes in memory

Optimal? No


A* search (A-star search)

Idea: avoid expanding paths that are already expensive

Evaluation function f(n) = g(n) + h(n), where g(n) = cost so far to reach n and h(n) = estimated cost from n to goal

f(n) = estimated total cost of path through n to goal
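A minimal A* sketch on a small subset of the Romania map (distances again quoted from memory of the AIMA figures, so treat them as illustrative):

```python
import heapq

# Straight-line distance to Bucharest and a subset of the Romania roads
# (figures quoted from memory of the AIMA map; treat as illustrative).
H_SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}
ROADS = {('Arad', 'Sibiu'): 140, ('Arad', 'Timisoara'): 118,
         ('Arad', 'Zerind'): 75, ('Sibiu', 'Fagaras'): 99,
         ('Sibiu', 'Rimnicu Vilcea'): 80, ('Fagaras', 'Bucharest'): 211,
         ('Rimnicu Vilcea', 'Pitesti'): 97, ('Pitesti', 'Bucharest'): 101}

def neighbors(city):
    for (a, b), cost in ROADS.items():
        if city in (a, b):
            yield (b if city == a else a), cost

def a_star(start, goal):
    """Expand by f(n) = g(n) + h(n): cost so far plus estimated cost to goal."""
    fringe = [(H_SLD[start], 0, [start])]    # (f, g, path)
    best_g = {start: 0}
    while fringe:
        f, g, path = heapq.heappop(fringe)
        city = path[-1]
        if city == goal:
            return path, g
        for nxt, cost in neighbors(city):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(fringe, (g2 + H_SLD[nxt], g2, path + [nxt]))
    return None, float('inf')

path, cost = a_star('Arad', 'Bucharest')
```

Unlike greedy best-first search, A* finds the cheaper 418 km route through Rimnicu Vilcea and Pitesti, because the growing g(n) term penalizes the expensive detour via Fagaras.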


A* search example

f(n)= g(n) + h(n)



Admissible heuristics

A heuristic h(n) is admissible if for every node n,

h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.

An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic

Example: hSLD(n) (never overestimates the actual road distance). Theorem: if h(n) is admissible, then A* using TREE-SEARCH is optimal.
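Admissibility can be checked numerically on a small subset of the Romania map (data quoted from memory of the AIMA figures, so illustrative only): compute the true cost h*(n) to Bucharest with Dijkstra's algorithm and verify h(n) ≤ h*(n) for every city.

```python
import heapq

# hSLD to Bucharest and road costs for a small subset of the Romania map
# (figures quoted from memory of the AIMA map; treat as illustrative).
H_SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}
ROADS = {('Arad', 'Sibiu'): 140, ('Arad', 'Timisoara'): 118,
         ('Arad', 'Zerind'): 75, ('Sibiu', 'Fagaras'): 99,
         ('Sibiu', 'Rimnicu Vilcea'): 80, ('Fagaras', 'Bucharest'): 211,
         ('Rimnicu Vilcea', 'Pitesti'): 97, ('Pitesti', 'Bucharest'): 101}

def true_costs(goal):
    """Dijkstra from the goal: h*(n) = true cheapest cost from n to the goal."""
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, city = heapq.heappop(pq)
        if d > dist[city]:
            continue                      # stale queue entry
        for (a, b), cost in ROADS.items():
            if city in (a, b):
                other = b if city == a else a
                if d + cost < dist.get(other, float('inf')):
                    dist[other] = d + cost
                    heapq.heappush(pq, (d + cost, other))
    return dist

h_star = true_costs('Bucharest')
# Admissible: the straight-line estimate never exceeds the true road cost.
admissible = all(H_SLD[c] <= h_star[c] for c in H_SLD)
```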


Consistent heuristics

A heuristic is consistent if for every node n and every successor n' of n generated by any action a,

h(n) ≤ c(n,a,n') + h(n')

(a form of the triangle inequality).

If h is consistent, we have

f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n)

i.e., f(n) is non-decreasing along any path. Theorem: if h(n) is consistent, then A* using GRAPH-SEARCH is optimal.


Optimality of A*

A* expands nodes in order of increasing f value

Gradually adds "f-contours" of nodes

Contour i has all nodes with f = f_i, where f_i < f_{i+1}


Properties of A*

Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))

Time? Exponential

Space? Keeps all nodes in memory

Optimal? Yes


Admissible heuristics

E.g., for the 8-puzzle:

h1(n) = number of misplaced tiles

h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location)

h1(S) = 8

h2(S) = 3+1+2+2+2+3+3+2 = 18
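Both heuristics are easy to compute. The start state below is reconstructed to match the quoted values h1(S) = 8 and h2(S) = 18 (it appears to be the AIMA example, with 0 standing for the blank); treat the exact layout as an assumption.

```python
# 8-puzzle states as flat 3x3 tuples, row by row; 0 is the blank.
START = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```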


Dominance

If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1

h2 is better for search

Typical search costs (average number of nodes expanded):

d=12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes

d=24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes


Relaxed problems

A problem with fewer restrictions on the actions is called a relaxed problem. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem. If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution. If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.


Local search algorithms

In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution

State space = set of "complete" configurations. Find a configuration satisfying the constraints, e.g., n-queens.

In such cases, we can use local search algorithms: keep a single "current" state, try to improve it


Example: n-queens

Put n queens on an n × n board with no two queens on the same row, column, or diagonal

Move queens to reduce conflicts


Hill-climbing search

"Like climbing Everest in thick fog with amnesia"


Problem: depending on initial state, can get stuck in local maxima


Hill-climbing search: 8-queens problem

h = number of pairs of queens attacking each other, either directly or indirectly. Successor function: move a single queen to another square in the same column. h = 17 for the state above.
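A steepest-ascent hill-climbing sketch for 8-queens using exactly this successor function (move one queen within its column); the helper names are illustrative, not from the slides.

```python
import random

def conflicts(state):
    """h = number of pairs of queens attacking each other; state[c] is the
    row of the queen in column c (so column clashes cannot occur)."""
    n = len(state)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if state[a] == state[b] or abs(state[a] - state[b]) == b - a)

def hill_climb(state):
    """Steepest-ascent hill climbing: repeatedly make the single-queen move
    that most reduces h; stop when no move strictly improves h."""
    state = list(state)
    while True:
        h = conflicts(state)
        best_h, best_move = h, None
        for col in range(len(state)):
            for row in range(len(state)):
                if row != state[col]:
                    old, state[col] = state[col], row
                    nh = conflicts(state)
                    if nh < best_h:
                        best_h, best_move = nh, (col, row)
                    state[col] = old
        if best_move is None:
            return state, h          # local optimum reached (possibly h > 0)
        state[best_move[0]] = best_move[1]

random.seed(0)
start = [random.randrange(8) for _ in range(8)]
final, h = hill_climb(start)
```

Because every accepted move strictly reduces h, the loop terminates, but, as the slides note, often in a local optimum with h > 0.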


Drawbacks of hill climbing

Ridge = a sequence of local maxima that is difficult for greedy algorithms to navigate. Plateau = an area of the state space where the evaluation function is flat. Hill climbing gets stuck 86% of the time.


Hill-climbing search: 8-queens problem

A local minimum with h = 1: no better successors


Hill-climbing variations

Stochastic hill-climbing: random selection among the uphill moves; the selection probability can vary with the steepness of the uphill move.

First-choice hill-climbing: implements stochastic hill climbing by generating successors randomly until one better than the current state is found.

Random-restart hill-climbing: tries to avoid getting stuck in local maxima by restarting from a new random initial state.


Simulated annealing search

Analogy: gradually decrease the shaking to make sure the ball escapes from local minima and falls into the global minimum.

[Figure: a ball on a bumpy surface, with a local minimum and the global minimum labeled]


Simulated annealing search

Idea: escape local maxima by allowing some "bad" moves but gradually decrease their frequency

Implementation: randomly select a move instead of selecting the best move; accept a bad move with probability less than 1 (p < 1); p decreases over time.
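A sketch of that scheme: the temperature T plays the role of the shaking intensity, and a bad move of size Δ is accepted with probability exp(-Δ/T), which falls as T cools. The landscape, schedule, and constants below are illustrative assumptions, not from the slides.

```python
import math
import random

def simulated_annealing(f, start, neighbor, t0=10.0, cooling=0.995, steps=4000):
    """Minimise f. Accept a worse neighbour with probability exp(-delta/T);
    T decreases geometrically, so the acceptance probability decreases over time."""
    current = best = start
    t = t0
    for _ in range(steps):
        nxt = neighbor(current)
        delta = f(nxt) - f(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = nxt                 # downhill moves always accepted
        if f(current) < f(best):
            best = current                # remember the best state seen so far
        t *= cooling
    return best

# Toy landscape: a local minimum at x = 20 (value 5), global minimum at x = 80.
def f(x):
    return min(abs(x - 20) + 5, abs(x - 80))

random.seed(1)
best = simulated_annealing(f, 20, lambda x: x + random.choice([-1, 1]))
```

With this geometric schedule the walk often, but not always, climbs over the ridge near x = 47 and reaches the global minimum; the probability-1 guarantee stated on the next slide holds only for sufficiently slow cooling.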


Properties of simulated annealing search

One can prove: If T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1

Widely used in VLSI layout, airline scheduling, etc.


Local beam search

Keep track of k states rather than just one in hill climbing

Start with k randomly generated states

At each iteration, all the successors of all k states are generated

If any one is a goal state, stop; else select the k best successors from the complete list and repeat.

Comparison to random-restart hill climbing: information is shared among the k search threads. If one state generates a good successor while the others do not, the effect is "come over here, the grass is greener!"
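The steps above can be sketched for 8-queens (parameters and names are illustrative): keep k states, pool all single-queen moves of all of them each generation, and keep the k best from the combined pool.

```python
import random

def conflicts(state):
    """Pairs of queens attacking each other; state[c] = row of queen in column c."""
    n = len(state)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if state[a] == state[b] or abs(state[a] - state[b]) == b - a)

def local_beam_search(k=4, n=8, generations=30):
    random.seed(2)
    states = [[random.randrange(n) for _ in range(n)] for _ in range(k)]
    start_best = min(conflicts(s) for s in states)
    best = min(states, key=conflicts)
    for _ in range(generations):
        # pool the successors of all k states, then keep the k best overall
        pool = []
        for s in states:
            for col in range(n):
                for row in range(n):
                    if row != s[col]:
                        t = list(s)
                        t[col] = row
                        pool.append(t)
        states = sorted(pool, key=conflicts)[:k]
        if conflicts(states[0]) < conflicts(best):
            best = states[0]
        if conflicts(best) == 0:
            break                         # goal state found
    return best, start_best

best, start_best = local_beam_search()
```

Selecting the k best from the *combined* pool is what shares information between the threads: a state whose successors all look good can claim several of the k slots.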


Genetic algorithms

Variant of local beam search with sexual recombination.


A successor state is generated by combining two parent states

Start with k randomly generated states (population)

A state is represented as a string over a finite alphabet (often a string of 0s and 1s)

Evaluation function (fitness function). Higher values for better states.

Produce the next generation of states by selection, crossover, and mutation


Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)

Fitness-proportional selection probabilities, e.g., 24/(24+23+20+11) = 31%; 23/(24+23+20+11) = 29%; etc.
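Putting the pieces together, here is a sketch of a genetic algorithm for 8-queens with the fitness function above, roulette-wheel selection, one-point crossover, and a small mutation rate; all parameter values are illustrative assumptions.

```python
import random

def fitness(state):
    """Number of non-attacking pairs of queens (max = 8 * 7 / 2 = 28)."""
    n = len(state)
    attacking = sum(1 for a in range(n) for b in range(a + 1, n)
                    if state[a] == state[b] or abs(state[a] - state[b]) == b - a)
    return n * (n - 1) // 2 - attacking

def select(pop):
    """Roulette-wheel selection: probability proportional to fitness,
    matching the 31% / 29% example above."""
    total = sum(fitness(s) for s in pop)
    r = random.uniform(0, total)
    acc = 0
    for s in pop:
        acc += fitness(s)
        if acc >= r:
            return s
    return pop[-1]

def genetic_algorithm(pop_size=20, generations=200, p_mut=0.1):
    random.seed(3)
    pop = [[random.randrange(8) for _ in range(8)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    start_best = fitness(best)
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            x, y = select(pop), select(pop)
            c = random.randrange(1, 8)           # one-point crossover
            child = x[:c] + y[c:]
            if random.random() < p_mut:          # mutation: re-randomise one column
                child[random.randrange(8)] = random.randrange(8)
            new_pop.append(child)
        pop = new_pop
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best                      # track the best state seen so far
        if fitness(best) == 28:
            break
    return best, start_best

best, start_best = genetic_algorithm()
```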


References

Slides provided by Prof. Russell at http://aima.cs.berkeley.edu/

AIMA chapter 4

