
Page 1

Introduction to Artificial Intelligence

Heuristic Search

Ruth Bergman

Fall 2004

Page 2

Search Strategies

• Uninformed search (= blind search)
  – has no information about the number of steps or the path cost from the current state to the goal

• Informed search (= heuristic search)
  – has some domain-specific information
  – we can use this information to speed up search
  – e.g., Bucharest is southeast of Arad
  – e.g., the number of tiles that are out of place in an 8-puzzle position
  – e.g., for the missionaries and cannibals problem, select moves that move people across the river quickly

Page 3

Heuristic Search

• Suppose that we have one piece of information: a heuristic function
  – h(n) = 0 if n is a goal node
  – h(n) > 0 if n is not a goal node
  – we can think of h(n) as a "guess" at how far n is from the goal

Best-First-Search(initial-state)   ;; functions h, succ, and GoalTest defined
  open <- MakePriorityQueue(initial-state, h(initial-state))
  while not(empty(open))
    node <- pop(open), state <- node-state(node)
    if GoalTest(state) succeeds return node
    for each child in succ(state)
      open <- push(child, h(child))
  return failure
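For concreteness, here is a minimal Python sketch of the same loop. The heapq module serves as the priority queue; the callables h, succ, and goal_test are assumed to be supplied by the caller and are not defined on the slides.

  import heapq
  import itertools

  def best_first_search(initial_state, h, succ, goal_test):
      # Priority queue ordered by h; the counter breaks ties so heapq
      # never has to compare two states directly.
      counter = itertools.count()
      open_list = [(h(initial_state), next(counter), initial_state)]
      while open_list:
          _, _, state = heapq.heappop(open_list)   # pop the lowest-h node
          if goal_test(state):
              return state
          for child in succ(state):
              heapq.heappush(open_list, (h(child), next(counter), child))
      return None                                  # failure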

Page 4

Heuristics: Example

• Travel: h(n) = distance(n, goal)

[Map of Romania with road distances; the straight-line distance h(n) to Bucharest is shown for each city:]

  Oradea (380)     Zerind (374)           Arad (366)
  Timisoara (329)  Sibiu (253)            Lugoj (244)
  Dobreta (242)    Neamt (234)            Iasi (226)
  Vaslui (199)     Rimnicu Vilcea (193)   Fagaras (178)
  Eforie (161)     Craiova (160)          Hirsova (151)
  Pitesti (98)     Urziceni (80)          Giurgiu (77)
  Bucharest (0)    Mehadia


Page 5

Heuristics: Example

• 8-puzzle: h(n) = tiles out of place (the blank "_" is not counted)

  Current state (h(n) = 3):     Goal state:
    1 2 3                         1 2 3
    8 6 _                         8 _ 4
    7 5 4                         7 6 5

  (Tiles 4, 5, and 6 are out of place.)

Page 6

Example - cont

Expanding the current state (h(n) = 3):

    1 2 3
    8 6 _
    7 5 4

Its successors:

    1 2 _        1 2 3        1 2 3
    8 6 3        8 _ 6        8 6 4
    7 5 4        7 5 4        7 5 _
    h(n) = 4     h(n) = 3     h(n) = 2

Page 7

The search continues from the best successor (h(n) = 2). Expanding it yields:

    1 2 3        1 2 3
    8 6 _        8 6 4
    7 5 4        7 _ 5
    h(n) = 3     h(n) = 1

Page 8

Expanding the h(n) = 1 state yields:

    1 2 3        1 2 3        1 2 3
    8 6 4        8 _ 4        8 6 4
    7 5 _        7 6 5        _ 7 5
    h(n) = 2     h(n) = 0     h(n) = 2

The h(n) = 0 successor is the goal state.

Page 9

Best-First-Search Performance

• Completeness
  – Complete if the search tree has finite depth, or if every operator decreases h by at least some minimum amount
• Time complexity
  – Depends on how good the heuristic function is
  – A "perfect" heuristic function would lead the search directly to the goal
  – We rarely have a "perfect" heuristic function
• Space complexity
  – Maintains the fringe of the search in memory
  – High storage requirement
• Optimality
  – Non-optimal solutions. Suppose the heuristic drops to 1 everywhere except along the path on which the solution lies: best-first search will pursue the misleadingly low values and can return a worse solution.

[Figure: two paths to a goal x, one with h values 3, 2, 1 and one where h = 1 everywhere.]

Page 10

Iterative Improvement Algorithms

• Start with a complete configuration and make modifications to improve its quality
• Consider the states laid out on the surface of a landscape
• Keep track of only the current state => a simplification of Best-First-Search
• Do not look ahead beyond the immediate neighbors of that state
  – Ex: an amnesiac climbing to a summit in thick fog

Page 11

Iterative Improvement Basic Principle

“Like climbing Everest in thick fog with amnesia”

Page 12

Hill-Climbing

• A simple loop that continually moves in the direction of decreasing heuristic value
• Does not maintain a search tree, so the node data structure need only record the state and its evaluation
• Always tries to make changes that improve the current state
• Steepest-descent: pick the best (lowest-valued) next state

Hill-Climbing(initial-state)
  state <- initial-state
  do forever
    next <- minimum-valued successor of state
    if h(next) >= h(state) return state
    state <- next
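A minimal Python sketch of the same steepest-descent loop, again assuming caller-supplied h and succ (succ is assumed to return a list of successor states):

  def hill_climbing(initial_state, h, succ):
      # Steepest-descent: repeatedly move to the minimum-valued successor,
      # stopping as soon as no successor improves on the current state.
      state = initial_state
      while True:
          neighbors = succ(state)
          if not neighbors:
              return state
          best = min(neighbors, key=h)
          if h(best) >= h(state):   # local minimum (or goal) reached
              return state
          state = best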

Page 13

8-queens

A state places 8 queens on the board, one per column. The successor function returns all states generated by moving a single queen to another square in the same column (8 × 7 = 56 next states).

h(s) = number of pairs of queens that attack each other in state s.

[Left board: h(s) = 17; the best successor has h = 12. Right board: h(s) = 1, a local minimum.]
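As an illustration of this heuristic, here is a small Python function that counts attacking pairs, assuming a state is represented as a sequence of 8 row indices, one per column (a representation chosen here for illustration, not specified on the slide):

  def attacking_pairs(state):
      # state[i] is the row of the queen in column i, so two queens can
      # clash only on a row or a diagonal, never on a column.
      pairs = 0
      for i in range(len(state)):
          for j in range(i + 1, len(state)):
              if state[i] == state[j] or abs(state[i] - state[j]) == j - i:
                  pairs += 1
      return pairs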

Page 14

Drawbacks

• Local maxima/minima: the search halts at a local maximum/minimum
• Plateaux: the search degenerates to a random walk
• Ridges: the search oscillates from side to side, limiting progress

• A common remedy, Random-Restart Hill-Climbing, conducts a series of hill-climbing searches from randomly generated initial states.

Page 15

Hill-Climbing Performance

• Completeness
  – Not complete; does not use a systematic search method
• Time complexity
  – Depends on the heuristic function
• Space complexity
  – Very low storage requirement
• Optimality
  – Non-optimal solutions; often results in a locally optimal solution

Page 16

Simulated-Annealing

• Takes some uphill steps to escape local minima
• Instead of picking the best move, it picks a random move
• If the move improves the situation, it is executed; otherwise, it is made with some probability less than 1
• Physical analogy with the annealing process:
  – allowing a liquid to cool gradually until it freezes
• The heuristic value plays the role of the energy, E
• A temperature parameter, T, controls the speed of convergence

Page 17

Simulated-Annealing Algorithm

Simulated-Annealing(initial-state, schedule)
  state <- initial-state
  for t = 1, 2, ...
    T <- schedule(t)
    if T = 0 return state
    next <- a randomly selected successor of state
    ΔE <- h(state) - h(next)
    if ΔE > 0 state <- next
    else state <- next with probability e^(ΔE/T)
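A minimal Python rendering of the algorithm above, under the same assumptions as the earlier sketches (caller-supplied h, succ, and schedule):

  import itertools
  import math
  import random

  def simulated_annealing(initial_state, schedule, h, succ):
      state = initial_state
      for t in itertools.count(1):
          T = schedule(t)
          if T == 0:
              return state
          nxt = random.choice(succ(state))
          delta_e = h(state) - h(nxt)       # positive when nxt is an improvement
          if delta_e > 0 or random.random() < math.exp(delta_e / T):
              state = nxt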

Page 18

Simulated Annealing

[Plot: solution quality over time under the schedule T = 100 - 5t, with the region where the acceptance probability exceeds 0.9 marked.]

• The schedule determines the rate at which the temperature is lowered
• If the schedule lowers T slowly enough, the algorithm will find a global optimum
• A high temperature T is characterized by a large proportion of accepted uphill moves, whereas at low temperature only downhill moves are accepted

=> If a suitable annealing schedule is chosen, simulated annealing has been found capable of finding a good solution, though this is not guaranteed to be the absolute minimum.
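As an illustration, the linear schedule from the plot above (T = 100 - 5t) might be written as:

  def schedule(t):
      # Linear cooling, clipped at zero so the algorithm terminates
      return max(0, 100 - 5 * t)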

Page 19

Beam Search

• Overcomes the storage complexity of Best-First-Search
• Maintains the k best nodes in the fringe of the search tree (sorted by the heuristic function)
• When k = 1, beam search is equivalent to Hill-Climbing
• When k is infinite, beam search is equivalent to Best-First-Search
• If you add a check to avoid repeated states, the memory requirement remains high
• Incomplete: the search may delete the path to the solution

Page 20

Beam Search Algorithm

Beam-Search(initial-state, k)
  open <- MakePriorityQueue(initial-state, h(initial-state))
  while not(empty(open))
    node <- pop(open), state <- node-state(node)
    if GoalTest(state) succeeds return node
    for each child in succ(state)
      open <- push(child, h(child))
      if size(open) > k
        delete last item in open
  return failure
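A minimal Python sketch of this bounded-fringe variant, reusing the conventions of the earlier best-first sketch; heapq.nsmallest is used to trim the fringe to the k best entries:

  import heapq
  import itertools

  def beam_search(initial_state, k, h, succ, goal_test):
      counter = itertools.count()   # tie-breaker, as in best_first_search
      open_list = [(h(initial_state), next(counter), initial_state)]
      while open_list:
          _, _, state = heapq.heappop(open_list)
          if goal_test(state):
              return state
          for child in succ(state):
              heapq.heappush(open_list, (h(child), next(counter), child))
              if len(open_list) > k:
                  # keep only the k best entries; the pruned entry may lie
                  # on the only path to the goal, hence incompleteness
                  open_list = heapq.nsmallest(k, open_list)
                  heapq.heapify(open_list)
      return None                   # failure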

Page 21

Search Performance

Results on the 8-puzzle (nodes expanded / solution length under each heuristic):

  Search Algorithm      Heuristic 1: tiles out of place   Heuristic 2: Manhattan distance*
  Iterative Deepening   1105 / 9 (no heuristic used)
  Hill-climbing         2 / no solution found             10 / 9
  Best-first            495 / 24                          9 / 9

*Manhattan distance = the total number of horizontal and vertical moves required to move all tiles from their current positions to their positions in the goal state.

=> The choice of heuristic is critical to the performance of a heuristic search algorithm.

8-puzzle example position:

  Current state:    Goal state:
    3 2 5             _ 1 2
    _ 7 1             3 4 5
    4 6 8             6 7 8

  h1 = 7
  h2 = 2+1+1+2+1+1+1+0 = 9
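For illustration, here are minimal Python versions of the two heuristics, assuming a state is a 9-tuple in row-major order with 0 for the blank (a representation chosen here, not given on the slide). The asserts check the boards above.

  def tiles_out_of_place(state, goal):
      # Heuristic 1: number of tiles (excluding the blank, 0) not on
      # their goal square.
      return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

  def manhattan_distance(state, goal):
      # Heuristic 2: total horizontal + vertical moves needed to bring
      # every tile from its current square to its goal square.
      total = 0
      for idx, tile in enumerate(state):
          if tile == 0:
              continue
          gidx = goal.index(tile)
          total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
      return total

  state = (3, 2, 5, 0, 7, 1, 4, 6, 8)
  goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)
  assert tiles_out_of_place(state, goal) == 7
  assert manhattan_distance(state, goal) == 9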

Page 22

BLAST

• Basic Local Alignment Search Tool
• Provides a method for rapidly searching nucleotide and protein databases
• A heuristic approach to the local alignment problem