Heuristic Search 1
KU NLP
Ch 4. Heuristic Search
4.0 Introduction (Heuristic)
4.1 An Algorithm for Heuristic Search
 4.1.1 Implementing Best-First Search
 4.1.2 Implementing Heuristic Evaluation Functions
 4.1.3 Heuristic Search and Expert Systems
4.3 Using Heuristics in Games
 4.3.1 Minimax Procedure
 4.3.2 Minimaxing to Fixed Ply Depth
 4.3.3 Alpha-Beta Procedure
4.0 Heuristic (1)
Definition: the study of the methods and rules of discovery and invention.
Heuristics are formalized as rules for choosing those branches in a state space that are most likely to lead to an acceptable problem solution.
Heuristics are employed in two cases:
 A problem may not have an exact solution because of its inherent ambiguities, e.g. medical diagnosis, vision.
 A problem may have an exact solution, but the computational cost of finding it may be prohibitive, e.g. chess.
Heuristics attack this complexity by guiding the search along the most promising path.
4.0 Heuristic (2)
Heuristics are fallible
 A heuristic is only an informed guess of the next step to be taken in solving a problem.
 Because heuristics use limited information, a heuristic can lead a search algorithm to a suboptimal solution or fail to find any solution at all.
Applications of AI using heuristics
 Game playing and theorem proving
  It is not feasible to examine every inference that can be made in a mathematics domain or every possible move that can be made on a chessboard.
 Expert systems
  Expert system designers extract and formalize the heuristics which human experts use to solve problems efficiently.
4.0 Heuristic (3)
A Heuristic for the Tic-Tac-Toe Game
 Exhaustive search: 9! paths.
 Symmetry reduction can decrease the search space a little: 12 * 7! (Fig. 4.1, p125, tp5)
  There are nine first moves, but really only three: center, corner, or side of the grid.
  Symmetry reductions on the second level further reduce the number of paths.
 A simple heuristic (Fig. 4.2, p126, tp6) can eliminate two-thirds of the search space with the first move. (Fig. 4.3, p126)
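The "most winning lines" idea behind the heuristic of Fig. 4.2 can be sketched as follows. This is a minimal illustration, not code from the text: the board encoding (a 9-element row-major list) and the function names are assumptions.

```python
# Score each empty square by the number of rows, columns, and
# diagonals through it that the opponent has not yet blocked.
WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
    (0, 4, 8), (2, 4, 6),              # diagonals
]

def open_lines(board, square, player):
    """Winning lines through `square` not blocked by the opponent."""
    opponent = 'o' if player == 'x' else 'x'
    return sum(1 for line in WIN_LINES
               if square in line and all(board[i] != opponent for i in line))

def best_move(board, player='x'):
    """The empty square lying on the most open winning lines."""
    empty = [i for i, mark in enumerate(board) if mark == ' ']
    return max(empty, key=lambda sq: open_lines(board, sq, player))

print(best_move([' '] * 9))   # 4: the center lies on four winning lines
```

On an empty board the center (4 lines) beats any corner (3) or side (2), which is exactly why the heuristic discards two-thirds of the first-move alternatives.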
(4.0 Heuristic (4)-(6): figure slides showing Figs. 4.1-4.3, the reduced tic-tac-toe search space and the "most winning lines" heuristic.)
A Cryptarithmetic Problem
A useful heuristic can help to select the best guess to try first.
If there is a letter that participates in many constraints, then it is a good idea to prefer it to a letter that participates in a few.
SEND + MORE = MONEY
(Figure: a search tree for the puzzle. From the initial state, constraint propagation gives M=1; S=8 or 9; O=0 or 1, so O=0; N=E or E+1, so N=E+1; C2=1, N+R>8, E<>9. Trying E=2 gives N=3 and R=8 or 9, with 2+D=Y or 2+D=10+Y; the branches D=8, D=9, Y=0, Y=1 are explored and the conflicting ones pruned.)
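For reference, the puzzle can also be solved by exhaustive assignment. The sketch below fixes M=1 (the carry out of a four-digit sum, as derived in the tree) and brute-forces the remaining letters; the function name is illustrative.

```python
# Brute-force SEND + MORE = MONEY. The heuristic above would instead
# branch on the most constrained letter first; exhaustive assignment
# simply confirms the unique solution.
from itertools import permutations

def solve():
    letters = 'SENDORY'                     # M is fixed to 1 below
    for digits in permutations([0, 2, 3, 4, 5, 6, 7, 8, 9], 7):
        a = dict(zip(letters, digits))
        a['M'] = 1
        if a['S'] == 0:                     # no leading zero
            continue
        send  = int(''.join(str(a[c]) for c in 'SEND'))
        more  = int(''.join(str(a[c]) for c in 'MORE'))
        money = int(''.join(str(a[c]) for c in 'MONEY'))
        if send + more == money:
            return send, more, money
    return None

print(solve())   # (9567, 1085, 10652)
```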
Hill Climbing Strategy (1)
Simplest way of implementing heuristic search
 Expand the current state, evaluate its children, and select the best child for further expansion. Halt the search when it reaches a state that is better than any of its children (i.e. there is no child that is better than the current state).
Like a blind mountain climber: go uphill along the steepest possible path until we can go no farther.
Because it keeps no history, the algorithm cannot recover from failures.
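The loop just described can be sketched as follows; this is a minimal illustration in which `successors` and `score` are assumed to be supplied by the problem.

```python
def hill_climb(state, successors, score):
    """Greedy ascent: move to the best child until no child improves."""
    while True:
        children = successors(state)
        if not children:
            return state
        best = max(children, key=score)
        if score(best) <= score(state):   # no child is better: halt
            return state
        state = best                      # note: no history is kept

# Toy usage: maximize f(x) = -(x - 3)^2 over integer steps.
result = hill_climb(0,
                    successors=lambda x: [x - 1, x + 1],
                    score=lambda x: -(x - 3) ** 2)
print(result)   # 3
```

Because the loop discards everything but the current state, it cannot back up once it commits to a bad slope, which is exactly the failure mode discussed next.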
Hill Climbing Strategy (2)
Major problem: the tendency to become stuck at local maxima.
 If the algorithm reaches a local maximum, it fails to find a solution.
 An example of local maxima in the 8-puzzle:
  In order to move a particular tile to its destination, other tiles that are already in goal position have to be moved. This move may temporarily worsen the board state.
If the evaluation function is sufficiently informative to avoid local maxima and infinite paths, hill climbing can be used effectively.
Blocks World Problem (1)
(Figure: eight blocks A-H. In the initial state the blocks form a single stack, from bottom to top, B C D E F G H with A on top; in the goal state the stack is A B C D E F G H from bottom to top. State 1 moves A onto the table. States 2-(a), 2-(b), and 2-(c) are the successors of state 1: A placed back on H, H placed on A, and H placed on the table, respectively.)
Blocks World Problem (2)
Local heuristic: add one point for every block that is resting on the thing it is supposed to be resting on; subtract one point for every block that is sitting on the wrong thing.
 initial state: (-1) + 1 + 1 + 1 + 1 + 1 + 1 + (-1) = 4
 goal state: 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 = 8
 state 1: 1 + 1 + 1 + 1 + 1 + 1 + 1 + (-1) = 6
 state 2-(a): (-1) + 1 + 1 + 1 + 1 + 1 + 1 + (-1) = 4
 state 2-(b): (-1) + 1 + 1 + 1 + 1 + 1 + (-1) + 1 = 4
 state 2-(c): (-1) + 1 + 1 + 1 + 1 + 1 + (-1) + 1 = 4
Blocks World Problem (3)
Global heuristic: for each block that has the correct support structure (i.e. the complete structure underneath it is exactly as it should be), add one point for every block in the support structure; for each block that has an incorrect support structure, subtract one point for every block in the existing support structure.
 initial state: (-7) + (-6) + (-5) + (-4) + (-3) + (-2) + (-1) = -28
 goal state: 7 + 6 + 5 + 4 + 3 + 2 + 1 = 28
 state 1: (-6) + (-5) + (-4) + (-3) + (-2) + (-1) = -21
 state 2-(a): (-7) + (-6) + (-5) + (-4) + (-3) + (-2) + (-1) = -28
 state 2-(b): (-5) + (-4) + (-3) + (-2) + (-1) + (-1) = -16
 state 2-(c): (-5) + (-4) + (-3) + (-2) + (-1) = -15
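Both evaluation functions can be sketched as follows. The encoding is an assumption for illustration: a state maps each block to what it rests on ('table' or another block), with the goal stack A at the bottom through H on top as in the figure.

```python
GOAL = {'A': 'table', 'B': 'A', 'C': 'B', 'D': 'C',
        'E': 'D', 'F': 'E', 'G': 'F', 'H': 'G'}

def support(state, block):
    """The chain of blocks underneath `block`, nearest first."""
    chain, below = [], state[block]
    while below != 'table':
        chain.append(below)
        below = state[below]
    return chain

def local_score(state):
    # +1 per block on the right thing, -1 per block on the wrong thing.
    return sum(1 if state[b] == GOAL[b] else -1 for b in state)

def global_score(state):
    # +/- one point per block in each block's support structure,
    # depending on whether the structure underneath is exactly correct.
    total = 0
    for b in state:
        chain = support(state, b)
        correct = chain == support(GOAL, b)
        total += len(chain) if correct else -len(chain)
    return total

# Initial state of the figure: stack B..H with A on top.
initial = {'B': 'table', 'C': 'B', 'D': 'C', 'E': 'D',
           'F': 'E', 'G': 'F', 'H': 'G', 'A': 'H'}
print(local_score(initial), global_score(initial))   # 4 -28
```

Running both functions on the initial state reproduces the 4 and -28 of the slides, showing how much more sharply the global heuristic separates states near the goal from states far from it.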
4.1.1 Best-First Search (1)
Like depth-first and breadth-first search, best-first search uses two lists:
 OPEN: to keep track of the current fringe of the search.
 CLOSED: to record states already visited.
Order the states on OPEN according to some heuristic estimate of their closeness to a goal.
 Each iteration of the loop considers the most promising state on the OPEN list.
Procedure best_first_search (p.128, tp15~16)
 The states on OPEN are sorted according to their heuristic values.
 If a child state is already on OPEN or CLOSED, the algorithm checks whether the child has been reached by a shorter path.
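The procedure can be sketched as below. This is a minimal version, not the text's pseudocode verbatim: it omits the shorter-path check for states already on OPEN or CLOSED, and `successors` and `h` are assumed to be supplied by the problem.

```python
# OPEN is a priority queue ordered by heuristic value; CLOSED records
# visited states. Ties on h are broken by comparing states themselves.
import heapq

def best_first_search(start, goal, successors, h):
    open_list = [(h(start), start, [start])]   # (heuristic, state, path)
    closed = set()
    while open_list:
        _, state, path = heapq.heappop(open_list)  # most promising state
        if state == goal:
            return path
        if state in closed:
            continue
        closed.add(state)
        for child in successors(state):
            if child not in closed:
                heapq.heappush(open_list, (h(child), child, path + [child]))
    return None   # OPEN exhausted: failure

# Toy usage on a small graph with made-up heuristic estimates.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
est = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(best_first_search('A', 'D', lambda s: graph[s], lambda s: est[s]))
# ['A', 'C', 'D']
```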
4.1.1 Best-First Search (4)
A hypothetical state space with heuristic evaluation (Fig. 4.4 p129)
A trace of the execution of best_first_search:
 1. OPEN=[A5]; CLOSED=[ ]
 2. evaluate A5; OPEN=[B4,C4,D6]; CLOSED=[A5]
 3. evaluate B4; OPEN=[C4,E5,F5,D6]; CLOSED=[B4,A5]
 4. evaluate C4; OPEN=[H3,G4,E5,F5,D6]; CLOSED=[C4,B4,A5]
 5. evaluate H3; OPEN=[O2,P3,G4,E5,F5,D6]; CLOSED=[H3,C4,B4,A5]
 ...
In the event a heuristic leads the search down a path that proves incorrect, the algorithm shifts its focus to another part of the space.
 A B ... (E, F) C: the focus shifts from B to C, but the children of B (E and F) are kept on OPEN in case the algorithm returns to them later.
The goal of best-first search is to find the goal state by looking at as few states as possible. (Fig. 4.5, p130, tp19)
4.1.2 Heuristic Evaluation Functions (1)
Several heuristics for solving the 8-puzzle (Fig. 4.8, p132, tp21):
 count the number of tiles out of place compared with the goal
 sum all the distances by which the tiles are out of place
 multiply a small number times each direct tile reversal (Fig. 4.7)
 add the sum of the distances out of place and 2 times the number of direct reversals
The design of good heuristics is an empirical problem: the measure of a heuristic must be its actual performance on problem instances.
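The first two heuristics above can be sketched as follows. The board encoding (9-tuples read row by row, 0 for the blank) is an assumption; the sample states follow the text's 8-puzzle figures.

```python
def tiles_out_of_place(state, goal):
    """Heuristic 1: count tiles not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def distance_out_of_place(state, goal):
    """Heuristic 2: sum of the (Manhattan) distances of the tiles
    from their goal positions."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                      # ignore the blank
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)
start = (2, 8, 3, 1, 6, 4, 7, 0, 5)
print(tiles_out_of_place(start, goal))      # 4
print(distance_out_of_place(start, goal))   # 5
```

The distance heuristic dominates the simple count (5 vs. 4 here) because it also credits how far each misplaced tile must travel, which is the "more informed" behavior the text asks of a good heuristic.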
4.1.2 Heuristic Evaluation Functions (3)
With the same heuristic evaluations, it is preferable to examine the state that is nearest to the root.
 The distance from the starting state to its descendants can be measured by maintaining a depth count for each state.
f(n) = g(n) + h(n), where g(n) measures the actual length of the path from the start state to state n, and h(n) is a heuristic estimate of the distance from state n to a goal.
Example of the heuristic f in the 8-puzzle (Fig. 4.9, p133, tp23)
4.1.2 Heuristic Evaluation Functions (5)
State space generated in heuristic search (Fig. 4.10, p135, tp25)
 Each state is labeled with a letter and its heuristic weight, f(n) = g(n) + h(n), where g(n) = actual distance from the start state to n, and h(n) = number of tiles out of place.
The successive stages of OPEN and CLOSED are:
 1. OPEN=[a4]; CLOSED=[ ]
 2. OPEN=[c4,b6,d6]; CLOSED=[a4]
 3. OPEN=[e5,f5,g6,b6,d6]; CLOSED=[a4,c4]
 4. OPEN=[f5,h6,g6,b6,d6,i7]; CLOSED=[a4,c4,e5]
 ...
Although the state h, the immediate child of e, has the same number of tiles out of place as f, it is one level deeper in the state space. The depth measure, g(n), causes the algorithm to select f for evaluation.
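Adding the depth count g(n) to best-first search can be sketched as below; this is a minimal illustration in which `successors` and `h` are problem-specific assumptions, and the shorter-path bookkeeping is again omitted.

```python
# Best-first search ordered by f(n) = g(n) + h(n): g is the depth
# (path length from the start), h the heuristic estimate to the goal.
import heapq

def f_search(start, is_goal, successors, h):
    open_list = [(h(start), 0, start, [start])]   # (f, g, state, path)
    closed = set()
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if is_goal(state):
            return path
        if state in closed:
            continue
        closed.add(state)
        for child in successors(state):
            if child not in closed:
                heapq.heappush(open_list,
                               (g + 1 + h(child), g + 1, child, path + [child]))
    return None

# Toy usage with h = 0: the g term alone already prevents the search
# from persisting indefinitely on a deep, fruitless path.
graph = {'a': ['b', 'd'], 'b': ['c'], 'd': ['c'], 'c': []}
print(f_search('a', lambda s: s == 'c', lambda s: graph[s], lambda s: 0))
```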
4.1.2 Heuristic Evaluation Functions (7)
1. Operations on states generate the children of the state currently under examination.
2. Each new state is checked to see whether it has occurred before, thereby preventing loops.
3. Each state n is given an f value equal to the sum of its depth in the search space, g(n), and a heuristic estimate of its distance to a goal, h(n). The h value guides search toward heuristically promising states, while the g value prevents search from persisting indefinitely on a fruitless path.
4. States on OPEN are sorted by their f values. By keeping all states on OPEN until they are examined or a goal is found, the algorithm can recover from fruitless paths. At any one time, OPEN may contain states at different levels of the state space graph (Fig. 4.11), allowing full flexibility in changing the focus of the search.
5. The efficiency of the algorithm can be improved by careful maintenance of the OPEN and CLOSED lists.
Example of Best First Search (1)
(Figure: an example search graph with initial node a and goal node q. Each edge is labeled with its cost, and each node with an estimated cost to the goal.)
G value: cost of getting from the initial node to the current node
H value: estimated cost from the current node to the goal node
F = G + H
Example of Best First Search (2)
    OPEN                                    PATH
 1. ((A 2))                                 Node A
 2. ((B 4) (C 5) (D 8))                     Node B (AB)
 3. ((C 5) (D 8) (F 10) (E 11))             Node C (AC)
 4. ((G 6) (D 8) (F 8) (E 11))              Node G (ACG)
 5. ((D 8) (F 8) (E 11) (J 11) (K 12))      Node D (AD)
 6. ((F 8) (H 10) (E 11) (J 11) (K 12))     Node F (ACF)  /* (G 6) */
 7. ((H 10) (I 10) (J 10) (E 11) (K 12))    Node H (ADH)
 8. ((I 10) (J 10) (E 11) (K 11))           Node I (ACFI)
 9. ((J 10) (E 11) (K 11) (L 13))           Node J (ACFJ)
10. ((M 10) (E 11) (K 11) (L 13))           Node M (ACFJM)
11. ((E 11) (K 11) (O 11) (L 13))           Node E (ABE)
12. ((K 11) (O 11) (L 13))                  Node K (ADHK)
13. ((O 11) (L 13) (N 15))                  Node O (ACFJMO)
14. ((Q 11) (L 13) (N 15))                  (ACFJMOQ)
Example of Best First Search (3)
OPEN                       CLOSED                                            PATH
[A2]                       [ ]                                               A
[B4, C5, D8]               [A2]                                              (AB)
[C5, D8, F10, E11]         [A2, B4]                                          (AC)
[G6, D8, F8, E11]          [A2, B4, C5]                                      (ACG)
[D8, F8, E11, J11, K12]    [A2, B4, C5, G6]                                  (AD)
[F8, H10, E11, J11, K12]   [A2, B4, C5, G6, D8]                              (ACF)
[H10, I10, J10, E11, K12]  [A2, B4, C5, G6, D8, F8]                          (ADH)
[I10, J10, E11, K11]       [A2, B4, C5, G6, D8, F8, H10]                     (ACFI)
[J10, E11, K11, L13]       [A2, B4, C5, G6, D8, F8, H10, I10]                (ACFJ)
[M10, E11, K11, L13]       [A2, B4, C5, G6, D8, F8, H10, I10, J10]           (ACFJM)
[E11, K11, O11, L13]       [A2, B4, C5, G6, D8, F8, H10, I10, J10, M10]      (ABE)
[K11, O11, L13]            [A2, B4, C5, G6, D8, F8, H10, I10, J10, M10, E11] (ADHK)
[O11, L13, N15]            [A2, B4, C5, G6, D8, F8, H10, I10, J10, M10, E11, K11] (ACFJMO)
[Q11, L13, N15]            [A2, B4, C5, G6, D8, F8, H10, I10, J10, M10, E11, K11, O11] (ACFJMOQ)