
Basic Problem-Solving Strategies


Chapter 2

Basic Problem-Solving Strategies

——————————————

Christian Jacob ([email protected])

Department of Computer Science, University of Calgary

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games
2.5 Efficient Searching (SMA*)

2.1 Basic Search Techniques

2.1.1 Introductory Concepts and Examples
2.1.2 Depth-First or Backtracking Search
2.1.3 Iterative Deepening Search
2.1.4 Breadth-First Search

Analyzing and Exploring Problem Spaces

Problem space: the complete set of possible states, generated by exploring all possible steps, or moves, which may or may not lead from a given start state to a goal state.

What should we know for a preliminary analysis of a search problem?

What are the givens? Do we have all of them?
– Are the givens specific to a particular situation?
– Can we generalize?
– Is there a notation that represents the givens and other states succinctly?

What should we know for a preliminary analysis of a search problem? (cont.)

What is the goal?
– Is there a single goal, or are there several?
– If there is a single goal, can it be split into pieces?
– If there are several goals or subgoals, are they independent or are they connected?
– Are there any constraints on developing a solution?

Fox-Goose-Corn Problem: State Space Graph

[Figure: state space graph. Notation: items on the left bank are listed before the slash, items on the right bank after it (P = person, F = fox, G = goose, C = corn). The start state is P F G C / and the goal state is / P F G C.]

Components of a State Space Graph

– Start: description with which to label the start node
– Operators: functions that transform from one state to another, within the constraints of the search problem
– Goal condition: state description(s) that correspond(s) to goal state(s)
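These three components can be made concrete for the fox-goose-corn puzzle above. A minimal sketch in Python, with a state represented as the set of items on the left bank (P = person, F = fox, G = goose, C = corn), operators as safe crossings, and a breadth-first exploration of the state space:

```python
from collections import deque

# State: frozenset of who is on the left bank ('P' person, 'F' fox,
# 'G' goose, 'C' corn). Start: everyone left; goal: everyone right.
ALL = frozenset('PFGC')

def safe(left):
    # A bank without the person must not pair fox/goose or goose/corn.
    for side in (left, ALL - left):
        if 'P' not in side and 'G' in side and ('F' in side or 'C' in side):
            return False
    return True

def moves(left):
    # Operators: the person crosses alone or with one item from their bank.
    here = left if 'P' in left else ALL - left
    for cargo in [set()] + [{x} for x in here - {'P'}]:
        nxt = left ^ ({'P'} | cargo)
        if safe(nxt):
            yield nxt

def solve(start=ALL, goal=frozenset()):
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in moves(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

plan = solve()
print(len(plan) - 1)  # number of crossings: 7
```

The shortest plan is the familiar 7-crossing solution (take the goose over first, ferry fox and corn while shuttling the goose back).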

Blocks Rearrangement Problem

[Bratko, 2001]

Blocks Rearrangement Problem: State Space Graph

[Bratko, 2001]

Towers of Hanoi

[Figure: three pegs A, B, C]

Towers of Hanoi: Problem Decomposition

[Figure: to move an n-disk tower, move the tower with n−1 disks to the spare peg, move the largest disk, then move the tower with n−1 disks onto it]
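This decomposition maps directly onto a recursive procedure. A minimal sketch (peg names are arbitrary labels):

```python
def hanoi(n, src, spare, dst, moves):
    # Decompose: move n-1 disks to the spare peg, move the largest disk,
    # then move the n-1 disks from the spare peg onto the destination.
    if n == 0:
        return
    hanoi(n - 1, src, dst, spare, moves)
    moves.append((src, dst))
    hanoi(n - 1, spare, src, dst, moves)

moves = []
hanoi(3, 'A', 'B', 'C', moves)
print(len(moves))  # 2^3 - 1 = 7 moves
```

Each decomposition step doubles the work plus one move, so an n-disk tower takes 2^n − 1 moves.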

Eight-Queens Problem

8 queens on a chess board; no queen is attacked.

States: any arrangement of 0 to 8 queens on the board.

Operators: add a queen to any square.

92 solutions, 12 unique up to symmetry.
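The solutions can be counted by backtracking, placing one queen per row and pruning attacked columns and diagonals (the correct total for the 8×8 board is 92):

```python
# Count all placements of 8 non-attacking queens via backtracking.
# Sets track occupied columns and the two diagonal directions.
def count_queens(n=8):
    solutions = 0
    def place(row, cols, diag1, diag2):
        nonlocal solutions
        if row == n:
            solutions += 1
            return
        for col in range(n):
            if col not in cols and row - col not in diag1 and row + col not in diag2:
                place(row + 1, cols | {col}, diag1 | {row - col}, diag2 | {row + col})
    place(0, set(), set(), set())
    return solutions

print(count_queens())  # 92 solutions for the 8x8 board
```

The same routine gives 2 solutions for the 4×4 board, a handy sanity check.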

Travelling Salesperson Problem

www.cacr.caltech.edu/~manu/tsp.html

Eight Puzzle Problem

Start Goal

Start 1: 4 steps Start 2: 5 steps Start 3: 18 steps

Eight Puzzle Problem

[Bratko, 2001]

Chess

[Newborn, 1997] [Kurzweil, 1990]

Anatoly Karpov and Garry Kasparov, 1986

Rubik’s Cube

2.1 Basic Search Techniques

2.1.1 Introductory Concepts and Examples
2.1.2 Depth-First or Backtracking Search
2.1.3 Iterative Deepening Search
2.1.4 Breadth-First Search

2.1.2 Depth-First Search

[Bratko, 2001]

Depth-First Search: Eight-Puzzle

[Nilsson, 1998]

Depth-First Search in Cyclic Graphs

Add cycle detection! [Bratko, 2001]

Depth-First Search: Evaluation

Good:
– Since we don’t expand all nodes at a level, space complexity is modest: for branching factor b and maximum depth m, only b·m nodes need to be stored in memory.
– However, the worst-case time complexity is still O(b^m).

Bad:
– With deep search trees (or infinite ones, which are quite possible), DFS may run off to infinity and may not be able to recover.
– Thus DFS is neither optimal nor complete.
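A minimal recursive DFS sketch with the cycle detection mentioned above; the adjacency map is a hypothetical example graph:

```python
# Depth-first search on an explicit graph. A visited set provides the
# cycle detection needed to keep DFS from looping forever.
def dfs(graph, start, goal, path=None, visited=None):
    path = (path or []) + [start]
    visited = visited if visited is not None else set()
    if start == goal:
        return path
    visited.add(start)
    for nxt in graph.get(start, []):
        if nxt not in visited:
            result = dfs(graph, nxt, goal, path, visited)
            if result is not None:
                return result
    return None

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(dfs(graph, 'a', 'd'))  # ['a', 'b', 'd']
```

Note that DFS returns the first path it stumbles into, not necessarily the shallowest one.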

Depth-Limited Search

Modifies DFS to avoid its pitfalls.

Say that within a given area we had to find the shortest route visiting 10 cities. If we start at one of the cities, there are at most 9 other cities to visit, so 9 is the depth limit we impose.

Since we only impose a limit, little changes from DFS, except that we avoid searching an infinite path.

DLS is complete if the limit we impose is greater than or equal to the depth of the solution.

Depth-Limited Search: Space and Time Complexity

DLS is O(b^l) in time, where l is the limit we impose.

Space complexity is b·l.

Not optimal.

2.1 Basic Search Techniques

2.1.1 Introductory Concepts and Examples
2.1.2 Depth-First or Backtracking Search
2.1.3 Iterative Deepening Search
2.1.4 Breadth-First Search

2.1.3 Iterative Deepening Search

Depth bound = 1 2 3 4

[Nilsson, 1998]

Iterative Deepening Search: Evaluation

We look at the bottom-most nodes once, the level above that twice, the level above that three times, and so on, up to the root.

Therefore, the total number of node expansions is:

(d+1)·1 + d·b + (d−1)·b^2 + … + 3·b^(d−2) + 2·b^(d−1) + 1·b^d

Like DFS, IDS is still O(b^d) in time, while space complexity is b·d.
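Iterative deepening is just depth-limited search re-run with increasing bounds. A sketch on a hypothetical graph:

```python
def depth_limited(graph, node, goal, limit):
    """DFS that never descends more than `limit` edges below `node`."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for nxt in graph.get(node, []):
        rest = depth_limited(graph, nxt, goal, limit - 1)
        if rest is not None:
            return [node] + rest
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    # Re-run depth-limited search with bounds 0, 1, 2, ... so the first
    # solution found is at the shallowest possible depth.
    for limit in range(max_depth + 1):
        path = depth_limited(graph, start, goal, limit)
        if path is not None:
            return path
    return None

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(iterative_deepening(graph, 'a', 'd'))  # ['a', 'b', 'd']
```

The repeated shallow work looks wasteful, but the bottom level dominates the sum above, which is why IDS stays O(b^d).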

2.1 Basic Search Techniques

2.1.1 Introductory Concepts and Examples
2.1.2 Depth-First or Backtracking Search
2.1.3 Iterative Deepening Search
2.1.4 Breadth-First Search

2.1.4 Breadth-First Search

[Bratko, 2001]

Breadth-First Search: Eight-Puzzle

[Nilsson, 1998]

Breadth-First Search

Time and space complexity:
– Looking at how BFS expands from the root, it first expands a fixed number of nodes, say b.
– On the second level it expands b^2 nodes.
– On the third level it expands b^3 nodes.
– And so on, until it reaches b^d for some depth d.

1 + b + b^2 + b^3 + … + b^d, which is O(b^d)

Since all leaf nodes need to be stored in memory, space complexity is the same as time complexity.
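A queue-based BFS sketch on a hypothetical graph; because nodes are expanded level by level, the first path returned is a shallowest one:

```python
from collections import deque

def bfs(graph, start, goal):
    # FIFO frontier: expand nodes level by level.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(bfs(graph, 'a', 'd'))  # ['a', 'b', 'd'] (a shallowest path)
```

Swapping the deque's `popleft` for `pop` would turn this into DFS, which highlights that the two differ only in frontier discipline.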

Bi-directional Search

Simultaneously search from the start node and from the goal(s).

The search ends somewhere in the middle, when the two frontiers touch each other.

Time & space complexity: O(2·b^(d/2)) = O(b^(d/2))

It is complete and optimal.

Bi-directional Search

Problems:
• Do we know where the goal is?
• What if there is more than one possible goal state?
• We may be able to apply a multiple-state search, but this sounds a lot easier said than done. Example: how many checkmate states are there in chess?
• We may utilize many different methods of search, but which one should we choose?

Uniform Cost Search

Uniform Cost Search (UCS) is a modification of BFS.

BFS returns a solution, but it may not be optimal.

UCS takes into account the cost of moving from one node to the next.
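A UCS sketch using a priority queue ordered by path cost g(n); the weighted graph is hypothetical, chosen so that the cheapest path is not the shallowest one:

```python
import heapq

def uniform_cost(graph, start, goal):
    # Frontier ordered by path cost g(n); the first time the goal is
    # popped, its path is optimal (for non-negative edge costs).
    frontier = [(0, start, [start])]
    best = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nxt, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best.get(nxt, float('inf')):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

# The direct edge a-d (cost 5) loses to the deeper route a-c-d (cost 4).
graph = {'a': [('b', 1), ('c', 1), ('d', 5)], 'b': [('d', 4)], 'c': [('d', 3)]}
print(uniform_cost(graph, 'a', 'd'))  # (4, ['a', 'c', 'd'])
```

With all edge costs equal, UCS degenerates to BFS, which is the sense in which it is a modification of BFS.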

Uniform Cost Search: Example

[Russell & Norvig, 1995]

Categories of Search

Uninformed Search
– We can distinguish the goal state(s) from non-goal states.
– The path and cost to find the goal are unknown.
– Also known as blind search.

Informed Search
– We know something about the nature of our path that might increase the effectiveness of our search.
– Generally superior to uninformed search.

Uninformed Search Strategies

We covered these uninformed strategies:
• Depth-First Search
• Depth-Limited Search
• Iterative Deepening Search
• Breadth-First Search
• Bidirectional Search
• Uniform Cost Search

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games
2.5 Efficient Searching (SMA*)

2.2 Best-First Heuristic Search

2.2.1 Greedy Search

2.2.2 Best-First Heuristic Search (A*)
– Routing Problem
– Best-First Search for Scheduling

2.2.1 Greedy Search with Straight-Line Distance Heuristic

hSLD(n) = straight-line distance between n and the goal location

[Figure: route graph with nodes A–I and edge costs 75, 118, 111, 140, 80, 97, 99, 101, 211]

State   h(n)
A       366
B       374
C       329
D       244
E       253
F       178
G       193
H       98
I       0

[Figure: greedy expansion trace on the route graph. From A (h = 366), the successors E (h = 253), C (h = 329), and B (h = 374) are generated; E is expanded, yielding F (h = 178) and G (h = 193); F is expanded, yielding I (h = 0). Greedy search follows A — E — F — I.]

Total distance = 253 + 178 = 431

Optimality?
Cost(A — E — F — I) = 431
vs.
Cost(A — E — G — H — I) = 418

Completeness: Greedy Search is incomplete.

Worst-case time complexity: O(b^m)

[Figure: straight-line distance counterexample with nodes A through D, straight-line distances h = 0, 5, 7, 6; A is the target node and D the starting node, so greedy search is misled]

2.2.2 Best-First Search Heuristics

f(n) = g(n) + h(n)

where g(n) is the cost of the path from the start to n, and h(n) is the heuristic estimate of the remaining cost from n to the goal.

[Bratko, 2001]
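A compact A* sketch driven by f(n) = g(n) + h(n). The graph and heuristic values here are hypothetical, chosen so that h never overestimates the true remaining cost (an admissible heuristic):

```python
import heapq

def astar(graph, h, start, goal):
    # Frontier ordered by f(n) = g(n) + h(n): path cost so far plus the
    # heuristic estimate of the remaining cost to the goal.
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, step in graph.get(node, []):
            g2 = g + step
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None

graph = {'a': [('b', 2), ('c', 1)], 'b': [('d', 1)], 'c': [('d', 5)]}
h = {'a': 3, 'b': 1, 'c': 5, 'd': 0}
print(astar(graph, h, 'a', 'd'))  # (3, ['a', 'b', 'd'])
```

Setting h to zero everywhere collapses this to uniform cost search; the heuristic only changes which frontier node is tried first.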

Best-First Heuristic Search

[Figure: best-first search run as two processes, Process 1 and Process 2, with an activate-deactivate mechanism]

Routing Example

f(n) = g(n) + h(n)

[Figure: best-first (A*) expansion trace on the route graph, with the h(n) table as before: A 366, B 374, C 329, D 244, E 253, F 178, G 193, H 98, I 0]

– Expanding A (f = 0 + 366 = 366): f(B) = 75 + 374 = 449, f(C) = 118 + 329 = 447, f(E) = 140 + 253 = 393; expand E.
– Expanding E: f(A) = 280 + 366 = 646, f(F) = 239 + 178 = 417, f(G) = 220 + 193 = 413; expand G.
– Expanding G: f(H) = 317 + 98 = 415, f(E) = 300 + 253 = 553; expand H.
– Expanding H: f(I) = 418 + 0 = 418.

Optimality: Yes!
Cost(A — E — F — I) = 431
vs.
Cost(A — E — G — H — I) = 418

Best-First Search for Scheduling

Precedence

Solution 1

Solution 2

[Bratko, 2001]

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games
2.5 Efficient Searching (SMA*)

2.3 Problem Decomposition and AND/OR Graphs

[Bratko, 2001]

AND/OR Graphs and Solution Trees

Solution Tree T:
– The original problem, P, is the root node of T.
– If P is an OR node, then exactly one of its successors (in the AND/OR graph), together with its own solution tree, is in T.
– If P is an AND node, then all of its successors (in the AND/OR graph), together with their solution trees, are in T.

[Figure: an OR node and an AND node]

[Bratko, 2001]
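The cost of the cheapest solution tree can be sketched recursively from this definition: an OR node takes the minimum over its alternatives, an AND node the sum over all of its successors. The small AND/OR graph below is hypothetical:

```python
# Cheapest solution tree of an AND/OR graph. Internal nodes are
# ('AND' | 'OR', [(child, edge_cost), ...]); leaves carry their own cost.
def cheapest(graph, node):
    entry = graph[node]
    if isinstance(entry, (int, float)):          # primitive problem
        return entry
    kind, children = entry
    costs = [c + cheapest(graph, child) for child, c in children]
    return min(costs) if kind == 'OR' else sum(costs)

graph = {
    'P':  ('OR',  [('P1', 1), ('P2', 1)]),       # solve P via P1 or P2
    'P1': ('AND', [('a', 0), ('b', 0)]),         # P1 requires both a and b
    'P2': ('AND', [('b', 0), ('c', 0)]),
    'a': 4, 'b': 2, 'c': 1,
}
print(cheapest(graph, 'P'))  # 4: the cheapest tree uses P2 (1 + 2 + 1)
```

For cyclic AND/OR graphs this plain recursion would loop; Bratko's Prolog formulation adds bookkeeping to avoid that.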

Solution Trees: Example

[Bratko, 2001]

Route Problem: Cheapest Solution Tree

[Bratko, 2001]

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games

Two-Person Game AND/OR Graph

[Bratko, 2001]

Tic Tac Toe Example

[Russell & Norvig, 1995]

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games
– MiniMax Strategy
– Pruning

Minimax Strategy for Game Playing

[Figure: two-ply game tree. The three MIN nodes back up the minima of their leaves (3 12 8), (2 4 6), (14 5 2), giving 3, 2, 2; the MAX root selects 3.]

[Russell & Norvig, 1995]

Minimax Strategy

This algorithm is only good for games with a low branching factor.

In general, the complexity is O(b^d), where
b = average branching factor
d = number of plies

Chess on average has:
• 35 branches and
• usually at least 100 plies per game,
• so the game space is about 35^100.

Is this a realistic game space to search?
(1000 positions/sec., 150 seconds/move --> 4-ply look-ahead)
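The minimax backup on the small example tree above (leaf utilities 3, 12, 8 / 2, 4, 6 / 14, 5, 2) can be sketched as:

```python
# Minimax on an explicit tree: nested lists are internal nodes,
# numbers are leaf utilities.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):
        return node                      # leaf: return its utility
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))  # MIN values are 3, 2, 2; MAX picks 3
```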

Minimax Chess Tree

Minimax Search in Chess

How to Judge Quality

– Evaluation functions must agree with the utility function on terminal states (evaluation of a board configuration).
– They must not take too long to evaluate (a trade-off between accuracy and time cost).
– They should reflect the actual chance of winning: use the probability of winning as the value to return.
– One has to design a heuristic value for any given position of any object in the game.

Examples: Chess and Othello

– Weighted linear functions:

w1·f1 + w2·f2 + … + wn·fn

wi: weight of feature i
fi: value of feature i for a particular position

• Chess: material value, where each piece on the board is associated with a value (Pawn = 1, Knight = 3, etc.)
• Othello: values based on the number of discs of a certain color on the board and the number of discs that will be converted

Chess Tree

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games
– MiniMax Strategy
– Pruning

Pruning the Search Tree: Speeding Up Search

What is pruning?
– The process of eliminating a branch of the search tree from consideration without examining it.

Why prune?
– To avoid searching nodes that cannot influence the final decision.
– To speed up the search process.

Chess: 1000 positions/sec., 150 seconds/move --> 4-ply look-ahead

Alpha-beta pruning returns the same choice as minimax with cutoff decisions, but examines fewer nodes.

Pruning: Speedup Search

[Figure: the same two-ply tree with leaf values (3 12 8), (2 4 6), (14 5 2). After seeing the leaf with value 2, node A2 is worth at most 2 to MAX, so its remaining leaves need not be examined.]

[Russell & Norvig, 1995]

Alpha-Beta Pruning

Gets its name from the two variables that are passed along during the search and restrict the set of possible solutions:

Alpha: the value of the best choice (highest value) found so far along the path for MAX.

Beta: the value of the best choice (lowest value) found so far along the path for MIN.
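A sketch of alpha-beta on the same example tree; it returns the same root value as plain minimax while visiting fewer leaves (the optional `visited` list is only there to count them):

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf, visited=None):
    if isinstance(node, (int, float)):
        if visited is not None:
            visited.append(node)         # count evaluated leaves
        return node
    value = -math.inf if maximizing else math.inf
    for child in node:
        v = alphabeta(child, not maximizing, alpha, beta, visited)
        if maximizing:
            value = max(value, v)
            alpha = max(alpha, value)
        else:
            value = min(value, v)
            beta = min(beta, value)
        if alpha >= beta:                # cutoff: siblings cannot matter
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
seen = []
print(alphabeta(tree, True, visited=seen))  # 3, same as plain minimax
print(len(seen))  # 7 of the 9 leaves (4 and 6 are pruned)
```

In the second MIN subtree, the first leaf (2) already drives beta below alpha = 3, so the leaves 4 and 6 are never evaluated.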

Alpha-Beta Pruning Example

[Figure sequence: step-by-step alpha-beta trace on a small tree with alternating MAX and MIN layers. Every node starts with α = −∞ and β = +∞, and the bounds tighten as leaf utilities 8, 3, 2, and 14 are evaluated:
– The first leaf (utility 8) raises α to 8 at its MAX parent and then sets β = 8 at the MIN node above.
– The second leaf (utility 3) lowers that subtree's value, giving β = 3, and 3 is propagated upward as the best value found so far.
– In the next subtree, a leaf of utility 2 yields β = 2; since β = 2 ≤ α = 3, the subtree's remaining branches are pruned.
– A leaf of utility 14 gives α = 14 ≥ β = 3 in its subtree, causing another cutoff. The backed-up value at the root remains 3.]

Alpha-Beta Search in Chess

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games
– MiniMax Strategy
– Pruning
– Chance

Games That Include Chance

[Russell & Norvig, 1995]

Chance Nodes: Example

[Figure: expectiminimax tree with layers Max, Chance, Min, Max, Chance (top to bottom). Chance edges carry probabilities .6 and .4. The bottom leaf utilities 2 3 1 4 3 2 1 2 3 5 2 7 5 6 1 2 back up through the Max layer to 4 3 3 3 5 7 6 2; the chance layer yields the expectations 3.6, 3.0, 5.8, 4.4; Min selects 3.0 and 4.4; the root's value is .6 · 3.0 + .4 · 4.4 = 3.56.]
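The backed-up values can be reproduced with a short expectiminimax-style backup, starting from the max-layer values 4, 3, 3, 3, 5, 7, 6, 2 read off the figure (a sketch; the .6/.4 probabilities are taken from the chance edges):

```python
# Chance nodes back up the *expected* value of their children.
def chance(p, a, b):
    return p * a + (1 - p) * b

level = [4, 3, 3, 3, 5, 7, 6, 2]                  # max-layer values
ch = [chance(0.6, level[i], level[i + 1]) for i in range(0, 8, 2)]
mins = [min(ch[0], ch[1]), min(ch[2], ch[3])]     # MIN layer
root = chance(0.6, mins[0], mins[1])              # root chance node
print([round(v, 2) for v in ch], [round(m, 2) for m in mins], round(root, 2))
# [3.6, 3.0, 5.8, 4.4] [3.0, 4.4] 3.56
```

Unlike minimax, the result now depends on the magnitudes of the leaf utilities, not just their ordering, which is why evaluation-function scaling matters in games of chance.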

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games
2.5 Efficient Searching (SMA*)

SMA* — Simplified Memory-Bounded A*

– SMA* will utilize whatever memory is made available to it.
– SMA* avoids repeated states as far as its memory allows.
– SMA* is complete if the available memory is sufficient to store the shallowest solution path.
– SMA* is optimal if enough memory is available to store the shallowest optimal solution path.

SMA* in Action

Setup (shown on every slide):
– Objective: find the lowest-cost goal node.
– A is the root node; D, F, I, and J are goal nodes.
– Constraint: a maximum of 3 nodes fits in memory.
– Node labels show the current f-cost.

[Figure: search tree with f-costs: A (12); children B (15) and G (13); B's children C (25) and D (20); C's children E (35) and F (30); G's children H (18) and I (24); I's children J (19) and K (29)]

Trace:
• Expand A, adding B (15); expand A again, adding G (13). Memory is now full: update the f-cost of A to 13, drop the higher-f-cost leaf (B), and memorize B's value in A.
• Expand G, adding H (18). Memory is full, and H is not a goal node, so mark H as infinite.
• Drop H and add I (24); G memorizes H. Update the f-cost of G (to 24) and then of A (to 15).
• I is a goal node, but may not be the best solution, so B (memorized f = 15) is worth trying a second time: drop G and regenerate B; A memorizes G.
• Expand B, adding C (25). C is not a goal node and there is no memory left for its successors, so mark C as infinite.
• Drop C and add D (20); B memorizes C. D is a goal node with f = 20; terminate.

What about node J (f = 19)? It is a cheaper goal, but its path A-G-I-J contains four nodes and cannot be held within the 3-node memory limit, so SMA* cannot return it.

References

• Bratko, I. (2001). Prolog Programming for Artificial Intelligence. New York: Addison-Wesley.

• Kurzweil, R. (1990). The Age of Intelligent Machines. Cambridge, MA: MIT Press.

• Newborn, M. (1997). Kasparov versus Deep Blue. Berlin: Springer-Verlag.

• Nilsson, N. (1998). Artificial Intelligence — A New Synthesis. San Francisco, CA: Morgan Kaufmann.

• Russell, S., and Norvig, P. (1995). Artificial Intelligence — A Modern Approach. Upper Saddle River, NJ: Prentice Hall.