Complete Search
General Game Playing, Lecture 3
Michael Genesereth, Spring 2012


Page 1: Complete Search

Complete Search

General Game Playing Lecture 3

Michael Genesereth Spring 2012

Page 2: Complete Search

Game Description

init(cell(1,1,b))
init(cell(1,2,b))
init(cell(1,3,b))
init(cell(2,1,b))
init(cell(2,2,b))
init(cell(2,3,b))
init(cell(3,1,b))
init(cell(3,2,b))
init(cell(3,3,b))
init(control(x))

legal(P,mark(X,Y)) :- true(cell(X,Y,b)) & true(control(P))

legal(x,noop) :- true(control(o))

legal(o,noop) :- true(control(x))

next(cell(M,N,P)) :- does(P,mark(M,N))

next(cell(M,N,Z)) :- does(P,mark(J,K)) & true(cell(M,N,Z)) & Z#b

next(cell(M,N,b)) :- does(P,mark(J,K)) & true(cell(M,N,b)) & (M#J | N#K)

next(control(x)) :- true(control(o))

next(control(o)) :- true(control(x))

terminal :- line(P)
terminal :- ~open

goal(x,100) :- line(x)
goal(x,50) :- draw
goal(x,0) :- line(o)

goal(o,100) :- line(o)
goal(o,50) :- draw
goal(o,0) :- line(x)

row(M,P) :- true(cell(M,1,P)) & true(cell(M,2,P)) & true(cell(M,3,P))

column(N,P) :- true(cell(1,N,P)) & true(cell(2,N,P)) & true(cell(3,N,P))

diagonal(P) :- true(cell(1,1,P)) & true(cell(2,2,P)) & true(cell(3,3,P))

diagonal(P) :- true(cell(1,3,P)) & true(cell(2,2,P)) & true(cell(3,1,P))

line(P) :- row(M,P)
line(P) :- column(N,P)
line(P) :- diagonal(P)

open :- true(cell(M,N,b))

draw :- ~line(x) & ~line(o)

Page 3: Complete Search

Game Manager

[Diagram: the Game Manager communicates with Players over TCP/IP and maintains Game Descriptions, Match Records, and Temporary State Data, with graphics for spectators.]

Page 4: Complete Search

Game Playing Protocol

Start - the manager sends a Start message to the players: start(id,role,game,startclock,playclock). Each player replies with ready.

Play - the manager sends a Play message to the players: play(id,[action,...,action]). Each player replies with an action.

Stop - the manager sends a Stop message to the players: stop(id,[action,...,action]). Each player replies with done.

Page 5: Complete Search


Your Mission

Your mission is to write definitions for the three basic event handlers (start, play, stop).

To help you get started, we provide subroutines for accessing the implicit game graph instead of the explicit description.

NB: The game graph is virtual - it is computed on the fly from the game description. Hence, our subroutines are not cheap. Best to cache results rather than call repeatedly.

Page 6: Complete Search

Basic Player Subroutines

findroles(game) → [role,...,role]
findbases(game) → {proposition,...,proposition}
findinputs(game) → {action,...,action}
findinitial(game) → state
findterminal(state,game) → boolean
findgoal(role,state,game) → number
findlegals(role,state,game) → {action,...,action}
findnext([action,...,action],state,game) → state
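As an illustration of how these subroutines fit together, here is a minimal sketch in the same JavaScript style as the handlers that follow. It lists each role's legal actions in the initial state; the function name showinitial and the use of console.log are choices made here for illustration, not part of the provided library.

// Enumerate each role's legal actions in the initial state of a game.
function showinitial (game)
 {var roles = findroles(game);                    // e.g. [x,o] for tic-tac-toe
  var state = findinitial(game);
  for (var i = 0; i < roles.length; i++)
      {var acts = findlegals(roles[i],state,game);
       console.log(roles[i] + ': ' + acts)};
  return state}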

Page 7: Complete Search


Legal Play

Page 8: Complete Search

Starting

var game;     // the game description
var player;   // the role this player has been assigned
var state;    // the current state of the match

function start (id,role,rules,startclock,playclock)
 {game = rules;
  player = role;
  state = null;
  return 'ready'}

Page 9: Complete Search

Playing

function play (id,move)
 {if (move.length == 0)
     {state = findinitial(game)}                 // first play message: compute the initial state
  else {state = findnext(move,state,game)};      // otherwise advance by the previous joint move
  var actions = findlegals(player,state,game);
  return actions[0]}                             // legal player: return the first legal action

Playing

Page 10: Complete Search

Stopping

function stop (id,move)
 {return 'done'}

Page 11: Complete Search


Single Player Games

Page 12: Complete Search

Game Graph

[Diagram: a game graph with states s1 through s11 connected by edges labeled with actions a, b, c, and d.]

Page 13: Complete Search

Depth First Search

[Diagram: a tree with root a; a's children are b, c, d; b's children are e, f; c's children are g, h; d's children are i, j.]

Visit order: a b e f c g h d i j

Advantage: Small intermediate storage
Disadvantage: Susceptible to garden paths
Disadvantage: Susceptible to infinite loops
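For a single-player game, depth-first search can be written directly in terms of the player subroutines introduced earlier. The following is only a sketch under that assumption; the function name depthfirst and the representation of a plan as a list of actions are illustrative choices, and, as noted above, there is no duplicate detection, so a cyclic game graph can send it into an infinite loop.

// Depth-first search for a single-player game.
// Returns a list of actions reaching a state with goal value 100, or null if none is found.
function depthfirst (role,state,game)
 {if (findterminal(state,game))
     {return (findgoal(role,state,game) == 100) ? [] : null};
  var acts = findlegals(role,state,game);
  for (var i = 0; i < acts.length; i++)
      {var plan = depthfirst(role,findnext([acts[i]],state,game),game);
       if (plan != null) {return [acts[i]].concat(plan)}};   // prepend this action to the rest of the plan
  return null}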

Page 14: Complete Search

Breadth First Search

[Diagram: the same tree; root a with children b, c, d; b has children e, f; c has children g, h; d has children i, j.]

Visit order: a b c d e f g h i j

Advantage: Finds shortest path
Disadvantage: Consumes large amount of space
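A breadth-first counterpart keeps a queue of frontier states paired with the plans that reach them, so shallower states are expanded first and the first winning plan found is a shortest one. Again this is only a sketch using the same subroutines and illustrative names.

// Breadth-first search for a single-player game.
// The queue holds [state, plan] pairs; plans to shallower states are dequeued first.
function breadthfirst (role,game)
 {var queue = [[findinitial(game),[]]];
  while (queue.length > 0)
        {var item = queue.shift();
         var state = item[0];
         var plan = item[1];
         if (findterminal(state,game))
            {if (findgoal(role,state,game) == 100) {return plan};
             continue};                                       // terminal but not a win: do not expand
         var acts = findlegals(role,state,game);
         for (var i = 0; i < acts.length; i++)
             {queue.push([findnext([acts[i]],state,game), plan.concat([acts[i]])])}};
  return null}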

Page 15: Complete Search

Time Comparison

Branching factor 2, total depth d, solution at depth k.

Time      Best       Worst
Depth     k          2^d − 2^(d−k)
Breadth   2^(k−1)    2^k − 1

Page 16: Complete Search

Time Comparison

Analysis for branching factor b, total depth d, solution at depth k.

Time      Best                        Worst
Depth     k                           (b^d − b^(d−k)) / (b−1)
Breadth   (b^(k−1) − 1)/(b−1) + 1     (b^k − 1) / (b−1)
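For example, setting b = 2 recovers the binary table on the previous slide: the worst-case depth-first time is (2^d − 2^(d−k))/(2−1) = 2^d − 2^(d−k), the best-case breadth-first time is (2^(k−1) − 1)/(2−1) + 1 = 2^(k−1), and the worst-case breadth-first time is (2^k − 1)/(2−1) = 2^k − 1.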

Page 17: Complete Search

Space Comparison

Total depth d and solution depth k.

Space     Binary     General
Depth     d          (b−1)×(d−1) + 1
Breadth   2^(k−1)    b^(k−1)

Page 18: Complete Search

Iterative Deepening

Run depth-limited search repeatedly, starting with a small initial depth and incrementing the depth limit on each iteration, until the search succeeds or runs out of alternatives.
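A sketch of the idea in the same style as the searches above: a depth-limited depth-first search rerun with increasing limits. The helper names and the explicit maxdepth bound are illustrative choices, not part of the provided library.

// Depth-limited depth-first search: returns a plan of length at most limit, or null.
function depthlimited (role,state,game,limit)
 {if (findterminal(state,game))
     {return (findgoal(role,state,game) == 100) ? [] : null};
  if (limit == 0) {return null};                              // cut the search off here
  var acts = findlegals(role,state,game);
  for (var i = 0; i < acts.length; i++)
      {var plan = depthlimited(role,findnext([acts[i]],state,game),game,limit-1);
       if (plan != null) {return [acts[i]].concat(plan)}};
  return null}

// Iterative deepening: rerun the depth-limited search with increasing limits.
function iterativedeepening (role,game,maxdepth)
 {for (var limit = 0; limit <= maxdepth; limit++)
      {var plan = depthlimited(role,findinitial(game),game,limit);
       if (plan != null) {return plan}};
  return null}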

Page 19: Complete Search

Example

[Diagram: the same tree as before, root a with children b, c, d; b has children e, f; c has children g, h; d has children i, j.]

Depth limit 1: a
Depth limit 2: a b c d
Depth limit 3: a b e f c g h d i j

Advantage: Small intermediate storage
Advantage: Finds shortest path
Advantage: Not susceptible to garden paths
Advantage: Not susceptible to infinite loops

Page 20: Complete Search

Time Comparison

Depth   Iterative Deepening   Depth-First
1       1                     1
2       4                     3
3       11                    7
4       26                    15
5       57                    31
n       2^(n+1) − n − 2       2^n − 1
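The iterative deepening column follows from summing the costs of the successive depth-limited searches: a binary tree searched to depth i contains 2^i − 1 nodes, so running the search with limits 1 through n visits (2^1 − 1) + (2^2 − 1) + ... + (2^n − 1) = 2^(n+1) − n − 2 nodes in total.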

Page 21: Complete Search


General Results

Theorem [Korf]: The cost of iterative deepening search is b/(b-1) times the cost of depth-first search (where b is the branching factor).

Theorem: The space cost of iterative deepening is no greater than the space cost for depth-first search.

Page 22: Complete Search

Game Graph I

[Diagram: a game graph over states a through n.]

Page 23: Complete Search

Game Graph II

[Diagram: a second game graph over states a through n, in which some states (b, c) appear in more than one place.]

Page 24: Complete Search

Bidirectional Search

[Diagram: searching from state a at one end of the graph and state m at the other until the frontiers meet.]

Page 25: Complete Search


Minimax

Page 26: Complete Search


Basic Idea

Intuition - Select a move that is guaranteed to produce the highest possible return no matter what the opponents do.

In the case of a one-move game, a player should choose an action such that the value of the resulting state, for any opponent action, is greater than or equal to the value of the state resulting from any other action and any opponent action.

In the case of a multi-move game, minimax goes to the end of the game and “backs up” values.

Page 27: Complete Search

Bipartite Game Graph/Tree

[Diagram: a bipartite game tree in which max nodes (the player's choices) alternate with min nodes (the opponent's choices).]

Page 28: Complete Search

State Value

The value of a max node for player p is either the utility of that state if it is terminal or the maximum of all values for the min nodes that result from its legal actions.

value(p,x) = goal(p,x)                                         if terminal(x)
value(p,x) = max({value(p,minnode(p,a,x)) | legal(p,a,x)})     otherwise

The value of a min node is the minimum value that results from any legal opponent action.

value(p,n) = min({value(p,maxnode(P-p,b,n)) | legal(P-p,b,n)})

Page 29: Complete Search

Bipartite Game Tree

[Diagram: a bipartite game tree with terminal values 1 2 3 4, 5 6 7 8, 8 7 6 5, and 4 3 2 1 at the leaves; each min node takes the minimum of its children, each max node the maximum, and the backed-up values propagate to the root.]

Page 30: Complete Search

Best Action

A player should choose an action a that leads to a min node with maximal value, i.e. the player should prefer action a to action a' if and only if the following holds.

value(p,minnode(p,a,x)) > value(p,minnode(p,a',x))

Page 31: Complete Search

Best Action

function bestaction (role,state)
 {var acts = legals(role,state);
  var best = acts[0];                            // default to the first legal action
  var score = 0;
  for (var i = 0; i < acts.length; i++)
      {var result = minscore(role,acts[i],state);
       if (result > score) {score = result; best = acts[i]}};
  return best}

Page 32: Complete Search

Scores

function maxscore (role,state)
 {if (terminalp(state)) {return goal(role,state)};
  var acts = legals(role,state);
  var score = 0;
  for (var i = 0; i < acts.length; i++)
      {var result = minscore(role,acts[i],state);
       if (result > score) {score = result}};
  return score}

function minscore (role,action,state)
 {var opponentmoves = moves(role,action,state);   // the joint moves in which role performs action
  var score = 100;
  for (var i = 0; i < opponentmoves.length; i++)
      {var result = maxscore(role,next(opponentmoves[i],state));
       if (result < score) {score = result}};
  return score}

Page 33: Complete Search

Bounded Minimax

If the minvalue for an action is determined to be 100, then there is no need to consider other actions.

In computing the minvalue for a state, if the value to the first player of an opponent's action is 0, then there is no need to consider other possibilities.
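A sketch of how the maxscore and minscore routines above might incorporate these cutoffs; the constants 100 and 0 are the best and worst goal values in the game description. This is an illustration of the idea, not the course's reference code.

function maxscore (role,state)
 {if (terminalp(state)) {return goal(role,state)};
  var acts = legals(role,state);
  var score = 0;
  for (var i = 0; i < acts.length; i++)
      {var result = minscore(role,acts[i],state);
       if (result == 100) {return 100};                       // cannot do better than 100: stop
       if (result > score) {score = result}};
  return score}

function minscore (role,action,state)
 {var opponentmoves = moves(role,action,state);
  var score = 100;
  for (var i = 0; i < opponentmoves.length; i++)
      {var result = maxscore(role,next(opponentmoves[i],state));
       if (result == 0) {return 0};                           // cannot do worse than 0: stop
       if (result < score) {score = result}};
  return score}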

Page 34: Complete Search

Bounded Minimax Example

[Diagram: a game tree with terminal values 0, 50, and 100; once an action's minvalue is known to be 100, or an opponent reply is known to be worth 0, the remaining branches are not examined.]

Page 35: Complete Search


Alpha Beta Pruning

Page 36: Complete Search


Alpha Beta Pruning

Alpha-Beta Search - Same as Bounded Minimax except that bounds are computed dynamically and passed along as parameters. See Russell and Norvig for details.
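The following sketch shows the shape of that variant in the style of the routines above, with alpha and beta passed along as the dynamically computed bounds. It follows the standard algorithm rather than any particular reference implementation.

// alpha: the best value the player can already guarantee elsewhere.
// beta: the best (lowest) value the opponent can already guarantee elsewhere.
function maxscore (role,state,alpha,beta)
 {if (terminalp(state)) {return goal(role,state)};
  var acts = legals(role,state);
  for (var i = 0; i < acts.length; i++)
      {var result = minscore(role,acts[i],state,alpha,beta);
       alpha = Math.max(alpha,result);
       if (alpha >= beta) {return beta}};                     // cutoff: the opponent will avoid this node
  return alpha}

function minscore (role,action,state,alpha,beta)
 {var opponentmoves = moves(role,action,state);
  for (var i = 0; i < opponentmoves.length; i++)
      {var result = maxscore(role,next(opponentmoves[i],state),alpha,beta);
       beta = Math.min(beta,result);
       if (beta <= alpha) {return alpha}};                    // cutoff: the player will avoid this node
  return beta}

// Initial call: maxscore(role,state,0,100), since goal values range from 0 to 100.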

Page 37: Complete Search

Alpha Beta Example

[Diagram: an alpha-beta search tree with numeric terminal values; subtrees whose values cannot affect the final decision are pruned.]

Page 38: Complete Search

Benefits

In the best case, alpha-beta pruning reduces the search space to the square root of the unpruned search space, thereby dramatically increasing the depth searchable within a given time bound.

Page 39: Complete Search


Caching

Page 40: Complete Search


Virtual Game Graph

The game graph is virtual - it is computed on the fly from the game description. Hence, our subroutines are not cheap.

May be good to cache results (build an explicit game graph) rather than call repeatedly.
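One simple form of caching is to memoize the scoring routine on the state, in effect building a transposition table. A minimal sketch, assuming a hypothetical statekey helper that serializes a state to a string:

var cache = {};                                   // maps a role/state key to a previously computed score

function cachedmaxscore (role,state)
 {var key = role + '/' + statekey(state);         // statekey is assumed here, e.g. a string serialization
  if (cache[key] != undefined) {return cache[key]};
  var score = maxscore(role,state);               // fall back to the ordinary computation
  cache[key] = score;
  return score}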

Page 41: Complete Search


Benefit - Avoiding Duplicate Work

Consider a game in which the actions allow one to explore an 8x8 board by making moves left, right, up, and down.

Game tree has branching factor 4 and depth 16. Fringe of tree has 4^16 = 4,294,967,296 nodes.

There are only 64 distinct states. The space can be searched much faster if duplicate states are detected.

Page 42: Complete Search


Disadvantage - Extra Work

Finding duplicate states takes time proportional to the log of the number of states and takes space linear in the number of states.

Work wasted when duplicate states rare. (We shall see two common examples of this next week.)

Memory might be exhausted. Need to throw away old states.
