CS 387: GAME AI


CS 387: GAME AI BOARD GAMES

5/24/2016
Instructor: Santiago Ontañón (santi@cs.drexel.edu)
Class website: https://www.cs.drexel.edu/~santi/teaching/2016/CS387/intro.html

Reminders
• Check the BBVista course site regularly
• Also: https://www.cs.drexel.edu/~santi/teaching/2016/CS387/intro.html
• Thursday: Project 4 submission deadline

Outline
• Board Games
• Game Tree Search
• Portfolio Search
• Monte Carlo Search
• UCT

Game AI Architecture
[Figure: AI architecture layers: World Interface (perception), Strategy, Decision Making, Movement]

So far, we have seen:
• Perception
• Movement (steering behaviors): FPS, car driving
• Pathfinding: FPS, RTS, RPG, etc.
• Decision Making: FPS, RPG, RTS, etc.
• Tactics and Strategy: FPS, RTS
• PCG: many genres

Board Games
• Main characteristic: turn-based
  • The AI has a lot of time to decide the next move

Board Games
• Not just chess…

Board Games
• From an AI point of view:
  • Turn-based
  • Discrete actions
  • Complete information (mostly)
• These features make board games amenable to game tree search!

Outline
• Board Games
• Game Tree Search
• Portfolio Search
• Monte Carlo Search
• UCT

Game Tree
• Game trees capture the effects of successive action executions.
[Figure: from the current situation, each player 1 action leads to a new state with utility U(s)]
• Looking ahead one action: pick the action that leads to the state with maximum expected utility.
[Figure: the same tree grown one level deeper, branching first on player 1's actions and then on player 2's actions, with utilities U(s) at the leaves]
• Looking ahead two actions: pick the action that leads to the state with maximum expected utility after taking into account what the other players might do.
• In this example, we look ahead only one player 1 action and one player 2 action, but we could grow the tree arbitrarily deep.

Minimax Principle
[Figure: two-level game tree; player 1 (max) moves first, player 2 (min) replies; the leaf utilities are U(s) = -1, 0, -1, 0, 0, 0]
• Positive utility is good for player 1, and negative utility is good for player 2.
• Player 1 chooses actions that maximize U; player 2 chooses actions that minimize U.
• Looking only at the utility values, which move should player 1 choose?
• Backing up the minimum of player 2's replies gives player 1's three actions the values U(s) = -1, -1, and 0, so player 1 should choose the third action.

Minimax Algorithm

Minimax(state, player, MAX_DEPTH)
    IF MAX_DEPTH == 0 OR state is a terminal state RETURN (U(state), -)
    BestAction = null
    BestScore = null
    FOR Action in actions(player, state)
        (Score, Action2) = Minimax(result(Action, state), nextplayer(player), MAX_DEPTH - 1)
        IF BestScore == null ||
           (player == 1 && Score > BestScore) ||
           (player == 2 && Score < BestScore)
            BestScore = Score
            BestAction = Action
    ENDFOR
    RETURN (BestScore, BestAction)
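As a concrete illustration, below is a minimal runnable sketch of this pseudocode in Python. The helpers utility(), actions(), result(), and is_terminal() are hypothetical placeholders that a real game implementation would have to supply.

# Minimal minimax sketch in Python, following the pseudocode above.
# utility(), actions(), result(), and is_terminal() are hypothetical
# placeholders: a real game must supply them.

def minimax(state, player, max_depth):
    """Return (best_score, best_action) for `player` in `state`.

    Player 1 maximizes the utility; player 2 minimizes it.
    """
    if max_depth == 0 or is_terminal(state):
        return utility(state), None

    best_score, best_action = None, None
    for action in actions(player, state):
        score, _ = minimax(result(action, state),
                           next_player(player), max_depth - 1)
        if (best_score is None
                or (player == 1 and score > best_score)
                or (player == 2 and score < best_score)):
            best_score, best_action = score, action
    return best_score, best_action

def next_player(player):
    # Two-player game: players 1 and 2 alternate.
    return 2 if player == 1 else 1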

Minimax Algorithm
• Needs:
  • A utility function U
  • A way to determine which actions a player can execute in a given state
• MAX_DEPTH controls how deep the search tree will be:
  • The size of the tree is exponential in MAX_DEPTH
  • The branching factor is the number of moves that can be executed per state
• The higher MAX_DEPTH, the better the AI will play
• There are ways to increase speed: alpha-beta pruning
• Given branching factor B and maximum tree depth D:
  • What is the time complexity?
  • What is the memory complexity?

Successes of Minimax
• Deep Blue defeated Kasparov in chess (1997)
• Checkers was completely solved by Jonathan Schaeffer's team (2007):
  • If neither player makes a mistake, the game is a draw (like tic-tac-toe)
• Go:
  • Uses a variant of minimax, based on Monte Carlo Tree Search
  • In 2011, the program Zen19S reached 4 dan (professional humans are rated from 1 to 9 dan)
  • In 2016, AlphaGo defeated Lee Sedol (one of the best players in the world)

Interesting Uses of Minimax
• “bastet” (Bastard Tetris): http://blahg.res0l.net/2009/01/bastet-bastard-tetris/

Iterative Deepening
• As described before, minimax receives a MAX_DEPTH, and it is impossible to predict how much time it will take to execute
• In a game, minimax will receive a certain amount of time (e.g., 20 seconds) that it can use to decide the next move
• Solution: iterative deepening

Iterative Deepening
• Idea:
  • Open the tree at depth 1
  • If there is still time, open it at depth 2
  • If there is still time, open it at depth 3
  • Etc.
• If we end up searching up to depth, say, 5, how much time is wasted?
• Given the branching factor B, each subsequent iteration is on average B times larger than the previous one.
• For typical values of B (larger than 10), the extra cost of iterative deepening is negligible.
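A minimal sketch of this loop in Python, assuming the minimax() sketch from earlier; checking the clock only between iterations is a simplification (a real engine would also abort a search in progress):

import time

def iterative_deepening(state, player, time_budget):
    """Search deeper and deeper until the time budget runs out."""
    deadline = time.time() + time_budget
    best_action = None
    depth = 1
    while time.time() < deadline:
        # Re-search from scratch, one level deeper each iteration.
        _, best_action = minimax(state, player, depth)
        depth += 1
    return best_action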

Alpha-Beta Pruning
• Not all the nodes in the search tree are relevant for deciding the next move.
[Figure: a max node with three min children; the leaf values are (5, 2, 4), (1, 3, 4), and (2, 6, 1), so the min nodes evaluate to 2, 1, and 1]
• Consider the leaves that come after the “1” in the second branch. What would happen if those values were higher? What would happen if they were lower? NOTHING!
• Those two nodes are irrelevant: they do not have to be explored! This is because the second branch starts with a “1”, which is lower than the value 2 already guaranteed by the first branch, so the max player will never choose the second branch.

Minimax with Alpha-Beta Pruning

Initial call: alphabeta(state, MAX_DEPTH, α = -infinity, β = +infinity, player)

alphabeta(state, MAX_DEPTH, α, β, player)
    if MAX_DEPTH == 0 or state is a terminal state
        return U(state)
    if player == 1
        for action in actions(player, state)
            α := max(α, alphabeta(result(action, state), MAX_DEPTH - 1, α, β, 2))
            if β ≤ α break
        return α
    else
        for action in actions(player, state)
            β := min(β, alphabeta(result(action, state), MAX_DEPTH - 1, α, β, 1))
            if β ≤ α break
        return β
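The same sketch in Python, mirroring the pseudocode above; utility(), actions(), result(), and is_terminal() are the same hypothetical game helpers assumed in the earlier minimax sketch:

import math

def alphabeta(state, max_depth, alpha, beta, player):
    """Minimax value of `state`, pruning branches that cannot matter."""
    if max_depth == 0 or is_terminal(state):
        return utility(state)

    if player == 1:  # maximizing player
        for action in actions(player, state):
            alpha = max(alpha, alphabeta(result(action, state),
                                         max_depth - 1, alpha, beta, 2))
            if beta <= alpha:
                break  # player 2 would never let the game reach here
        return alpha
    else:            # minimizing player
        for action in actions(player, state):
            beta = min(beta, alphabeta(result(action, state),
                                       max_depth - 1, alpha, beta, 1))
            if beta <= alpha:
                break  # player 1 would never let the game reach here
        return beta

# Initial call:
# value = alphabeta(state, MAX_DEPTH, -math.inf, math.inf, 1)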


Alpha-Beta Pruning
• Does pruning occur independently of the order in which nodes are visited?
• No: notice that pruning depends on the order in which the children are explored.
[Figure: the same tree as before, with min node values 2, 1, 1 and leaves (5, 2, 4), (1, 3, 4), (2, 6, 1)]
• In the third branch, if we expand the “1” first, then the “2” and the “6” do not have to be explored.

Alpha-Beta Pruning
• How to decide a good order for children expansion?
• Idea: iterative deepening
  • Explore first the child that was selected as the best move in the previous iteration of iterative deepening
  • With this modification, iterative deepening is actually faster in practice than just opening the tree at a given depth!
• Other domain-specific heuristics exist for well-known games such as chess.
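A minimal sketch of this ordering trick, assuming the hypothetical actions() helper from earlier and that the previous iterative-deepening iteration remembered its best move as prev_best:

def ordered_actions(player, state, prev_best):
    """Try the previous iteration's best move first; keep the rest as-is."""
    acts = list(actions(player, state))
    if prev_best in acts:
        acts.remove(prev_best)
        acts.insert(0, prev_best)  # explore the likely-best child first
    return acts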

Outline
• Board Games
• Game Tree Search
• Portfolio Search
• Monte Carlo Search
• UCT

What is an Action?
• Consider a complex board game like Settlers or Scrabble: what is the set of actions a player can perform in her turn?
• Way too many actions to consider in minimax!!

Portfolio Search
• Consider the game of Monopoly
• The set of possible actions is too large (just imagine all the possible deals we can offer any player!)
• However, we can do the following: devise 3 or 4 strategies to play the game:
  A. Never make any deal or build any house; just roll the dice and buy streets.
  B. Never make any deal, but build one house on the most expensive street we can.
  C. Never make any deal, but build as many houses as we can, on the cheapest street we can.
  D. Do not build houses, but offer a deal to get the cheapest full set we could get by trading a single card with one player (offering her some predefined amount of money, a factor of the price of what we are getting).
• Certainly, these different strategies would do better in different situations.
• The key idea of portfolio search is to consider these strategies as the “actions”.

Minimax Portfolio Search
• At each level of the tree, use each of the predefined strategies to generate the next action, and only consider those actions.
[Figure: a tree node whose children are the actions proposed by strategy A, strategy B, and strategy C]
• This simple idea can make minimax search feasible in games whose action set is too large to search the whole tree.
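A minimal sketch in Python, reusing the helpers from the earlier minimax sketch; the only change is that the candidate actions at each node come from the portfolio instead of the full move set. Each strategy is assumed to be a hypothetical callable that proposes one action for a state:

def portfolio_actions(player, state, portfolio):
    """Candidate moves = one proposal per strategy (duplicates dropped)."""
    proposals = []
    for strategy in portfolio:
        action = strategy(player, state)  # each strategy proposes one action
        if action not in proposals:
            proposals.append(action)
    return proposals

def portfolio_minimax(state, player, max_depth, portfolio):
    """Minimax restricted to the actions proposed by the portfolio."""
    if max_depth == 0 or is_terminal(state):
        return utility(state), None

    best_score, best_action = None, None
    for action in portfolio_actions(player, state, portfolio):
        score, _ = portfolio_minimax(result(action, state),
                                     next_player(player),
                                     max_depth - 1, portfolio)
        if (best_score is None
                or (player == 1 and score > best_score)
                or (player == 2 and score < best_score)):
            best_score, best_action = score, action
    return best_score, best_action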

Simple Portfolio Search
• Forget about game trees; just do this: given a set of strategies S,
  • For each s1 in S:
    • For each s2 in S:
      • Simulate a game for D game cycles where player 1 uses s1 and player 2 uses s2
• Compute the average reward obtained by each strategy s1, and select the one that achieved the highest average.
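A minimal sketch of this procedure, assuming a hypothetical simulate(state, s1, s2, cycles) that plays s1 (player 1) against s2 (player 2) for the given number of game cycles and returns the reward obtained by player 1:

def simple_portfolio_search(state, strategies, cycles):
    """Pick the strategy with the best average reward across opponents."""
    best_strategy, best_avg = None, None
    for s1 in strategies:
        # Average s1's reward over every possible opponent strategy s2.
        rewards = [simulate(state, s1, s2, cycles) for s2 in strategies]
        avg = sum(rewards) / len(rewards)
        if best_avg is None or avg > best_avg:
            best_strategy, best_avg = s1, avg
    return best_strategy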

Portfolio Search
• Imagine this situation:
  • Branching factor B
  • Search up to a depth D
  • A set of S strategies (S << B)
• What is the time / memory complexity of each method?
  • Minimax: time B^D, memory D
  • Minimax portfolio search: time S^D, memory D
  • Simple portfolio search: time D * S^2, memory 2
• In terms of play strength: minimax > minimax portfolio search > simple portfolio search
• In terms of computational needs: minimax > minimax portfolio search > simple portfolio search
• Thus, if you can use minimax, that is the simplest thing to do. But if you cannot (due to CPU constraints), portfolio search is a good option to consider.
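To make the gap concrete, take some illustrative (assumed) values: B = 30, D = 6, S = 4. Then minimax visits on the order of 30^6 = 729,000,000 nodes, minimax portfolio search only 4^6 = 4,096 nodes, and simple portfolio search runs 4^2 = 16 simulations of 6 cycles each (96 cycles total).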

Recommended