
Computer Science CPSC 502

Lecture 5

SLS: wrap-up

Planning

(Ch 8)


Lecture Overview

• Finish Stochastic Local Search (SLS)
• Planning

• STRIPS

• Heuristics

• STRIPS -> CSP

Course Overview

[Course-map diagram: environment (deterministic vs. stochastic), static vs. sequential problems, problem types (query, planning), and the corresponding representations and reasoning techniques: constraint satisfaction (variables + constraints; arc consistency, search), logics, STRIPS (search), belief nets and decision nets (variable elimination), Markov processes (value iteration). This lecture: planning with STRIPS.]

Focus on Planning

A planning problem consists of:
• a description of the states of the world
• a description of the available actions: when each action can be applied and what its effects are
• a goal

Planning: build a sequence of actions that, if executed, takes the agent from the current state to a state that achieves the goal

But, haven’t we seen this before?

Yes, in search, but we’ll look at a new R&R suitable for planning

Standard Search vs. Specific R&R systems

• Constraint Satisfaction (Problems):
  • State: assignments of values to a subset of the variables
  • Successor function: assign values to a "free" variable
  • Goal test: all variables assigned a value and all constraints satisfied?
  • Solution: possible world that satisfies the constraints
  • Heuristic function: none (all solutions are at the same distance from the start)
• Planning:
  • State? Successor function? Goal test? Solution? Heuristic function?
• Inference:
  • State? Successor function? Goal test? Solution? Heuristic function?

CSP problems had some specific properties

• States are represented in terms of features (variables with a possible range of values)

• Goal: no longer a black box; it is expressed in terms of constraints to be satisfied

• But actions are limited to assignments of values to variables

• No notion of path to a solution: only final assignment matters

Key Idea of Planning

• "Open up" the representation of states, goals and actions
  – both states and goals as sets of features
  – actions as preconditions and effects defined on state features
• The agent can then reason more deliberately about what actions to consider to achieve its goals.

• This representation lends itself to solving planning problems either
  • as pure search problems
  • as CSP problems
• We will look at one technique for each approach
  • this will only scratch the surface of planning techniques
  • but it will give you an idea of the general approaches in this important area of AI

Planning Techniques and Applications from:

• Ghallab, Nau, and Traverso. Automated Planning: Theory and Practice. Morgan Kaufmann, May 2004. ISBN 1-55860-856-7
• Web site: http://www.laas.fr/planning (includes applications)

Delivery Robot Example (textbook)

• Consider a delivery robot named Rob, who must navigate an environment with four locations (a coffee shop, Sam's office, a mail room and a laboratory) and can deliver coffee and mail to Sam, in his office
• For another example see Practice Exercise 8c

Delivery Robot Example: features

• RLoc - Rob's location
  • domain: {coffee shop, Sam's office, mail room, laboratory}, for short {cs, off, mr, lab}
• RHC - Rob has coffee
  • domain: {true, false}. Alternative notation: rhc indicates that Rob has coffee, ¬rhc that Rob doesn't have coffee
• SWC - Sam wants coffee {true, false}
• MW - Mail is waiting {true, false}
• RHM - Rob has mail {true, false}

• An example state is ⟨cs, ¬rhc, swc, mw, ¬rhm⟩ (Rob is at the coffee shop without coffee or mail, Sam wants coffee, and mail is waiting)
• How many states are there? 4 × 2 × 2 × 2 × 2 = 64

Delivery Robot Example: Actions

The robot's actions are:
• Move - Rob's move action: move clockwise (mc), move anti-clockwise (mac)
• PUC - Rob picks up coffee: must be at the coffee shop and not have coffee
• DelC - Rob delivers coffee: must be at the office, and must have coffee
• PUM - Rob picks up mail: must be in the mail room, and mail must be waiting
• DelM - Rob delivers mail: must be at the office and have mail

These "must" conditions are the preconditions for applying each action.

STRIPS representation (STanford Research Institute Problem Solver)

STRIPS - the planner in Shakey, the first AI robot: http://en.wikipedia.org/wiki/Shakey_the_robot

In STRIPS, an action has two parts:

1. Preconditions: a set of assignments to variables that must be satisfied in order for the action to be legal

2. Effects: a set of assignments to variables that are caused by the action

STRIPS actions: Example

STRIPS representation of the action pick up coffee, PUC:

• preconditions: Loc = cs and RHC = F
• effects: RHC = T

STRIPS representation of the action deliver coffee, DelC:

• preconditions: Loc = off and RHC = T
• effects: RHC = F and SWC = F

Note in this domain Sam doesn't have to want coffee for Rob to deliver it; one way or another, Sam doesn't want coffee after delivery.
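To make the representation concrete, here is a minimal sketch (not from the course software) of these two actions written as plain data, assuming a state is a Python dict from feature names to values; the helper names is_applicable and apply_action are illustrative. apply_action also anticipates the STRIPS assumption discussed below: effects overwrite only the features they mention.

```python
# A minimal sketch of the STRIPS representation, assuming states are
# dicts from feature names to values. Names are illustrative only.

PUC = {  # pick up coffee
    "preconditions": {"RLoc": "cs", "RHC": False},
    "effects": {"RHC": True},
}

DelC = {  # deliver coffee
    "preconditions": {"RLoc": "off", "RHC": True},
    "effects": {"RHC": False, "SWC": False},
}

def is_applicable(state, action):
    """An action is legal iff every precondition holds in the state."""
    return all(state[var] == val
               for var, val in action["preconditions"].items())

def apply_action(state, action):
    """Copy the state and overwrite only the features mentioned in the
    effects; everything else stays unchanged (STRIPS assumption)."""
    new_state = dict(state)
    new_state.update(action["effects"])
    return new_state

s0 = {"RLoc": "cs", "RHC": False, "SWC": True, "MW": True, "RHM": False}
if is_applicable(s0, PUC):
    s1 = apply_action(s0, PUC)   # RHC becomes True, all else unchanged
```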

STRIPS actions: MC and MAC

STRIPS representation of the action MoveClockwise? Since its effect depends on where Rob currently is, it is split into one action per starting location:

• mc-cs: preconditions Loc = cs; effects Loc = off
• mc-off, mc-lab, mc-mr: defined analogously, each moving Rob one location clockwise
(MoveAntiClockwise is split the same way into mac-cs, mac-off, mac-lab, mac-mr)

The STRIPS Representation

• For reference: the book also discusses a feature-centric representation (not required for this course)
  • for every feature, where does its value come from?
  • causal rule: expresses ways in which a feature's value can be changed by taking an action
  • frame rule: requires that a feature's value is unchanged if no action changes it
• STRIPS is an action-centric representation:
  • for every action, what does it do?
  • this leaves us with no way to state frame rules
• The STRIPS assumption: all features not explicitly changed by an action stay unchanged



STRIPS Actions (cont')

The STRIPS assumption: all features not explicitly changed by an action stay unchanged.

• So if the feature V has value vi in state Si, reached by performing action a, what can we conclude about a and/or the state of the world Si-1 immediately preceding the execution of a?

[Diagram: state Si-1, followed by action a, leading to state Si in which V = vi]

1. V = vi was TRUE in Si-1, OR
2. one of the effects of a is to set V = vi, OR
3. both

Forward planning

• Idea: search in the state-space graph
  • the nodes represent the states
  • the arcs correspond to the actions: the arcs from a state s represent all of the actions that are possible in state s
• A plan is a path from the node representing the initial state to a node representing a state that satisfies the goal

What actions a are possible in a state s? Those whose preconditions are satisfied in s.
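As a concrete illustration, here is a hedged sketch of forward planning as plain breadth-first search over states, using the same dict-based encoding as in the earlier sketch. The action set is trimmed (only the cs-to-off move is included) so the example stays short; all names are illustrative.

```python
from collections import deque

# Illustrative action set for the delivery-robot domain (only the
# cs -> off move is included, to keep the sketch short).
ACTIONS = {
    "puc":   {"pre": {"RLoc": "cs", "RHC": False}, "eff": {"RHC": True}},
    "mc_cs": {"pre": {"RLoc": "cs"},               "eff": {"RLoc": "off"}},
    "delc":  {"pre": {"RLoc": "off", "RHC": True}, "eff": {"RHC": False, "SWC": False}},
}

def forward_plan(start, goal):
    """BFS in the state-space graph: nodes are states, arcs are the
    actions applicable in each state. Returns a list of action names."""
    frontier = deque([(start, [])])
    seen = {tuple(sorted(start.items()))}
    while frontier:
        state, plan = frontier.popleft()
        if all(state[v] == val for v, val in goal.items()):
            return plan
        for name, act in ACTIONS.items():
            if all(state[v] == val for v, val in act["pre"].items()):
                nxt = dict(state)
                nxt.update(act["eff"])           # STRIPS assumption
                key = tuple(sorted(nxt.items()))
                if key not in seen:
                    seen.add(key)
                    frontier.append((nxt, plan + [name]))
    return None   # no plan exists

start = {"RLoc": "cs", "RHC": False, "SWC": True}
print(forward_plan(start, {"SWC": False}))   # -> ['puc', 'mc_cs', 'delc']
```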

Example state-space graph: first level

Goal: ¬swc (Sam no longer wants coffee)


Example state-space graph

What is a solution to this planning problem? A sequence of actions that gets us from the start state to a state that satisfies the goal (¬swc).

Solution: (puc, mc, dc), i.e., pick up coffee, move clockwise to the office, deliver coffee.

Standard Search vs. Specific R&R systems

Constraint Satisfaction (Problems):
• State: assignments of values to a subset of the variables
• Successor function: assign values to a "free" variable
• Goal test: set of constraints
• Solution: possible world that satisfies the constraints
• Heuristic function: none (all solutions are at the same distance from the start)

Planning:
• State: full assignment of values to features
• Successor function: states reachable by applying valid actions
• Goal test: partial assignment of values to features
• Solution: a sequence of actions
• Heuristic function: discussed next

Inference:
• State
• Successor function
• Goal test
• Solution
• Heuristic function

Forward Planning

• Any of the search algorithms we have seen can be used in Forward Planning
• Problem? Complexity is determined by the branching factor, e.g. the number of actions applicable in each state, which can be very large
• Solution? A good heuristic function


Heuristics for Forward Planning
(Not in the textbook, but you can see details in Russell & Norvig, 10.3.2)

• Heuristic function: estimate of the distance from a state to the goal
• In planning, this is an estimate of the number of actions needed to get from the current state to the goal
• Finding a good heuristic is what makes forward planning feasible in practice
• A factored representation of states and actions allows for the definition of domain-independent heuristics
• We will look at one example of such a domain-independent heuristic that has proven to be quite successful in practice

Recall the general method for creating admissible heuristics: solve a relaxed (simplified) version of the problem and use the cost of its optimal solution as the estimate.

One example: ignore delete lists

• Assumptions for simplicity:
  • all features are binary: T / F
  • goals and preconditions can only be assignments to T
• We remove all action effects that make a variable F
  • e.g., an action a with effects (B = F, C = T) keeps only C = T in the relaxed problem
• If we find the path from the initial state to the goal using this relaxed version of the actions, the length of that (optimal) solution is an underestimate of the actual solution length
• Why? The relaxed actions achieve at least as much as the original ones and never undo anything, so any real plan is also a relaxed plan; the optimal relaxed plan can therefore only be shorter or equal.
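A minimal sketch of this relaxation, assuming boolean features encoded as a set of facts that are currently true and actions given as (preconditions, add-list) pairs. It returns the number of relaxed "layers" needed before every goal fact is reachable, which is one simple admissible estimate in the spirit of this relaxation (FF itself extracts an explicit relaxed plan instead). All fact and action names are illustrative.

```python
# Ignore-delete-lists sketch: effects that would set a feature to False
# are simply dropped, so we only ever accumulate facts.

def ignore_delete_heuristic(true_facts, goal_facts, actions):
    """actions: list of (preconditions, add_list), both sets of facts."""
    reached = set(true_facts)
    layers = 0
    while not goal_facts <= reached:
        new = set(reached)
        for pre, add in actions:
            if pre <= reached:          # applicable in the relaxed state
                new |= add              # only positive effects are kept
        if new == reached:              # goal unreachable even when relaxed
            return float("inf")
        reached = new
        layers += 1
    return layers

# Delivery-robot facts written as strings (hypothetical encoding).
acts = [
    ({"at_cs", "no_coffee"}, {"rhc"}),          # puc (delete list dropped)
    ({"at_cs"}, {"at_off"}),                    # mc from cs to off
    ({"at_off", "rhc"}, {"swc_satisfied"}),     # delc
]
print(ignore_delete_heuristic({"at_cs", "no_coffee"},
                              {"swc_satisfied"}, acts))   # -> 2 (real plan: 3)
```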


Heuristics for Forward Planning: ignore delete-list

• The name of this heuristic derives from the complete STRIPS representation
  • action effects are divided into those that add elements to the new state (add list) and those that remove elements (delete list)
• But how do we compute the actual heuristic values? Solve the relaxed planning problem without delete lists!
• Often fast enough to be worthwhile
  • planning is PSPACE-hard (that's really hard; it includes NP-hard)
  • without delete lists: often very fast

Example Planner: FF (Fast Forward)

• Jörg Hoffmann: Where 'Ignoring Delete Lists' Works: Local Search Topology in Planning Benchmarks. J. Artif. Intell. Res. (JAIR) 24: 685-758 (2005)
• Winner of the 2001 AIPS Planning Competition
• Estimates the heuristic by solving the relaxed planning problem with a planning graph method (next class)
• Uses best-first search with this heuristic to find a solution
• When it hits a plateau or local minimum, uses IDS to get away (i.e., to reach a state that is better)

Final Comment

• You should view Forward Planning as one of the basic planning techniques (we'll see another one next week)
• By itself it cannot go far, but it can work very well in combination with other techniques, for specific domains
• See, for instance, the descriptions of competing planners in the presentations of results for the 2002 and 2008 Planning Competitions

Solving planning problems

• STRIPS lends itself to solving planning problems either
  • as pure search problems
  • as CSP problems
• We will look at one technique for each approach

Today (Sept 22)

• Finish Stochastic Local Search (SLS)
• Planning

• STRIPS

• Heuristics

• STRIPS -> CSP

Planning as a CSP

• We simply reformulate a STRIPS model as a set of variables and constraints

• What would you choose as
  • variables?
  • constraints?

Planning as a CSP: General Idea

• Action preconditions and effects are essentially constraints between
  • the action,
  • the states in which it can be applied,
  • the states that it can generate
• Thus, we can make both states (features) and actions into the variables of our CSP formulation
• However, constraints based on action preconditions and effects relate states at a given time t, the corresponding valid actions, and the resulting states at t + 1
  • so we need as many copies of the state and action variables as there are planning steps


Planning as a CSP: Variables

• We need to 'unroll the plan' for a fixed number of steps: this is called the horizon k
• To do this with a horizon of k:
  • construct a CSP variable for each STRIPS state variable at each time step from 0 to k
  • construct a boolean CSP variable for each STRIPS action at each time step from 0 to k - 1
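A sketch of this unrolling for the delivery-robot domain, assuming the feature and action names used earlier; the "RLoc_2"-style variable names are just one possible encoding, not the course applet's.

```python
# Unroll the STRIPS model into CSP variables for a fixed horizon k.

STATE_VARS = {
    "RLoc": ["cs", "off", "mr", "lab"],
    "RHC": [True, False],
    "SWC": [True, False],
    "MW":  [True, False],
    "RHM": [True, False],
}
ACTION_VARS = ["PUC", "DelC", "PUM", "DelM", "MC", "MAC"]

def csp_variables(k):
    """One CSP variable per state feature at steps 0..k and one boolean
    variable per action at steps 0..k-1. Returns {name: domain}."""
    variables = {}
    for t in range(k + 1):
        for var, dom in STATE_VARS.items():
            variables[f"{var}_{t}"] = list(dom)
    for t in range(k):
        for act in ACTION_VARS:
            variables[f"{act}_{t}"] = [True, False]
    return variables

vars_h2 = csp_variables(2)
print(len(vars_h2))   # 5 state vars * 3 steps + 6 action vars * 2 steps = 27
```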

Initial and Goal Constraints

• initial state constraints: unary constraints on the values of the state variables at time 0

• goal constraints: unary constraints on the values of the state variables at time k
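Continuing the sketch above, the initial-state and goal constraints become unary constraints on the time-0 and time-k copies of the state variables; the concrete start state and goal below are illustrative.

```python
# Sketch: unary constraints on the time-0 and time-k state variables,
# using the "Var_t" naming from the previous sketch.
initial_state = {"RLoc": "cs", "RHC": False, "SWC": True, "MW": True, "RHM": False}
goal = {"SWC": False}   # Sam no longer wants coffee

def unary_constraints(k):
    constraints = {f"{var}_0": val for var, val in initial_state.items()}   # time 0
    constraints.update({f"{var}_{k}": val for var, val in goal.items()})    # time k
    return constraints

print(unary_constraints(2))   # pins State0 completely and SWC_2 to False
```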


CSP Planning: Precondition Constraints

Precondition constraints:
• between state variables at time t and action variables at time t
• specify when actions may be taken
• e.g., the robot can only pick up coffee when RLoc = cs (coffee shop) and RHC = false (doesn't have coffee already)

Truth table for this constraint: list the allowed combinations of values

RLoc0  RHC0  PUC0
cs     T     F
cs     F     F
cs     F     T
mr     *     F
lab    *     F
off    *     F

Note: we need to allow for the option of *not* taking an action even when it is valid (hence the row cs, F, F).
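Rather than listing the table by hand, the allowed tuples can be generated programmatically; a small sketch, assuming the same feature values as above (the function name is illustrative).

```python
from itertools import product

# Allowed (RLoc_t, RHC_t, PUC_t) tuples for the pick-up-coffee
# precondition constraint: PUC may be True only when RLoc = cs and
# RHC = False, and may always be False (not acting is always allowed).

def puc_precondition_tuples():
    allowed = []
    for loc, rhc, puc in product(["cs", "off", "mr", "lab"],
                                 [True, False], [True, False]):
        if puc and not (loc == "cs" and rhc is False):
            continue            # action taken but precondition violated
        allowed.append((loc, rhc, puc))
    return allowed

for row in puc_precondition_tuples():
    print(row)
# ('cs', False, True) is allowed; ('cs', False, False) is also allowed,
# because the robot may choose not to act even when the action is legal.
```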


CSP Planning: Effect Constraints

Effect constraints:
• between action variables at time t and state variables at time t+1
• specify the effects of an action
• also depend on the state variables at time t (frame rule!)

E.g., let's consider RHC at time t and t+1:

RHCt  DelCt  PUCt  RHCt+1
T     T      T     T
T     T      F     F
T     F      T     T
T     F      F     T
F     T      T     T
F     T      F     F
F     F      T     T
F     F      F     F

For the rows where the action combination cannot actually occur (e.g., both DelC and PUC at once, or an action whose precondition is violated), it does not quite matter what we put, since other constraints enforce that these configurations won't happen.
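Read as a function, the allowed value of RHC at t+1 follows from the previous value and the two actions; a small sketch, with the don't-care rows filled in the same way as the table above.

```python
# Effect + frame constraint for RHC, as a function of the values at
# time t. Impossible rows (both actions, or a violated precondition)
# are don't-cares ruled out by the precondition/mutex constraints.

def rhc_next(rhc_t, delc_t, puc_t):
    if puc_t:
        return True        # picking up coffee makes RHC true
    if delc_t:
        return False       # delivering coffee makes RHC false
    return rhc_t           # frame: no relevant action, value persists

# The constraint then allows exactly the tuples
# (rhc_t, delc_t, puc_t, rhc_next(rhc_t, delc_t, puc_t)).
```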

Additional constraints in CSP Planning

Other constraints we may want are action constraints:
• specify which actions cannot occur simultaneously
• these are often called mutual exclusion (mutex) constraints

E.g., in the robot domain, DelM and DelC can occur in any sequence (or simultaneously), but we can enforce that they do not happen simultaneously with a constraint between DelMt and DelCt.
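Written out as a constraint over the two boolean action variables at the same step, the mutex simply lists the allowed value pairs; a tiny illustrative sketch:

```python
# Mutex (action) constraint as the set of allowed value pairs for
# DelM_t and DelC_t: every combination except both being True.
mutex_delm_delc = [(delm, delc)
                   for delm in (True, False)
                   for delc in (True, False)
                   if not (delm and delc)]
print(mutex_delm_delc)   # [(True, False), (False, True), (False, False)]
```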

Handling mutex constraints in Forward Planning

E.g., let's say we don't want DelM and DelC to occur simultaneously:

DelMt  DelCt
T      F
F      T
F      F

How would we encode this into STRIPS for forward planning? (Note that forward planning applies one action per step, so two actions can never occur simultaneously; the mutex is enforced automatically.)

Additional constraints in CSP Planning

Other constraints we may want are state constraints:
• hold between variables at the same time step
• they can capture physical constraints of the system (e.g., the robot cannot hold coffee and mail at the same time)

RHCt  RHMt
T     F
F     T
F     F

Handling state constraints in Forward Planning

RHCt  RHMt
T     F
F     T
F     F

How could we handle these constraints in STRIPS for forward planning? We need to use preconditions:
• the robot can pick up coffee only if it does not have coffee and it does not have mail
• the robot can pick up mail only if it does not have mail and it does not have coffee


CSP Planning: Solving the problem

Map the STRIPS representation for horizon k = 0, 1, 2, ..., until a solution is found. For each horizon, run arc consistency and search, or stochastic local search!

k = 0: Is State0 a goal? If yes, DONE! If no, increase the horizon:

CSP Planning: Solving the problem

Map the STRIPS representation for horizon k = 1. Run arc consistency and search, or stochastic local search!

k = 1: Is State1 a goal? If yes, DONE! If no, increase the horizon:

CSP Planning: Solving the problem

Map the STRIPS representation for horizon k = 2. Run arc consistency, search, or stochastic local search!

k = 2: Is State2 a goal? If yes, DONE! If no... continue.

Solve Planning as CSP: pseudo code

solved = false
horizon = 0
while solved = false:
    map STRIPS into a CSP with the current horizon
    solve the CSP -> solution
    if solution exists then solved = true
    else horizon = horizon + 1
return solution
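A Python sketch of this loop; map_strips_to_csp and solve_csp are hypothetical helpers standing in for the STRIPS-to-CSP translation and for arc consistency plus (stochastic local) search, and the max_horizon bound is an added practical safeguard not in the pseudo code.

```python
# Sketch of the horizon-expanding loop. map_strips_to_csp and solve_csp
# are hypothetical helpers; max_horizon is an added practical bound.
def plan_as_csp(strips_problem, max_horizon=20):
    horizon = 0
    while horizon <= max_horizon:
        csp = map_strips_to_csp(strips_problem, horizon)   # hypothetical helper
        solution = solve_csp(csp)                          # hypothetical helper
        if solution is not None:
            # the action variables at steps 0..horizon-1 form the plan
            return solution
        horizon += 1
    return None   # no plan found within the horizon bound
```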

STRIPS to CSP applet

Allows you to:
• specify a planning problem in STRIPS
• map it into a CSP for a given horizon
• the CSP translation is automatically loaded into the CSP applet, where it can be solved

State of the art planners

• A similar process is implemented (more efficiently) in the Graphplan planner
• In general, planning graphs are an efficient way to create a representation of a planning problem that can be used to
  • achieve better heuristic estimates
  • directly construct plans

You know the key ideas!

• Ghallab, Nau, and Traverso. Automated Planning: Theory and Practice
• Web site: http://www.laas.fr/planning (also has lecture notes)

[Figure: a bridge deal showing the West, North, East and South hands; a player fully knows its own hand ("you know") but only a little about the other hands ("you know a little")]

Some applications of planning

• Emergency evacuation
• Robotics
• Space exploration
• Manufacturing analysis
• Games (e.g., Bridge)
• Generating natural language
• Product recommendations
• ...

Learning Goals for Planning

• STRIPS
  • represent a planning problem with the STRIPS representation
  • explain the STRIPS assumption
• Forward planning
  • solve a planning problem by search (forward planning); specify states, successor function, goal test and solution
  • construct and justify a heuristic function for forward planning
• CSP planning
  • translate a planning problem represented in STRIPS into a corresponding CSP problem (and vice versa)
  • solve a planning problem with CSP by expanding the horizon

On to the next topic: logic!