Extending PDDL to Model Stochastic Decision Processes Håkan L. S. Younes Carnegie Mellon University




Page 1

Extending PDDL to Model Stochastic Decision Processes

Håkan L. S. Younes
Carnegie Mellon University

Page 2

Introduction

PDDL extensions for modeling stochastic decision processes
Only full observability is considered

Formalism for expressing probabilistic temporally extended goals

No commitment on plan representation

Page 3

Simple Example

(:action flip-coin
 :parameters (?coin)
 :precondition (holding ?coin)
 :effect (and (not (holding ?coin))
              (probabilistic 0.5 (head-up ?coin)
                             0.5 (tail-up ?coin))))
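Read operationally, a probabilistic effect samples one outcome from a discrete distribution. A minimal Python sketch of that sampling step (function and atom names are illustrative, not part of PDDL):

```python
import random

def sample_effect(prob_effects, rng=random):
    """Pick one effect from a list of (probability, effect) pairs summing to 1."""
    r, acc = rng.random(), 0.0
    for p, eff in prob_effects:
        acc += p
        if r < acc:
            return eff
    return prob_effects[-1][1]  # guard against floating-point round-off

# the flip-coin effect: heads or tails, each with probability 0.5
outcome = sample_effect([(0.5, "head-up"), (0.5, "tail-up")])
```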

Page 4

Stochastic Actions

Variation of factored probabilistic STRIPS operators [Dearden & Boutilier 97]

An action consists of a precondition and a consequence set C = {c1, …, cn}

Each cᵢ has a trigger condition φᵢ and an effects list Eᵢ = ⟨p₁ⁱ, E₁ⁱ; …; pₖⁱ, Eₖⁱ⟩

Σⱼ pⱼⁱ = 1 for each Eᵢ
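The consequence structure above can be rendered as a small data type; a hypothetical Python sketch that also enforces the sum-to-1 constraint on each effects list:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

State = frozenset  # a state, represented as the set of atoms that hold

@dataclass
class Consequence:
    """One consequence c_i: a trigger condition plus a weighted effects list E_i."""
    trigger: Callable[[State], bool]
    effects: List[Tuple[float, object]]  # (p_j, effect-set) pairs

    def __post_init__(self):
        # enforce the constraint that the p_j in each E_i sum to 1
        total = sum(p for p, _ in self.effects)
        if abs(total - 1.0) > 1e-9:
            raise ValueError("probabilities in an effects list must sum to 1")

# the flip-coin consequence: heads or tails, each with probability 0.5
flip = Consequence(trigger=lambda s: "holding" in s,
                   effects=[(0.5, {"head-up"}), (0.5, {"tail-up"})])
```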

Page 5

Stochastic Actions: Semantics

An action is enabled in a state s if its precondition holds in s

Executing a disabled action is allowed, but does not change the state
Different from deterministic PDDL
Motivation: partial observability
Precondition becomes factored trigger condition

Page 6

Stochastic Actions: Semantics (cont.)

When applying an enabled action to s:
Select an effect set for each consequence with enabled trigger condition
The combined effects of the selected effect sets are applied atomically to s
Unique next state if consequences with mutually consistent trigger conditions have commutative effect sets
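A sketch of this one-step semantics in Python, assuming states are sets of true atoms and each effect set is given as an (add-atoms, delete-atoms) pair (all names here are illustrative, not part of the proposal):

```python
import random

def apply_action(state, precondition, consequences, rng=random):
    """One transition step: for each consequence whose trigger holds, sample
    one effect set, then apply all selected effect sets atomically."""
    if not precondition(state):
        return state  # executing a disabled action leaves the state unchanged
    adds, dels = set(), set()
    for trigger, effects in consequences:
        if trigger(state):
            r, acc = rng.random(), 0.0
            for p, (eff_add, eff_del) in effects:
                acc += p
                if r < acc:
                    adds |= eff_add
                    dels |= eff_del
                    break
    return frozenset((state - dels) | adds)

# deterministic check: from 'office', leave with probability 1
leave = [(lambda s: "office" in s, [(1.0, (set(), {"office"}))])]
```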

Page 7

Syntax of Probabilistic Effects

<effect> ::= <d-effect>
<effect> ::= (and <effect>*)
<effect> ::= (forall (<typed list(variable)>) <effect>)
<effect> ::= (when <GD> <d-effect>)
<d-effect> ::= (probabilistic <prob-eff>+)
<d-effect> ::= <a-effect>
<prob-eff> ::= <probability> <a-effect>
<a-effect> ::= (and <p-effect>*)
<a-effect> ::= <p-effect>
<p-effect> ::= (not <atomic formula(term)>)
<p-effect> ::= <atomic formula(term)>
<p-effect> ::= (<assign-op> <f-head> <f-exp>)
<probability> ::= Any rational number in the interval [0, 1]

Page 8

Correspondence to Components of Stochastic Actions

Effects list:
(probabilistic p₁ⁱ E₁ⁱ … pₖⁱ Eₖⁱ)

Consequence:
(when φᵢ (probabilistic p₁ⁱ E₁ⁱ … pₖⁱ Eₖⁱ))

Page 9

Stochastic Actions: Example

(:action move
 :parameters ()
 :effect (and (when (office)
                (probabilistic 0.9 (not (office))))
              (when (not (office))
                (probabilistic 0.9 (office)))
              (when (and (rain) (not (umbrella)))
                (probabilistic 0.9 (wet)))))

[Figure: transition diagrams over the atoms office, rain, umbrella, wet; from a state where office and rain hold and umbrella does not, the four successors have probabilities 0.81, 0.09, 0.09 and 0.01, while single-consequence transitions occur with probability 0.9 vs. 0.1]
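The diagram's numbers can be recomputed by hand: the enabled consequences select their effect sets independently, so the joint transition probabilities are simple products. A quick check in Python (the tuple labels are hypothetical shorthand for the successor states):

```python
# From (office, rain, no umbrella): the office-consequence fires
# (0.9 leave / 0.1 stay) independently of the wet-consequence (0.9 wet / 0.1 dry).
p_leave, p_wet = 0.9, 0.9
joint = {
    ("left", "wet"):    p_leave * p_wet,              # 0.81
    ("left", "dry"):    p_leave * (1 - p_wet),        # 0.09
    ("stayed", "wet"):  (1 - p_leave) * p_wet,        # 0.09
    ("stayed", "dry"):  (1 - p_leave) * (1 - p_wet),  # 0.01
}
```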

Page 10

Stochastic Actions: Example

(:action move
 :parameters ()
 :effect (and (when (office)
                (probabilistic 0.9 (not (office)) 0.1 (and)))
              (when (not (office))
                (probabilistic 0.9 (office) 0.1 (and)))
              (when (and (rain) (not (umbrella)))
                (probabilistic 0.9 (wet) 0.1 (and)))))

[Figure: the same transition diagrams with the empty effect (and) made explicit; the probabilities are unchanged: 0.81, 0.09, 0.09, 0.01]

Page 15

Exogenous Events

Like stochastic actions, but beyond the control of the decision maker

Defined using :event keyword instead of :action keyword

Common in control theory to say that everything is an event, and that some are controllable (what we call actions)

Page 16

Exogenous Events: Example

(:action move
 :parameters ()
 :effect (and (when (office)
                (probabilistic 0.9 (not (office))))
              (when (not (office))
                (probabilistic 0.9 (office)))))

(:event make-wet
 :parameters ()
 :precondition (and (rain) (not (umbrella)))
 :effect (probabilistic 0.9 (wet)))

Page 17

Expressiveness

Discrete-time MDPs
Exogenous events are so far only a modeling convenience and do not add to the expressiveness

Page 18

Adding Time

States have stochastic duration
Transitions are instantaneous

Page 19

Actions and Events with Stochastic Delay

Associate a delay distribution F(t) with each action a

F(t) is the cumulative distribution function for the delay from when a is enabled until it triggers

Analogous for exogenous events
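Sampling a trigger time from F(t) can be sketched with standard-library samplers; the distribution tags below mirror the (geometric …), (uniform …) and (weibull …) syntax used in the examples, but the dispatch function itself is illustrative, not part of the proposal:

```python
import random

def sample_delay(dist, rng=random):
    """Draw a delay from a distribution tag like ('uniform', 0, 6)."""
    kind = dist[0]
    if kind == "geometric":      # (geometric p): discrete steps until trigger
        t = 1
        while rng.random() >= dist[1]:
            t += 1
        return t
    if kind == "uniform":        # (uniform a b)
        return rng.uniform(dist[1], dist[2])
    if kind == "weibull":        # (weibull shape), unit scale assumed
        return rng.weibullvariate(1.0, dist[1])
    if kind == "exponential":    # (exponential rate)
        return rng.expovariate(dist[1])
    raise ValueError(f"unknown delay distribution: {kind}")

delay = sample_delay(("uniform", 0, 6))  # e.g. a move delay per the later example
```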

Page 20

Delayed actions and events: Example

(:delayed-action move
 :parameters ()
 :delay (geometric 0.9)
 :effect (and (when (office) (not (office)))
              (when (not (office)) (office))))

[Figure: two-state diagram between office and ¬office; both transitions have delay distribution G(0.9)]

Page 21

[Figure: the same move action with the geometric delay G(0.9) unrolled into per-step transition probabilities: 0.9 to trigger, 0.1 to remain]

Page 22

Expressiveness

Geometric delay distributions → discrete-time MDP
Exponential delay distributions → continuous-time MDP
General delay distributions → at least semi-Markov decision process
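The geometric case can be sanity-checked by simulation: an action with delay G(p) fires on each discrete step with probability p, so its mean delay is 1/p, matching the expected sojourn time of the corresponding discrete-time MDP. A small Python check:

```python
import random

def geometric_delay(p, rng):
    """Steps until an action with geometric delay G(p) triggers: each
    discrete time step it fires with probability p, independently."""
    t = 1
    while rng.random() >= p:
        t += 1
    return t

# the sample mean should approach 1/p = 1/0.9 ≈ 1.11
rng = random.Random(42)
mean = sum(geometric_delay(0.9, rng) for _ in range(10000)) / 10000
```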

Page 23

General Delay Distribution: Example

(:delayed-action move
 :parameters ()
 :delay (uniform 0 6)
 :effect (and (when (office) (not (office)))
              (when (not (office)) (office))))

(:delayed-event make-wet
 :parameters ()
 :delay (weibull 2)
 :precondition (and (rain) (not (umbrella)))
 :effect (wet))

Page 24

Concurrent Semi-Markov Processes

Each action and event separately

[Figure: move viewed alone is a two-state process between office and ¬office with delay U(0, 6); make-wet alone is a two-state process between ¬wet and wet with delay W(2)]

Page 25

Generalized Semi-Markov Process

Putting it together

[Figure: the combined four-state process over office and wet, with U(0, 6) delays on the move transitions and W(2) delays on the make-wet transitions]

Page 26

Generalized Semi-Markov Process (cont.)

Why generalized?

[Figure: at t = 0, both move (delay U(0, 6)) and make-wet (delay W(2)) are enabled in the state where office holds and wet does not]

Page 27

Generalized Semi-Markov Process (cont.)

Why generalized?

[Figure: when make-wet triggers at t = 3, the remaining delay for move is distributed U(0, 3) rather than a freshly sampled U(0, 6) — the clock of a still-enabled event keeps running across the transition]
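The clock bookkeeping that makes the process "generalized" can be sketched directly: when one event fires, the other enabled events keep their partially expired clocks rather than resampling. An illustrative Python fragment (event names follow the running example; this is a sketch, not the paper's formal construction):

```python
def gsmp_first_transition(clocks):
    """Given remaining delays for the enabled events, fire the earliest one
    and age the surviving clocks — they are NOT resampled."""
    winner = min(clocks, key=clocks.get)
    t = clocks[winner]
    remaining = {e: d - t for e, d in clocks.items() if e != winner}
    return winner, t, remaining

# make-wet fires at t = 3; move's clock drops from 5.0 to 2.0 and keeps running
winner, t, rest = gsmp_first_transition({"move": 5.0, "make-wet": 3.0})
```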

Page 28

Expressiveness

Hierarchy of stochastic decision processes

GSMDP: concurrency, general delay distributions, probabilistic effects
SMDP: general delay distributions, probabilistic effects
MDP: memoryless delay distributions, probabilistic effects

Page 29

Probabilistic Temporally Extended Goals

Goal specified as a CSL (PCTL) formula

φ ::= true | a | φ ∧ φ | ¬φ | Pr≥p(ψ)
ψ ::= φ U≤t φ | ◊≤t φ | □≤t φ

Page 30

Goals: Examples

Achievement goal: Pr≥0.9(◊ office)
Achievement goal with deadline: Pr≥0.9(◊≤5 office)
Achievement goal with safety constraint: Pr≥0.9(¬wet U office)
Maintenance goal: Pr≥0.8(□ ¬wet)
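Such probabilistic goals can be checked approximately by sampling trajectories, e.g. estimating an achievement goal Pr≥θ(◊≤t office) from the fraction of sampled runs that reach the goal in time. A toy Python sketch with hypothetical dynamics (the 0.9 per-step probability is made up for illustration):

```python
import random

def estimate_eventually(sample_path, horizon, goal, n=2000):
    """Monte Carlo estimate of Pr(eventually goal within horizon): the
    fraction of sampled trajectories that hit a goal state in time."""
    hits = sum(1 for _ in range(n)
               if any(goal(s) for s in sample_path(horizon)))
    return hits / n

# hypothetical trajectory sampler: each of 5 steps lands in 'office'
# with probability 0.9, so Pr(eventually office) = 1 - 0.1**5
rng = random.Random(7)
def path(h):
    return ["office" if rng.random() < 0.9 else "hall" for _ in range(h)]

p = estimate_eventually(path, 5, lambda s: s == "office")
```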

Page 31

Summary

PDDL Extensions
Probabilistic effects
Exogenous events
Delayed actions and events

CSL/PCTL goals

Thursday: Planning!

Page 32

Panel Statement

Stochastic decision processes are useful
Robotic control, queuing systems, …

Baseline PDDL should be designed with stochastic decision processes in mind

Formalisms are needed to express complex constraints on valid plans
PCTL/CSL goals

Page 33

Role of PDDL

discrete dynamics
continuous dynamics
stochastic dynamics