
Summary of MDPs (until Now)


• Finite-horizon MDPs
  – Non-stationary policy
  – Value iteration (a short sketch follows this list)
    • Compute V0, .., Vk, .., VT, the value functions for k stages to go
    • Vk is computed in terms of Vk-1
    • Policy Pk is the MEU (maximum-expected-utility) policy with respect to Vk
• Infinite-horizon MDPs
  – Stationary policy
  – Value iteration
    • Converges because of the contraction property of the Bellman operator
  – Policy iteration
• Indefinite-horizon MDPs -- Stochastic Shortest Path problems (with initial state given)
  – Proper policies
  – Can exploit the start state
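
To make the finite-horizon recursion concrete, here is a minimal Python sketch of value iteration for k stages to go. The tiny tabular MDP (states, transition lists T, rewards R) is an invented toy, not one from the slides.

# Minimal sketch of finite-horizon value iteration: V_k is computed from V_{k-1},
# and the stage-k policy is the greedy (MEU) policy with respect to V_{k-1}.
T = {  # T[s][a] = list of (next_state, probability); a hypothetical 2-state MDP
    's0': {'a': [('s1', 0.8), ('s0', 0.2)], 'b': [('s0', 1.0)]},
    's1': {'a': [('s1', 1.0)], 'b': [('s0', 0.5), ('s1', 0.5)]},
}
R = {'s0': 0.0, 's1': 1.0}  # reward for being in a state

def finite_horizon_vi(horizon):
    """Return V[k][s] (value with k stages to go) and the non-stationary policy."""
    V = [{s: 0.0 for s in T}]             # V_0: nothing left to earn
    policy = [dict()]                     # no action needed with 0 stages to go
    for k in range(1, horizon + 1):
        Vk, Pk = {}, {}
        for s in T:
            # Q_k(s,a) = R(s) + sum_s' T(s,a,s') * V_{k-1}(s')
            q = {a: R[s] + sum(p * V[k - 1][s2] for s2, p in outcomes)
                 for a, outcomes in T[s].items()}
            Pk[s] = max(q, key=q.get)     # MEU action for stage k
            Vk[s] = q[Pk[s]]
        V.append(Vk)
        policy.append(Pk)
    return V, policy

V, policy = finite_horizon_vi(horizon=3)
print(V[3], policy[3])   # values and greedy actions with 3 stages to go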


Ideas for Efficient Algorithms..

• Use heuristic search (and reachability information)
  – LAO*, RTDP
• Use execution and/or simulation
  – "Actual execution": reinforcement learning (the main motivation for RL is to "learn" the model)
  – "Simulation": simulate the given model to sample possible futures
    • Policy rollout, hindsight optimization etc. (a rollout sketch follows this list)
• Use "factored" representations
  – Factored representations for actions, reward functions, values and policies
  – Directly manipulating factored representations during the Bellman update
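
As one concrete instance of the "simulation" idea above, here is a minimal Python sketch of one-step policy rollout: estimate Q(s,a) by simulating a base policy after each candidate first action and pick the best. The simulate_step and base_policy callables are hypothetical stand-ins for a model simulator and a baseline policy, not anything defined in these slides.

def rollout_action(state, actions, simulate_step, base_policy,
                   num_samples=20, horizon=10):
    """One-step policy rollout (sketch): estimate Q(state, a) by Monte-Carlo
    simulation of base_policy after taking a first, then pick the best a."""
    def sampled_return(first_action):
        s, a, total = state, first_action, 0.0
        for _ in range(horizon):
            s, r = simulate_step(s, a)   # sample (next state, reward) from the model
            total += r
            a = base_policy(s)           # follow the base policy afterwards
        return total

    q_est = {a: sum(sampled_return(a) for _ in range(num_samples)) / num_samples
             for a in actions}
    return max(q_est, key=q_est.get)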


Heuristic Search vs. Dynamic Programming (Value/Policy Iteration)

• VI and PI approaches use the dynamic programming update
  – Set the value of a state in terms of the maximum expected value achievable by doing actions from that state.
• They do the update for every state in the state space
  – Wasteful if we know the initial state(s) that the agent is starting from
• Heuristic search (e.g. A*/AO*) explores only the part of the state space that is actually reachable from the initial state
• Even within the reachable space, heuristic search can avoid visiting many of the states
  – Depending on the quality of the heuristic used..
• But what is the heuristic?
  – An admissible heuristic is a lower bound on the cost to reach the goal from any given state
  – It is a lower bound on J*!


Real Time Dynamic Programming [Barto, Bradtke, Singh '95]

• Trial: simulate the greedy policy starting from the start state; perform Bellman backups on the visited states
• RTDP: repeat trials until the cost function converges

RTDP was originally introduced for reinforcement learning. For RL, instead of "simulate" you "execute", and you also have to do "exploration" in addition to "exploitation": with probability p follow the greedy policy, and with probability 1-p pick a random action.


Stochastic Shortest Path MDP


[Figure: RTDP Trial. Starting at s0, Qn+1(s0,a) is computed for each action a1, a2, a3 from the current estimates Jn of the successor states; Jn+1(s0) is set to the minimum, the greedy action (here agreedy = a2) is taken, and the trial continues toward the Goal.]


Greedy “On-Policy” RTDP without execution

Using the current utility values, select the action with the highest expected utility (the greedy action) at each state, until you reach a terminating state. Update the values along this path. Loop back until the values stabilize.
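
A minimal Python sketch of one such trial in the cost-minimization (stochastic shortest path) setting used in these slides: at each visited state do a Bellman backup, take the greedy action, and sample a successor. The MDP interface (actions, cost, successors, is_goal) is an assumed placeholder, not code from the lecture.

import random

def rtdp_trial(s0, J, actions, cost, successors, is_goal, max_steps=1000):
    """One RTDP trial (sketch). J is a dict of cost-to-go estimates, ideally
    initialized with an admissible heuristic; successors(s, a) is assumed to
    return a list of (next_state, probability) pairs."""
    s = s0
    for _ in range(max_steps):
        if is_goal(s):
            break
        # Q(s,a) = c(s,a) + sum_s' T(s,a,s') J(s')
        q = {a: cost(s, a) + sum(p * J.get(s2, 0.0) for s2, p in successors(s, a))
             for a in actions(s)}
        a_greedy = min(q, key=q.get)        # min, since J is a cost function
        J[s] = q[a_greedy]                  # Bellman backup on the visited state
        next_states, probs = zip(*successors(s, a_greedy))
        s = random.choices(next_states, weights=probs)[0]   # simulate the outcome
    return J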


Comments

• Properties
  – If all states are visited infinitely often, then Jn → J*
  – Only relevant states will be considered
    • A state is relevant if the optimal policy could visit it.
    • Notice the emphasis on "optimal policy": just because a rough neighborhood surrounds the National Mall doesn't mean that you will need to know what to do in that neighborhood
• Advantages
  – Anytime: more probable states are explored quickly
• Disadvantages
  – Complete convergence is slow!
  – No termination condition

Do we care about complete convergence? Think Capt. Sullenberger.


Labeled RTDP [Bonet & Geffner '03]

• Initialise J0 with an admissible heuristic
  – ⇒ Jn monotonically increases
• Label a state as solved if the Jn for that state has converged
• Backpropagate the 'solved' labeling
• Stop trials when they reach any solved state
• Terminate when s0 is solved

[Figure: if the high-Q-cost region between a state s and the goal G has converged along the best action, J(s) won't change, so s can be labeled solved; when s reaches G only through another state t, both s and t get solved together.]

"Converged" means the Bellman residual is less than ε (a simplified sketch of this test follows).
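
A simplified Python sketch of that convergence test, reusing the assumed MDP interface from the RTDP sketch above: a state is labeled solved when its Bellman residual is below ε and all successors under its greedy action are already solved. The actual CheckSolved procedure of Bonet & Geffner does a more careful search over the greedy graph; this only illustrates the core idea.

EPSILON = 1e-3

def greedy_residual(s, J, actions, cost, successors):
    """Return (Bellman residual at s, greedy action at s) under cost function J."""
    q = {a: cost(s, a) + sum(p * J.get(s2, 0.0) for s2, p in successors(s, a))
         for a in actions(s)}
    a_greedy = min(q, key=q.get)
    return abs(q[a_greedy] - J.get(s, 0.0)), a_greedy

def try_label_solved(s, J, solved, actions, cost, successors, is_goal):
    """Label s as solved if its value has converged and every successor under
    the greedy action is itself already solved (goals are solved by definition)."""
    if is_goal(s) or s in solved:
        solved.add(s)
        return True
    residual, a_greedy = greedy_residual(s, J, actions, cost, successors)
    if residual < EPSILON and all(s2 in solved for s2, _ in successors(s, a_greedy)):
        solved.add(s)
        return True
    return False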


Probabilistic Planning

-- The competition (IPPC)
-- The action language (PPDDL)


Factored Representations: Actions

• Actions can be represented directly in terms of their effects on the individual state variables (fluents). The CPTs of the BNs can be represented compactly too!
  – Write a Bayes network relating the values of fluents in the state before and after the action
    • Bayes networks representing fluents at different time points are called "dynamic Bayes networks"
    • We look at 2TBNs (2-time-slice dynamic Bayes nets)
• Go further by using the STRIPS assumption
  – Fluents not affected by the action are not represented explicitly in the model
  – Called the Probabilistic STRIPS Operator (PSO) model (a small sketch follows this list)
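
To make this concrete, here is a small Python sketch of a PSO-style action: only the fluents the action touches are listed, each outcome carries a probability, and every untouched fluent persists. The "pickup" action and its fluent names are invented for illustration.

import random

pso_pickup = {
    'name': 'pickup(A)',
    'precondition': {'clear_A': True, 'handempty': True},
    'outcomes': [   # (probability, effects on the fluents this action touches)
        (0.9, {'holding_A': True, 'clear_A': False, 'handempty': False}),
        (0.1, {}),  # the action fails; nothing changes
    ],
}

def applicable(state, pso):
    return all(state.get(f) == v for f, v in pso['precondition'].items())

def sample_apply(state, pso):
    """Sample one outcome and apply it; unaffected fluents carry over (STRIPS assumption)."""
    weights = [p for p, _ in pso['outcomes']]
    _, effects = random.choices(pso['outcomes'], weights=weights)[0]
    next_state = dict(state)      # persistence of the fluents not mentioned
    next_state.update(effects)
    return next_state

s = {'clear_A': True, 'handempty': True, 'holding_A': False, 'on_B_A': True}
if applicable(s, pso_pickup):
    print(sample_apply(s, pso_pickup))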


[Figure: Action CLK]


[Figure: an example annotated "Not ergodic"]


How to compete?

Off-line policy generation
• First compute the whole policy
  – Get the initial state
  – Compute the optimal policy given the initial state and the goals
• Then just execute the policy
  – Loop
    • Do the action recommended by the policy
    • Get the next state
  – Until reaching a goal state
• Pros: can anticipate all problems
• Cons: may take too much time to start executing

Online action selection
• Loop
  – Compute the best action for the current state
  – Execute it
  – Get the new state
• Pros: provides a fast first response
• Cons: may paint itself into a corner..

[Figure: timelines of the two modes. Offline: a long policy-computation phase followed by execution. Online: repeated select/execute steps interleaved.]


1st IPPC & Post-Mortem..

IPPC Competitors
• Most IPPC competitors used different approaches for offline policy generation.
• One group implemented a simple online "replanning" approach in addition to offline policy generation
  – Determinize the probabilistic problem
    • Most-likely vs. all-outcomes determinization
  – Loop
    • Get the state S; call a classical planner (e.g. FF) with [S,G] as the problem
    • Execute the first action of the plan
• Umpteen reasons why such an approach should do quite badly..

Results and Post-mortem
• To everyone's surprise, the replanning approach wound up winning the competition.
• Lots of hand-wringing ensued..
  – Maybe we should require that the planners really, really use probabilities?
  – Maybe the domains should somehow be made "probabilistically interesting"?
• Current understanding:
  – No reason to believe that off-line policy computation must dominate online action selection
  – The "replanning" approach is just a degenerate case of hindsight optimization


FF-Replan

• Simple replanner
• Determinizes the probabilistic problem
• Solves for a plan in the determinized problem

[Figure: a deterministic plan a1, a2, a3, a4 from S to G; after unexpected outcomes, replanned fragments (a2, a3, a4 and a5) lead back to G.]
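
A minimal Python sketch of this loop: determinize once, call a classical planner on [S, G], send the first action to the simulator, and replan from whatever state comes back. The determinize, classical_planner, and execute callables are assumed stand-ins, not the actual FF-Replan code.

def ff_replan(s0, goal_test, determinize, classical_planner, execute):
    """FF-Replan style loop (sketch). classical_planner(det_model, s, goal_test)
    is assumed to return a list of actions; execute(s, a) returns the state the
    simulator actually lands in, which may differ from the deterministic prediction."""
    det_model = determinize('most-likely')    # or 'all-outcomes'
    s = s0
    while not goal_test(s):
        plan = classical_planner(det_model, s, goal_test)
        if not plan:
            raise RuntimeError('no plan found from the current state (dead end)')
        s = execute(s, plan[0])               # execute only the first action, then replan
    return s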


All Outcome Replanning (FFRA)

[Figure: an action with two probabilistic effects, Effect 1 (with probability p1) and Effect 2 (with probability p2), is compiled into two deterministic actions: Action1 producing Effect 1 and Action2 producing Effect 2, with the probabilities dropped.]
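
A small Python sketch of that compilation, over PSO-style actions like the one sketched earlier: each probabilistic outcome becomes its own deterministic action and the probabilities are discarded.

def all_outcome_determinize(pso_actions):
    """Split every probabilistic outcome of every action into a separate
    deterministic action (the probabilities are dropped)."""
    det_actions = []
    for pso in pso_actions:
        for i, (_prob, effects) in enumerate(pso['outcomes']):
            det_actions.append({
                'name': f"{pso['name']}_outcome{i}",
                'precondition': dict(pso['precondition']),
                'effects': dict(effects),    # a single deterministic effect
            })
    return det_actions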


Reducing calls to FF..

• We can reduce calls to FF by memoizing successes (sketched below)
  – If we were given s0 and sG as the problem, and solved it using our determinization to get the plan s0-a0-s1-a1-s2-a2-s3 ... an-sG
  – Then, in addition to sending the first action to the simulator, we can memoize the {si: ai} pairs as a partial policy.
    • Whenever a new state is given by the simulator, we can check whether it is already in the partial policy
    • Additionally, FF-Replan can consider every state in the partial-policy table as a goal state (in that if it reaches one of them, it already knows how to get to the goal state..)
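
A minimal Python sketch of this memoization: cache the state-action pairs along every plan the planner returns and consult the cache before planning again. plan_from and execute are assumed placeholders for the planner call and the simulator.

def replan_with_memoization(s0, goal_test, plan_from, execute):
    """FF-Replan with a memoized partial policy (sketch). plan_from(s) is assumed
    to return the plan as a list of (state, action) pairs ending at the goal."""
    partial_policy = {}
    s = s0
    while not goal_test(s):
        if s not in partial_policy:                 # cache miss: call the planner
            for si, ai in plan_from(s):
                partial_policy.setdefault(si, ai)   # memoize the whole plan
        s = execute(s, partial_policy[s])           # send the memoized action
    return s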


Hindsight Optimization for Anticipatory Planning/Scheduling

• Consider a deterministic planning (scheduling) domain where the goals arrive probabilistically
  – Using up resources and/or doing greedy actions may preclude you from exploiting later opportunities
• How do you select actions to perform?
  – Answer: if you have a distribution over goal arrivals, then
    • Sample goals up to a certain horizon using this distribution
    • Now we have a deterministic planning problem with known goals
    • Solve it; do the first action from it
  – Can improve accuracy with multiple samples
• FF-Hop uses this idea for stochastic planning. In anticipatory planning, the uncertainty is exogenous (the uncertain arrival of goals). In stochastic planning, the uncertainty is endogenous (the actions have multiple outcomes).


Probabilistic Planning (goal-oriented)

[Figure: a two-stage tree of action and probabilistic-outcome nodes starting from initial state I, with actions A1 and A2 available at time 1 and time 2. Left outcomes are more likely; the leaves include a goal state and a dead end. Objective: maximize goal achievement.]


Problems of FF-Replan and a better alternative: sampling

FF-Replan's static determinizations don't respect the probabilities. We need "probabilistic and dynamic determinization": sample future outcomes and determinize in hindsight. Each future sample becomes a known-future deterministic problem.


Hindsight Optimization (Online Computation of V_HS)

• Pick the action a with the highest Q(s,a,H), where
  – Q(s,a,H) = R(s,a) + Σ_s' T(s,a,s') V*(s',H-1)
• Compute V* by sampling
  – H-horizon futures F_H for M = [S,A,T,R]
    • A future is a mapping of state, action and time (h < H) to a state: S × A × h → S
  – Common-random-number (correlated) vs. independent futures..
  – Time-independent vs. time-dependent futures
• Value of a policy π for F_H: R(s, F_H, π)
  – V*(s,H) = max_π E_{F_H} [ R(s, F_H, π) ]
  – But this is still too hard to compute..
  – Let's swap max and expectation:
    • V_HS(s,H) = E_{F_H} [ max_π R(s, F_H, π) ]
    • max_π R(s, F_{H-1}, π) is approximated by an FF plan
• V_HS overestimates V*. Why?
  – Intuitively, because V_HS can assume that it can use different policies in different futures, while V* needs to pick one policy that works best (in expectation) in all futures.
• But then, V_FFRa overestimates V_HS
  – Viewed in terms of J*, V_HS is a more informed admissible heuristic..


Implementation: FF-Hindsight

• Constructs a set of futures
• Solves the planning problem for each H-horizon future using FF
• Sums the rewards of each of the plans
• Chooses the action with the largest Q_HS value (a sketch follows)
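
A minimal Python sketch of that action selection: sample a set of H-horizon futures, let a deterministic planner optimize each future in hindsight with the first action fixed, accumulate the plan rewards as Q_HS estimates, and pick the best first action. sample_future and solve_deterministic are assumed stand-ins for the future sampler and the FF call.

def hop_choose_action(s, actions, sample_future, solve_deterministic,
                      num_futures=30, horizon=10):
    """Hindsight optimization (sketch): Q_HS(s, a) is estimated by fixing the
    first action to a and letting a deterministic planner optimize the rest of
    each sampled known future; solve_deterministic is assumed to return that
    plan's reward (0 if no plan reaches the goal)."""
    q_hs = {a: 0.0 for a in actions}
    for _ in range(num_futures):
        future = sample_future(horizon)            # one known-future determinization
        for a in actions:
            q_hs[a] += solve_deterministic(s, a, future)
    # dividing by num_futures would give the sample average; the argmax is unaffected
    return max(q_hs, key=q_hs.get)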


Probabilistic Planning (goal-oriented)

[Figure: the same two-stage action/outcome tree as before, from initial state I with actions A1 and A2, left outcomes more likely, with a goal state and a dead end among the leaves.]


Improvement Ideas

• Reuse
  – Generated futures that are still relevant
  – Scoring for action branches at each step
  – If expected outcomes occur, keep the plan
• Future generation
  – Not just probabilistic
  – Somewhat even distribution over the space
• Adaptation
  – Dynamic width and horizon for sampling
  – Actively detect and avoid unrecoverable failures on top of sampling


Hindsight Sample 1

[Figure: one sampled future of the two-stage action/outcome tree from initial state I, with the actions scored A1: 1, A2: 0; the sampled branches include a goal state and a dead end, and left outcomes are more likely.]


Exploiting Determinism

[Figure: three deterministic plans from state S1 to goal G, each beginning with the chosen action a*.]

Plans are generated for the chosen action a*. The longest prefix shared by these plans is identified and executed without running ZSL, OSL or FF!
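
A small Python sketch of the prefix computation behind this idea: given the plans produced for the chosen action a* across the sampled futures, find the longest prefix they all share, which can then be executed without further planner calls (under the slide's assumptions).

def common_plan_prefix(plans):
    """Return the longest action prefix shared by all plans; each plan is a
    list of actions beginning with the chosen action a*."""
    if not plans:
        return []
    prefix = []
    for step, action in enumerate(plans[0]):
        if all(len(p) > step and p[step] == action for p in plans[1:]):
            prefix.append(action)
        else:
            break
    return prefix

# Hypothetical example: three plans for the chosen action 'a*'
plans = [['a*', 'b', 'c', 'd'], ['a*', 'b', 'c'], ['a*', 'b', 'e']]
print(common_plan_prefix(plans))   # ['a*', 'b'] can be executed without replanning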


Handling Unlikely Outcomes: All-Outcome Determinization

• Assign each possible outcome an action
• Solve for a plan
• Combine the plan with the plans from the HOP solutions


Relaxations for Stochastic Planning

• Determinizations can also be used as a basis for heuristics to initialize V for value iteration [mGPT; GOTH etc.]
• Heuristics come from relaxation
• We can relax along two separate dimensions:
  – Relax -ve interactions
    • Consider +ve interactions alone using relaxed planning graphs
  – Relax uncertainty
    • Consider determinizations
  – Or a combination of both!


Solving Determinizations

• If we relax -ve interactions
  – Then compute a relaxed plan
    • Admissible if the optimal relaxed plan is computed
    • Inadmissible otherwise
• If we keep -ve interactions
  – Then use a deterministic planner (e.g. FF/LPG)
    • Inadmissible unless the underlying planner is optimal


Dimensions of Relaxation

[Figure: a plane whose axes are "Uncertainty" and "Negative Interactions", both in the direction of increasing consideration, with four points: 1 = Relaxed Plan heuristic, 2 = McLUG, 3 = FF/LPG, 4 = limited-width stochastic planning?]

Reducing uncertainty: bound the number of stochastic outcomes considered (a stochastic "width").


Dimensions of Relaxation

                                  Uncertainty considered
                            None           Some                  Full
  -ve interactions   None   Relaxed Plan   McLUG
  considered         Some
                     Full   FF/LPG         Limited-width
                                           stochastic planning


Expressiveness v. Cost

[Figure: "Node Expansions v. Heuristic Computation Cost" chart comparing heuristics (h = 0, FF-Replan/FFR, FF, McLUG, limited-width stochastic planning) along the two axes Nodes Expanded and Computation Cost.]


Reducing Heuristic Computation Cost by exploiting factored representations

• The heuristic computed for a state might give us an idea about the heuristic values of other "similar" states
  – Similarity can be determined in terms of the state structure
• Exploit the overlapping structure of heuristics for different states
  – E.g. the SAG idea for McLUG
  – E.g. the triangle-tables idea for plans (c.f. Kolobov)


A Plan is a Terrible Thing to Waste

• Suppose we have a plan
  – s0-a0-s1-a1-s2-a2-s3 ... an-sG
  – We realize that this tells us not just the estimated value of s0, but also of s1, s2, ..., sn
  – So we don't need to compute the heuristic for them again
• Is that all?
  – If we have states and actions in factored representation, then we can explain exactly what aspects of si are relevant for the plan's success.
  – The "explanation" is a proof of correctness of the plan
    » It can be based on regression (if the plan is a sequence) or on a causal proof (if the plan is partially ordered)
  – The explanation will typically be just a subset of the literals making up the state
    » That means the plan suffix from si may actually be relevant in many more states, namely those consistent with that explanation (see the sketch below)
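
A minimal Python sketch of a regression-based explanation for a sequential plan over STRIPS-style actions: regress the goal backwards through the plan to recover, for each position i, the subset of literals of si that the remaining suffix actually depends on. Any state consistent with that subset can reuse the suffix. The action format (pre/add/del sets) and the tiny example are assumptions for illustration.

def regress(goals, action):
    """Regress a set of goal literals through one STRIPS action: the suffix
    needs the action's preconditions plus any goals the action does not add."""
    assert not (goals & action['del']), "action would destroy a needed literal"
    return (goals - action['add']) | action['pre']

def explain_plan(plan, goal_literals):
    """Return, for each position i, the literals of s_i that the plan suffix
    starting at i relies on (the 'explanation' of its correctness)."""
    needed = set(goal_literals)
    explanations = [None] * (len(plan) + 1)
    explanations[len(plan)] = frozenset(needed)
    for i in range(len(plan) - 1, -1, -1):
        needed = regress(needed, plan[i])
        explanations[i] = frozenset(needed)
    return explanations

# Tiny hypothetical example
a1 = {'name': 'unlock', 'pre': {'have_key'},  'add': {'door_open'}, 'del': set()}
a2 = {'name': 'enter',  'pre': {'door_open'}, 'add': {'inside'},    'del': set()}
print(explain_plan([a1, a2], {'inside'}))
# explanations[0] == {'have_key'}: any state containing the key can reuse the whole plan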


Triangle Table Memoization

• Use triangle tables / memoization

[Figure: a Blocksworld problem involving blocks A, B and C.]

If the above problem is solved, then we don't need to call FF again for the one below:

[Figure: the corresponding sub-problem involving only blocks A and B.]


Explanation-based Generalization (of Successes and Failures)

• Suppose we have a plan P that solves a problem [S, G].

• We can first find out what aspects of S this plan actually depends on
  – Explain (prove) the correctness of the plan, and see which parts of S actually contribute to this proof
  – Now you can memoize this plan for just that subset of S


Factored Representations: Reward, Value and Policy Functions

• Reward functions can be represented in factored form too. Possible representations include
  – Decision trees (made up of fluents)
  – ADDs (algebraic decision diagrams)
• Value functions are like reward functions (so they too can be represented similarly)
• The Bellman update can then be done directly using factored representations.. (a small sketch follows)
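
As a tiny illustration of the decision-tree option, here is a Python sketch of a factored value function over boolean fluents; the fluents and numbers are invented. (ADDs go further by sharing identical sub-diagrams, which this toy tree does not attempt.)

# A decision-tree value function: internal nodes test a fluent, leaves hold a
# value, and whole regions of the state space share a single leaf.
V_tree = ('printer_jammed',
          -5.0,                          # printer_jammed == True
          ('has_paper',                  # printer_jammed == False: test has_paper
           10.0,                         # has_paper == True
           4.0))                         # has_paper == False

def tree_value(tree, state):
    """Evaluate the decision-tree value function on a factored state (a dict of fluents)."""
    if not isinstance(tree, tuple):
        return tree                      # leaf: a numeric value
    fluent, true_branch, false_branch = tree
    return tree_value(true_branch if state[fluent] else false_branch, state)

print(tree_value(V_tree, {'printer_jammed': False, 'has_paper': True}))   # 10.0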


SPUDD's use of ADDs


Direct manipulation of ADDs in SPUDD