Factored Approaches for MDP & RL (Some Slides taken from Alan Fern's course)


Page 1: Factored  Approches  for MDP & RL

Factored Approaches for MDP & RL

(Some Slides taken from Alan Fern’s course)

Page 2: Factored  Approches  for MDP & RL

Factored MDP/RL

Representations
• States made of features
  – Boolean vs. continuous
• Actions modify the features (probabilistically)
  – Representations include Probabilistic STRIPS, 2-time-slice Dynamic Bayes Nets, etc.
• Reward and value functions
  – Representations include ADDs, linear weighted sums of features, etc.

Advantages
• Specification is far easier
• Inference: novel lifted versions of the value and policy iterations are possible
  – Bellman backup directly in terms of ADDs
  – Policy gradient approaches where you do direct search in the policy space
• Learning: generalization possibilities
  – Q-learning etc. will now directly update the factored representation (e.g. the weights of the features), thus giving implicit generalization
  – Approaches such as FF-HOP can recognize and reuse common substructure

Page 3: Factored  Approches  for MDP & RL

Problems with transition systems

• Transition systems are a great conceptual tool for understanding the differences between the various planning problems
• …However, direct manipulation of transition systems tends to be too cumbersome
  – The size of the explicit graph corresponding to a transition system is often very large
  – The remedy is to provide "compact" representations for transition systems
    • Start by explicating the structure of the "states"
      – e.g. states specified in terms of state variables
    • Represent actions not as incidence matrices but as functions specified directly in terms of the state variables
      – An action will work in any state where some state variables have certain values. When it works, it will change the values of certain (other) state variables.

Page 4: Factored  Approches  for MDP & RL

State Variable Models
• The world is made up of states which are defined in terms of state variables
  – Can be Boolean (or multi-ary or continuous)
• States are complete assignments over state variables
  – So, k Boolean state variables can represent how many states? (2^k)
• Actions change the values of the state variables
  – Applicability conditions of actions are also specified in terms of partial assignments over state variables

Page 5: Factored  Approches  for MDP & RL

Blocks world

State variables: Ontable(x), On(x,y), Clear(x), hand-empty, holding(x)

Stack(x,y)    Prec: holding(x), clear(y)                     Eff: on(x,y), ~clear(y), ~holding(x), hand-empty
Unstack(x,y)  Prec: on(x,y), hand-empty, clear(x)            Eff: holding(x), ~clear(x), clear(y), ~hand-empty
Pickup(x)     Prec: hand-empty, clear(x), ontable(x)         Eff: holding(x), ~ontable(x), ~hand-empty, ~clear(x)
Putdown(x)    Prec: holding(x)                               Eff: ontable(x), hand-empty, clear(x), ~holding(x)

Initial state: a complete specification of T/F values for the state variables
  -- By convention, variables with F values are omitted
Goal state: a partial specification of the desired state variable/value combinations
  -- Desired values can be both positive and negative

Init: Ontable(A),Ontable(B), Clear(A), Clear(B), hand-empty

Goal: ~clear(B), hand-empty

All the actions here have only positive preconditions; but this is not necessary

STRIPS ASSUMPTION: If an action changes a state variable, this must be explicitly mentioned in its effects
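To make the state-variable picture above concrete, here is a minimal sketch (not from the slides) of a grounded STRIPS-style action in Python: a state is a partial assignment over propositions, an action is a pair of partial assignments (preconditions and effects), and applying it only touches the variables mentioned in its effects. The dictionary layout and helper names (`applicable`, `apply_action`) are illustrative assumptions.

```python
# Sketch: grounded STRIPS-style actions over propositional state variables,
# for the blocks-world fragment above. A state maps proposition names to
# True/False; by convention, missing propositions are treated as False.

pickup_A = {
    "prec":   {"hand-empty": True, "clear(A)": True, "ontable(A)": True},
    "effect": {"holding(A)": True, "ontable(A)": False,
               "hand-empty": False, "clear(A)": False},
}

def applicable(state, action):
    """An action applies in any state agreeing with its precondition values."""
    return all(state.get(var, False) == val for var, val in action["prec"].items())

def apply_action(state, action):
    """STRIPS assumption: only variables mentioned in the effects change."""
    new_state = dict(state)
    new_state.update(action["effect"])
    return new_state

init = {"ontable(A)": True, "ontable(B)": True,
        "clear(A)": True, "clear(B)": True, "hand-empty": True}
if applicable(init, pickup_A):
    print(apply_action(init, pickup_A))
```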

Page 6: Factored  Approches  for MDP & RL

Why is the STRIPS representation compact? (compared to explicit transition systems)

• In explicit transition systems, actions are represented as state-to-state transitions, where each action is represented by an incidence matrix of size |S| x |S|
• In the state-variable model, actions are represented only in terms of the state variables whose values they care about and whose values they affect.
• Consider a state space of 1024 states. It can be represented by log2(1024) = 10 state variables. If an action needs variable v1 to be true and makes v7 false, it can be represented by just 2 bits (instead of a 1024 x 1024 matrix)
  – Of course, if the action has a complicated mapping from states to states, in the worst case the action representation will be just as large
  – The assumption being made here is that actions affect only a small number of state variables.

[Figure: spectrum of representations — Atomic (explicit transition representation), Relational/Propositional (STRIPS representation), First-order (Situation Calculus)]

Page 7: Factored  Approches  for MDP & RL

Factored Representations for MDPs: Actions

• Actions can be represented directly in terms of their effects on the individual state variables (fluents). The CPTs of the BNs can be represented compactly too!
  – Write a Bayes network relating the values of the fluents at the state before and after the action
    • Bayes networks representing fluents at different time points are called "Dynamic Bayes Networks"
    • We look at 2-TBNs (2-time-slice dynamic Bayes nets)
• Go further by using the STRIPS assumption
  – Fluents not affected by the action are not represented explicitly in the model
  – Called the Probabilistic STRIPS Operator (PSO) model
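To illustrate the PSO idea, here is a small hypothetical sketch (not PPDDL/RDDL syntax): an action has a precondition plus a distribution over alternative STRIPS-style effect sets, and only the fluents it mentions are touched. The action, probabilities, and helper names are assumptions for illustration.

```python
import random

# Hypothetical Probabilistic STRIPS Operator (PSO) sketch: a precondition and
# a distribution over alternative effect lists. Fluents not mentioned in the
# sampled effect are left untouched (STRIPS assumption).

slippery_pickup_A = {
    "prec": {"hand-empty": True, "clear(A)": True, "ontable(A)": True},
    "outcomes": [
        # (probability, effect)
        (0.8, {"holding(A)": True, "ontable(A)": False,
               "hand-empty": False, "clear(A)": False}),
        (0.2, {}),  # the gripper slips: nothing changes
    ],
}

def sample_successor(state, action, rng=random):
    """Sample one successor state from the action's effect distribution."""
    assert all(state.get(v, False) == val for v, val in action["prec"].items())
    r, acc = rng.random(), 0.0
    for prob, effect in action["outcomes"]:
        acc += prob
        if r <= acc:
            succ = dict(state)
            succ.update(effect)
            return succ
    return dict(state)  # numerical fallback

state = {"ontable(A)": True, "clear(A)": True, "hand-empty": True}
print(sample_successor(state, slippery_pickup_A))
```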

Page 8: Factored  Approches  for MDP & RL

Action CLK

Page 9: Factored  Approches  for MDP & RL
Page 10: Factored  Approches  for MDP & RL
Page 11: Factored  Approches  for MDP & RL

Factored Representations: Reward, Value and Policy Functions

• Reward functions can be represented in factored form too. Possible representations include
  – Decision trees (made up of fluents)
  – ADDs (algebraic decision diagrams)
• Value functions are like reward functions (so they too can be represented similarly)
• The Bellman update can then be done directly using factored representations.
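As a toy illustration of a factored reward function, here is a hypothetical decision-tree reward over Boolean fluents (an ADD is essentially the same idea with shared sub-diagrams); the fluent names and reward values are made up.

```python
# Hypothetical decision-tree reward over Boolean fluents: the reward depends
# on only a few state variables, so the tree is exponentially smaller than a
# table over all states.

def reward(state):
    if state.get("package-delivered", False):
        return 10.0
    if state.get("truck-crashed", False):
        return -5.0
    return 0.0

print(reward({"package-delivered": True}))   # 10.0
print(reward({"at-depot": True}))            # 0.0
```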

Page 12: Factored  Approches  for MDP & RL
Page 13: Factored  Approches  for MDP & RL

SPUDD's use of ADDs

Page 14: Factored  Approches  for MDP & RL

Direct manipulation of ADDs in SPUDD

Page 15: Factored  Approches  for MDP & RL

Ideas for Efficient Algorithms
• Use heuristic search (and reachability information)
  – LAO*, RTDP
• Use execution and/or simulation
  – "Actual execution": reinforcement learning (the main motivation for RL is to "learn" the model)
  – "Simulation": simulate the given model to sample possible futures
    • Policy rollout, hindsight optimization, etc.
• Use "factored" representations
  – Factored representations for actions, reward functions, values and policies
  – Directly manipulating factored representations during the Bellman update

Page 16: Factored  Approches  for MDP & RL

Probabilistic Planning
-- The competition (IPPC)
-- The action language

PPDDL was based on PSO. A new standard, RDDL, is based on 2-TBN.

Page 17: Factored  Approches  for MDP & RL
Page 18: Factored  Approches  for MDP & RL

Not ergodic

Page 19: Factored  Approches  for MDP & RL
Page 20: Factored  Approches  for MDP & RL
Page 21: Factored  Approches  for MDP & RL

Reducing Heuristic Computation Cost by exploiting factored representations

• The heuristics computed for a state might give us an idea about the heuristic value of other "similar" states
  – Similarity is possible to determine in terms of the state structure
• Exploit the overlapping structure of heuristics for different states
  – E.g. the SAG idea for McLUG
  – E.g. the triangle tables idea for plans (c.f. Kolobov)

Page 22: Factored  Approches  for MDP & RL

A Plan is a Terrible Thing to Waste
• Suppose we have a plan
  – s0—a0—s1—a1—s2—a2—s3 … an—sG
  – We realize that this tells us not just the estimated value of s0, but also of s1, s2, …, sn
  – So we don't need to compute the heuristic for them again
• Is that all?
  – If we have states and actions in factored representation, then we can explain exactly what aspects of si are relevant for the plan's success.
  – The "explanation" is a proof of correctness of the plan
    » Can be based on regression (if the plan is a sequence) or a causal proof (if the plan is partially ordered).
  – The explanation will typically be just a subset of the literals making up the state
    » That means the plan suffix from si may actually be relevant in many more states, namely those consistent with that explanation

Page 23: Factored  Approches  for MDP & RL

Triangle Table Memoization
• Use triangle tables / memoization

[Figure: a blocks-world problem over blocks A, B, C]

If the above problem is solved, then we don't need to call FF again for the one below:

[Figure: a smaller blocks-world problem over blocks A, B]

Page 24: Factored  Approches  for MDP & RL

Explanation-based Generalization (of Successes and Failures)

• Suppose we have a plan P that solves a problem [S, G].
• We can first find out what aspects of S this plan actually depends on
  – Explain (prove) the correctness of the plan, and see which parts of S actually contribute to this proof
  – Now you can memoize this plan for just that subset of S

Page 25: Factored  Approches  for MDP & RL

Relaxations for Stochastic Planning
• Determinizations can also be used as a basis for heuristics to initialize the V for value iteration [mGPT; GOTH etc.]
• Heuristics come from relaxation
• We can relax along two separate dimensions:
  – Relax –ve interactions
    • Consider +ve interactions alone using relaxed planning graphs
  – Relax uncertainty
    • Consider determinizations
  – Or a combination of both!

Page 26: Factored  Approches  for MDP & RL

-- Factored TD and Q-learning
-- Policy search (has to be factored..)

Page 27: Factored  Approches  for MDP & RL

Large State Spaces

• When a problem has a large state space we can no longer represent the V or Q functions as explicit tables
• Even if we had enough memory
  – Never enough training data!
  – Learning takes too long
• What to do??

[Slides from Alan Fern]

Page 28: Factored  Approches  for MDP & RL

Function Approximation

• Never enough training data!
  – Must generalize what is learned from one situation to other "similar" new situations
• Idea:
  – Instead of using a large table to represent V or Q, use a parameterized function
    • The number of parameters should be small compared to the number of states (generally exponentially fewer parameters)
  – Learn the parameters from experience
  – When we update the parameters based on observations in one state, our V or Q estimate will also change for other similar states
    • I.e. the parameterization facilitates generalization of experience

Page 29: Factored  Approches  for MDP & RL

Linear Function Approximation
• Define a set of state features f1(s), …, fn(s)
  – The features are used as our representation of states
  – States with similar feature values will be considered to be similar
• A common approximation is to represent V(s) as a weighted sum of the features (i.e. a linear approximation):

  $\hat{V}_\theta(s) = \theta_0 + \theta_1 f_1(s) + \theta_2 f_2(s) + \dots + \theta_n f_n(s)$

• The approximation accuracy is fundamentally limited by the information provided by the features
• Can we always define features that allow for a perfect linear approximation?
  – Yes. Assign each state an indicator feature. (I.e. the i'th feature is 1 iff the i'th state is present, and $\theta_i$ represents the value of the i'th state.)
  – Of course this requires far too many features and gives no generalization.
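A minimal sketch of this weighted-sum representation in Python (the feature map is the x, y encoding used in the grid example on the next slide; the function names are illustrative):

```python
# Linear value-function approximation: V_hat(s) = theta_0 + sum_i theta_i * f_i(s).

def features(s):
    """Map a state to its feature vector f_1(s), ..., f_n(s)."""
    x, y = s
    return [x, y]

def v_hat(theta, s):
    """theta[0] is the bias term; theta[1:] weight the features."""
    f = features(s)
    return theta[0] + sum(t * fi for t, fi in zip(theta[1:], f))

theta = [10.0, -1.0, -1.0]        # the weights from the grid example below
print(v_hat(theta, (2, 3)))       # 10 - 2 - 3 = 5.0
```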

Page 30: Factored  Approches  for MDP & RL

Example
• Consider a grid problem with no obstacles and deterministic actions U/D/L/R (49 states)
• Features for state s = (x, y): f1(s) = x, f2(s) = y (just 2 features)
• $\hat{V}_\theta(s) = \theta_0 + \theta_1 x + \theta_2 y$
• Is there a good linear approximation?
  – Yes.
  – $\theta_0 = 10$, $\theta_1 = -1$, $\theta_2 = -1$
  – (note: the upper right corner is the origin)
• $\hat{V}_\theta(s) = 10 - x - y$ subtracts the Manhattan distance from the goal reward

[Figure: 7x7 grid with reward 10 at the goal corner; x and y coordinates range from 0 to 6]

Page 31: Factored  Approches  for MDP & RL

But What If We Change Reward …

• $\hat{V}_\theta(s) = \theta_0 + \theta_1 x + \theta_2 y$
• Is there a good linear approximation?
  – No.

[Figure: the same grid with the +10 reward moved to a different cell]

Page 32: Factored  Approches  for MDP & RL

But What If We Change Reward …

• $\hat{V}_\theta(s) = \theta_0 + \theta_1 x + \theta_2 y$
• Is there a good linear approximation?
  – No.

[Figure: the same grid with the +10 reward moved to yet another cell]

Page 33: Factored  Approches  for MDP & RL

But What If…

• $\hat{V}_\theta(s) = \theta_0 + \theta_1 x + \theta_2 y + \theta_3 z$
• Include a new feature z: z = |3-x| + |3-y|
  – z is the distance to the goal location
• Does this allow a good linear approximation?
  – Yes: $\theta_0 = 10$, $\theta_1 = \theta_2 = 0$, $\theta_3 = -1$

[Figure: grid with the +10 reward at cell (3,3)]

Feature Engineering….

Page 34: Factored  Approches  for MDP & RL

Linear Function Approximation

• Define a set of features f1(s), …, fn(s)
  – The features are used as our representation of states
  – States with similar feature values will be treated similarly
  – More complex functions require more complex features
• Our goal is to learn good parameter values (i.e. feature weights) that approximate the value function well
  – How can we do this?
  – Use TD-based RL and somehow update the parameters based on each experience.

  $\hat{V}_\theta(s) = \theta_0 + \theta_1 f_1(s) + \theta_2 f_2(s) + \dots + \theta_n f_n(s)$

Page 35: Factored  Approches  for MDP & RL

TD-based RL for Linear Approximators

1. Start with initial parameter values
2. Take action according to an explore/exploit policy (should converge to a greedy policy, i.e. GLIE)
3. Update the estimated model
4. Perform the TD update for each parameter:
   $\theta_i \leftarrow ?$
5. Goto 2

What is a "TD update" for a parameter?

Page 36: Factored  Approches  for MDP & RL

Aside: Gradient Descent
• Given a function f(θ1, …, θn) of n real values θ = (θ1, …, θn), suppose we want to minimize f with respect to θ
• A common approach to doing this is gradient descent
• The gradient of f at point θ, denoted $\nabla_\theta f(\theta)$, is an n-dimensional vector that points in the direction where f increases most steeply at point θ
• Vector calculus tells us that $\nabla_\theta f(\theta)$ is just a vector of partial derivatives:

  $\nabla_\theta f(\theta) = \left[ \frac{\partial f(\theta)}{\partial \theta_1}, \dots, \frac{\partial f(\theta)}{\partial \theta_n} \right]$

  where

  $\frac{\partial f(\theta)}{\partial \theta_i} = \lim_{\epsilon \to 0} \frac{f(\theta_1, \dots, \theta_{i-1}, \theta_i + \epsilon, \theta_{i+1}, \dots, \theta_n) - f(\theta)}{\epsilon}$

• We can decrease f by moving in the negative gradient direction

(This will be used again with graphical-model learning.)
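A small sketch of plain gradient descent on a toy quadratic, using finite-difference partial derivatives in place of a closed-form gradient; the objective, step size, and iteration count are assumptions for illustration.

```python
# Gradient descent sketch: move theta in the negative gradient direction.
# Partial derivatives are approximated by finite differences.

def grad(f, theta, eps=1e-5):
    g = []
    for i in range(len(theta)):
        bumped = list(theta)
        bumped[i] += eps
        g.append((f(bumped) - f(theta)) / eps)
    return g

def f(theta):                      # toy objective with minimum at (1, -2)
    return (theta[0] - 1) ** 2 + (theta[1] + 2) ** 2

theta, alpha = [0.0, 0.0], 0.1
for _ in range(200):
    theta = [t - alpha * gi for t, gi in zip(theta, grad(f, theta))]
print(theta)                       # close to [1.0, -2.0]
```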

Page 37: Factored  Approches  for MDP & RL

Aside: Gradient Descent for Squared Error
• Suppose that we have a sequence of states and target values for each state, $\langle s_1, v(s_1)\rangle, \langle s_2, v(s_2)\rangle, \dots$
  – E.g. produced by the TD-based RL loop
• Our goal is to minimize the sum of squared errors between our estimated function and each target value:

  $E_j(\theta) = \frac{1}{2}\left( \hat{V}_\theta(s_j) - v(s_j) \right)^2$

  (the squared error of example j, where $\hat{V}_\theta(s_j)$ is our estimated value for the j'th state and $v(s_j)$ is the target value for the j'th state)

• After seeing the j'th state, the gradient descent rule tells us that we can decrease the error by updating the parameters by:

  $\theta_i \leftarrow \theta_i - \alpha \frac{\partial E_j(\theta)}{\partial \theta_i} = \theta_i - \alpha \frac{\partial E_j(\theta)}{\partial \hat{V}_\theta(s_j)} \frac{\partial \hat{V}_\theta(s_j)}{\partial \theta_i}$

  where α is the learning rate.

Page 38: Factored  Approches  for MDP & RL

Aside: continued

• Since $\frac{\partial E_j(\theta)}{\partial \hat{V}_\theta(s_j)} = \hat{V}_\theta(s_j) - v(s_j)$, the update becomes

  $\theta_i \leftarrow \theta_i + \alpha \left( v(s_j) - \hat{V}_\theta(s_j) \right) \frac{\partial \hat{V}_\theta(s_j)}{\partial \theta_i}$

  (the last factor depends on the form of the approximator)

• For a linear approximation function

  $\hat{V}_\theta(s) = \theta_0 + \theta_1 f_1(s) + \theta_2 f_2(s) + \dots + \theta_n f_n(s)$

  we have $\frac{\partial \hat{V}_\theta(s_j)}{\partial \theta_i} = f_i(s_j)$

• Thus the update becomes:

  $\theta_i \leftarrow \theta_i + \alpha \left( v(s_j) - \hat{V}_\theta(s_j) \right) f_i(s_j)$

• For linear functions this update is guaranteed to converge to the best approximation for a suitable learning rate schedule

Page 39: Factored  Approches  for MDP & RL

TD-based RL for Linear Approximators

1. Start with initial parameter values
2. Take action according to an explore/exploit policy (should converge to a greedy policy, i.e. GLIE), transitioning from s to s'
3. Update the estimated model
4. Perform the TD update for each parameter:

   $\theta_i \leftarrow \theta_i + \alpha \left( v(s) - \hat{V}_\theta(s) \right) f_i(s)$

5. Goto 2

What should we use for the "target value" v(s)?
• Use the TD prediction based on the next state s':

  $v(s) = R(s) + \beta \hat{V}_\theta(s')$

  (this is the same as the previous TD method, only with approximation)

Note that we are generalizing w.r.t. possibly faulty data (the neighbor's value may not be correct yet).
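A minimal sketch of this loop on the 7x7 grid example from the earlier slides, with an ε-greedy rule over a known deterministic model standing in for a GLIE explore/exploit policy; the constants, episode cap, and reward placement are assumptions for illustration.

```python
import random

# Sketch of TD learning with a linear approximator on the 7x7 grid example.
# States are (x, y); moving onto the goal (0, 0) yields reward 10, else 0.

ACTIONS = {"U": (0, -1), "D": (0, 1), "L": (-1, 0), "R": (1, 0)}
GOAL, ALPHA, BETA, EPS = (0, 0), 0.05, 0.95, 0.2

def step(s, a):
    dx, dy = ACTIONS[a]
    s2 = (min(6, max(0, s[0] + dx)), min(6, max(0, s[1] + dy)))
    return s2, (10.0 if s2 == GOAL else 0.0)

def feats(s):                     # f_0 = 1 carries the bias weight theta_0
    return [1.0, float(s[0]), float(s[1])]

def v_hat(theta, s):
    return sum(t * f for t, f in zip(theta, feats(s)))

theta = [0.0, 0.0, 0.0]
for episode in range(300):
    s = (random.randint(0, 6), random.randint(0, 6))
    for _ in range(200):                              # cap episode length
        if s == GOAL:
            break
        if random.random() < EPS:                     # epsilon-greedy over the model
            a = random.choice(list(ACTIONS))
        else:
            a = max(ACTIONS, key=lambda a: step(s, a)[1] + BETA * v_hat(theta, step(s, a)[0]))
        s2, r = step(s, a)
        delta = r + BETA * v_hat(theta, s2) - v_hat(theta, s)   # TD error
        theta = [t + ALPHA * delta * f for t, f in zip(theta, feats(s))]
        s = s2
print(theta)
```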

Page 40: Factored  Approches  for MDP & RL

TD-based RL for Linear Approximators

1. Start with initial parameter values
2. Take action according to an explore/exploit policy (should converge to a greedy policy, i.e. GLIE)
3. Update the estimated model
4. Perform the TD update for each parameter:

   $\theta_i \leftarrow \theta_i + \alpha \left( R(s) + \beta \hat{V}_\theta(s') - \hat{V}_\theta(s) \right) f_i(s)$

5. Goto 2

• Step 2 requires a model to select the greedy action
• For applications such as Backgammon it is easy to get a simulation-based model
• For others it is difficult to get a good model
• But we can do the same thing with model-free Q-learning

Page 41: Factored  Approches  for MDP & RL

Q-learning with Linear Approximators

Features are now a function of states and actions:

  $\hat{Q}_\theta(s,a) = \theta_0 + \theta_1 f_1(s,a) + \theta_2 f_2(s,a) + \dots + \theta_n f_n(s,a)$

1. Start with initial parameter values
2. Take action a according to an explore/exploit policy (should converge to a greedy policy, i.e. GLIE), transitioning from s to s'
3. Perform the TD update for each parameter:

   $\theta_i \leftarrow \theta_i + \alpha \left( R(s) + \beta \max_{a'} \hat{Q}_\theta(s',a') - \hat{Q}_\theta(s,a) \right) f_i(s,a)$

4. Goto 2

• For both Q and V, these algorithms converge to the closest linear approximation to the optimal Q or V.
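A sketch of the model-free update just described, with a made-up state-action feature encoding; no transition model is consulted, only the observed (s, a, r, s') sample.

```python
# Sketch of Q-learning with a linear approximator. Features and constants
# are illustrative assumptions.

ACTIONS = ["U", "D", "L", "R"]
ALPHA, BETA = 0.05, 0.95

def feats(s, a):
    """Hypothetical state-action features: (x, y) plus an action indicator."""
    x, y = s
    return [1.0, x, y] + [1.0 if a == b else 0.0 for b in ACTIONS]

def q_hat(theta, s, a):
    return sum(t * f for t, f in zip(theta, feats(s, a)))

def greedy(theta, s):
    return max(ACTIONS, key=lambda a: q_hat(theta, s, a))

def q_update(theta, s, a, r, s2):
    """theta_i += alpha * (R(s) + beta * max_a' Q(s',a') - Q(s,a)) * f_i(s,a)."""
    target = r + BETA * max(q_hat(theta, s2, a2) for a2 in ACTIONS)
    delta = target - q_hat(theta, s, a)
    return [t + ALPHA * delta * f for t, f in zip(theta, feats(s, a))]

theta = [0.0] * len(feats((0, 0), "U"))
theta = q_update(theta, (2, 3), "L", 0.0, (1, 3))   # one observed transition
print(greedy(theta, (2, 3)))
```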

Page 42: Factored  Approches  for MDP & RL
Page 43: Factored  Approches  for MDP & RL
Page 44: Factored  Approches  for MDP & RL
Page 45: Factored  Approches  for MDP & RL
Page 46: Factored  Approches  for MDP & RL

Policy Gradient Ascent
• Let ρ(θ) be the expected value of policy $\pi_\theta$.
  – ρ(θ) is just the expected discounted total reward for a trajectory of $\pi_\theta$.
  – For simplicity, assume each trajectory starts at a single initial state.
• Our objective is to find a θ that maximizes ρ(θ)
• Policy gradient ascent tells us to iteratively update the parameters via:

  $\theta \leftarrow \theta + \alpha \nabla_\theta \rho(\theta)$

• Problem: ρ(θ) is generally very complex and it is rare that we can compute a closed form for the gradient of ρ(θ).
• We will instead estimate the gradient based on experience

Page 47: Factored  Approches  for MDP & RL

Gradient Estimation
• Concern: computing or estimating the gradient of discontinuous functions can be problematic.
• For our example parametric policy

  $\pi_\theta(s) = \arg\max_a \hat{Q}_\theta(s,a)$

  is ρ(θ) continuous?
• No.
  – There are values of θ where arbitrarily small changes cause the policy to change.
  – Since different policies can have different values, this means that changing θ can cause a discontinuous jump in ρ(θ).

Page 48: Factored  Approches  for MDP & RL

Example: Discontinuous ρ(θ)

• Consider a problem with initial state s and two actions a1 and a2
  – a1 leads to a very large terminal reward R1
  – a2 leads to a very small terminal reward R2
• With the greedy policy $\pi_\theta(s) = \arg\max_a \hat{Q}_\theta(s,a)$ over a linear $\hat{Q}_\theta$, and fixing $\theta_2$ to a constant, we can plot the ranking assigned to each action by $\hat{Q}_\theta$ and the corresponding value ρ(θ) as functions of $\theta_1$

[Figure: $\hat{Q}_\theta(s,a_1)$ and $\hat{Q}_\theta(s,a_2)$ plotted against $\theta_1$, and $\rho(\theta)$ plotted against $\theta_1$; ρ(θ) jumps between R2 and R1, i.e. there is a discontinuity in ρ(θ) where the ordering of a1 and a2 changes]

Page 49: Factored  Approches  for MDP & RL

Probabilistic Policies
• We would like to avoid policies that change drastically with small parameter changes, leading to discontinuities
• A probabilistic policy π takes a state as input and returns a distribution over actions
  – Given a state s, π(s,a) returns the probability that π selects action a in s
• Note that ρ(θ) is still well defined for probabilistic policies
  – Now the uncertainty over trajectories comes from both the environment and the policy
  – Importantly, if $\pi_\theta(s,a)$ is continuous relative to changing θ, then ρ(θ) is also continuous relative to changing θ
• A common form for probabilistic policies is the softmax function, or Boltzmann exploration function:

  $\pi_\theta(s,a) = \Pr(a \mid s) = \frac{\exp\left(\hat{Q}_\theta(s,a)\right)}{\sum_{a' \in A} \exp\left(\hat{Q}_\theta(s,a')\right)}$

(Aka a mixed policy; not needed for optimality…)
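A small sketch of such a softmax policy built on a linear $\hat{Q}_\theta$; the action set and feature encoding are illustrative assumptions.

```python
import math
import random

# Softmax / Boltzmann policy: pi(s, a) is proportional to exp(Q_hat(s, a)).

ACTIONS = ["a1", "a2", "a3"]

def q_hat(theta, s, a):
    # hypothetical state-action features: indicator of the action, scaled by s
    return sum(t * f for t, f in zip(theta, [s if a == b else 0.0 for b in ACTIONS]))

def policy(theta, s):
    """Return {action: probability} under the Boltzmann distribution."""
    prefs = [math.exp(q_hat(theta, s, a)) for a in ACTIONS]
    z = sum(prefs)
    return {a: p / z for a, p in zip(ACTIONS, prefs)}

def sample_action(theta, s, rng=random):
    probs = policy(theta, s)
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

theta = [0.5, 0.1, -0.3]
print(policy(theta, 2.0))        # probabilities vary smoothly with theta
print(sample_action(theta, 2.0))
```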

Page 50: Factored  Approches  for MDP & RL

Empirical Gradient Estimation
• Our first approach to estimating $\nabla_\theta \rho(\theta)$ is to simply compute empirical gradient estimates
• Recall that θ = (θ1, …, θn) and

  $\nabla_\theta \rho(\theta) = \left[ \frac{\partial \rho(\theta)}{\partial \theta_1}, \dots, \frac{\partial \rho(\theta)}{\partial \theta_n} \right]$, where $\frac{\partial \rho(\theta)}{\partial \theta_i} = \lim_{\epsilon \to 0} \frac{\rho(\theta_1, \dots, \theta_i + \epsilon, \dots, \theta_n) - \rho(\theta)}{\epsilon}$,

  so we can compute the gradient by empirically estimating each partial derivative
• For small ε we can estimate the partial derivatives by

  $\frac{\partial \rho(\theta)}{\partial \theta_i} \approx \frac{\rho(\theta_1, \dots, \theta_i + \epsilon, \dots, \theta_n) - \rho(\theta)}{\epsilon}$

• This requires estimating n+1 values:

  $\rho(\theta)$ and $\rho(\theta_1, \dots, \theta_i + \epsilon, \dots, \theta_n)$ for $i = 1, \dots, n$

Page 51: Factored  Approches  for MDP & RL

Empirical Gradient Estimation
• How do we estimate the quantities $\rho(\theta)$ and $\rho(\theta_1, \dots, \theta_i + \epsilon, \dots, \theta_n)$, $i = 1, \dots, n$?
• For each set of parameters, simply execute the policy for N trials/episodes and average the values achieved across the trials
  (doable without permanent damage if there is a simulator)
• This requires a total of N(n+1) episodes to get one gradient estimate
  – For stochastic environments and policies, the value of N must be relatively large to get good estimates of the true value
  – Often we want to use a relatively large number of parameters
  – Often it is expensive to run episodes of the policy
• So while this can work well in many situations, it is often not a practical approach computationally
• Better approaches try to use the fact that the stochastic policy is differentiable.
  – Can get the gradient by just running the current policy multiple times
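A sketch of this finite-difference scheme; here `estimate_value` is a stand-in for "execute the policy for N episodes in a simulator and average the returns" (a noisy toy function, since no real simulator is defined in the slides), so the N(n+1)-episode cost shows up directly in the loop structure.

```python
import random

# Empirical (finite-difference) policy-gradient estimate: perturb each
# parameter by eps and re-estimate the policy value by Monte Carlo.

def estimate_value(theta, n_episodes=50, rng=random):
    """Stand-in for running the policy n_episodes times and averaging returns."""
    true_value = -(theta[0] - 1.0) ** 2 - (theta[1] + 0.5) ** 2
    return sum(true_value + rng.gauss(0.0, 0.1) for _ in range(n_episodes)) / n_episodes

def empirical_gradient(theta, eps=0.1, n_episodes=50):
    """Needs N*(n+1) episodes: one batch at theta, one per perturbed parameter."""
    base = estimate_value(theta, n_episodes)
    grad = []
    for i in range(len(theta)):
        bumped = list(theta)
        bumped[i] += eps
        grad.append((estimate_value(bumped, n_episodes) - base) / eps)
    return grad

theta = [0.0, 0.0]
for _ in range(100):                                  # gradient *ascent* on rho
    theta = [t + 0.05 * g for t, g in zip(theta, empirical_gradient(theta))]
print(theta)                                          # near [1.0, -0.5]
```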

Page 52: Factored  Approches  for MDP & RL

Applications of Policy Gradient Search

• Policy gradient techniques have been used to create controllers for difficult helicopter maneuvers
• For example, inverted helicopter flight.
• A planner called FPG also "won" the 2006 International Planning Competition
  – If you don't count FF-Replan

Page 53: Factored  Approches  for MDP & RL

Policy Gradient Recap
• When policies have much simpler representations than the corresponding value functions, direct search in policy space can be a good idea
  – Allows us to design complex parametric controllers and optimize the details of the parameter settings
• For the baseline algorithm the gradient estimates are unbiased (i.e. they will converge to the right value) but have high variance
  – Can require a large N to get reliable estimates
  – OLPOMDP can trade off bias and variance via the discount parameter [Baxter & Bartlett, 2000]
• Can be prone to finding local maxima
  – Many ways of dealing with this, e.g. random restarts.

Page 54: Factored  Approches  for MDP & RL

Gradient Estimation: Single Step Problems
• For stochastic policies it is possible to estimate $\nabla_\theta \rho(\theta)$ directly from trajectories of just the current policy $\pi_\theta$
  – Idea: take advantage of the fact that we know the functional form of the policy
• First consider the simplified case where all trials have length 1
  – For simplicity, assume each trajectory starts at a single initial state and the reward only depends on the action choice
  – ρ(θ) is just the expected reward of the action selected by $\pi_\theta$:

  $\rho(\theta) = \sum_a \pi_\theta(s_0, a)\, R(a)$

  where $s_0$ is the initial state and R(a) is the reward of action a
• The gradient of this becomes

  $\nabla_\theta \rho(\theta) = \sum_a \nabla_\theta \pi_\theta(s_0, a)\, R(a)$

• How can we estimate this by just observing the execution of $\pi_\theta$?

Page 55: Factored  Approches  for MDP & RL

Gradient Estimation: Single Step Problems
• Rewriting:

  $\nabla_\theta \rho(\theta) = \sum_a \nabla_\theta \pi_\theta(s_0, a)\, R(a) = \sum_a \pi_\theta(s_0, a)\, \frac{\nabla_\theta \pi_\theta(s_0, a)}{\pi_\theta(s_0, a)}\, R(a) = \sum_a \pi_\theta(s_0, a)\, \nabla_\theta \log \pi_\theta(s_0, a)\, R(a)$

  where we can get a closed form for $g_\theta(s_0, a) = \nabla_\theta \log \pi_\theta(s_0, a)$
• The gradient is just the expected value of $g_\theta(s_0, a) R(a)$ over execution trials of $\pi_\theta$
  – Can estimate it by executing $\pi_\theta$ for N trials and averaging the samples:

  $\nabla_\theta \rho(\theta) \approx \frac{1}{N} \sum_{j=1}^{N} g_\theta(s_0, a_j)\, R(a_j)$

  where $a_j$ is the action selected by the policy on the j'th episode
  – Only requires executing $\pi_\theta$ for a number of trials that need not depend on the number of parameters

Page 56: Factored  Approches  for MDP & RL

Gradient Estimation: General Case
• So for the case of length-1 trajectories we got:

  $\nabla_\theta \rho(\theta) \approx \frac{1}{N} \sum_{j=1}^{N} g_\theta(s_0, a_j)\, R(a_j)$

• For the general case, where trajectories have length greater than one and the reward depends on the state, we can do some work and get:

  $\nabla_\theta \rho(\theta) \approx \frac{1}{N} \sum_{j=1}^{N} \sum_{t=1}^{T_j} g_\theta(s_{j,t}, a_{j,t})\, R_j(s_{j,t})$

  where N is the number of trajectories of the current policy, $T_j$ is the length of trajectory j, $s_{j,t}$ is the t'th state of the j'th episode, $a_{j,t}$ is the t'th action of episode j, and $R_j(s_{j,t})$ is the observed total reward in trajectory j from step t to the end
• The derivation of this is straightforward but messy.
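A sketch of this estimator over already-collected trajectories. The helper `grad_log_pi` stands in for the $g_\theta$ term (its closed form for the Boltzmann policy appears a couple of slides later); here it is a zero-returning placeholder with the right shape, so only the reward-to-go weighting is demonstrated.

```python
# Sketch of the general policy-gradient estimate:
#   grad rho(theta) ~= (1/N) * sum_j sum_t g_theta(s_jt, a_jt) * R_j(s_jt)
# where R_j(s_jt) is the total reward observed from step t to the end of
# trajectory j.

def grad_log_pi(theta, s, a):
    """Placeholder for g_theta(s, a) = grad of log pi_theta(s, a)."""
    return [0.0 for _ in theta]

def policy_gradient_estimate(theta, trajectories):
    """trajectories: list of episodes, each a list of (s, a, r) triples."""
    grad = [0.0] * len(theta)
    for traj in trajectories:
        rewards = [r for (_, _, r) in traj]
        for t, (s, a, _) in enumerate(traj):
            reward_to_go = sum(rewards[t:])          # R_j(s_jt)
            g = grad_log_pi(theta, s, a)
            grad = [gi + gti * reward_to_go for gi, gti in zip(grad, g)]
    return [gi / len(trajectories) for gi in grad]   # average over N episodes
```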

Page 57: Factored  Approches  for MDP & RL

How to interpret the gradient expression?

  $\nabla_\theta \rho(\theta) \approx \frac{1}{N} \sum_{j=1}^{N} \sum_{t=1}^{T_j} g_\theta(s_{j,t}, a_{j,t})\, R_j(s_{j,t}), \qquad g_\theta(s,a) = \nabla_\theta \log \pi_\theta(s,a)$

  Here $g_\theta(s_{j,t}, a_{j,t})$ is the direction to move the parameters in order to increase the probability that the policy selects $a_{j,t}$ in state $s_{j,t}$, and $R_j(s_{j,t})$ is the total reward observed after taking $a_{j,t}$ in state $s_{j,t}$.

• So the overall gradient is a reward-weighted combination of individual gradient directions
  – For a large positive $R_j(s_{j,t})$, it will increase the probability of $a_{j,t}$ in $s_{j,t}$
  – For a negative $R_j(s_{j,t})$, it will decrease the probability of $a_{j,t}$ in $s_{j,t}$
• Intuitively this increases the probability of taking actions that are typically followed by good reward sequences

Page 58: Factored  Approches  for MDP & RL

Basic Policy Gradient Algorithm
• Repeat until a stopping condition is met:
  1. Execute $\pi_\theta$ for N trajectories while storing the state, action, reward sequences
  2. Update the parameters using the estimated gradient:

     $\theta \leftarrow \theta + \alpha \frac{1}{N} \sum_{j=1}^{N} \sum_{t=1}^{T_j} g_\theta(s_{j,t}, a_{j,t})\, R_j(s_{j,t})$

• One disadvantage of this approach is the small number of updates per amount of experience
  – It also requires a notion of trajectory rather than an infinite sequence of experience
• Online policy gradient algorithms perform updates after each step in the environment (and often learn faster)

Page 59: Factored  Approches  for MDP & RL

Computing the Gradient of the Policy
• Both algorithms require computation of

  $g_\theta(s,a) = \nabla_\theta \log \pi_\theta(s,a)$

• For the Boltzmann distribution with a linear approximation we have:

  $\pi_\theta(s,a) = \frac{\exp\left(\hat{Q}_\theta(s,a)\right)}{\sum_{a' \in A} \exp\left(\hat{Q}_\theta(s,a')\right)}$

  where

  $\hat{Q}_\theta(s,a) = \theta_0 + \theta_1 f_1(s,a) + \theta_2 f_2(s,a) + \dots + \theta_n f_n(s,a)$

• Here the partial derivatives needed for $g_\theta(s,a)$ are:

  $\frac{\partial \log \pi_\theta(s,a)}{\partial \theta_i} = f_i(s,a) - \sum_{a'} \pi_\theta(s,a')\, f_i(s,a')$
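A sketch of this closed form: the i'th component of $g_\theta(s,a)$ is the feature value for the taken action minus its expectation under the current policy. The action set and feature encoding are illustrative assumptions.

```python
import math

# g_theta(s, a)_i = f_i(s, a) - sum_a' pi_theta(s, a') * f_i(s, a')
# for the Boltzmann policy over a linear Q_hat.

ACTIONS = ["a1", "a2"]

def feats(s, a):
    """Hypothetical features: a shared bias plus a per-action term."""
    return [1.0, s if a == "a1" else 0.0, s if a == "a2" else 0.0]

def q_hat(theta, s, a):
    return sum(t * f for t, f in zip(theta, feats(s, a)))

def pi(theta, s):
    e = {a: math.exp(q_hat(theta, s, a)) for a in ACTIONS}
    z = sum(e.values())
    return {a: v / z for a, v in e.items()}

def grad_log_pi(theta, s, a):
    probs = pi(theta, s)
    expected = [sum(probs[b] * feats(s, b)[i] for b in ACTIONS)
                for i in range(len(theta))]
    return [fi - ei for fi, ei in zip(feats(s, a), expected)]

# The bias component comes out as 0, since a shared feature cancels in the softmax.
print(grad_log_pi([0.2, 0.1, -0.1], 1.0, "a1"))
```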