
Chapter 3: The Reinforcement Learning Problem


Page 1: Chapter 3: The Reinforcement Learning Problem

R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction

Chapter 3: The Reinforcement Learning Problem

Objectives of this chapter:

describe the RL problem we will be studying for the remainder of the course;

present an idealized form of the RL problem for which we have precise theoretical results;

introduce key components of the mathematics: value functions and Bellman equations;

describe trade-offs between applicability and mathematical tractability.

Page 2: Chapter 3: The Reinforcement Learning Problem


The Agent-Environment Interface

Agent and environment interact at discrete time steps $t = 0, 1, 2, \ldots$

Agent observes state at step $t$: $s_t \in S$

produces action at step $t$: $a_t \in A(s_t)$

gets resulting reward $r_{t+1}$ and resulting next state $s_{t+1}$

$$\cdots\; s_t \;\xrightarrow{\,a_t\,}\; r_{t+1},\, s_{t+1} \;\xrightarrow{\,a_{t+1}\,}\; r_{t+2},\, s_{t+2} \;\xrightarrow{\,a_{t+2}\,}\; r_{t+3},\, s_{t+3} \;\xrightarrow{\,a_{t+3}\,}\; \cdots$$
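The interface is just a loop. Below is a minimal sketch of the interaction on a made-up two-state, two-action environment; the dynamics, rewards, and random action choice are illustrative stand-ins, not part of the book's examples:

% toy sketch of the agent-environment loop (made-up dynamics)
S_next = [2 1; 1 2];   % S_next(s,a): next state s_{t+1}
Rew    = [0 1; 1 0];   % Rew(s,a): reward r_{t+1}
s = 1;                 % initial state s_0
for t = 0:9
  a = randi(2);        % agent produces a_t (random policy)
  r = Rew(s,a);        % environment returns reward r_{t+1}
  s = S_next(s,a);     % ... and next state s_{t+1}
end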

Page 3: Chapter 3: The Reinforcement Learning Problem


The Agent Learns a Policy

Policy at step $t$, denoted $\pi_t$:

a mapping from states to action probabilities:

$$\pi_t(s, a) = \text{probability that } a_t = a \text{ when } s_t = s$$

Reinforcement learning methods specify how the agent changes its policy as a result of experience.

Roughly, the agent’s goal is to get as much reward as it can over the long run.
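Concretely, a tabular stochastic policy can be stored as a matrix of action probabilities and sampled at each step. A small sketch (the probabilities are made up):

% sample a_t ~ pi_t(s_t, .) from a tabular stochastic policy
Pi = [0.5 0.5; 0.9 0.1];               % Pi(s,a): made-up action probabilities
s = 2;                                 % current state s_t
a = find(rand < cumsum(Pi(s,:)), 1);   % inverse-CDF sampling of the action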

Page 4: Chapter 3: The Reinforcement Learning Problem


Getting the Degree of Abstraction Right

Time steps need not refer to fixed intervals of real time.

Actions can be low-level (e.g., voltages to motors), high-level (e.g., accept a job offer), "mental" (e.g., a shift in the focus of attention), etc.

States can be low-level "sensations," or they can be abstract, symbolic, based on memory, or subjective (e.g., the state of being "surprised" or "lost").

An RL agent is not like a whole animal or robot, which may consist of many RL agents as well as other components.

The environment is not necessarily unknown to the agent, only incompletely controllable.

Reward computation is in the agent’s environment because the agent cannot change it arbitrarily.

Page 5: Chapter 3: The Reinforcement Learning Problem


Goals and Rewards

Is a scalar reward signal an adequate notion of a goal?—maybe not, but it is surprisingly flexible.

A goal should specify what we want to achieve, not how we want to achieve it.

A goal must be outside the agent’s direct control—thus outside the agent.

The agent must be able to measure success: explicitly, and frequently during its lifespan.

Page 6: Chapter 3: The Reinforcement Learning Problem


Returns

Suppose the sequence of rewards after step t is :

$r_{t+1}, r_{t+2}, r_{t+3}, \ldots$ What do we want to maximize?

In general,

we want to maximize the expected return, $E\{R_t\}$, for each step $t$.

Episodic tasks: interaction breaks naturally into episodes, e.g., plays of a game, trips through a maze.

$$R_t = r_{t+1} + r_{t+2} + \cdots + r_T,$$

where T is a final time step at which a terminal state is reached, ending an episode.

Page 7: Chapter 3: The Reinforcement Learning Problem


Returns for Continuing Tasks

Continuing tasks: interaction does not have natural episodes.

Discounted return:

$$R_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1},$$

where $\gamma$, $0 \le \gamma \le 1$, is the discount rate.

shortsighted $0 \leftarrow \gamma \rightarrow 1$ farsighted
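For a finite reward sequence, the truncated discounted return is a single vectorized sum, and with $\gamma = 1$ it reduces to the episodic return. A sketch with made-up rewards:

% truncated discounted return R_t = sum_k gamma^k r_{t+k+1} (sketch)
r = [1 0 2 1];                           % made-up rewards r_{t+1}, r_{t+2}, ...
gamma = 0.9;
Rt = sum(gamma.^(0:numel(r)-1) .* r)     % = 1 + 0 + 1.62 + 0.729 = 3.349
% with gamma = 1 this is the episodic return r_{t+1} + ... + r_T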

Page 8: Chapter 3: The Reinforcement Learning Problem


An Example

Avoid failure: the pole falling beyond a critical angle, or the cart hitting the end of the track.

As an episodic task, where the episode ends upon failure:

reward = +1 for each step before failure
return = number of steps before failure

As a continuing task with discounted return:

reward = -1 upon failure; 0 otherwise
return = $-\gamma^k$, for $k$ steps before failure

In either case, return is maximized by avoiding failure for as long as possible.

Page 9: Chapter 3: The Reinforcement Learning Problem


Another Example

Get to the top of the hill as quickly as possible.

reward = -1 for each step where not at top of hill
return = -(number of steps) before reaching top of hill

Return is maximized by minimizing the number of steps taken to reach the top of the hill.

Page 10: Chapter 3: The Reinforcement Learning Problem


A Unified Notation

In episodic tasks, we number the time steps of each episode starting from zero.

We usually do not have to distinguish between episodes, so we write $s_t$ instead of $s_{t,j}$ for the state at step $t$ of episode $j$.

Think of each episode as ending in an absorbing state that always produces a reward of zero.

We can cover all cases by writing

$$R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1},$$

where $\gamma$ can be 1 only if a zero-reward absorbing state is always reached.

Page 11: Chapter 3: The Reinforcement Learning Problem


The Markov Property

By “the state” at step t, the book means whatever information is available to the agent at step t about its environment.

The state can include immediate “sensations,” highly processed sensations, and structures built up over time from sequences of sensations.

Ideally, a state should summarize past sensations so as to retain all “essential” information, i.e., it should have the Markov Property:

$$\Pr\{s_{t+1} = s', r_{t+1} = r \mid s_t, a_t, r_t, s_{t-1}, a_{t-1}, \ldots, r_1, s_0, a_0\} = \Pr\{s_{t+1} = s', r_{t+1} = r \mid s_t, a_t\}$$

for all $s'$, $r$, and histories $s_t, a_t, r_t, s_{t-1}, a_{t-1}, \ldots, r_1, s_0, a_0$.

Page 12: Chapter 3: The Reinforcement Learning Problem


Markov Decision Processes

If a reinforcement learning task has the Markov Property, it is basically a Markov Decision Process (MDP).

If state and action sets are finite, it is a finite MDP. To define a finite MDP, you need to give:

state and action sets

one-step "dynamics" defined by transition probabilities:

$$P_{ss'}^a = \Pr\{s_{t+1} = s' \mid s_t = s, a_t = a\} \quad \text{for all } s, s' \in S,\; a \in A(s)$$

and expected rewards:

$$R_{ss'}^a = E\{r_{t+1} \mid s_t = s, a_t = a, s_{t+1} = s'\} \quad \text{for all } s, s' \in S,\; a \in A(s)$$

Page 13: Chapter 3: The Reinforcement Learning Problem


Recycling Robot

An Example Finite MDP

At each step, robot has to decide whether it should (1) actively search for a can, (2) wait for someone to bring it a can, or (3) go to home base and recharge.

Searching is better but runs down the battery; if the robot runs out of power while searching, it has to be rescued (which is bad).

Decisions are made on the basis of the current energy level: high, low.

Reward = number of cans collected

Page 14: Chapter 3: The Reinforcement Learning Problem


Recycling Robot MDP

$$S = \{\text{high}, \text{low}\}$$
$$A(\text{high}) = \{\text{search}, \text{wait}\}$$
$$A(\text{low}) = \{\text{search}, \text{wait}, \text{recharge}\}$$

$R^{\text{search}}$ = expected no. of cans while searching

$R^{\text{wait}}$ = expected no. of cans while waiting

$$R^{\text{search}} > R^{\text{wait}}$$
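The finite MDP can be written down directly as transition and expected-reward tables. A sketch in the spirit of the book's example, where alpha and beta are the probabilities that searching leaves the battery level unchanged, running flat while searching incurs a rescue penalty of -3, and all numeric values are made-up assumptions:

% Recycling robot as a finite MDP (sketch; parameter values are assumptions)
alpha = 0.8; beta = 0.6;   % made-up search-success probabilities
Rs = 2; Rw = 1;            % made-up expected cans, with Rs > Rw
% states: 1 = high, 2 = low;  actions: 1 = search, 2 = wait, 3 = recharge
P = zeros(2,2,3); R = zeros(2,2,3);     % P(s,s',a) and R(s,s',a)
P(1,1,1) = alpha;  P(1,2,1) = 1-alpha;  R(1,:,1) = Rs;   % high, search
P(1,1,2) = 1;                           R(1,1,2) = Rw;   % high, wait
P(2,2,1) = beta;   P(2,1,1) = 1-beta;                    % low, search
R(2,2,1) = Rs;     R(2,1,1) = -3;                        % rescued: penalty
P(2,2,2) = 1;                           R(2,2,2) = Rw;   % low, wait
P(2,1,3) = 1;                           R(2,1,3) = 0;    % low, recharge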

Page 15: Chapter 3: The Reinforcement Learning Problem


Value Functions

The value of a state is the expected return starting from that state; it depends on the agent's policy.

State-value function for policy $\pi$:

$$V^\pi(s) = E_\pi\{R_t \mid s_t = s\} = E_\pi\Big\{\sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \,\Big|\, s_t = s\Big\}$$

The value of taking an action in a state under policy $\pi$ is the expected return starting from that state, taking that action, and thereafter following $\pi$.

Action-value function for policy $\pi$:

$$Q^\pi(s,a) = E_\pi\{R_t \mid s_t = s, a_t = a\} = E_\pi\Big\{\sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \,\Big|\, s_t = s, a_t = a\Big\}$$
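By definition, $V^\pi(s)$ can be estimated by averaging sampled returns from $s$. A Monte Carlo sketch on a made-up two-state environment with a random policy (everything here is an illustrative stand-in):

% Monte Carlo estimate of V^pi(s): average of sampled (truncated) returns
S_next = [2 1; 1 2]; Rew = [0 1; 1 0]; gamma = 0.9;   % made-up dynamics
nEp = 10000; H = 50;               % number of episodes, truncation horizon
G = zeros(1,nEp);
for e = 1:nEp
  s = 1; g = 0;
  for k = 0:H-1
    a = randi(2);                  % random policy: pi(s,a) = 0.5
    g = g + gamma^k * Rew(s,a);    % accumulate discounted reward
    s = S_next(s,a);
  end
  G(e) = g;
end
V1 = mean(G)                       % estimate of V^pi for state 1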

Page 16: Chapter 3: The Reinforcement Learning Problem


Bellman Equation for a Policy

The basic idea:

$$R_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \gamma^3 r_{t+4} + \cdots = r_{t+1} + \gamma \left( r_{t+2} + \gamma r_{t+3} + \gamma^2 r_{t+4} + \cdots \right) = r_{t+1} + \gamma R_{t+1}$$

So:

$$V^\pi(s) = E_\pi\{R_t \mid s_t = s\} = E_\pi\{r_{t+1} + \gamma V^\pi(s_{t+1}) \mid s_t = s\}$$

Or, without the expectation operator (the Bellman equation for $V^\pi$):

$$V^\pi(s) = \sum_a \pi(s,a) \sum_{s'} P_{ss'}^a \left[ R_{ss'}^a + \gamma V^\pi(s') \right]$$

Page 17: Chapter 3: The Reinforcement Learning Problem


More on the Bellman Equation

$$V^\pi(s) = \sum_a \pi(s,a) \sum_{s'} P_{ss'}^a \left[ R_{ss'}^a + \gamma V^\pi(s') \right]$$

This is a set of equations (in fact, linear), one for each state. The value function for $\pi$ is its unique solution, as the worked example below illustrates.
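For instance, take a made-up two-state chain under a fixed policy: from $s_1$ the agent moves to $s_2$ with reward 2, from $s_2$ back to $s_1$ with reward 0, and $\gamma = 0.9$. The Bellman equations are

$$V^\pi(s_1) = 2 + 0.9\, V^\pi(s_2), \qquad V^\pi(s_2) = 0 + 0.9\, V^\pi(s_1),$$

a linear system whose unique solution is $V^\pi(s_1) = 2/(1 - 0.81) \approx 10.53$ and $V^\pi(s_2) = 0.9\, V^\pi(s_1) \approx 9.47$.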

Backup diagrams for $V^\pi$ and $Q^\pi$: [figures omitted]

Page 18: Chapter 3: The Reinforcement Learning Problem


Grid world

Actions: north, south, east, west; deterministic.

If an action would take the agent off the grid: no move, but reward = -1.

Other actions produce reward = 0, except actions that move the agent out of the special states A (+10) and B (+5) to their target states as shown.

State-value function for the equiprobable random policy; $\gamma = 0.9$.

Note that learning the policy that maximizes the reward is an objective of RL.

Page 19: Chapter 3: The Reinforcement Learning Problem


Grid world

% Example 3.8
% the 5x5 grid is scanned column-wise (state numbering below)
clear all;
R = zeros(4,25);          % R(a,s): reward for action a in state s

% action 1 - going north: next states
S_prime(1,1) = 1;
S_prime(1,2:25) = (1:24);
S_prime(1,11) = 11;
S_prime(1,21) = 21;
% reward in north direction (top row bounces back)
for i = 0:4
  R(1,i*5+1) = -1;
end;

% action 2 - going east: next states
S_prime(2,1:20) = (6:25);
S_prime(2,21:25) = (21:25);
% reward in east direction (right column bounces back)
R(2,21:25) = -1;

% action 3 - going south: next states
S_prime(3,1:24) = (2:25);
S_prime(3,5:5:25) = (5:5:25);
% reward in south direction (bottom row bounces back)
R(3,5:5:25) = -1;

% action 4 - going west: next states
S_prime(4,6:25) = (1:20);
S_prime(4,1:5) = (1:5);
% reward in west direction (left column bounces back)
R(4,1:5) = -1;

% special states: A (state 6) -> A' (state 10), reward +10;
%                 B (state 16) -> B' (state 18), reward +5
for i = 1:4
  S_prime(i,6) = 10;
  S_prime(i,16) = 18;
  R(i,6) = 10;
  R(i,16) = 5;
end;

State numbering of the 5x5 grid (column-wise):

1  6 11 16 21
2  7 12 17 22
3  8 13 18 23
4  9 14 19 24
5 10 15 20 25

Page 20: Chapter 3: The Reinforcement Learning Problem


Grid world

% initialize variables

Veps=0.01;

delV=1;

gamma=0.9;

Vpi=zeros(1,25);

% iterate until convergence

while delV>Veps

% Bellman equation

Vpi_new=0.25*sum(R+gamma*Vpi(S_prime));

delV=norm(Vpi_new-Vpi);

Vpi=Vpi_new;

end;

Vpi=reshape(Vpi,5,5)
% Vpi =
%  3.3253  8.8065  4.4398  5.3320  1.4968
%  1.5362  3.0064  2.2612  1.9157  0.5529
%  0.0652  0.7517  0.6852  0.3681 -0.3946
% -0.9582 -0.4202 -0.3407 -0.5720 -1.1703
% -1.8410 -1.3287 -1.2127 -1.4068 -1.9593

$$V^\pi(s) = \sum_a \pi(s,a) \sum_{s'} P_{ss'}^a \left[ R_{ss'}^a + \gamma V^\pi(s') \right]$$

In the code, the factor 0.25 is $\pi(s,a)$ for the equiprobable policy, and R + gamma*Vpi(S_prime) combines $R_{ss'}^a + \gamma V^\pi(s')$; since the dynamics are deterministic, the sum over $P_{ss'}^a$ reduces to a lookup of the next state.

Page 21: Chapter 3: The Reinforcement Learning Problem


More on the Bellman Equation

$$V^\pi(s) = \sum_a \pi(s,a) \sum_{s'} P_{ss'}^a \left[ R_{ss'}^a + \gamma V^\pi(s') \right]$$

Since this is a set of linear equations, it can be solved without iteration (and exactly) by direct matrix inversion.

Vpi=reshape(Vpi,5,5)
% Vpi =
%  3.3090  8.7893  4.4276  5.3224  1.4922
%  1.5216  2.9923  2.2501  1.9076  0.5474
%  0.0508  0.7382  0.6731  0.3582 -0.4031
% -0.9736 -0.4355 -0.3549 -0.5856 -1.1831
% -1.8577 -1.3452 -1.2293 -1.4229 -1.9752

% direct solution of the linear system (I - 0.25*gamma*sum_a E_a) V = 0.25*sum_a R_a
T = eye(25);
for i = 1:25
  for j = 1:4
    T(i,S_prime(j,i)) = T(i,S_prime(j,i)) - 0.25*gamma;
  end;
end;
Vpi = inv(T)*0.25*sum(R)';   % T\(0.25*sum(R)') is the more idiomatic solve

The identity matrix collects the $V^\pi(s)$ terms on the left-hand side, the $-0.25\gamma$ entries capture the effect of $V^\pi(s')$, and the expected rewards appear on the right-hand side.

Page 22: Chapter 3: The Reinforcement Learning Problem


Golf

State is ball location.

Reward of -1 for each stroke until the ball is in the hole.

Value of a state?

Actions:
putt (use putter)
driver (use driver)

putt succeeds anywhere on the green

Page 23: Chapter 3: The Reinforcement Learning Problem


Optimal Value Functions

For finite MDPs, policies can be partially ordered:

$$\pi \ge \pi' \quad \text{if and only if} \quad V^\pi(s) \ge V^{\pi'}(s) \text{ for all } s \in S$$

There is always at least one policy (and possibly many) that is better than or equal to all the others. This is an optimal policy. We denote them all $\pi^*$.

Optimal policies share the same optimal state-value function:

$$V^*(s) = \max_\pi V^\pi(s) \quad \text{for all } s \in S$$

Optimal policies also share the same optimal action-value function:

$$Q^*(s,a) = \max_\pi Q^\pi(s,a) \quad \text{for all } s \in S \text{ and } a \in A(s)$$

This is the expected return for taking action $a$ in state $s$ and thereafter following an optimal policy.

Page 24: Chapter 3: The Reinforcement Learning Problem


Optimal Value Function for Golf

We can hit the ball farther with the driver than with the putter, but with less accuracy.

$Q^*(s, \text{driver})$ gives the value of using the driver first, then using whichever actions are best.

Page 25: Chapter 3: The Reinforcement Learning Problem


Bellman Optimality Equation for V*

The value of a state under an optimal policy must equal the expected return for the best action from that state:

$$V^*(s) = \max_{a \in A(s)} Q^{\pi^*}(s,a) = \max_{a \in A(s)} E\{r_{t+1} + \gamma V^*(s_{t+1}) \mid s_t = s, a_t = a\} = \max_{a \in A(s)} \sum_{s'} P_{ss'}^a \left[ R_{ss'}^a + \gamma V^*(s') \right]$$

The relevant backup diagram: [figure omitted]

$V^*$ is the unique solution of this system of nonlinear equations.

Page 26: Chapter 3: The Reinforcement Learning Problem


Bellman Optimality Equation for Q*

$$Q^*(s,a) = E\{r_{t+1} + \gamma \max_{a'} Q^*(s_{t+1}, a') \mid s_t = s, a_t = a\} = \sum_{s'} P_{ss'}^a \left[ R_{ss'}^a + \gamma \max_{a'} Q^*(s', a') \right]$$

The relevant backup diagram: [figure omitted]

$Q^*$ is the unique solution of this system of nonlinear equations.
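The same backup can be iterated numerically. A sketch of Q-value iteration for the grid world, assuming the S_prime and R arrays (and gamma, Veps) from the earlier script are still in the workspace:

% Q-value iteration on the grid world (sketch; reuses S_prime, R, gamma, Veps)
Q = zeros(4,25);
delQ = 1;
while delQ > Veps
  Vmax = max(Q);                     % 1x25: max over a' of Q(s',a')
  Q_new = R + gamma*Vmax(S_prime);   % Bellman optimality backup for Q
  delQ = norm(Q_new(:) - Q(:));
  Q = Q_new;
end;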

Page 27: Chapter 3: The Reinforcement Learning Problem


Why Optimal State-Value Functions are Useful

Any policy that is greedy with respect to $V^*$ is an optimal policy.

Therefore, given $V^*$, one-step-ahead search produces the long-term optimal actions.

E.g., back to the grid world:

Page 28: Chapter 3: The Reinforcement Learning Problem


What About Optimal Action-Value Functions?

Given $Q^*$, the agent does not even have to do a one-step-ahead search:

$$\pi^*(s) = \arg\max_{a \in A(s)} Q^*(s,a)$$

% find the optimal state value by value iteration
Vpi = zeros(1,25);   % re-initialize (Vpi was left as a 5x5 matrix above)
delV = 1;            % re-initialize the convergence measure

while delV>Veps

% Bellman optimality backup: max over actions

[Vpi_new,I]=max(R+gamma*Vpi(S_prime));

delV=norm(Vpi_new-Vpi);

Vpi=Vpi_new;

end;

% determine the optimum policy: all actions tied with the best one
Optimum_policy = zeros(4,25);   % preallocate as double, not logical

Reward=R+gamma*Vpi(S_prime);

for i=1:25

Optimum_policy(:,i)=(Reward(:,i)==Reward(I(i),i));

end;

% normalize each column into a probability distribution
% (psum avoids shadowing MATLAB's built-in norm function)
psum=sum(Optimum_policy);

for i=1:25

Optimum_policy(:,i)=Optimum_policy(:,i)./psum(i);

end

Page 29: Chapter 3: The Reinforcement Learning Problem


What About Optimal Action-Value Functions?

Vpi=reshape(Vpi,5,5)

% Vpi =

% 21.9694 24.4104 21.9694 19.4104 17.4694

% 19.7724 21.9694 19.7724 17.7952 16.0115

% 17.7952 19.7724 17.7952 16.0115 14.4104

% 16.0115 17.7952 16.0115 14.4104 12.9694

% 14.4104 16.0115 14.4104 12.9694 11.6724

Optimum_policy

% First 14 columns of Optimum_policy =

% Columns 1 through 7

% 0 0.5000 0.5000 0.5000 0.5000 0.2500 1.0000

% 1.0000 0.5000 0.5000 0.5000 0.5000 0.2500 0

% 0 0 0 0 0 0.2500 0

% 0 0 0 0 0 0.2500 0

% Columns 8 through 14

% 1.0000 1.0000 1.0000 0 0.5000 0.5000 0.5000

% 0 0 0 0 0 0 0

% 0 0 0 0 0 0 0

% 0 0 0 1.0000 0.5000 0.5000 0.5000

State numbering of the 5x5 grid (column-wise), for reference:

1  6 11 16 21
2  7 12 17 22
3  8 13 18 23
4  9 14 19 24
5 10 15 20 25

Page 30: Chapter 3: The Reinforcement Learning Problem


Solving the Bellman Optimality Equation

Finding an optimal policy by solving the Bellman Optimality Equation requires the following:

accurate knowledge of environment dynamics;

enough space and time to do the computation;

the Markov Property.

How much space and time do we need?

polynomial in the number of states (via dynamic programming methods; Chapter 4),

BUT, the number of states is often huge (e.g., backgammon has about $10^{20}$ states).

We usually have to settle for approximations.

Many RL methods can be understood as approximately solving the Bellman Optimality Equation.

Page 31: Chapter 3: The Reinforcement Learning Problem


Summary

Agent-environment interaction: states, actions, rewards

Policy: stochastic rule for selecting actions

Return: the function of future rewards the agent tries to maximize

Episodic and continuing tasks

Markov Property

Markov Decision Process: transition probabilities, expected rewards

Value functions: state-value function for a policy, action-value function for a policy, optimal state-value function, optimal action-value function

Optimal value functions, optimal policies

Bellman Equations

The need for approximation