
A Thought Experiment

• 2 doors
• .1 and .2 probability of getting a dollar, respectively
• Can get a dollar behind both doors on the same trial
• Dollars stay there until collected, but never more than 1 dollar per door
• What order of doors do you choose?


Patterns in the Data

• If choices are made moment by moment, there should be orderly patterns in the choices: 2, 2, 1, 2, 2, 1… (simulated in the sketch below)

• Results are mixed, but promising when time is used as the measure.
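A minimal simulation sketch (Python) of the two-door thought experiment, under the assumption that an empty door is re-baited with its probability on every trial; the strategies compared and the trial count are illustrative, not part of the original example:

```python
import random

P = (0.1, 0.2)        # baiting probabilities for door 1 and door 2
TRIALS = 200_000      # arbitrary; long enough for stable averages

def run(choose):
    """One dollar can appear behind an empty door each trial (probability
    P[door]) and stays there until collected; one door is opened per trial."""
    random.seed(0)
    baited = [False, False]
    earned = 0
    for t in range(TRIALS):
        for d in (0, 1):                              # baiting, both doors
            if not baited[d] and random.random() < P[d]:
                baited[d] = True
        d = choose(t)
        if baited[d]:
            earned += 1
            baited[d] = False
    return earned / TRIALS

always_2   = lambda t: 1                        # stick with the richer door
alternate  = lambda t: t % 2                    # 1, 2, 1, 2, ...
pattern221 = lambda t: 0 if t % 3 == 2 else 1   # 2, 2, 1, 2, 2, 1, ...

for name, strategy in [("always door 2", always_2),
                       ("alternate 1, 2", alternate),
                       ("pattern 2, 2, 1", pattern221)]:
    print(f"{name:15s} earns about {run(strategy):.3f} dollars per trial")

# Both mixed orders clearly beat staying on the richer door; the 2, 2, 1
# pattern is what choosing the momentarily better door produces under
# this model.
```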


What Works Best Right Now

• Maximizing local rates via moment-to-moment choices can lower the overall reinforcement rate.

• Short-term vs. long-term


Delay and Self-Control


Delayed Reinforcers

• Many of life’s reinforcers are delayed…
  – Eating right, studying, etc.
• Delay obviously devalues a reinforcer
  – How are effects of reinforcers affected by delay?
  – Why choose the immediate, smaller reward?
  – Why ever show self-control?


Remember Superstition?

• Temporal, not causal
  – Causal, with delay, very hard
• Same with delay of reinforcement
  – Effects decrease with delay

• But how does it occur?

• Are there reliable and predictable effects?

• Can we quantify the effect?


How Do We Measure Delay Effects?

Studying preference for delayed reinforcers.

Humans:
- verbal reports at different points in time
- "what if" questions

Humans AND nonhumans:
A. Concurrent chains
B. Titration

All are choice techniques.


A. Concurrent chains

Concurrent chains are simply concurrent schedules -- usually concurrent equal VI VI -- in which reinforcers are delayed.

When a response is reinforced, usually both concurrent schedules stop and become unavailable, and a delay starts.

Sometimes the delays are spent in blackout, with no response required to get the final reinforcer (an FT schedule); sometimes the delays are themselves schedules with an associated stimulus, such as an FI schedule, which requires responding.


The concurrent-chain procedure:

[Diagram: Initial links (choice phase) -- concurrent VI VI on two keys. Terminal links (outcome phase) -- VI a s and VI b s, each ending in food.]


An example of a concurrent-chain experiment

MacEwen (1972) investigated choice between two terminal-link FI and two terminal-link VI schedules, one of which was always twice as long as the other.

The initial links were always concurrent VI 60-s VI 60-s schedules.


The terminal-link schedule pairs were:

FI 5 s vs FI 10 s     VI 5 s vs VI 10 s
FI 10 s vs FI 20 s    VI 10 s vs VI 20 s
FI 20 s vs FI 40 s    VI 20 s vs VI 40 s
FI 40 s vs FI 80 s    VI 40 s vs VI 80 s

Constant reinforcer (delay and immediacy) ratio in the terminal links – all immediacy ratios are 2:1.


[Figure: Bird M6 -- log response ratio (0.0-2.0) as a function of the smaller FI or VI value (0-40 s), plotted separately for FI terminal links and VI terminal links.]


From the generalised matching law, we would expect:

log(B1/B2) = a_d log(D2/D1) + log c

D2/D1 was kept constant throughout. If a_d was constant, then because D2/D1 was kept constant, we would expect no change in choice with changes in the absolute size of the delays (see the numeric sketch below).
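To see that prediction numerically, here is a minimal sketch (Python); the sensitivity value a_d = 0.8 and log c = 0 are arbitrary illustrative assumptions, not fitted values:

```python
import math

# MacEwen (1972) terminal-link delay pairs (shorter, longer) in seconds.
delay_pairs = [(5, 10), (10, 20), (20, 40), (40, 80)]

# Illustrative parameter values (assumptions, not fitted to the data):
a_d = 0.8      # sensitivity to the delay (immediacy) ratio
log_c = 0.0    # no bias

for d1, d2 in delay_pairs:
    # Generalised matching for delay: log(B1/B2) = a_d * log(D2/D1) + log c
    log_ratio = a_d * math.log10(d2 / d1) + log_c
    print(f"D1={d1:>2} s, D2={d2:>2} s -> predicted log(B1/B2) = {log_ratio:.3f}")

# Every line prints the same value (a_d * log10(2) ≈ 0.241), which is why a
# constant a_d predicts no change in choice as the absolute delays grow.
```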


[Figure, repeated: Bird M6 -- log response ratio as a function of the smaller FI or VI value, for FI and VI terminal links.]

But choice did change, so a_d did NOT remain constant.

But it does give us some data to answer some other questions…


Shape of the Delay Function

• Now that we have some data…

• How does reinforcer value change over time?

• What is the shape of the decay function?


Basically, the effects that reinforcers have on behaviour decrease -- rapidly -- as the reinforcer is more and more delayed after the reinforced response.

This is how reinforcer value generally changes with delay:

[Figure: A concave-upwards graph -- reinforcer value falling steeply at short delays and more gradually at longer delays (reinforcer delay, 0-30 s).]


Delay Functions

• What is the “real” delay function?

Vt = V0 / (1 + Kt)

Vt = V0 / (1 + Kt)^s

Vt = V0 / (M + Kt^s)

Vt = V0 / (M + t^s)

Vt = V0 exp(-Mt)

(These candidates are sketched as functions below.)
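A minimal sketch of these candidates as Python functions; the exponent placements in the middle three are reconstructions of the slide's notation, and the parameter values used in the comparison are arbitrary:

```python
import math

# Candidate delay (discounting) functions: value of a reinforcer of
# undiscounted value v0 after a delay of t seconds.

def simple_hyperbola(v0, t, k):
    return v0 / (1 + k * t)

def hyperbola_with_exponent(v0, t, k, s):
    return v0 / (1 + k * t) ** s

def power_delay_with_rate(v0, t, k, s, m=1.0):
    return v0 / (m + k * t ** s)

def power_delay(v0, t, s, m=1.0):
    return v0 / (m + t ** s)

def exponential(v0, t, m):
    return v0 * math.exp(-m * t)

# Illustrative comparison of two of the candidates at a few delays:
for t in (0, 5, 10, 20, 40):
    print(t, round(simple_hyperbola(3, t, 0.2), 3), round(exponential(3, t, 0.2), 3))
```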


Exponential versus hyperbolic decay

It is important to understand how the effects of reinforcers decay over time, because different sorts of decay predict different effects.

The two main candidates:

Exponential decay -- the rate of decay remains constant over time.

Hyperbolic decay -- the rate of decay decreases over time (as in memory, too).


Exponential decay:

Vt = V0 e^(-bt)

Vt : value of the delayed reinforcer at time t
V0 : value of the reinforcer at 0-s delay
t : delay in seconds
b : a parameter that determines the rate of decay
e : the base of natural logarithms.


Hyperbolic decay:

Vt = V0 h / (h + t)

In this equation, all the variables are the same as in exponential decay, except that h is the half-life of the decay -- the time over which the value of V0 falls to half its initial value.

Hyperbolic decay is strongly supported by Mazur’s research.
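A minimal sketch (Python) contrasting the two decay shapes, with the exponential rate b and the hyperbolic half-life h both set, purely for illustration, so that value halves by 10 s:

```python
import math

V0 = 3.0                  # value at zero delay (arbitrary illustration)
b = math.log(2) / 10.0    # exponential rate giving a 10-s half-life
h = 10.0                  # hyperbolic half-life of 10 s, to match

def exponential(t):
    return V0 * math.exp(-b * t)

def hyperbolic(t):
    return V0 * h / (h + t)

print(" t    exp    hyp   exp loss  hyp loss")
prev_e, prev_h = exponential(0), hyperbolic(0)
for t in range(10, 61, 10):
    e, hyp = exponential(t), hyperbolic(t)
    # "loss" = fraction of value lost over the previous 10-s step:
    print(f"{t:>2}  {e:5.2f}  {hyp:5.2f}  {1 - e/prev_e:8.2f}  {1 - hyp/prev_h:8.2f}")
    prev_e, prev_h = e, hyp

# The exponential loses the same proportion (50%) in every 10-s step,
# whereas the hyperbola loses a smaller and smaller proportion --
# the rate of decay decreases with time, as the slides describe.
```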


[Figure: Reinforcer value (0-3) as a function of reinforcer delay (0-30 s), comparing hyperbolic decay and exponential decay curves.]


Two sorts of decay fitted to MacEwen's (1972) data

[Figure: Hyperbolic and exponential decay functions fitted to the choice data -- log response ratio / relative rate (0.00-1.00) against the smaller delay (0-40 s).]

Hyperbolic is clearly better. Not that clean, but…


Studying Delay Using Indifference

• Titration procedures.


B: Titration - Finding the point of preference reversal

The titration procedure was introduced by Mazur:

- one standard (constant) delay and

- one adjusting delay.

These may differ in what schedule they are (e.g., FT versus VT with the same size reinforcers for both), or they may be the same schedule (both FT, say) with different magnitudes of reinforcers.

What the procedure does is to find the value of the adjusting delay that is equally preferred to the standard delay -- the indifference point in choice.


For example:

- reinforcer magnitudes are the same

- standard schedule is VT 30 s

- adjusting schedule is FT

How long would the FT schedule need to become to make preference equal?


Titration: Procedure

Trials are in blocks of 4.

The first 2 are forced choices, one to each alternative in random order.

The last 2 are free choices.

If, on the last 2 trials, the subject chooses the adjusting schedule twice, the adjusting delay is increased by a small amount.

If it chooses the standard twice, the adjusting delay is decreased by a small amount.

If choice is equal (1 of each), there is no change.

(This is the von Békésy tracking procedure used in audition; a sketch of the adjustment rule follows.)
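Here is a minimal sketch of that adjustment rule (Python). The "subject" below is a stand-in choice function with an arbitrary built-in indifference point near 20 s; step size, starting delay, and block count are likewise assumptions:

```python
import math
import random

def run_titration(prefers_adjusting, start_delay=10.0, step=1.0, n_blocks=200):
    """Adjust the 'adjusting' delay in blocks of 4 trials (2 forced, 2 free)
    until it hovers near the indifference point."""
    adjusting = start_delay
    for _ in range(n_blocks):
        # Trials 1-2: forced choice, one to each alternative (no adjustment).
        # Trials 3-4: free choice; count choices of the adjusting alternative.
        free_choices = sum(prefers_adjusting(adjusting) for _ in range(2))
        if free_choices == 2:        # adjusting chosen twice -> lengthen it
            adjusting += step
        elif free_choices == 0:      # standard chosen twice -> shorten it
            adjusting = max(0.0, adjusting - step)
        # one of each -> no change
    return adjusting

# Stand-in "subject" (an assumption, not real behaviour): prefers the
# adjusting alternative when its delay is below about 20 s, with noise.
def fake_subject(adjusting_delay):
    return random.random() < 1 / (1 + math.exp(adjusting_delay - 20))

print(run_titration(fake_subject))   # should settle near 20
```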


Mazur's titration procedure:

[Diagram: After the ITI and trial start, a choice peck leads either to the standard delay (red houselight) or to the adjusting delay (green houselight); one outcome is 2-s food followed by a blackout (BO), the other 6-s food.]

Why the post-reinforcer blackout?


Mazur’s Findings

• Different magnitudes, finding the delay
  – A 2-s reinforcer delayed 8 s = a 6-s reinforcer delayed 20 s
• Equal magnitudes, variable vs. fixed delay
  – A fixed delay of 20 s = a variable delay of 30 s
• Why the preference for variable?
  – Hyperbolic decay and interval weighting (see the sketch below)
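A minimal sketch (Python) of the "interval weighting" point: with hyperbolic decay, the individual delays making up a variable schedule are weighted separately, so a variable schedule averaging 30 s is worth more than a fixed 30-s delay. The half-life and the two-valued delay distribution are illustrative assumptions, not Mazur's actual values:

```python
V0 = 1.0   # undiscounted reinforcer value (arbitrary)
h = 10.0   # hyperbolic half-life in seconds (arbitrary illustration)

def hyperbolic(delay):
    return V0 * h / (h + delay)

# Fixed 30-s delay vs a variable schedule averaging 30 s (here 5 s or 55 s,
# equally often). The delays are discounted individually, not averaged first.
fixed_value = hyperbolic(30)
variable_value = 0.5 * hyperbolic(5) + 0.5 * hyperbolic(55)

print(f"fixed 30 s    : {fixed_value:.3f}")
print(f"variable ~30 s: {variable_value:.3f}")
# The variable schedule comes out more valuable, which is why a shorter fixed
# delay (about 20 s) can be needed to match a variable delay averaging 30 s.
```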


Moving onto Self-Control

• Which would you prefer?
  – $1 in an hour
  – $2 tomorrow


Moving onto Self-Control

• Which would you prefer?
  – $1 in a month
  – $2 in a month and a day


Here’s the problem:

Preference reversal

In positive self control, the further away you are from both the smaller and the larger reinforcer, the more likely you are to accept the larger, more delayed reinforcer.

But the closer you get to the first (smaller) one, the more likely you are to choose the smaller, more immediate one.


Friday night:

“Alright, I am setting my alarm clock to wake me up at 6.00 am tomorrow morning, and then I’ll go jogging.” ...

Saturday 6.00 am:

“Hmm….maybe not today.”


Outside the laboratory, the majority of reinforcers are delayed. Studying the effects of delayed reinforcers is therefore very important.

To be able to understand why preference reversal occurs, we need to know how the value of a reinforcer changes with the time by which it is delayed...

Assume: At the moment in time when we make the choice, we choose the reinforcer that has the highest current value...


Animal research: Preference reversal

Green, Fisher, Perlow, & Sherman (1981)

Choice between a 2-s and a 6-s reinforcer.

Larger reinforcer delayed 4 s more than the smaller.

Choice response (across conditions) required from 2 to 28 s before the smaller reinforcer.

We will call this time T.


[Diagram, repeated across three slides: the trial timeline. The choice response occurs T s (2 to 28 s across conditions) before the small reinforcer; the large reinforcer follows 4 s later.]


Green et al. (continued)

Thus, if T was 10 s, then at the choice point the smaller reinforcer was 10 s away and the larger was 14 s away.

So, as T is changed over conditions, we should see preference reversal.


[Figure: Green et al. (1981), mean data -- log response ratio (larger, later / smaller, sooner; -2 to +2) as a function of the value of T (0-25 s), with the self-control and impulsivity regions marked.]

Control condition: two equal-sized reinforcers were delayed, one 28 s the other 32 s. Preference was strongly towards the reinforcer that came sooner.

So, at delays that long, pigeons can still clearly tell which reinforcer is sooner and which one later.


Which Delay Function Predicts This?

[Figure: Exponential decay -- reinforcer value (0-6) as a function of seconds from the smaller reinforcer (0-30), for magnitudes 2 and 6.]


[Figure: Hyperbolic decay -- reinforcer value (0-6) as a function of seconds from the smaller reinforcer (0-30), for magnitudes 2 and 6.]

Only hyperbolic decay can explain preference reversal
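A minimal sketch (Python) of why: multiplying each magnitude by a decay factor, hyperbolic decay lets the 2-s and 6-s value curves cross as the choice point moves away from the reinforcers, whereas a matched exponential keeps their ratio constant. The half-life used is an arbitrary illustration:

```python
import math

H = 1.0                     # hyperbolic half-life (illustrative assumption)
B = math.log(2) / 1.0       # exponential rate with the same 1-s half-life

def hyp(mag, delay):
    return mag * H / (H + delay)

def expo(mag, delay):
    return mag * math.exp(-B * delay)

print(" t   hyp small  hyp large | exp small  exp large")
for t in range(0, 7):
    # The small (2-s) reinforcer is t s away; the large (6-s) one is t+4 s away.
    print(f"{t:>2}   {hyp(2, t):9.3f}  {hyp(6, t + 4):9.3f} |"
          f" {expo(2, t):9.3f}  {expo(6, t + 4):9.3f}")

# Hyperbolic: close to the reinforcers (t < 1) the small reinforcer is worth
# more, but further away the large one wins -- a preference reversal.
# Exponential: the ratio of the two values is constant in t, so whichever is
# preferred at t = 0 is preferred at every t.
```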


Hyperbolic predictions shown the same way

[Figure: Hyperbolic value curves plotted against time for two pairs of reinforcers -- one pair 4 s to the small and 6 s to the large reinforcer, the other 1 s to the small and 3 s to the large -- with markers at the 2-s and 4-s reinforcers and the annotation "Choice reverses here" at the crossover.]


Using strict matching theory to explain preference reversal

The concatenated strict matching law for reinforcer magnitude and delay (see the generalised matching lecture) is:

B1/B2 = (M1/M2) × (D2/D1)

where M is reinforcer magnitude and D is reinforcer delay.

Note that for delay, a longer delay is less preferred, and therefore D2 is on top.

(OK, we know SM isn't right, and delay sensitivity isn't constant.)


We will take the situation used by Green et al. (1981) and work through what the STRICT matching law predicts.

The baseline is: M1 = 2, M2 = 6, D1 = 0, D2 = 4.

B1/B2 = (M1/M2) × (D2/D1) = (2/6) × (4/0) = 8/0 → infinite

The predicted choice ratio is infinite: the subject is predicted always to take the smaller, zero-delay reinforcer.


Now add T = 0.5 s, so M1 = 2, M2 = 6, D1 = 0.5, D2 = 4.5:

B1/B2 = (M1/M2) × (D2/D1) = (2/6) × (4.5/0.5) = 9/3 = 3

The subject is predicted to prefer the smaller-magnitude reinforcer three times more than the larger-magnitude reinforcer, and again to be impulsive. But its preference for the immediate reinforcer has decreased a lot.


Then, when T = 1 s (so D1 = 1, D2 = 5):

B1/B2 = (2/6) × (5/1) = 10/6 = 1.67

The choice is now less impulsive.


For T = 2, the preference ratio B1/B2 is 1 -- so now strict matching predicts indifference between the two choices.

For T = 10, the preference ratio is 0.47 -- more than 2:1 towards the larger, more delayed reinforcer. That is, the subject is now showing self control.

The whole function is shown next -- predictions for Green et al. (1981) assuming strict matching.
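That whole function is easy to reproduce; a minimal sketch (Python) of the strict-matching prediction for the Green et al. (1981) values:

```python
import math

M1, M2 = 2, 6            # reinforcer magnitudes (s of food access)
EXTRA_DELAY = 4          # the larger reinforcer comes 4 s later than the smaller

def preference(T):
    """Strict-matching prediction B1/B2 = (M1/M2) * (D2/D1), evaluated T s
    before the smaller reinforcer (so D1 = T and D2 = T + 4)."""
    D1, D2 = T, T + EXTRA_DELAY
    if D1 == 0:
        return math.inf              # zero delay -> infinite preference
    return (M1 / M2) * (D2 / D1)

for T in (0, 0.5, 1, 2, 5, 10, 20):
    ratio = preference(T)
    if math.isclose(ratio, 1.0):
        verdict = "indifferent"
    elif ratio > 1:
        verdict = "impulsive (smaller-sooner preferred)"
    else:
        verdict = "self control (larger-later preferred)"
    print(f"T = {T:>4} s  ->  B1/B2 = {ratio:5.2f}  {verdict}")

# Reproduces the worked values: inf, 3, 1.67, 1, and about 0.47 at T = 10 --
# preference reverses from impulsivity to self control as T grows.
```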


[Figure: Matching-law predictions -- log response ratio (-1.0 to 1.0) as a function of the value of T (0-25 s), with the self-control and impulsive regions marked.]

This graph plots log(B2/B1), rather than log(B1/B2), so it shows how self control increases as you go back in time from when the reinforcers are due.


Green et al.'s actual data:

[Figure: Green et al. (1981), mean data -- log response ratio (larger, later / smaller, sooner; -2 to +2) as a function of the value of T (0-25 s), with the self-control and impulsivity regions marked.]


Commitment

• Make the commitment response now
• Then there is no choice to do the "bad" thing later
• Halloween candy


Commitment in the laboratory

Rachlin & Green (1972)

Pigeons chose between:

EITHER allowing themselves a later choice between a small short-delay (SS) reinforcer and a large long-delay (LL) reinforcer,

OR denying themselves this later choice, in which case only the LL reinforcer could be obtained.


Rachlin & Green (1972):

[Diagram: An initial choice, followed by a blackout of duration T, leads either to a later choice between the smaller-sooner and larger-later reinforcers, or to the larger-later reinforcer with no choice.]


As the time T at which the commitment response was offered was moved earlier in time from the reinforcers (from 0.5 to 16 s), preference should reverse.

Indeed, Rachlin and Green found that 4 out of 5 birds developed commitment (what we might call a commitment strategy) when T was larger.


Mischel & Baker (1975)

Experimenter puts one pretzel on a table and leaves the room for an unspecified amount of time.

If the child rings a bell, experimenter will come back and child can eat the pretzel.

If the child waits, experimenter will come back with 3 pretzels.

Most children chose the impulsive option.

But there is apparently a correlation with age, SES, and IQ scores. (Correlation only!)


Mischel & Baker (1975)

Self control was less likely if children were instructed to think about the taste of the pretzels (e.g., how crunchy they are).

Self control was more likely if they were instructed to think about the shape or colour of the pretzels.


Much human data replicated with animals by Neuringer & Grosch (1981).

For example, making food reinforcers visible upset self control, but an extraneous task helped self control.


Can nonhumans be trained to show sustained self control?

Mazur & Logue (1978) - Fading in self control

           Delay (s)   Magnitude (s)
Choice 1       6             2
Choice 2       6             6

Preferred Choice 2 (larger magnitude, same delay) -- Self control

Over 11,000 trials, they faded the delay to the smaller magnitude (Choice 1) to 0 s -- and self control was maintained!


Additionally, and this is important, self control was maintained even when the outcomes were reversed between the keys.

In other words, the pigeons didn’t have to be re-taught to choose the self control option, but applied it to the new situation.


Contingency contracting

A common therapeutic procedure:

e.g., “I give you my CD collection, and agree that if I don't lose 0.5 kg per week, you can chop up one of my CDs -- each week.”

You use the facts of self control -- i.e., you say "let's start this a couple of weeks from now" and the client will readily agree -- if you said, "starting today", they most likely would not.

It's easy to give up anything next week...


Other Commitment Procedures

• Tell your friend to pick you up

• Let everyone know you’ve stopped smoking

• Avoid discriminative stimuli

• Train incompatible behaviors

• Bring consequences closer in time


Social dilemmas

A lot of the world’s problems are problems of self control on a macro scale.

- Investment strategies

Rachlin, H. (2006). Notes on discounting. Journal of the Experimental Analysis of Behavior, 85, 425-435:

"In general, if a variable can be expressed as a function of its own maximum value, that function may be called a discount function. Delay discounting and probability discounting are commonly studied in psychology, but memory, matching, and economic utility also may be viewed as discounting processes."