Quiz #3
Last class, we talked about 6 techniques for self-control. Name and briefly describe 2 of those techniques.
Schedules of Reinforcement
Schedule of Reinforcement
- Delivery of reinforcement
- Continuous reinforcement (CRF)
- Fairly consistent patterns of behaviour
- Cumulative recorder
Cumulative Record
- Use a cumulative recorder
- No response: flat line
- Response: slope
[Figure: a cumulative recorder (paper strip, pen, rollers) and a sample cumulative record for VI 25]
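The flat-line/slope idea can be sketched in a few lines of Python (a hypothetical helper, not from the slides):

```python
def cumulative_record(events):
    """events: sequence of 'r' (response) or '-' (no response), one per
    time step. Returns the cumulative response count at each step:
    flat where there is no response, a rising slope where there is."""
    total, record = 0, []
    for e in events:
        if e == 'r':
            total += 1
        record.append(total)
    return record

print(cumulative_record(['-', 'r', 'r', '-', 'r']))  # [0, 1, 2, 2, 3]
```

Plotting this list against time reproduces the recorder's trace: steeper runs of 'r' give a steeper slope.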
Schedules: 4 Basic
- Fixed Ratio
- Variable Ratio
- Fixed Interval
- Variable Interval
- Others and mixes (concurrent)
Fixed Ratio (FR)
- N responses required; e.g., FR 25
- CRF = FR 1
- Rise-and-run
- Postreinforcement pause
- Steady, rapid rate of response
- Ratio strain
[Figure: cumulative record over time; responses produce a slope, no responses a flat line, hash marks at reinforcement, and the "pen" resets at the edge of the paper]
Variable Ratio (VR)
- Varies around a mean number of responses; e.g., VR 25
- Rapid, steady rate of response
- Short, if any, postreinforcement pause
- Longer schedule --> longer pause
- Never know which response will be reinforced
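The FR and VR contingencies above can be sketched as simple counters (function names are assumptions for illustration): FR reinforces every Nth response; VR reinforces after a requirement that varies around a mean.

```python
import random

def fr_schedule(n):
    """Fixed ratio: every nth response is reinforced (FR 1 = CRF)."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count >= n:
            count = 0
            return True
        return False
    return respond

def vr_schedule(mean_n):
    """Variable ratio: the requirement varies around mean_n, so the
    subject never knows which response will be reinforced."""
    count, requirement = 0, random.randint(1, 2 * mean_n - 1)
    def respond():
        nonlocal count, requirement
        count += 1
        if count >= requirement:
            count, requirement = 0, random.randint(1, 2 * mean_n - 1)
            return True
        return False
    return respond

key = fr_schedule(25)
print(sum(key() for _ in range(100)))  # 4 reinforcers in 100 responses
```

On the FR side, reinforcers fall exactly on responses 25, 50, 75, and 100; on the VR side they fall unpredictably but average one per 25 responses.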
Fixed Interval (FI)
- Depends on time; e.g., FI 25
- Postreinforcement pause
- Scalloping
- Time estimation
- Clock doesn't start until reinforcer given
Variable Interval (VI)
- Varies around a mean time; e.g., VI 25
- Steady, moderate response rate
- Don't know when time has elapsed
- Clock doesn't start until reinforcer given
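The interval contingencies can be sketched the same way (an assumed helper, time in seconds): the first response after the interval has elapsed is reinforced, and the clock restarts only when the reinforcer is delivered.

```python
def fi_schedule(interval):
    """Fixed interval: the first response at least `interval` seconds
    after the last reinforcer is reinforced; earlier responses do
    nothing, and the clock restarts at reinforcer delivery."""
    last_reinforcer = 0.0
    def respond(t):
        nonlocal last_reinforcer
        if t - last_reinforcer >= interval:
            last_reinforcer = t
            return True
        return False
    return respond

key = fi_schedule(25)
print(key(10))  # False: only 10 s elapsed
print(key(30))  # True: 25 s have elapsed
print(key(40))  # False: the clock restarted at t = 30
```

A VI version would draw each new interval from a distribution around the mean instead of using a constant, which is why the subject cannot time the next reinforcer.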
[Figure: idealized cumulative records for FR 25, VR 25, FI 25, and VI 25 (slope = rise/run); response rate is highest for VR, then FR, then VI, then FI]
Duration Schedules
- Continuous responding for some time period to receive reinforcement
- Fixed duration (FD): the duration is a set time period
- Variable duration (VD): the duration varies around a mean
Differential Rate Schedules
- Differential reinforcement of low rates (DRL)
  - Reinforcement only if X amount of time has passed since the last response
  - Sometimes "superstitious behaviours" occur
- Differential reinforcement of high rates (DRH)
  - Reinforcement only if more than X responses occur in a set time
  - Or, reinforcement if less than X amount of time has passed since the last response
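The DRL contingency described above might be sketched like this (an assumed helper; a DRH version would instead count responses within a time window):

```python
def drl_schedule(min_wait):
    """DRL: a response is reinforced only if at least `min_wait`
    seconds have passed since the previous response."""
    last_response = None
    def respond(t):
        nonlocal last_response
        ok = last_response is not None and t - last_response >= min_wait
        last_response = t
        return ok
    return respond

key = drl_schedule(10)
print(key(0))   # False: first response, nothing to compare against
print(key(5))   # False: responded too soon (5 s < 10 s)
print(key(20))  # True: waited 15 s since the last response
```

Note that responding too soon resets the wait, which is exactly what makes DRL punish high response rates.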
Noncontingent Schedules
- Reinforcement delivery not contingent upon a response, but on the passage of time
- Fixed time (FT): reinforcer given after a set time elapses
- Variable time (VT): reinforcer given after some time varying around a mean
Stretching the Ratio
- Increasing the number of responses; e.g., FR 5 --> FR 50
- Extinction problem
- Use shaping: increase in gradual increments; e.g., FR 5, FR 8, FR 14, FR 21, FR 35, FR 50
- "Low" or "high" schedules
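The gradual-increment idea can be expressed as a small helper (the 1.6x step is an assumed factor, not from the slides); jumping straight from FR 5 to FR 50 risks ratio strain and extinction.

```python
def stretch_ratio(start, target, factor=1.6):
    """Stretch an FR requirement from start to target in gradual
    increments (shaping), capping the final step at target."""
    steps = [start]
    while steps[-1] < target:
        steps.append(min(round(steps[-1] * factor), target))
    return steps

print(stretch_ratio(5, 50))  # [5, 8, 13, 21, 34, 50]
```

The output closely tracks the slide's example sequence (FR 5, 8, 14, 21, 35, 50): each step is a modest multiple of the last rather than one large jump.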
Extinction
- CRF (FR 1) is easier to extinguish than intermittent schedules (anything but FR 1)
- Partial reinforcement effect (PRE)
- High schedules harder to extinguish than low
- Variable schedules harder to extinguish than fixed
Discrimination Hypothesis
- Difficult to discriminate between extinction and an intermittent schedule
- High schedules more like extinction than low schedules; e.g., CRF vs. FR 50
Frustration Hypothesis
- Non-reinforcement for a response is frustrating
- On CRF every response is reinforced, so no frustration
- Frustration grows continually during extinction
- Stop responding, stop frustration (neg. reinf.)
- On any intermittent schedule there are always some non-reinforced responses
- Responding leads to the reinforcer (pos. reinf.)
- Frustration = S+ for reinforcement
Sequential Hypothesis
- A response is followed by reinforcement or nonreinforcement
- On intermittent schedules, nonreinforced responses are S+ for eventual delivery of the reinforcer
- High schedules increase resistance to extinction because many nonreinforced responses in a row lead to reinforcement
- Extinction is similar to a high schedule
Response Unit Hypothesis
- Think in terms of behavioural "units"
- FR 1: 1 response = 1 unit --> reinforcement
- FR 2: 2 responses = 1 unit --> reinforcement
- Not "response-failure, response-reinforcer" but "response-response-reinforcer"
- Says the PRE is an artifact
Mowrer & Jones (1945)
- Response unit hypothesis
- The greater number of responses in extinction on higher schedules disappears when responses are counted as behavioural units
[Figure: Mowrer & Jones (1945): number of responses/units during extinction for FR 1 through FR 4; the absolute number of responses rises with the schedule, while the number of behavioural units does not]
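Mowrer and Jones's point can be shown numerically. The counts below are illustrative (assumed numbers, not their actual data): absolute responses in extinction grow with the schedule, but divided by the unit size they come out equal.

```python
def behavioural_units(responses, fr):
    """On FR n, one behavioural unit = n responses
    (response unit hypothesis)."""
    return responses / fr

# Illustrative extinction counts (assumed, not Mowrer & Jones's data):
for fr, responses in [(1, 70), (2, 140), (3, 210), (4, 280)]:
    units = behavioural_units(responses, fr)
    print(f"FR{fr}: {responses} responses = {units:.0f} units")
```

Counted as units, every schedule yields the same resistance to extinction, which is why the hypothesis calls the PRE an artifact of counting responses.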
Complex Schedules
- Multiple
- Mixed
- Chain
- Tandem
- Cooperative
Choice
- Two-key procedure
- Concurrent schedules of reinforcement
- Each key associated with a separate schedule
- Distribution of time and behaviour is the measure of choice and preference
Concurrent Ratio Schedules
- Two ratio schedules
- The schedule that gives the most rapid reinforcement is chosen exclusively
- Rarely used in choice studies
Concurrent Interval Schedules
- To maximize reinforcement, must shift between alternatives
- Allows for the study of choice behaviour
Interval Schedules
- FI-FI
  - Steady-state responding
  - Less useful/interesting
- VI-VI
  - Not steady-state responding
  - Respond to both alternatives
  - Sensitive to rate of reinforcement
  - Most commonly used to study choice
Alternation and the Changeover Response
- Maximize reinforcers from both alternatives
- Frequent shifting becomes reinforcing
- Simple alternation
- Concurrent superstition
Changeover Delay (COD)
- Prevents rapid switching
- Time delay after a "changeover" before reinforcement is possible
Herrnstein's (1961) Experiment
- Concurrent VI-VI schedules
- Overall rates of reinforcement held constant: 40 reinforcers/hour across the two alternatives
The Matching Law
- The proportion of responses directed toward one alternative should equal the proportion of reinforcers delivered by that alternative: B1/(B1+B2) = R1/(R1+R2)
Example 1: Key 1 is VI 3 min (20 reinforcers/hour, 2000 responses/hour); Key 2 is VI 3 min (20 reinforcers/hour, 2000 responses/hour).
- Proportional rate of reinforcement (R1 = reinf. on key 1, R2 = reinf. on key 2): R1/(R1+R2) = 20/(20+20) = 0.5
- Proportional rate of response (B1 = resp. on key 1, B2 = resp. on key 2): B1/(B1+B2) = 2000/(2000+2000) = 0.5
MATCH!!!
Example 2: Key 1 is VI 9 min (6.7 reinforcers/hour, 250 responses/hour); Key 2 is VI 1.8 min (33.3 reinforcers/hour, 3000 responses/hour).
- Proportional rate of reinforcement: R1/(R1+R2) = 6.7/(6.7+33.3) = 0.17
- Proportional rate of response: B1/(B1+B2) = 250/(250+3000) = 0.08
NO MATCH (but close…)
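Both worked examples reduce to a single proportion function (a sketch, not code from the slides):

```python
def proportion(x1, x2):
    """Proportion of responses (or reinforcers) on alternative 1."""
    return x1 / (x1 + x2)

# Example 1: identical VI 3-min schedules on both keys
print(proportion(20, 20), proportion(2000, 2000))  # 0.5 0.5 (match)

# Example 2: VI 9-min vs. VI 1.8-min
print(round(proportion(6.7, 33.3), 2))   # 0.17 (reinforcement)
print(round(proportion(250, 3000), 2))   # 0.08 (responses)
```

In Example 2 the response proportion undershoots the reinforcement proportion, which motivates the bias factors discussed next.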
Bias
- Spending more time on one alternative than predicted
- Side preferences
- Biological predispositions
- Quality and amount
Varying Quality of Reinforcers
- Q1: quality of the first reinforcer
- Q2: quality of the second reinforcer
Varying Amount of Reinforcers
- A1: amount of the first reinforcer
- A2: amount of the second reinforcer
Combining Qualities and Amounts
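One common way to combine rate, quality, and amount is the ratio form of the matching law below. The multiplicative form is a standard textbook extension and an assumption here, since the slides give only the variable names Q1, Q2, A1, A2.

```python
def predicted_response_ratio(r1, r2, q1=1.0, q2=1.0, a1=1.0, a2=1.0):
    """B1/B2 = (R1/R2) * (Q1/Q2) * (A1/A2): responding favours the
    alternative with the better rate, quality, and amount of
    reinforcement (assumed multiplicative combination)."""
    return (r1 / r2) * (q1 / q2) * (a1 / a2)

# Equal reinforcement rates, but key 1 delivers twice the amount:
print(predicted_response_ratio(20, 20, a1=2.0, a2=1.0))  # 2.0
```

With equal rates and qualities, doubling the amount on key 1 predicts twice as many responses there.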
Applications
- Gambling: reinforcement history
- Economics: value of the reinforcer and stretching the ratio
- Malingering