
Temporal Learning (Stevens' Handbook of Experimental Psychology)


pashler-44108 book January 17, 2002 14:40

CHAPTER 9

Temporal Learning

RUSSELL M. CHURCH

Humans and other animals are adapted to a physical world that can be described in terms of events that occur at some time and in some location. The events are changes in physical stimuli, such as the onset or termination of a noise, that usually can be localized in time and space. This chapter concerns the ability of animals to learn about the dimension of time, a topic that includes questions about temporal perception and temporal memory as well as decisions about temporal intervals. Most of the data come from asymptotic levels of performance, but some come from the initial learning and subsequent adjustment to new temporal intervals. The history of the study of behavioral adjustment to the temporal dimension of the physical world has three origins: human psychophysics, biological rhythms, and animal learning.

HISTORICAL BACKGROUND

In a chapter on the perception of time, William James (1890) reviewed the psychophysical and introspective evidence available primarily from laboratories in Germany. James’ chapter contains many ideas that are worth the attention of a modern reader. These include chunking (p. 612), span of temporal attention (p. 613), particular intervals that are judged with maximal accuracy (p. 616), context effects (p. 618), effect of filled versus empty intervals (p. 618), prospective versus retrospective timing (p. 624), the effect of age on time perception (p. 625), neural processes in time perception (p. 635), and the effect of hashish intoxication on time perception (p. 639). In the first edition of his handbook, Woodrow (1951) reviewed knowledge about time perception. That chapter, based primarily on psychophysical research in the first half of the twentieth century, dealt with many of the problems described by James. Both of these treatments of temporal perception were focused on humans, and many of the testing methods required the use of language. The central problem of the psychophysical approach to the study of human timing was to understand temporal perception, particularly the relationship between subjective time and physical time.

The daily cycles of activity of animals were studied for over 50 years by Richter (1922, 1965, 1977). He developed methods used in the study of these rhythms and studied factors controlling the circadian clock. Although the phase of a circadian clock provides information about the time since an entraining event (such as onset of light or food), this clock did not seem to have properties useful for timing short intervals from an arbitrary event. The central problem of the study of biological rhythms was to describe the animals’ adaptations to cyclical regularities in the physical environment, and their neural mechanisms (Moore-Ede, Sulzman, & Fuller, 1982).

In his lectures on conditioned reflexes, Pavlov (1927) reported the results of many experiments with dogs in which conditioned and unconditioned stimuli were presented and salivary responses were measured. The procedures were described in terms of the type of stimulus (e.g., rotating object, tone) and of the time intervals between the onset or termination of the stimulus and the delivery of the unconditioned stimulus (usually food powder or acid). The tables of results typically included information about the time of occurrence as well as the nature of the event. Pavlov studied temporal conditioning, in which there was a fixed interval between successive deliveries of the unconditioned stimulus; he also studied delayed conditioning, in which a stimulus onset occurred a fixed time prior to the delivery of the unconditioned stimulus and terminated with it; and he studied trace conditioning, which was the same as delayed conditioning except that the stimulus terminated a fixed time before the delivery of the unconditioned stimulus (see Figure 9.1). In all cases there was an increasing amount of salivary responding as a function of time since an event that had a fixed time relation to the unconditioned stimulus. The event that marked the onset of an interval terminating in an unconditioned stimulus was either the previous unconditioned stimulus (in temporal conditioning), a conditioned stimulus onset (in delayed conditioning), or both the conditioned stimulus onset and termination (in trace conditioning). Pavlov noted the functional value of the anticipatory salivary response for digestion. He wrote, “When we come to seek an interpretation of these results, it seems pretty evident that the duration of time has acquired the properties of a conditioned stimulus” (p. 41).

Figure 9.1 Three timing procedures used by Pavlov.
NOTE: The filled rectangles indicate the presence of a stimulus; the filled triangles indicate the time of a reinforcer. In temporal conditioning there was a constant interval between successive reinforcers; in delayed conditioning there was also a constant interval between the onset of a stimulus and a reinforcer; and in trace conditioning there was also a constant (nonzero) interval between the termination of a stimulus and a reinforcer.

The early animal-learning studies of behavioral adjustment to temporal intervals between events included not only the classical conditioning research of Pavlov and others, but also instrumental learning research. (By definition, a classical conditioning procedure is one in which the interval between stimulus and reinforcement is specified and the interval between the response and reinforcement is not; an instrumental learning procedure is one in which the interval between a response and reinforcement is specified.) The instrumental (operant) procedures and results of B. F. Skinner have had the greatest influence on contemporary research. The research on schedules of reinforcement by Skinner (1938) featured the importance of temporal intervals, as described later in the section on fixed-interval reinforcement schedules. A good review of the role of time in animal behavior is provided by Richelle and Lejeune (1980).




The central problem of the study of animal learning was to understand the effect of arbitrary intervals of time between stimuli on the behavior of animals.

During most of the twentieth century the studies of the temporal dimension of the physical world by investigators of human psychophysics, biological rhythms, and animal learning progressed independently. The extensive experimental research in each of these fields typically was conducted by different investigators using different methods and different theories. Articles based on studies in these three fields typically were published in different journals, and they rarely cited each other. The secondary literature also typically treated these three fields as separate topics. An exception is the monograph of Fraisse (1963) that contained sections on biological rhythms, classical and operant conditioning, introspection, and psychophysics. Though eclectic, this monograph did not develop the connections among the approaches to the study of temporal learning.

An edited volume that was based on a symposium sponsored by the New York Academy of Sciences (Gibbon & Allan, 1984) undoubtedly encouraged many investigators to examine connections between the study of timing based on human psychophysics, biological rhythms, and animal learning. This symposium was organized by an active investigator of animal timing (John Gibbon) and an active investigator of human timing (Lorraine Allan), and Gibbon and Allan were able to obtain participation from established investigators of both human and animal timing. This may have led to an increasing use of more similar experimental methods, as well as an increasing use of the same theories of time perception and timed performance (Allan, 1998). When similar methods are used for the study of timing by humans and other animals, similar results often occur (Church, 1993).

This chapter describes temporal learning from the viewpoint of animal learning but notes various influences based on research in human psychophysics and biological rhythms.

TIME AS A STIMULUS ATTRIBUTE

A starting point for the analysis of time as a stimulus attribute is to determine whether an animal can discriminate between stimuli that differ only in duration. For example, can a rat be trained to make a lever response following a 4-s auditory stimulus, but not following shorter or longer intervals? A temporal generalization procedure may be used that is equivalent to a generalization procedure for auditory intensity or auditory frequency. The only difference is that the manipulated dimension is the duration of the auditory stimulus, rather than its intensity or frequency.

Discrimination of a Temporal Interval: A Temporal Generalization Procedure

An example of a temporal generalization procedure for a rat in a lever box, based on an experiment by Church and Gibbon (1982), is as follows: After a 30-s interval, a house light was turned off for 0.8, 1.6, 2.4, 3.2, 4.0, 4.8, 5.6, 6.4, or 7.2 s. A random half of the durations were 4.0 s; the remaining durations were randomly selected from the remaining eight durations. When the house light came back on, a lever was inserted into the box. If the stimulus duration had been 4.0 s, and the rat pressed the lever within 5 s, food was delivered, and the lever was withdrawn. If the stimulus duration had not been 4.0 s, and the rat pressed the lever within 5 s, no food was delivered, and the lever was withdrawn. If the rat did not press the lever within 5 s, the lever was withdrawn, and another cycle began. This cycle was repeated throughout sessions lasting 1 hr 50 min.
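The trial logic of this procedure can be sketched in a few lines of code. This is a minimal illustration of the contingencies just described, not the original laboratory software; the function names and random-number handling are my own.

```python
import random

DURATIONS_S = [0.8, 1.6, 2.4, 3.2, 4.0, 4.8, 5.6, 6.4, 7.2]
REINFORCED_S = 4.0

def draw_stimulus_duration(rng):
    # A random half of the trials present the reinforced 4.0-s duration;
    # the other half draw at random from the eight remaining durations.
    if rng.random() < 0.5:
        return REINFORCED_S
    return rng.choice([d for d in DURATIONS_S if d != REINFORCED_S])

def trial_outcome(stimulus_s, pressed_within_5_s):
    # Food follows a lever press within 5 s only if the house light had
    # been off for exactly the reinforced duration.
    if not pressed_within_5_s:
        return "lever withdrawn, no response"
    return "food" if stimulus_s == REINFORCED_S else "no food"
```

Because food never follows a press after a nonreinforced duration, the rat's response probability as a function of stimulus duration provides the temporal generalization gradient.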



Figure 9.2 Temporal generalization procedure.
NOTE: Probability of a response given attention to time as a function of stimulus duration relative to reinforced stimulus duration.
SOURCE: From Church and Gibbon (1982).

The probability of a lever response was greatest following the reinforced stimulus duration (4 s) and was lower at shorter or longer durations. This temporal generalization gradient was not affected by a logarithmic spacing of the intervals or by an extension of the range from 0.8 s to 7.2 s to 0 s to 32 s, but it was affected by many experimental manipulations. For example, it was affected by the duration of the reinforced stimulus: The maximum response probability and the spread of the gradient increased with increases in the reinforced duration. The temporal generalization gradient was also affected by partial reinforcement and by a reduction in the probability of presentation of the reinforced stimulus: Both led to an overall lowering and flattening of the gradient. There were also large individual differences in the temporal gradient that were related to overall responsiveness. Some rats had steep generalization gradients that began and ended near zero; others had flatter generalization gradients that began and ended at higher response probabilities.

The essential similarity of performance under all of these conditions was revealed when “attention to time” was separated from “sensitivity to time.” The assumption was that, with some probability, the rat attended to the duration of the stimulus and its behavior was affected by the duration of the stimulus, or that it did not attend to the duration of the stimulus and its behavior was not affected by the duration of the stimulus. Figure 9.2 shows the probability of a response given attention to time as a function of relative duration of the stimulus. (The relative duration is the duration of the stimulus, T, divided by the duration of the reinforced stimulus, S+.) Most of the data from the various experimental manipulations and individual differences fall approximately on the same function. This analysis of general attention, developed by Heinemann, Avin, Sullivan, and Chase (1969), suggests that the various procedures affected the probability of attention to stimulus duration rather than the sensitivity to stimulus duration.
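This attention analysis can be expressed as a two-state mixture: on an attending trial the response probability depends on stimulus duration, and on a non-attending trial it is a duration-independent baseline. Inverting the mixture recovers the response probability given attention. The following is a sketch of that arithmetic under my own variable names; the source does not give the exact estimation procedure.

```python
def p_response_given_attention(p_observed, p_attend, p_baseline):
    """Two-state attention model (after Heinemann et al., 1969):
        p_observed(T) = p_attend * p_given_attention(T)
                        + (1 - p_attend) * p_baseline
    Solving for p_given_attention corrects the raw generalization
    gradient for lapses of attention to stimulus duration."""
    return (p_observed - (1.0 - p_attend) * p_baseline) / p_attend

# Example: an observed response probability of 0.60, an estimated
# attention probability of 0.80, and a non-attending baseline of 0.25.
corrected = p_response_given_attention(0.60, 0.80, 0.25)
```

On this account, rats with flat, elevated gradients and rats with steep, low gradients can fall on a common function of relative duration, differing only in their estimated attention and baseline parameters.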




The results of experiments with the temporal generalization procedure provided evidence that animals can discriminate between stimuli that differ in duration, and the analysis of these data suggests that a single timing mechanism may be used under various conditions. This procedure has also been used with human participants (Wearden, Denovan, & Haworth, 1997), but it has not been used extensively, probably because of the asymmetrical (biased) nature of the two elements used for a calculation of a probability (a response and a nonresponse in a 5-s interval). Much more evidence regarding the ability of animals to discriminate between stimuli that differ in duration, as well as evidence regarding the characteristics of the psychophysical function relating response probability to stimulus duration, has come from a somewhat more complex procedure known as the bisection procedure.

Discrimination between Temporal Intervals: A Bisection Procedure

In a duration discrimination procedure, one response is reinforced following a short-duration stimulus (such as 2 s) and another response is reinforced following a long-duration stimulus (such as 8 s). Animals learn to make one of the responses following the short-duration stimulus (called the “short response”) and to make the other response following the long-duration stimulus (the “long response”). In the bisection procedure, stimuli of various intermediate durations are also presented, but neither the long nor the short response is reinforced. The results of a bisection procedure are often reported as a psychophysical function that relates the probability of a long response to the duration of the stimuli. This function usually has the S-shaped form of an ogive that increases from a probability close to 0.0 to a probability close to 1.0. This procedure provides a way to define the psychological middle of the two reinforced time intervals: It is the time at which it is equally probable that the animal will make a short or long response. This psychological middle is called the point of bisection, or the point of subjective equality (PSE).
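Given an empirical psychometric function, the PSE can be read off as the duration at which the probability of a long response crosses 0.5, and a difference limen can be taken as the semi-interquartile range of the same function. The sketch below uses simple linear interpolation between tested durations; the published analyses may have used a fitted ogive instead, and the example data are invented.

```python
def crossing(durations, p_long, target):
    # Linearly interpolate the duration at which the psychometric
    # function first crosses `target` (assumes p_long is increasing).
    for d0, d1, p0, p1 in zip(durations, durations[1:], p_long, p_long[1:]):
        if p0 <= target <= p1:
            return d0 + (target - p0) * (d1 - d0) / (p1 - p0)
    raise ValueError("target probability not bracketed by the data")

def bisection_summary(durations, p_long):
    pse = crossing(durations, p_long, 0.50)           # point of bisection
    dl = (crossing(durations, p_long, 0.75)
          - crossing(durations, p_long, 0.25)) / 2.0  # difference limen
    return pse, dl, dl / pse                          # CV = DL / PSE
```

With a 2-s versus 8-s range and a typical increasing function of long-response probabilities, the estimated PSE falls near the geometric mean of 4 s rather than the arithmetic mean of 5 s.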

Such a bisection procedure, modified from a temporal discrimination procedure developed by Stubbs (1968), was conducted by Church and Deluty (1977) with rats in lever boxes. A cycle consisted of (a) the termination of the house light for some duration, (b) the insertion of both levers, (c) the pressing of one of the levers (and, possibly, delivery of food), (d) the retraction of both levers, and (e) the turning on of the house light for 30 s. Food was delivered following a response on one of the levers after the shortest stimulus in a series, and it was delivered following a response on the other lever after the longest stimulus in a series. Food was not delivered following either response after a stimulus of intermediate duration. Rats were trained under different ranges of durations (1–4 s, 2–8 s, 3–12 s, and 4–16 s).

Some results are shown in Figure 9.3, redrawn from data from individual rats included in the appendix of Church and Deluty (1977). The probability of a long response is shown as a function of the stimulus duration in seconds for the four ranges of intervals. These functions were slightly asymmetrical and rose more rapidly for the shorter ranges of intervals (upper-left panel). The probability of a long response is also plotted in relative logarithmic units; these functions were more symmetrical, and they superposed (upper-right panel). The point of bisection was near the geometric mean (middle-left panel), and the difference limen (the semi-interquartile range of the functions shown in the upper-left panel) was a linear function of the geometric mean (middle-right panel). Thus, an estimate of the coefficient of variation (the difference limen divided by the point of bisection) was approximately constant (bottom-left panel). Superposition was also obtained when the probability of a long response was plotted as a function of the stimulus duration in seconds (T) divided by the geometric mean (S+), as shown in the bottom-right panel.

Figure 9.3 Bisection procedure.
NOTE: Upper left: Probability of long response as a function of stimulus duration in seconds. Upper right: Probability of long response as a function of stimulus duration in logarithmic units. Middle left: Point of bisection as a function of geometric mean of reinforced stimulus durations. Middle right: Difference limen as a function of geometric mean of reinforced stimulus durations. Bottom left: Coefficient of variation (CV) as a function of geometric mean of reinforced stimulus durations. Bottom right: Probability of long response as a function of stimulus duration divided by the point of bisection.
SOURCE: From Church and Deluty (1977).

Such bisection experiments have provided evidence for the following six principles:

1. Symmetry. The psychophysical function relating the proportion of long responses to stimulus duration is an ogive that is approximately symmetrical on a logarithmic scale of time.

2. Geometric mean. The point of bisection is near the geometric mean of the reinforced short interval and the reinforced long interval.

3. Proportional timing. The point of bisection increases approximately linearly with the geometric mean of the reinforced short interval and the reinforced long interval.

4. Scalar variability. The standard deviation of the point of bisection increases approximately linearly with stimulus duration.

5. Weber’s law. The coefficient of variation (the standard deviation divided by the mean) of the point of bisection is approximately constant.

6. Superposition. The psychophysical functions at all ranges superpose when the duration of a stimulus is divided by the point of bisection (which is often approximated by the geometric mean).

These principles are not all independent. For example, proportional timing and scalar variability imply that Weber’s law applies to timing (Gibbon, 1977). Superposition is probably the most fundamental principle because it applies to all of the data points in a psychophysical function and not just to a measure of central tendency or variability.
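These regularities follow from the scalar assumption that the noise in a timed interval grows in proportion to the interval itself. A small simulation (my own illustration, not a model from the chapter) shows that when estimates are drawn with a standard deviation proportional to the target interval, the coefficient of variation stays roughly constant across widely different intervals, which is what Weber's law and superposition require:

```python
import random
import statistics

def timed_estimates(target_s, n, gamma=0.2, seed=0):
    # Scalar variability: the standard deviation of each estimate is a
    # fixed fraction (gamma) of the target interval.
    rng = random.Random(seed)
    return [rng.gauss(target_s, gamma * target_s) for _ in range(n)]

def coefficient_of_variation(xs):
    return statistics.stdev(xs) / statistics.mean(xs)

# Under scalar noise the CV is about the same at 2 s, 8 s, and 32 s.
cvs = {t: coefficient_of_variation(timed_estimates(t, 5000))
       for t in (2.0, 8.0, 32.0)}
```

Dividing each simulated distribution by its mean collapses the three distributions onto one curve, which is the superposition result in miniature.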

The regularities in the results of the bisection procedure by pigeons and rats are also observed in similar experiments with human participants (Allan & Gibbon, 1991). The stimuli were 1000-Hz tones with durations to be discriminated. In one experiment described in this article, each participant was given five sessions at each of four different ranges (1–2 s, 1–1.5 s, 1.4–2.1 s, and .75–1 s). Results of the six individual participants at these four different ranges are shown in Figure 9.4. These psychophysical functions show the probability of a long response as a function of the ratio of the duration of the stimulus to the point of bisection. The six principles described for the rats apply also to human participants.

Although the coefficient of variation of the point of bisection is approximately constant (Gibbon, 1977), there are some small systematic deviations from constancy. The important features of these deviations are that (a) they are systematic rather than random, (b) they are local (i.e., the coefficient of variation is lower at some time intervals than at shorter or longer intervals), and (c) there are multiple local minima. Evidence for such systematic, multiple local deviations from a constant coefficient of variation requires that animals be tested at a large number of closely spaced time intervals. Using a temporal discrimination method in which many different short intervals were used and in which the duration of the long stimulus was adjusted until the rat responded correctly on approximately 75% of the stimuli, intervals of particular sensitivity have been located in the range of 100 ms to 2 s and 2 s to 50 s (Crystal, 1999, 2001). These small departures from Weber’s law may provide evidence about the mechanism involved in temporal perception.

The results from both the temporal generalization and the temporal bisection procedures have provided evidence that animals can discriminate stimuli that differ in duration. The question arises whether the discrimination is based on some modality-specific mechanism (such as light adaptation) rather than on an attribute of duration characteristic of stimuli of different modalities. A cross-modal transfer of training procedure can provide evidence about this. In one experiment 16 rats were trained in a 1-s versus 4-s temporal discrimination procedure (Roberts, 1982). Half of them were trained with durations of light and others with durations of noise. Then the stimulus modalities were switched. The rats trained with light were tested with noise, and vice versa. Rats in a random half of each of these groups were trained with the same response for the short and long stimulus, and the others were trained with the opposite response for the short and long stimulus. An empirical question was whether the speed that a rat would learn to press the left lever for a 1-s light and the right lever for a 4-s light would be affected by whether it had learned to press the left lever for the 1-s noise and the right lever for the 4-s noise, or the reverse. The essential idea was to determine if there were something common between a 1-s light and a 1-s noise. The results clearly indicated that percentage correct was much higher in retraining with the same association of stimulus duration and response than in the reversed association of stimulus duration and response. The same conclusions were reached in other related cross-modal transfer experiments (Meck & Church, 1982a, 1982b).

Figure 9.4 Bisection procedure with six human participants.
NOTE: Probability of a long response as a function of duration of the stimulus (T) divided by the geometric mean of the shortest and longest stimulus (T1/2).
SOURCE: From Allan and Gibbon (1991).

The empirical effects of a retention interval on the psychophysical function relating the probability of a long response to the presented stimulus duration are quite clear. Under most conditions the function flattens, with a bias toward reporting the stimuli as being short. Both of these factors increase with an increase in the retention interval between the presentation of the stimulus and the opportunity to make a response (Spetch & Wilkie, 1983). The retention intervals that have been studied are primarily in the range of 0 s to 20 s. The flattening of the function is presumably due to forgetting or interference that results in a decrease in overall stimulus control, but the cause of the bias to respond “short” is still uncertain.

The phenomenon of choosing the short response as the retention interval is increased is often called subjective shortening. This suggests that the forgetting of a temporal duration, in contrast to forgetting of other features of a stimulus, occurs on the time dimension. Based on the stability of the point of bisection with retention intervals of 0 s, 0.5 s, 2 s, and 8 s, the probability of .79 of classifying an absent stimulus as “short,” and the high probability of classifying both a 2-s and an 8-s stimulus after a 32-s retention interval as “short,” Church (1980) concluded that “there was no evidence that forgetting of a signal duration occurred on the time dimension” (p. 219).

An alternative to subjective shortening as a mechanism is that forgetting of temporal intervals, like forgetting of other features of a stimulus, occurs on a general strength dimension and that a weak memory is more similar to a short stimulus than to a long stimulus (Kraemer, Mazmanian, & Roberts, 1985). Another alternative is that animals typically report the presence or absence of the most salient stimulus, that the long stimulus is the salient one, and that it weakens during the retention interval (Gaitan & Wixted, 2000). No consensus has been reached on whether the perceived duration of an interval shortens during a retention interval.

Time as an Attribute of Stimuli: The Temporal Coding Hypothesis

Classical conditioning procedures involve the presentation of stimuli at particular times. For example, the interval between the onset of a conditioned stimulus and the onset of the unconditioned stimulus (the CS-US interval) affects performance. There is general agreement about the profound effects of such temporal independent variables on behavior, but there is no common agreement about the content of learning. One type of interpretation of the results is that the shorter CS-US interval leads to a stronger association; another type of interpretation of the results is that the interval between the onset of the CS and the onset of the US is stored as a temporal interval. This distinction was clearly described by Logan (1960), but there is still no general agreement about whether classical conditioning requires both a strength and a timing dimension, and if not, which of the two is fundamental.

The temporal coding hypothesis has led to a series of experiments that provide substantial support for the view that animals learn the specific temporal intervals that are used in classical conditioning experiments. Most of these have been conducted in a lick suppression paradigm for rats in which the unconditioned reinforcement was an electric shock, and the measured response was the latency to begin to drink. The latency to respond is usually reported in logarithmic units (base 10), so that a 1.0 refers to 10 s and a 2.0 refers to 100 s.
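The base-10 log transform and its inverse are straightforward; for instance, a mean log latency of 1.3 corresponds to about 20 s of suppression. (A small arithmetic aid of my own, not code from the source.)

```python
import math

def log_latency(seconds):
    # Latency in base-10 logarithmic units: 10 s -> 1.0, 100 s -> 2.0.
    return math.log10(seconds)

def latency_from_log(log_units):
    # Inverse transform: 1.3 log units is roughly 20 s.
    return 10.0 ** log_units
```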

An application of the temporal coding hypothesis is shown in the upper two panels of Figure 9.5, which are based on an experiment by Matzel, Held, and Miller (1988). In the first phase a sensory preconditioning procedure was used in which two 5-s neutral stimuli were presented sequentially. This may lead to an association between the two neutral stimuli, but one that may produce only subtle behavioral manifestations or, perhaps, none at all. In the second phase an electric shock preceded the second neutral stimulus. This backward conditioning procedure also may not produce any obvious manifestations of learning. According to the temporal coding hypothesis, however, the rats in Phase 1 learned the temporal intervals between the two neutral stimuli, and in Phase 2 they learned the temporal intervals between the electric shock and the second neutral stimulus. This temporal coding hypothesis becomes testable because it also includes the assumption that animals can integrate the temporal maps formed in the two phases. The nature of the integration is shown in the top-right panel of Figure 9.5. The assumptions are that the animals identify the common element (in this case, the second neutral stimulus), that they know the temporal relationship between the common element and the first neutral stimulus (from Phase 1), and that they know the temporal relationship between the common element and the electric shock (from Phase 2). This leads to the hypothetical temporal representation shown in the top-right panel of Figure 9.5. The prediction is that if the second stimulus is presented, there would be only slight suppression (as indicated by the lowercase letters used for the conditioned response); if the first stimulus is presented, however, there would be substantial suppression (as indicated by the uppercase letters used for the conditioned response). These predictions were supported by the results, which showed that the latency to drink was about 20.0 s (10^1.3 s) to the second stimulus and about 50.1 s (10^1.7 s) to the first stimulus.

Figure 9.5 Temporal maps for two experimental procedures.
NOTE: A: The left panel shows the procedure used by Matzel, Held, and Miller (1988); the right panel shows the temporal map as well as predicted and obtained results. B: The left panel shows the procedure used by Barnet, Cole, and Miller (1997); the right panel shows the temporal map as well as predicted and obtained results.
SOURCE: From Arcediano and Miller (in press).

A similar analysis is shown in the two lower panels of Figure 9.5, which are based on a secondary conditioning procedure (Barnet, Cole, & Miller, 1997). The reader can follow the procedures in Phase 1 and Phase 2, recognize that the hypothetical temporal representation is based on precisely the same temporal hypothesis used in the interpretation of the results of the previous experiment, and appreciate why lick suppression in the forward group should be greater to the second stimulus than to the first stimulus used in Phase 2, as well as why lick suppression in the backward group should have the reverse pattern. Finally, the figures show that the results support the predictions of the temporal coding hypothesis.

The assumptions of the temporal coding hypothesis appear to apply also in experiments in blocking (Barnet, Grahame, & Miller, 1993), overshadowing (Blaisdell, Denniston, & Miller, 1998), and conditioned inhibition (Denniston, Cole, & Miller, 1998). These experiments support the view that the temporal relationships between stimuli are learned during conditioning procedures and that intervals that are learned separately may be integrated into the same temporal map. Other experiments have provided evidence that the intervals are learned bidirectionally. For example, if a light has been followed by a tone after a 10-s interval, then when the light occurs, the animal expects the tone to occur in 10 s; when the tone occurs, the animal remembers that the light typically occurred 10 s earlier (Arcediano & Miller, in press).

Typically, conditioning theories have focused on states, such as the presence or absence of a noise. Timing theories have focused on state transitions, which may also be referred to as events or time markers. They include the onset and termination of a stimulus, a response, and a reinforcer. The time interval between any of these events can be learned. In the temporal generalization and bisection procedures, the relevant interval is from stimulus onset to termination. In other procedures animals demonstrate their ability to perceive the causal efficacy of their responses. For example, a pigeon can learn to discriminate whether a peck on the center key was followed immediately by two side key lights or whether they went on independently of the pigeon’s responses (Killeen, 1978; Killeen & Smith, 1984).

Preferences among Distributions of Time Intervals

Not only can animals use time intervals between events as discriminative stimuli; they can also indicate a preference among alternative distributions of intervals. They prefer short intervals to long ones, variable intervals to fixed intervals of the same arithmetic mean, and signaled intervals to unsignaled ones. Such preferences may be based on local expected time to reinforcement on the two alternatives, rather than on overall relative reinforcement rate.

In a concurrent schedule of reinforcement, animals distribute their responding between two continuously available response alternatives that lead to two distributions of reinforcement. The relative response rate on the two alternatives is a measure of relative preference. The matching law states that the relative response rates are equal to the relative reinforcement rates, and the generalized matching law states that the logarithm of the relative response rates is a linear function of the logarithm of the relative reinforcement rates (Davison & McCarthy, 1988; Herrnstein, 1997). Both of these are called molar laws because they make predictions about the average response rate as a function of the average reinforcement rate. The generalized matching law provides a good fit to behavior in concurrent variable-interval schedules of reinforcement, as well as in other schedules. A local maximizing account may do so also (Shimp, 1969). A related approach that makes use of current understanding of


response timing has been developed by Gallistel and Gibbon (2000).
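The generalized matching law lends itself to a simple computational check. Below is a minimal Python sketch that fits sensitivity and bias by ordinary least squares on the log ratios; the data and the sensitivity value of 0.8 are hypothetical, chosen only to illustrate undermatching, and are not from the studies cited above.

```python
import math

def generalized_matching_fit(resp_ratios, reinf_ratios):
    """Fit log(B1/B2) = s * log(r1/r2) + log(b) by ordinary least squares.
    s is sensitivity and b is bias; s = b = 1 reduces to strict matching."""
    xs = [math.log(r) for r in reinf_ratios]
    ys = [math.log(r) for r in resp_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    s = sxy / sxx          # sensitivity (slope)
    log_b = my - s * mx    # log bias (intercept)
    return s, math.exp(log_b)

# Hypothetical data: response ratios generated with sensitivity 0.8 and no
# bias, so the fit should recover undermatching (s < 1, b = 1).
reinf = [0.25, 0.5, 1.0, 2.0, 4.0]   # relative reinforcement rates
resp = [r ** 0.8 for r in reinf]     # relative response rates
s, b = generalized_matching_fit(resp, reinf)
```

A sensitivity below 1 (undermatching) is the most common empirical departure from strict matching.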

In the concurrent schedule of reinforcement, the relative response rate is used as a measure of preference. In some cases, different response rates may be due to reactions to reinforcers, to selective reinforcement of particular response patterns, and to other factors not normally considered to be involved in a concept of preference. To separate the act of choice from its consequences, Autor (1969) developed the concurrent chains procedure. In this procedure a pigeon is presented with two illuminated keys in an initial link; after a random interval of time, the next peck on one of the keys leads to a terminal link with one time to reinforcement, and the next peck on the other key leads to a terminal link with a different time to reinforcement. The relative response rate is approximated by the matching rule

RL/(RL + RR) = (1/t2L)/[(1/t2L) + (1/t2R)] (1)

where RL and RR refer to the response rates on the left and right keys of the initial link, respectively. It is also approximated by the delay reduction hypothesis (Fantino, 1969, 1977):

RL/(RL + RR) = (T − t2L)/[(T − t2L) + (T − t2R)], t2L < T, t2R < T (2)

where T refers to the mean time to reinforcement from the onset of the initial links, and t2L and t2R refer to the mean times to reinforcement from the onset of the left and right terminal links, respectively. The essence of the idea is that the value of an alternative is related to the reduction in the delay of reinforcement (in seconds) rather than, for example, to the reduction in the delay of reinforcement as a proportion. The delay reduction hypothesis has been successfully applied to many other procedures (Fantino, Preston, & Dunn, 1993), but the equation that involves only mean values does not account for the preference for variable over constant intervals with the same mean durations (Mazur, 1997).
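The delay reduction hypothesis of Equation (2) is easy to evaluate numerically. A small Python sketch (the values T = 60 s, t2L = 10 s, and t2R = 30 s are hypothetical, chosen for illustration):

```python
def delay_reduction_choice(T, t2L, t2R):
    """Predicted choice proportion for the left key under the delay
    reduction hypothesis (Equation [2]); requires t2L < T and t2R < T."""
    assert t2L < T and t2R < T
    return (T - t2L) / ((T - t2L) + (T - t2R))

# Hypothetical delays: overall mean time to food T = 60 s, left terminal
# link 10 s, right terminal link 30 s. The left link reduces the delay by
# 50 s and the right by 30 s, so preference for left is 50/(50 + 30).
p_left = delay_reduction_choice(T=60.0, t2L=10.0, t2R=30.0)
```

Note that the prediction depends on the reductions in seconds, so the same terminal-link delays yield a different preference when the initial-link duration (and hence T) changes.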

The relative preference for two delays of reinforcement may be derived from an assumption about the mathematical form of the delay-of-reinforcement gradient. One particularly successful equation is the hyperbolic-decay hypothesis:

V = A/(1 + KD) (3)

where V is the value, A is the amount of reinforcement, D is the delay of reinforcement, and K is a parameter to be estimated from the data. In an extensive program of research on factors affecting choice, Mazur (1997) has used a simple adjusting-delay procedure with pigeons. A single peck on the side key illuminated with a red light led to a fixed delay to food; a single peck on the side key illuminated with a green light led to a delay that could be adjusted. An adjustment rule led to increases or decreases of the duration of the adjusting delay until a stability criterion was achieved. This is the point at which the value (V) of the two alternatives is approximately equal. The hyperbolic decay hypothesis provided quantitative fits to experiments in which amounts, probabilities, and distributions of reinforcements were varied. Figure 9.6 shows

Figure 9.6 Adjusting delay choice procedure.
NOTE: Adjusted delay as a function of predicted delay. Predictions were based on the hyperbolic decay hypothesis shown in Equation (3).
SOURCE: From Mazur (1997).


the adjusted delay in seconds (the indifference points) as a function of the durations predicted from the hyperbolic decay hypothesis. This equation accounted for 96% of the variance of the adjusted delay measure.
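One way to see how the hyperbolic decay hypothesis yields indifference points is to solve Equation (3) for the adjusting delay at which the two values are equal. A Python sketch with hypothetical amounts, delay, and K (not values from Mazur's experiments):

```python
def indifference_delay(A_fixed, D_fixed, A_adj, K):
    """Adjusting delay at which V = A/(1 + K*D) is equal for the two
    alternatives, derived from the hyperbolic decay hypothesis (Eq. [3]):
    A_adj/(1 + K*D) = A_fixed/(1 + K*D_fixed), solved for D."""
    return (A_adj * (1 + K * D_fixed) / A_fixed - 1) / K

# Hypothetical values: a small reinforcer (A = 2) after a 10-s fixed delay
# versus a larger reinforcer (A = 6) on the adjusting key, with K = 0.2.
d = indifference_delay(A_fixed=2.0, D_fixed=10.0, A_adj=6.0, K=0.2)

# Check that the two values really are equal at the indifference point.
v_fixed = 2.0 / (1 + 0.2 * 10.0)
v_adj = 6.0 / (1 + 0.2 * d)
```

Plotting adjusted delays against such predicted delays is, in outline, what Figure 9.6 shows.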

TIME AS A RESPONSE ATTRIBUTE

The latency of a response refers to the interval between the onset of a stimulus and the occurrence of the response. The latency of a response has been used to measure three different psychological processes. First, it has been used to measure the time required for psychological processes such as memory search speed (Sternberg, 1966). It has also been used as a measure of response strength (Hull, 1943). Finally, it has been used as a measure of the expected time of a reinforcing event (Pavlov, 1927). The problem is how to interpret a response latency. If substantial time passes between the presentation of a stimulus and the occurrence of a response, is this an indication that (a) a great deal of mental effort was required to make the decision to respond, (b) the strength of the response was low, or (c) a response does not occur until the expected time to reinforcement has declined to some critical value? Empirical studies support each of these interpretations of response latencies: (a) Response latency usually increases as the number of required mental operations increases; (b) response latency usually decreases as a function of the amount of training; and (c) response latency usually is related to the time between stimulus onset and reinforcer availability. A theory of response times should make correct predictions when the independent variable is task complexity, amount of training, or time of reinforcement.

Studies of time as a stimulus attribute are often regarded as investigations of temporal perception, whereas studies of time as a response attribute are regarded as investigations of temporal performance. They clearly differ in what the investigator measures. In studies of time as a stimulus attribute, the measure is a categorical one such as a left- or right-lever response; in studies of time as a response attribute, the measure is a quantitative one on the temporal dimension. However, both types of studies provide information about perception, memory, and decision processes.

Platt and Davis (1983) developed a bisection procedure in which the important dependent variable is the time of occurrence of the two responses. In contrast to the bisection procedure previously described, in this procedure the animal can respond at any time during the interval. In Platt and Davis’s procedure, two side keys were turned on, and (with equal probability) either (a) after a short interval of time, the first peck to the left key was followed by food or (b) after a long interval of time, the first peck to the right key was followed by food. (The left and right keys were counterbalanced across pigeons.) In one condition the short interval was 40 s and the long interval was 200 s. As a result of this training, the response rate on the left key increased to a maximum near 40 s and then declined, and the response rate on the right key increased throughout the 200 s. The point of bisection was defined in two ways: from the time at which the response rates on the two keys were equal (“rate”), and from the median time of a switch from one key to the other (“switch”). Both of these are measures of the time at which the animal has equal preference for the two alternatives (i.e., is indifferent toward them). With both definitions, the point of bisection was approximately equal to the geometric mean of the short and long reinforced intervals (see Figure 9.7). Thus, the results of an experiment in which the animal is free to respond throughout the interval (such as Platt & Davis, 1983) are similar to those of one in which the animal is exposed to the stimulus and is permitted to make only a single response (Church & Deluty, 1977). Studies of time as a response attribute may be accounted


Figure 9.7 The point of bisection as a function of the geometric mean of the short and long intervals of four pigeons.
NOTE: The two measures of the point of bisection (rate and switch) are described in the text. The percentage of variance accounted for (ω2) is shown.
SOURCE: From Platt and Davis (1983).

for by the same processes as studies of time as a stimulus attribute.
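The geometric-mean bisection result is simple to compute. A Python sketch for the 40-s/200-s condition described above:

```python
import math

def bisection_point(short, long_):
    """Empirical bisection points fall near the geometric mean of the short
    and long reinforced intervals (Church & Deluty, 1977; Platt & Davis,
    1983), not the arithmetic mean."""
    return math.sqrt(short * long_)

gm = bisection_point(40.0, 200.0)   # geometric mean, roughly 89 s
am = (40.0 + 200.0) / 2             # arithmetic mean, 120 s, for contrast
```

The gap between the two means (roughly 89 s versus 120 s) is large enough that the data clearly discriminate between the two rules.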

Fixed-Interval Schedule of Reinforcement

A fixed-interval schedule of food reinforcement is one in which there is a fixed time from the delivery of a food until the availability of the next food; food is delivered immediately after the next response. Thus, a cycle consists of a fixed interval of time followed by reinforcement of the next response. Skinner referred to this as periodic reconditioning because there was a fixed interval of extinction followed by continuous reinforcement. He identified four sources of variability in the response rates of rats in a fixed-interval schedule of reinforcement (Skinner, 1938, pp. 123–126). There were differences among sessions, differences among intervals, and differences as a function of time since an interval began, and the responses tended to appear in clusters.

From the standpoint of temporal learning, the change in the response rate as a function of time since the previous delivery of


food is the most diagnostic source of variability in a fixed-interval schedule of reinforcement. The increase in response rate is similar at fixed intervals of quite different durations. In one experiment, pigeons were trained on fixed intervals of 30 s, 300 s, and 3,000 s; the response rate was reported in successive fifths of the interval as a fraction of the response rate in the final fifth of the interval (Dews, 1970). This normalized response rate is shown in Figure 9.8 as a function of the fraction of the interval; the functions for the three fixed intervals of very different durations are approximately the same. Similar results, although usually with a much smaller range of fixed intervals, have been obtained with different species, intervals, response measures, and reinforcers (Gibbon, 1991).

Figure 9.8 Fixed-interval procedure.
NOTE: Response rate (as a fraction of terminal rate) as a function of time since food (as a fraction of the interval).
SOURCE: From Dews (1970).

The increase in rate as a function of time since the previous reinforcement has been described as a gradual increase (a scalloped pattern) and also as an abrupt increase (a break-run pattern). This is an important distinction because the theoretical processes necessary to generate these two patterns are different. Although extensive research by Skinner (1938), Ferster and Skinner (1957), and Dews (1970) has indicated that gradual increases often occur on individual cycles, quantitative analyses based on the fitting of a two-state model have accounted for most of the variance (Church, Meck, & Gibbon, 1994; Schneider, 1969). The two-state model assumes that the response rate is constant at some low rate on each cycle until a point of transition, when it becomes constant at some high rate. Further, it is assumed that the point of transition is a random variable with a mean that is proportional to the fixed interval. The average response rate of many such step functions with variable points of transition is a gradually increasing function.

To test the two-state hypothesis of fixed-interval performance, it is necessary to identify a point of transition on each cycle. This classification can be done with a formal definition of the temporal criterion that provides the largest difference in response rate between the early and later parts of the interval. Then cycles can be averaged, not with respect to the time of the last food, but with respect to the temporal criterion. These average functions are characterized by a rather steady low rate prior to the criterion, followed by a rather steady high rate after the criterion (Schneider, 1969). An example from one pigeon on a fixed interval of 256 s is shown in the top panel of Figure 9.9. Before the breakpoint criterion (shown by the vertical dashed line) the response rate was low and approximately constant; after the breakpoint criterion it was high and approximately constant. There are systematic deviations from a step function,


Figure 9.9 Fixed-interval procedure.
NOTE: Top panel: Response rate as a function of time relative to the breakpoint. Bottom panel: Time of the breakpoint as a function of the duration of the fixed interval.
SOURCE: From Schneider (1969).

but they are small. If this analysis were done on an increasing function, the increase should be detectable both in the period prior to the criterion and in the period after the criterion.

Schneider (1969) tested pigeons on fixed-interval schedules of 16 s, 32 s, 64 s, 128 s, 256 s, and 512 s. He found that the breakpoint occurred at approximately a constant proportion of the fixed interval over this range of intervals; the response rate changed from a low state to a high state at about two thirds of the interval (see the bottom panel of Figure 9.9). This is another example of approximately proportional timing.
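The claim that averaging break-run cycles yields a smooth scallop can be illustrated by simulation. The Python sketch below averages many two-state cycles whose breakpoints are normally distributed around two thirds of the interval; all parameter values are illustrative rather than fit to Schneider's data:

```python
import random

def average_two_state(fi=256.0, cycles=2000, bins=32, low=0.1, high=2.0,
                      mean_prop=2 / 3, sd_prop=0.15, seed=1):
    """Average many break-run (two-state) cycles. Each cycle responds at a
    low rate until a breakpoint drawn around two thirds of the fixed
    interval, then at a high rate. Because every individual cycle is a
    non-decreasing step function of time, the average over cycles is a
    smooth, gradually increasing 'scallop'."""
    rng = random.Random(seed)
    width = fi / bins
    totals = [0.0] * bins
    for _ in range(cycles):
        bp = rng.gauss(mean_prop * fi, sd_prop * fi)  # this cycle's breakpoint
        for i in range(bins):
            t = (i + 0.5) * width                     # bin midpoint
            totals[i] += high if t >= bp else low
    return [tot / cycles for tot in totals]

avg = average_two_state()
# avg rises monotonically even though each simulated cycle is a step function
```

This is the sense in which a scalloped mean function is compatible with abrupt break-run performance on individual cycles.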

To determine whether there are systematic deviations from approximately proportional timing, it is necessary to investigate a large number of closely spaced fixed intervals. A ramped fixed-interval procedure is an efficient way to determine the functional relationship between the time of starting to respond and the length of the fixed interval (see Figure 9.10). In two experiments, rats were tested on a fixed interval that varied between 10 s and 140 s in 2-s steps (Crystal, Church, & Broadbent, 1997). Each successive interval was 2 s longer than the previous interval, until the maximum interval of 140 s was presented, and then each successive interval was 2 s shorter than the previous interval, until the minimum interval of 10 s was presented. The median start times (solid points in the top panels) and the interquartile range of the start times (open points in the top panels) were approximately proportional to the interval. The residuals of these two measures from the best-fitting straight line are shown in the bottom panels. These showed, relative to the linear rule, that particular intervals were overestimated, that others were underestimated, and that particular intervals were estimated with more or less variability. Tests at slightly different ranges indicated that the systematic residuals were related to the absolute values of the intervals, rather than to the relative values. As in the case of systematic deviations from proportionality in the temporal discrimination task, the important features of these deviations are that (a) they are systematic rather than random, (b) they are local (i.e., the dependent measure is lower at some time intervals than at shorter or longer intervals), and (c) there are multiple local minima. These small departures from Weber’s law in the fixed-interval task may provide evidence about the mechanism involved in temporal perception.

In a standard fixed-interval procedure, the delivery of the food marks the beginning of an interval that culminates in the next delivery of


Figure 9.10 Ramped interval procedure.
NOTE: The top panels show the median start times (closed circles) and interquartile range of the start times as a function of interval duration in seconds. The bottom panels show the residuals of these two measures from the best-fitting straight line.
SOURCE: From Crystal, Church, and Broadbent (1997).

food. In a discriminative fixed-interval procedure, the onset of a stimulus is a time marker. A cycle consists of an interval without a stimulus, the onset of a stimulus (the time marker), the availability of food after a fixed interval after the onset of the stimulus, a response, and the delivery of food. The same regularities described for the standard fixed-interval procedure apply to the discriminative fixed-interval procedure, especially if measures are taken to minimize the effect of food delivery as an additional time marker. With a short and fixed interval between the delivery of food and the onset of the stimulus, the delivery of food can be an additional time marker. Typically, investigators use a long random interval between the delivery of food and the onset of the stimulus to minimize this effect (Church, Miller, Meck, & Gibbon, 1991).

The Peak Procedure

In the discriminative fixed-interval procedure, the mean response rate increases as a function of time from stimulus onset to food. The peak procedure randomly intermixes with these food cycles other cycles in which the stimulus lasts much longer and in which there is no food. On these nonfood cycles, the mean response rate can be examined as a function of time from stimulus onset to a time much later than the time that food is sometimes delivered. Catania (1970) trained a pigeon on a peak procedure in which there were food and nonfood cycles. On a food cycle, the key light and house light were turned on, and food was delivered following the first response after a 10-s interval. On a nonfood cycle the lights remained on for 38 s, and no food was delivered.


On nonfood cycles the response rate increased as a function of time to a maximum near the time of reinforcement and then declined. The overall response rate was influenced by the probability of a food cycle (p = .9 or .1), but this did not influence the time of the maximum response rate, which was near 10 s in both conditions.

This procedure was used effectively by Roberts (1981) to determine factors affecting the peak time and the peak rate. For example, the peak time is approximately equal to the time at which food is sometimes delivered (Figure 9.11, top panel), and the peak rate is positively related to the probability of food (Figure 9.11, bottom panel). The distinction between factors that produce a horizontal shift in the function (on the time axis) and factors that produce a vertical shift in

Figure 9.11 Peak procedure.
NOTE: Response rate as a function of time since stimulus onset. Top panel: Time of food availability was 20 s or 40 s. Bottom panel: Probability of food availability was .8 (high food) or .2 (low food).
SOURCE: From Roberts (1981).

the function (on the response axis) can be made easily from the nonfood cycles of a peak procedure, which increase to a maximum and then decrease. This distinction is much more difficult to make on the basis of the food cycles of a peak procedure (or with a fixed-interval procedure) because the distinction between vertical and horizontal shifts of a rising function is more subtle. Experiments with the peak procedure have provided evidence for six principles that are similar to those based on investigations of temporal bisection:

1. Symmetry. The function relating response rate to time since stimulus onset (the peak function) is approximately symmetrical on an arithmetic scale of time, often with some positive skew.


2. Peak time. The maximum response rate occurs near the time of the reinforced interval.

3. Proportional timing. The peak time increases approximately linearly with the time of the reinforced interval.

4. Scalar variability. The standard deviation of the peak time increases approximately linearly with stimulus duration.

5. Weber’s law. The coefficient of variation (the standard deviation divided by the mean) of the peak location is approximately constant.

6. Superposition. The peak functions at all ranges superpose when the duration of a stimulus is divided by the peak time (which is often approximated by the time of reinforcement).

The superposition result is normally obtained by training different animals at different intervals, or by training the same animals for many sessions on one interval and then many sessions on another interval. It can, however, be obtained also by training animals on two different intervals (such as 10 s and 30 s) that are marked by different stimuli (such as light and noise) and intermixed in each session (Gibbon, Church, & Meck, 1984). In this experiment, the peak times were somewhat greater than 10 s and somewhat less than 30 s when the times of scheduled reinforcement were 10 s and 30 s, respectively. The functions did not superimpose when normalized by the time of scheduled reinforcement, but they did superimpose when normalized by the peak time. This suggests that the relative subjective time, rather than the relative physical time, is used in determining the times to respond.
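Superposition can be illustrated with synthetic peak functions that obey scalar variability. In the Python sketch below, the Gaussian shape and the 0.25 coefficient of variation are assumptions made for illustration, not estimates from the experiments cited:

```python
import math

def peak_function(peak_time, times, spread_frac=0.25):
    """Gaussian-shaped peak function whose spread grows in proportion to
    the peak time (scalar variability); an illustrative shape, not data."""
    sd = spread_frac * peak_time
    return [math.exp(-0.5 * ((t - peak_time) / sd) ** 2) for t in times]

# Two hypothetical peak functions with peaks at 10 s and 30 s, each sampled
# at the same 201 points expressed as proportions (0 to 2) of its peak time.
short_fn = peak_function(10.0, [i * 0.1 for i in range(201)])
long_fn = peak_function(30.0, [i * 0.3 for i in range(201)])

# On the normalized (proportion-of-peak-time) axis the functions coincide:
superposed = all(abs(a - b) < 1e-9 for a, b in zip(short_fn, long_fn))
```

If the spread were constant rather than proportional to the peak time, the normalized functions would not superpose; superposition is thus a direct signature of the scalar property.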

The mean response functions obtained with the peak procedure gradually increase to a maximum near the time that reinforcement sometimes occurs, followed by a slightly asymmetrical decrease. This is not characteristic of individual cycles. Cycles that end in food typically have no responding until some point after stimulus onset, and then they have responding at a fairly steady rate until the food is received. (This is the same pattern typically obtained in fixed-interval schedules of reinforcement.) Cycles that do not end in food typically have no responding until some point after stimulus onset, then a fairly steady rate until some point after the time that food is sometimes received, and finally no responding until the next stimulus onset. This is a low-high-low pattern of responding in which the period of high response rate generally brackets the time that food is sometimes received. On each cycle it is possible to define a start and stop of the high response rate, and from these to define a center (halfway between the start and stop) and a spread (the difference between the start and stop). The patterns of correlations among these measures are quite consistent, and they have been used in the development of quantitative theories of timing (Cheng & Westwood, 1993; Church et al., 1994; Gibbon & Church, 1990, 1992).

Differential Reinforcement of Low Response Rates

Reinforcers, stimuli, and responses can all be used as time markers. Animals learn to adjust to the interval between reinforcers in temporal conditioning and in fixed-interval schedules of reinforcement; they learn to adjust to the interval between a stimulus and a reinforcer in discriminated fixed-interval schedules of reinforcement and in the peak procedure; and they learn to adjust to the interval between a response and a reinforcement in differential reinforcement of low response rate schedules (DRL; Harzem, 1969; Skinner, 1938). In the DRL schedule of reinforcement, a response


that is separated by more than t seconds from the previous response will be reinforced. For example, in a DRL-20 schedule, the first response that is spaced more than 20 s from the previous response will be reinforced. This leads to a low response rate and many interresponse intervals that are near 20 s. The mean response rate is inversely related to the duration of the DRL schedule; the interresponse intervals are typically bimodal, with one very short mode and the other near the duration of the DRL schedule.
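The DRL contingency itself can be stated in a few lines of code. A Python sketch (the response times are hypothetical, and treating the first response of the session as unreinforced is a simplifying assumption; conventions vary):

```python
def drl_reinforced(response_times, t=20.0):
    """Return which responses earn food on a DRL-t schedule: a response is
    reinforced only if more than t seconds have passed since the previous
    response. The first response has no predecessor and is treated as
    unreinforced here (a simplifying assumption)."""
    reinforced = []
    prev = None
    for rt in response_times:
        reinforced.append(prev is not None and rt - prev > t)
        prev = rt
    return reinforced

# Hypothetical response times in seconds on a DRL-20 schedule: only the
# responses spaced more than 20 s from their predecessor pay off.
out = drl_reinforced([5.0, 12.0, 37.0, 62.0, 70.0])
# → [False, False, True, True, False]
```

Note that any premature response resets the clock, which is why the schedule selects for long interresponse intervals.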

Dynamics of Temporal Learning

Most research on temporal learning has concerned asymptotic performance under steady-state conditions. The study of the initial acquisition of temporal learning is more difficult because the amount of data that can be recorded at each stage of acquisition from each animal is limited. An analysis of the development of a temporal gradient by pigeons trained on 40-s and 80-s fixed-interval schedules of reinforcement (Machado & Cevik, 1998) suggested that training led to an increase in the slope of a nonlinear increasing gradient without markedly affecting either the mean response rate or the response rate at some intermediate time (a fixed pivot point). This might mean that the mean density of food (leading to the mean response rate) and the temporal interval (leading to the fixed pivot point) were learned rapidly, but that the quantitative features of the temporal gradient developed only with considerable training.

The study of transitions in temporal schedules of reinforcement has shown that the effects may be very rapid. A single, short interfood interval in a series of longer interfood intervals leads to an immediate shortening of the waiting time on the next interval (Higa, Wynne, & Staddon, 1991). Even changes in the schedule of random interval reinforcements can have a rapid effect on behavior, as shown by Mark and Gallistel (1994) in their studies of transitions in the relative rates of brain-stimulation reward of rats.

Temporal Pattern Learning

Considerable research has been done with repeating sequences of interfood intervals by Staddon and his colleagues. In these experiments pigeons have been exposed to a repeating series of food-food intervals, and the time from food delivery until the next response was measured (Staddon & Higa, 1991). The effects of many different series were explored, and under many of them the pigeons appeared to track the series. This was often due to an immediate reaction to the previous interval. Staddon and his colleagues proposed the one-back hypothesis, in which the wait time on a particular interval was a linear function of the previous interfood interval. Thus, if the series changed gradually, a linear function of the previous interfood interval would approximate a linear function of the next interfood interval. In some of the research by Staddon and his colleagues, the wait times of the pigeons were accounted for better by the duration of the current interval than by the duration of the previous interval. In a procedure for rats in which a 10-step ramp function of intervals was used, the wait time was more closely related to the duration of the present or next interval than to the previous interval (Church & Lacourse, 1998). This indicates that, under conditions that have not yet been clearly specified, animals anticipate subsequent intervals in a repeating series.
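The one-back hypothesis is a simple linear rule. A Python sketch with hypothetical slope and intercept values (not estimates from Staddon and Higa):

```python
def one_back_wait_times(interfood_intervals, a=0.3, b=1.0):
    """Wait times predicted by the one-back hypothesis: the wait time in
    interval n is a linear function (slope a, intercept b) of the duration
    of interval n-1. Slope and intercept here are illustrative values."""
    waits = []
    prev = None
    for interval in interfood_intervals:
        if prev is not None:
            waits.append(a * prev + b)
        prev = interval
    return waits

# A gradually lengthening series: the predicted waits track the series but
# lag one step behind, which is why the rule succeeds on gradual ramps.
series = [10, 12, 14, 16, 18]
waits = one_back_wait_times(series)   # computed from intervals 10, 12, 14, 16
```

On an abruptly changing series the same rule fails, which is one way the tracking data discriminate one-back control from anticipation of the current interval.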

Classical Conditioning

Many classical conditioning experiments involve variations in the distribution of time


intervals between stimuli and reinforcers. These include the three procedures diagrammed in Figure 9.1 (temporal conditioning, delayed conditioning, and trace conditioning), as well as many others. In some cases, more than one temporal interval can be shown to affect behavior simultaneously. For example, in autoshaping of pigeons (a delayed conditioning procedure in which the stimulus is a lighted key, the reinforcer is food, and the conditioned response is a peck on the key), the interval from stimulus to food and the interval from one food to the next may both be constant. Under these conditions the number of reinforcers necessary for a pigeon to achieve a criterion of acquisition is negatively related to the ratio of the stimulus-food interval to the food-food interval (Gibbon & Balsam, 1981). Investigations of these two time intervals have been done in an appetitive conditioning experiment with rats in which the stimulus is usually a noise, the reinforcer is food, and the conditioned response is a head entry into the food cup (Holland, 2000; Kirkpatrick & Church, 2000; Lattal, 1999). A plausible interpretation of the results is that the two time intervals have independent effects and that simultaneous timing of the two intervals leads to the observed response gradients, discrimination ratios, and number of responses to achieve an acquisition criterion (Kirkpatrick & Church, 2000).

The ability of animals to time intervals is not restricted to constant intervals. If the time between food reinforcements is constant, rats have an increasing tendency to make a head entry into the food cup as a function of time since the last food; if the time between food reinforcements is random, rats have a relatively constant tendency to make a head entry as a function of time since the last food; and if the time between foods is the sum of a fixed and a random time, the head-entry gradient reflects the expected time to the next food as a function of time since food. In these experiments, the overall response rate was determined by the mean reinforcement rate in the fixed, random, and combined conditions (Kirkpatrick & Church, in press).
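The contrast between the fixed and random conditions follows from elementary properties of the interval distributions: with a fixed interval the expected time to the next food declines linearly with time since food, whereas with exponentially distributed (random) intervals it is constant by the memoryless property. A Python sketch (the 90-s mean interval is a hypothetical value):

```python
def expected_time_to_food_fixed(T, t):
    """With a fixed interfood interval T, the expected time to the next
    food declines linearly with time t since the last food (t < T)."""
    return T - t

def expected_time_to_food_random(T, t):
    """With exponentially distributed (random) intervals of mean T, the
    memoryless property makes the expected time to the next food constant,
    regardless of the time t since the last food."""
    return T

# Hypothetical mean interval of 90 s: the fixed schedule predicts growing
# anticipation as time passes, the random schedule predicts a flat gradient.
fixed_gradient = [expected_time_to_food_fixed(90.0, t) for t in (0, 30, 60)]
random_gradient = [expected_time_to_food_random(90.0, t) for t in (0, 30, 60)]
```

These expected-time functions mirror the increasing and flat head-entry gradients described above.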

The control of behavior by random intervals between shocks was established in an important experiment by Rescorla (1968). Rats were trained in a conditioned suppression procedure in which they were given five sessions of training to press a lever for food reinforcers, then five sessions with occasional shocks in the presence or absence of a stimulus, and then five sessions in the lever box in which the stimulus was presented but no food was delivered (extinction). During the shock-conditioning phase, there were 2-min presentations of the stimulus and a mean interstimulus interval of 8 min. Shocks were administered during the stimulus and in the absence of the stimulus according to random interval schedules. The schedules used were random intervals of 5 min, 10 min, or 20 min (or no shock). All of the combinations were used in which the shock rate in the stimulus was greater than or equal to the shock rate in the absence of the stimulus (10 groups). The results are shown in Figure 9.12. (The probabilities given in the figure are for the expected number of shocks in a 2-min interval.) The figure shows that the relative response rate in the presence of the stimulus (the suppression ratio) was affected by the shock rate in the presence of the stimulus relative to the shock rate in the absence of the stimulus.
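The suppression ratio used in such experiments is conventionally computed as B/(A + B), where B is responding during the stimulus and A is responding during a comparable baseline period. A Python sketch with hypothetical response rates:

```python
def suppression_ratio(rate_in_stimulus, rate_in_baseline):
    """Conventional suppression ratio B/(A + B): 0 indicates complete
    suppression of responding during the stimulus, .5 indicates no
    suppression relative to baseline."""
    return rate_in_stimulus / (rate_in_baseline + rate_in_stimulus)

# Hypothetical lever-press rates (responses/min), chosen for illustration:
strong = suppression_ratio(2.0, 18.0)    # strong fear: ratio near 0
none = suppression_ratio(10.0, 10.0)     # no contingency learned: ratio .5
```

In Rescorla's design, groups with equal shock rates in and out of the stimulus stayed near .5, whereas groups with a higher shock rate during the stimulus fell toward 0.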

Figure 9.12 Conditioned suppression procedure. NOTE: Median suppression ratio as a function of days of extinction. The expected rate of shock in the presence of the stimulus was the same for all the functions in a panel; the expected rate of shock in the absence of the stimulus was different for each of the functions in a panel. SOURCE: From Rescorla (1968).

Most interpretations of the contingency experiments of Rescorla (1968) emphasize the instantaneous probability of a shock in determining the amount of suppression. However, if shock is delivered at the end of a fixed-duration stimulus, fear increases during the stimulus; if it is delivered at random during the stimulus, fear decreases during the stimulus (Libby & Church, 1975). This was interpreted to mean that “fear at a given time after signal onset depends upon the expected time to the next shock” (p. 915). The decrease in fear when shock is delivered at random during a fixed-duration stimulus (with a lower shock rate in the absence of the stimulus) may occur because the expected time to the next shock does decrease as a function of time since stimulus onset. In a condition in which all shocks occurred at random during the stimulus, Rescorla (1968) found that fear decreased markedly as a function of time during the conditioned stimulus. The conditional expected time to reinforcement is also a critical determinant of behavior in classical conditioning experiments with positive reinforcement (Kirkpatrick & Church, in press).

Learning of Circadian Phase

There is no general consensus regarding the relationship between the circadian clock (Moore-Ede et al., 1982) and the various temporal abilities of animals that have been described in this chapter.

One possibility is that they are separate mechanisms used for entirely different purposes. A major function of a circadian clock is to coordinate the behavior of an animal with its external environment; the major function of an interval clock is to measure short intervals of time from an arbitrary stimulus that occurs at an arbitrary time. Of course, this does not preclude some influences that the circadian system may have on an interval timing system.



Another possibility is that the phase of the circadian clock serves as a time marker. Both interval and circadian timing may be involved in daily meal anticipation (Terman, Gibbon, Fairhurst, & Waring, 1984) and in incubation of eggs by male and female ring doves (Gibbon, Morrell, & Silver, 1984). Animals readily learn to go to different places at different times of day, a well-studied phenomenon known as time-place learning (Bieback, Gordijn, & Krebs, 1989; Carr & Wilkie, 1999), and often there are alternative bases for making that discrimination.

The most interesting possibility is that there is a fundamental connection between circadian and interval timing. The most thorough analysis of this possibility is contained in the three chapters on timing in Gallistel’s 1990 book, The Organization of Learning. A periodic clock can serve for the perception of duration if the animal is able to subtract the current phase from a previously remembered phase. In a study of meal anticipation of rats, Crystal (1999) found that rats anticipated the time of food availability with greater precision in the circadian range (22–26 hr) than at shorter or longer intervals (see Figure 9.13). Such privileged intervals of low variability have also been identified in shorter time ranges, as described earlier in the chapter.

Figure 9.13 Circadian time perception. NOTE: The relative precision of food anticipation was greater in the circadian range (open circles) than at shorter or longer intervals (closed circles). SOURCE: From Crystal (2001).
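The phase-subtraction idea can be made concrete: given two readings of a periodic clock, the elapsed duration is their difference modulo the period. A minimal sketch (the 24-hr period and hour units are illustrative choices):

```python
def duration_from_phases(start_phase_hr: float, current_phase_hr: float,
                         period_hr: float = 24.0) -> float:
    """Duration recovered by subtracting a remembered phase from the current
    phase of a periodic (e.g., circadian) clock. The modulo handles intervals
    that wrap past the start of a new cycle, but intervals longer than one
    period are aliased -- a periodic clock alone cannot distinguish them."""
    return (current_phase_hr - start_phase_hr) % period_hr

print(duration_from_phases(23.0, 1.5))  # 2.5 (wraps across the cycle boundary)
print(duration_from_phases(6.0, 18.0))  # 12.0
```

The aliasing noted in the comment is one reason a circadian phase reading alone is a poor substitute for an interval clock when timing short, arbitrarily placed events.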

QUANTITATIVE MODELS OF TIMING

Most research on temporal learning concerns the ways in which intervals between stimuli, responses, and outcomes affect response probability and response time. These facts have led to the descriptive generalizations that have been described in this chapter. These are sometimes referred to as general principles because they may apply to many different stimuli, reinforcers, procedures, and species. In many cases these general principles can be described in terms of quantitative functions.

Some investigators have attempted to identify reliable differences in the timing behavior of different species, with the goal of relating these differences to evolutionary pressures, ecology, and brain structure. This approach is clearly described in Shettleworth’s (1998) book, Cognition, Evolution, and Behavior. Other investigators have proposed that no general principles of timing apply to all stimuli, reinforcers, procedures, and species. The present chapter has emphasized similarities rather than differences, but any complete understanding of temporal learning must also include an appreciation of reliable differences.

One way to explain behavior is in terms of general principles, such as the scalar property. Thus, in a new procedure with a new measure of timing, one may hypothesize that the standard deviation (rather than the variance) of that measure will increase linearly with the mean of the measure. This kind of explanation, which may also be regarded as an organization of the facts, was important in the development of theoretical understanding of temporal learning.
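A minimal numerical illustration of the scalar property (the data are invented so that spread grows in proportion to the mean):

```python
import statistics

def coefficient_of_variation(samples):
    """Standard deviation divided by the mean. Under the scalar property,
    this ratio should be roughly constant across target durations."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Invented timing measures (in seconds) at three target intervals:
short, medium, long_ = [9, 10, 11], [18, 20, 22], [36, 40, 44]

for samples in (short, medium, long_):
    print(round(coefficient_of_variation(samples), 3))  # 0.1 each time
```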



Another way to explain behavior is in terms of a process model, such as the scalar timing theory described next. In a process model there are input, intermediate, and output states; connections between the states may be described with quantitative rules. A process model provides a way to generate output that can be compared with the data.

Scalar Timing Theory

Scalar timing theory refers both to a set of general principles and to a process model. The distinction between the development of general principles and of a process model is described clearly by Gibbon (1991) for scalar timing theory. The former is described in the section on the historical origins of the scalar property, and the latter is described in the section on the causal origins of scalar timing.

In the historical section, Gibbon (1991) referred to Dews’ (1970) finding of superposition in the temporal gradients produced by a fixed-interval procedure with very different interval lengths, which is reproduced as Figure 9.8. He then replotted Catania’s (1970) finding: the constancy of the coefficient of variation of the temporal gradients produced by a differential reinforcement of low response rates procedure with very different response-to-reinforcer intervals. Other examples came from studies of avoidance learning in the 1950s, choice studies, and classical conditioning. Gibbon’s (1977) influential article titled “Scalar Expectancy Theory and Weber’s Law in Animal Timing” served to organize such empirical generalizations into a more general principle.

In the causal section, Gibbon (1991) referred to an information-processing model of the timing process that was introduced by Gibbon and Church (1984) for the analysis of animal timing behavior (see Figure 9.14). (This was similar to Treisman’s 1963 model of the internal clock.) It included an internal clock (which included a pacemaker, a switch, and an accumulator), a memory, and a decision process. Gibbon and Church described the effects of variance at several places in this system and the effects of two different response decision rules.
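A minimal sketch of that clock stage (the pacemaker rate, switch latency, and Poisson pulse assumption are illustrative choices, not the published model's fitted values):

```python
import math
import random

def accumulator_reading(duration_s: float, pacemaker_rate_hz: float = 5.0,
                        switch_latency_s: float = 0.2) -> int:
    """One trial of the clock stage: a pacemaker emits pulses at a mean rate,
    a switch gates them into an accumulator for the signal duration (minus a
    switch latency), and the accumulated count represents elapsed time."""
    gated_time = max(0.0, duration_s - switch_latency_s)
    lam = pacemaker_rate_hz * gated_time  # expected pulse count
    # Draw a Poisson-distributed count (Knuth's multiplication method).
    limit, count, product = math.exp(-lam), 0, random.random()
    while product > limit:
        count += 1
        product *= random.random()
    return count
```

Mean counts grow roughly linearly with the gated duration, so the count can serve as the subjective-time input to the memory and decision stages.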

Figure 9.14 Scalar timing theory. SOURCE: From Church and Gibbon (1982).

A diagrammatic representation of the theory applied to the temporal generalization procedure is shown in Figure 9.14. The top panel shows subjective time as a function of signal duration. After a latency (T0) the mean subjective time (XT) increases linearly with signal duration (T). Because of numerous sources of variability, a particular signal duration does not always produce exactly the same subjective time; a normal distribution of subjective times for a particular reinforced signal duration is shown on the vertical axis of the top panel. The middle panel shows the absolute difference between the subjective time and a random sample of one element from the distribution of remembered times of reinforcement (the subjective discrepancy). The bottom panel shows this measure of subjective discrepancy expressed as a ratio of the sampled element (the discrimination ratio). Finally, a normal distribution of thresholds is postulated with a mean of B. A random sample of one element from this threshold distribution (b) is used to predict the occurrence of a response. If the discrimination ratio is above the threshold, no response will be made; if it is below the threshold, a response will be made.
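Putting the pieces of that description together, a minimal simulation of the temporal generalization decision (all parameter values are illustrative assumptions, not fitted estimates):

```python
import random

def respond(signal_s: float, reinforced_s: float, clock_cv: float = 0.2,
            latency_s: float = 0.1, threshold_mean: float = 0.3,
            threshold_sd: float = 0.05) -> bool:
    """One trial of the scalar-timing decision rule for temporal generalization:
    respond when |subjective time - sampled remembered time| / remembered time
    falls below a threshold sampled from a normal distribution."""
    subjective = max(0.0, signal_s - latency_s) * random.gauss(1.0, clock_cv)
    remembered = reinforced_s * random.gauss(1.0, clock_cv)
    ratio = abs(subjective - remembered) / remembered
    return ratio < random.gauss(threshold_mean, threshold_sd)

# Response probability should peak near the reinforced duration (4 s here):
random.seed(1)
for t in (2.0, 4.0, 8.0):
    p = sum(respond(t, 4.0) for _ in range(5000)) / 5000
    print(t, round(p, 2))
```

Multiplying by a unit-mean Gaussian gives both the subjective and remembered times the scalar (proportional) variability that the theory requires.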

The appendix of the Gibbon, Church, and Meck (1984) article provides an explicit solution of the theory for three timing procedures (temporal generalization, the peak procedure, and the time-left procedure). For example, a two-parameter version of scalar timing theory involving sensitivity to time and a bias parameter led to the smooth functions near the data points in Figure 9.2. A two-parameter version of scalar timing theory involving the coefficient of variation of clock speed and a bias parameter led to the smooth functions near the data points in Figure 9.4.

In practice, most of the tests of the model have been conducted with simulations. Scalar timing theory has been particularly effective in accounting for well-trained behavior on a wide range of temporal discrimination and performance tasks; it was not developed to account for acquisition of temporal performance or for adaptation to new temporal intervals. This is primarily due to the assumptions about memory storage and retrieval: Each reinforced interval is stored as an example, and retrieval is based on a random sample of one of these examples.

A Behavioral Theory of Timing

An alternative theory of timing, proposed by Killeen and Fetterman (1988), was based on the possibility that the animal’s behavior might itself serve as a clock. For example, Killeen and Fetterman referred to results from rats on a 30-s fixed-interval schedule of lever pressing in which measures were taken of eating, drinking, general activity, activity in a running wheel, and lever responses (Roper, 1978). The rats typically began the interval with eating activity, then drinking activity, and then lever responding. Such behavioral states, rather than a hypothetical accumulator mechanism, could serve as a clock. Machado (1997) presented a fully specified version of a behavioral theory of timing. Although it is described in behavioristic terms, whereas the information-processing version of scalar timing theory is described in cognitive terms, it is not clear that this is an essential distinction. Process theories of timing and conditioning can be described in terms of processes such as perception, memory, and decision (Church & Kirkpatrick, 2001).

Many other quantitative models of timing and conditioning have been proposed. These include the multiple-oscillator model (Church & Broadbent, 1990), the spectral theory of timing, the multiple-time-scale model of time (Staddon & Higa, 1999), and real-time models of conditioning. They differ in their perceptual representations of time, in memory representations, and in their decision processes. All of them have unique merits, but not one would be able to pass a Turing test (Church, 2001).

REFERENCES

Allan, L. G. (1998). The influence of the scalar timing model on human timing research. Behavioural Processes, 44, 101–117.

Allan, L. G., & Gibbon, J. (1991). Human bisection at the geometric mean. Learning and Motivation, 22, 39–58.

Arcediano, F., & Miller, R. R. (in press). Some constraints for models of timing: A temporal coding hypothesis perspective. Learning and Motivation.

Autor, S. M. (1969). The strength of conditioned reinforcers as a function of frequency and probability of reinforcement. In D. P. Hendry (Ed.), Conditioned reinforcement (pp. 127–162). Homewood, IL: Dorsey Press.

Barnet, R. C., Cole, R. P., & Miller, R. R. (1997). Temporal integration in second-order conditioning and sensory preconditioning. Animal Learning & Behavior, 25, 221–233.

Barnet, R. C., Grahame, N. J., & Miller, R. R. (1993). Temporal encoding as a determinant of blocking. Journal of Experimental Psychology: Animal Behavior Processes, 19, 327–341.

Bieback, H., Gordijn, M., & Krebs, J. R. (1989). Time-and-place learning by garden warblers, Sylvia borin. Animal Behaviour, 37, 353–360.

Blaisdell, A. P., Denniston, J. C., & Miller, R. R. (1998). Temporal encoding as a determinant of overshadowing. Journal of Experimental Psychology: Animal Behavior Processes, 24, 72–83.

Carr, J. A. R., & Wilkie, D. M. (1999). Rats are reluctant to use circadian timing in a daily time-place task. Behavioural Processes, 44, 287–299.

Catania, A. C. (1970). Reinforcement schedules and psychophysical judgments: A study of some temporal properties of behavior. In W. N. Schoenfeld (Ed.), The theory of reinforcement schedules (pp. 1–42). New York: Appleton-Century-Crofts.

Cheng, K., & Westwood, R. (1993). Analysis of single trials in pigeons’ timing performance. Journal of Experimental Psychology: Animal Behavior Processes, 19, 56–67.

Church, R. M. (1980). Short-term memory for time intervals. Learning and Motivation, 11, 208–219.

Church, R. M. (1993). Human models of animal behavior. Psychological Science, 4, 170–173.

Church, R. M. (2001). A Turing test of computational and association theories. Current Directions in Psychological Science, 10, 132–136.

Church, R. M., & Broadbent, H. A. (1990). Alternative representations of time, number, and rate. Cognition, 37, 55–81.

Church, R. M., & Deluty, M. Z. (1977). Bisection of temporal intervals. Journal of Experimental Psychology: Animal Behavior Processes, 3, 216–228.

Church, R. M., & Gibbon, J. (1982). Temporal generalization. Journal of Experimental Psychology: Animal Behavior Processes, 8, 165–186.

Church, R. M., & Kirkpatrick, K. (2001). Theories of conditioning and timing. In R. R. Mowrer & S. B. Klein (Eds.), Handbook of contemporary learning theories (pp. 211–253). Mahwah, NJ: Erlbaum.

Church, R. M., & Lacourse, D. M. (1998). Serial pattern learning of temporal intervals. Animal Learning & Behavior, 26, 272–289.

Church, R. M., Meck, W. H., & Gibbon, J. (1994). Application of scalar timing theory to individual trials. Journal of Experimental Psychology: Animal Behavior Processes, 20, 135–155.

Church, R. M., Miller, K. D., Meck, W. H., & Gibbon, J. (1991). Sources of symmetric and asymmetric variance in temporal generalization. Animal Learning & Behavior, 19, 207–214.

Crystal, J. D. (1999). Systematic nonlinearities in the perception of temporal intervals. Journal of Experimental Psychology: Animal Behavior Processes, 25, 3–17.

Crystal, J. D. (2001). Circadian time perception. Journal of Experimental Psychology: Animal Behavior Processes, 27, 68–78.

Crystal, J. D., Church, R. M., & Broadbent, H. A. (1997). Systematic nonlinearities in the memory representation of time. Journal of Experimental Psychology: Animal Behavior Processes, 23, 267–282.

Davison, M., & McCarthy, D. (1988). The matching law: A research review. Hillsdale, NJ: Erlbaum.

Denniston, J. C., Cole, R. P., & Miller, R. R. (1998). The role of temporal relationships in the transfer of conditioned inhibition. Journal of Experimental Psychology: Animal Behavior Processes, 24, 200–214.

Dews, P. B. (1970). The theory of fixed-interval responding. In W. N. Schoenfeld (Ed.), The theory of reinforcement schedules (pp. 43–61). New York: Appleton-Century-Crofts.

Fantino, E. (1969). Choice and rate of reinforcement. Journal of the Experimental Analysis of Behavior, 12, 723–730.

Fantino, E. (1977). Conditioned reinforcement: Choice and information. In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior. Englewood Cliffs, NJ: Prentice-Hall.

Fantino, E., Preston, R. A., & Dunn, R. (1993). Delay reduction: Current status. Journal of the Experimental Analysis of Behavior, 60, 159–169.

Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. New York: Appleton-Century-Crofts.

Fraisse, P. (1963). The psychology of time. New York: Harper.

Gaitan, S. C., & Wixted, J. T. (2000). The role of “nothing” in memory for event duration in pigeons. Animal Learning & Behavior, 28, 147–161.

Gallistel, C. R. (1990). The organization of learning. Cambridge, MA: MIT Press.

Gallistel, C. R., & Gibbon, J. (2000). Time, rate, and conditioning. Psychological Review, 107, 289–344.

Gibbon, J. (1977). Scalar expectancy theory and Weber’s law in animal timing. Psychological Review, 84, 279–325.

Gibbon, J. (1991). Origins of scalar timing. Learning and Motivation, 22, 3–38.

Gibbon, J., & Allan, L. G. (Eds.). (1984). Timing and time perception. New York: New York Academy of Sciences.

Gibbon, J., & Balsam, P. (1981). Spreading association in time. In C. M. Locurto, H. S. Terrace, & J. Gibbon (Eds.), Autoshaping and conditioning theory (pp. 219–254). New York: Academic Press.

Gibbon, J., & Church, R. M. (1984). Sources of variance in an information theory of timing. In H. L. Roitblat, T. G. Bever, & H. S. Terrace (Eds.), Animal cognition (pp. 465–488). Hillsdale, NJ: Erlbaum.

Gibbon, J., & Church, R. M. (1990). Representation of time. Cognition, 37, 23–54.

Gibbon, J., & Church, R. M. (1992). Comparison of variance and covariance patterns in parallel and serial theories of timing. Journal of the Experimental Analysis of Behavior, 57, 393–406.

Gibbon, J., Church, R. M., & Meck, W. H. (1984). Scalar timing in memory. In J. Gibbon & L. G. Allan (Eds.), Annals of the New York Academy of Sciences: Timing and time perception (pp. 52–77). New York: New York Academy of Sciences.

Gibbon, J., Morrell, M., & Silver, R. (1984). Two kinds of timing in circadian incubation rhythm of ring doves. American Journal of Physiology: Regulatory, Integrative and Comparative Physiology, 237, 1083–1087.

Harzem, P. (1969). Temporal discrimination. In R. M. Gilbert & N. S. Sutherland (Eds.), Animal discrimination learning (pp. 299–334). London: Academic Press.

Heinemann, E. G., Avin, E., Sullivan, M. A., & Chase, S. (1969). Analysis of stimulus generalization with a psychophysical method. Journal of Experimental Psychology, 80, 215–224.

Herrnstein, R. J. (1997). The matching law: Papers in psychology and economics (H. Rachlin & D. I. Laibson, Eds.). Cambridge, MA: Harvard University Press.

Higa, J. J., Wynne, C. D., & Staddon, J. E. (1991). Dynamics of time discrimination. Journal of Experimental Psychology: Animal Behavior Processes, 17, 281–291.

Holland, P. C. (2000). Trial and intertrial durations in appetitive conditioning in rats. Animal Learning & Behavior, 28, 121–135.

Hull, C. L. (1943). Principles of behavior. New York: Appleton-Century-Crofts.

James, W. (1890). The principles of psychology. London: Macmillan.

Killeen, P. R. (1978). Superstition: A matter of bias, not detectability. Science, 199, 88–90.

Killeen, P. R., & Fetterman, J. G. (1988). A behavioral theory of timing. Psychological Review, 95, 274–295.

Killeen, P. R., & Smith, J. P. (1984). Perception of contingency in conditioning: Scalar timing, response bias, and erasure of memory by reinforcement. Journal of Experimental Psychology: Animal Behavior Processes, 10, 333–345.

Kirkpatrick, K., & Church, R. M. (2000). Independent effects of stimulus and cycle duration in conditioning: The role of timing processes. Animal Learning & Behavior, 28, 373–388.

Kirkpatrick, K., & Church, R. M. (in press). Tracking of expected times in classical conditioning. Animal Learning & Behavior.

Kraemer, P. J., Mazmanian, D. S., & Roberts, W. A. (1985). The choose-short effect in pigeon memory for stimulus duration: Subjective shortening versus coding models. Animal Learning & Behavior, 13, 349–354.

Lattal, K. M. (1999). Trial and intertrial durations in Pavlovian conditioning: Issues of learning and performance. Journal of Experimental Psychology: Animal Behavior Processes, 25, 433–450.

Libby, M. E., & Church, R. M. (1975). Fear gradients as a function of the temporal interval between signal and aversive event in the rat. Journal of Comparative and Physiological Psychology, 88, 911–916.

Logan, F. A. (1960). Incentive. New Haven, CT: Yale University Press.

Machado, A. (1997). Learning the temporal dynamics of behavior. Psychological Review, 104, 241–265.

Machado, A., & Cevik, M. (1998). Acquisition and extinction under periodic reinforcement. Behavioural Processes, 44, 237–262.

Mark, T. A., & Gallistel, C. R. (1994). Kinetics of matching. Journal of Experimental Psychology: Animal Behavior Processes, 20, 79–95.

Matzel, L. D., Held, F. P., & Miller, R. R. (1988). Information and expression of simultaneous and backward associations: Implications for contiguity theory. Learning and Motivation, 19, 317–344.

Mazur, J. E. (1997). Choice, delay, probability, and conditioned reinforcement. Animal Learning & Behavior, 25, 131–147.

Meck, W. H., & Church, R. M. (1982a). Abstraction of temporal attributes. Journal of Experimental Psychology: Animal Behavior Processes, 8, 226–243.

Meck, W. H., & Church, R. M. (1982b). Discrimination of intertrial intervals in cross-modal transfer of duration. Bulletin of the Psychonomic Society, 19, 234–236.

Moore-Ede, M. C., Sulzman, F. M., & Fuller, C. A. (1982). The clocks that time us. Cambridge, MA: Harvard University Press.

Pavlov, I. P. (1927). Conditioned reflexes (G. V. Anrep, Trans.). London: Oxford University Press.

Platt, J. R., & Davis, E. R. (1983). Bisection of temporal intervals by pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 9, 160–170.

Rescorla, R. A. (1968). Probability of shock in the presence and absence of CS in fear conditioning. Journal of Comparative and Physiological Psychology, 66, 1–5.

Richelle, M., & Lejeune, H. (1980). Time in animal behaviour. Oxford: Pergamon.

Richter, C. P. (1922). A behavioristic study of the activity of the rat. Comparative Psychology Monographs, 1, 1–55.

Richter, C. P. (1965). Biological clocks in medicine and psychiatry. Springfield, IL: C. C. Thomas.

Richter, C. P. (1977). Heavy water as a tool for study of the forces that control the length of period of the 24-hour clock of the hamster. Proceedings of the National Academy of Sciences, USA, 74, 1295–1299.

Roberts, S. (1981). Isolation of an internal clock. Journal of Experimental Psychology: Animal Behavior Processes, 7, 242–268.

Roberts, S. (1982). Cross-modal use of an internal clock. Journal of Experimental Psychology: Animal Behavior Processes, 8, 2–22.

Roeckelein, J. E. (2000). The concept of time in psychology: A resource book and annotated bibliography. Westport, CT: Greenwood.

Roper, T. J. (1978). Diversity and substitutability of adjunctive activities under fixed-interval schedules of food reinforcement. Journal of the Experimental Analysis of Behavior, 30, 83–96.

Schneider, B. A. (1969). A two-state analysis of fixed-interval responding in the pigeon. Journal of the Experimental Analysis of Behavior, 12, 677–687.

Shettleworth, S. J. (1998). Cognition, evolution, and behavior. New York: Oxford University Press.

Shimp, C. P. (1969). Optimal behavior in free-operant experiments. Psychological Review, 76, 97–112.

Skinner, B. F. (1938). The behavior of organisms. New York: Appleton-Century-Crofts.

Spetch, M. L., & Wilkie, D. M. (1983). Subjective shortening: A model of pigeons’ memory for event duration. Journal of Experimental Psychology: Animal Behavior Processes, 9, 14–30.

Staddon, J. E. R., & Higa, J. J. (1991). Temporal learning. The Psychology of Learning and Motivation, 27, 265–294.

Staddon, J. E. R., & Higa, J. J. (1999). Time and memory: Toward a pacemaker-free theory of interval timing. Journal of the Experimental Analysis of Behavior, 71, 215–251.

Sternberg, S. (1966). High-speed scanning in human memory. Science, 153, 652–654.

Stubbs, A. (1968). The discrimination of stimulus duration by pigeons. Journal of the Experimental Analysis of Behavior, 11, 223–238.

Terman, M., Gibbon, J., Fairhurst, S., & Waring, A. (1984). Daily meal anticipation: Interaction of circadian and interval timing. In J. Gibbon & L. Allan (Eds.), Timing and time perception (pp. 470–487). New York: New York Academy of Sciences.

Treisman, M. (1963). Temporal discrimination and the indifference interval: Implications for a model of the “internal clock.” Psychological Monographs, 77 (13, Whole No. 576).

Wearden, J. H., Denovan, L., & Haworth, R. (1997). Scalar timing in temporal generalization in humans with longer stimulus durations. Journal of Experimental Psychology: Animal Behavior Processes, 23, 502–511.

Woodrow, H. (1951). Time perception. In S. S. Stevens (Ed.), Handbook of experimental psychology (pp. 1224–1236). New York: Wiley.