
    Continuous-time linear models

    John H. Cochrane1

    February 8, 2012

1 University of Chicago Booth School of Business, 5807 S. Woodlawn, Chicago IL 60637, 773 702 3059, [email protected], http://faculty.chicagobooth.edu/john.cochrane/. I acknowledge research support from CRSP.


    Contents

1 Introduction
2 Summary
3 Linear models and lag operators
  3.1 Discrete time reminder
  3.2 Continuous-time operators
  3.3 Laplace transforms
4 Moving average representation and moments
  4.1 Levels and differences
    4.1.1 Levels to differences in discrete time
    4.1.2 Levels to differences in continuous time
  4.2 Finding an integral representation; unit roots and random walks
    4.2.1 Discrete time
    4.2.2 Continuous time
5 Impulse-response function
6 Hansen-Sargent formulas
  6.1 Discrete time
    6.1.1 AR(1) example
    6.1.2 A prettier formula and the impact multiplier
    6.1.3 Derivation
  6.2 Continuous time
    6.2.1 Derivation
7 Autoregressive representations
  7.1 How not to define AR models
  7.2 Adding AR(1)s to obtain more complex processes
    7.2.1 A two-component example
8 Tricks
9 A continuous-time asset pricing economy
  9.1 Time-separable model
    9.1.1 Model solution
  9.2 A model with habits or durability
    9.2.1 Derivation
10 References
11 Lecture notes
12 Notes on Hansen Sargent


    1 Introduction

Discrete-time linear ARMA processes and lag operator notation are very convenient for lots of calculations. Continuous-time models allow you to handle interesting nonlinear models as well. But treatments of those models typically don't mention how to adapt the discrete-time linear model and lag operator tricks to continuous time. Here I'll attempt that translation. The capstone is an explanation of the Hansen-Sargent (1980, 1991) prediction formulas for continuous-time linear models, and their use to solve a fairly complex linear-quadratic asset pricing model with habits and durability in consumption.

The point of this note is heuristic, to learn to use and intuit the techniques. I do not pretend there is anything new here. I also don't discuss the technicalities. Hansen and Sargent (1991) is a good reference for most of this material.

    2 Summary

Basic operators:

$$L^s x_t = x_{t-s}$$
$$D x_t = \frac{dx_t}{dt} = \lim_{\Delta\to 0}\frac{x_{t+\Delta}-x_t}{\Delta}$$
$$L = e^{-D}; \qquad D = -\log(L)$$

Lag polynomials and functions:

$$x_t = \sum_{j=0}^{\infty} b_j\,\varepsilon_{t-j} = Z_b(L)\,\varepsilon_t; \qquad Z_b(L) = \sum_{j=0}^{\infty} b_j L^j$$
$$x_t = \int_{s=0}^{\infty} b(s)\, dz_{t-s} = \mathcal{L}_b(D)\, dz_t; \qquad \mathcal{L}_b(D) = \int_{s=0}^{\infty} e^{-sD}\, b(s)\, ds$$

Inverting the AR(1):

$$x_{t+1} = \rho x_t + \varepsilon_{t+1} \;\Leftrightarrow\; x_t = \sum_{j=0}^{\infty} \rho^j\,\varepsilon_{t-j}; \qquad dx_t = -\phi x_t\, dt + dz_t \;\Leftrightarrow\; x_t = \int_{s=0}^{\infty} e^{-\phi s}\, dz_{t-s}$$
$$(1-\rho L)\,x_t = \varepsilon_t \;\Leftrightarrow\; x_t = \frac{1}{1-\rho L}\,\varepsilon_t = \sum_{j=0}^{\infty}\rho^j L^j\,\varepsilon_t; \qquad (D+\phi)\,x_t = \frac{dz_t}{dt} \;\Leftrightarrow\; x_t = \frac{1}{D+\phi}\,\frac{dz_t}{dt} = \int_{s=0}^{\infty} e^{-(D+\phi)s}\, ds\;\frac{dz_t}{dt}$$

Forward-looking operators: with $\|\rho\| > 1$,

$$\frac{1}{1-\rho L} = \frac{-\rho^{-1}L^{-1}}{1-\rho^{-1}L^{-1}} = -\sum_{j=1}^{\infty}\rho^{-j}L^{-j}, \quad \text{i.e.} \quad x_t = -\sum_{j=1}^{\infty}\rho^{-j}\,\varepsilon_{t+j}$$

and with $\phi < 0$,

$$\frac{1}{D+\phi} = -\int_{s=0}^{\infty} e^{(D+\phi)s}\, ds, \quad \text{i.e.} \quad x_t = -\int_{s=0}^{\infty} e^{\phi s}\, dz_{t+s}$$

Moving averages and moments, discrete time:

$$\sigma^2(x_t) = \sum_{j=0}^{\infty} b_j^2\;\sigma_\varepsilon^2; \qquad \mathrm{cov}(x_t,x_{t-k}) = \sum_{j=0}^{\infty} b_j b_{j+k}\;\sigma_\varepsilon^2$$
$$S(\omega) = \sum_{k=-\infty}^{\infty} e^{-i\omega k}\,\mathrm{cov}(x_t,x_{t-k}) = Z_b(e^{-i\omega})\,Z_b(e^{i\omega})\,\sigma_\varepsilon^2$$
$$\mathrm{cov}(x_t,x_{t-k}) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega k}\,S(\omega)\,d\omega$$

Continuous time:

$$\sigma^2(x_t) = \int_{s=0}^{\infty} b^2(s)\,ds\;\sigma_z^2; \qquad \mathrm{cov}(x_t,x_{t-k}) = \int_{s=0}^{\infty} b(s)\,b(s+k)\,ds\;\sigma_z^2$$
$$S(\omega) = \int_{k=-\infty}^{\infty} e^{-i\omega k}\,\mathrm{cov}(x_t,x_{t-k})\,dk = \mathcal{L}_b(i\omega)\,\mathcal{L}_b(-i\omega)\,\sigma_z^2$$
$$\mathrm{cov}(x_t,x_{t-k}) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\omega k}\,S(\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\omega k}\,\mathcal{L}_b(i\omega)\,\mathcal{L}_b(-i\omega)\,\sigma_z^2\,d\omega$$

Levels to differences:

$$x_t - x_{t-1} = b_0\,\varepsilon_t + \sum_{j=1}^{\infty}(b_j - b_{j-1})\,\varepsilon_{t-j}$$
$$dx_t = \left[\int_{s=0}^{\infty} b'(s)\,dz_{t-s}\right]dt + b(0)\,dz_t$$
$$(1-L)\,x_t = (1-L)\,Z_b(L)\,\varepsilon_t; \qquad dx_t = D\,\mathcal{L}_b(D)\,dz_t = \left[b(0) + \mathcal{L}_{b'}(D)\right]dz_t; \qquad b(0) = \lim_{D\to\infty} D\,\mathcal{L}_b(D)$$

Differences to levels, Beveridge-Nelson decompositions. In discrete time,

$$(1-L)\,y_t = Z_b(L)\,\varepsilon_t = \sum_{j=0}^{\infty} b_j\,\varepsilon_{t-j}$$

implies

$$y_t = z_t + s_t; \qquad (1-L)\,z_t = Z_b(1)\,\varepsilon_t; \qquad s_t = Z_b^{*}(L)\,\varepsilon_t, \quad b^{*}_j = -\sum_{k=j+1}^{\infty} b_k$$

or, equivalently, $Z_b(L) = Z_b(1) + (1-L)\,Z_b^{*}(L)$. The trend has the property

$$z_t = \lim_{k\to\infty} E_t\,y_{t+k} = y_t + \sum_{j=1}^{\infty} E_t\,\Delta y_{t+j}$$

In continuous time,

$$dy_t = \left[\int_{s=0}^{\infty} b(s)\,dz_{t-s}\right]dt + \sigma\,dz_t = \left[\mathcal{L}_b(D)\,dt + \sigma\right]dz_t$$

implies $y_t = z^{*}_t + s_t$, with a pure random-walk trend

$$dz^{*}_t = \left[\sigma + \mathcal{L}_b(0)\right]dz_t$$

and a stationary component

$$s_t = \int_{s=0}^{\infty} b^{*}(s)\,dz_{t-s} = \mathcal{L}_{b^{*}}(D)\,dz_t; \qquad b^{*}(s) = -\int_{u=0}^{\infty} b(s+u)\,du$$

or, equivalently, $\mathcal{L}_b(D) + \sigma = \left[\sigma + \mathcal{L}_b(0)\right] + D\,\mathcal{L}_{b^{*}}(D)$. The trend has the property

$$z^{*}_t = \lim_{T\to\infty} E_t\,y_{t+T} = y_t + E_t\int_{s=0}^{\infty} dy_{t+s}$$

Impulse-response functions and multipliers:

$$(E_t - E_{t-1})\,x_{t+j} = b_j\,\varepsilon_t; \qquad \lim_{\Delta\to 0}\,(E_{t+\Delta} - E_t)\,x_{t+s} = b(s)\,dz_t$$

Impact multipliers:

$$(E_t - E_{t-1})\,x_t = b_0\,\varepsilon_t, \quad b_0 = Z_b(0); \qquad b(0) = \lim_{D\to\infty} D\,\mathcal{L}_b(D)$$

Final responses, which should equal zero for stationary processes:

$$b_\infty = \lim_{j\to\infty} b_j; \qquad b(\infty) = \lim_{D\to 0} D\,\mathcal{L}_b(D)$$

Cumulative responses:

$$Z_b(1) = \sum_{j=0}^{\infty} b_j; \qquad \mathcal{L}_b(0) = \int_{s=0}^{\infty} b(s)\,ds$$

For a MA representation of differences,

$$dy_t = \left[\int_{s=0}^{\infty} b(s)\,dz_{t-s}\right]dt + \sigma\,dz_t = \left[\mathcal{L}_b(D)\,dt + \sigma\right]dz_t$$

the impact multiplier is $\sigma$ and the cumulative multiplier is

$$\sigma + \int_{s=0}^{\infty} b(s)\,ds = \sigma + \mathcal{L}_b(0)$$

Hansen-Sargent prediction formulas:

$$E_t\sum_{j=0}^{\infty}\beta^j x_{t+j} = \left[\frac{L\,Z_b(L) - \beta\,Z_b(\beta)}{L-\beta}\right]\varepsilon_t$$
$$E_t\sum_{j=1}^{\infty}\beta^{j-1} x_{t+j} = \left[\frac{Z_b(L) - Z_b(\beta)}{L-\beta}\right]\varepsilon_t$$
$$(E_t - E_{t-1})\sum_{j=0}^{\infty}\beta^j x_{t+j} = Z_b(\beta)\,\varepsilon_t$$

Continuous time:

$$E_t\int_{s=0}^{\infty} e^{-rs}\,x_{t+s}\,ds = \left[\frac{\mathcal{L}_b(r) - \mathcal{L}_b(D)}{D-r}\right]dz_t$$
$$\lim_{\Delta\to 0}\,(E_{t+\Delta}-E_t)\int_{s=0}^{\infty} e^{-rs}\,x_{t+s}\,ds = \mathcal{L}_b(r)\,dz_t$$

If

$$dy_t = \left[\int_{s=0}^{\infty} b(s)\,dz_{t-s}\right]dt + \sigma\,dz_t = \left[\sigma + \mathcal{L}_b(D)\,dt\right]dz_t$$

then also

$$E_t\int_{s=0}^{\infty} e^{-rs}\,dy_{t+s} = \left[\frac{\mathcal{L}_b(r) - \mathcal{L}_b(D)}{D-r}\right]dz_t$$

Autoregressive processes,

$$dy_t = -\left[\int_{s=0}^{\infty} a(s)\,dy_{t-s}\right] + dz_t; \qquad \left[1 + \mathcal{L}_a(D)\right]dy_t = dz_t$$

or,

$$dy_t = -\left[a(0)\,y_t + \int_{s=0}^{\infty} a'(s)\,y_{t-s}\,ds\right]dt + dz_t; \qquad \left[D + a(0) + \mathcal{L}_{a'}(D)\right]y_t = \frac{dz_t}{dt}$$

Two-component example (a continuous-time AR(2)):

$$x_t = \left[\frac{a}{D+\phi_1} + \frac{b}{D+\phi_2}\right]\frac{dz_t}{dt} \;\Longleftrightarrow\; \left[D + \eta_2 - \frac{\lambda}{D+\eta_1}\right]x_t = (a+b)\,\frac{dz_t}{dt}$$

where

$$\eta_1 = \frac{a\phi_2 + b\phi_1}{a+b}; \qquad \eta_2 = \frac{a\phi_1 + b\phi_2}{a+b}; \qquad \lambda = \frac{ab\,(\phi_1-\phi_2)^2}{(a+b)^2}$$

Conversely,

$$\left[D + \eta_2 - \frac{\lambda}{D+\eta_1}\right]x_t = \frac{dz_t}{dt} \;\Longleftrightarrow\; x_t = \frac{1}{\lambda_2-\lambda_1}\left[\frac{\eta_1-\lambda_1}{D+\lambda_1} + \frac{\lambda_2-\eta_1}{D+\lambda_2}\right]\frac{dz_t}{dt}$$

where

$$\lambda_{1,2} = \frac{(\eta_1+\eta_2) \mp \sqrt{(\eta_1-\eta_2)^2 + 4\lambda}}{2}$$

    3 Linear models and lag operators

I start by defining lag operators and the inversion formulas.

    3.1 Discrete time reminder

As a reminder, discrete-time linear models can be written in the unique moving average or Wold representation

$$x_t = \sum_{j=0}^{\infty} b_j\,\varepsilon_{t-j} = Z_b(L)\,\varepsilon_t \qquad (1)$$

where the lag operator $L$ is defined by $L x_t = x_{t-1}$, and

$$Z_b(L) = \sum_{j=0}^{\infty} b_j L^j$$

(It's common to write $b(L) = \sum_{j=0}^{\infty} b_j L^j$, but I will need to more clearly distinguish the function $Z_b(L)$ from the function $b(j) = b_j$ as we go to continuous time.)

The Wold representation and its error are defined from the autoregression

$$x_t = -\sum_{j=1}^{\infty} a_j\,x_{t-j} + \varepsilon_t$$

We can write this autoregressive representation in lag-operator form,

$$Z_a(L)\,x_t = \varepsilon_t; \qquad Z_a(L) = \sum_{j=0}^{\infty} a_j L^j \qquad (2)$$

We can connect the autoregressive and moving average representations by inversion,

$$Z_b(L) = Z_a(L)^{-1}; \qquad Z_a(L) = Z_b(L)^{-1}$$

Now, that's pretty, but how do you actually construct $Z_a(L)^{-1}$? Here we use a power series interpretation. For example, from the definition $L x_t = x_{t-1}$ it's pretty clear that $L^{-1} x_t = x_{t+1}$.


But suppose we want to invert the AR(1), $(1-\rho L)\,x_t = \varepsilon_t$. What meaning can we give to $1/(1-\rho L)$? (How do we find a $Z_b(L)$ such that $Z_b(L)(1-\rho L) = 1$?) To do that, we use the power series expansion,

$$\frac{1}{1-\rho L} = \sum_{j=0}^{\infty}\rho^j L^j$$

(assuming $\|\rho\| < 1$). Then we can use the lag operator notation to construct transformations from AR(1) to MA($\infty$) representations and back again,

$$(1-\rho L)\,x_t = \varepsilon_t \;\Rightarrow\; x_t = \frac{1}{1-\rho L}\,\varepsilon_t = \sum_{j=0}^{\infty}\rho^j L^j\,\varepsilon_t = \sum_{j=0}^{\infty}\rho^j\,\varepsilon_{t-j}$$

A common confusion is especially important when we go to continuous time. The process $x_t$ may also have a nonlinear representation which allows greater predictability. For example, a random number generator is fully deterministic, $x_t = f(x_{t-1})$ with no error. The function $f$ is just so complex that when you run linear regressions of $x_t$ on its past, it looks unpredictable. A precise notation would use $E_{t-1}(x_t) = E(x_t \mid x_{t-1}, x_{t-2}, \ldots)$ to mean prediction using all linear and nonlinear functions, and give $E_{t-1}(x_t) = f(x_{t-1}) = x_t$ in this example. We would use a notation such as $\hat{E}(x_t \mid \{x_{t-j}\})$ to denote linear prediction. I will not be so careful, so I will use $E_{t-1}$ or $E(x_t \mid \{x_{t-j}\})$ to mean prediction given the linear models under consideration, and I'll write "expected" without further ado.

This point is important to address the common confusion that a linear model is not right if there is an underlying better nonlinear model. That criticism is incorrect. Even if there is a nonlinear model (say a square root process), there is nothing wrong with also studying its linear predictive representation. Similarly, just because there may be additional variables that help to forecast $x_t$, there is nothing wrong with studying conditional (on $\{x_{t-j}\}$) moments that ignore this extra information. The only place that either conditioning-down assumption causes trouble is if you assume agents in a model only see the variables or information set that you, the econometrician, choose to model.

    3.2 Continuous-time Operators

We usually write continuous-time processes in differential or integral form. For example, the continuous-time AR(1) can be written in differential form,

$$dx_t = -\phi x_t\,dt + dz_t$$

or in integral form,

$$x_t = \int_{s=0}^{\infty} e^{-\phi s}\,dz_{t-s}.$$

The integral form is the obvious analogue to the moving-average form of the discrete-time representation. Our job is to think about and manipulate these expressions and how to transform one to the other with lag operators, with an eye to doing so for more complex processes.

The lag operator can simply be extended from integers to real numbers, i.e.

$$L^s x_t = x_{t-s}$$


Since we write differential expressions in continuous time, it's convenient to define the differential operator $D$, i.e.

$$D x_t = \frac{1}{dt}\,dx_t$$

where $d$ is the familiar continuous-time forward-difference operator,

$$dx_t = \lim_{\Delta\to 0}\,(x_{t+\Delta} - x_t) \qquad (3)$$

(This is not a limit in the usual sense, but I'll leave that to continuous-time math books and continue to abuse notation.)

$D$ and $L$ are related by

$$D = \lim_{\Delta\to 0}\frac{L^{-\Delta} - 1}{\Delta} = \lim_{\Delta\to 0}\frac{e^{-\Delta\log(L)} - 1}{\Delta} = -\log(L)$$

In sum, the integral and differential operators are related by

$$L = e^{-D}; \qquad D = -\log(L) \qquad (4)$$

The obvious generalization: we write the general moving average as

$$x_t = \int_{s=0}^{\infty} b(s)\,dz_{t-s}$$

and in lag operator notation $x_t = \mathcal{L}_b(D)\,dz_t$, with

$$\mathcal{L}_b(D) = \int_{s=0}^{\infty} e^{-sD}\,b(s)\,ds$$

It will usually be convenient to describe continuous-time lag functions in terms of $D$ rather than $L$. Thus, familiar quantities such as the impact multiplier $Z_b(L=0)$ and the cumulative multiplier $Z_b(L=1)$ will have counterparts corresponding to $\mathcal{L}_b(D=\infty)$ and $\mathcal{L}_b(D=0)$. (I find these below.)

For example, the continuous-time AR(1) process in differential form reads

$$dx_t + \phi x_t\,dt = dz_t$$
$$(D+\phi)\,x_t = \frac{dz_t}{dt}$$

We can invert this formula by inverting the lag operator polynomial as we do in discrete time.¹

$$x_t = \frac{1}{D+\phi}\,\frac{dz_t}{dt} = \int_{s=0}^{\infty} e^{-(D+\phi)s}\,ds\;\frac{dz_t}{dt} = \int_{s=0}^{\infty} e^{-\phi s}\,L^s\,ds\;\frac{dz_t}{dt} = \int_{s=0}^{\infty} e^{-\phi s}\,dz_{t-s}$$

The second equality uses the formula for the integral of an exponential to interpret $1/(D+\phi)$, just as we used the power series expansion $\sum_{j=0}^{\infty}\rho^j L^j$ to interpret $1/(1-\rho L)$.

¹ In case you forgot, here's the standard way to go from the differential to the integral representation by solving the stochastic differential equation. You introduce a clever integrating factor and note

$$d\!\left(e^{\phi t} x_t\right) = \phi\, e^{\phi t} x_t\,dt + e^{\phi t}\,dx_t = e^{\phi t}\left(dx_t + \phi x_t\,dt\right) \qquad (5)$$

Then, start with the differential representation of the AR(1),

$$dx_t = -\phi x_t\,dt + dz_t$$

multiplying both sides by $e^{\phi t}$ and using (5),

$$e^{\phi t}\left(dx_t + \phi x_t\,dt\right) = e^{\phi t}\,dz_t$$
$$d\!\left(e^{\phi t} x_t\right) = e^{\phi t}\,dz_t$$
$$\int_{u=0}^{t} d\!\left(e^{\phi u} x_u\right) = \int_{u=0}^{t} e^{\phi u}\,dz_u$$
$$e^{\phi t} x_t - x_0 = \int_{u=0}^{t} e^{\phi u}\,dz_u$$
$$x_t = e^{-\phi t}\,x_0 + \int_{u=0}^{t} e^{-\phi(t-u)}\,dz_u$$


    3.3 Laplace transforms

Now, where does this all come from, really? If a process $y_t$ is generated from another by

$$y_t = \int_{s=0}^{\infty} b(s)\,x_{t-s}\,ds$$

the Laplace transform of this operation is defined as

$$\mathcal{L}_b(D) = \int_{s=0}^{\infty} e^{-sD}\,b(s)\,ds$$

where $D$ is a complex number.

Given this definition, the Laplace transform of the lag operation $y_t = L^s x_t = x_{t-s}$ is

$$\mathcal{L}(D) = e^{-sD}$$

This definition establishes the relationship between lag and differential operators (4) directly.

One difference in notation between discrete and continuous time is necessary. It's common to write the discrete-time lag polynomial

$$b(L) = \sum_{j=0}^{\infty} b_j L^j$$

It would be nice to write

$$b(D) = \int_{s=0}^{\infty} e^{-sD}\,b(s)\,ds$$

but we can't do that, since $b(\cdot)$ is already a function. If in discrete time we had written $b_j = b(j)$, then $b(L)$ wouldn't have made any sense either. For this reason, we'll have to use a different letter. In deference to the Laplace transform I'll stick with the notation

$$\mathcal{L}_b(D) \equiv \int_{s=0}^{\infty} e^{-sD}\,b(s)\,ds$$

and for clarity I also write lag polynomial functions as

$$Z_b(L) = \sum_{j=0}^{\infty} b_j L^j$$



rather than the more common $b(L)$. ($Z$ stands for z-transform.)

To use a lag polynomial

$$Z_b(L) = \frac{1}{1-\rho L} = \sum_{j=0}^{\infty}\rho^j L^j$$

we must have $\|\rho L\| < 1$. In general, then, the poles of $Z_b(L)$, where $Z_b(L) = \infty$, and the roots, where $Z_b(L) = Z_a(L)^{-1} = 0$, must lie outside the unit circle. The domain of $Z_b(L)$ is $\|L\| < \|\rho\|^{-1}$, for which $\|L\| \le 1$ will suffice.

When $\|\rho\| > 1$, or if the poles of $Z_b(L)$ are inside the unit circle, we solve in the opposite direction:

$$\frac{1}{1-\rho L} = \frac{-\rho^{-1}L^{-1}}{1-\rho^{-1}L^{-1}} = -\sum_{j=1}^{\infty}\rho^{-j}L^{-j}, \quad \text{i.e.} \quad x_t = -\sum_{j=1}^{\infty}\rho^{-j}\,\varepsilon_{t+j}$$

Here, the domain of $Z_b(L)$ must be outside the unit circle.

Similarly, when we interpret

$$\mathcal{L}_b(D) = \frac{1}{D+\phi} = \int_{s=0}^{\infty} e^{-(D+\phi)s}\,ds, \quad \text{i.e.} \quad x_t = \int_{s=0}^{\infty} e^{-\phi s}\,dz_{t-s}$$

we must have $\phi > 0$ and $\mathrm{Re}(D) \ge 0$, so that $e^{-(D+\phi)s} \to 0$. In the other circumstance, we again expand forward, i.e.

$$\mathcal{L}_b(D) = \frac{1}{D+\phi} = -\frac{1}{-(D+\phi)} = -\int_{s=0}^{\infty} e^{(D+\phi)s}\,ds, \quad \text{i.e.} \quad x_t = -\int_{s=0}^{\infty} e^{\phi s}\,dz_{t+s}$$

Lag operators (Laplace transforms) commute, so we can simplify expressions by taking them in any order that is convenient,

$$\mathcal{L}_b(D)\,\mathcal{L}_c(D) = \mathcal{L}_c(D)\,\mathcal{L}_b(D); \qquad Z_b(L)\,Z_c(L) = Z_c(L)\,Z_b(L)$$
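As a quick numerical sketch of the definition (the kernel, grid, and test points are illustrative choices): the Laplace transform of the exponential kernel $b(s) = e^{-\phi s}$ should equal $1/(D+\phi)$.

```python
import numpy as np

phi, ds = 0.5, 1e-3
s = np.arange(0.0, 200.0, ds)         # integrand is negligible past s = 200
b = np.exp(-phi * s)                   # exponential moving-average kernel b(s)

for D in (0.1, 1.0, 2.5):              # real test points for the transform argument
    numeric = np.sum(np.exp(-D * s) * b) * ds       # L_b(D) = integral of e^{-sD} b(s) ds
    print(f"D={D}: numeric={numeric:.6f}  closed form={1/(D + phi):.6f}")
```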

    4 Moving average representation and moments

The moving average representation

$$x_t = \sum_{j=0}^{\infty} b_j\,\varepsilon_{t-j} = Z_b(L)\,\varepsilon_t$$

is also a basis for all the second-moment statistical properties of the series: variance

$$\sigma^2(x_t) = \sum_{j=0}^{\infty} b_j^2\;\sigma_\varepsilon^2$$

covariance

$$\mathrm{cov}(x_t,x_{t-k}) = \sum_{j=0}^{\infty} b_j\,b_{j+k}\;\sigma_\varepsilon^2$$


and spectral density

$$S(\omega) = \sum_{k=-\infty}^{\infty} e^{-i\omega k}\,\mathrm{cov}(x_t,x_{t-k}) = Z_b(e^{-i\omega})\,Z_b(e^{i\omega})\,\sigma_\varepsilon^2$$

The inversion formula

$$\mathrm{cov}(x_t,x_{t-k}) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega k}\,S(\omega)\,d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega k}\,Z_b(e^{-i\omega})\,Z_b(e^{i\omega})\,\sigma_\varepsilon^2\,d\omega$$

gives us a direct connection between the function $Z_b(L)$ and the second moments of the series.

The continuous-time moving-average representation

$$x_t = \int_{s=0}^{\infty} b(s)\,dz_{t-s} = \mathcal{L}_b(D)\,dz_t$$

is also the basis for standard moment calculations,

$$\sigma^2(x_t) = \int_{s=0}^{\infty} b^2(s)\,ds\;\sigma_z^2$$
$$\mathrm{cov}(x_t,x_{t-k}) = \int_{s=0}^{\infty} b(s)\,b(s+k)\,ds\;\sigma_z^2$$
$$S(\omega) = \int_{k=-\infty}^{\infty} e^{-i\omega k}\,\mathrm{cov}(x_t,x_{t-k})\,dk = \mathcal{L}_b(i\omega)\,\mathcal{L}_b(-i\omega)\,\sigma_z^2$$

and the inversion formula

$$\mathrm{cov}(x_t,x_{t-k}) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\omega k}\,S(\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\omega k}\,\mathcal{L}_b(i\omega)\,\mathcal{L}_b(-i\omega)\,\sigma_z^2\,d\omega$$

For example, for the AR(1),

$$x_t = \int_{s=0}^{\infty} e^{-\phi s}\,dz_{t-s}; \qquad dx_t = -\phi x_t\,dt + dz_t$$
$$\sigma^2(x_t) = \sigma_z^2\int_{s=0}^{\infty} e^{-2\phi s}\,ds = \frac{\sigma_z^2}{2\phi}$$
$$\mathrm{cov}(x_t,x_{t-k}) = \sigma_z^2\int_{s=0}^{\infty} e^{-\phi s}\,e^{-\phi(s+k)}\,ds = e^{-\phi k}\,\frac{\sigma_z^2}{2\phi}$$
$$S(\omega) = \mathcal{L}_b(i\omega)\,\mathcal{L}_b(-i\omega)\,\sigma_z^2 = \frac{1}{i\omega+\phi}\cdot\frac{1}{-i\omega+\phi}\,\sigma_z^2 = \frac{\sigma_z^2}{\omega^2+\phi^2}$$
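A simulation sketch of these AR(1) moments, using an Euler discretization and illustrative parameter values; the simulated variance and autocovariance should be close to $\sigma_z^2/(2\phi)$ and $e^{-\phi k}\sigma_z^2/(2\phi)$, up to discretization and sampling error.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
phi, sigma_z, dt, T = 0.5, 1.0, 0.01, 50_000.0
n = int(T / dt)

dz = sigma_z * np.sqrt(dt) * rng.standard_normal(n)
x = lfilter([1.0], [1.0, -(1.0 - phi * dt)], dz)   # x_t = (1 - phi dt) x_{t-dt} + dz_t

burn = int(100 / dt)                  # discard the transient from x_0 = 0
xs = x[burn:]
k = 2.0                               # autocovariance lag, in time units
lag = int(k / dt)

print("variance: sim", xs.var(), " theory", sigma_z**2 / (2 * phi))
print("autocov : sim", np.mean(xs[lag:] * xs[:-lag]),
      " theory", np.exp(-phi * k) * sigma_z**2 / (2 * phi))
```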

    4.1 Levels and differences

In discrete time, you usually choose to work with levels $x_t$ or differences $\Delta x_t$ depending on which is stationary. In continuous time, we often work with differences even though the series is stationary in levels, just as we wrote the AR(1) as a model for differences, $dx_t = -\phi x_t\,dt + dz_t$.

This fact accounts for the major difference between the look of continuous and discrete-time formulas, especially in the autoregressive representation. Expressions such as $\int_{s=0}^{\infty} a(s)\,x_{t-s}\,ds = \varepsilon_t$ don't make much sense (or require such wild $a(s)$ functions as to make them unusable).


    4.1.1 Levels to differences in discrete time

First-differencing is simple in discrete time. We have

$$x_t = Z_b(L)\,\varepsilon_t$$

We're looking for $Z_d(L)$ in

$$(1-L)\,x_t = Z_d(L)\,\varepsilon_t$$

The answer is pretty obvious,

$$Z_d(L) = (1-L)\,Z_b(L) \qquad (6)$$

We can also easily find the form of $Z_d(L)$,

$$x_t - x_{t-1} = b_0\,\varepsilon_t + \sum_{j=1}^{\infty}\left(b_j - b_{j-1}\right)\varepsilon_{t-j} \qquad (7)$$

or, suggestively, $d_0 = b_0$, $d_j = b_j - b_{j-1}$, so we can write

$$(1-L)\,x_t = \left[b_0 + \sum_{j=1}^{\infty}\left(b_j - b_{j-1}\right)L^j\right]\varepsilon_t$$

We can also find the impact multiplier $b_0$ from the operator function,

$$b_0 = Z_b(0) \qquad (8)$$

This important quantity gives the response of $(E_t - E_{t-1})\,x_t$ to a shock $\varepsilon_t$. In discrete time, this is also the response of $(E_t - E_{t-1})\,(x_t - x_{t-1})$ to the shock.

    4.1.2 Levels to differences in continuous time

Start with the moving average representation for levels,

$$x_t = \int_{s=0}^{\infty} b(s)\,dz_{t-s} \qquad (9)$$

The same process in differences is

$$dx_t = \left[\int_{s=0}^{\infty} b'(s)\,dz_{t-s}\right]dt + b(0)\,dz_t \qquad (10)$$

This is the analogue to (7). This expression gives drift and diffusion terms. It gives us an impulse-response function or moving average representation for $dx_t$. And it shows instantly that $b(0)$ is the impact multiplier and the meaning of that term; $b(0)$ tells us how a shock $dz_t$ affects $dx_t$.

The operator statement of this transformation is that

$$x_t = \mathcal{L}_b(D)\,dz_t \qquad (11)$$


implies

$$dx_t = \left[b(0) + \mathcal{L}_{b'}(D)\right]dz_t \qquad (12)$$

To derive (12) we use a handy property of Laplace transforms:

$$D\,\mathcal{L}_b(D) = b(0) + \mathcal{L}_{b'}(D) \qquad (13)$$

This property follows by integrating by parts:

$$\mathcal{L}_{b'}(D) = \int_{s=0}^{\infty} e^{-sD}\,b'(s)\,ds = \left[e^{-sD}\,b(s)\right]_{0}^{\infty} + D\int_{s=0}^{\infty} e^{-sD}\,b(s)\,ds = -b(0) + D\,\mathcal{L}_b(D)$$

If it helps intuition, we can also obtain (10) by brute force,

$$x_t = \int_{s=0}^{\infty} b(s)\,dz_{t-s}$$
$$x_{t+\Delta} = \int_{s=0}^{\infty} b(s)\,dz_{t+\Delta-s} = \int_{s=0}^{\Delta} b(s)\,dz_{t+\Delta-s} + \int_{s=0}^{\infty} b(s+\Delta)\,dz_{t-s}$$
$$x_{t+\Delta} - x_t \approx b(0)\left(z_{t+\Delta} - z_t\right) + \int_{s=0}^{\infty}\left[b(s+\Delta) - b(s)\right]dz_{t-s}$$
$$dx_t = \left[\int_{s=0}^{\infty} b'(s)\,dz_{t-s}\right]dt + b(0)\,dz_t$$
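A symbolic sketch of property (13) for the exponential kernel, assuming sympy is available; the kernel choice is illustrative.

```python
import sympy as sp

s, D, phi = sp.symbols('s D phi', positive=True)
b = sp.exp(-phi * s)                                    # kernel b(s) = e^{-phi s}

Lb  = sp.integrate(sp.exp(-s * D) * b, (s, 0, sp.oo))            # L_b(D)
Lbp = sp.integrate(sp.exp(-s * D) * sp.diff(b, s), (s, 0, sp.oo))  # L_{b'}(D)

# Property (13): D * L_b(D) = b(0) + L_{b'}(D)
print(sp.simplify(D * Lb - (b.subs(s, 0) + Lbp)))       # -> 0
```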

    4.2 Finding an integral representation; unit roots and random walks

The converse operation: suppose you have a differential representation,

$$dy_t = \left[\int_{s=0}^{\infty} b(s)\,dz_{t-s}\right]dt + \sigma\,dz_t$$

how do you find the integral representation

$$y_t = \int_{s=0}^{\infty} c(s)\,dz_{t-s}\,?$$

    4.2.1 Discrete time

We want to get from

$$(1-L)\,y_t = Z_b(L)\,\varepsilon_t$$

to

$$y_t = Z_c(L)\,\varepsilon_t$$

Lag operator notation suggests

$$Z_c(L) = \frac{Z_b(L)}{1-L}$$


$$\frac{Z_b(L)}{1-L} = b_0 + (b_0+b_1)\,L + (b_0+b_1+b_2)\,L^2 + \ldots$$

This operation only produces a stationary process if $\sum_{j=0}^{\infty} b_j = Z_b(1) = 0$. That need not be the case. In general a process $(1-L)\,y_t = Z_b(L)\,\varepsilon_t$ is stationary in differences but not in levels.

A convenient way to handle the possibility is to decompose $y_t$ into stationary and random walk components via the Beveridge-Nelson decomposition,

$$y_t = z_t + s_t$$
$$(1-L)\,z_t = Z_b(1)\,\varepsilon_t$$
$$s_t = Z_b^{*}(L)\,\varepsilon_t; \qquad b^{*}_j = -\sum_{k=j+1}^{\infty} b_k$$

where $z_t$ is a pure random walk, while $s_t$ is stationary in levels. Equivalently, we express the given moving average representation as

$$Z_b(L) = Z_b(1) + (1-L)\,Z_b^{*}(L)$$

and then integrating is straightforward.

Now, if $Z_b(1) = 0$, we can find an integral representation with stationary $y_t$. If not, there is an interesting stationary + random walk representation, and we get as much of a level-stationary component as possible.

Thus, finding a level representation is nearly the opposite of the summing or differencing operation used to find a differential representation. Unsurprisingly, we have to worry about a constant of integration.

The Beveridge-Nelson trend has the property

$$z_t = \lim_{k\to\infty} E_t\,y_{t+k}$$

This property can be used to derive the decomposition.

$$z_t = y_t + \lim_{K\to\infty}\sum_{k=1}^{K} E_t\,\Delta y_{t+k} = y_t + E_t\,\Delta y_{t+1} + E_t\,\Delta y_{t+2} + \ldots$$
$$= y_t + \left(b_1\varepsilon_t + b_2\varepsilon_{t-1} + b_3\varepsilon_{t-2} + \ldots\right) + \left(b_2\varepsilon_t + b_3\varepsilon_{t-1} + \ldots\right) + \ldots$$
$$= y_t + \left(\sum_{j=1}^{\infty} b_j\right)\varepsilon_t + \left(\sum_{j=2}^{\infty} b_j\right)\varepsilon_{t-1} + \ldots$$

We can also use the Hansen-Sargent prediction formula (17) with $\beta = 1$ to obtain this result without doing big sums:

$$E_t\sum_{j=1}^{\infty}\Delta y_{t+j} = \left[\frac{Z_b(L) - Z_b(1)}{L-1}\right]\varepsilon_t$$


Define

$$s_t \equiv -\left[\left(\sum_{j=1}^{\infty} b_j\right)\varepsilon_t + \left(\sum_{j=2}^{\infty} b_j\right)\varepsilon_{t-1} + \ldots\right] = \sum_{j=0}^{\infty} b^{*}_j\,\varepsilon_{t-j}; \qquad b^{*}_j = -\sum_{k=j+1}^{\infty} b_k$$

You can see that if the $\{b_j\}$ are well behaved, i.e. if $\sum_{j=0}^{\infty} b_j^2 < \infty$, then so are the $\{b^{*}_j\}$, so $s_t$ is a stationary process. So now we have defined

$$y_t = z_t + s_t$$

and $s_t$ is stationary. It remains to characterize $z_t$. Note that

$$s_t - s_{t-1} = -\left[\left(\sum_{j=1}^{\infty} b_j\right)\varepsilon_t + \left(\sum_{j=2}^{\infty} b_j\right)\varepsilon_{t-1} + \ldots\right] + \left[\left(\sum_{j=1}^{\infty} b_j\right)\varepsilon_{t-1} + \left(\sum_{j=2}^{\infty} b_j\right)\varepsilon_{t-2} + \ldots\right]$$
$$= -\left(\sum_{j=1}^{\infty} b_j\right)\varepsilon_t + b_1\varepsilon_{t-1} + b_2\varepsilon_{t-2} + \ldots = -\left(\sum_{j=0}^{\infty} b_j\right)\varepsilon_t + \sum_{j=0}^{\infty} b_j\varepsilon_{t-j} = \left[Z_b(L) - Z_b(1)\right]\varepsilon_t$$

Examine the first difference of the trend:

$$(1-L)\,z_t = (1-L)\,y_t - (1-L)\,s_t = Z_b(L)\,\varepsilon_t - \left[Z_b(L) - Z_b(1)\right]\varepsilon_t = Z_b(1)\,\varepsilon_t = \left(\sum_{j=0}^{\infty} b_j\right)\varepsilon_t$$

so $z_t$ is a pure random walk.
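A numerical sketch of the Beveridge-Nelson split (the ARMA(1,1) specification for $\Delta y$ and all parameter values are illustrative): build the MA weights $b_j$ of $\Delta y$, form $Z_b(1)$ and $b^{*}_j = -\sum_{k>j} b_k$, and check that the implied trend has increments $Z_b(1)\,\varepsilon_t$.

```python
import numpy as np

rng = np.random.default_rng(2)
rho, theta, n, J = 0.7, 0.3, 5000, 400

# Delta y_t = Z_b(L) eps_t with Z_b(L) = (1 + theta L)/(1 - rho L):
# MA weights b_0 = 1, b_j = (rho + theta) rho^{j-1} for j >= 1 (truncated at J terms).
b = np.concatenate(([1.0], (rho + theta) * rho ** np.arange(J - 1)))
Zb1 = b.sum()                               # random-walk loading Z_b(1)
b_star = -(Zb1 - np.cumsum(b))              # b*_j = -sum_{k>j} b_k

eps = rng.standard_normal(n)
dy = np.convolve(eps, b)[:n]                # Delta y_t, with eps before t=0 set to zero
y = np.cumsum(dy)
s = np.convolve(eps, b_star)[:n]            # stationary component s_t
z = y - s                                   # Beveridge-Nelson trend z_t

# Trend increments should equal Z_b(1) * eps_t: a pure random walk.
print(np.allclose(np.diff(z), Zb1 * eps[1:]))
```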

    4.2.2 Continuous time

Our objective is to start with a specification for differences,

$$dy_t = \left[\int_{s=0}^{\infty} b(s)\,dz_{t-s}\right]dt + \sigma\,dz_t \qquad (14)$$
$$dy_t = \left[\mathcal{L}_b(D)\,dt + \sigma\right]dz_t$$

and obtain the corresponding specification for levels,

$$y_t = \int_{s=0}^{\infty} c(s)\,dz_{t-s} = \mathcal{L}_c(D)\,dz_t$$

The fly in the ointment, as in discrete time, is that the process may not be stationary in levels, so the latter integral doesn't make sense. As an extreme example, if you started with

$$dy_t = \sigma\,dz_t$$

you can't invert that to

$$y_t = \sigma\int_{s=0}^{\infty} dz_{t-s}$$


because the latter integral doesn't converge. We usually leave it as

$$y_t = y_0 + \sigma\int_{s=0}^{t} dz_s$$

As in this example, one can still get the right answers by ignoring the nonstationarity, and thinking of a process that starts at time 0 with $dz_t = 0$ for all $t < 0$. (Hansen and Sargent (1993), last paragraph.)

Alternatively, we can handle this as in discrete time with the continuous-time Beveridge-Nelson decomposition. From (14), write

$$y_t = z^{*}_t + s_t$$

where $z^{*}_t$ is a pure (scaled) Brownian motion,

$$dz^{*}_t = \left[\sigma + \int_{s=0}^{\infty} b(s)\,ds\right]dz_t = \left[\sigma + \mathcal{L}_b(0)\right]dz_t$$

and $s_t$ is stationary,

$$s_t = \int_{s=0}^{\infty} b^{*}(s)\,dz_{t-s} = \mathcal{L}_{b^{*}}(D)\,dz_t; \qquad b^{*}(s) = -\int_{u=0}^{\infty} b(s+u)\,du$$

Equivalently, we can express this result as a decomposition of the original operator,

$$\mathcal{L}_b(D) + \sigma = \left[\sigma + \mathcal{L}_b(0)\right] + D\,\mathcal{L}_{b^{*}}(D)$$

This integration is almost exactly the opposite of the differentiation I used to find differences from levels in (10). As usual, there is a constant of integration to worry about.

Mirroring the Beveridge-Nelson construction, define

$$z^{*}_t = \lim_{T\to\infty} E_t\,y_{t+T} = y_t + E_t\int_{s=0}^{\infty} dy_{t+s}$$
$$= y_t + \int_{s=0}^{\infty} E_t\left[\int_{u=0}^{\infty} b(u)\,dz_{t+s-u}\right]ds = y_t + \int_{s=0}^{\infty}\left[\int_{u=0}^{\infty} b(s+u)\,du\right]dz_{t-s}$$

or, $y_t = z^{*}_t + s_t$, where

$$s_t \equiv -\int_{s=0}^{\infty}\left[\int_{u=0}^{\infty} b(s+u)\,du\right]dz_{t-s}$$

Now, we find the innovations to $z^{*}_t$, $dz^{*}_t = dy_t - ds_t$.


To evaluate the latter term, differentiate $s_t$ using (10), with $b^{*\prime}(s) = b(s)$ and $b^{*}(0) = -\int_{u=0}^{\infty} b(u)\,du = -\mathcal{L}_b(0)$:

$$ds_t = \left[\int_{s=0}^{\infty} b^{*\prime}(s)\,dz_{t-s}\right]dt + b^{*}(0)\,dz_t = \left[\int_{s=0}^{\infty} b(s)\,dz_{t-s}\right]dt - \mathcal{L}_b(0)\,dz_t$$

Hence,

$$dz^{*}_t = dy_t - ds_t = \left[\int_{s=0}^{\infty} b(s)\,dz_{t-s}\right]dt + \sigma\,dz_t - \left[\int_{s=0}^{\infty} b(s)\,dz_{t-s}\right]dt + \mathcal{L}_b(0)\,dz_t = \left[\sigma + \mathcal{L}_b(0)\right]dz_t$$

We could also use the Hansen-Sargent formula (21),

$$E_t\int_{s=0}^{\infty} e^{-rs}\,dy_{t+s} = \left[\frac{\mathcal{L}_b(r) - \mathcal{L}_b(D)}{D-r}\right]dz_t$$

with $r = 0$ to arrive at

$$E_t\int_{s=0}^{\infty} dy_{t+s} = \left[\frac{\mathcal{L}_b(0) - \mathcal{L}_b(D)}{D}\right]dz_t$$

which also leads to

$$E_t\int_{s=0}^{\infty} dy_{t+s} = \int_{s=0}^{\infty}\left[\int_{u=0}^{\infty} b(s+u)\,du\right]dz_{t-s}$$

    5 Impulse-response function

The discrete-time moving average representation is convenient because it is the impulse-response function. In

$$x_t = \sum_{j=0}^{\infty} b_j\,\varepsilon_{t-j} = Z_b(L)\,\varepsilon_t$$

the terms $b_j$ measure the response of $x_{t+j}$ to a shock $\varepsilon_t$,

$$(E_t - E_{t-1})\,x_{t+j} = b_j\,\varepsilon_t$$

In particular,

$$b_0 = Z_b(0)$$

is the impact multiplier,

$$Z_b(1) = \sum_{j=0}^{\infty} b_j$$

gives the cumulative response, the response of $\sum_{j=0}^{\infty} x_{t+j}$ to a shock, and

$$b_\infty = \lim_{j\to\infty} b_j$$

gives the final response, which needs to be zero for a stationary process.


In continuous time,

$$x_t = \int_{s=0}^{\infty} b(s)\,dz_{t-s}$$

$b(s)$ again gives an impulse-response function, namely how expectations at $t$ about $x_{t+s}$ are affected by the shock $dz_t$. The concept $\lim_{\Delta\to 0}(E_{t+\Delta} - E_t)\,x_{t+s}$ is not precise, but the response-to-shock interpretation is fairly clear. A more precise sense of $\lim_{\Delta\to 0}(E_{t+\Delta} - E_t)\,x_{t+s}$ can be obtained by defining

$$f_t \equiv E_t(x_{t+s}) = \int_{u=0}^{\infty} b(s+u)\,dz_{t-u}$$

Then, following the same logic as in (10),

$$df_t = \left[\int_{u=0}^{\infty} b'(s+u)\,dz_{t-u}\right]dt + b(s)\,dz_t$$

Here you see directly what it means to say that $b(s)$ is the shock to today's expectations of $x_{t+s}$.

Transforming to differences, (10) also gives a better sense of $b(0)$ as the impact response of $x_t$ to a shock,

$$dx_t = \left[\int_{s=0}^{\infty} b'(s)\,dz_{t-s}\right]dt + b(0)\,dz_t$$

As in discrete time,

$$x_{t+1} = b_0\,\varepsilon_{t+1} + \sum_{j=0}^{\infty} b_{j+1}\,\varepsilon_{t-j}$$

the innovations in $x$ and in $\Delta x$ (i.e. $x_{t+1} - E_t x_{t+1}$ and $\Delta x_{t+1} - E_t\Delta x_{t+1}$) are the same.

We can recover the impact multiplier from the lag operator function from

$$b(0) = \lim_{D\to\infty} D\,\mathcal{L}_b(D) \qquad (15)$$

This is the analogue to (8), $b_0 = Z_b(0)$, in discrete time. (This is the initial value theorem of Laplace transforms.) To derive this formula, take the limit of both sides of (13),

$$D\,\mathcal{L}_b(D) = b(0) + \mathcal{L}_{b'}(D)$$

and note that

$$\lim_{D\to\infty}\mathcal{L}_{b'}(D) = \lim_{D\to\infty}\int_{s=0}^{\infty} e^{-sD}\,b'(s)\,ds = 0.$$

Informally, for very large $D$, the function $e^{-sD}$ drops off so quickly that only $b(0)$ survives,

$$\lim_{D\to\infty} D\,\mathcal{L}_b(D) \approx \lim_{D\to\infty} D\,b(0)\int_{s=0}^{\infty} e^{-sD}\,ds = \lim_{D\to\infty} D\,b(0)\,\frac{1}{D} = b(0)$$

One requirement for stationarity is that the moving average weights tail off,

$$\lim_{s\to\infty} b(s) = 0$$


(Actually we need $\int_{s=0}^{\infty} b^2(s)\,ds < \infty$, which is stronger.) The final value theorem states

$$b(\infty) = \lim_{D\to 0} D\,\mathcal{L}_b(D)$$

To see this result, simply examine

$$\lim_{D\to 0} D\int_{s=0}^{\infty} e^{-sD}\,b(s)\,ds$$

We also want the equivalent of $Z_b(1)$ in discrete time,

$$\mathcal{L}_b(0) = \int_{s=0}^{\infty} b(s)\,ds$$

In continuous time, the specification for differences is not quite the same as for levels (it is not just $dy_t = \int_{s=0}^{\infty} b(s)\,dz_{t-s}$ in place of $y_t = \int_{s=0}^{\infty} b(s)\,dz_{t-s}$), so it's worth writing down the results for

$$dy_t = \left[\int_{s=0}^{\infty} b(s)\,dz_{t-s}\right]dt + \sigma\,dz_t = \left[\mathcal{L}_b(D)\,dt + \sigma\right]dz_t$$

In this case, the impact multiplier is $\sigma$, and the cumulative multiplier is, from the Beveridge-Nelson decomposition,

$$\sigma + \int_{s=0}^{\infty} b(s)\,ds = \sigma + \mathcal{L}_b(0)$$
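A tiny numerical sketch of the initial and final value theorems and the cumulative multiplier for the exponential kernel $b(s) = e^{-\phi s}$, for which $\mathcal{L}_b(D) = 1/(D+\phi)$; the parameter values are illustrative.

```python
phi = 0.5
Lb = lambda D: 1.0 / (D + phi)           # Laplace transform of b(s) = exp(-phi s)

for D in (1e3, 1e6):                      # initial value theorem: D*Lb(D) -> b(0) = 1
    print("D*Lb(D) at large D:", D * Lb(D))
for D in (1e-3, 1e-6):                    # final value theorem: D*Lb(D) -> b(inf) = 0
    print("D*Lb(D) at small D:", D * Lb(D))
print("cumulative multiplier Lb(0) =", Lb(0.0), "= 1/phi =", 1 / phi)
```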

    6 Hansen-Sargent formulas

Here is one great use of the operator notation, and the application that drove me to figure all this out and write it up. Given a process $x_t$, how do you calculate

$$E_t\int_{s=0}^{\infty} e^{-rs}\,x_{t+s}\,ds\,?$$

This is an operation we run into again and again in asset pricing.

    6.1 Discrete time

Hansen and Sargent (1980) gave an elegant answer to this question in discrete time. You want to calculate $E_t\sum_{j=0}^{\infty}\beta^j x_{t+j}$. You are given a moving average representation,

$$x_t = Z_b(L)\,\varepsilon_t$$

(Here and below, $\varepsilon_t$ can be a vector of shocks, which considerably generalizes the range of processes you can write down.) The answer is that the moving average representation of the expected discounted sum is

$$E_t\sum_{j=0}^{\infty}\beta^j x_{t+j} = \left[\frac{L\,Z_b(L) - \beta\,Z_b(\beta)}{L-\beta}\right]\varepsilon_t = \left[\frac{1}{1-\beta L^{-1}}\,Z_b(L) - \frac{\beta L^{-1}}{1-\beta L^{-1}}\,Z_b(\beta)\right]\varepsilon_t \qquad (16)$$


Hansen and Sargent give the first form. The second form is a bit less pretty but shows a bit more clearly what you're doing. $Z_b(\beta)$ is just a number. $\frac{1}{1-\beta L^{-1}} = \sum_{j=0}^{\infty}\beta^j L^{-j}$ takes the forward sum, so $\frac{1}{1-\beta L^{-1}}Z_b(L)\,\varepsilon_t$ is the actual, ex-post value whose expectation we seek. But that expression would leave you many terms in $\varepsilon_{t+j}$. The second term, $\frac{\beta L^{-1}}{1-\beta L^{-1}}Z_b(\beta)\,\varepsilon_t$, subtracts off all the $\varepsilon_{t+j}$ terms, leaving only time-$t$ and earlier terms, which thus is the conditional expectation. (Shown below.)

    6.1.1 AR(1) example

We start with

$$x_t = Z_b(L)\,\varepsilon_t = (1-\rho L)^{-1}\,\varepsilon_t$$

Then $E_t\sum_{j=0}^{\infty}\beta^j x_{t+j}$ follows:

$$E_t\sum_{j=0}^{\infty}\beta^j x_{t+j} = \left[\frac{L\,\frac{1}{1-\rho L} - \beta\,\frac{1}{1-\rho\beta}}{L-\beta}\right]\varepsilon_t = \frac{1}{1-\rho\beta}\left[\frac{L(1-\rho\beta) - \beta(1-\rho L)}{(L-\beta)(1-\rho L)}\right]\varepsilon_t = \frac{1}{1-\rho\beta}\,\frac{1}{1-\rho L}\,\varepsilon_t = \frac{x_t}{1-\rho\beta}$$
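A numerical sanity check of (16), under an illustrative set of MA weights: the conditional expectation of the discounted sum has brute-force MA weights $c_k = \sum_j\beta^j b_{j+k}$, and these should match the power-series coefficients of the Hansen-Sargent ratio $(L\,Z_b(L) - \beta Z_b(\beta))/(L-\beta)$.

```python
import numpy as np

beta, J = 0.95, 300
rho, theta = 0.8, 0.4
# Example MA weights for x_t = Z_b(L) eps_t, an ARMA(1,1): b_0 = 1, b_j = (rho+theta) rho^{j-1}
b = np.concatenate(([1.0], (rho + theta) * rho ** np.arange(J - 1)))

# Brute force: E_t sum_j beta^j x_{t+j} = sum_k c_k eps_{t-k}, with c_k = sum_j beta^j b_{j+k}
betas = beta ** np.arange(J)
c_brute = np.array([betas[: J - k] @ b[k:] for k in range(J)])

# Hansen-Sargent: expand (L Z(L) - beta Z(beta)) / (L - beta) as a power series in L.
# Matching coefficients gives c_0 = Z(beta) and c_k = (c_{k-1} - b_{k-1}) / beta.
Zbeta = betas @ b
c_hs = np.empty(J)
c_hs[0] = Zbeta
for k in range(1, J):
    c_hs[k] = (c_hs[k - 1] - b[k - 1]) / beta

print(np.max(np.abs(c_hs[:200] - c_brute[:200])))   # tiny (truncation / rounding only)
```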

    6.1.2 A prettier formula and the impact multiplier

The formula is even prettier if we start one period ahead, as often happens in finance:

$$E_t\sum_{j=1}^{\infty}\beta^{j-1} x_{t+j} = \left[\frac{Z_b(L) - Z_b(\beta)}{L-\beta}\right]\varepsilon_t \qquad (17)$$

Just subtract $x_t = Z_b(L)\,\varepsilon_t$ from (16) and divide by $\beta$. This version turns out to look exactly like the continuous-time formula below.

We often want the impact multiplier: how much does a price react to a shock? If $x_t = Z_b(L)\,\varepsilon_t$, then the impact multiplier is

$$(E_t - E_{t-1})\,x_t = Z_b(0)\,\varepsilon_t = b_0\,\varepsilon_t$$

Applying that idea to the Hansen-Sargent formula (16),

$$(E_t - E_{t-1})\sum_{j=0}^{\infty}\beta^j x_{t+j} = Z_b(\beta)\,\varepsilon_t \qquad (18)$$

This formula is particularly lovely because you don't have to construct, factor, or invert any lag polynomials. Suppose you start with an autoregressive representation

$$Z_a(L)\,x_t = \varepsilon_t$$

Then, you can immediately find

$$(E_t - E_{t-1})\sum_{j=0}^{\infty}\beta^j x_{t+j} = \left[Z_a(\beta)\right]^{-1}\varepsilon_t$$


    6.1.3 Derivation

To keep this note self-contained, here is a verification of the Hansen-Sargent formula. I simply write out $\sum_{j=0}^{\infty}\beta^j x_{t+j}$, apply the annihilation operator to take conditional expectations by eliminating all $\varepsilon_{t+j}$, and then verify that Hansen and Sargent's operator yields the same expression.

$$\frac{1}{1-\beta L^{-1}}\,Z_b(L)\,\varepsilon_t = \sum_{j=0}^{\infty}\beta^j x_{t+j} =$$
$$\phantom{+\beta^1}\left(b_0\varepsilon_t + b_1\varepsilon_{t-1} + b_2\varepsilon_{t-2} + b_3\varepsilon_{t-3} + \ldots\right)$$
$$+ \beta\left(b_0\varepsilon_{t+1} + b_1\varepsilon_t + b_2\varepsilon_{t-1} + b_3\varepsilon_{t-2} + \ldots\right)$$
$$+ \beta^2\left(b_0\varepsilon_{t+2} + b_1\varepsilon_{t+1} + b_2\varepsilon_t + b_3\varepsilon_{t-1} + \ldots\right)$$
$$+ \beta^3\left(b_0\varepsilon_{t+3} + b_1\varepsilon_{t+2} + b_2\varepsilon_{t+1} + b_3\varepsilon_t + \ldots\right) + \ldots$$

Collecting terms, the coefficient on $\varepsilon_{t+3}$ is $\beta^3 Z_b(\beta)$, on $\varepsilon_{t+2}$ it is $\beta^2 Z_b(\beta)$, on $\varepsilon_{t+1}$ it is $\beta Z_b(\beta)$, and the remaining terms involve only $\varepsilon_t, \varepsilon_{t-1}, \ldots$ Now,

$$\frac{\beta L^{-1}}{1-\beta L^{-1}}\,Z_b(\beta)\,\varepsilon_t = \left(\beta L^{-1} + \beta^2 L^{-2} + \beta^3 L^{-3} + \ldots\right)Z_b(\beta)\,\varepsilon_t = \beta Z_b(\beta)\,\varepsilon_{t+1} + \beta^2 Z_b(\beta)\,\varepsilon_{t+2} + \beta^3 Z_b(\beta)\,\varepsilon_{t+3} + \ldots$$

neatly subtracts off all the forward terms $\varepsilon_{t+1}, \varepsilon_{t+2}$, etc.

    6.2 Continuous time

Now, let's translate this idea to continuous time. Hansen and Sargent (1991) show that if we express a process in moving-average form,

$$x_t = \int_{s=0}^{\infty} b(s)\,dz_{t-s} = \mathcal{L}_b(D)\,dz_t$$

then we can find the moving average representation of the expected discounted value by

$$E_t\int_{s=0}^{\infty} e^{-rs}\,x_{t+s}\,ds = \left[\frac{\mathcal{L}_b(r) - \mathcal{L}_b(D)}{D-r}\right]dz_t \qquad (19)$$

You can see that the formula is almost exactly the same as the discrete-time formula (17).

The pieces work as in discrete time. The operator

$$\frac{1}{r-D} = \int_{s=0}^{\infty} e^{-(r-D)s}\,ds$$

takes the discounted forward integral; subtracting off the $\mathcal{L}_b(r)$ term removes all the terms by which the discounted sum depends on future realizations $dz_{t+s}$, leaving an expression that only depends on the past, and hence is the conditional expectation.

Let's do the AR(1) example in continuous time.

$$(D+\phi)\,x_t = \frac{dz_t}{dt}$$


$$\mathcal{L}_b(D) = \frac{1}{D+\phi}; \qquad \mathcal{L}_b(r) = \frac{1}{r+\phi}$$
$$E_t\int_{s=0}^{\infty} e^{-rs}\,x_{t+s}\,ds = \left[\frac{\frac{1}{r+\phi} - \frac{1}{D+\phi}}{D-r}\right]dz_t = \left[\frac{1}{D-r}\cdot\frac{(D+\phi)-(r+\phi)}{(r+\phi)(D+\phi)}\right]dz_t = \frac{1}{r+\phi}\cdot\frac{1}{D+\phi}\,dz_t$$
$$= \frac{1}{r+\phi}\int_{s=0}^{\infty} e^{-\phi s}\,dz_{t-s} = \frac{x_t}{r+\phi}$$

So we recover the same result as in discrete time.
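A Monte Carlo sketch of this result (illustrative parameters, Euler discretization): regressing realized discounted integrals of $x$ on $x_t$ should recover a slope near $1/(r+\phi)$, since the forecast error is orthogonal to $x_t$.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)
phi, r, dt = 0.5, 0.05, 0.01
n, horizon = 400_000, 2_000            # horizon*dt = 20 time units of discounting

dz = np.sqrt(dt) * rng.standard_normal(n)
x = lfilter([1.0], [1.0, -(1.0 - phi * dt)], dz)   # discretized dx = -phi x dt + dz

disc = np.exp(-r * dt * np.arange(horizon)) * dt
# realized discounted integral of x_{t+s}, truncated at the horizon
pv = np.array([disc @ x[t:t + horizon] for t in range(0, n - horizon, 50)])
xt = x[np.arange(0, n - horizon, 50)]

slope = np.polyfit(xt, pv, 1)[0]       # regression of realized PV on x_t
print("regression slope:", slope, "  theory 1/(r+phi):", 1 / (r + phi))
```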

The impact, or $(E_t - E_{t-1})$, counterpart to (18) is found as we found impact multipliers above. Start with

$$p_t = E_t\int_{s=0}^{\infty} e^{-rs}\,x_{t+s}\,ds = \left[\frac{\mathcal{L}_b(r) - \mathcal{L}_b(D)}{D-r}\right]dz_t$$

The impact multiplier is

$$\lim_{D\to\infty} D\left[\frac{\mathcal{L}_b(r) - \mathcal{L}_b(D)}{D-r}\right] = \mathcal{L}_b(r) \qquad (20)$$

$D\,\mathcal{L}_b(D)$ approaches a finite limit, $b(0)$, so that term contributes nothing in the limit. Thus,

$$\lim_{\Delta\to 0}\,(E_{t+\Delta} - E_t)\int_{s=0}^{\infty} e^{-rs}\,x_{t+s}\,ds = \mathcal{L}_b(r)\,dz_t$$

or, more precisely, $dp_t = (\text{drift})\,dt + \mathcal{L}_b(r)\,dz_t$.

As in discrete time, this is a lovely formula, because one may be able to find $\mathcal{L}_b(r)$ (say, from an autoregressive representation) without knowing the whole $\mathcal{L}_b(D)$ function.

In continuous time we have to adapt a little more to levels and first differences. Suppose we start instead with

$$dy_t = \left[\int_{s=0}^{\infty} b(s)\,dz_{t-s}\right]dt + \sigma\,dz_t = \left[\sigma + \mathcal{L}_b(D)\,dt\right]dz_t$$

Then, since the $\sigma$ term cancels, (19) suggests again

$$E_t\int_{s=0}^{\infty} e^{-rs}\,dy_{t+s} = \left[\frac{\mathcal{L}_b(r) - \mathcal{L}_b(D)}{D-r}\right]dz_t \qquad (21)$$

Checking that this is correct,

$$E_t\int_{s=0}^{\infty} e^{-rs}\,dy_{t+s} = E_t\int_{s=0}^{\infty} e^{-rs}\left[\int_{u=0}^{\infty} b(u)\,dz_{t+s-u}\right]ds + E_t\int_{s=0}^{\infty} e^{-rs}\,\sigma\,dz_{t+s}$$

you can see that the last term vanishes, and we're back at the same expression as before.

    6.2.1 Derivation

One can derive the Hansen-Sargent formulas mirroring the approach of discrete time. Express the ex-post forward-looking present value, then notice that the second half of the Hansen-Sargent formula neatly eliminates all the forward-looking terms:

$$\int_{s=0}^{\infty} e^{-rs}\,x_{t+s}\,ds = \int_{s=0}^{\infty} e^{-rs}\left[\int_{u=0}^{\infty} b(u)\,dz_{t+s-u}\right]ds = \frac{1}{r-D}\,\mathcal{L}_b(D)\,dz_t$$


We transform to an integral over $v = u - s$, the lag relative to date $t$, which counts each $dz_{t-v}$ once. Looking at date $t+s$ and lag $u$ into the past, $v \ge -s$; if $v \ge 0$ the shock is in the past at date $t$, and if $v < 0$ it is in the future.

$$\int_{s=0}^{\infty} e^{-rs}\,x_{t+s}\,ds = \int_{v=-\infty}^{\infty}\left[\int_{s=\max(0,-v)}^{\infty} e^{-rs}\,b(s+v)\,ds\right]dz_{t-v}$$
$$= \int_{v=0}^{\infty}\left[\int_{s=0}^{\infty} e^{-rs}\,b(s+v)\,ds\right]dz_{t-v} + \int_{v=-\infty}^{0}\left[\int_{s=-v}^{\infty} e^{-rs}\,b(s+v)\,ds\right]dz_{t-v}$$

In the second term, change variables to $w = -v$ and $u = s - w$:

$$\int_{v=-\infty}^{0}\left[\int_{s=-v}^{\infty} e^{-rs}\,b(s+v)\,ds\right]dz_{t-v} = \int_{w=0}^{\infty} e^{-rw}\left[\int_{u=0}^{\infty} e^{-ru}\,b(u)\,du\right]dz_{t+w} = \mathcal{L}_b(r)\int_{w=0}^{\infty} e^{-rw}\,dz_{t+w} = \frac{\mathcal{L}_b(r)}{r-D}\,dz_t$$

Taking expectations, we just lose all the forward-looking $dz_{t+w}$ terms, so the first term is the expected value and we have

$$\frac{\mathcal{L}_b(D)}{r-D}\,dz_t = \frac{\mathcal{L}_b(r)}{r-D}\,dz_t + E_t\int_{s=0}^{\infty} e^{-rs}\,x_{t+s}\,ds$$

i.e.

$$E_t\int_{s=0}^{\infty} e^{-rs}\,x_{t+s}\,ds = \left[\frac{\mathcal{L}_b(r) - \mathcal{L}_b(D)}{D-r}\right]dz_t$$

    7 Autoregressive representations

ARMA models are convenient, tractable forms in discrete time. Here, I develop a similar class of simple models in continuous time.

An autoregressive process in continuous time consists of a regression of $dy_t$ on lagged $dy$,

$$dy_t = -\left[\int_{s=0}^{\infty} a(s)\,dy_{t-s}\right] + dz_t \qquad (22)$$
$$\left[1 + \mathcal{L}_a(D)\right]dy_t = dz_t$$

or, using (13), a regression on lagged $y_t$ itself,

$$dy_t = -\left[a(0)\,y_t + \int_{s=0}^{\infty} a'(s)\,y_{t-s}\,ds\right]dt + dz_t \qquad (23)$$
$$\left[D + a(0) + \mathcal{L}_{a'}(D)\right]y_t = \frac{dz_t}{dt}$$

This definition is important, for the Wold representation is defined by inverting an autoregression.


    7.1 How not to define AR models

You might be tempted to try to invert

$$x_t = \int_{s=0}^{\infty} b(s)\,dz_{t-s}$$

to something like

$$\int_{s=0}^{\infty} a(s)\,x_{t-s}\,ds = \frac{dz_t}{dt}$$

as we do in discrete time, but you can see right away that it won't work. In continuous time, we will have to forecast even stationary time series such as the AR(1) in overdifferenced form.

We have the AR(1) process as an example,

$$dx_t + \phi x_t\,dt = dz_t$$
$$x_t = \int_{s=0}^{\infty} e^{-\phi s}\,dz_{t-s}$$
$$(D+\phi)\,x_t = \frac{dz_t}{dt}; \qquad x_t = \frac{1}{D+\phi}\,\frac{dz_t}{dt}$$

You might be tempted to write an AR(2) as follows:

$$(D+\phi_1)(D+\phi_2)\,x_t = \frac{dz_t}{dt}$$

This would not work. Look at the MA representation,

$$x_t = \frac{1}{(D+\phi_1)}\,\frac{1}{(D+\phi_2)}\,\frac{dz_t}{dt} = \frac{1}{\phi_2-\phi_1}\left[\frac{1}{D+\phi_1} - \frac{1}{D+\phi_2}\right]\frac{dz_t}{dt} = \frac{1}{\phi_2-\phi_1}\int_{s=0}^{\infty}\left(e^{-\phi_1 s} - e^{-\phi_2 s}\right)dz_{t-s}$$

We have $b(0) = 0$, so $dx_t$ is nonstochastic: no loading on $dz_t$. The $D^2$ operator takes a second derivative.

The problem is that as we go to continuous time, we do not want the second lag to contract towards the first. To derive the continuous-time analogue to the AR(2) we would start with

$$x_t = \rho_1\,x_{t-\Delta} + \rho_2\,x_{t-k} + \varepsilon_t$$
$$x_t - x_{t-\Delta} = -(1-\rho_1)\,x_{t-\Delta} + \rho_2\,x_{t-k} + \varepsilon_t$$

Then, take the limit $\Delta\to 0$ keeping the second lag $k$ fixed, obtaining a process in which $dx_t$ loads on the current level and on a discretely lagged level, for example

$$dx_t = -\phi\,x_t\,dt + \rho_2\,x_{t-k}\,dt + dz_t$$


We can do that, but the tractability is clearly lost, as inverting this lag operator will not be fun.

Thus, as we go to continuous time, the tractability of simple AR or MA models (past the AR(1)) disappears. The finite length of AR or MA lag polynomials is no longer a simplification. Though we can write finite-length processes such as

$$dx_t = -\left[\int_{s=0}^{k} a(s)\,x_{t-s}\,ds\right]dt + dz_t; \qquad x_t = \int_{s=0}^{k} b(s)\,dz_{t-s}$$

the finiteness of the AR or MA representation does not lead to easy inversion or manipulation as it does in discrete time.

    7.2 Adding AR(1)s to obtain more complex processes

The obvious way to think about forming more complex yet tractable linear processes is to add AR(1)s together:

$$x_t = \left[\frac{a}{D+\phi_1} + \frac{b}{D+\phi_2}\right]\frac{dz_t}{dt} = \int_{s=0}^{\infty}\left(a\,e^{-\phi_1 s} + b\,e^{-\phi_2 s}\right)dz_{t-s} \qquad (24)$$

So long as $a + b \neq 0$, this process will still be a diffusion, i.e., $dx_t$ will load on $dz_t$. Sums of exponentials give convenient, though not finite, and flexible moving average representations.

The obvious way to achieve an autoregressive representation is to introduce additional state variables,

$$x_t = x_{1t} + x_{2t}$$
$$dx_{1t} = -\phi_1\,x_{1t}\,dt + a\,dz_t; \qquad x_{1t} = a\int_{s=0}^{\infty} e^{-\phi_1 s}\,dz_{t-s}$$
$$dx_{2t} = -\phi_2\,x_{2t}\,dt + b\,dz_t; \qquad x_{2t} = b\int_{s=0}^{\infty} e^{-\phi_2 s}\,dz_{t-s}$$

and thus

$$dx_t = dx_{1t} + dx_{2t} = -\phi_1\,x_{1t}\,dt - \phi_2\,x_{2t}\,dt + (a+b)\,dz_t$$

This expression mirrors the way we often write an AR(2) in discrete time by adding $x_{t-1}$ as an additional state variable, and then treating the AR(2) as a vector AR(1).

However, by doing so we disguise the fact that $x_{1t}$ and $x_{2t}$ and the history $\{x_{t-s}\}$ are in the linear span of $\{dz_{t-s}\}$. Also, it is often convenient to use the autoregressive lag polynomials, for example in the Hansen-Sargent formulas.

For that reason, I work to write an autoregressive representation corresponding to (24).

The basic idea is straightforward. By partial-fractions-type algebra, any process of the form

$$x_t = \left[\frac{\theta_1}{D+\phi_1} + \frac{\theta_2}{D+\phi_2} + \frac{\theta_3}{D+\phi_3} + \ldots\right]\frac{dz_t}{dt}$$


is equivalent to a system of the form

$$\left[D + \eta_1 - \frac{\eta_2}{D+\xi_2} - \frac{\eta_3}{D+\xi_3} - \ldots\right]x_t = \theta\,\frac{dz_t}{dt}$$

    7.2.1 A two-component example

To make the ideas concrete, let's invert

$$x_t = \left[\frac{a}{D+\phi_1} + \frac{b}{D+\phi_2}\right]\frac{dz_t}{dt}$$

The answer, after some pleasant algebra, is

$$\left[D + \eta_2 - \frac{\lambda}{D+\eta_1}\right]x_t = (a+b)\,\frac{dz_t}{dt}$$

where

$$\eta_1 = \frac{a\phi_2 + b\phi_1}{a+b}; \qquad \eta_2 = \frac{a\phi_1 + b\phi_2}{a+b} = \phi_1+\phi_2-\eta_1; \qquad \lambda = \frac{ab\,(\phi_1-\phi_2)^2}{(a+b)^2}$$

We start by forming the operator polynomial,

$$x_t = \frac{a(D+\phi_2) + b(D+\phi_1)}{(D+\phi_1)(D+\phi_2)}\,\frac{dz_t}{dt} = (a+b)\,\frac{D+\eta_1}{(D+\phi_1)(D+\phi_2)}\,\frac{dz_t}{dt}$$

Denote

$$\eta_1 = \frac{a\phi_2 + b\phi_1}{a+b}$$

We invert,

$$\frac{(D+\phi_1)(D+\phi_2)}{D+\eta_1}\,x_t = (a+b)\,\frac{dz_t}{dt}$$

Now, we re-express the polynomial on the left in the form

$$\left[D + \eta_2 - \frac{\lambda}{D+\eta_1}\right]x_t = (a+b)\,\frac{dz_t}{dt}$$

This will result in an autoregressive representation

$$dx_t = -\eta_2\,x_t\,dt + \left[\lambda\int_{s=0}^{\infty} e^{-\eta_1 s}\,x_{t-s}\,ds\right]dt + (a+b)\,dz_t$$

Now lags of $x_t$ are useful in forecasting $dx_t$, just as they are in higher-order AR processes.

We just have to find $\eta_2$ and $\lambda$:

$$\frac{(D+\phi_1)(D+\phi_2)}{D+\eta_1} = \frac{D^2 + (\phi_1+\phi_2)D + \phi_1\phi_2}{D+\eta_1} = D + (\phi_1+\phi_2-\eta_1) + \frac{\phi_1\phi_2 - (\phi_1+\phi_2-\eta_1)\,\eta_1}{D+\eta_1}$$
$$= D + (\phi_1+\phi_2-\eta_1) + \frac{(\phi_1-\eta_1)(\phi_2-\eta_1)}{D+\eta_1}$$


In this case, I can make it a little prettier:

$$\phi_1 + \phi_2 - \eta_1 = \phi_1 + \phi_2 - \frac{a\phi_2 + b\phi_1}{a+b} = \frac{a\phi_1 + b\phi_2}{a+b} = \eta_2$$
$$\phi_1 - \eta_1 = \frac{a\,(\phi_1-\phi_2)}{a+b}; \qquad \phi_2 - \eta_1 = \frac{b\,(\phi_2-\phi_1)}{a+b}$$
$$(\phi_1-\eta_1)(\phi_2-\eta_1) = -\frac{ab\,(\phi_1-\phi_2)^2}{(a+b)^2} = -\lambda$$

Thus, writing out the result, we have the autoregressive process:

$$x_t = \left[\frac{a}{D+\phi_1} + \frac{b}{D+\phi_2}\right]\frac{dz_t}{dt} \;\Longleftrightarrow\; \left[D + \eta_2 - \frac{\lambda}{D+\eta_1}\right]x_t = (a+b)\,\frac{dz_t}{dt}$$

where

$$\eta_1 = \frac{a\phi_2 + b\phi_1}{a+b}; \qquad \eta_2 = \frac{a\phi_1 + b\phi_2}{a+b}; \qquad \lambda = \frac{ab\,(\phi_1-\phi_2)^2}{(a+b)^2}$$

or,

$$dx_t = -\eta_2\,x_t\,dt + \left[\lambda\int_{s=0}^{\infty} e^{-\eta_1 s}\,x_{t-s}\,ds\right]dt + (a+b)\,dz_t$$

The converse operation, AR to MA: suppose we start with

$$dx_t = -\eta_2\,x_t\,dt + \left[\lambda\int_{s=0}^{\infty} e^{-\eta_1 s}\,x_{t-s}\,ds\right]dt + dz_t$$
$$\left[D + \eta_2 - \frac{\lambda}{D+\eta_1}\right]x_t = \frac{(D+\eta_1)(D+\eta_2) - \lambda}{D+\eta_1}\,x_t = \frac{dz_t}{dt}$$

We invert,

$$x_t = \frac{D+\eta_1}{(D+\eta_1)(D+\eta_2) - \lambda}\,\frac{dz_t}{dt}$$

We simply expand, factor, and express the result in partial-fractions form. Defining $\lambda_1$ and $\lambda_2$ by

$$(D+\lambda_1)(D+\lambda_2) = (D+\eta_1)(D+\eta_2) - \lambda$$
$$0 = D^2 + (\eta_1+\eta_2)\,D + (\eta_1\eta_2 - \lambda)$$


$$\lambda_{1,2} = \frac{(\eta_1+\eta_2) \mp \sqrt{(\eta_1-\eta_2)^2 + 4\lambda}}{2}; \qquad \lambda_1 + \lambda_2 = \eta_1 + \eta_2; \qquad \lambda_1\lambda_2 = \eta_1\eta_2 - \lambda$$

then we have

$$x_t = \frac{D+\eta_1}{(D+\lambda_1)(D+\lambda_2)}\,\frac{dz_t}{dt} = \left[\frac{A}{D+\lambda_1} + \frac{B}{D+\lambda_2}\right]\frac{dz_t}{dt}$$

Matching coefficients, $A + B = 1$ and $A\lambda_2 + B\lambda_1 = \eta_1$, so

$$A = \frac{\eta_1-\lambda_1}{\lambda_2-\lambda_1}; \qquad B = \frac{\lambda_2-\eta_1}{\lambda_2-\lambda_1}$$

or, finally,

$$x_t = \frac{1}{\lambda_2-\lambda_1}\left[\frac{\eta_1-\lambda_1}{D+\lambda_1} + \frac{\lambda_2-\eta_1}{D+\lambda_2}\right]\frac{dz_t}{dt}$$
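Because this algebra is easy to get wrong, here is a symbolic sketch (assuming sympy is available) that the autoregressive operator with the stated $\eta_1$, $\eta_2$, $\lambda$ applied to the two-exponential moving average collapses to the constant $a+b$:

```python
import sympy as sp

D, a, b, p1, p2 = sp.symbols('D a b phi1 phi2')

eta1 = (a * p2 + b * p1) / (a + b)
eta2 = (a * p1 + b * p2) / (a + b)
lam = a * b * (p1 - p2) ** 2 / (a + b) ** 2

ma = a / (D + p1) + b / (D + p2)          # moving-average operator applied to dz/dt
ar = D + eta2 - lam / (D + eta1)           # candidate autoregressive operator

print(sp.simplify(ar * ma - (a + b)))      # -> 0, so AR operator * MA operator = (a + b)
```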

    8 Tricks

To expand lag polynomials and make any sense out of them, you have to reduce them to known forms like $1/(D+\phi)$. Here are some tricks.

Trick 1:

$$\frac{D}{D+\phi} = 1 - \frac{\phi}{D+\phi}$$

How much easier than differentiating the integral,

$$D\int_{s=0}^{\infty} e^{-\phi s}\,dz_{t-s} = \frac{dz_t}{dt} - \phi\int_{s=0}^{\infty} e^{-\phi s}\,dz_{t-s}$$

Similarly,

$$\frac{D+\alpha}{D+\phi} = 1 + \frac{\alpha-\phi}{D+\phi}$$

Trick 2: reducing powers,

$$\frac{(D+\alpha)(D+\beta)}{D+\phi} = D + \alpha + \beta - \phi + \frac{(\phi-\alpha)(\phi-\beta)}{D+\phi}$$

Trick 3: partial fractions,

$$\frac{1}{(D+\alpha)}\,\frac{1}{(D+\beta)} = \frac{1}{\beta-\alpha}\left[\frac{1}{D+\alpha} - \frac{1}{D+\beta}\right]$$

so an AR(2) is the same as the sum of two AR(1)s.
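Trick 3 is exactly a partial-fraction decomposition in $D$; a short symbolic sketch (assuming sympy is available):

```python
import sympy as sp

D, alpha, beta = sp.symbols('D alpha beta', positive=True)
expr = 1 / ((D + alpha) * (D + beta))
print(sp.apart(expr, D))   # expect the two-AR(1) split of Trick 3
```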


    9 A continuous-time asset pricing economy

I solve the linear-quadratic economy and find asset prices in continuous time. I then extend the model to include a habit/durability process. That problem is much easier in continuous time and using the Hansen-Sargent prediction formulas.

    9.1 Time-separable model

    The warmup problem is the standard quadratic utility permanent income world,

    max

    Z

    0

    1

    2

    ( +)

    2 s.t. (25)

    = ( + ) (26)

    = +

I find the equilibrium consumption process in level and differenced form,

    = +

    +

    =

    +

    Then, I find the price of the consumption stream,

    =

    Z

    =0

    +

    +

    The result is

    = 1

    2

    ( + )2

    or equivalently

    =

    Z

    =0+

    1

    2

    ( + )2

The first term is the risk-neutral term, as emphasized by its expansion in the last equality. The second term is a risk adjustment. The higher the variance, the lower the price. As consumption approaches the bliss point, the quadratic utility investor gets more risk averse. At the bliss point he doesn't want more consumption, so you can't induce him to take risk by giving him more consumption. So as consumption rises toward the bliss point, the price discount rises.

The price of the endowment stream has an identical risk correction,

    =

    Z

    =0

    +

    +

    =

    +

    1

    2

    ( + )2

    =

    Z

    =0+

    1

    2

    ( + )2


    9.1.1 Model solution

First, I show that the flow constraint (26) (together with limits on how fast capital can grow) implies the present value constraint

    Z0

    + = Z0

    + + (27)

"Present value" is in quotes because the discounted expected consumption stream is not the present value (price) of the risky consumption stream!

    To show this, write the flow budget constraint (26) as

    ( ) =

    =1

    ( )( )

    writing it out,

    = Z

    0

    + Z

    0

    +

    Applying to both sides, we obtain (27).

    Second, since the rate of return equals the discount rate, the basic asset pricing first ordercondition gives

    0 = 0+

    = ( +)

    = (+)

    and hence () = 0.

    Thus, we know that = 0 + (), with the latter loading depending on the resourceconstraint.

    Next, we substitute first order conditions = + and the income forecast + = into the resource constraint (27) to find the actual consumption process,

    Z

    =0 =

    Z

    =0 +

    =

    + +

    = +

    +

    This is the familiar permanent-income rule.

    To the random walk I just take differences

    = +

    +

    = ( + ) +

    + ( + )

    =

    +

    +

    +

    + ( + )


    =

    +

    The price of the consumption stream

    = Z

    =0

    + +

    By brute force,

    + = +

    +

    Z=0

    + =

    2+ =

    2 +

    +

    2

    =

    Z

    =0

    2

    +

    2

    = 1

    (

    )

    1

    +

    2 Z0

    Z

    0 =

    |0

    Z

    0

    =

    1

    2

    =

    1

    2

    ( + )2

    Similarly, we can find the price of the endowment stream, The price of the consumptionstream

    =

    Z

    =0

    +

    +

    By brute force,+ = +

    +

    Z=0

    + = +

    Z=0

    + =

    (++) = +

    2

    +

    !Z=0

    = +

    2

    +

    !1

    1

    = Z

    =0

    2

    +1

    (1 )

    =

    Z

    =0(+)

    2

    ( + )

    1

    Z

    =0

    1

    =

    +

    2

    ( + )

    1

    1

    1

    +

    =

    +

    1

    2

    ( + )2


    9.2 A model with habits or durability

I use a utility function with a temporal nonseparability. The quantity dynamics are solved by Heaton (1993), section 3.1, though I hope the notation here is a bit simpler. The consumer faces a linear capital accumulation technology and an AR(1) income stream. The consumer's problem is

    max{}

    1

    2

    Z

    =0

    Z

    =0 ()

    2

    = ( + )

    = +

    We can write the objective in operator form,

    1

    2

    Z

    =0 ( [1 + L()] )

    2 (28)

    The present-value form of the resource constraint is, as before,

    Z

    =0+ = +

    Z

    =0+ = +

    +

    (29)

We normally think of the nonseparability $h(s)$ as generating habit persistence or durability in consumption. If $h(s) > 0$, then past consumption contributes positively to current utility, as past durable goods purchases do. If $h(s) < 0$, then past consumption raises current marginal utility, as habit-forming goods do.

We can also use nonseparability for a different purpose: we can regard the nonseparable term as shifting the bliss point, which controls risk aversion. A larger value means current consumption is closer to the composite bliss point, so the consumer becomes more risk averse. Thus, temporal nonseparability allows us to control risk aversion.

Controlling risk aversion is useful for making the linear-quadratic model vaguely reasonable. One of its biggest problems is that higher consumption makes the consumer more risk averse, where in reality we think risk aversion is likely independent of wealth in the long run, and rises in consumption relative to the recent past may make people less risk averse, as in the Campbell-Cochrane model. By moving the bliss point, we can capture both ideas, and thus make the linear-quadratic model a more useful approximation.

As an example, I use a sum of two exponentials for the $h(s)$ function, so the problem is

    max{}

    1

    2Z

    =0

    ( )2 (30)

    =

    Z

    =0 (31)

    =

    Z

    =0 (32)

    In the operator notation of (28),

    L() =

    + +

    +


This formulation allows for both habit and durable effects. For example, consumption can be durable in the short run but induce habits in the long run, with coefficients of opposite sign and different decay rates. Having included two exponentials, the generalization to an arbitrary sum of exponentials is clear.

    Here are the major results: The consumption process follows

    [1 + L()] =

    + [1 + L()]

You can see the natural generalization from the time-separable case, in which $\mathcal{L}_h(D) = 0$. In the double-exponential case,

    = [ ( ) + ( )] +

    +

    1 +

    + +

    +

    (33)

Without nonseparabilities, consumption follows the familiar random walk. With nonseparabilities, marginal utility is still a random walk, but consumption is not, and adjusts towards or away from its recent past depending on the sign of the nonseparability.

    The price of the consumption stream is

    = 1[1 + L()]

    1 + L()L()

    ( )

    [1 + L()]

    [1 + L()]

    +

    2(34)

The first term is just the risk-neutral present value of the consumption stream, i.e. $E_t\int_{s=0}^{\infty} e^{-rs}\,c_{t+s}\,ds$. The second term in (34) is a discount for risk.

    In the double-exponential case the price of the consumption stream is

    =1

    +

    + +

    +

    1 + + +

    +

    1 +

    + +

    +

    +

    2

In either the general or the special formulas, the denominator in the second term is the interesting component. This term is still current marginal utility. This term shows us how risk premiums evolve over time. If consumption rises in a comparative-static sense, we still see risk aversion and the price discount rise. But now risk aversion is determined by consumption relative to the habit or durable stocks. This generalization can allow the model to produce more realistic time series.

The price of the endowment stream is, similarly,

    =

    +

    [1 + L()]

    [1 + L()]

    2

    ( + )2

    In the double-exponential case,

    =

    +

    h1 +

    + +

    +

    i [ + + ]

    2

    ( + )2

The first term is again the risk-neutral value,

    +

    =

    Z

    =0+

The second term reflects the same time-varying risk aversion as before, due to changing marginal utility at time $t$.


    9.2.1 Derivation

First-order conditions and consumption drift.

The consumer's first-order conditions state that marginal utility is a martingale,²

    +Z

    =0() =

    + +

    Z=0

    ()+ (35)

    Taking the limit, marginal utility follows a random walk

    +

    Z

    =0()

    = 0

    I.e., we know that consumption follows a classic autoregressive process3, written in either form(22) or (23),

    =

    (0) +

    Z

    =00() +

    = Z

    =0() +

2 This condition is the same whether the nonseparability is internal or external. If external, these are directly the first-order conditions, in equilibrium where individual = aggregate consumption. If internal, the marginal utility of consumption today includes its effects on future utility,

    =

    Z

    =0

    ()

    +

    Z

    =0

    +

    Z

    =0

    () +

    ()

    But then

    =

    +

    Z=0

    ()

    + Z=0

    +

    Z=0

    () +()

    =

    +

    Z

    =0

    () +

    +

    Z

    =0

    ++

    Z

    =0

    () ++

    ()

    Hypothesize that (35) holds, and we obtain

    Z

    =0

    ()

    1 +

    Z

    =0

    ()

    =

    +

    Z

    =0

    () +

    1 +

    Z

    =0

    ()

    3A brute-force verification:

    Z=0

    () = +

    Z=0

    ()+

    Z=

    ()+

    Z

    =0

    () =

    + (0)

    Z

    =0

    (+ )

    (+ ) = (0) +

    Z

    =0

    =

    (0) +

    Z

    =0

    0()


    In operator notation, both expressions are equivalent to

    [ + L()] = (36)

We don't know what the loading on the shock is; the resource constraint will tell us.

Resource constraint and consumption shocks. Using the Hansen-Sargent response-to-shock formula (20) and the representation (36), we

    have

    Z

    =0+ = () +

    1

    1

    1 + L()

    Differentiating the resource constraint (29), we obtain

    Z

    =0+ = +

    +

    =

    +

    +

    +

    +

    Comparing the two expressions, the impact multiplier satisfies

    1

    1 + L() =

    +

    Therefore,

    =

    + (1 + L())

    In sum, then, consumption follows the process

    =

    (0) +

    Z

    =00() +

    +

    1 +

    Z

    =0()

    =

    Z

    =0()

    +

    +

    1 +

    Z

    =0()

    or, in operator notation,

    [ + L()] =

    + [1 + L()]

    We can write the same result by characterizing the marginal utility process,

    +

    Z

    =0()

    =

    +

    1 +

    Z

    =0()

    (37)

    [1 + L()] =

    + [1 + L()] (38)

    Consumption process, exponential case

    Expressing the state in terms of and is useful. Directly, (37) (38) are

    ( + + ) =

    + [1 + L()] (39)

    If we want to study consumption growth, we can substitute from (38)

    1 +

    + +

    +

    =

    +

    1 +

    + +

    +


    +

    1

    +

    +

    1

    +

    =

    +

    1 +

    + +

    +

    Then, rewriting this in terms of the state variables and ,

    = [ ( ) + ( )] +

    + 1 +

    + +

    + (40)Price of the consumption stream

    The price of the consumption stream is

    =

    Z

    =0

    [+ +R

    =0 ()+]

    [ +R

    =0 ()]+

    The formula simplifies if we recognize that marginal utility follows a random walk. Then

    + +

    Z

    =0()+

    =

    +

    Z

    =0()

    +

    + (1 + L())

    Z=0

    +

    Substituting this result in the pricing formula, we have

    =

    Z

    =0+

    + [1 + L()]

    [1 + L()]

    Z

    =0

    Z=0

    +

    + (41)

Let's work on the first term. Using the Hansen-Sargent prediction formula and the operator expression for the consumption process,

    [ + L()] =

    + (1 + L())

    we have

    Z

    =0+ =

    +

    (1+L())[1+L()]

    (1+L())[1+L()]

    =

    +

    [1 + L()] [1 + L()]

    ( ) [1 + L()]

    =

    [1 + L()] [1 + L()]

    ( ) [1 + L()]

    =1

    [1 + L()]

    1 +

    L()L()

    ( )

I can't get further in general, so using the double-exponential functional form,

    Z

    =0+ =

    1h1 +

    + +

    +

    i1 +

    + +

    +

    + +

    +

    ( )

    =1h

    1 + + +

    +

    i"

    1 +2

    ( + ) ( + )+

    2

    (+ ) (+ )

    #

    =

    + (+) + (+)

    1 + + +

    +

    1


    Now for the second part. Start with the hard-looking part,

    Z

    =0

    Z=0

    +

Actually, this is easy, and the risk adjustment term is quite general. Start with any consumption process,

    =Z

    =0()

    +

    Z=0

    +

    =

    Z=0

    ()

    Z

    =0+

    Z=0

    +

    =

    Z

    =0

    Z=0

    ()

    =

    Z

    =0()

    Z

    =

    = Z

    =0()

    1

    =1

    L()

    Using our consumption process,

    =

    +

    [1 + L()]

    [1 + L()] =

    +

    h1 +

    + +

    +

    ih

    1 + + +

    +

    iwe have

    Z

    =0++ =

    +

    h1 +

    + +

    +

    i h1 +

    + +

    +i

    =

    +

    1

    Now, adding back the other terms of (41),

    + [1 + L()]

    [1 + L()]

    Z

    =0

    Z=0

    +

    +

    =

    + [1 + L()]

    [1 + L()]

    +

    1

    =[1 + L()]

    [1 + L()]

    +

    2

    Price of the endowment stream

    The price of the endowment stream is

    =

    Z

    =0

    [+ +R

    =0 ()+]

    [ +R

    =0 ()]+

    Working analogously, we have

    + +

    Z

    =0()+

    =

    +

    Z

    =0()

    +

    + (1 + L())

    Z=0


    =

    Z

    =0+

    + [1 + L()]

    [1 + L()]

    Z

    =0

    Z=0

    +

    +

    =

    Z

    =0

    + [1 + L()]

    [1 + L()]

    Z

    =0

    Z=0

    +

    Z

    =0+

    =

    +

    2

    + [1 + L()] [1 + L()]

    Z

    =0

    Z

    =0

    =

    +

    2

    + [1 + L()]

    [1 + L()]

    1

    1

    1

    +

    =

    +

    [1 + L()]

    [1 + L()]

    2

    ( + )2

    In the double-exponential case,

    = +

    h1 + +

    +

    +i

    [ + + ]

    2

    ( + )2

    10 References

Hansen, Lars Peter, and Thomas J. Sargent, 1980, "Formulating and Estimating Dynamic Linear Rational-Expectations Models," Journal of Economic Dynamics and Control 2: 7-46.

Hansen, Lars Peter, and Thomas J. Sargent, 1981, "A Note on Wiener-Kolmogorov Prediction Formulas for Rational Expectations Models," Economics Letters 8(3): 255-260.

Hansen, Lars Peter, and Thomas J. Sargent, 1991, "Prediction Formulas for Continuous-Time Linear Rational Expectations Models," Chapter 8 of Rational Expectations Econometrics. https://files.nyu.edu/ts43/public/books/TOMchpt.8.pdf

Heaton, John, 1993, "The Interaction Between Time-Nonseparable Preferences and Time Aggregation," Econometrica 61: 353-385. http://www.jstor.org/stable/2951555


    11 Lecture notes

Goal: translate the class of linear ARMA models and operator notation tricks to continuous time. Disclaimer: this is all in Hansen and Heaton, or so obvious to them that they didn't bother to write it down.

    Discrete time. AR(1)

    = 1 + =X

    =0

    Operators(1 ) =

    =1

    1 =

    X

    =0

    =

    X=0

    Our goal: This class of processes and operator tricks in continuous time. Linear processes.

    Continuous time AR(1)

    +1 = (1 ) + +1

    +1 = + +1

    = + =

    Z

    =0.

Task 1: learn to do the AR(1) with operators.

    Basic operators =

    This would let us do general MA

    =

    Z

    =0() =

    Z

    =0()

We don't do it this way. We need $D$:

    = lim0

    (+ )

    = lim0

    1

    = lim0

    log() 1

    = lim0

    log() log()

    1= log()

    =1

    = ; = log()

    Inverting the AR(1)

    + =

    ( + ) = =1

    +


    Interpret the last integral slowly

    =

    Z

    =0

    1

    =

    Z

    =0

    1

    =

    Z

    =0

    or = Z

    =0 = Z

    =0 = Z

    =0 = Z

    =0

Note we could use $L$, but it's easier to just stick with $D = -\log(L)$.

Lag polynomials, moving average representations, and functions more generally:

    =X

    =0

    = Z(); Z() =X

    =0

    =

    Z

    =0() = L(); L() =

    Z

    =0()

Agenda: $Z_b(L) = Z_a(L)^{-1}$; $Z_a(L) = Z_b(L)^{-1}$; how to do that in continuous time.

    Laplace transforms.

    =

    Z

    =0()

    the Laplace transform of this operation is defined as

    L() =

    Z

    =0()

where $D$ is a complex number. The Laplace transform of the mysterious lag operation $L^s x_t = x_{t-s}$ is

    =

    L() =

    Yes, they commute

    L()L() = L()L()

    Z()Z() = Z()Z()

    Moving averages and moments, discrete time:

    2() =

    X=0

    2

    2

    ( ) =

    X

    =0

    +

    2

    () =X

    =

    ( ) = Z()Z(

    )

    ( ) =1

    2

    Z

    ()


    Continuous time:

    2 () =Z

    =02()

    ( ) =

    Z

    =0()( + )

    () =Z

    =() = L()L()

    ( ) =1

    2

    Z

    () =1

    2

    Z

    L()L()

Levels to differences, moving average representations. (Continuous time uses $dx_t$ even when $x_t$ is stationary, as in the AR(1).)

    1 = 0 +X

    =1

    ( 1)

    =Z

    =0()

    + (0)

    Operators

    (1) = (1)Z() = [0 + Z()] = [Z(0) + Z()]

    = L() = [(0) + L0()] =

    lim

    (L()) + L0()

Part 1) An important trick: $D\,\mathcal{L}_b(D) = b(0) + \mathcal{L}_{b'}(D)$

is a property of Laplace transforms. Integrate by parts:

    L0() =

    Z

    =0

    ()

    = ()

    0+

    Z

    =0()

    = (0) +

    Z

    =0() = (0) + L()

    Part 2) recovering (0) from the lag polynomial. Wait just a minute.
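A quick numerical check of the integration-by-parts property (my own sketch; the kernel $a(s)=(1+s)e^{-s}$ and $\lambda=2$ are made-up examples, chosen so $a(0)=1$ and $a'(s)=-s\,e^{-s}$):

import numpy as np
from scipy.integrate import quad

a  = lambda s: (1 + s) * np.exp(-s)
da = lambda s: -s * np.exp(-s)
L  = lambda f, lam: quad(lambda s: np.exp(-lam * s) * f(s), 0, np.inf)[0]

lam = 2.0
print(lam * L(a, lam), a(0) + L(da, lam))   # the two sides should agree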

Differences to levels, MA representation. Issue: is the series difference-stationary or level-stationary? Operators with nonstationary series?
$$Dx_t=Dz_t$$
$$x_t-x_0=\int_{s=0}^{t}dz_s=z_t-z_0$$
$$x_t=z_t\,?$$
$$x_t=\frac{1}{D}\,Dz_t=\int_{s=0}^{t}dz_s$$
— ok if you think of $x_t=z_t-z_0$, $x_0=0$, $z_0=0$.


How do we handle $x_t$ that might have a unit root? Beveridge-Nelson decompositions. In discrete time (note $\Delta x_t$, not $x_t$),
$$(1-L)\,x_t=\Delta x_t=a(L)\,\varepsilon_t=\sum_{j=0}^{\infty}a_j\varepsilon_{t-j}$$
implies
$$x_t=\tau_t+c_t;\qquad(1-L)\,\tau_t=a(1)\,\varepsilon_t;\qquad c_t=a^*(L)\,\varepsilon_t,\quad a^*_j=-\sum_{k=j+1}^{\infty}a_k,$$
or, equivalently,
$$a(L)=a(1)+(1-L)\,a^*(L).$$
$\tau_t$ has the trend property
$$\tau_t=\lim_{k\to\infty}E_t\,x_{t+k}=x_t+\sum_{j=1}^{\infty}E_t\,\Delta x_{t+j}.$$
In continuous time,
$$dx_t=\left[\int_{s=0}^{\infty}a(s)\,dz_{t-s}\right]dt+\alpha\,dz_t=\left[\mathcal{L}_a(D)+\alpha\right]Dz_t$$
implies
$$x_t=\tau_t+c_t;\qquad d\tau_t=\left[\alpha+\int_{s=0}^{\infty}a(s)\,ds\right]dz_t=\left[\alpha+\mathcal{L}_a(0)\right]dz_t;$$
$$c_t=\int_{s=0}^{\infty}a^*(s)\,dz_{t-s}=\mathcal{L}_{a^*}(D)\,Dz_t,\qquad a^*(s)=-\int_{u=0}^{\infty}a(s+u)\,du,$$
or, equivalently,
$$\mathcal{L}_a(D)+\alpha=\left[\alpha+\mathcal{L}_a(0)\right]+D\,\mathcal{L}_{a^*}(D).$$
$\tau_t$ has the trend property
$$\tau_t=\lim_{s\to\infty}E_t\,x_{t+s}=x_t+E_t\int_{s=0}^{\infty}dx_{t+s}.$$
Also, $a^*(s)=-\int_{u=0}^{\infty}a(s+u)\,du$ is the obvious inverse of $a(s)=\dfrac{d}{ds}a^*(s)$.
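A small sketch of the discrete split $a(L)=a(1)+(1-L)a^*(L)$ (my own illustration; the truncated polynomial for $\Delta x_t=\frac{1+\theta L}{1-\rho L}\varepsilon_t$ with $\theta=0.4$, $\rho=0.6$ is just an example):

import numpy as np

theta, rho, J = 0.4, 0.6, 200
a = np.array([1.0] + [(theta + rho) * rho**j for j in range(J)])   # a_0 = 1, a_j = (theta+rho) rho^{j-1}
a1 = a.sum()                                  # a(1): permanent (trend) loading
a_star = -(a1 - np.cumsum(a))                 # a*_j = -(a_{j+1} + a_{j+2} + ...)
recon = a_star - np.concatenate(([0.0], a_star[:-1]))   # (1-L) a*(L) coefficients
recon[0] += a1                                # add a(1) to the j = 0 term
print(np.allclose(recon, a))                  # True: a(L) = a(1) + (1-L) a*(L)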

Impulse-response functions and multipliers.

1. Meaning:
$$dx_t=\mu_t\,dt+dz_t,\qquad\lim_{\Delta\to0}\left(E_{t+\Delta}-E_t\right)x_{t+\Delta}=dz_t$$
— the shock $dz_t$ is the innovation, the surprise change in $x$.


2. The MA representation is an impulse-response function:
$$(E_t-E_{t-1})\,x_{t+j}=a_j\,\varepsilon_t$$
$$\lim_{\Delta\to0}\left(E_{t+\Delta}-E_t\right)x_{t+s}=a(s)\,dz_t,$$
meaning: a shock $dz_t$ moves the forecast of $x_{t+s}$ by $a(s)\,dz_t$.

3. You can read the impact multiplier off the lag polynomial:
$$(E_t-E_{t-1})\,x_t=a_0\,\varepsilon_t=a(0)\,\varepsilon_t$$
$$\lim_{\Delta\to0}\left(E_{t+\Delta}-E_t\right)x_{t+\Delta}=a(0)\,dz_t,\qquad a(0)=\lim_{\lambda\to\infty}\lambda\,\mathcal{L}_a(\lambda),$$
meaning: in $x_t=\int_{s=0}^{\infty}a(s)\,dz_{t-s}$, the loading on the current shock is $a(0)\,dz_t=\left[\lim_{\lambda\to\infty}\lambda\,\mathcal{L}_a(\lambda)\right]dz_t$.

Derivation: take the limit of both sides of
$$\lambda\,\mathcal{L}_a(\lambda)=a(0)+\mathcal{L}_{a'}(\lambda),$$
$$\lim_{\lambda\to\infty}\lambda\,\mathcal{L}_a(\lambda)=a(0)+\lim_{\lambda\to\infty}\mathcal{L}_{a'}(\lambda),$$
and note that
$$\lim_{\lambda\to\infty}\mathcal{L}_{a'}(\lambda)=\lim_{\lambda\to\infty}\int_{s=0}^{\infty}e^{-\lambda s}a'(s)\,ds=0.$$
Informally, for very large $\lambda$ the weight $e^{-\lambda s}$ drops off so quickly that only $a(0)$ survives:
$$\lim_{\lambda\to\infty}\lambda\,\mathcal{L}_a(\lambda)\approx\lim_{\lambda\to\infty}\lambda\,a(0)\int_{s=0}^{\infty}e^{-\lambda s}\,ds=\lim_{\lambda\to\infty}\lambda\,a(0)\,\frac{1}{\lambda}=a(0).$$
Weirdness I don't really understand: $L^s=e^{-sD}$ and $D=-\log(L)$, and since $a_0=a(L)|_{L=0}$ in discrete time, I expected $a(0)=\lim_{\lambda\to\infty}\mathcal{L}_a(\lambda)$, not the extra factor of $\lambda$.

4. Final multiplier:
$$a_\infty=\lim_{j\to\infty}a_j\ \text{should}\ =0$$
$$a(\infty)=\lim_{\lambda\to0}\lambda\,\mathcal{L}_a(\lambda)=\lim_{\lambda\to0}\lambda\int_{s=0}^{\infty}e^{-\lambda s}a(s)\,ds\ \text{should}\ =0.$$

5. Cumulative response of $x$:
$$a(1)=\sum_{j=0}^{\infty}a_j;\qquad\mathcal{L}_a(0)=\int_{s=0}^{\infty}a(s)\,ds.$$


6. For a MA representation of differences,
$$dx_t=\left[\int_{s=0}^{\infty}a(s)\,dz_{t-s}\right]dt+\alpha\,dz_t=\left[\mathcal{L}_a(D)+\alpha\right]Dz_t,$$
the impact multiplier is
$$\lim_{\Delta\to0}\left(E_{t+\Delta}-E_t\right)x_{t+\Delta}=\alpha\,dz_t;$$
the cumulative multiplier is
$$\lim_{\Delta\to0}\left(E_{t+\Delta}-E_t\right)\left[x_t+\int_{s=0}^{\infty}dx_{t+s}\right]=\left[\alpha+\int_{s=0}^{\infty}a(s)\,ds\right]dz_t=\left[\alpha+\mathcal{L}_a(0)\right]dz_t.$$
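A tiny sketch of these multipliers (my own example, using the AR(1) level written in difference form, $dx_t=\left[-\phi\int e^{-\phi s}dz_{t-s}\right]dt+dz_t$, so $\alpha=1$ and $a(s)=-\phi e^{-\phi s}$; $\phi=0.5$ is arbitrary):

import numpy as np
from scipy.integrate import quad

phi, alpha = 0.5, 1.0
La0 = quad(lambda s: -phi * np.exp(-phi * s), 0, np.inf)[0]   # L_a(0) = integral of a(s) ds = -1
print("impact multiplier:", alpha)                            # = 1
print("cumulative multiplier:", alpha + La0)                  # = 0: the level is stationary, shocks die out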

Autoregressive processes. We have the AR(1) process as an example,
$$dx_t+\phi\,x_t\,dt=dz_t$$
$$x_t=\int_{s=0}^{\infty}e^{-\phi s}\,dz_{t-s};\qquad(D+\phi)\,x_t=Dz_t;\qquad x_t=\frac{1}{D+\phi}\,Dz_t.$$
This is what I generalize.

Autoregressive processes in continuous time:
$$dx_t=-\left[\int_{s=0}^{\infty}b(s)\,dx_{t-s}\right]dt+dz_t,$$
or,
$$dx_t=-\left[b(0)\,x_t+\int_{s=0}^{\infty}b'(s)\,x_{t-s}\,ds\right]dt+dz_t,$$
more obviously equivalent in lag notation:
$$\left[1+\mathcal{L}_b(D)\right]Dx_t=Dz_t$$
$$\left[D+b(0)+\mathcal{L}_{b'}(D)\right]x_t=Dz_t.$$
Notice the difference between an autoregressive process — current and lagged $x$'s on the left, the shock $dz_t$ on the right — and a moving-average process,
$$dx_t=\left[\int_{s=0}^{\infty}a(s)\,dz_{t-s}\right]dt+\alpha\,dz_t,$$
as a representation of $dx_t$.

How not to do AR

1.
$$x_t=\int_{s=0}^{\infty}b(s)\,x_{t-s}\,ds+\varepsilon_t;\qquad x_t-\int_{s=0}^{\infty}b(s)\,x_{t-s}\,ds=\varepsilon_t$$
— too smooth.


2. You might be tempted to write an AR(2) as
$$(D+\phi_1)(D+\phi_2)\,x_t=Dz_t.$$
This would not work. Look at the MA representation,
$$x_t=\frac{1}{(D+\phi_1)}\,\frac{1}{(D+\phi_2)}\,Dz_t=\frac{1}{\phi_2-\phi_1}\left[\frac{1}{D+\phi_1}-\frac{1}{D+\phi_2}\right]Dz_t=\frac{1}{\phi_2-\phi_1}\int_{s=0}^{\infty}\left(e^{-\phi_1s}-e^{-\phi_2s}\right)dz_{t-s}.$$
We have $a(0)=0$, so $dx_t$ is nonstochastic — no loading on $dz_t$. The $D^2$ operator takes a second derivative.

3. We do not want the second lag to contract towards the first:
$$x_t=\rho_1\,x_{t-1}+\rho_2\,x_{t-2}+\varepsilon_t$$
$$x_t-x_{t-1}=-(1-\rho_1)\,x_{t-1}+\rho_2\,x_{t-2}+\varepsilon_t.$$
We could take the limit keeping the second lag at a fixed distance $k$,
$$dx_t=\left[-(\phi+\rho_2)\,x_t+\rho_2\,x_{t-k}\right]dt+dz_t,$$
but the tractability is clearly lost. Finite-length AR or MA lag polynomials are no longer a simplification:
$$dx_t=-\left[\int_{s=0}^{k}b(s)\,x_{t-s}\,ds\right]dt+dz_t;\qquad x_t=\int_{s=0}^{k}a(s)\,dz_{t-s}.$$
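A short numerical sketch of the "too smooth" problem (my own example; the values $\phi_1=0.3$, $\phi_2=1.2$, $\eta=0.8$ are arbitrary): the MA kernel of $1/((D+\phi_1)(D+\phi_2))$ starts at zero, while keeping a first-order numerator restores a nonzero impact response.

import numpy as np

phi1, phi2, eta = 0.3, 1.2, 0.8
a_bad = lambda s: (np.exp(-phi1*s) - np.exp(-phi2*s)) / (phi2 - phi1)
# partial fractions: (D+eta)/((D+phi1)(D+phi2)) = A/(D+phi1) + B/(D+phi2)
A = (eta - phi1) / (phi2 - phi1)
B = (phi2 - eta) / (phi2 - phi1)
a_good = lambda s: A*np.exp(-phi1*s) + B*np.exp(-phi2*s)
print("a_bad(0) =", a_bad(0.0), "  a_good(0) =", a_good(0.0))   # 0.0 and 1.0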

How yes to do ARs. Two-component example (AR(2)): add two AR(1) components driven by the same shock,
$$x_t=\left[\frac{\theta_1}{D+\phi_1}+\frac{\theta_2}{D+\phi_2}\right]Dz_t=(\theta_1+\theta_2)\,\frac{D+\eta}{(D+\phi_1)(D+\phi_2)}\,Dz_t,$$
where
$$\eta=\frac{\theta_1\phi_2+\theta_2\phi_1}{\theta_1+\theta_2}.$$
The key to avoiding the oversmoothness problem:
$$x_t=\frac{D+\eta}{(D+\phi_1)(D+\phi_2)}\,Dz_t.$$
Keep the order of the polynomials — numerator one degree below the denominator!


Conversely, partial fractions take you back from the ARMA(2,1) form to the two-component form,
$$\frac{D+\eta}{(D+\phi_1)(D+\phi_2)}=\frac{1}{\phi_2-\phi_1}\left[\frac{\eta-\phi_1}{D+\phi_1}+\frac{\phi_2-\eta}{D+\phi_2}\right],$$
and factoring a general second-order polynomial $(D+\phi_1)(D+\phi_2)-\kappa$ gives roots
$$\lambda=\frac{(\phi_1+\phi_2)\pm\sqrt{(\phi_1-\phi_2)^2+4\kappa}}{2}.$$
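A quick numerical check of the two-component/ARMA(2,1) equivalence (my own sketch; weights $\theta_1=0.7$, $\theta_2=0.3$ and decay rates $\phi_1=0.3$, $\phi_2=1.2$ are illustrative):

import numpy as np

phi1, phi2, th1, th2 = 0.3, 1.2, 0.7, 0.3
eta = (th1*phi2 + th2*phi1) / (th1 + th2)
for lam in [0.1, 1.0, 4.0]:
    lhs = th1/(lam+phi1) + th2/(lam+phi2)
    rhs = (th1+th2) * (lam+eta) / ((lam+phi1)*(lam+phi2))
    print(lam, lhs, rhs)      # the two transforms agree at every lambda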

To HS: Forward-looking operators. Needed when $\|\rho\|>1$ or $\phi<0$.

$\|\rho\|>1$:
$$\frac{1}{1-\rho L}=\frac{-(\rho L)^{-1}}{1-(\rho L)^{-1}}=-\sum_{j=1}^{\infty}(\rho L)^{-j};\qquad x_t=\frac{1}{1-\rho L}\,\varepsilon_t=-\sum_{j=1}^{\infty}\rho^{-j}\varepsilon_{t+j}.$$

$\phi<0$:
$$(D+\phi)\,x_t=Dz_t\quad\Rightarrow\quad x_t=\frac{1}{D+\phi}\,Dz_t=-\int_{s=0}^{\infty}e^{\phi s}\,dz_{t+s}.$$

Hansen-Sargent prediction formulas:
$$E_t\sum_{j=0}^{\infty}\beta^jx_{t+j}=\left[\frac{L\,a(L)-\beta\,a(\beta)}{L-\beta}\right]\varepsilon_t$$
$$E_t\sum_{j=1}^{\infty}\beta^{j-1}x_{t+j}=\left[\frac{a(L)-a(\beta)}{L-\beta}\right]\varepsilon_t$$
$L=0$:
$$(E_t-E_{t-1})\sum_{j=0}^{\infty}\beta^jx_{t+j}=a(\beta)\,\varepsilon_t.$$
Particularly handy: if $a(L)\,x_t=\varepsilon_t$ (an autoregressive representation), then
$$(E_t-E_{t-1})\sum_{j=0}^{\infty}\beta^jx_{t+j}=\left[a(\beta)\right]^{-1}\varepsilon_t$$
— no inverting lag polynomials!

Continuous time:
$$E_t\int_{s=0}^{\infty}e^{-rs}x_{t+s}\,ds=\frac{\mathcal{L}_a(D)-\mathcal{L}_a(r)}{r-D}\,Dz_t.$$
The innovation version takes $D\to\infty$, the analogue of $L=0$:
$$\lim_{\Delta\to0}\left(E_{t+\Delta}-E_t\right)\int_{s=0}^{\infty}e^{-rs}x_{t+s}\,ds=\lim_{D\to\infty}\left[D\,\frac{\mathcal{L}_a(D)-\mathcal{L}_a(r)}{r-D}\right]dz_t=\mathcal{L}_a(r)\,dz_t.$$
Again, you don't need the whole partial-fractions apparatus, etc.

1. AR(1): $a(L)=(1-\rho L)^{-1}$.


$$E_t\sum_{j=0}^{\infty}\beta^jx_{t+j}=\left[\frac{\dfrac{L}{1-\rho L}-\dfrac{\beta}{1-\rho\beta}}{L-\beta}\right]\varepsilon_t=\frac{1}{(1-\rho\beta)}\,\frac{1}{(1-\rho L)}\,\varepsilon_t=\frac{1}{1-\rho\beta}\sum_{j=0}^{\infty}\rho^j\varepsilon_{t-j}=\frac{x_t}{1-\rho\beta}.$$
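A Monte Carlo sketch of this AR(1) result (my own illustration; $\rho=0.9$, $\beta=0.95$, the starting value, and the truncation horizon are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
rho, beta, x0, J, N = 0.9, 0.95, 2.0, 400, 20000
x = np.full(N, x0)          # start every path from the same x_t
pv = np.zeros(N)
for j in range(J):
    pv += beta**j * x       # accumulate beta^j x_{t+j}
    x = rho * x + rng.normal(size=N)
print("Monte Carlo:", pv.mean(), "  formula:", x0 / (1 - rho*beta))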

2. Let's do the AR(1) example in continuous time.
$$(D+\phi)\,x_t=Dz_t\quad\Rightarrow\quad\mathcal{L}_a(D)=\frac{1}{D+\phi},\qquad\mathcal{L}_a(r)=\frac{1}{r+\phi}.$$
$$E_t\int_{s=0}^{\infty}e^{-rs}x_{t+s}\,ds=\frac{\mathcal{L}_a(D)-\mathcal{L}_a(r)}{r-D}\,Dz_t=\frac{1}{r-D}\left(\frac{1}{D+\phi}-\frac{1}{r+\phi}\right)Dz_t=\frac{1}{r-D}\,\frac{r-D}{(D+\phi)(r+\phi)}\,Dz_t$$
$$=\frac{1}{r+\phi}\,\frac{1}{D+\phi}\,Dz_t=\frac{1}{r+\phi}\int_{s=0}^{\infty}e^{-\phi s}\,dz_{t-s}=\frac{x_t}{r+\phi}.$$
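The same check in continuous time (my own sketch; parameters $\phi=0.5$, $r=0.04$, $x_t=2$ are illustrative), using $E_tx_{t+s}=e^{-\phi s}x_t$ and quadrature:

import numpy as np
from scipy.integrate import quad

phi, r, x_t = 0.5, 0.04, 2.0
pv, _ = quad(lambda s: np.exp(-r*s) * np.exp(-phi*s) * x_t, 0, np.inf)
print(pv, x_t / (r + phi))   # the two numbers should agree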

3. Differences: if
$$dx_t=\left[\int_{s=0}^{\infty}a(s)\,dz_{t-s}\right]dt+\alpha\,dz_t=\left[\alpha+\mathcal{L}_a(D)\right]Dz_t,$$
then $dx_t$ has transform $\alpha+\mathcal{L}_a(D)$ just as $x_t$ has $\mathcal{L}_a(D)$, so the same formula gives
$$E_t\int_{s=0}^{\infty}e^{-rs}\,dx_{t+s}=\frac{\left[\alpha+\mathcal{L}_a(D)\right]-\left[\alpha+\mathcal{L}_a(r)\right]}{r-D}\,Dz_t.$$

Habit/durable problem:
$$\max_{\{c_t\}}\;-\frac{1}{2}\,E\int_{t=0}^{\infty}e^{-\delta t}\left(c^*-c_t-b\int_{s=0}^{\infty}a(s)\,c_{t-s}\,ds\right)^2dt=-\frac{1}{2}\,E\int_{t=0}^{\infty}e^{-\delta t}\left(c^*-\left[1+b\,\mathcal{L}_a(D)\right]c_t\right)^2dt.$$

1. Specific example (the paper does the double exponential):
$$\max_{\{c_t\}}\;-\frac{1}{2}\,E\int_{t=0}^{\infty}e^{-\delta t}\left(c^*-c_t-b\,x_t\right)^2dt,\qquad x_t=\int_{s=0}^{\infty}\eta e^{-\eta s}\,c_{t-s}\,ds,\qquad\mathcal{L}_a(D)=\frac{\eta}{D+\eta},$$
with $c^*$ a bliss point, $b<0$ a habit, $b>0$ durability.


2. The resource constraint is
$$dk_t=\left(r\,k_t+e_t-c_t\right)dt;\qquad de_t=-\phi\,e_t\,dt+dz_t.$$

3. Present value resource constraint:
$$(D-r)\,k_t=e_t-c_t$$
$$k_t=\frac{1}{D-r}\left(e_t-c_t\right)=E_t\int_{s=0}^{\infty}e^{-rs}c_{t+s}\,ds-E_t\int_{s=0}^{\infty}e^{-rs}e_{t+s}\,ds$$
$$E_t\int_{s=0}^{\infty}e^{-rs}c_{t+s}\,ds=k_t+E_t\int_{s=0}^{\infty}e^{-rs}e_{t+s}\,ds=k_t+\frac{e_t}{r+\phi}.$$

4. FOC. A unit of consumption at $t$ raises the service flow $\left[1+b\,\mathcal{L}_a(D)\right]c$ one-for-one today and by $b\,a(s)$ at $t+s$, so with the interest rate equal to the subjective discount rate the first-order condition is
$$\left(c^*-\left[1+b\,\mathcal{L}_a(D)\right]c_t\right)+b\,E_t\int_{s=0}^{\infty}e^{-rs}a(s)\left(c^*-\left[1+b\,\mathcal{L}_a(D)\right]c_{t+s}\right)ds-\lambda_t=0,$$
where $\lambda_t$, the marginal value of wealth, is a martingale. The solution has the service flow — and hence marginal utility — following a random walk. I.e., an autoregressive process for consumption,
$$c_t=-b\int_{s=0}^{\infty}a(s)\,c_{t-s}\,ds+\Lambda_t.$$
In operator notation,
$$\left[1+b\,\mathcal{L}_a(D)\right]c_t=\Lambda_t,$$
with $\Lambda_t$ a martingale.

5. The resource constraint determines $\Lambda_t$. Use the HS formula. From the FOC,
$$c_t=\frac{1}{1+b\,\mathcal{L}_a(D)}\,\Lambda_t,$$
so the innovation in the present value of consumption is
$$\left(E_{t+\Delta}-E_t\right)\int_{s=0}^{\infty}e^{-rs}c_{t+s}\,ds=\frac{1}{r}\,\frac{1}{1+b\,\mathcal{L}_a(r)}\,d\Lambda_t.$$
Differentiating the resource constraint,
$$E_t\int_{s=0}^{\infty}e^{-rs}c_{t+s}\,ds=k_t+\frac{e_t}{r+\phi}\quad\Rightarrow\quad\left(E_{t+\Delta}-E_t\right)\int_{s=0}^{\infty}e^{-rs}c_{t+s}\,ds=\frac{1}{r+\phi}\,dz_t.$$
Match the terms:
$$\frac{1}{r}\,\frac{1}{1+b\,\mathcal{L}_a(r)}\,d\Lambda_t=\frac{1}{r+\phi}\,dz_t\quad\Rightarrow\quad d\Lambda_t=\frac{r}{r+\phi}\left[1+b\,\mathcal{L}_a(r)\right]dz_t.$$
So, the consumption process:
$$d\left[c_t+b\int_{s=0}^{\infty}\eta e^{-\eta s}c_{t-s}\,ds\right]=\frac{r}{r+\phi}\left[1+\frac{b\eta}{r+\eta}\right]dz_t.$$


$$\left[1+b\,\mathcal{L}_a(D)\right]dc_t=\frac{r}{r+\phi}\left[1+b\,\mathcal{L}_a(r)\right]dz_t$$
...(algebra)...
$$dc_t=b\,\eta\left(x_t-c_t\right)dt+\frac{r}{r+\phi}\left[1+\frac{b\eta}{r+\eta}\right]dz_t.$$
Note you drift back to habits, away from the durables stock.

6. Asset pricing. The price of the consumption stream is

    Note you drift back to habits, away from durables stock.6. Asset pricing. The price of the consumption stream is

    =

    Z

    =0

    [1 + L()] + [1 + L()]

    +

    MU is a random walk

    [1 + L()] + = [1 + L()] +

    + [1 + L()]

    Z=0

    +

    so the pricing formula is

    = Z

    =0

    [1 + L()]

    + (1 + L()) R=0 + [1 + L()]

    +

    =

    Z

    =0+

    + [1 + L()]

    [1 + L()]

    Z

    =0

    Z=0

    +

    +

Now, for any consumption process
$$c_{t+s}=\int_{u=0}^{\infty}g(u)\,dz_{t+s-u}+\ldots$$
(terms known at $t$ drop out against the mean-zero shock sum),
$$E_t\left[\left(\int_{u=0}^{s}dz_{t+u}\right)c_{t+s}\right]=\sigma^2\int_{\tau=0}^{s}g(\tau)\,d\tau,$$
so
$$E_t\int_{s=0}^{\infty}e^{-rs}\left(\int_{u=0}^{s}dz_{t+u}\right)c_{t+s}\,ds=\sigma^2\int_{s=0}^{\infty}e^{-rs}\int_{\tau=0}^{s}g(\tau)\,d\tau\,ds=\sigma^2\int_{\tau=0}^{\infty}g(\tau)\int_{s=\tau}^{\infty}e^{-rs}\,ds\,d\tau=\frac{\sigma^2}{r}\int_{\tau=0}^{\infty}e^{-r\tau}g(\tau)\,d\tau=\frac{\sigma^2}{r}\,\mathcal{L}_g(r).$$

Using our consumption process, the MA kernel $g(s)$ of $c_t$ with respect to $dz_t$ has transform
$$\mathcal{L}_g(D)=\frac{r}{r+\phi}\,\frac{1+b\,\mathcal{L}_a(r)}{D\left[1+b\,\mathcal{L}_a(D)\right]},$$
so
$$\mathcal{L}_g(r)=\frac{1}{r+\phi}\,\frac{1+b\,\mathcal{L}_a(r)}{1+b\,\mathcal{L}_a(r)}=\frac{1}{r+\phi}.$$
So finally, evaluating the marginal-utility denominator at a steady state with consumption normalized to one, $\left[1+b\,\mathcal{L}_a(D)\right]c_t\to1+b\,\mathcal{L}_a(0)$, prices are
$$p_t=E_t\int_{s=0}^{\infty}e^{-rs}c_{t+s}\,ds-\frac{r}{r+\phi}\,\frac{1+b\,\mathcal{L}_a(r)}{1+b\,\mathcal{L}_a(0)}\,\frac{\sigma^2}{r}\,\mathcal{L}_g(r)=E_t\int_{s=0}^{\infty}e^{-rs}c_{t+s}\,ds-\frac{1+b\,\mathcal{L}_a(r)}{1+b\,\mathcal{L}_a(0)}\,\frac{\sigma^2}{(r+\phi)^2}.$$


With $\mathcal{L}_a(r)=\eta/(r+\eta)$ and $\mathcal{L}_a(0)=1$,
$$p_t=E_t\int_{s=0}^{\infty}e^{-rs}c_{t+s}\,ds-\frac{1+\dfrac{b\eta}{r+\eta}}{1+b}\,\frac{\sigma^2}{(r+\phi)^2}.$$
The price discount (expected return premium) rises as a function of consumption relative to the habit/durable stock, i.e.
$$p_t=E_t\int_{s=0}^{\infty}e^{-rs}c_{t+s}\,ds-\left[1+\frac{b\eta}{r+\eta}\right]\frac{1}{u'\!\left(\left[1+b\,\mathcal{L}_a(D)\right]c_t\right)}\,\frac{\sigma^2}{(r+\phi)^2}.$$