
Equilibrium Dynamics and Difference Equations*

Craig Burnside, Duke University

September 18, 2010

1 Equilibrium Dynamics

When we studied the generic dynamic programming problem in Stokey and Lucas we obtained the following first-order and envelope conditions

F_y(x_t, x_{t+1}) + β v'(x_{t+1}) = 0   (1)

v'(x_t) = F_x(x_t, x_{t+1})   (2)

We can combine the first-order and envelope conditions to get the Euler equation

F_y(x_t, x_{t+1}) + β F_x(x_{t+1}, x_{t+2}) = 0   (3)

We also derived the transversality condition

lim_{t→∞} β^t F_x(x_t, x_{t+1}) · x_t = 0.   (4)

In this section we are concerned with the following question: how can we use equations (1) and (2), and possibly (3) or (4), to characterize the policy function h?

As Stokey and Lucas point out, in general we cannot say much, as the theorem of Boldrin and Montrucchio (see Thm. 6.1 of Stokey and Lucas) states that just about any smooth h can be obtained as the policy function of some dynamic programming problem.

One way to characterize the dynamics induced by h is to use first-order approximations to the model in the neighborhood of a steady state. We will take this approach, which is fundamentally local, in subsequent sections. In this section I consider a different approach, which is to establish global stability of equilibrium dynamics around steady states.

Definition: A point x̄ is globally stable if for all x_0 ∈ X and {x_t}_{t=0}^∞ such that x_{t+1} = h(x_t), lim_{t→∞} x_t = x̄.

*Thanks to Rosen Valchev for pointing out a typo in the previous version of these notes.


You can soon see how hard proving global stability will be when you realize that even proving the existence of a unique steady state is difficult. It is quite obvious that a necessary condition for a point to be a steady state is that (3) hold for x_t = x_{t+1} = x_{t+2} = x̄. Stokey and Lucas also point out that this is a sufficient condition if F is concave. Uniqueness of the steady state, on the other hand, is more elusive, but is a necessary condition for global stability. One approach to establishing global stability is to use a Liapounov function, if one is available for the problem in question. Lemma 4.2 in Stokey and Lucas shows how these functions can be useful.

Lemma: Let X ⊆ R^l be compact and let h : X → X be continuous with x̄ = h(x̄) for some x̄. If there exists a continuous function L : X → R such that

(a) L(x) ≤ 0, with equality iff x = x̄,

(b) L[h(x)] ≥ L(x), with equality iff x = x̄,

then x̄ is a globally stable solution to (1) and (2).

You should consult the proof of this theorem in Stokey and Lucas. The problem, in many cases, is that it is not obvious how to form L, nor whether an L exists. Stokey and Lucas argue that one way to proceed is to check whether the following form of L satisfies the two conditions of the Lemma:

L(x) = (x − x̄) · [v'(x) − v'(x̄)].   (5)

The reason this form of function is worth trying is that it automatically satisfies the Lemma's condition (a) when v is strictly concave.¹ It is then a matter of trying to verify (b). This turns out to be much harder. Stokey and Lucas motivate it as follows. They begin by noting that for any pair of points (x, y), (x', y'), the strict concavity of F implies²

(x − x') · [F_x(x, y) − F_x(x', y')] + (y − y') · [F_y(x, y) − F_y(x', y')] ≤ 0

with equality only when the two points are the same. We can therefore let (x, y) be (x, h(x)) and (x', y') be (x̄, x̄), so that

(x − x̄) · {F_x[x, h(x)] − F_x(x̄, x̄)} + [h(x) − x̄] · {F_y[x, h(x)] − F_y(x̄, x̄)} ≤ 0.   (6)

The optimality conditions imply that v'(x) = F_x[x, h(x)] and F_y[x, h(x)] = −β v'[h(x)], hence

(x − x̄) · [v'(x) − v'(x̄)] − β [h(x) − x̄] · {v'[h(x)] − v'(x̄)} ≤ 0.   (7)

¹The strict concavity of v implies that v'(x) < v'(x̄) for x > x̄ and v'(x) > v'(x̄) for x < x̄, hence (a) is satisfied.

²We will usually need to assume F is concave to prove that v is concave. Also, the result holds for the same reason as given in the previous footnote.


Now notice that

L[h(x)] − L(x) = [h(x) − x̄] · {v'[h(x)] − v'(x̄)} − (x − x̄) · [v'(x) − v'(x̄)].

Obviously we could use (7) to show that L[h(x)] − L(x) ≥ 0 if β were equal to 1, but it is not. So we cannot get a generic result that will work for any concave F and v, but we are very close. If we could somehow start from

β(x − x̄) · {F_x[x, h(x)] − F_x(x̄, x̄)} + [h(x) − x̄] · {F_y[x, h(x)] − F_y(x̄, x̄)} ≤ 0   (8)

instead of (6), we would have the desired result that L[h(x)] − L(x) ≥ 0. So on a case-by-case basis one could try to show that (8) holds in order to demonstrate that the L given in (5) is a Liapounov function.

In the end, I offer two proofs of the global stability of the dynamics of the neoclassical model. One of these is from Stokey and Lucas, while the other is based on Brock and Mirman (1972). Both proofs start by working out the steady state of the model.

Steady State  Here I conjecture that the model has a steady state in which c_t = c* and k_t = k* for all t. The Euler equation and resource constraint are

u'[g(k_t) − k_{t+1}] = β u'[g(k_{t+1}) − k_{t+2}] g'(k_{t+1})

c_t + k_{t+1} = g(k_t).

The steady state in which k_t = k_{t+1} = k_{t+2} = k* is the solution to

1 = β g'(k*)   (9)

c* = g(k*) − k*.   (10)

We can solve (9) for k* once we note that g'(k) = f'(k) + 1 − δ. If we rewrite (9) in terms of the rate of time preference, ρ = β^{−1} − 1, we have f'(k*) = ρ + δ. Since ρ > 0, 0 < δ ≤ 1, f'(k) > 0, f''(k) < 0, lim_{k→0} f'(k) = ∞ and lim_{k→∞} f'(k) = 0, there is a unique, positive, finite solution to (9). Once we have solved for k*, notice that we can easily obtain c* from (10). We need to verify that the solution for c* is positive. This requires verifying that g(k*) − k* = f(k*) − δk* > 0. Let k̄ be the unique positive point at which f(k̄) = δk̄. That such a point exists follows from the fact that f is increasing, concave, has infinite slope at 0, and 0 slope at ∞. Obviously f'(k̄) < δ, so the point at which f'(k) = δ lies to the left of k̄. Therefore k* must also lie between 0 and k̄, since f'(k*) = ρ + δ > δ.
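As a concrete check on (9) and (10), the steady state can be computed numerically. The sketch below assumes a Cobb-Douglas technology f(k) = k^α; the parameter values (α, β, δ) are illustrative choices of mine, not from the notes.

```python
# Steady state of the neoclassical model for an illustrative Cobb-Douglas
# technology f(k) = k^alpha, with g(k) = f(k) + (1 - delta)*k.

alpha = 0.36   # capital share (assumed)
beta = 0.96    # discount factor, so rho = 1/beta - 1
delta = 0.08   # depreciation rate (assumed)

rho = 1.0 / beta - 1.0

def f_prime(k):
    return alpha * k ** (alpha - 1.0)

# Solve f'(k) = rho + delta by bisection: f' is strictly decreasing, runs
# from +infinity at k = 0 to 0 as k -> infinity, so the root is unique.
lo, hi = 1e-8, 1e8
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f_prime(mid) > rho + delta:
        lo = mid   # f' still too high: the steady state lies to the right
    else:
        hi = mid
k_star = 0.5 * (lo + hi)

# Closed form for comparison: k* = (alpha/(rho + delta))^(1/(1 - alpha))
k_closed = (alpha / (rho + delta)) ** (1.0 / (1.0 - alpha))

# Steady-state consumption from the resource constraint (10)
c_star = k_star ** alpha - delta * k_star

print(k_star, k_closed, c_star)
```

With these parameters the bisection and the closed form agree, and c* comes out positive, as the argument above requires.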


Stokey and Lucas' Proof  Under Assumptions 4.3, 4.4, 4.7 and 4.8 (which are easily shown to be true for the growth model), Theorem 4.8 establishes that v is strictly concave and h(k) is continuous. We also want to show that h is increasing. If you make enough regularity assumptions to ensure that the optimal k' is always given by an interior solution where the first-order condition holds, then:

u'[g(k) − h(k)] = β v'[h(k)].

Pick k_0, k_1 ∈ K⁺ = (0, k̄] with k_1 > k_0. We want to prove that h(k_1) > h(k_0). Assume the contrary, so let h(k_1) ≤ h(k_0). Then since v is strictly concave

u'[g(k_0) − h(k_0)] = β v'[h(k_0)] ≤ β v'[h(k_1)] = u'[g(k_1) − h(k_1)].

And since u is strictly concave this means

g(k_0) − h(k_0) ≥ g(k_1) − h(k_1),

implying

g(k_0) − g(k_1) ≥ h(k_0) − h(k_1) ≥ 0.

But since g is strictly increasing this means k_0 ≥ k_1, which is a contradiction.

With this result we can proceed to finish the proof. For any k ∈ K⁺, the strict concavity of v implies

[k − h(k)] {v'(k) − v'[h(k)]} ≤ 0   (11)

with equality only at k = h(k), i.e. only at the steady state.³ The first-order condition of the model states that u'[g(k) − h(k)] = β v'[h(k)] and the envelope condition states that v'(k) = u'[g(k) − h(k)] g'(k). I will use the shorthand u'(c) to indicate u'[g(k) − h(k)]. If we substitute these conditions into (11) we get

[k − h(k)] [u'(c) g'(k) − u'(c)/β] ≤ 0.   (12)

Since u'(c) is strictly positive everywhere, we can take it out as a common factor in (12) and preserve the inequality:

[k − h(k)] [g'(k) − 1/β] ≤ 0,   (13)

which, again, only holds with equality if k = h(k). So for k ≠ k* we have

[k − h(k)] [g'(k) − 1/β] < 0.   (14)

³This looks like the Liapounov function from equation (5), but Stokey and Lucas' proof does not use a Liapounov argument.


Since g'(k*) = β^{−1} and g is concave, we know that g'(k) − 1/β > 0 for k < k* and g'(k) − 1/β < 0 for k > k*. So if k < k*, (14) tells us that h(k) > k. We also know that if k > k* then h(k) < k. With this result and the fact that h(k) is increasing, Stokey and Lucas are able to draw their Figure 6.1 [equivalent to my Figure 2 below], and this guarantees the global stability of the model's dynamics in K⁺.

Brock and Mirman's Proof  Take as given the regularity assumptions made in the previous section and the results that v is concave and h is increasing. Next we can show that the policy function for consumption, c(k_t) = g(k_t) − h(k_t), is increasing. Pick some k_0 < k_1. Our second result means h(k_0) < h(k_1). This, in turn, implies that v'[h(k_0)] > v'[h(k_1)]. And, from the first-order condition, this means

u'[c(k_0)] = u'[g(k_0) − h(k_0)] > u'[g(k_1) − h(k_1)] = u'[c(k_1)].

From the concavity of u this means c(k_0) < c(k_1).

Now define

A(k) ≡ u'[c(k)]   (15)

B(k) ≡ β A(k) g'(k).   (16)

We need to plot A(k) and B(k) against k. To do this, notice that B(k) > A(k) for k < k*, since g'(k) > β^{−1} for k < k*. Similarly, B(k*) = A(k*) and B(k) < A(k) for k > k*. Also, both functions are downward sloping. To see this, notice that

A'(k) = u''[c(k)] c'(k) < 0

since u'' < 0 and c' > 0. Also

B'(k) = β A'(k) g'(k) + β A(k) g''(k) < 0

since A > 0, A' < 0, g' > 0 and g'' < 0. The nonnegativity constraint on k' given k also implies that lim_{k→0} c(k) = 0, so both functions asymptote towards ∞ as k goes to zero.

Thus we can draw A(k) and B(k) as in Figure 1.

Now consider the dynamics of the capital stock starting from an arbitrary k_0 < k*. Notice that the optimality conditions require that u'(c_0) = β u'(c_1)[f'(k_1) + 1 − δ], or, equivalently, that

A(k_0) = B(k_1).


FIGURE 1
The Transition Dynamics of the Neoclassical Model: The Functions A and B
[Figure: A(k) and B(k) plotted against k, both downward sloping and crossing at k*, with the transition steps A(k_0) = B(k_1) and A(k_1) = B(k_2) marked.]

From Figure 1, this implies that k_0 < k_1 < k*. If k_0 > k* we have the opposite implication, that k* < k_1 < k_0. As we can see in the diagram, the dynamics clearly mean that k_t converges to the steady state at k*. It is also clear that since c'(k) > 0, if k_0 < k* then c_0 < c_1 < c*, if k_0 > k* then c* < c_1 < c_0, and that c_t converges to its steady state value.

Figure 2 shows the results we have obtained regarding h(k). First, it is increasing. Second, it is above the 45-degree line for k < k* and below it for k > k*. Also, we know that lim_{k→0} h(k) must be zero since lim_{k→0} c(k) = 0 and lim_{k→0} g(k) = 0.
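These properties of h(k) can be verified in the one specification with a known closed-form policy: log utility, Cobb-Douglas production and full depreciation (the Brock-Mirman case), where h(k) = αβk^α. The snippet below is only an illustration of that special case; the parameter values are arbitrary.

```python
# Brock-Mirman special case: u(c) = log(c), f(k) = k^alpha, delta = 1,
# with known policy h(k) = alpha*beta*k^alpha. Check that h is increasing,
# lies above the 45-degree line for k < k* and below it for k > k*, and
# that iterating k_{t+1} = h(k_t) converges monotonically to k*.

alpha, beta = 0.3, 0.95

def h(k):
    return alpha * beta * k ** alpha

k_star = (alpha * beta) ** (1.0 / (1.0 - alpha))  # fixed point h(k*) = k*

# h is increasing on a grid
grid = [0.01 * i for i in range(1, 100)]
increasing = all(h(a) < h(b) for a, b in zip(grid, grid[1:]))

# Above the 45-degree line below k*, below it above k*
above = h(0.5 * k_star) > 0.5 * k_star
below = h(2.0 * k_star) < 2.0 * k_star

# Monotone convergence from k_0 < k* (20 steps; the gap shrinks by a
# factor of about alpha each step)
k = 0.1 * k_star
path = [k]
for _ in range(20):
    k = h(k)
    path.append(k)
monotone = all(a < b for a, b in zip(path, path[1:]))

print(increasing, above, below, monotone)
```

The staircase path in Figure 2 is exactly what this iteration traces out.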

A Phase Diagram  Another diagram that is often used to illustrate dynamics is a phase diagram. A phase diagram for the system of difference equations

u'(c_t) = β u'(c_{t+1}) g'(k_{t+1})   (17)

c_t + k_{t+1} = g(k_t)   (18)

plots c_t against k_t, the two variables governed by the difference equations. The phase diagram illustrates what would happen to c_t and k_t if we began from arbitrary values of k_0 and c_0 (as opposed to the optimal c_0 given k_0) and used the system of difference equations to solve for (k_1, c_1), then (k_2, c_2), etc. Notice that if you know c_t and k_t, then you can solve (18) for k_{t+1} and then solve (17) for c_{t+1}.
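This forward iteration (solve (18) for k_{t+1}, then (17) for c_{t+1}) can be sketched in code. I specialize, as an assumption of mine, to log utility with full depreciation, so u'(c) = 1/c and g(k) = k^α; in that case the saddle path c = (1 − αβ)k^α is known in closed form. Starting on it, the iterates head to the steady state; starting slightly above it, the path eventually becomes infeasible.

```python
# Forward iteration of the phase system: solve (18) for k_{t+1}, then (17)
# for c_{t+1}. Illustrative case: u(c) = log(c), delta = 1, g(k) = k^alpha,
# so (17) becomes c_{t+1} = beta*alpha*k_{t+1}^(alpha - 1)*c_t.

alpha, beta = 0.3, 0.95

def step(k, c):
    """One step of the system; returns None if the path becomes infeasible."""
    k_next = k ** alpha - c                                # (18)
    if k_next <= 0.0:
        return None
    c_next = beta * alpha * k_next ** (alpha - 1.0) * c    # (17)
    return k_next, c_next

k_star = (alpha * beta) ** (1.0 / (1.0 - alpha))

# Start on the known saddle path: c = (1 - alpha*beta)*k^alpha.
k, c = 0.5 * k_star, (1.0 - alpha * beta) * (0.5 * k_star) ** alpha
for _ in range(15):   # few steps only: the saddle is numerically unstable
    k, c = step(k, c)
on_path_converges = abs(k - k_star) < 1e-6

# Start slightly above the saddle path: the trajectory eventually implies
# k_{t+1} <= 0, i.e. it leaves the feasible set.
state = (0.5 * k_star, 1.05 * (1.0 - alpha * beta) * (0.5 * k_star) ** alpha)
becomes_infeasible = False
for _ in range(500):
    state = step(*state)
    if state is None:
        becomes_infeasible = True
        break

print(on_path_converges, becomes_infeasible)
```

The short horizon on the saddle-path run is deliberate: rounding errors load onto the unstable root of the system and would eventually push even the exact policy off the saddle.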


FIGURE 2
The Transition Dynamics of the Neoclassical Model: The Policy Function
[Figure: k_{t+1} = h(k_t) plotted against k_t along with the 45-degree line; h is increasing, lies above the 45-degree line for k_t < k* and below it for k_t > k*, and the staircase path k_0, k_1, k_2, ... converges to k*.]

Usually we begin with a diagram such as Figure 3 that demarcates regions in which c_t and k_t will be either increasing or decreasing. To draw this diagram, we ask the following questions:

1. When is k_{t+1} > k_t? From (18) we can see that k_{t+1} > k_t whenever c_t < g(k_t) − k_t.

2. When is c_{t+1} > c_t? Notice that c_{t+1} > c_t implies u'(c_{t+1}) < u'(c_t). This, in turn, using (17), implies g'(k_{t+1}) > β^{−1} and, hence, k_{t+1} < k*. Unfortunately, we would like a condition that characterizes whether c_{t+1} exceeds c_t in terms of (k_t, c_t), not in terms of k_{t+1}. Notice, however, from (18), that k_{t+1} < k* is equivalent to c_t > g(k_t) − k*.

To summarize, if c_t < g(k_t) − k_t, then k_t is increasing, and if c_t > g(k_t) − k*, then c_t is increasing. So our diagram is divided into four regions: A (where c_t and k_t both increase), B (where c_t increases but k_t decreases), C (where c_t and k_t both decrease), and D (where c_t decreases but k_t increases). Hence the directions of the little arrows I have drawn in the diagram.

We know that if c_t is chosen optimally, the path of (k_t, c_t) will lie in region A or C. Why? Because in the previous section we showed that if k_0 < k*, the capital stock and consumption increase monotonically to their steady state values (this only happens in region A). On the other hand, if k_0 > k*, the capital stock and consumption decrease monotonically to their steady state values (this only happens in region C).


FIGURE 3
Constructing the Phase Diagram for the Neoclassical Model, Step 1
[Figure: the (k_t, c_t) plane divided into regions A, B, C and D by the curves c = g(k) − k* and c = g(k) − k, which cross at the steady state (k*, c*), with arrows indicating the direction of motion in each region.]

Obviously this means that one of the paths for (k_t, c_t) (these are called phase trajectories), which are illustrated in Figure 4, corresponds to c_t = g(k_t) − h(k_t) and converges towards the steady state. How did I draw in the other paths? The reason the paths in Figure 4 are not smooth is that we are illustrating the dynamics of a difference equation, not a differential equation, which would have smooth phase trajectories.

Paths Starting in B  Suppose our arbitrary choice of (k_0, c_0) is in B. Clearly, the path of subsequent values of (k_t, c_t) moves up and to the left (NW). But we can also show that the path will eventually violate the nonnegativity constraint on k_t or make c_t undefined. To see this, notice that (18) implies k_{t+1} − k_t = g(k_t) − k_t − c_t. The size of the decrease in the capital stock is equal to the distance c_t is above the curve g(k_t) − k_t. Clearly that distance increases as we move along a path that leads to the NW, so the decreases in k_t become larger at each step. This implies that eventually the rule for solving for k_{t+1} will imply k_{t+1} < 0 (this happens if c_t > g(k_t)), violating feasibility, or k_{t+1} = 0 (this happens if c_t = g(k_t)), making it impossible to solve for the value of c_{t+1}.


FIGURE 4
The Phase Diagram for the Neoclassical Model
[Figure: phase trajectories in the (k_t, c_t) plane. The saddle path c_t = g(k_t) − h(k_t) converges to the steady state (k*, c*); paths above it move NW into region B, paths below it move SE into region D.]

Paths Starting in D  If our arbitrary choice of (k_0, c_0) is in D, the path of subsequent values of (k_t, c_t) clearly moves down and to the right (SE). However, we can also show that any path that starts in D must converge towards k_t = k̄, c_t = 0. To see this, notice first that any path starting in D cannot escape into regions A and B, because those regions cannot be reached by SE movement that begins inside D. But we can also show that the dynamics cannot escape into region C or the boundary between C and D. To see this, imagine a (k_t, c_t) pair in D. Define k̄_t as the largest solution to c_t = g(k̄_t) − k̄_t. Notice that since (k_t, c_t) ∈ D, k̄_t > k_t. But I can also show that k̄_t > k_{t+1}. To see this notice that

k_{t+1} < k̄_t ⟺ g(k_t) − c_t < k̄_t
⟺ g(k_t) − [g(k̄_t) − k̄_t] < k̄_t
⟺ g(k_t) < g(k̄_t),

which is clearly true since k_t < k̄_t. With k_t < k_{t+1} < k̄_t and c_{t+1} < c_t we have the result that (k_{t+1}, c_{t+1}) lies inside D. The reason the dynamics must converge to (k̄, 0) is that they always move to the SE (so they cannot stall anywhere inside D) and, because the next solution for k_{t+1} is always positive, the next solution for c_{t+1} will always be positive as well, so the dynamics cannot cross the horizontal axis.
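The convergence of D-paths to (k̄, 0) can be illustrated numerically. I again use the log-utility, full-depreciation specification (u(c) = log c, g(k) = k^α, so that g(k̄) = k̄ gives k̄ = 1); this specification and the starting point are my choices, not the notes'.

```python
# A path starting in region D: k_t rises toward k_bar and c_t falls to 0.
# Illustrative specification: u(c) = log(c), delta = 1, g(k) = k^alpha,
# so k_bar = 1 (the positive solution of g(k) = k).

alpha, beta = 0.3, 0.95

k, c = 0.3, 0.2   # in D: c < g(k) - k = 0.3**0.3 - 0.3 (about 0.40)
ks, cs = [k], [c]
for _ in range(200):
    k_next = k ** alpha - c                                # (18)
    c_next = beta * alpha * k_next ** (alpha - 1.0) * c    # (17)
    k, c = k_next, c_next
    ks.append(k)
    cs.append(c)

# k_t never falls (it stalls at k_bar = 1 once converged in floating
# point), and c_t strictly decreases toward 0.
k_monotone = all(a <= b for a, b in zip(ks, ks[1:]))
c_monotone = all(a > b for a, b in zip(cs, cs[1:]))
print(k_monotone, c_monotone, ks[-1], cs[-1])
```

The simulated path behaves exactly as the argument above predicts: it stays in D, drifts SE, and piles up at (k̄, 0).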

An interesting fact about paths that begin in D is that they violate the transversality condition of the model because they all converge to (k̄, 0). (Here you should refer to the notes on dynamic programming where transversality conditions are discussed.) The transversality condition is

lim_{t→∞} β^t u'(c_t) g'(k_t) k_t = 0.

Notice that for any path that begins in D there is some finite T for which k̲ < k_t < k̄ for t ≥ T, where k̲ is the point at which g(k) − k is maximized, i.e. g'(k̲) = 1. For any t > T we have

u'(c_t) = u'(c_{t−1}) / [β g'(k_t)]
= u'(c_{t−2}) / [β² g'(k_t) g'(k_{t−1})]
= ⋯ = u'(c_T) / [β^{t−T} g'(k_t) g'(k_{t−1}) ⋯ g'(k_{T+1})].

Since k_t > k̲ for t ≥ T, we have g'(k_t) < g'(k̲) = 1 for t > T. Hence u'(c_t) > β^{−(t−T)} u'(c_T) for t > T, which means β^t u'(c_t) > β^T u'(c_T) for t > T. Therefore β^t u'(c_t) g'(k_t) k_t > β^T u'(c_T) g'(k̄) k̲ for t > T. Hence

lim_{t→∞} β^t u'(c_t) g'(k_t) k_t ≥ β^T u'(c_T) g'(k̄) k̲ ≠ 0.

Paths Starting in A  If we start with a point (k_0, c_0) ∈ A there are three possibilities: the subsequent values of (k_t, c_t) (i) could stay in A, (ii) move into B, or (iii) move into D. We can easily eliminate the fourth possibility that the path moves into C: since c_t > g(k_t) − k* for any point in A, this means k_{t+1} = g(k_t) − c_t < k*, implying (k_{t+1}, c_{t+1}) ∉ C.

One path that clearly lies in A, and stays in A, is the optimal path along which c_t = g(k_t) − h(k_t). We can eliminate the possibility that any other phase trajectory starting in A stays in A. Any phase trajectory that stays in A converges to (k*, c*).⁴ But any such path satisfies (i) the difference equations (17) and (18) and (ii) the transversality condition.⁵ From Theorem 4.15 in Stokey and Lucas this would imply that this other path was also optimal. But we know there is a unique optimal path. Therefore only the optimal path stays in A.

Paths that start out above the optimal path eventually get into set B. To see this, let k*_1 = h(k_0) and c*_0 = g(k_0) − k*_1. If c_0 > c*_0 the phase trajectory stays strictly above the optimal path. To see why, notice that c_0 > c*_0 implies k_1 < k*_1. Hence, u'(c_0) < u'(c*_0) and g'(k_1) > g'(k*_1). Thus, if we solve for c_1 along the phase trajectory using (17), we will necessarily need u'(c_1) < u'(c*_1), where c*_1 is the value of c_1 that would prevail along the optimal path. Thus c_1 > c*_1 and the (k_1, c_1) pair along the phase trajectory lies strictly to the NW of (k*_1, c*_1). Since our phase trajectory stays strictly above the optimal path, leaves A (shown above) and does not go into C (shown above), it must go into B. A symmetric argument implies that paths that start out below the optimal path eventually get into set D.

⁴The other possibilities for paths that stay in A would be (i) that the path converges to a point in A other than (k*, c*) or (ii) that the path cycles around in A without ever converging to any point. The first possibility is ruled out because there is only one steady state of the system (17)-(18). The second possibility is also ruled out: we know the dynamics in A go in one direction, to the NE, and do not cycle.

⁵Along such a path lim_{t→∞} u'(c_t)[f'(k_t) + 1 − δ] k_t = u'(c*) k*/β, so that lim_{t→∞} β^t u'(c_t)[f'(k_t) + 1 − δ] k_t = 0.

Paths Starting in C  If we start with a point (k_0, c_0) ∈ C there are three possibilities: the subsequent values of (k_t, c_t) (i) could stay in C, (ii) move into B, or (iii) move into D. Symmetric to the argument above, the dynamics never move into A. Also symmetric to the argument above, the only phase trajectory that stays in C is one that starts on the optimal path. Paths that start out above the optimal path remain above it and move into B; paths that start out below it remain below it and eventually move into D.

2 Solving First-Order Scalar Difference Equations

Here we will discuss first-order deterministic and stochastic difference equations. The discussion draws heavily on Sargent (1987), Chapter 9 and to some extent Chapter 11, which should be consulted by the reader for a much more thorough treatment.

2.1 Deterministic Difference Equations

Consider the first-order difference equation

y_t = λ y_{t−1} + b x_t + a   for all t.   (19)

Here y_t is treated as an unknown sequence which we would like to characterize. The sequence {x_t}_{t=−∞}^∞ is known and deterministic.

Difference equations can be written using lag operator notation. The operator L lags a variable in the following sense: if we define a new sequence {y_t} in terms of the sequence {x_t} by letting y_t = L x_t, then y_t = x_{t−1}. So we could simply write x_{t−1} = L x_t. In general, x_{t−k} = L^k x_t.

In lag operator notation we rewrite (19) as

(1 − λL) y_t = b x_t + a.   (20)

A solution that is immediately apparent is

y_t = (1 − λL)^{−1}(b x_t + a).   (21)


One problem with this solution is that we have not yet discussed polynomials in the lag operator, such as 1 − λL, and how they are inverted. Naively, it would seem that there are two possibilities: the backward inverse,

(1 − λL)^{−1} = 1 + λL + λ²L² + ⋯,   (22)

and the forward inverse,

(1 − λL)^{−1} = −λ^{−1}L^{−1}(1 + λ^{−1}L^{−1} + λ^{−2}L^{−2} + ⋯).   (23)

Apart from the issue of how (1 − λL)^{−1} should be defined, there is another problem with (21), which is that there is a more general solution:

y_t = (1 − λL)^{−1}(b x_t + a) + λ^t c,   (24)

where c is an arbitrary constant. This solution is easily verified given that L λ^t = λ^{t−1}.

So there are infinitely many solutions to (19). How do we narrow ourselves down to one solution? There are three common ways to do this: (i) requiring that the {y_t} sequence be bounded for all bounded {x_t} sequences, (ii) treating one of the values of the {y_t} sequence as a known scalar, (iii) requiring the {y_t} sequence to converge to some finite value either as t → ∞ or as t → −∞.

Mapping Bounded Sequences to Bounded Sequences  One way to obtain a unique solution for the sequence {y_t} is to require that the solution to (19) be bounded for any bounded sequence {x_t}. Later we will see a theoretical argument for this assumption in the context of an economic model. For now we simply impose the restriction that polynomials in the lag operator should map bounded sequences to bounded sequences. Let the set of all bounded sequences be B. Clearly, L : B → B since {L x_t} ∈ B for any {x_t} ∈ B. It is also clear that (1 − λL) : B → B for any finite value of λ. But the representation of (1 − λL)^{−1} given by (22) only maps from B to B when |λ| < 1, while the representation of (1 − λL)^{−1} given by (23) only maps from B to B when |λ| > 1.

Hence, when |λ| < 1, the requirement that {y_t} ∈ B for any {x_t} ∈ B implies that there is a unique solution for y_t given by

y_t = (1 + λL + λ²L² + ⋯)(b x_t + a)
= b Σ_{j=0}^∞ λ^j x_{t−j} + a/(1 − λ).   (25)

The boundedness assumption also requires that we set c = 0 in (24), since lim_{t→−∞} |c λ^t| = ∞ for any c ≠ 0.
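A quick numerical sanity check of (25): build the truncated backward sum for an arbitrary bounded sequence and confirm it satisfies (19). The choice x_t = cos(t) and the parameter values are arbitrary illustrations.

```python
# Check the backward solution (25): for |lam| < 1 and bounded {x_t},
# y_t = b * sum_j lam^j x_{t-j} + a/(1 - lam) satisfies
# y_t = lam * y_{t-1} + b * x_t + a.

import math

lam, b, a = 0.7, 2.0, 1.0

def x(t):
    return math.cos(t)  # any bounded sequence will do

def y(t, n_terms=2000):
    """Truncated version of (25); the omitted tail is O(lam**n_terms)."""
    return b * sum(lam ** j * x(t - j) for j in range(n_terms)) + a / (1.0 - lam)

# Verify the difference equation (19) at a few dates
residuals = [abs(y(t) - (lam * y(t - 1) + b * x(t) + a)) for t in range(5)]
print(max(residuals))
```

The residuals are zero up to the truncation error and floating-point rounding.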


When |λ| > 1, the assumption that {y_t} ∈ B for any {x_t} ∈ B implies that there is a unique solution for y_t given by

y_t = −λ^{−1}L^{−1}(1 + λ^{−1}L^{−1} + λ^{−2}L^{−2} + ⋯)(b x_t + a)
= −b Σ_{j=0}^∞ λ^{−(j+1)} x_{t+1+j} + a/(1 − λ).   (26)

We again set c = 0 in (24), since lim_{t→∞} |c λ^t| = ∞ for any c ≠ 0.

When |λ| = 1, {y_t} is unbounded for at least some bounded {x_t} sequences. Hence, in this case, the boundedness restriction cannot be used to find a unique solution for y_t.

Conditioning on a Known Value  Suppose we know one of the elements of the sequence {y_t}, say y_0. This assumption is also motivated by theory. Frequently in macroeconomics we work with models in which a planner or economic agent maximizes a dynamic objective function given some initial level of resources. This initial level of resources at least partially determines the solution to the optimization problem. We will see an example of this below.

Given a value of y_0, we may generate the solution for the rest of the {y_t} sequence recursively using (19):

y_t = λ y_{t−1} + b x_t + a,   if t > 0,
y_t = λ^{−1} y_{t+1} − λ^{−1}(b x_{t+1} + a),   if t < 0.   (27)

We can also work through the recursions to get the solutions

y_t = λ^t y_0 + Σ_{j=0}^{t−1} λ^j (b x_{t−j} + a),   if t > 0,
y_t = λ^t y_0 − Σ_{j=0}^{−t−1} λ^{−(j+1)}(b x_{t+j+1} + a),   if t < 0.   (28)

These solutions work for all values of λ and are unique given a value of y_0.
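The closed forms in (28) can be checked against the recursions in (27) directly; the x sequence and parameters below are arbitrary, and note that the construction works even though |λ| > 1 here.

```python
# Check that the closed-form solution (28) matches the recursion (27),
# given y_0. Parameter values and the x sequence are illustrative.

lam, b, a, y0 = 1.3, 0.5, 2.0, 4.0

def x(t):
    return (-1) ** t / (1.0 + abs(t))  # an arbitrary known sequence

def y_closed(t):
    """Equation (28)."""
    if t == 0:
        return y0
    if t > 0:
        return lam ** t * y0 + sum(lam ** j * (b * x(t - j) + a)
                                   for j in range(t))
    # t < 0
    return lam ** t * y0 - sum(lam ** (-(j + 1)) * (b * x(t + j + 1) + a)
                               for j in range(-t))

# Generate the same values recursively from y_0, per (27)
vals = {0: y0}
for t in range(1, 8):                      # forward recursion
    vals[t] = lam * vals[t - 1] + b * x(t) + a
for t in range(-1, -8, -1):                # backward recursion
    vals[t] = (vals[t + 1] - b * x(t + 1) - a) / lam

err = max(abs(vals[t] - y_closed(t)) for t in range(-7, 8))
print(err)
```

The two constructions agree to floating-point precision at every date checked.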

Convergence in the Limit  There are some examples in which we know something about the limiting properties of the y_t sequence and there is only one solution for y_t that has these limiting properties. To take a simple example, suppose we have a = 0 and x_t = 0 for all t, so that the general solution is simply y_t = λ^t c. If |λ| > 1 yet we know that lim_{t→∞} y_t = 0, then the unique solution satisfying that convergence property is c = 0 and y_t = 0 for all t. On the other hand, if we knew lim_{t→−∞} y_t = 0, we would not be able to pin down c. Similarly, if |λ| < 1 and we know that lim_{t→−∞} y_t = 0, then the unique solution satisfying that convergence property is c = 0 and y_t = 0 for all t. On the other hand, if we knew lim_{t→∞} y_t = 0, we would not be able to pin down c.


2.2 Stochastic Difference Equations

Now consider the difference equation:

E_{t−1} y_t = λ y_{t−1} + b E_{t−1} x_t + a   for all t,   (29)

where E_{t−1} is the expectations operator conditional on information available at time t − 1. Now x_t is a stochastic process, and presumably the solution for y_t will also be a stochastic process. Another way of writing (29) is

y_t = λ y_{t−1} + b E_{t−1} x_t + a + ξ_t,   (30)

where ξ_t = y_t − E_{t−1} y_t is a white noise process. It is also convenient to use the notation z_{t−1} = E_{t−1} x_t and rewrite (30) as

y_t = λ y_{t−1} + b z_{t−1} + a + ξ_t.   (31)

You might think that we could simply treat (31) as a version of (19) with b x_t replaced by b z_{t−1} + ξ_t. Thus you might expect the general solution to be

y_t = (1 − λL)^{−1}(b z_{t−1} + ξ_t + a) + λ^t c,   (32)

with c being an arbitrary constant and the two possible forms of (1 − λL)^{−1} given by (22) and (23).

The solution implied by (22) and (32), which is

y_t = Σ_{j=0}^∞ λ^j (b z_{t−j−1} + ξ_{t−j} + a) + λ^t c,   (33)

is valid. This is easily verified by noting that given (33) we have

E_{t−1} y_t = Σ_{j=0}^∞ λ^j (b z_{t−j−1} + a) + Σ_{j=1}^∞ λ^j ξ_{t−j} + λ^t c

λ y_{t−1} = Σ_{j=1}^∞ λ^j (b z_{t−j−1} + a) + Σ_{j=1}^∞ λ^j ξ_{t−j} + λ^t c.

Hence

E_{t−1} y_t − λ y_{t−1} = b z_{t−1} + a,

which is the original difference equation.

The solution implied by (23) and (32), which is

y_t = −Σ_{j=0}^∞ λ^{−(j+1)}(b z_{t+j} + ξ_{t+1+j} + a) + λ^t c,

is easily seen to be invalid, since E_{t−1} y_t has no terms involving the future values of ξ_t but λ y_{t−1} does. The fact that y_t must be determined as of date t imposes an additional restriction. It turns out that the correct forward-looking solution is

y_t = −Σ_{j=0}^∞ λ^{−(j+1)}(b E_t z_{t+j} + a) + λ^t c.   (34)
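To see (34) at work, it helps to specialize (my specialization, not in the notes) to an AR(1) forcing process x_t = φx_{t−1} + e_t with |φ| < λ. Then z_t = E_t x_{t+1} = φx_t and E_t z_{t+j} = φ^{j+1}x_t, so with c = 0 the sum in (34) is a geometric series with closed form y_t = −bφx_t/(λ − φ) + a/(1 − λ). The snippet checks both the series and that this y_t satisfies (29).

```python
# Forward-looking solution (34) with c = 0 in an AR(1) special case:
# x_t = phi*x_{t-1} + e_t, so E_t z_{t+j} = phi^(j+1)*x_t and (34)
# collapses to y_t = -b*phi*x_t/(lam - phi) + a/(1 - lam).

lam, phi, b, a = 1.4, 0.8, 2.0, 0.6

def y_of_x(x_t):
    return -b * phi * x_t / (lam - phi) + a / (1.0 - lam)

def y_sum(x_t, n_terms=500):
    # truncated version of (34) with E_t z_{t+j} = phi^(j+1)*x_t
    return -sum(lam ** (-(j + 1)) * (b * phi ** (j + 1) * x_t + a)
                for j in range(n_terms))

# (i) the closed form agrees with the truncated series
err_series = max(abs(y_of_x(x) - y_sum(x)) for x in (-1.0, 0.0, 2.5))

# (ii) the expectational equation (29) holds: E_{t-1} x_t = phi*x_prev,
# so E_{t-1} y_t = y_of_x(phi*x_prev).
err_eq = max(abs(y_of_x(phi * xp) - (lam * y_of_x(xp) + b * phi * xp + a))
             for xp in (-1.0, 0.0, 2.5))

print(err_series, err_eq)
```

Both residuals vanish up to truncation and rounding, confirming that the forward solution satisfies the expectational difference equation in this special case.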

Once again we face the dilemma that there are infinitely many solutions to (29). We could use either representation, (33) or (34). Furthermore, both (33) and (34) represent an infinite number of possible solutions because c could be any constant. Additionally, (33) represents an infinite number of possible solutions because ξ_t could be any white noise process.

The question is whether and how we can narrow down the set of possible solutions. One way to do this is to implement a variant of the boundedness condition that is based on the property of covariance stationarity. In order to do this, a definition and a couple of theorems from time series analysis will be useful.

Definition: A stochastic process {y_t} is covariance stationary if E(y_t) = μ, Var(y_t) = γ_0 < ∞, and E[(y_t − μ)(y_{t−s} − μ)] = γ_s for all t.

Wold Decomposition Theorem: Any covariance stationary stochastic process {x_t} can be represented as x_t = y_t + z_t, where z_t is linearly deterministic [i.e. Var(z_t | z_{t−1}, z_{t−2}, ...) = 0] and y_t is purely non-deterministic with an MA(∞) representation y_t = ψ(L)ε_t, with Σ_{j=0}^∞ ψ_j² < ∞.

Stationarity of ARMA Processes: An ARMA process φ(L) y_t = θ(L) ε_t is covariance stationary if and only if θ(L) is square-summable, i.e. Σ_{j=0}^∞ θ_j² < ∞, and the roots of φ(z) = 0 lie strictly outside the unit circle.

We can restrict the set of possible solutions by requiring that y_t be covariance stationary for any covariance stationary x_t. Covariance stationarity of y_t immediately requires that c = 0, no matter which of the two representations, (33) or (34), we use, because otherwise the mean of the process y_t would be time varying. We will assume that x_t is purely non-deterministic, allowing us to write x_t = δ(L) e_t, with δ(L) square-summable and e_t white noise. Notice that z_t = E_t x_{t+1} = δ̃(L) e_t, where δ̃_j = δ_{j+1} and, therefore, δ̃(L) is square-summable.

Consider the case where |λ| < 1. If we impose c = 0, the solution (33) can be rewritten as

y_t − λ y_{t−1} = b z_{t−1} + ξ_t + a = b δ̃(L) e_{t−1} + ξ_t + a.   (35)

This makes it clear that the backward looking solution represents yt as an ARMA(1,1)processes which is covariance stationary because ~�(L) and the root of 1 � �Z = 0 is Z =


λ^{−1}, which lies outside the unit circle. With c = 0 the forward looking solution is y_t = −Σ_{j=0}^∞ λ^{−(j+1)}(bE_t z_{t+j} + a). Clearly the forward looking solution, (34), can only be finite if a = 0. But even when a = 0, there will be many covariance stationary x_t processes for which the forward looking solution is ill-defined.^6 Hence, when |λ| < 1 the restriction that y_t should be covariance stationary for any covariance stationary x_t will lead us to use

y_t = λy_{t−1} + bE_{t−1}x_t + ε_t + a
    = Σ_{j=0}^∞ λ^j (bE_{t−j−1}x_{t−j} + ε_{t−j} + a)    (36)

as the solution for y_t. Although the solution is covariance stationary it remains non-unique, since ε_t could be any white noise process. As we will see in the context of a theoretical example below, it is sometimes possible to eliminate the multiplicity because the variable y_t is known to be in the time t − 1 information set. In this case, the ε_t terms drop out of (36) and the solution for y_t reduces to

y_t = λy_{t−1} + bE_{t−1}x_t + a = Σ_{j=0}^∞ λ^j (bE_{t−j−1}x_{t−j} + a).    (37)
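A quick way to see the covariance stationarity argument at work is to simulate solution (37). The sketch below uses illustrative values of my own choosing (λ = 0.5, ρ = 0.8, b = 1, a = 0, not taken from the notes) and an AR(1) forcing process x_t = ρx_{t−1} + e_t, for which E_{t−1}x_t = ρx_{t−1}; with |λ| < 1 the simulated y_t stays bounded with a stable sample mean.

```python
import random

# Simulate the backward solution (37), y_t = lam*y_{t-1} + b*E_{t-1}x_t + a,
# for an AR(1) forcing process x_t = rho*x_{t-1} + e_t, so E_{t-1}x_t = rho*x_{t-1}.
# Parameter values are illustrative choices, not from the notes.
random.seed(0)
lam, rho, b, a = 0.5, 0.8, 1.0, 0.0

T = 5000
x = [0.0]
for _ in range(T):
    x.append(rho * x[-1] + random.gauss(0.0, 1.0))

y = [0.0]
for t in range(1, T + 1):
    y.append(lam * y[-1] + b * rho * x[t - 1] + a)

# With |lam| < 1 the path is covariance stationary: bounded, with sample
# mean near the unconditional mean a/(1 - lam) = 0.
mean_y = sum(y) / len(y)
print(max(abs(v) for v in y), mean_y)
```

With |λ| > 1 the same recursion explodes, which is exactly why the forward solution is used in that case, as discussed next.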

Now consider the case where |λ| > 1. It is immediately obvious that y_t will not be covariance stationary if we use the backward-looking solution as described by (35). This is because the root of the autoregressive polynomial is λ^{−1}, and this lies inside the unit circle. On the other hand, the forward looking solution with c = 0 is y_t = −Σ_{j=0}^∞ λ^{−(j+1)}(bE_t z_{t+j} + a). Given that z_t = α̃(L)e_t, we have E_t z_{t+j} = Σ_{k=0}^∞ α̃_{j+k} e_{t−k}, so we can write

y_t = a/(1 − λ) − b Σ_{j=0}^∞ λ^{−(j+1)} Σ_{k=0}^∞ α̃_{j+k} e_{t−k}
    = a/(1 − λ) − bλ^{−1} Σ_{j=0}^∞ (Σ_{k=0}^∞ α̃_{j+k} λ^{−k}) e_{t−j}.

The sequence c_j = Σ_{k=0}^∞ α_{j+k+1}λ^{−k} is square summable: by the Cauchy-Schwarz inequality, c_j² ≤ (Σ_{k=0}^∞ |λ|^{−k})(Σ_{k=0}^∞ α²_{j+k+1}|λ|^{−k}), and summing over j and interchanging the order of summation gives Σ_{j=0}^∞ c_j² ≤ (Σ_{k=0}^∞ |λ|^{−k})² Σ_{m=0}^∞ α²_m < ∞, because |λ| > 1 and α(L) is square-summable. This delivers that this representation of y_t is covariance stationary. Given this result, when |λ| > 1 the restriction that y_t should be covariance stationary for any covariance stationary x_t would lead us to use

y_t = −Σ_{j=0}^∞ λ^{−(j+1)}(bE_t x_{t+j+1} + a),    (38)

^6Consider, for example, the case where x_t = ρx_{t−1} + e_t with |ρ| < 1. This means z_t = ρx_t and that E_t z_{t+j} = ρ^{j+1}x_t. With a = 0, y_t = −b Σ_{j=0}^∞ (ρ/λ)^{j+1} x_t. This is not finite when |λ| < |ρ|.


as the solution for y_t.

If |λ| = 1, neither solution can be shown to be covariance stationary for all possible covariance stationary x_t processes, but it is common, in this case, to work with the backward looking solution, which is simply an ARIMA(0,1,1).

3 Solving Deterministic Models

The basic principles of solving for the equilibrium dynamics of both deterministic and stochastic models are easily illustrated using the deterministic neoclassical growth model. After reviewing the optimality conditions from the model, I describe how to linearize the first order conditions to generate a system of linear difference equations that will then need to be solved. I then demonstrate how three different solution methods can be applied to these equations: (i) the Blanchard and Kahn (1980) approach, (ii) the shooting method and (iii) the method of undetermined coefficients.

When we solved the social planner's problem for the neoclassical model we obtained the following optimality conditions:

u′(c_t) = βu′(c_{t+1})g′(k_{t+1}),  t ≥ 0    (39)
c_t = g(k_t) − k_{t+1},  t ≥ 0.    (40)

3.1 Linear Approximation

Given standard assumptions about u and f (and therefore g), equations (39) and (40) are nonlinear and we cannot determine the form of the policy function h, which yields the optimal value of k_{t+1} as a function of k_t.^7 A standard approach to working with equations like (39) and (40) is to approximate them with linear equations.

The first step is to find the steady state of the model. This can be done by removing time subscripts from the variables in (39) and (40) and solving for c and k:

1 = βg′(k)    (41)
c = g(k) − k    (42)

Given a functional form for f, (41) can be solved for k, and, given this solution, (42) can be solved for c (we did this above).
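For concreteness, a minimal sketch of this steady-state calculation, assuming the common parameterization f(k) = k^α and g(k) = f(k) + (1 − δ)k (the functional forms and parameter values here are my own illustrative choices, not from the notes):

```python
# Steady state of the neoclassical growth model, assuming f(k) = k^alpha
# and g(k) = f(k) + (1 - delta)*k. Parameter values are illustrative.
alpha, beta, delta = 0.36, 0.96, 0.10

# (41): 1 = beta*g'(k)  =>  alpha*k^(alpha - 1) = 1/beta - (1 - delta)
k_ss = (alpha / (1.0 / beta - 1.0 + delta)) ** (1.0 / (1.0 - alpha))

# (42): c = g(k) - k = f(k) - delta*k
c_ss = k_ss ** alpha - delta * k_ss

print(k_ss, c_ss)
```

Here (41) pins down k from αk^{α−1} = 1/β − (1 − δ), and (42) then delivers c.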

In the next step we first totally differentiate (39) and (40) in c_t, c_{t+1}, k_t and k_{t+1} at their steady state values c, c, k, k:

u″(c)dc_t = βu″(c)g′(k)dc_{t+1} + βu′(c)g″(k)dk_{t+1},

^7A notable exception is the case where f(k) = Ak^α, u(c) = ln c and δ = 1.


dc_t = g′(k)dk_t − dk_{t+1}.

We can simplify the first equation since βg′(k) = 1 in the steady state. We can also rewrite both equations in terms of ĉ_t = dc_t/c and k̂_t = dk_t/k:^8

[u″(c)c] ĉ_t = [u″(c)c] ĉ_{t+1} + [βu′(c)g″(k)k] k̂_{t+1},
[c] ĉ_t = [g′(k)k] k̂_t − [k] k̂_{t+1}.

We then divide the first equation by u′(c) and the second equation by k:

[u″(c)c/u′(c)] ĉ_t = [u″(c)c/u′(c)] ĉ_{t+1} + [βg″(k)k] k̂_{t+1},
(c/k) ĉ_t = g′(k) k̂_t − k̂_{t+1}.

If we define the coefficient of relative risk aversion at the steady state as σ ≡ −u″(c)c/u′(c), and a parameter θ ≡ −βg″(k)k > 0, and note that g′(k) = β^{−1}, then the two equations can be written more compactly as:

ĉ_t = ĉ_{t+1} + (θ/σ)k̂_{t+1},    (43)
(c/k)ĉ_t = β^{−1}k̂_t − k̂_{t+1}.    (44)

In vector form, these two linear equations can be expressed as a first-order difference equation:

[θ/σ, 1; 1, 0][k̂_{t+1}; ĉ_{t+1}] = [0, 1; β^{−1}, −(c/k)][k̂_t; ĉ_t]

or

[k̂_{t+1}; ĉ_{t+1}] = [β^{−1}, −c/k; −θ/(σβ), 1 + θc/(σk)][k̂_t; ĉ_t].    (45)

Write the difference equation (45) as

x_t = Mx_{t−1}.    (46)

We can use (46) to approximately solve the neoclassical model using one of three methods: (i) the Blanchard and Kahn approach, (ii) the shooting method and (iii) the method of undetermined coefficients. Each of these methods relies on specific properties of M, and things we know about the neoclassical model.
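Before turning to the three methods, it may help to see the key property of M concretely. The sketch below builds M under an illustrative parameterization of my own choosing (f(k) = k^α, log utility so σ = 1, g(k) = f(k) + (1 − δ)k) and computes its eigenvalues by the quadratic formula; they satisfy 0 < λ_1 < 1 < λ_2, which is the configuration the methods below exploit.

```python
import math

# Illustrative parameterization (my choice, not from the notes):
# f(k) = k^alpha, u(c) = ln(c) so sigma = 1, g(k) = f(k) + (1 - delta)k.
alpha, beta, delta, sigma = 0.36, 0.96, 0.10, 1.0

# Steady state from (41)-(42).
k = (alpha / (1.0 / beta - 1.0 + delta)) ** (1.0 / (1.0 - alpha))
c = k ** alpha - delta * k
theta = beta * alpha * (1.0 - alpha) * k ** (alpha - 1.0)  # theta = -beta*g''(k)*k > 0

# The matrix M from (45).
m11, m12 = 1.0 / beta, -c / k
m21, m22 = -theta / (sigma * beta), 1.0 + theta * c / (sigma * k)

# Eigenvalues of the 2x2 matrix via its trace and determinant.
tr = m11 + m22
det = m11 * m22 - m12 * m21
disc = math.sqrt(tr * tr - 4.0 * det)
lam1, lam2 = (tr - disc) / 2.0, (tr + disc) / 2.0
print(lam1, lam2)  # one stable root, one unstable root
```

Note that det M = λ_1λ_2 = β^{−1} > 1, so the two roots can never both be stable.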

Although we will not do so in these notes, it can be shown that the neoclassical growth model has a unique solution.^9 That is, given an initial value of k_0, there are unique choices for {c_t, k_{t+1}}_{t=0}^∞ that solve the social planner's problem. Also, the optimal path has the property that lim_{t→∞}(c_t, k_t) = (c, k). In terms of the linearized model, this means lim_{t→∞} x_t = 0.

^8Notice that the notation x̂_t = dx_t/x indicates the percentage deviation of x_t from its steady state value, x, because the total derivative was taken in the neighborhood of x.

^9See, for example, chapters 1-4 of Stokey and Lucas (1989).


3.2 Blanchard and Kahn's Method

Assuming that it is possible to diagonalize M, we begin by letting M = VΛV^{−1}, where V is a matrix whose columns are the eigenvectors of M normalized to have unit length, and Λ is a diagonal matrix with the corresponding eigenvalues arranged along the diagonal.^10 Define x̃_t = V^{−1}x_t. Premultiplying (46) by V^{−1} we get V^{−1}x_t = V^{−1}VΛV^{−1}x_{t−1}, or

x̃_t = Λx̃_{t−1}.    (47)

Thus we get a pair of scalar difference equations:

x̃_{1t} = λ_1x̃_{1t−1} and x̃_{2t} = λ_2x̃_{2t−1}.    (48)

Above, we saw that the general solutions to the two difference equations are

x̃_{1t} = c_1λ_1^t and x̃_{2t} = c_2λ_2^t,    (49)

with c_1 and c_2 being arbitrary constants.

The next step is to pin down the values of c_1 and c_2. To do so we will rely on things we know about the optimal paths of the elements of x_t rather than x̃_t. First, we know the value of k̂_0 because k_0 is given. Second, we know that the correct solution has the property that lim_{t→∞} x_t = 0. Since x_t = Vx̃_t we can write

x_{1t} = v_{11}c_1λ_1^t + v_{12}c_2λ_2^t,  x_{2t} = v_{21}c_1λ_1^t + v_{22}c_2λ_2^t.    (50)

The fact that k̂_0 (equivalently, x_{10}) is known puts one restriction on the constants c_1 and c_2. They must jointly satisfy k̂_0 = x_{10} = v_{11}c_1 + v_{12}c_2. With this one restriction (and writing c for the remaining free constant c_1), the solutions for x_{1t} and x_{2t} become:

x_{1t} = cv_{11}λ_1^t + (k̂_0 − cv_{11})λ_2^t    (51)
x_{2t} = cv_{21}λ_1^t + (k̂_0 − cv_{11})(v_{22}/v_{12})λ_2^t    (52)

We now check whether the fact that lim_{t→∞} x_t = 0 allows us to pin down c. Without loss of generality, we will assume that |λ_1| ≤ |λ_2|. There are three cases to consider.

1. If |λ_1| ≥ 1 and |λ_2| ≥ 1 there is no solution for x_t such that lim_{t→∞} x_t = 0. Even if you set c = 0 to eliminate the cv_{11}λ_1^t and cv_{21}λ_1^t terms in (51) and (52), you would be left with the explosive terms k̂_0λ_2^t and (v_{22}/v_{12})k̂_0λ_2^t. These would vanish in the limit only if k̂_0 = 0 and, therefore, x_t = 0 for all t; that is, if the system was at the steady state at time 0. On the

^10An n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors. The case where M cannot be diagonalized is discussed briefly below.


other hand, if you assumed c = k̂_0/v_{11} to eliminate the second terms in each solution, then you would be left with the explosive cv_{11}λ_1^t and cv_{21}λ_1^t terms.

2. If |λ_1| < 1 and |λ_2| < 1 there are arbitrarily many solutions for x_t such that lim_{t→∞} x_t = 0. Clearly lim_{t→∞} x_t = 0 regardless of the value of c. So we are left with an infinite number of solutions given by (51) and (52).

3. If |λ_1| < 1 and |λ_2| ≥ 1 there is a unique solution.^11 We must set c = k̂_0/v_{11} so that the λ_2^t terms drop out of the solutions (51) and (52). This implies x_{1t} = k̂_0λ_1^t and x_{2t} = (v_{21}/v_{11})k̂_0λ_1^t. This completely characterizes the paths of k̂_t and ĉ_t and tells us that ĉ_t = (v_{21}/v_{11})k̂_t along these paths.

There is an alternative approach to solving the model. The fact that c = k̂_0/v_{11} is equivalent to c_2 = 0. If we substitute this into (49) we find x̃_{2t} = 0. The definition x̃_t = V^{−1}x_t tells us that x̃_{2t} = v^{21}x_{1t} + v^{22}x_{2t}, where v^{ij} is the ij element of V^{−1}. Knowing that x̃_{2t} = 0 implies that x_{2t} = −(v^{21}/v^{22})x_{1t}, or ĉ_t = −(v^{21}/v^{22})k̂_t. It is easy to verify that v_{21}/v_{11} = −(v^{21}/v^{22}), so that the two ways of proceeding get to the same answer.

Appendix: Proof that 0 < λ_1 < 1 < λ_2. Let ϕ = θc/(σk) and note that ϕ > 0. The eigenvalues of M are the solutions to

ψ(λ) = (β^{−1} − λ)(1 + ϕ − λ) − ϕβ^{−1} = 0.

We have ψ′(λ) = 2λ − (1 + ϕ + β^{−1}) and ψ″(λ) = 2, so that ψ is a convex quadratic function. Notice that ψ(λ) is minimized at λ* = (1 + ϕ + β^{−1})/2 and that, since ψ(1) = −ϕ < 0, we have ψ(λ*) ≤ ψ(1) < 0. This means that the roots are real and distinct, with λ_1 < λ* < λ_2. We immediately obtain λ_2 > 1 since λ* > 1. Since ψ is monotonically decreasing for λ < λ* and ψ(1) < 0, we also have λ_1 < 1. Finally, ψ(0) = β^{−1}(1 + ϕ) − ϕβ^{−1} = β^{−1} > 0, so λ_1 > 0.

3.3 The Phase Diagram and the Shooting Method

The shooting method is an approach to solving the original nonlinear representation of the model, given by equations (39) and (40). If one knew the optimal value of c_0 one could solve recursively for the remaining choices, {c_t, k_t}_{t=1}^∞. Given k_t and c_t, (40) can be solved for k_{t+1}. Then c_{t+1} can be obtained from (39).

Since c_0 is unknown, an approach to solving for {c_t, k_{t+1}}_{t=0}^∞ is to guess the value of c_0, compute {c_t, k_t}_{t=1}^T using (39) and (40) for some T > 0, and then modify the guess for c_0 depending on what happens to the recursive simulation. The only subtlety in the shooting method is how, if at all, the guess for c_0 should be modified after each simulation. Generally speaking, the method for modifying the guess relies on knowledge of the model's phase diagram.

^11It turns out that this is the relevant case for the neoclassical model because 0 < λ_1 < 1 < λ_2.


Although we will not construct it here, the phase diagram for the nonlinear version of the model has the property that, for any guess c_0, lim_{t→∞}(c_t, k_t) = (c, k) if and only if the guess for c_0 is correct.^12 Otherwise (c_t, k_t) diverges from the steady state. In particular, if the guess for c_0 is too large, the path for the capital stock will eventually violate the non-negativity constraint. On the other hand, if the guess for c_0 is too small, lim_{t→∞}(c_t, k_t) = (0, k̄), where k̄ is the solution of the equation f(k̄) = δk̄. These facts can be used to construct a shooting algorithm in which the guess for c_0 is adjusted downward when it becomes clear that the guess for c_0 is too large, and adjusted upward when it becomes clear that the guess is too small.

Just as it can be used to solve the original model, the shooting method can be used to solve the linearized model. We construct the phase diagram for the linearized model in Figure 5. There are two paths in Figure 5 along which ĉ_t and k̂_t are proportional to one another. These are the optimal path, corresponding to c = k̂_0/v_{11}, and a suboptimal path, corresponding to c = 0.

As we saw in the previous section, if c = k̂_0/v_{11}, then ĉ_0 = (v_{21}/v_{11})k̂_0 and, for t > 0, ĉ_t = (v_{21}/v_{11})k̂_t. This is the unique stable path in Figure 5, which implies convergence to the steady state.^13

The other linear path in Figure 5 is the one corresponding to c = 0. When c = 0, ĉ_0 = (v_{22}/v_{12})k̂_0 and, for t > 0, ĉ_t = (v_{22}/v_{12})k̂_t.^14

The remaining paths, as we have seen, are explosive, and correspond to the solutions, (51) and (52), for other values of c. These paths are not linear because ĉ_t and k̂_t diverge at different rates along them. To draw these paths it is useful to note that (45) implies that k̂_{t+1} > k̂_t if ĉ_t < (β^{−1} − 1)(k/c)k̂_t, and that ĉ_{t+1} > ĉ_t if ĉ_t > β^{−1}(k/c)k̂_t. Thus, we can define the regions A, B, C, D in Figure 5, in which ĉ_t and k̂_t are both increasing (A), ĉ_t is increasing but k̂_t is decreasing (B), ĉ_t and k̂_t are both decreasing (C), and ĉ_t is decreasing while k̂_t is increasing (D).

The optimal path, ĉ_t = (v_{21}/v_{11})k̂_t, lies entirely within regions A and C. The steady state (0, 0) is a saddle point, with the optimal solution being a saddle path. All other paths are divergent.

It is now clear how the phase diagram can be used to design the shooting algorithm. Given k̂_0, start with a guess for ĉ_0. Then simulate (45) forward. If the path enters region B, stop and revise the guess for ĉ_0 downward. If the path enters region D, stop and revise the guess for ĉ_0 upward. Keep doing this until the path gets within a tolerance limit of (0, 0).

^12See my notes on the neoclassical growth model for a discussion of the uniqueness of the model's solution and for a complete discussion of the phase diagram.

^13The reader should verify that for the neoclassical model, v_{21}/v_{11} > 0.

^14The reader should verify that for the neoclassical model, v_{22}/v_{12} < 0.

21

Page 22: Equilibrium Dynamics and Di⁄erence Equationspeople.duke.edu/~acb8/notes3.pdf · Equilibrium Dynamics and Di⁄erence Equations Craig Burnside Duke University September 18, 2010

FIGURE 5
The Phase Diagram for the Linearized Neoclassical Model

[Figure 5 plots ĉ_t against k̂_t, with the steady state at (0, 0). The lines ĉ_t = (β^{−1} − 1)(k/c)k̂_t and ĉ_t = β^{−1}(k/c)k̂_t divide the plane into the regions A, B, C and D, and the lines ĉ_t = (v_{21}/v_{11})k̂_t and ĉ_t = (v_{22}/v_{12})k̂_t are the stable (optimal) and unstable linear paths.]
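The shooting algorithm for the linearized model can be implemented as a bisection on the guess for ĉ_0. The sketch below (illustrative parameterization of my own choosing) classifies each simulated path by whether it eventually explodes with ĉ_t → +∞ (region B: revise downward) or ĉ_t → −∞ (region D: revise upward), and recovers the saddle-path value ĉ_0 = (v_{21}/v_{11})k̂_0.

```python
import math

# Shooting by bisection on chat_0 for the linearized system (45).
# The parameterization is an illustrative choice (f(k) = k^alpha, log utility).
alpha, beta, delta, sigma = 0.36, 0.96, 0.10, 1.0
k = (alpha / (1.0 / beta - 1.0 + delta)) ** (1.0 / (1.0 - alpha))
c = k ** alpha - delta * k
theta = beta * alpha * (1.0 - alpha) * k ** (alpha - 1.0)
m11, m12 = 1.0 / beta, -c / k
m21, m22 = -theta / (sigma * beta), 1.0 + theta * c / (sigma * k)

def diverges_up(c0, k0=1.0, T=500, bound=1e6):
    """Simulate (45) from (k0, c0). True means the path eventually explodes
    with chat -> +infinity (region B, guess too high); False means it
    explodes with chat -> -infinity (region D, guess too low)."""
    khat, chat = k0, c0
    for _ in range(T):
        khat, chat = m11 * khat + m12 * chat, m21 * khat + m22 * chat
        if abs(chat) > bound:
            break
    return chat > 0.0

lo, hi = -10.0, 10.0          # initial bracket for chat_0, with khat_0 = 1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if diverges_up(mid):
        hi = mid              # revise the consumption guess downward
    else:
        lo = mid              # revise the consumption guess upward
c0_star = 0.5 * (lo + hi)

# Compare with the closed-form saddle-path value (v21/v11)*khat_0.
tr, det = m11 + m22, m11 * m22 - m12 * m21
lam1 = (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0
saddle = (lam1 - m11) / m12   # = v21/v11
print(c0_star, saddle)
```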

While the shooting method is obviously feasible, there is really no reason to use it once we have linearized the model: using the Blanchard and Kahn approach, the exact solution to the linearized model is as easy to characterize as the phase diagram.

A few more words about the phase diagram are in order. We obtained Figure 5 because the neoclassical model is one for which 0 < λ_1 < 1 < λ_2: the roots are real and distinct, and one is stable while the other is unstable. For more general matrices M, a system x_t = Mx_{t−1} can have many types of phase diagrams. If |λ_1| < 1 and |λ_2| < 1, all paths will lead into the steady state. If |λ_1| > 1 and |λ_2| > 1, all paths will lead away from the steady state. If V is diagonal (in either direction), all the paths will be straight lines. If the roots are complex the paths will spiral around the steady state, either convergently, explosively or perpetually. Nonetheless, in all of these cases, as long as M is diagonalizable, the most general solution takes the form (50). This is true even for complex roots. The one case we have not considered is the case where M cannot be diagonalized.

When M cannot be Diagonalized  This case can arise, but does not necessarily do so, when M has a repeated eigenvalue, λ. Even when M is not diagonalizable, it can still be written in the Jordan form M = PJP^{−1}, where P is a specific invertible matrix and

J = [λ, 1; 0, λ].

Define x̃_t = P^{−1}x_t. Premultiplying (46) by P^{−1} we get

x̃_t = Jx̃_{t−1}.


Thus we get a pair of equations:

x̃_{1t} = λx̃_{1t−1} + x̃_{2t−1} and x̃_{2t} = λx̃_{2t−1}.

These have the general solution

x̃_{1t} = c_1λ^t + c_2tλ^{t−1} and x̃_{2t} = c_2λ^t.

Therefore

x_{1t} = p_{11}(c_1λ^t + c_2tλ^{t−1}) + p_{12}c_2λ^t
x_{2t} = p_{21}(c_1λ^t + c_2tλ^{t−1}) + p_{22}c_2λ^t.

Once again we could pin down the value of c_2 using the fact that x_{10} = k̂_0. However, in the next step we would find either (i) no bounded solution for any value of c_1, if |λ| > 1, or (ii) a bounded solution for every value of c_1, if |λ| < 1.
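The algebra of the repeated-root case is easy to verify numerically. The sketch below uses a made-up λ and constants (my own choices, not tied to the model) and checks that the claimed general solution satisfies the pair of equations:

```python
# Verify that x1_t = c1*lam^t + c2*t*lam^(t-1) and x2_t = c2*lam^t solve
# x1_t = lam*x1_{t-1} + x2_{t-1} and x2_t = lam*x2_{t-1} (the Jordan system).
lam, c1, c2 = 0.9, 0.3, -1.1   # arbitrary illustrative values

def closed_form(t):
    return (c1 * lam ** t + c2 * t * lam ** (t - 1), c2 * lam ** t)

x1, x2 = closed_form(0)
ok = True
for t in range(1, 50):
    x1, x2 = lam * x1 + x2, lam * x2          # iterate the recursion
    f1, f2 = closed_form(t)
    ok = ok and abs(x1 - f1) < 1e-12 and abs(x2 - f2) < 1e-12
print(ok)
```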

3.4 Method of Undetermined Coefficients

Another approach to solving the model is to guess that the optimal solution is of the form ĉ_t = ηk̂_t for some scalar η. As we have seen above, this conjecture is correct. If the

conjecture is substituted into (45) we have

[1; η]k̂_{t+1} = [β^{−1}, −c/k; −θ/(σβ), 1 + θc/(σk)][1; η]k̂_t

or

[1; η]k̂_{t+1} = [β^{−1} − ηc/k; −θ/(σβ) + η(1 + θc/(σk))]k̂_t.

Dividing the second equation through by η:

[1; 1]k̂_{t+1} = [β^{−1} − ηc/k; −θ/(σβη) + 1 + θc/(σk)]k̂_t.    (53)

For the two equations implicit in (53) to both be valid requires that

β^{−1} − ηc/k = −θ/(σβη) + 1 + θc/(σk).

This means η is a solution to the quadratic equation

η²(c/k) + η[1 − β^{−1} + θc/(σk)] − θ/(σβ) = 0.    (54)

There are two solutions, η_1 and η_2. Suppose we choose one of these solutions, say η_1. We can verify if it is the correct solution by going back to one of the two equations, say


k̂_{t+1} = (β^{−1} − η_1c/k)k̂_t. Notice that for any k̂_0, we will get lim_{t→∞} k̂_t = 0 if and only if |β^{−1} − η_1c/k| < 1. We could verify whether we had the correct solution by checking whether this inequality is true. If it is not, we would use the solution corresponding to η_2.
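A minimal sketch of this root-selection step, under an illustrative parameterization of my own choosing (f(k) = k^α, log utility so σ = 1): solve (54) with the quadratic formula and keep the root for which |β^{−1} − ηc/k| < 1.

```python
import math

# Solve the quadratic (54) for eta and select the stable root.
# The parameterization is an illustrative choice, not from the notes.
alpha, beta, delta, sigma = 0.36, 0.96, 0.10, 1.0
k = (alpha / (1.0 / beta - 1.0 + delta)) ** (1.0 / (1.0 - alpha))
c = k ** alpha - delta * k
theta = beta * alpha * (1.0 - alpha) * k ** (alpha - 1.0)

# (54): (c/k)*eta^2 + [1 - 1/beta + theta*c/(sigma*k)]*eta - theta/(sigma*beta) = 0
qa = c / k
qb = 1.0 - 1.0 / beta + theta * c / (sigma * k)
qc = -theta / (sigma * beta)
root = math.sqrt(qb * qb - 4.0 * qa * qc)
etas = ((-qb + root) / (2.0 * qa), (-qb - root) / (2.0 * qa))

# Keep the root for which khat_{t+1} = (1/beta - eta*c/k)*khat_t is stable.
stable = [e for e in etas if abs(1.0 / beta - e * c / k) < 1.0]
assert len(stable) == 1   # exactly one root delivers lim khat_t = 0
eta = stable[0]
print(eta)
```

For this parameterization exactly one root survives the stability check, and it matches v_{21}/v_{11} from the Blanchard and Kahn calculation.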

There is another, more straightforward, way of thinking about solving for η. Notice that we could also guess a solution of the form δ_1k̂_t + δ_2ĉ_t = 0 for all t, and try to solve for δ_1 and δ_2. We can write this guess as δ′x_t = 0 for all t, where δ′ = (δ_1  δ_2). If we premultiply (46) by δ′ we get δ′x_{t+1} = δ′Mx_t. By assumption δ′x_{t+1} = 0, so this means δ′Mx_t = 0. We also have δ′x_t = 0. For both of these equalities to hold for all possible x_t, the elements of the vector δ′ must be proportional to the elements of the vector δ′M, with the proportion being given by some scalar μ.^15 I.e. there is some μ ≠ 0 for which μδ′ = δ′M. If we transpose both sides of the equation we get M′δ = μδ. But now we can see that finding δ is a matter of finding the eigenvalues and eigenvectors of M′.

There are two eigenvectors of M′, δ_1 and δ_2, so which one should we pick? Assume, as we did above, that M can be diagonalized as M = VΛV^{−1}. Notice that since M′ = (V^{−1})′ΛV′, M′ has the same eigenvalues as M, and its eigenvectors are the columns of (V^{−1})′. I.e. (V^{−1})′ = (δ_1  δ_2).

Since x_t = Mx_{t−1}, x_t = M^tx_0 = VΛ^tV^{−1}x_0 or

x_t = [v_{11}, v_{12}; v_{21}, v_{22}][λ_1^t, 0; 0, λ_2^t][δ_1′; δ_2′]x_0
    = [v_{11}λ_1^t(δ_1′x_0) + v_{12}λ_2^t(δ_2′x_0); v_{21}λ_1^t(δ_1′x_0) + v_{22}λ_2^t(δ_2′x_0)].

If we choose δ_1 for our solution, we have δ_1′x_0 = 0, so that

x_t = [v_{12}λ_2^t(δ_2′x_0); v_{22}λ_2^t(δ_2′x_0)].

Notice that this means lim_{t→∞} x_t = 0 if and only if |λ_2| < 1. Similarly, if we choose δ_2 for our solution, we have δ_2′x_0 = 0, so that

x_t = [v_{11}λ_1^t(δ_1′x_0); v_{21}λ_1^t(δ_1′x_0)].

In this case lim_{t→∞} x_t = 0 if and only if |λ_1| < 1. Without loss of generality let |λ_1| ≤ |λ_2|. Since our criterion for choosing the correct solution is that lim_{t→∞} x_t = 0, this means:

• if |λ_1| ≥ 1 and |λ_2| ≥ 1, neither solution will have the desired property.

^15Notice that if we define a vector γ = M′δ, one of the equations says δ_1k̂_0 + δ_2ĉ_0 = 0, while the other says that γ_1k̂_0 + γ_2ĉ_0 = 0. So one equation implies that ĉ_0 = −(δ_1/δ_2)k̂_0 while the other implies that ĉ_0 = −(γ_1/γ_2)k̂_0. So δ_1/δ_2 = γ_1/γ_2 or, in other words, γ_1 = μδ_1 and γ_2 = μδ_2 for some μ.


• if |λ_1| < 1 and |λ_2| < 1, then either solution will work and the choice will be ambiguous.

• if |λ_1| < 1 and |λ_2| ≥ 1, then clearly we would need to use δ_2 as our solution.

As we saw before, for the neoclassical model the last condition holds. So there is a unique solution, corresponding to the eigenvector δ_2 of M′ that corresponds to the larger eigenvalue. As we noted above, the eigenvectors of M′ are the columns of (V^{−1})′. Hence they are the rows of V^{−1}. This means that δ_2′ = (v^{21}  v^{22}). So v^{21}k̂_t + v^{22}ĉ_t = 0, or ĉ_t = −(v^{21}/v^{22})k̂_t. This is the same as the solution we obtained using the Blanchard and Kahn approach.

4 Linear Approximation in the Stochastic Model

Now imagine that we have the stochastic model where output is given by z_tf(k_t) and z_t is some stationary stochastic process. Now the optimality conditions are:

u′(c_t) = βE_t{u′(c_{t+1})[z_{t+1}f′(k_{t+1}) + 1 − δ]}    (55)
c_t = z_tf(k_t) + (1 − δ)k_t − k_{t+1}.    (56)

It is straightforward to show that the linearized representation of these equations is

ĉ_t = E_tĉ_{t+1} + (θ/σ)k̂_{t+1} − (θ_Z/σ)E_tẑ_{t+1}
(c/k)ĉ_t = (y/k)ẑ_t + β^{−1}k̂_t − k̂_{t+1},

where θ, σ, c and k are defined as before, θ_Z = βf′(k)z and y = zf(k).

In vector form, these two linear equations can be expressed as a first-order difference equation:

[θ/σ, 1; 1, 0]E_t[k̂_{t+1}; ĉ_{t+1}] = [0, 1; β^{−1}, −(c/k)][k̂_t; ĉ_t] + [θ_Z/σ; 0]E_tẑ_{t+1} + [0; y/k]ẑ_t

or

E_t[k̂_{t+1}; ĉ_{t+1}] = [β^{−1}, −c/k; −θ/(σβ), 1 + θc/(σk)][k̂_t; ĉ_t] + [0; θ_Z/σ]E_tẑ_{t+1} + [y/k; −θy/(σk)]ẑ_t.

For the moment it is probably easiest to write this system as

E_tx_{t+1} = Mx_t + Γ_t

where

Γ_t = [0; θ_Z/σ]E_tẑ_{t+1} + [y/k; −θy/(σk)]ẑ_t.


Notice that the matrix M is unchanged. Once again, we could diagonalize the system and write it as

E_tx̃_{t+1} = Λx̃_t + z̃_t

where x̃_t = V^{−1}x_t and z̃_t = V^{−1}Γ_t. In this case we end up with two scalar stochastic difference equations

E_tx̃_{1t+1} = λ_1x̃_{1t} + z̃_{1t}
E_tx̃_{2t+1} = λ_2x̃_{2t} + z̃_{2t}.

We would like to find solutions for x̃_{1t} and x̃_{2t} that are (i) consistent with k_{t+1} being determined at time t and (ii) imply that x_t is covariance stationary.

There is a very direct way of finding the solution which is due to King, Plosser and Rebelo (2002). They suggest solving the second equation forward using (38):

x̃_{2t} = −Σ_{j=0}^∞ λ_2^{−(j+1)}E_tz̃_{2t+j} = −Σ_{j=0}^∞ λ_2^{−(j+1)}(v^{21}  v^{22})E_tΓ_{t+j}.

Then they suggest using the fact that x̃_{2t} = v^{21}x_{1t} + v^{22}x_{2t} to solve for x_{2t} = −(v^{22})^{−1}v^{21}x_{1t} + (v^{22})^{−1}x̃_{2t}. Therefore

x_{2t} = −(v^{22})^{−1}v^{21}x_{1t} − (v^{22})^{−1}Σ_{j=0}^∞ λ_2^{−(j+1)}(v^{21}  v^{22})E_tΓ_{t+j}.

Then take the original difference equation, which states that x_{1t+1} = m_{11}x_{1t} + m_{12}x_{2t} + Γ_{1t}, and rewrite it as

x_{1t+1} = [m_{11} − m_{12}(v^{22})^{−1}v^{21}]x_{1t} + Γ_{1t} − m_{12}(v^{22})^{−1}Σ_{j=0}^∞ λ_2^{−(j+1)}(v^{21}  v^{22})E_tΓ_{t+j}
         = λ_1x_{1t} + Γ_{1t} − m_{12}(v^{22})^{−1}Σ_{j=0}^∞ λ_2^{−(j+1)}(v^{21}  v^{22})E_tΓ_{t+j}.

The King, Plosser and Rebelo code typically works with the assumption that E_tẑ_{t+j} = ρ^jẑ_t, in which case

Γ_t = [y/k; θ_Zρ/σ − θy/(σk)]ẑ_t ≡ γẑ_t

and

E_tΓ_{t+j} = ρ^jγẑ_t.

Hence

x_{2t} = −(v^{21}/v^{22})x_{1t} − [1/(λ_2 − ρ)](v^{21}/v^{22}  1)γẑ_t


x_{1t+1} = λ_1x_{1t} + [(1  0) − m_{12}[1/(λ_2 − ρ)](v^{21}/v^{22}  1)]γẑ_t.

It is important to keep in mind that the King, Plosser and Rebelo approach only works in specific circumstances. Recall that we first solved for x̃_{2t} forward and then inverted the equation x̃_{2t} = v^{21}x_{1t} + v^{22}x_{2t} to solve for the non-predetermined variables, x_{2t}, in terms of the predetermined variables, x_{1t}, and the variable just solved for, x̃_{2t}. The invertibility of the latter mapping depends on the dimensions of x_2 and x̃_2 being the same. I.e. the number of non-predetermined variables must be the same as the number of explosive roots. If the number of explosive roots is smaller, the system of equations will have a non-unique solution. If the number of explosive roots is bigger, there will be no covariance stationary solution. The flipside of this analysis is that the number of predetermined variables, which we solved for in the second step, was equal to the number of stable roots. If there are more stable roots than predetermined variables, the model will have multiple solutions. If there are fewer stable roots than predetermined variables, there will be no covariance stationary solution.

There is a way of getting these results using the same logic as was used to obtain the bounded solutions in the deterministic model, except that here we use the covariance stationarity property and the fact that k̂_t is predetermined. This could be a good homework question.
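As a concluding sketch, the code below assembles the King, Plosser and Rebelo solution ĉ_t = A k̂_t + B ẑ_t, k̂_{t+1} = λ_1 k̂_t + (γ_1 + m_{12}B) ẑ_t and verifies that it satisfies E_t x_{t+1} = Mx_t + Γ_t. The functional forms, parameters, and the AR(1) persistence ρ are illustrative choices of my own, not from the notes.

```python
import math

# Illustrative parameterization (my choices): f(k) = k^alpha, log utility
# (sigma = 1), steady state z = 1, and E_t zhat_{t+1} = rho*zhat_t.
alpha, beta, delta, sigma, rho = 0.36, 0.96, 0.10, 1.0, 0.95
k = (alpha / (1.0 / beta - 1.0 + delta)) ** (1.0 / (1.0 - alpha))
c = k ** alpha - delta * k
y = k ** alpha
theta = beta * alpha * (1.0 - alpha) * k ** (alpha - 1.0)
theta_z = beta * alpha * k ** (alpha - 1.0)   # beta*f'(k)*z with z = 1

m11, m12 = 1.0 / beta, -c / k
m21, m22 = -theta / (sigma * beta), 1.0 + theta * c / (sigma * k)

# Gamma_t = gamma*zhat_t under the AR(1) assumption, as derived in the text.
g1 = y / k
g2 = theta_z * rho / sigma - theta * y / (sigma * k)

# Eigenvalues of M, and the second row (v^21, v^22) of V^{-1}.
tr, det = m11 + m22, m11 * m22 - m12 * m21
disc = math.sqrt(tr * tr - 4.0 * det)
lam1, lam2 = (tr - disc) / 2.0, (tr + disc) / 2.0
v11, v21 = m12, lam1 - m11
v12, v22 = m12, lam2 - m11
detV = v11 * v22 - v12 * v21
vi21, vi22 = -v21 / detV, v11 / detV

# KPR solution coefficients: chat_t = A*khat_t + B*zhat_t.
A = -vi21 / vi22
B = -(1.0 / (lam2 - rho)) * ((vi21 / vi22) * g1 + g2)

# Check the solution against E_t x_{t+1} = M x_t + Gamma_t at an arbitrary state.
khat, zhat = 0.7, -0.3
chat = A * khat + B * zhat
khat_next = m11 * khat + m12 * chat + g1 * zhat       # khat_{t+1}, known at t
Echat_next = A * khat_next + B * rho * zhat           # E_t chat_{t+1}
resid_k = khat_next - (lam1 * khat + (g1 + m12 * B) * zhat)
resid_c = Echat_next - (m21 * khat + m22 * chat + g2 * zhat)
print(resid_k, resid_c)   # both residuals should be (numerically) zero
```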

References

Blanchard, Olivier J. and Charles M. Kahn (1980) "The Solution of Linear Difference Models under Rational Expectations," Econometrica, 48(5), 1305-12.

Christiano, Lawrence J. (2002) "Solving Dynamic Equilibrium Models by a Method of Undetermined Coefficients," Computational Economics, 20, 21-55.

King, Robert G., Charles I. Plosser and Sergio T. Rebelo (2002) "Production, Growth and Business Cycles: Technical Appendix," Computational Economics, 20, 87-116.

Mirman, Leonard J. and William A. Brock (1972) "Optimal Economic Growth and Uncertainty: The Discounted Case," Journal of Economic Theory, 4, 479-513.

Sargent, Thomas J. (1987) Macroeconomic Theory. New York: Academic Press.

Stokey, Nancy L. and Robert E. Lucas, Jr. (1989) Recursive Methods in Economic Dynamics. Cambridge, Mass.: Harvard University Press.
