
Heterogeneous Agent Models in Continuous Time, II

ECO 235: Advanced Macroeconomics

Benjamin Moll, Princeton

Stanford, February 12, 2014


Plan for Today’s Lecture

1 Numerical solution of Huggett model

2 Background: diffusion processes, HJB equations, Kolmogorov forward equations

3 “Wealth Distribution and the Business Cycle”

• Do income and wealth distribution matter for the macroeconomy?

• better way (IMHO) of doing Krusell and Smith (1998)


Numerical Solution: Finite Difference Methods


Finite Difference Methods

• See http://www.princeton.edu/~moll/HACTproject.htm

• Hard part: HJB equation. Easy part: KF equation.

• Explain HJB using neoclassical growth model, easily generalized to Huggett model

ρV(k) = max_c U(c) + V′(k)[F(k) − δk − c]

• Functional forms

U(c) = c^{1−σ}/(1−σ),   F(k) = k^α

• Use finite difference method. MATLAB code HJB_NGM.m

• Approximate V(k) at I discrete points in the state space, k_i, i = 1, ..., I. Denote distance between grid points by ∆k.

• Shorthand notation V_i = V(k_i)


Finite Difference Approximations to V′(k_i)

• Need to approximate V′(k_i).

• Three different possibilities:

V′(k_i) ≈ (V_i − V_{i−1})/∆k ≡ V′_{i,B}        backward difference

V′(k_i) ≈ (V_{i+1} − V_i)/∆k ≡ V′_{i,F}        forward difference

V′(k_i) ≈ (V_{i+1} − V_{i−1})/(2∆k) ≡ V′_{i,C}     central difference

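• For concreteness, a minimal MATLAB sketch of the three approximations on a uniform grid (not the HJB_NGM.m code; the grid, the number of points I, and the test function V(k) = log k are arbitrary choices made here):

  % finite difference approximations to V'(k_i) on a uniform grid
  I  = 100;                                 % number of grid points
  k  = linspace(0.5, 10, I)';               % capital grid (column vector)
  dk = k(2) - k(1);                         % grid spacing, Delta k
  V  = log(k);                              % test function with known derivative 1/k

  dVb = zeros(I,1); dVf = zeros(I,1); dVc = zeros(I,1);
  dVb(2:I)   = (V(2:I) - V(1:I-1))/dk;      % backward difference V'_{i,B}
  dVf(1:I-1) = (V(2:I) - V(1:I-1))/dk;      % forward difference  V'_{i,F}
  dVc(2:I-1) = (V(3:I) - V(1:I-2))/(2*dk);  % central difference  V'_{i,C}

  max(abs(dVc(2:I-1) - 1./k(2:I-1)))        % central error is O(dk^2) in the interior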

Finite Difference Approximations to V′(k_i)

[Figure: the backward, forward, and central difference approximations to V′(k_i) illustrated on the graph of V(k).]


Finite Difference Approximation

• FD approximation to HJB is

ρV_i = U(c_i) + V′_i [F(k_i) − δk_i − c_i]      (∗)

where c_i = (U′)^{−1}(V′_i), and V′_i is one of the backward, forward, or central FD approximations.

• Two complications:

1 which FD approximation to use? “Upwind scheme”

2 (∗) is extremely non-linear, need to solve iteratively: “explicit” vs. “implicit” method


Which FD Approximation?

• Which of these you use is extremely important

• Correct way: use so-called “upwind scheme.” Rough idea:

• forward difference whenever drift of state variable positive

• backward difference whenever drift of state variable negative

• In our example: define

s_{i,F} = F(k_i) − δk_i − (U′)^{−1}(V′_{i,F}),    s_{i,B} = F(k_i) − δk_i − (U′)^{−1}(V′_{i,B})

• Approximate derivative as follows

V′_i = V′_{i,F} 1{s_{i,F}>0} + V′_{i,B} 1{s_{i,B}<0} + V̄′_i 1{s_{i,F}<0<s_{i,B}}

where 1{·} is the indicator function, and V̄′_i = U′(F(k_i) − δk_i).

• Where does the V̄′_i term come from? Answer:

• since V is concave, V′_{i,F} < V′_{i,B} (see figure) ⇒ s_{i,F} < s_{i,B}

• if s_{i,F} < 0 < s_{i,B}, set s_i = 0 ⇒ V′(k_i) = U′(F(k_i) − δk_i), i.e. we’re at a steady state.

• Note: importantly, this avoids using the points i = 0 and i = I + 1
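• A minimal MATLAB sketch of this upwind choice for the neoclassical growth model with U(c) = c^{1−σ}/(1−σ) and F(k) = k^α, so that (U′)^{−1}(x) = x^{−1/σ}. This is not the actual HJB_NGM.m; the parameter values, grid, and initial guess below are placeholders:

  % upwind choice of V'(k_i), given a current guess V (sketch)
  sig = 2; alf = 0.3; del = 0.05;                  % placeholder parameters
  I = 500; k = linspace(1, 10, I)'; dk = k(2)-k(1);
  V = ((k.^alf - del*k).^(1-sig)/(1-sig))/0.05;    % some concave initial guess

  dVf = zeros(I,1); dVb = zeros(I,1);
  dVf(1:I-1) = (V(2:I) - V(1:I-1))/dk;             % forward difference
  dVb(2:I)   = (V(2:I) - V(1:I-1))/dk;             % backward difference
  dVf(I) = (k(I)^alf - del*k(I))^(-sig);           % boundary values; never picked
  dVb(1) = (k(1)^alf - del*k(1))^(-sig);           % by the upwind scheme anyway

  cf = dVf.^(-1/sig); sf = k.^alf - del*k - cf;    % s_{i,F}: drift if forward diff used
  cb = dVb.^(-1/sig); sb = k.^alf - del*k - cb;    % s_{i,B}: drift if backward diff used
  dV0 = (k.^alf - del*k).^(-sig);                  % Vbar'_i = U'(F(k_i) - del*k_i)

  If = sf > 0; Ib = sb < 0; I0 = 1 - If - Ib;      % indicators (assumes V concave,
  dV = dVf.*If + dVb.*Ib + dV0.*I0;                % so If and Ib never overlap)
  c  = dV.^(-1/sig);                               % consumption from the FOC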


Iterative Method

• Idea: Solve FOC for given V^n, update V^{n+1} according to

(V^{n+1}_i − V^n_i)/∆ + ρV^n_i = U(c^n_i) + (V^n)′(k_i)[F(k_i) − δk_i − c^n_i]      (∗)

• Algorithm: Guess V^0_i, i = 1, ..., I, and for n = 0, 1, 2, ... follow

1 Compute (V^n)′(k_i) using FD approx. on previous slide.

2 Compute c^n from c^n_i = (U′)^{−1}[(V^n)′(k_i)]

3 Find V^{n+1} from (∗).

4 If V^{n+1} is close enough to V^n: stop. Otherwise, go to step 1.

• See MATLAB code http://www.princeton.edu/~moll/HACTproject/HJB_NGM.m

• Important parameter: ∆ = step size, cannot be too large (“CFL condition”).

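• A compact MATLAB sketch of the whole explicit iteration for the neoclassical growth model, folding the upwind step above into a loop. This is in the spirit of HJB_NGM.m but not the posted code; ρ, σ, α, δ, the grid, the step size ∆, and the tolerance are placeholder choices, and ∆ must respect the CFL condition or the iteration blows up:

  % explicit method for rho*V = U(c) + V'(k)[F(k) - del*k - c]   (sketch)
  rho = 0.05; sig = 2; alf = 0.3; del = 0.05;      % placeholder parameters
  I = 500; k = linspace(1, 10, I)'; dk = k(2)-k(1);
  Delta = 0.2*dk;                                  % step size: small (CFL condition)
  V = ((k.^alf - del*k).^(1-sig)/(1-sig))/rho;     % initial guess V^0

  for n = 1:100000
      dVf = zeros(I,1); dVb = zeros(I,1);
      dVf(1:I-1) = (V(2:I) - V(1:I-1))/dk;  dVf(I) = (k(I)^alf - del*k(I))^(-sig);
      dVb(2:I)   = (V(2:I) - V(1:I-1))/dk;  dVb(1) = (k(1)^alf - del*k(1))^(-sig);
      cf  = dVf.^(-1/sig);  sf = k.^alf - del*k - cf;
      cb  = dVb.^(-1/sig);  sb = k.^alf - del*k - cb;
      dV0 = (k.^alf - del*k).^(-sig);
      dV  = dVf.*(sf > 0) + dVb.*(sb < 0) + dV0.*(1 - (sf > 0) - (sb < 0));
      c   = dV.^(-1/sig);                           % step 2: c^n from the FOC
      Vnew = V + Delta*(c.^(1-sig)/(1-sig) + dV.*(k.^alf - del*k - c) - rho*V);
      if max(abs(Vnew - V)) < 1e-6, break, end      % step 4: stop when converged
      V = Vnew;                                     % otherwise update and repeat
  end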

Convergence

• Do we know this converges? yes if

1 upwind scheme done correctly

2 step size ∆ not too large

• References that prove this (hard to read... but easy to cite!):

• Barles and Souganidis (1991), “Convergence of approximation schemes for fully nonlinear second order equations.”

• Oberman (2006), “Convergent Difference Schemes for Degenerate Elliptic and Parabolic Equations: Hamilton-Jacobi Equations and Free Boundary Problems.”


Efficiency: Implicit Method

• Pretty inefficient: I need 5,990 iterations (though super fast)

• Efficiency can be improved by using an “implicit method”

(V^{n+1}_i − V^n_i)/∆ + ρV^{n+1}_i = U(c^n_i) + (V^{n+1})′(k_i)[F(k_i) − δk_i − c^n_i]

• Each step n involves solving a linear system of the form

((ρ + 1/∆)I − A^n) V^{n+1} = U(c^n) + (1/∆)V^n

• Advantage: step size ∆ can be much larger ⇒ more stable.

• Implicit method preferable in general.

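• A MATLAB sketch of one implicit step. The coefficients X, Y, Z below are one standard way of assembling the upwind approximation of V′(k_i)[F(k_i) − δk_i − c_i] into a sparse matrix A whose rows sum to zero; this is not the posted code, and it reuses k, dk, I, sig, V and the upwind objects c, sf, sb from the sketches above:

  % one implicit step:  ((rho + 1/Delta)*I - A) * Vnew = U(c) + V/Delta   (sketch)
  rho = 0.05; Delta = 100;                  % with the implicit method Delta can be large

  X = -min(sb,0)/dk;                        % coefficient on V_{i-1}
  Y = -max(sf,0)/dk + min(sb,0)/dk;         % coefficient on V_i (rows of A sum to zero)
  Z =  max(sf,0)/dk;                        % coefficient on V_{i+1}
  A = sparse((1:I)',(1:I)',Y,I,I) ...
    + sparse((2:I)',(1:I-1)',X(2:I),I,I) ...    % entry (i, i-1) = X_i
    + sparse((1:I-1)',(2:I)',Z(1:I-1),I,I);     % entry (i, i+1) = Z_i

  B = (rho + 1/Delta)*speye(I) - A;
  b = c.^(1-sig)/(1-sig) + V/Delta;         % U(c^n) + V^n/Delta
  Vnew = B\b;                               % solve the sparse linear system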

The Magic of Upwind Schemes

• No boundary conditions are needed!

• How is that possible? Don’t we know from MAT 101 that solving a first order ODE requires a boundary condition?

• Answer: in effect, we used the steady state as a boundary condition: V′(k*) = U′(c*)

V′(k_i) = U′(c*),                k_i = k*
V′(k_i) = (V_{i+1} − V_i)/∆k,    k_i < k*
V′(k_i) = (V_i − V_{i−1})/∆k,    k_i > k*

• But we did it without having to actually calculate the steady state (we never use the F′(k*) = ρ + δ equation). The code chooses the steady state by itself.

• Upwind magic much more general: works for all stochastic models (which do not even have a steady state), e.g. Huggett.


Deeper Reason: Unique Viscosity Solution

• basically there is a sense in which the HJB equation for the neoclassical growth model has a unique solution, independent of boundary conditions

• 3 Theorems (references two slides ahead)

• Theorem 1: HJB equation has unique “viscosity solution”

• Theorem 2: viscosity solution equals value function, i.e. solution to the “sequence problem”

• Theorem 3: FD method converges to unique “viscosity solution”, upwind scheme picks “viscosity solution”

• Comments

• “viscosity solution” = bazooka, not really needed, designed for non-differentiable PDEs

• Important for our purposes: uniqueness property

• typically, viscosity solution = “nice”, concave solution. Definition ⇒ doesn’t blow up, no convex kinks


Deeper Reason: Unique Viscosity Solution

• basically there is a sense in which the HJB equation for the neoclassical growth model has a unique solution, independent of boundary conditions

• 3 Theorems (references two slides ahead)

• Theorem 1: HJB equation has unique “nice” solution

• Theorem 2: “nice” solution equals value function, i.e. solution to the “sequence problem”

• Theorem 3: FD method converges to unique “nice” solution, upwind scheme picks “nice” solution

• Comments

• “viscosity solution” = bazooka, not really needed, designed for non-differentiable PDEs

• Important for our purposes: uniqueness property

• typically, viscosity solution = “nice”, concave solution. Definition ⇒ doesn’t blow up, no convex kinks


References for Theorems

• Theorems 1 and 2 originally due to Crandall and Lions (1983), “Viscosity Solutions of Hamilton-Jacobi Equations.”

• somewhat more accessible: book by Bardi and Capuzzo-Dolcetta (1997), “Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations”

• Theorem 3 originally due to Barles and Souganidis (1991), “Convergence of approximation schemes for fully nonlinear second order equations.”


Deeper Reason: Unique Viscosity Solution

• What do I mean by unique solution, independent of boundary conditions?

• Consider HJB equation for V(x) on [x_0, x_1].

• MAT 101: given a boundary condition V(x_0) = θ, say, HJB has a unique “classical” solution V^classical(x)

• Theorem 1 basically says: there is a unique θ such that V^classical(x) is “nice.”

• Related: in neoclassical growth model and many other models

• drift positive at lower end of state space

• drift negative at upper end of state space

• Since HJB equation is forward-looking, it cannot possibly matter what happens at boundaries of state space: the HJB equation “does not see the boundary condition”

• exception: models with state constraints, e.g. Huggett. But need boundary condition only if state constraint binds.


General Lesson from all this

• the MAT 101 way is not the right way to think about HJB equations, i.e. you don’t want to think of them as ODEs that require boundary conditions!


Numerical Solution of Huggett Model

http://www.princeton.edu/~moll/HACTproject.htm


Kolmogorov Forward Equation

• Given HJB equation, solving KF equation is a piece of cake!

• Recall: solving HJB equation using implicit scheme involves solving the linear system

((ρ + 1/∆)I − A^n) v^{n+1} = U(c^n) + (1/∆)v^n

• Key: matrix A^n encodes evolution of stochastic process

• Stationary distribution simply solves

A^T g = 0.

• For details see section 2 of http://www.princeton.edu/~moll/HACTproject/numerical_MATLAB.pdf

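• The only wrinkle is that A^T g = 0 only determines g up to scale, so a common trick is to replace one equation by a normalization and rescale afterwards. A toy MATLAB sketch with a hand-written 3-state generator standing in for the A produced by the HJB code (the intensities are made up; for a density on a wealth grid one would rescale by sum(g)*∆a instead of sum(g)):

  % stationary distribution from A'*g = 0, illustrated with a toy 3-state generator
  A = [-0.5  0.3  0.2;                  % made-up intensities; rows sum to zero
        0.1 -0.4  0.3;
        0.4  0.2 -0.6];

  AT = A';
  b  = zeros(3,1);
  i_fix = 1;                            % A'*g = 0 only pins g down up to scale,
  b(i_fix) = 1;                         % so replace one equation by g(i_fix) = 1 ...
  AT(i_fix,:) = 0;  AT(i_fix,i_fix) = 1;
  g = AT\b;
  g = g/sum(g)                          % ... and rescale so the masses sum to one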

Saving and Wealth Distribution given r

[Figure: saving policy functions s_1(a), s_2(a) (left panel) and stationary densities g_1(a), g_2(a) (right panel), each plotted against wealth a.]


Generalization: Continuum of z-Types

[Figure: saving policy function s(a, z) plotted against wealth a and productivity z.]


Generalization: Continuum of z-Types

[Figure omitted.]


Transition Dynamics

movie here


Generalization: Aiyagari, Transition

[Figure: transition dynamics over 50 years. Panels: (a) Capital, K(t); (b) GDP, Y(t); (c) Wage, w(t); (d) Interest Rate, r(t), each plotted against years t.]


Background: Diffusion Processes


Diffusion Processes

• A diffusion is simply a continuous-time Markov process (with continuous sample paths, i.e. no jumps)

• Simplest possible diffusion: standard Brownian motion (sometimes also called “Wiener process”)

• Definition: a standard Brownian motion is a stochastic process W which satisfies

W(t + ∆t) − W(t) = ε_t √∆t,   ε_t ∼ N(0, 1),   W(0) = 0

• Not hard to see

W(t) ∼ N(0, t)

• Continuous time analogue of a discrete time random walk:

W_{t+1} = W_t + ε_t,   ε_t ∼ N(0, 1)

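• The definition translates directly into a simulation: cumulate increments ε_t √∆t. A minimal MATLAB sketch (T and ∆t are arbitrary choices):

  % simulate one path of standard Brownian motion on [0, T]
  T = 1; dt = 1e-4; N = round(T/dt);
  dW = sqrt(dt)*randn(N,1);             % increments eps_t*sqrt(dt), eps_t ~ N(0,1)
  W  = [0; cumsum(dW)];                 % W(0) = 0
  t  = (0:N)'*dt;
  plot(t, W)                            % mean zero, but Var(W(t)) = t grows with t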

Standard Brownian Motion

• Note: mean zero, E(W(t)) = 0...

• ... but the variance blows up: Var(W(t)) = t.


Brownian Motion

• Can be generalized

x(t) = x(0) + µt + σW(t)

• Since E(W(t)) = 0 and Var(W(t)) = t

E[x(t) − x(0)] = µt,   Var[x(t) − x(0)] = σ^2 t

• This is called a Brownian motion with drift µ and variance σ^2

• Can write this in differential form as

dx(t) = µ dt + σ dW(t)

where dW(t) ≡ lim_{∆t→0} ε_t √∆t, with ε_t ∼ N(0, 1)

• This is called a stochastic differential equation

• Analogue of stochastic difference equation:

x_{t+1} = µ + x_t + σε_t,   ε_t ∼ N(0, 1)

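• Since x(t) − x(0) = µt + σW(t) is exactly N(µt, σ^2 t), both moment formulas are easy to check by simulation. A MATLAB sketch with placeholder parameter values:

  % Monte Carlo check of E[x(t)-x(0)] = mu*t and Var[x(t)-x(0)] = sig^2*t
  mu = 0.3; sig = 0.5; t = 2; M = 1e5;        % placeholder values
  dx = mu*t + sig*sqrt(t)*randn(M,1);         % exact draws of x(t) - x(0)
  [mean(dx), mu*t; var(dx), sig^2*t]          % sample vs. theoretical moments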

Further Generalizations: Diffusion Processes

• Can be generalized further (suppressing dependence of x and W on t)

dx = µ(x)dt + σ(x)dW

where µ and σ are any non-linear etc etc functions.

• This is called a “diffusion process”

• µ(·) is called the drift and σ(·) the diffusion.

• all results can be extended to the case where they depend on t, µ(x, t), σ(x, t), but abstract from this for now.

• The amazing thing about diffusion processes: by choosing functions µ and σ, you can get pretty much any stochastic process you want (except jumps)


Example 1: Ornstein-Uhlenbeck Process

• Brownian motion dx = µdt + σdW is not stationary (random walk). But the following process is

dx = θ(x̄ − x)dt + σdW

• Analogue of AR(1) process, autocorrelation e^{−θ} ≈ 1 − θ

x_{t+1} = θx̄ + (1 − θ)x_t + σε_t

• That is, we just choose

µ(x) = θ(x̄ − x)

and we get a nice stationary process!

• This is called an “Ornstein-Uhlenbeck process”


Ornstein-Uhlenbeck Process

• Can show: stationary distribution is N(x̄, σ^2/(2θ))

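• One way to see this: simulate the process with a simple Euler scheme and compare the long-run histogram with the N(x̄, σ^2/(2θ)) density. A MATLAB sketch (θ, x̄, σ, the horizon, and the step size are placeholder choices; the histogram call assumes a reasonably recent MATLAB):

  % simulate dx = th*(xbar - x)dt + sig*dW and compare with N(xbar, sig^2/(2*th))
  th = 0.5; xbar = 1; sig = 0.4;
  T = 2000; dt = 0.01; N = round(T/dt);
  x = zeros(N,1); x(1) = xbar;
  for n = 1:N-1
      x(n+1) = x(n) + th*(xbar - x(n))*dt + sig*sqrt(dt)*randn;  % Euler step
  end
  histogram(x(round(N/2):end), 'Normalization', 'pdf'); hold on  % drop the burn-in
  xx = linspace(xbar - 4*sig, xbar + 4*sig, 200);
  plot(xx, exp(-th*(xx - xbar).^2/sig^2)/sqrt(pi*sig^2/th))      % N(xbar, sig^2/(2*th)) pdf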

Example 2: “Moll Process”

• Design a process that stays in the interval [0, 1] and mean-reverts around 1/2

µ(x) = θ(1/2 − x),   σ(x) = σx(1 − x)

dx = θ(1/2 − x)dt + σx(1 − x)dW

• Note: the diffusion goes to zero at the boundaries, σ(0) = σ(1) = 0, and the process mean-reverts ⇒ it always stays in [0, 1]


Other Examples

• Geometric Brownian motion:

dx = µx dt + σx dW

x ∈ [0,∞), no stationary distribution:

log x(t) ∼ N((µ − σ^2/2)t, σ^2 t).

• Feller square root process (finance: “Cox-Ingersoll-Ross”)

dx = θ(x̄ − x)dt + σ√x dW

x ∈ [0,∞), stationary distribution is Gamma(γ, 1/β), i.e.

f_∞(x) ∝ e^{−βx} x^{γ−1},   β = 2θ/σ^2,   γ = 2θx̄/σ^2

• Other processes in Wong (1964), “The Construction of a Class of Stationary Markoff Processes.”


Background: HJB Equations for Diffusion Processes


Stochastic Optimal Control

• Generic problem:

V(x_0) = max_{ {u(t)}_{t≥0} } E_0 ∫_0^∞ e^{−ρt} h(x(t), u(t)) dt

subject to the law of motion for the state

dx(t) = g(x(t), u(t)) dt + σ(x(t)) dW(t)   and   u(t) ∈ U

for t ≥ 0, x(0) = x_0 given.

• Deterministic problem: special case σ(x) ≡ 0.

• In general x ∈ R^m, u ∈ R^n. For now do scalar case.


Stochastic HJB Equation: Scalar Case

• Claim: the HJB equation is

ρV(x) = max_{u∈U} h(x, u) + V′(x)g(x, u) + (1/2)V″(x)σ^2(x)

• Here: on purpose no derivation (“cookbook”)

• In case you care, see any textbook, e.g. chapter 2 in Stokey (2008)


Just for Completeness: Multivariate Case

• Let x ∈ R^m, u ∈ R^n.

• For fixed x, define the m × m covariance matrix

σ^2(x) = σ(x)σ(x)′

(this is a function σ^2 : R^m → R^m × R^m)

• The HJB equation is

ρV(x) = max_{u∈U} h(x, u) + Σ_{i=1}^m ∂_i V(x) g_i(x, u) + (1/2) Σ_{i=1}^m Σ_{j=1}^m σ^2_{ij}(x) ∂_{ij} V(x)

• In vector notation

ρV(x) = max_{u∈U} h(x, u) + ∇_x V(x)·g(x, u) + (1/2) tr(∆_x V(x) σ^2(x))

• ∇_x V(x): gradient of V (dimension m × 1)

• ∆_x V(x): Hessian of V (dimension m × m).


HJB Equation: Endogenous and Exogenous State

• Lots of problems have the form x = (x_1, x_2)

• x_1: endogenous state

• x_2: exogenous state

dx_1 = g(x_1, x_2, u)dt

dx_2 = µ(x_2)dt + σ(x_2)dW

• Special case with

g(x) = [ g(x_1, x_2, u) ; µ(x_2) ],   σ(x) = [ 0 ; σ(x_2) ]

• Claim: the HJB equation is

ρV(x_1, x_2) = max_{u∈U} h(x_1, x_2, u) + ∂_1 V(x_1, x_2) g(x_1, x_2, u) + ∂_2 V(x_1, x_2) µ(x_2) + (1/2) ∂_{22} V(x_1, x_2) σ^2(x_2)


Example 1: Real Business Cycle Model

V(k_0, A_0) = max_{ {c(t)}_{t≥0} } E_0 ∫_0^∞ e^{−ρt} U(c(t)) dt

subject to

dk = [AF(k) − δk − c]dt

dA = µ(A)dt + σ(A)dW

for t ≥ 0, k(0) = k_0, A(0) = A_0 given.

• Here: x_1 = k, x_2 = A, u = c

• h(x, u) = U(u)

• g(x, u) = x_2 F(x_1) − δx_1 − u


Example 1: Real Business Cycle Model

• HJB equation is

ρV(k, A) = max_c U(c) + ∂_k V(k, A)[AF(k) − δk − c] + ∂_A V(k, A)µ(A) + (1/2) ∂_{AA} V(k, A)σ^2(A)


Example 2: Huggett Model

V(a_0, z_0) = max_{ {c(t)}_{t≥0} } E_0 ∫_0^∞ e^{−ρt} U(c(t)) dt   s.t.

da = [z + ra − c]dt

dz = µ(z)dt + σ(z)dW

a ≥ a̲

for t ≥ 0, a(0) = a_0, z(0) = z_0 given.

• Here: x_1 = a, x_2 = z, u = c

• h(x, u) = U(u)

• g(x, u) = x_2 + rx_1 − u


Example 2: Huggett Model

• HJB equation is

ρV(a, z) = max_c U(c) + ∂_a V(a, z)[z + ra − c] + ∂_z V(a, z)µ(z) + (1/2) ∂_{zz} V(a, z)σ^2(z)


Example 2: Huggett Model

• Special Case 1: z is a geometric Brownian motion

dz = µz dt + σz dW

ρV(a, z) = max_c U(c) + ∂_a V(a, z)[z + ra − c] + ∂_z V(a, z)µz + (1/2) ∂_{zz} V(a, z)σ^2 z^2

• Special Case 2: z is a Feller square root process

dz = θ(z̄ − z)dt + σ√z dW

ρV(a, z) = max_c U(c) + ∂_a V(a, z)[z + ra − c] + ∂_z V(a, z)θ(z̄ − z) + (1/2) ∂_{zz} V(a, z)σ^2 z


Background: Kolmogorov Forward Equations


Kolmogorov Forward Equations

• Let x be a scalar diffusion

dx = µ(x)dt + σ(x)dW,   x(0) = x_0

• Suppose we’re interested in the evolution of the distribution of x, f(x, t), and in particular in the limit lim_{t→∞} f(x, t).

• Natural thing to care about, especially in heterogeneous agent models

• Example 1: x = wealth

• µ(x) determined by saving behavior and return to investments

• σ(x) by return risk.

• microfounded later

• Example 2: x = city size (Gabaix and others)


Kolmogorov Forward Equations

• Fact: Given an initial distribution f(x, 0) = f_0(x), f(x, t) satisfies the PDE

∂_t f(x, t) = −∂_x [µ(x)f(x, t)] + (1/2) ∂_{xx} [σ^2(x)f(x, t)]

• This PDE is called the “Kolmogorov Forward Equation”

• Note: in math this is often called the “Fokker-Planck Equation”

• Can be extended to the case where x is a vector as well.

• Corollary: if a stationary distribution lim_{t→∞} f(x, t) = f(x) exists, it satisfies the ODE

0 = −(d/dx)[µ(x)f(x)] + (1/2)(d²/dx²)[σ^2(x)f(x)]

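• A quick sanity check on the stationary ODE: plug a known stationary density into it and verify that the right-hand side is (numerically) zero. For the Ornstein-Uhlenbeck process dx = θ(x̄ − x)dt + σdW from before, the stationary density is N(x̄, σ^2/(2θ)); a MATLAB sketch with placeholder parameter values:

  % check that f = N(xbar, sig^2/(2*th)) solves 0 = -d/dx[mu(x) f] + (1/2) d2/dx2[sig^2 f]
  th = 0.5; xbar = 1; sig = 0.4;
  x  = linspace(xbar - 5, xbar + 5, 2001)';  dx = x(2) - x(1);
  f  = exp(-th*(x - xbar).^2/sig^2)/sqrt(pi*sig^2/th);   % N(xbar, sig^2/(2*th)) density

  mu  = th*(xbar - x);
  d1  = (mu(3:end).*f(3:end) - mu(1:end-2).*f(1:end-2))/(2*dx);   % d/dx [mu f]
  d2  = (f(3:end) - 2*f(2:end-1) + f(1:end-2))/dx^2;              % d2/dx2 [f]
  res = -d1 + 0.5*sig^2*d2;                  % should be ~ 0 at interior grid points
  max(abs(res))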

Just for Completeness: Multivariate Case

• Let x ∈ R^m.

• As before, define the m × m covariance matrix

σ^2(x) = σ(x)σ(x)′

• The Kolmogorov Forward Equation is

∂_t f(x, t) = −Σ_{i=1}^m ∂_i [µ_i(x)f(x, t)] + (1/2) Σ_{i=1}^m Σ_{j=1}^m ∂_{ij} [σ^2_{ij}(x)f(x, t)]


Application: Stationary Distribution of Huggett Model

• Recall Huggett model

ρV(a, z) = max_c U(c) + ∂_a V(a, z)[z + ra − c] + ∂_z V(a, z)µ(z) + (1/2) ∂_{zz} V(a, z)σ^2(z)

• Denote the optimal saving policy function by

s(a, z) = z + ra − c(a, z)

• Then g(a, z, t) solves

∂_t g(a, z, t) = −∂_a [s(a, z)g(a, z, t)] − ∂_z [µ(z)g(a, z, t)] + (1/2) ∂_{zz} [σ^2(z)g(a, z, t)]


Wealth Distribution and the Business Cycle


What We Do

1 Do income and wealth distribution matter for the macroeconomy?

2 Do macroeconomic aggregates in heterogeneous agent models with frictions behave like those in frictionless representative agent models?

Ask these questions in an economy in which

• heterogeneous producers

• face collateral constraints

• and fixed costs in production


Setup

• Continuum of entrepreneurs, heterogeneous in wealth a and productivity z

• Continuum of productivity types, diffusion process

• For now: no workers, occupational choice etc

• Entrepreneurial preferences

E_0 ∫_0^∞ e^{−ρt} u(c_t) dt


Technologies

• Two technologies: productive and unproductive

y = f_u(z, A, k) = zA f(k)

y = f_p(z, A, k) = zAB f(k − κ) if k ≥ κ, and 0 if k < κ

• B > 1, but fixed cost κ > 0

• f strictly increasing and concave, but f_p non-convex

• Sources of uncertainty:

• z : idiosyncratic shock, diffusion process

• A: aggregate shock, two-state Poisson process


Setup

• Collateral constraints

k ≤ λa,   λ ≥ 1.

• Profit maximization:

Π(a, z, A; r) = max{Π_u(a, z, A; r), Π_p(a, z, A; r)}

Π_j(a, z, A; r) = max_{k≤λa} f_j(z, A, k) − (r + δ)k,   j = p, u.

• Entrepreneurs solve

max_{ {c_t} } E_0 ∫_0^∞ e^{−ρt} u(c_t) dt   s.t.

da_t = [Π(a_t, z_t, A_t; r_t) + r_t a_t − c_t]dt


Plan

1 model without aggregate shocks, A_t = 1

2 model with aggregate shocks, A_t ∈ {A_ℓ, A_h}, Poisson


Without Aggregate Shocks

• Capital market clearing:

∫ [k_u(a, z; r(t)) 1{Π_u>Π_p} + k_p(a, z; r(t)) 1{Π_u<Π_p}] g(a, z, t) da dz = ∫ a g(a, z, t) da dz      (EQ)

ρv(a, z, t) = max_c u(c) + ∂_a v(a, z, t)[Π(a, z; r(t)) + r(t)a − c] + ∂_z v(a, z, t)µ(z) + (1/2) ∂_{zz} v(a, z, t)σ^2(z) + ∂_t v(a, z, t)      (HJB)

∂_t g(a, z, t) = −∂_a [s(a, z, t)g(a, z, t)] − ∂_z [µ(z)g(a, z, t)] + (1/2) ∂_{zz} [σ^2(z)g(a, z, t)],      (KFE)

s(a, z, t) = Π(a, z; r(t)) + r(t)a − c(a, z, t)

Given initial condition g_0(a, z), the two PDEs (HJB) and (KFE) together with (EQ) fully characterize equilibrium.


Dynamics of Wealth Distribution

movie here


Dynamics of Wealth Distribution

[Figure: density g(a, z, t) plotted against wealth a and productivity z.]


With Aggregate Shocks

• A_t ∈ {A_ℓ, A_h}, Poisson with intensities φ_ℓ, φ_h

• As in discrete time: necessary to include entire wealth distribution as state variable

• Aggregate state: (A_i, g), i = ℓ, h

• Optimal saving policy function: s_i(a, z, g), i = ℓ, h

• Interest rate r_i(g), i = ℓ, h

• Capital demands: k_{i,u}(a, z, g; r) and k_{i,p}(a, z, g; r), i = ℓ, h

• Equilibrium interest rate r_i(g) solves

∫ [k_{i,u}(a, z, g; r_i(g)) 1{Π_u>Π_p} + k_{i,p}(a, z, g; r_i(g)) 1{Π_u<Π_p}] g(a, z) da dz = ∫ a g(a, z) da dz,   i = ℓ, h


Entrepreneurs’ Problem

• Useful to write law of motion of distribution as

∂_t g(a, z, t) = T[g(·, t), s_i(·, t)](a, z)

where T is the “Kolmogorov Forward operator”

T[g, s_i](a, z) = −∂_a [s_i(a, z, g)g(a, z)] − ∂_z [µ(z)g(a, z)] + (1/2) ∂_{zz} [σ^2(z)g(a, z)]


Monster Bellman

• Entrepreneurs’ problem: recursive formulation

ρV_i(a, z, g) = max_c u(c) + ∂_a V_i(a, z, g)[Π_i(a, z, g) + r_i(g)a − c]
  + ∂_z V_i(a, z, g)µ(z) + (1/2) ∂_{zz} V_i(a, z, g)σ^2(z)
  + φ_i [V_j(a, z, g) − V_i(a, z, g)]
  + ∫ (δV_i(a, z, g)/δg(a, z)) T[g, s_i](a, z) da dz.

• δV_i/δg(a, z): functional derivative of V_i with respect to g at (a, z).

• Saving policy function

s_i(a, z, g) = Π_i(a, z, g) + r_i(g)a − c_i(a, z, g)
             = Π_i(a, z, g) + r_i(g)a − (u′)^{−1}(∂_a V_i(a, z, g)).

• Recursive equilibrium: solution to these two equations.


Approximation Method: Basic Idea

1 use discrete time approximation to A process

2 consider only finitely many shocks

3 keep track of all possible histories

4 between shocks: same (HJB) and (KFE) as without shocks, one system for each history

5 then piece them together in correct way

• Example: 5 shocks at times t = 5, 10, 15, 20, 25

• 2^5 = 32 histories


Histories with 8 Shocks every 20 Periods

[Figure: simulated histories with 8 shocks every 20 periods, plotted over 200 years.]

• Obvious problem: blows up on you, e.g. 2^8 = 256.

• Working on: using this approximation as input into fancier approximation scheme

• but the economics seems robust to me, will bet anyone who thinks that things will change a lot


Approximation Method: Details

• N shocks hit at times t_n = ∆n, n = 0, 1, 2, ..., N

• Approximate Poisson process

Pr(A_{n+1} = A_i | A_n = A_i) = e^{−φ_i ∆}

Pr(A_{n+1} = A_j | A_n = A_i) = 1 − e^{−φ_i ∆}

converges to Poisson as ∆ → 0 and N → ∞.

• Denote histories by A^n = (A_0, ..., A_n) where A_n ∈ {A_ℓ, A_h}

• Keep track of all

v(a, z, t, A^n) and g(a, z, t, A^n)

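• A small MATLAB sketch of the bookkeeping: enumerate the 2^N histories that follow a given initial state and attach to each the product of stay/switch probabilities (φ_ℓ, φ_h, ∆, N are placeholder values, not the paper’s calibration; the convention here is that φ_i is the intensity of leaving state i, so e^{−φ_i ∆} is the stay probability):

  % enumerate the 2^N aggregate histories and their probabilities   (sketch)
  phil = 0.2; phih = 0.2; Dt = 20; N = 5;   % placeholder values
  A0 = 1;                                   % initial state: 1 = high, 0 = low

  paths = dec2bin(0:2^N-1) - '0';           % each row is one history of the N shock dates
  prob  = ones(2^N,1);
  prev  = A0*ones(2^N,1);
  for n = 1:N
      phi  = phil*(prev == 0) + phih*(prev == 1);   % intensity of leaving current state
      stay = (paths(:,n) == prev);
      prob = prob.*( stay.*exp(-phi*Dt) + (~stay).*(1 - exp(-phi*Dt)) );
      prev = paths(:,n);
  end
  sum(prob)                                 % = 1: the histories exhaust all branches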

Approximation Method: Details

Between shocks: for all histories A^n

ρv(a, z, t, A^n) = max_c u(c) + ∂_a v(a, z, t, A^n)[Π(a, z, t, A^n) + r(t, A^n)a − c] + ∂_z v(a, z, t, A^n)µ(z) + (1/2) ∂_{zz} v(a, z, t, A^n)σ^2(z) + ∂_t v(a, z, t, A^n)

∂_t g(a, z, t, A^n) = −∂_a [s(a, z, t, A^n)g(a, z, t, A^n)] − ∂_z [µ(z)g(a, z, t, A^n)] + (1/2) ∂_{zz} [σ^2(z)g(a, z, t, A^n)]

Boundary conditions: at all times t_n when shocks hit:

v(a, z, t_{n+1}, A^n) = Σ_{A_{n+1}=A_ℓ,A_h} Pr(A_{n+1}|A_n) v(a, z, t_{n+1}, A^{n+1})

g(a, z, t_{n+1}, A^{n+1}) = g(a, z, t_{n+1}, A^n),   all A^n.

Note: uses

1 people forward-looking, form expectations over all branches

2 continuity of g with respect to t (state variable)


The Experiments

• Experiment 1: comparison to rep. agent model

• two economies: economy with frictions and rep. agent

• at t = 0, both in steady state (corresponding to no A shocks)

• at t = 10, hit with A shock, compare impulse responses

• Experiment 2: effect of wealth distribution

• two economies with frictions: one more unequal than the other

• at t = 0, both in steady state (corresponding to no A shocks)

• at t = 10, hit with A shock, compare impulse responses


The Corresponding Representative Agent

• RBC model with utility function u(c) and the following aggregate production function

F(K, A) = max_{k(z)} ∫ max{f_u(z, A, k(z)), f_p(z, A, k(z))} ψ(z) dz   s.t.   ∫ k(z)ψ(z) dz ≤ K

where ψ(z) is the stationary distribution of the z-process.

• Solution:

F(K, A) = max_z A Z(z) (K − κ(1 − Ψ(z)))^α

Z(z) ≡ ( ∫_0^z z̃^{1/(1−α)} ψ(z̃) dz̃ + B^{1/(1−α)} ∫_z^∞ z̃^{1/(1−α)} ψ(z̃) dz̃ )^{1−α}


Parameterization

• Functional forms

u(c) = c^{1−γ}/(1−γ),   f(k) = k^α

• Productivity process: Ornstein-Uhlenbeck

d log z_t = −ν log z_t dt + σ dW_t

but reflected at z̲, z̄ ≥ 0.

• Parameter values

B = 1.8, α = 0.6, δ = 0.05, κ = 6, λ = 2,

ρ = 0.05, γ = 2, ν = 0.2, σ_z = 0.1,

A_h = 1, A_ℓ = 0.95,

a̲ = 0, ā = 30, z̲ = 0.6, z̄ = 1.4, T = 100


Frictions ⇒ much Slower Recovery

[Figure: impulse responses to an aggregate shock, frictionless vs. with frictions. Panels: (a) Productivity, A_t; (b) Capital Stock, K_t; (c) GDP, Y_t; (d) Interest Rate, r_t.]

Note: GDP and interest rate computations currently imprecise


Dynamics of Wealth Distribution

movie here


Wealth Distribution at Peak

[Figure: wealth distribution g(a, z, t) at the peak, year 10, plotted against wealth a and productivity z.]


Wealth Distribution at Trough

[Figure: wealth distribution g(a, z, t) at the trough, year 20, plotted against wealth a and productivity z.]


Wealth Distribution 10 Years after Trough

[Figure: wealth distribution g(a, z, t) ten years after the trough, year 30, plotted against wealth a and productivity z.]


Next Steps

• Redistributive policies?

• Add workers, occupational choice

• Fancier approximation

• computations suggest: densities live in relatively low-dimensional space

• project densities on that space

• use as state variable in value function

• What else?
