Sequential Monte Carlo Methods (Squared!) for DSGE Models¹
Ed Herbst²  Frank Schorfheide³
²Federal Reserve Board
³University of Pennsylvania, CEPR, and NBER
April 22, 2015
¹The views expressed in this paper are those of the authors and do not necessarily reflect the views of the Federal Reserve Board of Governors or the Federal Reserve System.
E. Herbst and F. Schorfheide SMC2 for DSGE Models
Introduction
DSGE model: dynamic model of the macroeconomy, indexed by θ, a vector of preference and technology parameters. Used for forecasting, policy experiments, and interpreting past events.
In nearly all applications, DSGE models are estimated using Bayesian methods:

p(θ|Y) = p(Y|θ)p(θ)/p(Y) ∝ p(Y|θ)p(θ).

Analyzing the posterior is difficult, because the likelihood is intractable.
“Standard” approach for linearized models (Schorfheide, 2000; Otrok, 2001): use the Kalman filter to evaluate the likelihood function and Markov chain Monte Carlo (MCMC) methods to implement posterior inference for θ.
Sequential Monte Carlo (SMC) Methods
SMC can be used to
approximate the posterior of θ: Chopin (2002) ... Creal (2007), Herbst and Schorfheide (2014)
approximate the likelihood function (particle filtering): Gordon, Salmond, and Smith (1993) ... Fernandez-Villaverde and Rubio-Ramirez (2007)
or both: SMC²: Chopin, Jacob, and Papaspiliopoulos (2012) ... this talk
This Talk
We will construct an SMC² algorithm to estimate a DSGE model:
we use SMC for inference about the static parameter θ;
we use SMC to obtain a particle filter approximation of the likelihood function,
and document its accuracy.
Rather than delving straight into the SMC² algorithm, we proceed in a step-wise manner:

discuss how SMC can be used for inference about θ in models in which the likelihood function can be evaluated with the Kalman filter; conduct simulation experiments to document the accuracy of the SMC approximation of posterior moments;

review how particle filters can be used to construct a Monte Carlo approximation of the likelihood function, and conduct simulation experiments to document their accuracy.
Why???
Likelihood evaluation for nonlinear DSGE models requires nonlinear filtering −→ sequential Monte Carlo.

For inference about the static parameter θ, “standard” MCMC methods can be quite inaccurate. Multimodal posteriors may arise because it is difficult to
disentangle internal and external propagation mechanisms;
disentangle the relative importance of shocks.
[Figure: bivariate posterior density plot, panel “Hours”]
The Laboratory
1 A state-space model with a global identification problem:

y_t = [1 1] s_t,   s_t = [ θ₁²  0 ; (1 − θ₁²) − θ₁θ₂  (1 − θ₁²) ] s_{t−1} + [1 ; 0] ε_t.

2 Small-scale DSGE model (Euler equation, NK Phillips curve, monetary policy rule, three AR(1) exogenous shock processes) estimated on output growth, inflation, and nominal interest rates.
3 Smets-Wouters model estimated based on seven observables.
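The first laboratory model is simple enough that its likelihood can be evaluated exactly with the Kalman filter. A minimal sketch (not from the paper; it assumes a unit-variance shock and an arbitrary identity initial state covariance) that evaluates the log-likelihood at a candidate (θ₁, θ₂):

```python
import numpy as np

def kalman_loglik(theta1, theta2, y):
    """Log-likelihood of y_t = [1 1] s_t, s_t = Phi s_{t-1} + [1 0]' eps_t,
    eps_t ~ N(0, 1), via the Kalman filter."""
    Phi = np.array([[theta1**2, 0.0],
                    [(1 - theta1**2) - theta1 * theta2, 1 - theta1**2]])
    R = np.array([[1.0], [0.0]])   # shock loading
    Z = np.array([[1.0, 1.0]])     # measurement loading
    s = np.zeros((2, 1))           # filtered state mean
    P = np.eye(2)                  # initial state covariance (assumed)
    ll = 0.0
    for yt in y:
        # prediction step
        s = Phi @ s
        P = Phi @ P @ Phi.T + R @ R.T
        # scalar predictive distribution of y_t
        v = yt - (Z @ s)[0, 0]          # forecast error
        F = (Z @ P @ Z.T)[0, 0]         # forecast error variance
        ll += -0.5 * (np.log(2 * np.pi) + np.log(F) + v**2 / F)
        # updating step
        K = P @ Z.T / F
        s = s + K * v
        P = P - K @ (Z @ P)
    return ll
```

Evaluating this on a grid over (θ₁, θ₂) is one way to visualize the identification problem numerically.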
Some Notation
Posterior distribution π(θ) = p(θ|Y) with moments E_π[θ] and V_π[θ].
We often consider functions h(θ) and write

h̄_N = (1/N) ∑_{i=1}^N h(θ^i),

where the θ^i's are draws from the posterior.

Note that under iid sampling, under suitable regularity conditions,

√N (h̄_N − E_π[h]) ⟹ N(0, V_π[h]).

Inefficiency factor:

InEff_N = V[h̄_N] / (V_π[h]/N).
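As a sanity check on this definition, the toy illustration below (not part of the talk) estimates InEff_N for iid sampling from a standard normal posterior with h(θ) = θ; for iid draws the factor should be close to 1:

```python
import numpy as np

# Estimate InEff_N = V[h_bar_N] / (V_pi[h] / N) by Monte Carlo.
# Toy posterior: theta ~ N(0, 1), h(theta) = theta, so V_pi[h] = 1.
rng = np.random.default_rng(0)
N, n_runs = 1_000, 2_000
h_bars = np.array([rng.standard_normal(N).mean() for _ in range(n_runs)])
v_post = 1.0                      # V_pi[h] for the toy posterior
ineff = h_bars.var() / (v_post / N)
print(ineff)                      # close to 1 under iid sampling
```

MCMC or SMC draws are correlated, so their inefficiency factors exceed 1, which is exactly what the simulation tables later in the talk measure.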
Part I: From Importance Sampling to Sequential Importance Sampling
Bayes Theorem:

π(θ) = p(Y|θ)p(θ)/p(Y) = f(θ)/Z.
Let g(θ) be a pdf. Write the posterior mean of h(θ) as:

E_π[h(θ)] = ∫ h(θ) f(θ)/Z dθ = (1/Z) ∫ h(θ) [f(θ)/g(θ)] g(θ) dθ.

If the θ^i's are draws from g(·), then

E_π[h] = ∫ h(θ)[f(θ)/g(θ)]g(θ)dθ / ∫ [f(θ)/g(θ)]g(θ)dθ ≈ [(1/N) ∑_{i=1}^N h(θ^i)w(θ^i)] / [(1/N) ∑_{i=1}^N w(θ^i)],   w(θ) = f(θ)/g(θ).
In general, it’s hard to construct a good proposal density g(θ)...
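A minimal self-normalized importance sampler for the ratio above, on a toy problem where the normalizing constant Z is unknown to the sampler (target N(1, 0.5²) and proposal N(0, 1) are illustrative choices, not from the talk):

```python
import numpy as np

# Self-normalized importance sampling for E_pi[h(theta)] with h(theta) = theta.
# Target pi = N(1, 0.5^2), known only up to a constant; proposal g = N(0, 1).
rng = np.random.default_rng(0)
N = 200_000
theta = rng.standard_normal(N)                     # draws from g
log_f = -0.5 * ((theta - 1.0) / 0.5) ** 2          # unnormalized log target
log_g = -0.5 * theta ** 2                          # log proposal (up to constants)
w = np.exp(log_f - log_g)                          # importance weights f/g
post_mean = np.sum(w * theta) / np.sum(w)          # ratio estimator
print(post_mean)                                   # approx 1.0
```

The example works because the proposal has heavier tails than the target; with a poor g(θ), a few particles dominate the weights and the estimator degrades, which is the motivation for tempering below.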
Illustration
[Figure: surface plot of the tempered posteriors π_n(θ₁) across stages n]
π_n(θ) = [p(Y|θ)]^{φ_n} p(θ) / ∫ [p(Y|θ)]^{φ_n} p(θ) dθ = f_n(θ)/Z_n,   φ_n = (n/N_φ)^λ.
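The tempering schedule φ_n = (n/N_φ)^λ is easy to compute; λ > 1 makes the early increments φ_n − φ_{n−1} small, so likelihood information is released slowly at the stages where the particles are still close to the prior:

```python
import numpy as np

# Tempering schedule phi_n = (n / N_phi)^lambda for n = 0, ..., N_phi.
def tempering_schedule(n_phi, lam):
    n = np.arange(n_phi + 1)
    return (n / n_phi) ** lam

phi = tempering_schedule(100, 2.0)   # N_phi = 100, lambda = 2
print(phi[0], phi[-1])               # 0.0 1.0
print(np.diff(phi)[:3])              # small increments early on
```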
SMC Algorithm for θ: A Graphical Illustration
[Figure: particle swarm moving across stages φ_0, φ_1, φ_2, φ_3 via repeated C, S, M steps]
π_n(θ) is represented by a swarm of particles {θ_n^i, W_n^i}_{i=1}^N:

h̄_{n,N} = (1/N) ∑_{i=1}^N W_n^i h(θ_n^i) → E_{π_n}[h(θ)] a.s.
C is Correction; S is Selection; and M is Mutation.
SMC Algorithm
1 Initialization. (φ_0 = 0). Draw the initial particles from the prior:
θ_1^i ∼ iid p(θ) and W_1^i = 1, i = 1, …, N.

2 Recursion. For n = 1, …, N_φ,

1 Correction. Reweight the particles from stage n − 1 by defining the incremental weights

w̃_n^i = [p(Y|θ_{n−1}^i)]^{φ_n − φ_{n−1}}

and the normalized weights

W̃_n^i = w̃_n^i W_{n−1}^i / [(1/N) ∑_{i=1}^N w̃_n^i W_{n−1}^i],   i = 1, …, N.

Then,

h̃_{n,N} = (1/N) ∑_{i=1}^N W̃_n^i h(θ_{n−1}^i) ≈ E_{π_n}[h(θ)].

2 Selection.
3 Mutation.
SMC Algorithm
1 Initialization.
2 Recursion. For n = 1, …, N_φ,

1 Correction.
2 Selection. (Optional Resampling) Let {θ̂^i}_{i=1}^N denote N iid draws from a multinomial distribution characterized by support points and weights {θ_{n−1}^i, W̃_n^i}_{i=1}^N, and set W_n^i = 1. Then,

ĥ_{n,N} = (1/N) ∑_{i=1}^N W_n^i h(θ̂_n^i) ≈ E_{π_n}[h(θ)].

3 Mutation. Propagate the particles {θ̂_n^i, W_n^i} via N_MH steps of an MH algorithm with transition density θ_n^i ∼ K_n(θ_n|θ̂_n^i; ζ_n) and stationary distribution π_n(θ). Then,

h̄_{n,N} = (1/N) ∑_{i=1}^N h(θ_n^i) W_n^i ≈ E_{π_n}[h(θ)].
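The three steps can be sketched end-to-end on a toy problem where the exact posterior is known: with a N(0, 3²) prior and a Gaussian likelihood centered at 2 with unit variance, conjugate updating gives a N(1.8, 0.9) posterior. The tuning values below are illustrative, not the paper's:

```python
import numpy as np

# Minimal likelihood-tempered SMC (correction / selection / mutation) sketch.
rng = np.random.default_rng(0)
N, n_phi, lam = 2_000, 50, 2.0

def log_lik(theta):
    return -0.5 * (theta - 2.0) ** 2          # Gaussian "likelihood" at y = 2

theta = rng.normal(0.0, 3.0, N)               # initialization: prior draws
W = np.ones(N)
phi = (np.arange(n_phi + 1) / n_phi) ** lam   # tempering schedule

for n in range(1, n_phi + 1):
    # Correction: reweight by the likelihood increment
    W = np.exp((phi[n] - phi[n - 1]) * log_lik(theta)) * W
    W = W / W.mean()
    # Selection: multinomial resampling, then reset weights to 1
    idx = rng.choice(N, size=N, p=W / W.sum())
    theta, W = theta[idx], np.ones(N)
    # Mutation: one random-walk MH step with stationary distribution pi_n
    prop = theta + 0.5 * rng.standard_normal(N)
    log_post = lambda th: phi[n] * log_lik(th) - 0.5 * (th / 3.0) ** 2
    accept = rng.uniform(size=N) < np.exp(log_post(prop) - log_post(theta))
    theta = np.where(accept, prop, theta)

print(theta.mean())   # approx 1.8, the exact posterior mean
```

This sketch resamples every stage; in practice resampling is triggered only when the effective sample size drops below a threshold.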
Remarks
Correction Step: reweight particles from iteration n − 1 to create an importance sampling approximation of E_{π_n}[h(θ)].

Selection Step: the resampling of the particles
(good) equalizes the particle weights and thereby increases the accuracy of subsequent importance sampling approximations;
(not good) adds a bit of noise to the MC approximation.

Mutation Step: adapts the particles to the posterior π_n(θ). Imagine we don't do it: then we would be using draws from the prior p(θ) to approximate the posterior π(θ), which can't be good!
The Transition Kernel in Mutation Step
Transition kernel K_n(θ|θ̂_n; ζ_n): generated by running M steps of a Metropolis-Hastings algorithm.
Suppose M = 1: draw ϑ^i from q(ϑ^i|θ̂^i); the acceptance ratio can be expressed as

α(ϑ^i|θ̂^i) = min{1, [p(Y_{1:t}|ϑ^i)p(ϑ^i)/q(ϑ^i|θ̂^i)] / [p(Y_{1:t}|θ̂^i)p(θ̂^i)/q(θ̂^i|ϑ^i)]};

set θ^i = ϑ^i with probability α(·) and θ^i = θ̂^i otherwise.
Example:

ϑ|(θ_n^i, Σ_n^*) ∼ N(θ_n^i, c_n² Σ_n^*).
Lessons from DSGE model MCMC:
blocking of parameters can reduce the persistence of the Markov chain;
a mixture proposal density avoids “getting stuck.”
Adaptive choice of tuning parameters, e.g., ζ_n = {c_n, Σ_n}.
The Selling Point: SMC and Multi-modal Posteriors
In the small DSGE model, the shocks evolve according to:

z_t = ρ_z z_{t−1} + ε_{z,t},   ε_{z,t} ∼ N(0, σ_z²),
g_t = ρ_g g_{t−1} + ε_{g,t},   ε_{g,t} ∼ N(0, σ_g²).
Now suppose we generalize the shock structure:

[z_t ; g_t] = [ρ_z  ρ_zg ; ρ_gz  ρ_g] [z_{t−1} ; g_{t−1}] + [ε_{z,t} ; ε_{g,t}],   [ε_{z,t} ; ε_{g,t}] ∼ N([0 ; 0], [σ_z²  0 ; 0  σ_g²]).
Let’s compare posterior computations based on standard RWMHand SMC.
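To see what the generalized structure adds, the VAR(1) shock process can be simulated directly; the parameter values below are illustrative (chosen to keep the process stationary), not estimates from the talk:

```python
import numpy as np

# Simulate the generalized shock process for (z_t, g_t).
rng = np.random.default_rng(0)
rho = np.array([[0.9, 0.05],
                [0.05, 0.9]])            # [[rho_z, rho_zg], [rho_gz, rho_g]]
sig = np.array([0.3, 0.4])               # (sigma_z, sigma_g)
T = 200
x = np.zeros(2)
path = np.empty((T, 2))
for t in range(T):
    x = rho @ x + sig * rng.standard_normal(2)   # one VAR(1) step
    path[t] = x
print(path.shape)                        # (200, 2)
```

With ρ_zg = ρ_gz = 0 the process collapses to the two independent AR(1) shocks of the baseline model, which is why the posterior can place mass near zero for the off-diagonal terms in two distinct ways.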
The Selling Point: SMC and Multimodal Posterior
[Figure: left panel shows the “true” posterior density of ρ_gz; right panel shows P_π{ρ_zg > 0} across runs for SMC vs. RWMH.]
Theoretical Properties
Goal: strong law of large numbers (SLLN) and central limit theorem (CLT) as N → ∞ for every iteration n = 1, …, N_φ.

Regularity conditions:
proper prior;
bounded likelihood function;
2 + δ posterior moments of h(θ).
Idea of proof (Chopin, 2004):
SLLN and CLT can be proved recursively. For step n, assume that the stage n − 1 approximation (with normalized weights) yields

√N [(1/N) ∑_{i=1}^N h(θ_{n−1}^i) W_{n−1}^i − E_{π_{n−1}}[h(θ)]] ⟹ N(0, Ω_{n−1}(h)).

Initialization: SLLN and CLT for iid random variables, because we sample from the prior. Then show that convergence also holds for the stage-n posterior.
Effect of λ on Inefficiency Factors InEffN [θ] (Small DSGE)
[Figure: InEff_N[θ] as a function of λ]
Notes: The figure depicts hairs of InEff_N[θ] as a function of λ. The inefficiency factors are computed based on N_run = 50 runs of the SMC algorithm. Each hair corresponds to a DSGE model parameter.
Number of Stages N_φ vs. Number of Particles N (Small DSGE)
[Figure: V[θ̄]/V_π[θ] by parameter, for the configurations:]
Nφ = 400, N = 250
Nφ = 200, N = 500
Nφ = 100, N = 1000
Nφ = 50, N = 2000
Nφ = 25, N = 4000
Notes: V[θ̄]/V_π[θ] for specific configurations of the SMC algorithm. The inefficiency factors are computed based on N_run = 50 runs of the SMC algorithm. N_blocks = 1, λ = 2, N_MH = 1.
Number of Blocks N_blocks in Mutation Step vs. Number of Particles N (Small DSGE)
[Figure: V[θ̄]/V_π[θ] by parameter, for the configurations:]
Nblocks = 4, N = 250
Nblocks = 2, N = 500
Nblocks = 1, N = 1000
Notes: V[θ̄]/V_π[θ] for specific configurations of the SMC algorithm. The inefficiency factors are computed based on N_run = 50 runs of the SMC algorithm. N_φ = 100, λ = 2, N_MH = 1.
Part II: Approximating the Likelihood with a Particle Filter
The DSGE model takes the form of a state-space model with:
Measurement equation: p(y_t|s_t, θ)
State-transition equation: p(s_t|s_{t−1}, θ)
Evaluation of the likelihood function p(Y_{1:T}|θ) requires a filter that integrates out the latent s_t's and delivers the likelihood as a by-product:

p(y_t|Y_{1:t−1}) = ∫∫ p(y_t|s_t) p(s_t|s_{t−1}) p(s_{t−1}|Y_{1:t−1}) ds_{t−1:t}

p(s_t|Y_{1:t}) ∝ p(y_t|s_t) ∫ p(s_t|s_{t−1}) p(s_{t−1}|Y_{1:t−1}) ds_{t−1}
Sequential Monte Carlo techniques (= particle filters) can be used to solve the filtering problem.
Particles {s_t^j, W_t^j}_{j=1}^M approximate:

h̄_{t,M} = (1/M) ∑_{j=1}^M h(s_t^j) W_t^j ≈ E[h(s_t)|Y_{1:t}, θ].
Generic Particle Filter
1 Initialization. Draw the initial particles from the distribution s_0^j ∼ iid p(s_0) and set W_0^j = 1, j = 1, …, M.

2 Recursion. For t = 1, …, T:

1 Forecasting s_t. Draw s̃_t^j from the density g_t(s̃_t|s_{t−1}^j, θ) and define

ω_t^j = p(s̃_t^j|s_{t−1}^j, θ) / g_t(s̃_t^j|s_{t−1}^j, θ).

Then,

h̃_{t,M} = (1/M) ∑_{j=1}^M h(s̃_t^j) ω_t^j W_{t−1}^j ≈ E[h(s_t)|Y_{1:t−1}, θ].

2 Forecasting y_t. Define the incremental weights

w̃_t^j = p(y_t|s̃_t^j, θ) ω_t^j.

The predictive density p(y_t|Y_{1:t−1}, θ) can be approximated by

p̂(y_t|Y_{1:t−1}, θ) = (1/M) ∑_{j=1}^M w̃_t^j W_{t−1}^j.

3 Updating.
4 Selection.

3 Likelihood Approximation.
Generic Particle Filter
1 Initialization.
2 Recursion. For t = 1, …, T: ...

3 Updating. Define the normalized weights

W̃_t^j = w̃_t^j W_{t−1}^j / [(1/M) ∑_{j=1}^M w̃_t^j W_{t−1}^j].

Then,

h̃_{t,M} = (1/M) ∑_{j=1}^M h(s̃_t^j) W̃_t^j ≈ E[h(s_t)|Y_{1:t}, θ].

4 Selection (Optional). Resample the particles via multinomial resampling. This leads to {s_t^j, W_t^j = 1}_{j=1}^M. Then,

h̄_{t,M} = (1/M) ∑_{j=1}^M h(s_t^j) W_t^j ≈ E[h(s_t)|Y_{1:t}, θ].

3 Likelihood Approximation.

ln p̂(Y_{1:T}|θ) = ∑_{t=1}^T ln[(1/M) ∑_{j=1}^M w̃_t^j W_{t−1}^j].
Adapting the Generic PF
Bootstrap particle filter:

g_t(s̃_t|s_{t−1}^j) = p(s̃_t|s_{t−1}^j).

Easy to implement, but often very inaccurate.

Conditionally-optimal particle filter:

g_t(s̃_t|s_{t−1}^j) = p(s̃_t|y_t, s_{t−1}^j).

This is the posterior of s_t given (y_t, s_{t−1}^j). Typically infeasible, but a good benchmark.

Approximately conditionally-optimal filter: e.g., from a linearized version of the DSGE model or from approximate nonlinear filters.
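A bootstrap PF sketch on a scalar linear Gaussian model (an illustrative toy, not the DSGE model): with the state transition as proposal, ω_t^j = 1, so the incremental weight reduces to p(y_t|s̃_t^j):

```python
import numpy as np

# Bootstrap particle filter for s_t = 0.9 s_{t-1} + eps_t, y_t = s_t + eta_t,
# with eps_t, eta_t ~ N(0, 1). The log-likelihood estimate sums the logs of
# the averaged incremental weights.
rng = np.random.default_rng(0)
T, M, rho = 100, 10_000, 0.9

# simulate data from the model
y = np.zeros(T)
s = 0.0
for t in range(T):
    s = rho * s + rng.standard_normal()
    y[t] = s + rng.standard_normal()

# bootstrap particle filter
s_part = rng.normal(0.0, np.sqrt(1 / (1 - rho**2)), M)  # draws from p(s_0)
loglik = 0.0
for t in range(T):
    s_part = rho * s_part + rng.standard_normal(M)      # forecast s_t (proposal)
    w = np.exp(-0.5 * (y[t] - s_part) ** 2) / np.sqrt(2 * np.pi)  # p(y_t|s_t^j)
    loglik += np.log(w.mean())                          # likelihood increment
    s_part = rng.choice(s_part, size=M, p=w / w.sum())  # multinomial resampling
print(loglik)
```

For this linear Gaussian toy the exact answer is available from the Kalman filter, which is how the approximation-error densities in the next slide are constructed for the DSGE models.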
Distribution of Log-Likelihood Approximation Errors: ln p̂(Y|θ) − ln p(Y|θ)
Small-Scale DSGE Smets-Wouters
[Figure: densities of log-likelihood approximation errors. Small-scale DSGE panel: CO with θ = θ^m; BS with θ = θ^l and θ = θ^m. Smets-Wouters panel: CO with M = 4,000; BS with M = 40,000 and M = 400,000.]
Notes: N_run = 100 for the bootstrap (BS) particle filter and the conditionally-optimal (CO) particle filter.
Part III: Putting the Pieces Together – SMC²
Start from the SMC algorithm ... replace the actual likelihood by the particle filter approximation in the correction and mutation steps of the SMC algorithm.
Data tempering instead of likelihood tempering: π_n^D(θ) = p(θ|Y_{1:t_n}).
Key Idea: let

p̂(Y_{1:t_n}|θ_n) = g(Y_{1:t_n}|θ_n, U_{1:t_n}),

where U_{1:t_n} ∼ p(U_{1:t_n}) is an array of iid uniform random variables generated by the particle filter.
Important Result: the particle filter delivers an unbiased estimate of the incremental weight p(Y_{t_{n−1}+1:t_n}|θ):

∫ g(Y_{1:t_n}|θ_n, U_{1:t_n}) p(U_{1:t_n}) dU_{1:t_n} = p(Y_{1:t_n}|θ_n).
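The unbiasedness property can be checked directly in the simplest possible case (a one-observation toy model, not from the talk): with s ∼ N(0,1) and y|s ∼ N(s,1), the marginal p(y) is the N(0,2) density, and the average of the PF-style estimates matches it:

```python
import numpy as np

# Monte Carlo check that (1/M) sum_j p(y|s^j), with s^j ~ p(s), is an
# unbiased estimator of p(y) = integral p(y|s) p(s) ds.
rng = np.random.default_rng(0)
y, M, reps = 0.5, 100, 20_000
true_p = np.exp(-y**2 / 4) / np.sqrt(4 * np.pi)           # N(0, 2) density at y
est = np.empty(reps)
for r in range(reps):
    s = rng.standard_normal(M)                            # s^j ~ p(s)
    est[r] = np.mean(np.exp(-0.5 * (y - s) ** 2) / np.sqrt(2 * np.pi))
print(est.mean(), true_p)   # the average estimate matches the true density
```

Note that the estimator is unbiased for the likelihood itself, not for the log-likelihood; that distinction is exactly what makes the SMC² correction step valid.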
Particle System for SMC² Sampler After Stage n

Parameter | State
(θ_n^1, W_n^1) | (s_{t_n}^{1,1}, W_{t_n}^{1,1}), (s_{t_n}^{1,2}, W_{t_n}^{1,2}), …, (s_{t_n}^{1,M}, W_{t_n}^{1,M})
(θ_n^2, W_n^2) | (s_{t_n}^{2,1}, W_{t_n}^{2,1}), (s_{t_n}^{2,2}, W_{t_n}^{2,2}), …, (s_{t_n}^{2,M}, W_{t_n}^{2,M})
⋮ | ⋮
(θ_n^N, W_n^N) | (s_{t_n}^{N,1}, W_{t_n}^{N,1}), (s_{t_n}^{N,2}, W_{t_n}^{N,2}), …, (s_{t_n}^{N,M}, W_{t_n}^{N,M})

To simplify notation, we add one observation at a time, n = t, and write θ_t and π_t(·).
SMC²

1 Initialization. Draw the initial particles from the prior: θ_0^i ∼ iid p(θ) and W_0^i = 1, i = 1, …, N.

2 Recursion. For t = 1, …, T,

1 Correction. Reweight the particles from stage t − 1 by defining the incremental weights

w̃_t^i = p̂(y_t|Y_{1:t−1}, θ_{t−1}^i) = g(y_t|Y_{1:t−1}, θ_{t−1}^i, U_{1:t}^i)

and the normalized weights

W̃_t^i = w̃_t^i W_{t−1}^i / [(1/N) ∑_{i=1}^N w̃_t^i W_{t−1}^i],   i = 1, …, N.

Then,

h̃_{t,N} = (1/N) ∑_{i=1}^N W̃_t^i h(θ_{t−1}^i) ≈ E_{π_t}[h(θ)].

2 Selection. (unchanged)
3 Mutation.
SMC²

1 Initialization.
2 Recursion. For t = 1, …, T,

1 Correction.
2 Selection.
3 Mutation. Propagate the particles {θ̂_t^i, W_t^i} via 1 step of an MH algorithm. The proposal distribution is given by

q(ϑ_t^i|θ̂_t^i) p(U_{1:t}^{*i})

and the acceptance ratio can be expressed as

α(ϑ_t^i|θ̂_t^i) = min{1, [g(Y_{1:t}|ϑ_t^i, U_{1:t}^{*i}) p(ϑ_t^i) p(U_{1:t}^{*i}) / q(ϑ_t^i|θ̂_t^i) p(U_{1:t}^{*i})] / [g(Y_{1:t}|θ̂_t^i, U_{1:t}^i) p(θ̂_t^i) p(U_{1:t}^i) / q(θ̂_t^i|ϑ_t^i) p(U_{1:t}^i)]}.

Then,

h̄_{t,N} = (1/N) ∑_{i=1}^N h(θ_t^i) W_t^i ≈ E_{π_t}[h(θ)].
Why Does SMC² Work?
Work on an enlarged probability space that includes the sequence of random vectors U_{1:t−1}^i that underlies the simulation approximation of the particle filter.

At the end of iteration t − 1:

Particles {θ_{t−1}^i, U_{1:t−1}^i, W_{t−1}^i}_{i=1}^N.

For each parameter value θ_{t−1}^i there is a PF approximation of the likelihood: p̂(Y_{1:t−1}|θ_{t−1}^i) = g(Y_{1:t−1}|θ_{t−1}^i, U_{1:t−1}^i).

A swarm of particles {s_{t−1}^{i,j}, W_{t−1}^{i,j}}_{j=1}^M represents the distribution p(s_{t−1}|Y_{1:t−1}, θ_{t−1}^i).

The triplets {θ_{t−1}^i, U_{1:t−1}^i, W_{t−1}^i}_{i=1}^N approximate:

∫∫ h(θ, U_{1:t−1}) p(U_{1:t−1}) p(θ|Y_{1:t−1}) dU_{1:t−1} dθ ≈ (1/N) ∑_{i=1}^N h(θ_{t−1}^i, U_{1:t−1}^i) W_{t−1}^i.
Correction Step
Write the particle filter approximation of the likelihood increment as

w̃_t^i = p̂(y_t|Y_{1:t−1}, θ_{t−1}^i) = g(y_t|Y_{1:t−1}, U_{1:t}^i, θ_{t−1}^i).

By induction, we can deduce that (1/N) ∑_{i=1}^N h(θ_{t−1}^i) w̃_t^i W_{t−1}^i approximates the following integral:

∫∫ h(θ) g(y_t|Y_{1:t−1}, U_{1:t}, θ) p(U_{1:t}) p(θ|Y_{1:t−1}) dU_{1:t} dθ
= ∫ h(θ) [∫ g(y_t|Y_{1:t−1}, U_{1:t}, θ) p(U_{1:t}) dU_{1:t}] p(θ|Y_{1:t−1}) dθ.

Provided that the particle filter approximation of the likelihood increment is unbiased, that is,

∫ g(y_t|Y_{1:t−1}, U_{1:t}, θ) p(U_{1:t}) dU_{1:t} = p(y_t|Y_{1:t−1}, θ)

for each θ, we deduce that h̃_{t,N} is a consistent estimator of E_{π_t}[h(θ)].
Selection Step
Similar to regular SMC.
We resample in every period for expositional purposes.
We keep track of the ancestry information in the vector A_t. This is important because, for each resampled particle i, we not only need to know its value θ_t^i but also want to track the corresponding value of the likelihood function p̂(Y_{1:t}|θ_t^i), the particle approximation of the state, given by {s_t^{i,j}, W_t^{i,j}}, and the set of random numbers U_{1:t}^i.
In the implementation, the likelihood values are needed for themutation step.
Mutation Step
For each particle i we have:

a proposed value ϑ_t^i;

a sequence of random vectors U_{1:t}^* drawn from the distribution p(U_{1:t});

an associated particle filter approximation of the likelihood:

p̂(Y_{1:t}|ϑ_t^i) = g(Y_{1:t}|ϑ_t^i, U_{1:t}^*).
The densities p(U_{1:t}^i) and p(U_{1:t}^*) cancel from the formula for the acceptance probability α(ϑ_t^i|θ_t^i):

α(ϑ|θ^i) = min{1, [g(Y|ϑ, U^*) p(ϑ) p(U^*) / q(ϑ|θ^i) p(U^*)] / [g(Y|θ^i, U^i) p(θ^i) p(U^i) / q(θ^i|ϑ) p(U^i)]}
= min{1, [p̂(Y|ϑ) p(ϑ) / q(ϑ|θ^i)] / [p̂(Y|θ^i) p(θ^i) / q(θ^i|ϑ)]}.
Application to Small-Scale DSGE Model
Results are based on N_run = 20 runs of the SMC² algorithm with N = 4,000 particles.

D is data tempering and L is likelihood tempering.

KF is the Kalman filter; CO-PF is the conditionally-optimal PF with M = 400; BS-PF is the bootstrap PF with M = 40,000. CO-PF and BS-PF use data tempering.
Accuracy of SMC² Approximations

           Posterior Mean (Pooled)           Inefficiency Factors            Std Dev of Means
         KF(L)   KF(D)   CO-PF   BS-PF    KF(L)  KF(D)  CO-PF   BS-PF    KF(L) KF(D) CO-PF BS-PF
τ         2.65    2.67    2.68    2.53     1.51  10.41   47.60   6570     0.01  0.03  0.07  0.76
κ         0.81    0.81    0.81    0.70     1.40   8.36   40.60   7223     0.00  0.01  0.01  0.18
ψ1        1.87    1.88    1.87    1.89     3.29  18.27   22.56   4785     0.01  0.02  0.02  0.27
ψ2        0.66    0.66    0.67    0.65     2.72  10.02   43.30   4197     0.01  0.02  0.03  0.34
ρr        0.75    0.75    0.75    0.72     1.31  11.39   60.18  14979     0.00  0.00  0.01  0.08
ρg        0.98    0.98    0.98    0.95     1.32   4.28  250.34  21736     0.00  0.00  0.00  0.04
ρz        0.88    0.88    0.88    0.84     3.16  15.06   35.35  10802     0.00  0.00  0.00  0.05
r(A)      0.45    0.46    0.44    0.46     1.09  26.58   73.78   7971     0.00  0.02  0.04  0.42
π(A)      3.32    3.31    3.31    3.56     2.15  40.45  158.64   6529     0.01  0.03  0.06  0.40
γ(Q)      0.59    0.59    0.59    0.64     2.35  32.35  133.25   5296     0.00  0.01  0.03  0.16
σr        0.24    0.24    0.24    0.26     0.75   7.29   43.96  16084     0.00  0.00  0.00  0.06
σg        0.68    0.68    0.68    0.73     1.30   1.48   20.20   5098     0.00  0.00  0.00  0.08
σz        0.32    0.32    0.32    0.42     2.32   3.63   26.98  41284     0.00  0.00  0.00  0.11
ln p(Y) -358.75 -357.34 -356.33 -340.47                                  0.120 1.191 4.374 14.49
Computational Considerations
The SMC² results are obtained by utilizing 40 processors.

We parallelized the likelihood evaluations p̂(Y_{1:t}|θ_t^i) for the θ_t^i particles rather than the particle filter computations for the swarms {s_t^{i,j}, W_t^{i,j}}_{j=1}^M.

The run time for SMC² with the conditionally-optimal PF (N = 4,000, M = 400) is 23:24 [mm:ss], whereas the algorithm with the bootstrap PF (N = 4,000, M = 40,000) runs for 08:05:35 [hh:mm:ss].

Due to memory constraints we re-computed the entire likelihood for Y_{1:t} in each iteration.

Our sequential (data-tempering) implementation of the SMC² algorithm suffers from particle degeneracy in the initial stages, i.e., for small sample sizes.
Conclusion
We explored SMC² methods for DSGE models.

These methods are promising because they can handle multimodal posterior surfaces and can be parallelized.

However, careful tuning is required, and the particle filter approximation of the likelihood function needs to be sufficiently accurate.

The method worked well for the small-scale DSGE model, but not for the Smets-Wouters model, because there was too much noise in the likelihood approximation.