
Particle MCMC methods for parameter inference in state space models


Page 1: Particle MCMC methods for parameter inference in state space models

An intro to particle methods for parameter inference in state-space models

PhD course FMS020F–NAMS002 “Statistical inference for partially observed stochastic processes”, Lund University

http://goo.gl/sX8vU9

Umberto Picchini, Centre for Mathematical Sciences, Lund University

www.maths.lth.se/matstat/staff/umberto/


Page 2: Particle MCMC methods for parameter inference in state space models

This lecture will explore possibilities offered by particle filters, also known as Sequential Monte Carlo (SMC) methods, when applied to parameter estimation problems.

We will not give a thorough introduction to SMC methods.

Instead, we will introduce the most basic (popular) SMC method with the main goal of performing inference for θ, a vector of unknown parameters entering a state-space / hidden Markov model.

Results anticipation

Thanks to recent and astonishing results, we show how it is possible to construct a practical method for exact Bayesian inference on the parameters of a state-space model.


Page 3: Particle MCMC methods for parameter inference in state space models

Acknowledgements (early enough, not to forget...)

Part of the presented material, slides, MATLAB codes etc. has been kindly shared by:

Magnus Wiktorsson (Matematikcentrum, Lund);

Fredrik Lindsten (Automatic Control, Linköping).


Page 4: Particle MCMC methods for parameter inference in state space models

Working framework

We consider hereafter the (important) case where data are modelled according to a Hidden Markov Model (HMM), often denoted state-space model (SSM).

(In recent literature the terms SSM and HMM have been used interchangeably. We do the same here, though elsewhere it is sometimes assumed that HMM → discrete space and SSM → continuous space.)

A state-space model refers to a class of probabilistic graphical models that describes the probabilistic dependence between latent state variables and the observed measurements of a dynamical system.


Page 5: Particle MCMC methods for parameter inference in state space models

Our system of interest is {Xt}t≥0. This is a latent/unobservable stochastic process.

(Unavailable) values of the system on a discrete time grid {0, 1, ..., T} are X0:T = {X0, ..., XT}

(we consider integer times t for ease of notation).

Actual observations (data) are noisy: y1:T = {y1, ..., yT}

That is, at each time t, yt is a noisy/corrupted version of the true state of the system Xt.

{Xt}t≥0 is assumed to be a Markovian stochastic process.

Observations yt are assumed conditionally independent given the corresponding latent state Xt.

We are going to formalize this on the next slide.


Page 6: Particle MCMC methods for parameter inference in state space models

SSMs as “graphical models”

Graphically:

[Diagram: a hidden Markov chain · · · → Xt−1 → Xt → Xt+1 → · · · (Markov chain), with a vertical arrow from each state Xt to the corresponding observation Yt (Observations).]

Yt|Xt = xt ∼ p(yt|xt) (Observation density)

Xt+1|Xt = xt ∼ p(xt+1|xt) (Transition density)

X0 ∼ π(x0) (Initial distribution)


Page 7: Particle MCMC methods for parameter inference in state space models

Example: Gaussian random walk

A most trivial example (linear, Gaussian SSM).

xt = xt−1 + ξt, ξt ∼iid N(0, q²)

yt = xt + et, et ∼iid N(0, r²)

Therefore

p(xt|xt−1) = N(xt; xt−1, q²)

p(yt|xt) = N(yt; xt, r²)

Notation: here and in the following, N(µ, σ²) is the Gaussian distribution with mean µ and variance σ². N(x; µ, σ²) is the evaluation at x of the pdf of a N(µ, σ²) distribution.
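To make this concrete, here is a minimal MATLAB sketch (mine, not from the original slides) simulating the model; the values of q, r and T are arbitrary demo choices.

% Simulate the linear Gaussian SSM (Gaussian random walk observed with noise).
% q, r and T are arbitrary demo values.
T = 100; q = 0.5; r = 1; x0 = 0;
x = zeros(T,1); y = zeros(T,1);
xprev = x0;
for t = 1:T
    x(t) = xprev + q*randn;   % transition: p(xt|xt-1) = N(xt-1, q^2)
    y(t) = x(t) + r*randn;    % observation: p(yt|xt) = N(xt, r^2)
    xprev = x(t);
end
plot(1:T, x, '-', 1:T, y, 'o'); legend('latent x','observed y')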


Page 8: Particle MCMC methods for parameter inference in state space models

Notation reminder

Always remember: X0 is NOT the state at the first observational time, that one is X1. Instead X0 is the (typically unknown) initial state of the process {Xt}.

X0 can be set to be a deterministic constant (as in the example we are going to discuss soon).


Page 9: Particle MCMC methods for parameter inference in state space models

Markov property of hidden states

Briefly: Xt (and actually the whole future Xt+1, Xt+2, ...) given Xt−1 is independent of anything that has happened before time t − 1:

p(xt|x0:t−1, y1:t−1) = p(xt|xt−1), t = 1, ..., T

Also, the past is independent of the future given the present:

p(xt−1|xt:T , yt:T) = p(xt−1|xt)


Page 10: Particle MCMC methods for parameter inference in state space models

Conditional independence of measurements

The current measurement yt given xt is conditionally independent of the measurement and state histories:

p(yt|x0:t, y1:t−1) = p(yt|xt).

The Markov property on {Xt} and the conditional independence on {Yt} are the key ingredients for defining an HMM or SSM.


Page 11: Particle MCMC methods for parameter inference in state space models

In general {Xt} and {Yt} can be either continuous- or discrete-valued stochastic processes. However, in the following we assume {Xt} and {Yt} to be defined on continuous spaces.


Page 12: Particle MCMC methods for parameter inference in state space models

Our goal

In all previous slides we haven't made explicit reference to the presence of unknown constants that we wish to estimate.

For example, in the Gaussian random walk example the unknowns would be the variances (q², r²) (and perhaps the initial state x0, in some situations).

In that case we would like to estimate θ = (q², r²) using observations y1:T. More generally...

Main goal

We introduce general methods for SSMs producing inference for the vector of parameters θ. We will be particularly interested in Bayesian inference.

Of course θ can contain all sorts of unknowns, not just variances.

Page 13: Particle MCMC methods for parameter inference in state space models

A quick look into the final goal

p(y1:T|θ) is the likelihood of the measurements conditionally on θ.

π(θ) is the prior density of θ (we always assume continuous-valued parameters). It encodes knowledge about θ before we “see” our current data y1:T.

Bayes' theorem gives the posterior distribution:

π(θ|y1:T) = p(y1:T|θ)π(θ) / p(y1:T) ∝ p(y1:T|θ)π(θ)

Inference based on π(θ|y1:T) is called Bayesian inference.

p(y1:T) is the marginal likelihood (evidence), independent of θ.

Goal: calculate (sample draws from) π(θ|y1:T).

Remark: θ is a random quantity in the Bayesian framework.


Page 14: Particle MCMC methods for parameter inference in state space models

As you know... (remember we set some background requirements for taking this course) Bayesian inference can rarely be performed without using some Monte Carlo sampling.

Except for simple cases (e.g. data y1, ..., yT independently distributed from some member of the exponential family) it is usually impossible to write the likelihood p(y1:T|θ) in closed form.

Since the early '90s, MCMC (Markov chain Monte Carlo) has opened the possibility to perform Bayesian inference in practice.

Are we obsessed with Bayesian inference? NO! However, it sometimes offers the easiest way to deal with complex, non-trivial models (some good reads are listed at the end of these slides).

The Bayesian approach actually opens the possibility for a surprising result (see later on...).


Page 15: Particle MCMC methods for parameter inference in state space models

The likelihood function for SSMs

First of all, notice that in the Bayesian framework, since θ is random, we do not simply write the likelihood function as pθ(y1:T) nor p(y1:T; θ), but we must condition on θ and write p(y1:T|θ).

In an SSM data are not independent, they are only conditionally independent → complication!

p(y1:T|θ) = p(y1|θ) ∏_{t=2}^{T} p(yt|y1:t−1, θ) = ?

We don't have a closed-form expression for the product above because we do not know how to calculate p(yt|y1:t−1, θ).

Let's see why.

Page 16: Particle MCMC methods for parameter inference in state space models

In an SSM the observed process is assumed to depend on the latent Markov process {Xt}: we can write

p(y1:T|θ) = ∫ p(y1:T, x0:T|θ) dx0:T = ∫ p(y1:T|x0:T, θ) × p(x0:T|θ) dx0:T

(use conditional independence on the first factor and Markovianity on the second)

= ∫ ∏_{t=1}^{T} p(yt|xt, θ) × { p(x0|θ) ∏_{t=1}^{T} p(xt|xt−1, θ) } dx0:T

Problems

The expression above is a (T + 1)-dimensional integral.

For most (nontrivial) models, transition densities p(xt|xt−1) are unknown.


Page 17: Particle MCMC methods for parameter inference in state space models

Special cases

Despite the analytic difficulties, finding approximations for the likelihood function is possible (we'll consider some approaches soon).

In some simple cases, closed-form solutions do exist: for example when the SSM is linear and Gaussian (see the Gaussian random walk example) the classic Kalman filter gives the exact likelihood.

In the SSM literature important (Gaussian) approximations are given by the extended and unscented Kalman filters. However, the approximations offered by particle filters (a.k.a. sequential Monte Carlo) are presently the state-of-the-art for general non-linear, non-Gaussian SSMs.


Page 18: Particle MCMC methods for parameter inference in state space models

What if we had p(y1:T |θ)...

Ideally, if we had p(y1:T|θ) we could use Metropolis-Hastings (MH) to sample from the posterior π(θ|y1:T). At iteration r + 1 we have:

1  the current value is θr; propose a new θ∗ ∼ q(θ∗|θr), e.g. θ∗ ∼ N(θr, Σθ) for some covariance matrix Σθ.

2  draw u ∼ U(0, 1) and accept θ∗ if u < min(1, A), where

A = [p(y1:T|θ∗) / p(y1:T|θr)] × [π(θ∗) / π(θr)] × [q(θr|θ∗) / q(θ∗|θr)]

then set θr+1 := θ∗ and p(y1:T|θr+1) := p(y1:T|θ∗). Otherwise reject θ∗ and set θr+1 := θr. Set r := r + 1, go to 1 and repeat.


Page 19: Particle MCMC methods for parameter inference in state space models

By repeating steps 1-2 as many times as wanted we are guaranteed that, after discarding a “long enough” number of initial iterations (burnin), the remaining draws form a Markov chain (hence dependent values) having π(θ|y1:T) as its stationary distribution.

Therefore if you have produced R = R1 + R2 iterations of Metropolis-Hastings, where R1 is a sufficiently long burnin, for scalar θ you can then plot the histogram of the last R2 draws θR1+1, ..., θR. Such a histogram approximates the density π(θ|y1:T), up to a Monte Carlo error induced by using a finite R2.

For a vector-valued θ ∈ Rp, create p separate histograms of the draws pertaining to each component of θ. Such histograms represent the posterior marginals π(θj|y), j = 1, ..., p.


Page 20: Particle MCMC methods for parameter inference in state space models

Brush up MH with a simple example

A simple toy problem not related to state-space modelling.

Data: n = 1,000 i.i.d. observations yj ∼ N(3, 1).

Now assume µ = E(yj) unknown. Estimate µ.

Assume for example a “wide” prior: µ ∼ N(2, 2²) (do not look at the data!)

Gaussian random walk to propose µ∗ = µr + 0.1 · ξ, with ξ ∼ N(0, 1). Notice Gaussian proposals are symmetric, hence q(µ∗|µr) = q(µr|µ∗), therefore

A = [p(y1:n|µ∗) / p(y1:n|µr)] × [π(µ∗) / π(µr)]


Page 21: Particle MCMC methods for parameter inference in state space models

Start far at µ = 10. Produce 10,000 iterations.

[Figure: left, trace plot of the chain over the 10,000 MCMC iterations, starting at µ = 10 and settling around the true value µ = 3; right, posterior π(µ|y) after burnin, together with the prior.]

The posterior on the right plot discards the initial 500 draws (burnin).
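For completeness, a minimal MATLAB sketch of this toy run (my own implementation; the slide only specifies the data, prior, proposal and burnin):

n = 1000; y = 3 + randn(n,1);            % data: n iid draws from N(3,1)
R = 10000; mu = zeros(R,1); mu(1) = 10;  % start far away
lognorm = @(x,m,s) -0.5*log(2*pi) - log(s) - 0.5*((x-m)./s).^2;
logpost = @(m) sum(lognorm(y,m,1)) + lognorm(m,2,2);  % loglik + log N(2,2^2) prior
lp = logpost(mu(1));
for r = 2:R
    mustar = mu(r-1) + 0.1*randn;        % symmetric Gaussian random walk proposal
    lpstar = logpost(mustar);
    if log(rand) < lpstar - lp           % MH acceptance on the log scale
        mu(r) = mustar; lp = lpstar;
    else
        mu(r) = mu(r-1);
    end
end
histogram(mu(501:end))                   % posterior sample after a burnin of 500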


Page 22: Particle MCMC methods for parameter inference in state space models

Remark

Notice Metropolis-Hastings is not an optimization algorithm.

Unlike in maximum likelihood estimation, here we are not trying to converge towards some mode.

What we want is to explore thoroughly (i.e. sample from) π(θ|data), including its tails.

This way we can directly assess the uncertainty about θ by looking at π(θ|data), instead of having to resort to asymptotic arguments regarding the sampling distribution of θ̂n when n → ∞ as in ML theory.


Page 23: Particle MCMC methods for parameter inference in state space models

Example: a nonlinear SSM

xt = 0.5 xt−1 + 25 xt−1/(1 + xt−1²) + 8 cos(1.2(t − 1)) + vt

yt = 0.05 xt² + et

with a deterministic x0 = 0, measurements at times t = 1, 2, ..., 100.

vt ∼ N(0, q), et ∼ N(0, r) all independent.

Data generated with q = 0.1 and r = 1.

[Figure: the simulated latent states X and observations Y plotted against time 0–100.]
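A minimal MATLAB sketch generating data from this model (my own code, transcribing the equations above):

T = 100; q = 0.1; r = 1; x0 = 0;          % true parameter values from the slide
x = zeros(T,1); y = zeros(T,1);
xold = x0;
for t = 1:T
    x(t) = 0.5*xold + 25*xold/(1 + xold^2) + 8*cos(1.2*(t-1)) + sqrt(q)*randn; % vt ~ N(0,q)
    y(t) = 0.05*x(t)^2 + sqrt(r)*randn;                                        % et ~ N(0,r)
    xold = x(t);
end
plot(1:T, x, '-', 1:T, y, '--'); legend('X','Y')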


Page 24: Particle MCMC methods for parameter inference in state space models

Priors: q ∼ InvGamma(0.01, 0.01), r ∼ InvGamma(0.01, 0.01)

[Figure: the prior density InvGamma(0.01, 0.01).]

Recall the true values are q = 0.1 and r = 1. The chosen priors cover both.


Page 25: Particle MCMC methods for parameter inference in state space models

Results anticipation...

Perform an MCMC (how about the likelihood? we’ll see this later).

R = 10,000 iterations (generous burnin = 3,000 iterations).

Proposals: qnew ∼ N(qold, 0.2²), and similarly for rnew.

Start far away: initial values qinit = 1, rinit = 0.1.

[Figure: trace plots of the chains for q and r (acceptance rate ≈ 11%) and empirical PDFs of p(q|y1:T) and p(r|y1:T) after burnin.]


Page 26: Particle MCMC methods for parameter inference in state space models

How do we deal with general SSMs for which the exact likelihood p(y1:T|θ) is unavailable? That's the case in the previous example.

For example, the previously analysed model is nonlinear in xt and therefore the exact Kalman filter can't be used.


Page 27: Particle MCMC methods for parameter inference in state space models

General Monte Carlo integration

Recall our likelihood function:

p(y1:T|θ) = p(y1|θ) ∏_{t=2}^{T} p(yt|y1:t−1, θ)

We can write p(yt |y1:t−1, θ) as

p(yt|y1:t−1, θ) = ∫ p(yt|xt, θ) p(xt|y1:t−1, θ) dxt = E(p(yt|xt, θ))

Use Monte Carlo integration: generate N draws from p(xt|y1:t−1, θ), then invoke the law of large numbers:

produce N independent draws xt^i ∼ p(xt|y1:t−1, θ), i = 1, ..., N

for each xt^i compute p(yt|xt^i, θ), and by the LLN we have

(1/N) ∑_{i=1}^{N} p(yt|xt^i, θ) → E(p(yt|xt, θ)), N → ∞

The error term is O(N^(−1/2)) regardless of the dimension of xt.
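As a toy illustration (mine, not from the slides): for a single y with x ∼ N(0, 1) and p(y|x) = N(y; x, 1), the marginal p(y) = N(y; 0, 2) is known in closed form, so the MC estimate can be checked exactly:

y = 1.5; N = 1e5;
x = randn(N,1);                           % N draws from p(x) = N(0,1)
w = exp(-0.5*(y - x).^2)/sqrt(2*pi);      % p(y|x^i): a N(x^i,1) pdf evaluated at y
est = mean(w);                            % MC estimate of E(p(y|x))
exact = exp(-0.25*y^2)/sqrt(4*pi);        % p(y) = N(y; 0, 2)
disp([est exact])                         % agreement up to O(N^(-1/2)) error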


Page 28: Particle MCMC methods for parameter inference in state space models

SMC and the bootstrap filter

But how do we generate “good” draws (particles) xt^i ∼ p(xt|y1:t−1, θ)?

Here “good” means that we want particles such that the values of p(yt|xt^i, θ) are not negligible (they “explain” a large fraction of the integrand).

For SSM, sequential Monte Carlo (SMC) is the winning strategy.

We will NOT give a thorough introduction to SMC methods. We only use a few notions to solve our parameter inference problem.


Page 29: Particle MCMC methods for parameter inference in state space models

Importance sampling

Let’s remove for a moment the dependence on θ.

p(yt|y1:t−1) = ∫ p(yt|x0:t) p(x0:t|y1:t−1) dx0:t

= ∫ p(yt|xt) p(xt|xt−1) p(x0:t−1|y1:t−1) dx0:t

= ∫ [p(yt|xt) p(xt|xt−1) p(x0:t−1|y1:t−1) / h(x0:t|y1:t)] h(x0:t|y1:t) dx0:t

where h(·) is an arbitrary (positive) density function called the “importance density”. Choose one that is easy to simulate from.

1  simulate N samples: x0:t^i ∼ h(x0:t|y1:t), i = 1, ..., N

2  construct importance weights wt^i = p(yt|xt^i) p(xt^i|xt−1^i) p(x0:t−1^i|y1:t−1) / h(x0:t^i|y1:t)

3  p(yt|y1:t−1) = E(p(yt|xt)) ≈ (1/N) ∑_{i=1}^{N} wt^i
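A one-dimensional sketch of the mechanics (mine, with t suppressed): estimating p(y) = ∫ p(y|x) p(x) dx where both densities are standard Gaussians, using an importance density h(x) = N(y/2, 1), an arbitrary demo choice:

y = 1.5; N = 1e5;
x = y/2 + randn(N,1);                            % x^i ~ h = N(y/2, 1)
logpyx = -0.5*(y - x).^2 - 0.5*log(2*pi);        % log p(y|x) = log N(y; x, 1)
logpx  = -0.5*x.^2 - 0.5*log(2*pi);              % log p(x)   = log N(x; 0, 1)
logh   = -0.5*(x - y/2).^2 - 0.5*log(2*pi);      % log h(x)
w = exp(logpyx + logpx - logh);                  % importance weights
est = mean(w)                                    % IS estimate of p(y) = N(y; 0, 2)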


Page 30: Particle MCMC methods for parameter inference in state space models

Importance Sampling

Importance Sampling is an appealing and revolutionary idea for Monte Carlo integration.

However, generating at each time a “cloud of particles” x0:t^i is not really computationally appealing, and it's not clear how to do so.

We need something enabling a sort of sequential mechanism as t increases.


Page 31: Particle MCMC methods for parameter inference in state space models

Sequential Importance Sampling

When h(·) is chosen in an intelligent way, an important property is the one that allows a sequential update of the weights. After some derivation on the board, we have (see p. 121 in Särkkä [1] and p. 5 in Cappé et al. [2])

wt^i ∝ [p(yt|xt^i) p(xt^i|xt−1^i) / h(xt^i|x0:t−1^i, y1:t)] × wt−1^i.

However, the dependence of wt on wt−1 is a curse (with remedy), as described on the next slide.

[1] Särkkä, available here. [2] Cappé, Godsill and Moulines, available here.


Page 32: Particle MCMC methods for parameter inference in state space models

The particle degeneracy problem

Particle degeneracy occurs when at time t all but one of the importance weights wt^i are close to zero. This implies a poor approximation to p(yt|y1:t−1).

Notice that when a particle gets a zero weight (or a “small” positive weight that your computer sets to zero → numerical underflow) that particle is doomed! Since

wt^i ∝ [p(yt|xt^i) p(xt^i|xt−1^i) / h(xt^i|x0:t−1^i, y1:t)] × wt−1^i

if for a given i we have wt−1^i = 0, then particle i will have zero weight for all subsequent times.


Page 33: Particle MCMC methods for parameter inference in state space models

The particle degeneracy problem

For example, suppose a new data point yt is encountered and this is an outlier.

Then most particles will end up far away from yt and their weights will be ≈ 0.

Those particles will die.

Eventually, for a time horizon T which is long enough, all but one particle will have zero weight.

This means that the methodology is very fragile; particle degeneracy has affected SMC methods for decades.


Page 34: Particle MCMC methods for parameter inference in state space models

The Resampling idea

A life-saving solution is to use resampling with replacement (Gordon et al. [3]).

1  interpret w̃t^i as the probability of sampling xt^i from the weighted set {xt^i, w̃t^i, i = 1, ..., N}, with w̃t^i := wt^i / ∑_i wt^i.

2  draw N times with replacement from the weighted set. Replace the old particles with the new ones {x̃t^1, ..., x̃t^N}.

3  Reset the weights to wt^i = 1/N (the resampling has destroyed the information on “how” we reached time t).

Since resampling is done with replacement, a particle with a large weight is likely to be drawn multiple times. Particles with very small weights are not likely to be drawn at all. Nice!

[3] Gordon, Salmond and Smith. IEEE Proceedings F. 140(2), 1993.

Page 35: Particle MCMC methods for parameter inference in state space models

Sequential Importance Sampling Re-sampling (SISR)

We are now in a position to introduce a method that samples from h(·), so that these samples can be used as if they were from p(x0:t|y1:t−1), provided they are appropriately re-weighted.

1  Sample xt^1, ..., xt^N ∼ h(·) (same as before)

2  compute weights wt^i (same as before)

3  normalize weights w̃t^i = wt^i / ∑_{i=1}^{N} wt^i

4  We have a discrete probability distribution {xt^i, w̃t^i}, i = 1, ..., N

5  Resample N times with replacement from the set {xt^1, ..., xt^N} having associated probabilities {w̃t^1, ..., w̃t^N}, to generate another sample {x̃t^1, ..., x̃t^N}.

6  {x̃t^1, ..., x̃t^N} is an approximate sample from p(x0:t|y1:t−1).

Sketch of Proof on page 111 in Wilkinson’s book.


Page 36: Particle MCMC methods for parameter inference in state space models

The next animation illustrates the concept of sequential importance sampling with resampling, using N = 5 particles.

Light blue: observed trajectory (data)

dark blue: forward simulation of the latent process {Xt}

pink balls: particles xt^i

green balls: selected particles x̃t^i from resampling

red curve: density p(yt|xt)

Pages 37–60: [animation frames illustrating sequential importance sampling with resampling for N = 5 particles; not reproducible in text.]

Page 61: Particle MCMC methods for parameter inference in state space models

Bootstrap filter

This is the method we will actually use to ease Bayesian inference. The bootstrap filter is a simple example of SISR.

Choose h(xt^i|x0:t−1^i, y1:t) := p(xt^i|xt−1^i)

Recall that in SISR, after performing resampling, all weights are reset → wt−1^i = 1/N

Thus

wt^i ∝ [p(yt|xt^i) p(xt^i|xt−1^i) / h(xt^i|x0:t−1^i, y1:t)] × wt−1^i = (1/N) p(yt|xt^i) ∝ p(yt|xt^i), i = 1, ..., N

This is a SISR strategy, therefore the resulting particles are still approximate draws from p(x0:t|y1:t−1).


Page 62: Particle MCMC methods for parameter inference in state space models

Bootstrap filter in detail

1  t = 0 (initialize): x0^i ∼ π(x0), assign w̃0^i = 1/N, i = 1, ..., N

2  at the current t assume we have the weighted particles {xt^i, w̃t^i}

3  from the current sample, resample with replacement N times to obtain {x̃t^i, i = 1, ..., N}

4  From your model propagate forward xt+1^i ∼ p(xt+1|x̃t^i), i = 1, ..., N

5  Compute wt+1^i = p(yt+1|xt+1^i) and normalise the weights: w̃t+1^i = wt+1^i / ∑_{i=1}^{N} wt+1^i

6  set t := t + 1 and if t < T go to step 2.
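A minimal MATLAB sketch of this algorithm for the nonlinear SSM example above, returning the log-likelihood approximation derived on the following slides (my own code; the author's own MATLAB code for the example is linked on a later slide):

function loglik = bootstrap_loglik(y, q, r, N)
% Bootstrap particle filter for the nonlinear SSM example.
% y: T-by-1 data; q, r: state/observation noise variances; N: number of particles.
T = length(y); loglik = 0;
x = zeros(N,1);                                  % deterministic x0 = 0 for all particles
for t = 1:T
    % propagate forward through the transition density (step 4)
    x = 0.5*x + 25*x./(1 + x.^2) + 8*cos(1.2*(t-1)) + sqrt(q)*randn(N,1);
    % unnormalised log weights log p(yt|xt^i) (step 5), stabilised as on later slides
    logw = -0.5*log(2*pi*r) - 0.5*(y(t) - 0.05*x.^2).^2/r;
    c = max(logw); w = exp(logw - c);
    loglik = loglik + c + log(sum(w)) - log(N);  % log p^(yt|y1:t-1)
    w = w/sum(w);
    % multinomial resampling with replacement (step 3; see the Appendix)
    cw = cumsum(w);
    idx = arrayfun(@(u) find(cw - u > 0, 1), rand(N,1));
    x = x(idx);
end
end

The loop applies the slide's steps in the order propagate → weight → resample, which is the same scheme with the resampling of step 3 done at the end of each iteration.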


Page 63: Particle MCMC methods for parameter inference in state space models

So the bootstrap filter by Gordon et al. (1993) [4] easily provides what we need!

p̂(yt|y1:t−1) = (1/N) ∑_{i=1}^{N} wt^i

Finally, a likelihood approximation:

p̂(y1:T) = p̂(y1) ∏_{t=2}^{T} p̂(yt|y1:t−1)

Put θ back in the notation to obtain:

approximate maximum likelihood: θ̂mle = argmax_θ p̂(y1:T; θ)

or

exact Bayesian inference by using p̂(y1:T|θ) inside Metropolis-Hastings. Why exact? Let's check it after the example...

[4] Gordon, Salmond and Smith. IEEE Proceedings F. 140(2), 1993.
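Combining the two ideas gives a particle marginal Metropolis-Hastings in a few lines (a sketch of mine: it reuses the bootstrap_loglik sketch above, y is the simulated data from the example, and for brevity it uses flat priors on (0, ∞) instead of the inverse-gamma priors used later):

R = 10000; N = 500;
theta = [1, 0.1];                                % (q, r) started far away, as on the slides
ll = bootstrap_loglik(y, theta(1), theta(2), N);
chain = zeros(R, 2);
for rr = 1:R
    thstar = theta + 0.2*randn(1,2);             % Gaussian random walk proposal
    if all(thstar > 0)                           % flat prior on (0,inf): reject outside
        llstar = bootstrap_loglik(y, thstar(1), thstar(2), N);
        if log(rand) < llstar - ll               % symmetric proposal cancels in the ratio
            theta = thstar; ll = llstar;         % crucially, ll at the current theta is
        end                                      % stored, never recomputed
    end
    chain(rr,:) = theta;
end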

Page 64: Particle MCMC methods for parameter inference in state space models

Back to the nonlinear SSM example

We can now comment on how we obtained the previously shown results (reproposed here). We used the bootstrap filter with N = 500 particles and R = 10,000 MCMC iterations.

forward propagation: xt+1 = 0.5 xt + 25 xt/(1 + xt²) + 8 cos(1.2 t) + vt+1

weights: wt+1^i = N(yt+1; 0.05 (xt+1^i)², r)

[Figure: the same trace plots and empirical PDFs of p(q|y1:T) and p(r|y1:T) shown earlier; acceptance rate ≈ 11%.]


Page 65: Particle MCMC methods for parameter inference in state space models

Numerical issues with particle degeneracy

When coding your algorithms, try to consider the following before normalizing the weights:

code unnormalised weights on the log-scale: e.g. when wt^i = p(yt|xt^i) is a Gaussian pdf, the exp() will likely produce an underflow (wt^i = 0) for xt^i far from yt. Solution: reason in terms of logw instead of w.

However, afterwards we necessarily have to go back to w := exp(logw) and then normalize, and the above might still not be enough. Solution: subtract the maximum (log)weight from each (log)weight, e.g. set logw := logw − max(logw). This is totally ok: the relative importance of the particles is unaffected, as all weights are scaled by the same constant exp(c), with c = max(logw).


Page 66: Particle MCMC methods for parameter inference in state space models

Safe (log)likelihood computation

As usual, it is better to compute the (approximate) loglikelihood instead of the likelihood:

log p̂(y1:T|θ) = ∑_{t=1}^{T} ( −log N + log ∑_{i=1}^{N} wt^i )

At time t set ct = max{log wt^i, i = 1, ..., N} and set wt^{i∗} := exp(log wt^i − ct).

Once all the wt^{i∗} are available, compute log ∑_{i=1}^{N} wt^i = ct + log ∑_{i=1}^{N} wt^{i∗}.

(The latter follows from wt^i = wt^{i∗} exp{ct}, which you don't need to evaluate.)


Page 67: Particle MCMC methods for parameter inference in state space models

Incredible result

Quite astonishingly, Andrieu and Roberts [5] proved that using an unbiased estimate of the likelihood function inside the MCMC routine is sufficient to obtain exact Bayesian inference!

That is using the acceptance ratio

A = [p̂(y1:T|θ∗) / p̂(y1:T|θ)] × [π(θ∗) / π(θ)] × [q(θ|θ∗) / q(θ∗|θ)]

will return a Markov chain with stationary distribution π(θ|y1:T), regardless of the finite number N of particles used to approximate the likelihood!

The good news is that E(p̂(y1:T|θ)) = p(y1:T|θ), with p̂(y1:T|θ) obtained via SMC.

[5] Andrieu and Roberts (2009), Annals of Statistics, 37(2), 697–725.

Page 68: Particle MCMC methods for parameter inference in state space models

The previous result will be considered, in my opinion, one of the most important statistical results of the XXI century.

In fact, it offers an “exact-approximate” approach where, because of computing limitations, we can only produce N < ∞ particles, while still being reassured to obtain exact (Bayesian) inference under minor assumptions.

But let's take a quick (technically informal) look at why it works.

Key result: unbiasedness (Del Moral 2004 [6])

We have that

E(p̂(y1:T|θ)) = ∫ p̂(y1:T|θ, ξ) p(ξ) dξ = p(y1:T|θ)

with ξ ∼ p(ξ) the vector of all random variates generated during SMC (both to propagate forward the state and to perform the particle resampling).

[6] Easier to look at Pitt, Silva, Giordani, Kohn. J. Econometrics 171, 2012.

Page 69: Particle MCMC methods for parameter inference in state space models

To prove the exactness of the approach we look at the (easier and less general) argument in sec. 2.2 of Pitt, Silva, Giordani, Kohn. J. Econometrics 171, 2012.

To simplify the notation take y := y1:T .

Let π̂(θ, ξ|y) denote the approximate joint posterior of (θ, ξ) obtained via SMC:

π̂(θ, ξ|y) = p̂(y|θ, ξ) p(ξ) π(θ) / p(y)

(notice ξ and θ are assumed a-priori independent)

Notice we put p(y), not p̂(y), at the denominator: this follows from the unbiasedness assumption, as we obtain

∫∫ p̂(y|θ, ξ) p(ξ) π(θ) dξ dθ = ∫ π(θ) { ∫ p̂(y|θ, ξ) p(ξ) dξ } dθ = ∫ π(θ) p(y|θ) dθ = p(y).


Page 70: Particle MCMC methods for parameter inference in state space models

The exact (unavailable) posterior of θ is

π(θ|y) = p(y|θ) π(θ) / p(y)

therefore the marginal likelihood (evidence) is

p(y) = p(y|θ) π(θ) / π(θ|y)

and

π̂(θ, ξ|y) = p̂(y|θ, ξ) p(ξ) π(θ) / p(y) = π(θ|y) p̂(y|θ, ξ) p(ξ) / p(y|θ)

(the π(θ) factors cancel)


Page 71: Particle MCMC methods for parameter inference in state space models

Now, we know that applying an MCMC targeting π̂(θ, ξ|y) and then discarding the output pertaining to ξ corresponds to integrating out ξ from the posterior:

∫ π̂(θ, ξ|y) dξ = [π(θ|y) / p(y|θ)] ∫ p̂(y|θ, ξ) p(ξ) dξ = π(θ|y),

since the inner integral equals E(p̂(y|θ)) = p(y|θ).

We are thus performing a pseudo-marginal approach: “marginal” because we disregard ξ; “pseudo” because we use p̂(·), not p(·).

Therefore we proved that using MCMC on an (artificially) augmented posterior, and then discarding from the output all the random variates created during SMC, returns exact Bayesian inference.

Notice that discarding the ξ is something that we naturally do in Metropolis-Hastings, hence nothing strange is happening here. The ξ are just instrumental, uninteresting variates, independent of θ and independent of {Xt}.


Page 72: Particle MCMC methods for parameter inference in state space models

One more example

Let's study the stochastic Ricker model:

yt ∼ Pois(φNt)
Nt = r · Nt−1 e^{−Nt−1 + et}, et ∼ N(0, σ²)

Nt is the size of a population at t

r is the growth rate

φ is a scale parameter

et is environmental noise

So this example has an observation model for yt given by a discrete distribution.

Classic Kalman-based filtering can't accommodate discreteness.
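A minimal MATLAB sketch simulating the Ricker model with the parameter values used a few slides below (my own code; poissrnd requires the Statistics Toolbox):

T = 50; r = 44.7; sigma = 0.3; phi = 10; N0 = 7;
Npop = zeros(T,1); y = zeros(T,1);
Nold = N0;
for t = 1:T
    e = sigma*randn;                    % environmental noise et ~ N(0, sigma^2)
    Npop(t) = r*Nold*exp(-Nold + e);    % Ricker map
    y(t) = poissrnd(phi*Npop(t));       % observation yt ~ Pois(phi*Nt)
    Nold = Npop(t);
end
plot(1:T, y, 'o-', 1:T, Npop); legend('y','N')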


Page 73: Particle MCMC methods for parameter inference in state space models

We can use the R package pomp to fit this model.

pomp supports a number of inference methods of interest to us:

particle marginal methods

iterated filtering

approximate Bayesian computation (ABC)

synthetic likelihoods

...and more.

Essentially, a user who's interested in fitting an SSM can benefit from a well-tested platform and experiment with a number of possibilities.

I had never used it until 4 days ago (!). But now I have a working example (also set as an optional exercise!).


Page 74: Particle MCMC methods for parameter inference in state space models

Here are 50 observations from the Ricker model at times (1, 2, ..., 50), generated with (r, σ, φ) = (44.7, 0.3, 10) and starting size N0 = 7 at t = 0.

[Figure: three panels showing the observations y, the latent population size N and the noise realizations e against time 0–50.]

You can also notice the path of the (latent) process Nt, as well as the (not so interesting) realizations of et.


Page 75: Particle MCMC methods for parameter inference in state space models

The pomp pmcmc function runs a pseudo-marginal MCMC. Assume we are only interested in (r, φ) and keep σ = 0.3 constant (“known”). Here we use 500 particles with 5,000 MCMC iterations. We start at (r, φ) = (7.4, 5). We assumed flat (improper) priors.

[Figure: PMCMC convergence diagnostics from pomp — trace plots of the log-likelihood, the log-prior, the number of filtering failures, and of the parameters r and φ over the 5,000 PMCMC iterations.]


Page 76: Particle MCMC methods for parameter inference in state space models

We used a non-adaptive Gaussian random walk (variances are kept fixed). You might get better results with an adaptive version.

A topic set as an (optional) exercise is to think about why the method fails at estimating the parameters of a nearly deterministic (smaller σ) stochastic Ricker model.

Furthermore, for Nt deterministic (σ = 0), where particle filters are not applicable (nor needed), exact likelihood calculation is also challenging.

Great coverage of this issue is in Fasiolo et al. (2015), arXiv:1411.4564, comparing particle marginal methods, approximate Bayesian computation, iterated filtering and more. A very much recommended read.


Page 77: Particle MCMC methods for parameter inference in state space models

Conclusions

We have outlined a powerful methodology for exact Bayesian inference, regardless of the number of particles N:

too small an N will have a negative impact on chain mixing → many rejections, sticky chain.

the methodology is perfectly suited for state-space modelling.

the methodology is “plug-and-play”, as besides knowing the observation density p(yt|xt) nothing else is required.

in fact we only need the ability to forward-simulate xt given xt−1, which can be performed numerically from the state model.

the above implies that knowledge of the transition density expression p(xt|xt−1) is not required. BINGO!


Page 78: Particle MCMC methods for parameter inference in state space models

Software

I'm not very much updated on all available software for parameter inference via SMC (I tend to write my code from scratch). Some possibilities are:

LibBi (C++ template library)

Biips (C++ with interfaces to MATLAB/R)

demo("PMCMC") in the R package smfsb.

R package pomp

accompanying MATLAB code for the book by S. Särkkä. Code available here.

the MATLAB code for the presented example is available at my Github page.


Page 79: Particle MCMC methods for parameter inference in state space models

Important issues

How to tune the number of particles?

Doucet, Pitt, and Kohn. Efficient implementation of Markov chain Monte Carlo when using an unbiased likelihood estimator. arXiv:1210.1871 (2012).

Pitt, dos Santos Silva, Giordani and Kohn. On some properties of Markov chain Monte Carlo simulation methods based on the particle filter. Journal of Econometrics 171, no. 2 (2012): 134-151.

Sherlock, Thiery, Roberts and Rosenthal. On the efficiency of pseudo-marginal random walk Metropolis algorithms. arXiv:1309.7209 (2013).


Page 80: Particle MCMC methods for parameter inference in state space models

Important issues

Comparison of resampling schemes.

Douc, Cappé and Moulines. Comparison of resampling schemes for particle filtering. http://biblio.telecom-paristech.fr/cgi-bin/download.cgi?id=5755

Hol, Schön and Gustafsson. On resampling algorithms for particle filters, http://people.isy.liu.se/rt/schon/Publications/HolSG2006.pdf


Page 81: Particle MCMC methods for parameter inference in state space models

Appendix


Page 82: Particle MCMC methods for parameter inference in state space models

Multinomial resampling

This is the simplest-to-explain (though not the most efficient) resampling scheme.

At time t we wish to sample from a population of weighted particles (xt^i, w̃t^i, i = 1, ..., N). What we actually do is sample N particle indices with replacement from the population (i, w̃t^i, i = 1, ..., N). This will be a sample of size N from a multinomial distribution.

Pick a particle from the “urn”: the larger its probability w̃t^i, the more likely it will be picked. Record its index i and put it back in the urn. Repeat for a total of N times.

To code the sampling procedure we just need to recall the inverse transform method. For a generic random variable X, let FX be an invertible cdf. We can sample an x from FX using x := FX^{−1}(u), with u ∼ U(0, 1).


Page 83: Particle MCMC methods for parameter inference in state space models

For example, let's start from a simple instance of the multinomial distribution: the Bernoulli distribution.

X ∈ {0, 1} with p = P(X = 1), 1 − p = P(X = 0). Then

FX(x) = 0 for x < 0; FX(x) = 1 − p for 0 ≤ x < 1; FX(x) = 1 for x ≥ 1.    (1)

Draw the “staircase” represented by the plot of FX. Generate a u ∼ U(0, 1) and “hit the staircase's steps”: if 0 < u ≤ 1 − p set x := 0, and if u > 1 − p set x := 1.

For the multinomial case it is a simple generalization. Drop time t and set w̃^i = pi. FX is a staircase with N steps. Shoot a u ∼ U(0, 1) and return the index

i := min{ i′ ∈ {1, ..., N} : (∑_{i=1}^{i′} pi) − u > 0 }.
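In MATLAB the rule above is a direct transcription (a sketch of mine):

function idx = multinomial_resample(w)
% w: normalised weights (N-by-1, summing to 1).
% Returns N indices sampled with replacement, with P(idx = i) = w(i).
N = length(w);
cw = cumsum(w); cw(end) = 1;             % guard against round-off in the last step
idx = zeros(N,1);
for n = 1:N
    idx(n) = find(cw - rand > 0, 1);     % smallest i' with (sum of w up to i') - u > 0
end
end

Usage: x = x(multinomial_resample(wtilde)); replaces the weighted particle cloud by an equally-weighted one.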


Page 84: Particle MCMC methods for parameter inference in state space models

Cool reads on Bayesian methods (titles were linked in the original slides)

You have an engineering/signal processing background: check S. Särkkä, “Bayesian Filtering and Smoothing” (free PDF from the author!).

You are a data scientist: check K. Murphy, “Machine Learning: a Probabilistic Perspective”.

You are a theoretical statistician: C. Robert, “The Bayesian Choice”.

You are interested in bioinformatics/systems biology: check D. Wilkinson, “Stochastic Modelling for Systems Biology, 2nd ed.”.

You are interested in inference for SDEs with applications to life sciences: check the book by Wilkinson above and C. Fuchs, “Inference for Diffusion Processes”.
