
Page 1: Shanghai tutorial

From MCMC to ABC Methods

From MCMC to ABC Methods

Christian P. Robert

Université Paris-Dauphine, IUF, & CREST

http://www.ceremade.dauphine.fr/~xian

O’Bayes 11, Shanghai, June 10, 2011

Page 2: Shanghai tutorial

From MCMC to ABC Methods

Outline

Computational issues in Bayesian statistics

The Metropolis-Hastings Algorithm

The Gibbs Sampler

Approximate Bayesian computation

ABC for model choice

Page 7: Shanghai tutorial

From MCMC to ABC Methods

Computational issues in Bayesian statistics

A typology of Bayes computational problems

(i). use of a complex parameter space, as for instance in constrained parameter sets like those resulting from imposing stationarity constraints in dynamic models;

(ii). use of a complex sampling model with an intractable likelihood, as for instance in some latent variable or graphical models or in inverse problems;

(iii). use of a huge dataset;

(iv). use of a complex prior distribution (which may be the posterior distribution associated with an earlier sample);

(v). use of a particular inferential procedure, as for instance Bayes factors

B^\pi_{01}(x) = \frac{P(\theta \in \Theta_0 \mid x)}{P(\theta \in \Theta_1 \mid x)} \Big/ \frac{\pi(\theta \in \Theta_0)}{\pi(\theta \in \Theta_1)}.

Page 8: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

The Metropolis-Hastings Algorithm

Computational issues in Bayesian statistics

The Metropolis-Hastings Algorithm
  Monte Carlo basics
  Monte Carlo Methods based on Markov Chains
  The Metropolis–Hastings algorithm
  Random-walk Metropolis-Hastings algorithms

The Gibbs Sampler

Approximate Bayesian computation

ABC for model choice

Page 9: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Monte Carlo basics

General purpose

Given a density π known up to a normalizing constant, and an integrable function h, compute

\Pi(h) = \int h(x)\,\bar\pi(x)\,\mu(dx) = \frac{\int h(x)\,\pi(x)\,\mu(dx)}{\int \pi(x)\,\mu(dx)}

when \int h(x)\pi(x)\mu(dx) is intractable.

Page 11: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Monte Carlo basics

Monte Carlo 101

Generate an iid sample x1, . . . , xN from π and estimate Π(h) by

\hat\Pi^{MC}_N(h) = N^{-1} \sum_{i=1}^N h(x_i).

LLN: \hat\Pi^{MC}_N(h) \xrightarrow{as} \Pi(h)

If \Pi(h^2) = \int h^2(x)\pi(x)\mu(dx) < \infty,

CLT: \sqrt{N}\left(\hat\Pi^{MC}_N(h) - \Pi(h)\right) \xrightarrow{L} N\left(0,\ \Pi\{[h - \Pi(h)]^2\}\right).

Caveat announcing MCMC

Often impossible or inefficient to simulate directly from Π
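
For concreteness, a minimal R sketch of this vanilla Monte Carlo estimate (an added illustration, not part of the original slides), with π taken as a standard normal and h(x) = x²:

N <- 1e5
x <- rnorm(N)                    # iid sample from pi = N(0, 1)
h <- function(x) x^2
pi_hat <- mean(h(x))             # Monte Carlo estimate of Pi(h) (true value 1)
se_hat <- sd(h(x)) / sqrt(N)     # CLT-based standard error
c(estimate = pi_hat, std.error = se_hat)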

Page 13: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Monte Carlo basics

Importance Sampling

For Q a proposal distribution such that Q(dx) = q(x)µ(dx), alternative representation

\Pi(h) = \int h(x)\,\{\pi/q\}(x)\, q(x)\,\mu(dx).

Principle of importance

Generate an iid sample x_1, \ldots, x_N \sim Q and estimate Π(h) by

\hat\Pi^{IS}_{Q,N}(h) = N^{-1} \sum_{i=1}^N h(x_i)\,\{\pi/q\}(x_i).

Page 15: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Monte Carlo basics

Properties of importance

ThenLLN: ΠIS

Q,N (h)as−→ Π(h) and if Q((hπ/q)2) < ∞,

CLT:√N(ΠIS

Q,N (h)−Π(h))L N

(0, Q{(hπ/q −Π(h))2}

).

Caveat

If normalizing constant of π unknown, impossible to use ΠISQ,N

Generic problem in Bayesian Statistics: π(θ|x) ∝ f(x|θ)π(θ).

Page 18: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Monte Carlo basics

Self-Normalised Importance Sampling

Self-normalized version

\hat\Pi^{SNIS}_{Q,N}(h) = \left(\sum_{i=1}^N \{\pi/q\}(x_i)\right)^{-1} \sum_{i=1}^N h(x_i)\,\{\pi/q\}(x_i).

LLN: \hat\Pi^{SNIS}_{Q,N}(h) \xrightarrow{as} \Pi(h)

and if \Pi((1 + h^2)(\pi/q)) < \infty,

CLT: \sqrt{N}\left(\hat\Pi^{SNIS}_{Q,N}(h) - \Pi(h)\right) \xrightarrow{L} N\left(0,\ \Pi\{(\pi/q)\,(h - \Pi(h))^2\}\right).

© The quality of the SNIS approximation depends on the choice of Q
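
A hedged R sketch of self-normalised importance sampling (added illustration, not from the slides): the target is a standard normal known only up to a constant, the proposal a standard Cauchy, and h(x) = x²:

N <- 1e5
x <- rcauchy(N)                    # iid sample from the proposal q
w <- exp(-x^2 / 2) / dcauchy(x)    # unnormalised weights pi(x)/q(x)
h <- function(x) x^2
sum(w * h(x)) / sum(w)             # SNIS estimate of Pi(h) (true value 1)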

Page 20: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Monte Carlo Methods based on Markov Chains

Running Monte Carlo via Markov Chains (MCMC)

It is not necessary to use a sample from the distribution f to approximate the integral

I = \int h(x) f(x)\, dx,

We can obtain X_1, \ldots, X_n \sim f (approx) without directly simulating from f, using an ergodic Markov chain with stationary distribution f

Page 22: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Monte Carlo Methods based on Markov Chains

Running Monte Carlo via Markov Chains (2)

Idea

For an arbitrary starting value x(0), an ergodic chain (X(t)) is generated using a transition kernel with stationary distribution f

◮ Ensures the convergence in distribution of (X(t)) to a random variable from f.

◮ For a “large enough” T0, X(T0) can be considered as distributed from f

◮ Produces a dependent sample X(T0), X(T0+1), . . ., which is generated from f, sufficient for most approximation purposes.

Page 23: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

The Metropolis–Hastings algorithm

The Metropolis–Hastings algorithm

Basics

The algorithm uses the target density f and a conditional density q(y|x), called the instrumental (or proposal) distribution

Page 24: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

The Metropolis–Hastings algorithm

The MH algorithm

Algorithm (Metropolis–Hastings)

Given x(t),

1. Generate Y_t \sim q(y \mid x^{(t)}).

2. Take

X^{(t+1)} = \begin{cases} Y_t & \text{with prob. } \rho(x^{(t)}, Y_t),\\ x^{(t)} & \text{with prob. } 1 - \rho(x^{(t)}, Y_t), \end{cases}

where

\rho(x, y) = \min\left\{ \frac{f(y)}{f(x)}\,\frac{q(x \mid y)}{q(y \mid x)},\ 1 \right\}.
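
A generic one-step implementation in R (an added sketch, not the authors' code); f, rq and dq are user-supplied placeholders for the target density, the proposal sampler and the proposal density:

# one Metropolis-Hastings transition: accept the proposal with probability rho
mh_step <- function(x, f, rq, dq) {
  y <- rq(x)                                          # Y_t ~ q(.|x)
  rho <- min(1, f(y) * dq(x, y) / (f(x) * dq(y, x)))
  if (runif(1) < rho) y else x
}

# toy usage: N(0,1) target, uniform proposal on (x - 1, x + 1)
f  <- function(x) exp(-x^2 / 2)
rq <- function(x) runif(1, x - 1, x + 1)
dq <- function(y, x) dunif(y, x - 1, x + 1)
chain <- numeric(5000); chain[1] <- 0
for (t in 2:5000) chain[t] <- mh_step(chain[t - 1], f, rq, dq)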

Page 25: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

The Metropolis–Hastings algorithm

Features

◮ Independent of normalizing constants for both f and q(·|x) (i.e., of constants independent of x)

◮ Never moves to values with f(y) = 0

◮ The chain (x(t))t may take the same value several times in a row, even though f is a density wrt Lebesgue measure

◮ The sequence (yt)t is usually not a Markov chain

Page 27: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

The Metropolis–Hastings algorithm

Convergence properties

Under irreducibility,

1. The M-H Markov chain is reversible, with invariant/stationary density f, since it satisfies the detailed balance condition

f(y) K(y, x) = f(x) K(x, y)

2. As f is a probability measure, the chain is positive recurrent

Page 30: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

The Metropolis–Hastings algorithm

Convergence properties (2)

4. If

q(y|x) > 0 for every (x, y),    (2)

the chain is irreducible

5. For M-H, f-irreducibility implies Harris recurrence

6. Thus, for M-H satisfying (1) and (2)

(i) For h with E_f|h(X)| < \infty,

\lim_{T\to\infty} \frac{1}{T} \sum_{t=1}^T h(X^{(t)}) = \int h(x)\, df(x) \quad \text{a.e. } f.

(ii) and

\lim_{n\to\infty} \left\| \int K^n(x, \cdot)\, \mu(dx) - f \right\|_{TV} = 0

for every initial distribution µ, where K^n(x, ·) denotes the kernel for n transitions.

Page 31: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

Random walk Metropolis–Hastings

Use of a local perturbation as proposal

Y_t = X^{(t)} + \varepsilon_t,

where ε_t ∼ g, independent of X^{(t)}.

The instrumental density is of the form g(y − x) and the Markov chain is a random walk if we take g to be symmetric: g(x) = g(−x)

Page 32: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

Algorithm (Random walk Metropolis)

Given x(t)

1. Generate Yt ∼ g(y − x(t))

2. Take

X^{(t+1)} = \begin{cases} Y_t & \text{with prob. } \min\left\{1,\ \dfrac{f(Y_t)}{f(x^{(t)})}\right\},\\ x^{(t)} & \text{otherwise.} \end{cases}

Page 33: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

Optimizing the Acceptance Rate

Problem of choice of the transition kernel from a practical point of view. Most common alternatives:

1. an instrumental density g which approximates f, such that f/g is bounded for uniform ergodicity to apply;

2. a random walk

In both cases, the choice of g is critical.

Page 35: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

Case of the random walk

Different approach to acceptance rates

A high acceptance rate does not indicate that the algorithm is moving correctly, since it indicates that the random walk is moving too slowly on the surface of f.

If x(t) and yt are close, i.e. f(x(t)) ≃ f(yt), then yt is accepted with probability

\min\left( \frac{f(y_t)}{f(x^{(t)})},\ 1 \right) \simeq 1.

For multimodal densities with well-separated modes, the negative effect of limited moves on the surface of f clearly shows.

Page 37: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

Case of the random walk (2)

If the average acceptance rate is low, the successive values of f(yt) tend to be small compared with f(x(t)), which means that the random walk moves quickly on the surface of f since it often reaches the “borders” of the support of f

In small dimensions, aim at an average acceptance rate of 50%. In large dimensions, at an average acceptance rate of 25%.

[Gelman,Gilks and Roberts, 1995]

Page 39: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

Example (Noisy AR(1))

Hidden Markov chain from a regular AR(1) model,

x_{t+1} = \varphi x_t + \epsilon_{t+1}, \qquad \epsilon_t \sim N(0, \tau^2)

and observables

y_t \mid x_t \sim N(x_t^2, \sigma^2)

The distribution of x_t given x_{t-1}, x_{t+1} and y_t is proportional to

\exp\left\{ -\frac{1}{2\tau^2} \left[ (x_t - \varphi x_{t-1})^2 + (x_{t+1} - \varphi x_t)^2 + \frac{\tau^2}{\sigma^2}\,(y_t - x_t^2)^2 \right] \right\}.

Page 40: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

Example (Noisy AR(1) continued)

For a Gaussian random walk with scale ω small enough, the random walk never jumps to the other mode. But if the scale ω is sufficiently large, the Markov chain explores both modes and gives a satisfactory approximation of the target distribution.

Page 41: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

Markov chain based on a random walk with scale ω = .1.

Page 42: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

Markov chain based on a random walk with scale ω = .5.

Page 43: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

MA(2)

x_t = \epsilon_t - \theta_1 \epsilon_{t-1} - \theta_2 \epsilon_{t-2}

Since the constraints on (ϑ1, ϑ2) are well defined, use of a flat prior over the triangle. Simple representation of the likelihood:

library(mnormt)   # provides dmnorm, the multivariate normal density

# log-likelihood of the MA(2) model at theta = (theta1, theta2), based on the
# Gaussian representation of the observed series y (assumed in the workspace)
ma2like = function(theta){
  n = length(y)
  # Toeplitz autocovariance matrix of the MA(2) process (unit innovation variance)
  sigma = toeplitz(c(1 + theta[1]^2 + theta[2]^2,
                     theta[1] + theta[1]*theta[2], theta[2], rep(0, n-3)))
  dmnorm(y, rep(0, n), sigma, log=TRUE)
}

Page 44: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

Basic RWHM for MA(2)

Algorithm 1 RW-HM-MA(2) sampler

set ω and ϑ^{(1)}
for i = 2 to T do
  generate ϑ_j ∼ U(ϑ_j^{(i−1)} − ω, ϑ_j^{(i−1)} + ω)
  set p = 0 and ϑ^{(i)} = ϑ^{(i−1)}
  if ϑ within the triangle then
    p = exp(ma2like(ϑ) − ma2like(ϑ^{(i−1)}))
  end if
  if U < p then
    ϑ^{(i)} = ϑ
  end if
end for
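
A hedged R translation of this sampler (an illustrative sketch, not the authors' code): it reuses ma2like from the previous slide, assumes the series y is in the workspace, and spells out the MA(2) identifiability triangle with vertices (−2, 1), (2, 1), (0, −1):

rwhm_ma2 <- function(T = 1e4, omega = 0.5, start = c(0, 0)) {
  theta <- matrix(NA, T, 2)
  theta[1, ] <- start
  for (i in 2:T) {
    prop <- runif(2, theta[i - 1, ] - omega, theta[i - 1, ] + omega)  # uniform random walk
    theta[i, ] <- theta[i - 1, ]
    # accept only proposals inside the identifiability triangle
    if (prop[2] < 1 && prop[1] + prop[2] > -1 && prop[1] - prop[2] < 1) {
      p <- exp(ma2like(prop) - ma2like(theta[i - 1, ]))
      if (runif(1) < p) theta[i, ] <- prop
    }
  }
  theta
}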

Page 45: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

Outcome

Result with a simulated sample of 100 points and ϑ1 = 0.6, ϑ2 = 0.2, and scale ω = 0.2

Page 46: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

Outcome

Result with a simulated sample of 100 points and ϑ1 = 0.6, ϑ2 = 0.2, and scale ω = 0.5

Page 47: Shanghai tutorial

From MCMC to ABC Methods

The Metropolis-Hastings Algorithm

Random-walk Metropolis-Hastings algorithms

Outcome

Result with a simulated sample of 100 points and ϑ1 = 0.6, ϑ2 = 0.2, and scale ω = 2.0

Page 48: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

The Gibbs Sampler

The Gibbs Sampler
  General Principles
  Slice sampling
  Convergence

Page 51: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

General Principles

General Principles

A very specific simulation algorithm based on the targetdistribution f :

1. Uses the conditional densities f1, . . . , fp from f

2. Start with the random variable X = (X1, . . . , Xp)

3. Simulate from the conditional densities,

X_i \mid x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_p \sim f_i(x_i \mid x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_p)

for i = 1, 2, . . . , p.

Page 52: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

General Principles

Algorithm (Gibbs sampler)

Given x^{(t)} = (x_1^{(t)}, \ldots, x_p^{(t)}), generate

1. X_1^{(t+1)} \sim f_1(x_1 \mid x_2^{(t)}, \ldots, x_p^{(t)});

2. X_2^{(t+1)} \sim f_2(x_2 \mid x_1^{(t+1)}, x_3^{(t)}, \ldots, x_p^{(t)}),

. . .

p. X_p^{(t+1)} \sim f_p(x_p \mid x_1^{(t+1)}, \ldots, x_{p-1}^{(t+1)})

X^{(t+1)} → X ∼ f
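
As a small added illustration (not in the slides): the two-stage Gibbs sampler for a bivariate normal with correlation ρ, whose full conditionals are univariate normals:

# Gibbs sampler for (X1, X2) ~ bivariate normal, zero means, unit variances, correlation rho
rho <- 0.8; T <- 5000
x <- matrix(0, T, 2)
for (t in 2:T) {
  # X1 | X2 = x2 ~ N(rho * x2, 1 - rho^2), and symmetrically for X2
  x[t, 1] <- rnorm(1, rho * x[t - 1, 2], sqrt(1 - rho^2))
  x[t, 2] <- rnorm(1, rho * x[t, 1],     sqrt(1 - rho^2))
}
cor(x[, 1], x[, 2])   # should be close to rho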

Page 54: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

General Principles

Properties

The full conditional densities f1, . . . , fp are the only densities used for simulation. Thus, even in a high-dimensional problem, all of the simulations may be univariate.

The Gibbs sampler is not reversible with respect to f. However, each of its p components is. Besides, it can be turned into a reversible sampler, either using the random scan Gibbs sampler or running instead the (double) sequence

f_1 \cdots f_{p-1}\, f_p\, f_{p-1} \cdots f_1

Page 58: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

General Principles

Limitations of the Gibbs sampler

Formally, a special case of a sequence of 1-D M-H kernels, all with acceptance rate uniformly equal to 1. The Gibbs sampler

1. limits the choice of instrumental distributions

2. requires some knowledge of f

3. is, by construction, multidimensional

4. does not apply to problems where the number of parametersvaries as the resulting chain is not irreducible.

Page 60: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

General Principles

A wee mixture problem

[Two scatterplots of the Gibbs output in the (µ1, µ2) plane: “Gibbs started at random” and “Gibbs stuck at the wrong mode”]

Page 62: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Slice sampling

Slice sampler as generic Gibbs

If f(θ) can be written as a product

\prod_{i=1}^k f_i(\theta),

it can be completed as

\prod_{i=1}^k \mathbb{I}_{0 \le \omega_i \le f_i(\theta)},

leading to the following Gibbs algorithm:

Page 63: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Slice sampling

Algorithm (Slice sampler)

Simulate

1. \omega_1^{(t+1)} \sim U_{[0,\, f_1(\theta^{(t)})]};

. . .

k. \omega_k^{(t+1)} \sim U_{[0,\, f_k(\theta^{(t)})]};

k+1. \theta^{(t+1)} \sim U_{A^{(t+1)}}, with

A^{(t+1)} = \{y;\ f_i(y) \ge \omega_i^{(t+1)},\ i = 1, \ldots, k\}.
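
An added R sketch of a one-factor slice sampler (illustration only), for the target f(θ) ∝ exp(−θ²/2), where the slice is an explicit interval:

T <- 5000
theta <- numeric(T)
for (t in 2:T) {
  omega <- runif(1, 0, exp(-theta[t - 1]^2 / 2))   # vertical step: uniform under the curve
  bound <- sqrt(-2 * log(omega))                   # slice {y : exp(-y^2/2) >= omega} = [-bound, bound]
  theta[t] <- runif(1, -bound, bound)              # horizontal step: uniform on the slice
}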

Page 64: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Slice sampling

Example of results with a truncated N (−3, 1) distribution


Number of Iterations 2

Page 65: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Slice sampling

Example of results with a truncated N (−3, 1) distribution


Number of Iterations 2, 3

Page 66: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Slice sampling

Example of results with a truncated N (−3, 1) distribution


Number of Iterations 2, 3, 4

Page 67: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Slice sampling

Example of results with a truncated N (−3, 1) distribution


Number of Iterations 2, 3, 4, 5

Page 68: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Slice sampling

Example of results with a truncated N (−3, 1) distribution


Number of Iterations 2, 3, 4, 5, 10

Page 69: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Slice sampling

Example of results with a truncated N (−3, 1) distribution


Number of Iterations 2, 3, 4, 5, 10, 50

Page 70: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Slice sampling

Example of results with a truncated N (−3, 1) distribution


Number of Iterations 2, 3, 4, 5, 10, 50, 100

Page 71: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Slice sampling

Good slices, tough slices

The slice sampler usually enjoys good theoretical properties (like geometric ergodicity, and even uniform ergodicity under bounded f and bounded X).
As k increases, the determination of the set A(t+1) may get increasingly complex.

Page 72: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Convergence

Properties of the Gibbs sampler

Theorem (Convergence)

For

(Y_1, Y_2, \cdots, Y_p) \sim g(y_1, \ldots, y_p),

if either [positivity condition]

(i) g^{(i)}(y_i) > 0 for every i = 1, \cdots, p implies that g(y_1, \ldots, y_p) > 0, where g^{(i)} denotes the marginal distribution of Y_i, or

(ii) the transition kernel is absolutely continuous with respect to g,

then the chain is irreducible and positive Harris recurrent.

Page 73: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Convergence

Properties of the Gibbs sampler (2)

Consequences

(i) If \int h(y) g(y)\, dy < \infty, then

\lim_{T\to\infty} \frac{1}{T} \sum_{t=1}^T h(Y^{(t)}) = \int h(y) g(y)\, dy \quad \text{a.e. } g.

(ii) If, in addition, (Y^{(t)}) is aperiodic, then

\lim_{n\to\infty} \left\| \int K^n(y, \cdot)\, \mu(dy) - g \right\|_{TV} = 0

for every initial distribution µ.

Page 74: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Convergence

Hammersley-Clifford theorem

An illustration that conditionals determine the joint distribution

Theorem

If the joint density g(y1, y2) has conditional distributions g1(y1|y2) and g2(y2|y1), then

g(y_1, y_2) = \frac{g_2(y_2 \mid y_1)}{\int g_2(v \mid y_1)\big/ g_1(y_1 \mid v)\, dv}.

[Hammersley & Clifford, circa 1970]

Page 75: Shanghai tutorial

From MCMC to ABC Methods

The Gibbs Sampler

Convergence

General HC decomposition

Under the positivity condition, the joint distribution g satisfies

g(y_1, \ldots, y_p) \propto \prod_{j=1}^p \frac{g_{\ell_j}(y_{\ell_j} \mid y_{\ell_1}, \ldots, y_{\ell_{j-1}}, y'_{\ell_{j+1}}, \ldots, y'_{\ell_p})}{g_{\ell_j}(y'_{\ell_j} \mid y_{\ell_1}, \ldots, y_{\ell_{j-1}}, y'_{\ell_{j+1}}, \ldots, y'_{\ell_p})}

for every permutation ℓ on {1, 2, . . . , p} and every y′ ∈ Y.

Page 76: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Approximate Bayesian computation

Computational issues in Bayesian statistics

The Metropolis-Hastings Algorithm

The Gibbs Sampler

Approximate Bayesian computation
  ABC basics
  Alphabet soup
  Calibration of ABC

ABC for model choice

Page 77: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

Untractable likelihoods

Cases when the likelihood function f(y|θ) is unavailable and when the completion step

f(y \mid \theta) = \int_{\mathcal{Z}} f(y, z \mid \theta)\, dz

is impossible or too costly because of the dimension of z

© MCMC cannot be implemented!

Page 78: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

Illustrations

Example

Stochastic volatility model: for t = 1, . . . , T,

y_t = \exp(z_t)\,\epsilon_t, \qquad z_t = a + b z_{t-1} + \sigma \eta_t,

T very large makes it difficult to include z within the simulated parameters

[Figure: highest weight trajectories of the latent volatility series against t]

Page 79: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

Illustrations

Example

Potts model: if y takes values on a grid Y of size k^n and

f(y \mid \theta) \propto \exp\Big\{ \theta \sum_{l \sim i} \mathbb{I}_{y_l = y_i} \Big\}

where l∼i denotes a neighbourhood relation, n moderately large prohibits the computation of the normalising constant

Page 80: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

Illustrations

Example

Inference on CMB: in cosmology, study of the Cosmic Microwave Background via likelihoods immensely slow to compute (e.g. WMAP, Planck), because of numerically costly spectral transforms [Data is a Fortran program]

[Kilbinger et al., 2010, MNRAS]

Page 81: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

Illustrations

Example

Phylogenetic tree: in population genetics, reconstitution of a common ancestor from a sample of genes via a phylogenetic tree that is close to impossible to integrate out [100 processor days with 4 parameters]

[Cornuet et al., 2009, Bioinformatics]

Page 84: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

The ABC method

Bayesian setting: target is π(θ)f(x|θ)

When the likelihood f(x|θ) is not in closed form, likelihood-free rejection technique:

ABC algorithm

For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating

θ′ ∼ π(θ),  z ∼ f(z|θ′),

until the auxiliary variable z is equal to the observed value, z = y.

[Tavaré et al., 1997]

Page 85: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

Why does it work?!

The proof is trivial:

f(\theta_i) \propto \sum_{z \in \mathcal{D}} \pi(\theta_i) f(z \mid \theta_i)\, \mathbb{I}_y(z) \propto \pi(\theta_i) f(y \mid \theta_i) = \pi(\theta_i \mid y).

[Accept–Reject 101]

Page 87: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

A as approximative

When y is a continuous random variable, the equality z = y is replaced with a tolerance condition,

ρ(y, z) ≤ ǫ

where ρ is a distance

Output distributed from

\pi(\theta)\, P_\theta\{\rho(y, z) < \epsilon\} \propto \pi(\theta \mid \rho(y, z) < \epsilon)

Page 88: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

ABC algorithm

Algorithm 2 Likelihood-free rejection sampler

for i = 1 to N do
  repeat
    generate θ′ from the prior distribution π(·)
    generate z from the likelihood f(·|θ′)
  until ρ{η(z), η(y)} ≤ ǫ
  set θi = θ′
end for

where η(y) defines a (not necessarily sufficient) statistic
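
A hedged R sketch of Algorithm 2 (added illustration; rprior, rmodel and eta are placeholders for the prior sampler, the data simulator and the summary statistic):

# likelihood-free (ABC) rejection sampler with Euclidean distance rho on summaries
abc_reject <- function(N, y, rprior, rmodel, eta, eps) {
  out <- numeric(N)
  eta_y <- eta(y)
  for (i in 1:N) {
    repeat {
      theta <- rprior()
      z <- rmodel(theta)
      if (sqrt(sum((eta(z) - eta_y)^2)) <= eps) break
    }
    out[i] <- theta
  }
  out
}

# toy usage: normal mean with N(0, 10) prior, summary = sample mean
y0 <- rnorm(50, mean = 2)
post <- abc_reject(500, y0, rprior = function() rnorm(1, 0, sqrt(10)),
                   rmodel = function(theta) rnorm(50, theta), eta = mean, eps = 0.1)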

Page 90: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

Output

The likelihood-free algorithm samples from the marginal in z of:

\pi_\epsilon(\theta, z \mid y) = \frac{\pi(\theta)\, f(z \mid \theta)\, \mathbb{I}_{A_{\epsilon,y}}(z)}{\int_{A_{\epsilon,y} \times \Theta} \pi(\theta) f(z \mid \theta)\, dz\, d\theta},

where A_{\epsilon,y} = \{z \in \mathcal{D} \mid \rho(\eta(z), \eta(y)) < \epsilon\}.

The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:

\pi_\epsilon(\theta \mid y) = \int \pi_\epsilon(\theta, z \mid y)\, dz \approx \pi(\theta \mid y).

Page 91: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

MA example

Back to the MA(q) model

x_t = \epsilon_t + \sum_{i=1}^q \vartheta_i \epsilon_{t-i}

Simple prior: uniform over the inverse [real and complex] roots in

Q(u) = 1 - \sum_{i=1}^q \vartheta_i u^i

under the identifiability conditions

Page 92: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

MA example

Back to the MA(q) model

x_t = \epsilon_t + \sum_{i=1}^q \vartheta_i \epsilon_{t-i}

Simple prior: uniform prior over the identifiability zone, e.g. the triangle for MA(2)

Page 94: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

MA example (2)

ABC algorithm thus made of

1. picking a new value (ϑ1, ϑ2) in the triangle

2. generating an iid sequence (ǫt)−q<t≤T

3. producing a simulated series (x′t)1≤t≤T

Distance: basic distance between the series

\rho\big((x'_t)_{1 \le t \le T},\ (x_t)_{1 \le t \le T}\big) = \sum_{t=1}^T (x_t - x'_t)^2

or distance between summary statistics like the q autocorrelations

\tau_j = \sum_{t=j+1}^T x_t x_{t-j}
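
A hedged R sketch of this ABC scheme for MA(2) (added illustration; the uniform prior over the triangle, the autocovariance summaries and the tolerance-as-quantile calibration follow the slides, while the function itself and its defaults are assumptions):

# ABC rejection for MA(2): summaries = (tau_1, tau_2), keep the closest fraction of simulations
abc_ma2 <- function(y, N = 1e5, keep = 0.01) {
  T <- length(y)
  eta <- function(x) c(sum(x[-1] * x[-T]), sum(x[-(1:2)] * x[-((T - 1):T)]))
  eta_y <- eta(y)
  theta <- matrix(NA, N, 2); dist <- numeric(N)
  for (i in 1:N) {
    repeat {   # uniform prior over the MA(2) identifiability triangle
      th <- c(runif(1, -2, 2), runif(1, -1, 1))
      if (th[2] < 1 && th[1] + th[2] > -1 && th[1] - th[2] < 1) break
    }
    eps <- rnorm(T + 2)
    z <- eps[-(1:2)] + th[1] * eps[2:(T + 1)] + th[2] * eps[1:T]   # simulated MA(2) series
    theta[i, ] <- th
    dist[i] <- sqrt(sum((eta(z) - eta_y)^2))
  }
  theta[dist <= quantile(dist, keep), ]   # tolerance set as a quantile of the distances
}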

Page 95: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

Comparison of distance impact

Evaluation of the tolerance on the ABC sample against both distances (ǫ = 100%, 10%, 1%, 0.1% quantiles) for an MA(2) model

Page 101: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

ABC basics

ABC advances

Simulating from the prior is often poor in efficiency

Either modify the proposal distribution on θ to increase the density of x's within the vicinity of y...

[Marjoram et al., 2003; Bortot et al., 2007; Sisson et al., 2007]

...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ǫ

[Beaumont et al., 2002]

...or even by including ǫ in the inferential framework [ABCµ]

[Ratmann et al., 2009]

Page 102: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

ABC-NP

Better usage of [prior] simulations by adjustment: instead of throwing away θ′ such that ρ(η(z), η(y)) > ǫ, replace the θ's with locally regressed

\theta^* = \theta - \{\eta(z) - \eta(y)\}^T \hat\beta

[Csilléry et al., TEE, 2010]

where \hat\beta is obtained by [NP] weighted least squares regression on (η(z) − η(y)) with weights

K_\delta\{\rho(\eta(z), \eta(y))\}

[Beaumont et al., 2002, Genetics]

Page 104: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

ABC-MCMC

Markov chain (θ(t)) created via the transition function

\theta^{(t+1)} =
\begin{cases}
\theta' \sim K_\omega(\theta' \mid \theta^{(t)}) & \text{if } x \sim f(x \mid \theta') \text{ is such that } x = y\\
& \text{and } u \sim U(0,1) \le \dfrac{\pi(\theta')\, K_\omega(\theta^{(t)} \mid \theta')}{\pi(\theta^{(t)})\, K_\omega(\theta' \mid \theta^{(t)})},\\
\theta^{(t)} & \text{otherwise,}
\end{cases}

has the posterior π(θ|y) as stationary distribution

[Marjoram et al., 2003]

Page 105: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

ABC-MCMC (2)

Algorithm 3 Likelihood-free MCMC sampler

Use Algorithm 2 to get (θ(0), z(0))
for t = 1 to N do
  Generate θ′ from Kω(·|θ(t−1)),
  Generate z′ from the likelihood f(·|θ′),
  Generate u from U[0,1],
  if u \le \dfrac{\pi(\theta')\, K_\omega(\theta^{(t-1)} \mid \theta')}{\pi(\theta^{(t-1)})\, K_\omega(\theta' \mid \theta^{(t-1)})}\, \mathbb{I}_{A_{\epsilon,y}}(z') then
    set (θ(t), z(t)) = (θ′, z′)
  else
    (θ(t), z(t)) = (θ(t−1), z(t−1)),
  end if
end for
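
A hedged R sketch of this likelihood-free MCMC sampler (added illustration; dprior, rmodel and eta are placeholders, and a Gaussian random walk stands in for Kω so that the kernel terms cancel):

# ABC-MCMC for a scalar theta: accept only if the pseudo-data falls in A_{eps,y}
abc_mcmc <- function(N, y, theta0, dprior, rmodel, eta, eps, omega) {
  theta <- numeric(N); theta[1] <- theta0     # theta0 e.g. from the rejection sampler
  eta_y <- eta(y)
  for (t in 2:N) {
    prop <- rnorm(1, theta[t - 1], omega)     # symmetric kernel K_omega
    z <- rmodel(prop)
    ok <- sqrt(sum((eta(z) - eta_y)^2)) <= eps
    if (ok && runif(1) <= dprior(prop) / dprior(theta[t - 1])) theta[t] <- prop
    else theta[t] <- theta[t - 1]
  }
  theta
}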

Page 106: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

Why does it work?

Acceptance probability that does not involve the calculation of the likelihood, since

\frac{\pi_\epsilon(\theta', z' \mid y)}{\pi_\epsilon(\theta^{(t-1)}, z^{(t-1)} \mid y)} \times \frac{K_\omega(\theta^{(t-1)} \mid \theta')\, f(z^{(t-1)} \mid \theta^{(t-1)})}{K_\omega(\theta' \mid \theta^{(t-1)})\, f(z' \mid \theta')}

= \frac{\pi(\theta')\, f(z' \mid \theta')\, \mathbb{I}_{A_{\epsilon,y}}(z')}{\pi(\theta^{(t-1)})\, f(z^{(t-1)} \mid \theta^{(t-1)})\, \mathbb{I}_{A_{\epsilon,y}}(z^{(t-1)})} \times \frac{K_\omega(\theta^{(t-1)} \mid \theta')\, f(z^{(t-1)} \mid \theta^{(t-1)})}{K_\omega(\theta' \mid \theta^{(t-1)})\, f(z' \mid \theta')}

= \frac{\pi(\theta')\, K_\omega(\theta^{(t-1)} \mid \theta')}{\pi(\theta^{(t-1)})\, K_\omega(\theta' \mid \theta^{(t-1)})}\, \mathbb{I}_{A_{\epsilon,y}}(z').

Page 108: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

ABCµ

[Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]

Use of a joint density

f(\theta, \epsilon \mid y) \propto \xi(\epsilon \mid y, \theta) \times \pi_\theta(\theta) \times \pi_\epsilon(\epsilon)

where y is the data, and ξ(ǫ|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and x when z ∼ f(z|θ)

Warning! Replacement of ξ(ǫ|y, θ) with a non-parametric kernel approximation.

Page 110: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

ABCµ details

Multidimensional distances ρk (k = 1, . . . , K) and errors ǫk = ρk(ηk(z), ηk(y)), with

\epsilon_k \sim \xi_k(\epsilon \mid y, \theta) \approx \hat\xi_k(\epsilon \mid y, \theta) = \frac{1}{B h_k} \sum_b K[\{\epsilon_k - \rho_k(\eta_k(z_b), \eta_k(y))\}/h_k]

then used in replacing ξ(ǫ|y, θ) with \min_k \hat\xi_k(\epsilon \mid y, \theta)

ABCµ involves acceptance probability

\frac{\pi(\theta', \epsilon')}{\pi(\theta, \epsilon)}\; \frac{q(\theta', \theta)\, q(\epsilon', \epsilon)}{q(\theta, \theta')\, q(\epsilon, \epsilon')}\; \frac{\min_k \hat\xi_k(\epsilon' \mid y, \theta')}{\min_k \hat\xi_k(\epsilon \mid y, \theta)}

Page 111: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

ABCµ multiple errors

[© Ratmann et al., PNAS, 2009]

Page 112: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

ABCµ for model choice

[© Ratmann et al., PNAS, 2009]

Page 114: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

Questions about ABCµ

For each model under comparison, marginal posterior on ǫ used to assess the fit of the model (HPD includes 0 or not).

◮ Is the data informative about ǫ? [Identifiability]

◮ How is the prior π(ǫ) impacting the comparison?

◮ How is using both ξ(ǫ|x0, θ) and πǫ(ǫ) compatible with a standard probability model? [remindful of Wilkinson]

◮ Where is the penalisation for complexity in the model comparison?

[X, Mengersen & Chen, 2010, PNAS]

Page 115: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

A PMC version

Generate a sample at iteration t by

\pi_t(\theta^{(t)}) \propto \sum_{j=1}^N \omega_j^{(t-1)} K_t(\theta^{(t)} \mid \theta_j^{(t-1)})

modulo acceptance of the associated x_t, and use an importance weight associated with an accepted simulation \theta_i^{(t)}

\omega_i^{(t)} \propto \pi(\theta_i^{(t)}) \big/ \pi_t(\theta_i^{(t)}).

© Still likelihood-free

[Beaumont et al., 2009]

Page 116: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

The ABC-PMC algorithm

Given a decreasing sequence of approximation levels ǫ1 ≥ . . . ≥ ǫT,

1. At iteration t = 1,
   For i = 1, . . . , N
     Simulate \theta_i^{(1)} \sim \pi(\theta) and x \sim f(x \mid \theta_i^{(1)}) until ρ(x, y) < ǫ1
     Set \omega_i^{(1)} = 1/N
   Take τ² as twice the empirical variance of the \theta_i^{(1)}'s

2. At iteration 2 ≤ t ≤ T,
   For i = 1, . . . , N, repeat
     Pick \theta_i^\star from the \theta_j^{(t-1)}'s with probabilities \omega_j^{(t-1)}
     Generate \theta_i^{(t)} \mid \theta_i^\star \sim N(\theta_i^\star, \sigma_t^2) and x \sim f(x \mid \theta_i^{(t)})
   until ρ(x, y) < ǫt
   Set \omega_i^{(t)} \propto \pi(\theta_i^{(t)}) \Big/ \sum_{j=1}^N \omega_j^{(t-1)} \varphi\big(\sigma_t^{-1}\{\theta_i^{(t)} - \theta_j^{(t-1)}\}\big)
   Take \tau_{t+1}^2 as twice the weighted empirical variance of the \theta_i^{(t)}'s
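
A hedged R sketch of ABC-PMC for a scalar parameter (added illustration; rprior, dprior, rmodel and dist are placeholders, and the decreasing tolerance vector eps is assumed given):

abc_pmc <- function(N, y, rprior, dprior, rmodel, dist, eps) {
  T <- length(eps)
  theta <- numeric(N)
  for (i in 1:N) {                                  # t = 1: plain ABC rejection from the prior
    repeat { theta[i] <- rprior(); if (dist(rmodel(theta[i]), y) < eps[1]) break }
  }
  w <- rep(1 / N, N)
  for (t in 2:T) {
    tau <- sqrt(2 * sum(w * (theta - sum(w * theta))^2))   # sigma_t^2 = twice the weighted variance
    new_theta <- numeric(N)
    for (i in 1:N) {
      repeat {
        star <- sample(theta, 1, prob = w)                 # pick from the previous weighted sample
        prop <- rnorm(1, star, tau)                        # Gaussian move
        if (dist(rmodel(prop), y) < eps[t]) { new_theta[i] <- prop; break }
      }
    }
    new_w <- sapply(new_theta, dprior) /
      sapply(new_theta, function(th) sum(w * dnorm(th, theta, tau)))
    theta <- new_theta
    w <- new_w / sum(new_w)
  }
  list(theta = theta, weights = w)
}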

Page 117: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

Sequential Monte Carlo

SMC is a simulation technique to approximate a sequence of related probability distributions πn, with π0 “easy” and πT as target.

Iterated IS: particles moved from time n−1 to time n via a kernel Kn and use of a sequence of extended targets \tilde\pi_n

\tilde\pi_n(z_{0:n}) = \pi_n(z_n) \prod_{j=0}^{n-1} L_j(z_{j+1}, z_j)

where the Lj's are backward Markov kernels [check that πn(zn) is a marginal]

[Del Moral, Doucet & Jasra, Series B, 2006]

Page 118: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

Sequential Monte Carlo (2)

Algorithm 4 SMC sampler

sample z_i^{(0)} \sim \gamma_0(x) (i = 1, . . . , N)
compute weights w_i^{(0)} = \pi_0(z_i^{(0)}) / \gamma_0(z_i^{(0)})
for t = 1 to N do
  if ESS(w^{(t−1)}) < N_T then
    resample the N particles z^{(t−1)} and set the weights to 1
  end if
  generate z_i^{(t)} \sim K_t(z_i^{(t-1)}, \cdot) and set the weights to

    w_i^{(t)} = w_i^{(t-1)} \frac{\pi_t(z_i^{(t)})\, L_{t-1}(z_i^{(t)}, z_i^{(t-1)})}{\pi_{t-1}(z_i^{(t-1)})\, K_t(z_i^{(t-1)}, z_i^{(t)})}

end for

[Del Moral, Doucet & Jasra, Series B, 2006]

Page 119: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

ABC-SMC

[Del Moral, Doucet & Jasra, 2009]

True derivation of an SMC-ABC algorithm

Use of a kernel Kn associated with target πǫn and derivation of the backward kernel

L_{n-1}(z, z') = \frac{\pi_{\epsilon_n}(z')\, K_n(z', z)}{\pi_{\epsilon_n}(z)}

Update of the weights

w_{in} \propto w_{i(n-1)} \frac{\sum_{m=1}^M \mathbb{I}_{A_{\epsilon_n}}(x_{in}^m)}{\sum_{m=1}^M \mathbb{I}_{A_{\epsilon_{n-1}}}(x_{i(n-1)}^m)}

when x_{in}^m \sim K(x_{i(n-1)}, \cdot)

Page 120: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

ABC-SMCM

Modification: Makes M repeated simulations of the pseudo-data z given the parameter, rather than using a single [M = 1] simulation, leading to a weight proportional to the number of accepted z_i's

\omega(\theta) = \frac{1}{M} \sum_{i=1}^M \mathbb{I}_{\rho(\eta(y), \eta(z_i)) < \epsilon}

[the limit in M means exact simulation from the (tempered) target]

Page 122: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

Properties of ABC-SMC

The ABC-SMC method properly uses a backward kernel L(z, z′) to simplify the importance weight and to remove the dependence on the unknown likelihood from this weight. The update of the importance weights is reduced to the ratio of the proportions of surviving particles.

Adaptivity in the ABC-SMC algorithm is only found in the on-line construction of the thresholds ǫt, slowly enough to keep a large number of accepted transitions.

Page 123: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

Semi-automatic ABC

Fearnhead and Prangle (2010) study ABC and the selection of the summary statistic, in close proximity to Wilkinson's proposal.

ABC is then considered from a purely inferential viewpoint and calibrated for estimation purposes.

Use of a randomised (or ‘noisy’) version of the summary statistics

\tilde\eta(y) = \eta(y) + \tau\epsilon

Derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic.

Page 125: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Alphabet soup

Summary statistics

Optimality of the posterior expectations of the parameters of interest as summary statistics!

Use of the standard quadratic loss function

(\theta - \theta_0)^T A (\theta - \theta_0).

Page 128: Shanghai tutorial

From MCMC to ABC Methods

Approximate Bayesian computation

Calibration of ABC

Which summary?

Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistic [except when done by the experimenters in the field]

Starting from a large collection of available summary statistics, Joyce and Marjoram (2008) consider their sequential inclusion into the ABC target, with a stopping rule based on a likelihood ratio test.

◮ Does not take into account the sequential nature of the tests

◮ Depends on the parameterisation

◮ Order of inclusion matters.

Page 129: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

ABC for model choice

Computational issues in Bayesian statistics

The Metropolis-Hastings Algorithm

The Gibbs Sampler

Approximate Bayesian computation

ABC for model choice
  Model choice
  Gibbs random fields
  Model choice via ABC
  Illustrations
  Generic ABC model choice

Page 130: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Model choice

Bayesian model choice

Several models M1, M2, . . . are considered simultaneously for a dataset y and the model index M is part of the inference.
Use of a prior distribution π(M = m), plus a prior distribution on the parameter conditional on the value m of the model index, πm(θm).
Goal is to derive the posterior distribution of M, a challenging computational target when models are complex.

Page 131: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Model choice

Generic ABC for model choice

Algorithm 5 Likelihood-free model choice sampler (ABC-MC)

for t = 1 to T do
  repeat
    Generate m from the prior π(M = m)
    Generate θm from the prior πm(θm)
    Generate z from the model fm(z|θm)
  until ρ{η(z), η(y)} < ǫ
  Set m(t) = m and θ(t) = θm
end for
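
A hedged R sketch of this model-choice sampler (added illustration; prior_m, rprior, rmodel and eta are placeholders for the prior model probabilities, per-model priors, simulators and the common summary statistic):

# ABC model choice: returns the accepted model indices m_out
abc_mc <- function(T, y, prior_m, rprior, rmodel, eta, eps) {
  m_out <- integer(T)
  eta_y <- eta(y)
  for (t in 1:T) {
    repeat {
      m <- sample(seq_along(prior_m), 1, prob = prior_m)   # draw a model index
      theta <- rprior[[m]]()                               # draw theta_m from its prior
      z <- rmodel[[m]](theta)                               # simulate data from model m
      if (sqrt(sum((eta(z) - eta_y)^2)) < eps) break
    }
    m_out[t] <- m
  }
  m_out          # table(m_out) / T approximates pi(M = m | y)
}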

Page 133: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Model choice

ABC estimates

Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m

\frac{1}{T} \sum_{t=1}^T \mathbb{I}_{m^{(t)} = m}.

Issues with implementation:

◮ should tolerances ǫ be the same for all models?

◮ should summary statistics vary across models (incl. their dimension)?

◮ should the distance measure ρ vary as well?

Extension to a weighted polychotomous logistic regression estimate of π(M = m|y), with non-parametric kernel weights

[Cornuet et al., DIYABC, 2009]

Page 135: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Model choice

The Great ABC controversy

On-going controversy in phylogeographic genetics about the validity of using ABC for testing

Against: Templeton, 2008, 2009, 2010a, 2010b, 2010c argues that nested hypotheses cannot have higher probabilities than nesting hypotheses (!)

Replies: Fagundes et al., 2008, Beaumont et al., 2010, Berger et al., 2010, Csilléry et al., 2010 point out that the criticisms are addressed at [Bayesian] model-based inference and have nothing to do with ABC...

Page 137: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Gibbs random fields

Gibbs random fields

Gibbs distribution

The rv y = (y1, . . . , yn) is a Gibbs random field associated with the graph G if

f(y) = \frac{1}{Z} \exp\Big\{ -\sum_{c \in \mathcal{C}} V_c(y_c) \Big\},

where Z is the normalising constant, C is the set of cliques of G and Vc is any function, also called the potential. U(y) = \sum_{c \in \mathcal{C}} V_c(y_c) is the energy function

© Z is usually unavailable in closed form

Page 139: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Gibbs random fields

Potts model

Potts model

Vc(y) is of the form

V_c(y) = \theta S(y) = \theta \sum_{l \sim i} \delta_{y_l = y_i}

where l∼i denotes a neighbourhood structure

In most realistic settings, the summation

Z_\theta = \sum_{x \in \mathcal{X}} \exp\{\theta^T S(x)\}

involves too many terms to be manageable and numerical approximations cannot always be trusted

[Cucala, Marin, CPR & Titterington, 2009]

Page 141: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Model choice via ABC

Bayesian Model Choice

Comparing a model with potential S0, taking values in R^{p_0}, versus a model with potential S1, taking values in R^{p_1}, can be done through the Bayes factor corresponding to the priors π0 and π1 on each parameter space

B_{m_0/m_1}(x) = \frac{\int \exp\{\theta_0^T S_0(x)\}\big/ Z_{\theta_0,0}\; \pi_0(d\theta_0)}{\int \exp\{\theta_1^T S_1(x)\}\big/ Z_{\theta_1,1}\; \pi_1(d\theta_1)}

Use of Jeffreys' scale to select the most appropriate model

Page 142: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Model choice via ABC

Neighbourhood relations

Choice to be made between M neighbourhood relations

i \overset{m}{\sim} i' \qquad (0 \le m \le M - 1)

with

S_m(x) = \sum_{i \overset{m}{\sim} i'} \mathbb{I}_{\{x_i = x_{i'}\}}

driven by the posterior probabilities of the models.

Page 144: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Model choice via ABC

Model index

Formalisation via a model index M that appears as a new parameter, with prior distribution π(M = m) and π(θ|M = m) = πm(θm)

Computational target:

P(M = m \mid x) \propto \int_{\Theta_m} f_m(x \mid \theta_m)\, \pi_m(\theta_m)\, d\theta_m\; \pi(M = m),

Page 147: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Model choice via ABC

Sufficient statistics

By definition, if S(x) is a sufficient statistic for the joint parameters (M, θ0, . . . , θM−1),

P(M = m \mid x) = P(M = m \mid S(x)).

For each model m, own sufficient statistic Sm(·) and S(·) = (S0(·), . . . , SM−1(·)) also sufficient.

For Gibbs random fields,

x \mid M = m \sim f_m(x \mid \theta_m) = f^1_m(x \mid S(x))\, f^2_m(S(x) \mid \theta_m) = \frac{1}{n(S(x))}\, f^2_m(S(x) \mid \theta_m)

where

n(S(x)) = \#\{\tilde x \in \mathcal{X} : S(\tilde x) = S(x)\}

© S(x) is therefore also sufficient for the joint parameters [Specific to Gibbs random fields!]

Page 148: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Model choice via ABC

ABC model choice Algorithm

ABC-MC

◮ Generate m∗ from the prior π(M = m).

◮ Generate θ∗_{m∗} from the prior π_{m∗}(·).

◮ Generate x∗ from the model f_{m∗}(·|θ∗_{m∗}).

◮ Compute the distance ρ(S(x0), S(x∗)).

◮ Accept (θ∗_{m∗}, m∗) if ρ(S(x0), S(x∗)) < ǫ.

Note: When ǫ = 0 the algorithm is exact

Page 150: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Model choice via ABC

ABC approximation to the Bayes factor

Frequency ratio:

\widehat{BF}_{m_0/m_1}(x^0) = \frac{P(M = m_0 \mid x^0)}{P(M = m_1 \mid x^0)} \times \frac{\pi(M = m_1)}{\pi(M = m_0)} = \frac{\#\{m^{i*} = m_0\}}{\#\{m^{i*} = m_1\}} \times \frac{\pi(M = m_1)}{\pi(M = m_0)},

replaced with

\widehat{BF}_{m_0/m_1}(x^0) = \frac{1 + \#\{m^{i*} = m_0\}}{1 + \#\{m^{i*} = m_1\}} \times \frac{\pi(M = m_1)}{\pi(M = m_0)}

to avoid indeterminacy (also a Bayes estimate).
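
For instance, with the accepted indices m_out returned by the abc_mc sketch above (a hypothetical helper, not from the slides), this stabilised estimate reads:

# stabilised ABC estimate of the Bayes factor B_{m0/m1} for models 1 vs 2
m_out <- abc_mc(1e4, y, prior_m, rprior, rmodel, eta, eps)
bf_12 <- (1 + sum(m_out == 1)) / (1 + sum(m_out == 2)) * prior_m[2] / prior_m[1]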

Page 151: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Illustrations

Toy example

iid Bernoulli model versus two-state first-order Markov chain, i.e.

f_0(x \mid \theta_0) = \exp\Big( \theta_0 \sum_{i=1}^n \mathbb{I}_{\{x_i = 1\}} \Big) \Big/ \{1 + \exp(\theta_0)\}^n,

versus

f_1(x \mid \theta_1) = \frac{1}{2} \exp\Big( \theta_1 \sum_{i=2}^n \mathbb{I}_{\{x_i = x_{i-1}\}} \Big) \Big/ \{1 + \exp(\theta_1)\}^{n-1},

with priors θ0 ∼ U(−5, 5) and θ1 ∼ U(0, 6) (inspired by “phase transition” boundaries).

Page 152: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Illustrations

Toy example (2)

(left) Comparison of the true BF_{m_0/m_1}(x^0) with the ABC approximation \widehat{BF}_{m_0/m_1}(x^0) (in logs) over 2,000 simulations and 4·10^6 proposals from the prior. (right) Same when using a tolerance ǫ corresponding to the 1% quantile on the distances.

Page 154: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

Back to sufficiency

If η1(x) is a sufficient statistic for model m = 1 and parameter θ1, and η2(x) is a sufficient statistic for model m = 2 and parameter θ2, then (η1(x), η2(x)) is not always sufficient for (m, θm)

© Potential loss of information at the testing level

[X, Cornuet, Marin, and Pillai, 2011]

Page 156: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

Limiting behaviour of B12 (T → ∞)

ABC approximation

\widehat{B}_{12}(y) = \frac{\sum_{t=1}^T \mathbb{I}_{m_t = 1}\, \mathbb{I}_{\rho\{\eta(z_t), \eta(y)\} \le \epsilon}}{\sum_{t=1}^T \mathbb{I}_{m_t = 2}\, \mathbb{I}_{\rho\{\eta(z_t), \eta(y)\} \le \epsilon}},

where the (mt, zt)'s are simulated from the (joint) prior

As T goes to infinity, limit

B^\epsilon_{12}(y) = \frac{\int \mathbb{I}_{\rho\{\eta(z), \eta(y)\} \le \epsilon}\, \pi_1(\theta_1) f_1(z \mid \theta_1)\, dz\, d\theta_1}{\int \mathbb{I}_{\rho\{\eta(z), \eta(y)\} \le \epsilon}\, \pi_2(\theta_2) f_2(z \mid \theta_2)\, dz\, d\theta_2} = \frac{\int \mathbb{I}_{\rho\{\eta, \eta(y)\} \le \epsilon}\, \pi_1(\theta_1) f_1^\eta(\eta \mid \theta_1)\, d\eta\, d\theta_1}{\int \mathbb{I}_{\rho\{\eta, \eta(y)\} \le \epsilon}\, \pi_2(\theta_2) f_2^\eta(\eta \mid \theta_2)\, d\eta\, d\theta_2},

where f_1^\eta(\eta \mid \theta_1) and f_2^\eta(\eta \mid \theta_2) are the distributions of η(z)

Page 158: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

Limiting behaviour of B12 (ǫ → 0)

When ǫ goes to zero,

B^\eta_{12}(y) = \frac{\int \pi_1(\theta_1) f_1^\eta(\eta(y) \mid \theta_1)\, d\theta_1}{\int \pi_2(\theta_2) f_2^\eta(\eta(y) \mid \theta_2)\, d\theta_2},

the Bayes factor based on the sole observation of η(y)

Page 160: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

Limiting behaviour of B12 (under sufficiency)

If η(y) is a sufficient statistic for both models,

f_i(y \mid \theta_i) = g_i(y)\, f_i^\eta(\eta(y) \mid \theta_i)

Thus

B_{12}(y) = \frac{\int_{\Theta_1} \pi(\theta_1)\, g_1(y)\, f_1^\eta(\eta(y) \mid \theta_1)\, d\theta_1}{\int_{\Theta_2} \pi(\theta_2)\, g_2(y)\, f_2^\eta(\eta(y) \mid \theta_2)\, d\theta_2} = \frac{g_1(y) \int \pi_1(\theta_1) f_1^\eta(\eta(y) \mid \theta_1)\, d\theta_1}{g_2(y) \int \pi_2(\theta_2) f_2^\eta(\eta(y) \mid \theta_2)\, d\theta_2} = \frac{g_1(y)}{g_2(y)}\, B^\eta_{12}(y).

[Didelot, Everitt, Johansen & Lawson, 2011]

© No discrepancy only when cross-model sufficiency

Page 161: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

Poisson/geometric example

Sample

x = (x_1, \ldots, x_n)

from either a Poisson P(λ) or from a geometric G(p). Then

S = \sum_{i=1}^n y_i = \eta(x)

is a sufficient statistic for either model but not simultaneously

Discrepancy ratio

\frac{g_1(x)}{g_2(x)} = \frac{S!\, n^{-S} \big/ \prod_i y_i!}{1 \big/ \binom{n+S-1}{S}}

Page 162: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

Poisson/geometric discrepancy

Range of B12(x) versus B^η_{12}(x): the values produced have nothing in common.

Page 164: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

Formal recovery

Creating an encompassing exponential family

f(x \mid \theta_1, \theta_2, \alpha_1, \alpha_2) \propto \exp\{\theta_1^T \eta_1(x) + \theta_2^T \eta_2(x) + \alpha_1 t_1(x) + \alpha_2 t_2(x)\}

leads to a sufficient statistic (η1(x), η2(x), t1(x), t2(x))

[Didelot, Everitt, Johansen & Lawson, 2011]

In the Poisson/geometric case, if \prod_i x_i! is added to S, no discrepancy

Page 165: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

Formal recovery

Only applies in genuine sufficiency settings...

© Inability to evaluate the loss brought by summary statistics

Page 166: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

Meaning of the ABC-Bayes factor

In the Poisson/geometric case, if E[yi] = θ0 > 0,

\lim_{n\to\infty} B^\eta_{12}(y) = \frac{(\theta_0 + 1)^2}{\theta_0}\, e^{-\theta_0}

Page 167: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

MA(q) divergence

Evolution [against ǫ] of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right), when ǫ is equal to the 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(2) with θ1 = 0.6, θ2 = 0.2. True Bayes factor equal to 17.71.

Page 168: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

MA(q) divergence

Evolution [against ǫ] of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right), when ǫ is equal to the 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(1) model with θ1 = 0.6. True Bayes factor B21 equal to .004.

Page 170: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

A population genetics evaluation

Population genetics example with

◮ 3 populations

◮ 2 scenarios

◮ 15 individuals

◮ 5 loci

◮ single mutation parameter

◮ 24 summary statistics

◮ 2 million ABC proposals

◮ importance [tree] sampling alternative

Page 171: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

Stability of importance sampling

Page 172: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

Comparison with ABC

Use of 24 summary statistics and DIY-ABC logistic correction

Page 173: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

Comparison with ABC

Use of 15 summary statistics and DIY-ABC logistic correction

Page 174: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

Comparison with ABC

Use of 24 summary statistics and DIY-ABC logistic correction

Page 176: Shanghai tutorial

From MCMC to ABC Methods

ABC for model choice

Generic ABC model choice

The only safe cases

Besides specific models like Gibbs random fields,

using distances over the data itself escapes the discrepancy...

[Toni & Stumpf, 2010; Sousa et al., 2009]

...and so does the use of more informal model fitting measures

[Ratmann et al., 2009, 2011]