
Page 1 (slides: jmandel/slides/mines2015-enkf-theory.pdf)

Convergence of ensemble Kalman filters in thelarge ensemble limit and infinite dimension

Jan Mandel

Department of Mathematical and Statistical Sciences

University of Colorado Denver

Based on joint work with Loren Cobb, Evan Kwiatkowski, and Kody Law

Supported by NSF grant DMS-1216481

Colorado School of Mines, Golden, CO, April 24, 2015

Page 2

Data assimilation - Analysis cycle

• Goal: Inject data into a running model

The cycle alternates the analysis step with advancing in time:

forecast −→ analysis −→ forecast −→ analysis . . .

with data entering at each analysis step.

• The Kalman filter was used already in Apollo 11 navigation; it is now in GPS, computer vision, weather forecasting, remote sensing, ...

Page 3

From least squares to Bayes theorem

Inverse problem Hu ≈ d with background knowledge u ≈ u^f:

|u − u^f|^2_{Q^{-1}} + |d − Hu|^2_{R^{-1}} → min_u

u^f = forecast, what we think the state should be; d = data;
H = observation operator; Hu = what the data should be given the state u.

e^{-(1/2)(|u − u^f|^2_{Q^{-1}} + |d − Hu|^2_{R^{-1}})} = e^{-(1/2)|u − u^f|^2_{Q^{-1}}} · e^{-(1/2)|d − Hu|^2_{R^{-1}}} → max_u

where the left-hand side is ∝ p^a(u) = p(u|d), the analysis (posterior) density; the first factor is ∝ p(u), the forecast (prior) density; and the second factor is ∝ p(d|u), the data likelihood.

This is the Bayes theorem for probability densities. ∝ means "proportional to".

The analysis density p^a(u) ∝ e^{-(1/2)|u − u^a|^2_{(Q^a)^{-1}}}; its mean and covariance are

u^a = u^f + K(d − Hu^f), Q^a = (I − KH)Q,

where K = K(Q) = QH^T(HQH^T + R)^{-1} is the Kalman gain.
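As an aside, the analysis update can be sketched numerically; a minimal NumPy version (the function name and the test values below are illustrative, not from the slides):

```python
import numpy as np

def kalman_analysis(uf, Q, d, H, R):
    """One Kalman analysis step for the forecast N(uf, Q) and data d = Hu + noise, noise ~ N(0, R)."""
    S = H @ Q @ H.T + R                 # innovation covariance H Q H^T + R
    K = np.linalg.solve(S, H @ Q).T     # Kalman gain K = Q H^T S^{-1} (uses symmetry of Q and S)
    ua = uf + K @ (d - H @ uf)          # analysis mean u^a = u^f + K(d - H u^f)
    Qa = (np.eye(len(uf)) - K @ H) @ Q  # analysis covariance Q^a = (I - K H) Q
    return ua, Qa
```

For example, in the scalar case with Q = R = H = 1, u^f = 0, and d = 2, the gain is K = 1/2, so u^a = 1 and Q^a = 1/2.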

Page 4

Kalman filter (KF)

• Add advancing in time. The model is the operator u ↦ Mu + b: the mean is advanced as Mu + b and the covariance as MQM^T.
• Hard to advance the covariance matrix when the model is nonlinear.
• Need tangent and adjoint model operators.
• Can't maintain the covariance matrix for large problems, such as those from discretizations of PDEs.

Page 5

Ensemble forecasting to the rescue

• Ensemble weather forecast: independent randomized simulations. If it rains in 30% of them, then say that "the chance of rain is 30%".

• Ensemble Kalman filter (EnKF, Evensen 1994): replace the covariance in the KF by the sample covariance of the ensemble, then apply the KF formula for the mean to each ensemble member independently.

• But the analysis ensemble covariance came out wrong: too small, even when the forecast covariance was exact.

• Fix: randomize also the data, by sampling from the data error distribution (Burgers, Evensen, van Leeuwen, 1998).

• Then all is good and we should get the right ensembles... at least in the Gaussian case, and up to the approximation by the sample covariance.

Page 6

The EnKF with data randomization is statistically exact if the covariance is exact

Lemma. Let U^f ∼ N(u^f, Q^f), D = d + E, E ∼ N(0, R), and

U^a = U^f + K(D − HU^f), K = Q^f H^T(HQ^f H^T + R)^{-1}.

Then U^a ∼ N(u^a, Q^a), i.e., U^a has the correct analysis distribution

from the Bayes theorem and the Kalman filter.

Proof. Computation in Burgers, Evensen, van Leeuwen, 1998.

Corollary. If the forecast ensemble [U^f_1, …, U^f_N] is a sample from the Gaussian forecast distribution, and the exact covariance is used, then the analysis ensemble [U^a_1, …, U^a_N] is a sample from the analysis distribution.
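The Lemma lends itself to a quick Monte Carlo check; a sketch with NumPy (the specific u^f, Q^f, H, R, d below are arbitrary illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000                              # Monte Carlo sample size (illustration only)
uf = np.array([1.0, -1.0])               # forecast mean
Qf = np.array([[2.0, 0.5], [0.5, 1.0]])  # exact forecast covariance
H = np.array([[1.0, 0.0]])               # observe the first component
R = np.array([[0.5]])                    # data error covariance
d = np.array([0.3])                      # data

# Exact Kalman analysis mean and covariance
S = H @ Qf @ H.T + R
K = Qf @ H.T @ np.linalg.inv(S)
ua = uf + K @ (d - H @ uf)
Qa = (np.eye(2) - K @ H) @ Qf

# Randomized-data update applied to exact forecast samples, as in the Lemma
Uf = rng.multivariate_normal(uf, Qf, size=N)              # rows U^f_k ~ N(uf, Qf)
D = d + rng.multivariate_normal(np.zeros(1), R, size=N)   # D_k = d + E_k, E_k ~ N(0, R)
Ua = Uf + (D - Uf @ H.T) @ K.T                            # U^a_k = U^f_k + K(D_k - H U^f_k)

mean_err = np.abs(Ua.mean(axis=0) - ua).max()  # O(N^{-1/2}), should be small
cov_err = np.abs(np.cov(Ua.T) - Qa).max()      # O(N^{-1/2}), should be small
```

The sample mean and covariance of the updated ensemble match u^a and Q^a up to Monte Carlo error, as the Lemma predicts.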

Page 7

EnKF properties

• The model is needed only as a black box.

• The ensemble members interact only through the ensemble mean and covariance.

• The EnKF was derived for Gaussian distributions, but the algorithm does not depend on this.

• So it is often used for nonlinear models and non-Gaussian distributions anyway.

• Folklore: "high-dimensional problems require large ensembles", a.k.a. the "curse of dimensionality". Not necessarily so.

• Curse of dimensionality: slow convergence in high dimension. With arbitrary probability distributions, sure. But probability distributions in practice are not arbitrary: the state is a discretization of a random function. The smoother the functions, the faster the EnKF convergence.

Page 8

Convergence of EnKF in the large ensemble limit

• Laws of large numbers guarantee that the EnKF gives correct results for large ensembles in the Gaussian case: Le Gland et al. (2011), Mandel et al. (2011).

• In general, the EnKF converges to a mean-field limit (Le Gland et al. 2011):

- mean-field approximation = the effect of all other particles on any one particle is replaced by a single averaged effect;

- mean-field limit = large number of particles, where the influence of each becomes negligible.

For convergence in high state-space dimension, we look at the infinite dimension first, just like in numerical analysis, where we first study the PDE and only then its numerical approximation.

Page 9

Tools: Random elements on a separable Hilbert space

The mean of a random element X on a Hilbert space V:

E(X) ∈ V, ⟨v, E(X)⟩ = E(⟨v, X⟩) ∀v ∈ V.

Similarly, for a random bounded operator A on V,

E(A) ∈ [V], ⟨v, E(A)u⟩ = E(⟨v, Au⟩) ∀u, v ∈ V.

The tensor product of vectors, xy^T for x, y ∈ V, becomes

x ⊗ y ∈ [V], (x ⊗ y)v = x⟨y, v⟩ ∀v ∈ V.

The covariance of a random element X becomes

C = Cov(X) = E((X − E(X)) ⊗ (X − E(X))) = E(X ⊗ X) − E(X) ⊗ E(X).

The covariance must be a compact operator with finite trace, Tr C = ∑_{n=1}^∞ λ_n < ∞ (λ_n the eigenvalues).

Page 10

Tools: Karhunen-Loeve expansion and random functions

Suppose X ∈ L^2(Ω, V) = {X : E(|X|^2) < ∞}. Then C = Cov(X), compact and self-adjoint, has a complete orthonormal system (e_n) of eigenvectors, Ce_n = λ_n e_n, with eigenvalues λ_n ≥ 0, and

X = E(X) + ∑_{n=1}^∞ λ_n^{1/2} ξ_n e_n, E(ξ_n) = 0,

converges a.s. in V, with the ξ_n uncorrelated random variables, ⟨ξ_m, ξ_n⟩_{L^2(Ω)} = E(ξ_m ξ_n) = δ_mn. Examples:

X = (λ_1^{1/2} ξ_1, λ_2^{1/2} ξ_2, …) in ℓ^2,

X(x) = ∑_{n=1}^∞ λ_n^{1/2} ξ_n cos(nx) in L^2(0, π),

with ξ_n ∼ N(0, 1) independent and λ_n = n^{−α}, α > 1.
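The cosine example can be sampled by truncating the expansion; a sketch in NumPy (the function name sample_kl, the truncation level, and the grid are arbitrary choices):

```python
import numpy as np

def sample_kl(n_terms=200, n_grid=500, alpha=2.0, rng=None):
    """Draw one realization of X(x) = sum_n lambda_n^{1/2} xi_n cos(n x) on (0, pi),
    truncated at n_terms, with lambda_n = n^{-alpha} and xi_n ~ N(0, 1) i.i.d."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.linspace(0.0, np.pi, n_grid)
    n = np.arange(1, n_terms + 1)
    lam = n.astype(float) ** (-alpha)
    xi = rng.standard_normal(n_terms)
    # sum_n sqrt(lambda_n) * xi_n * cos(n x), vectorized over the grid
    return x, (np.sqrt(lam) * xi) @ np.cos(np.outer(n, x))
```

Larger α gives smoother realizations. Since ∫_0^π cos^2(nx) dx = π/2 and the cosines are orthogonal on (0, π), the mean squared norm is E|X|^2_{L^2(0,π)} = (π/2) ∑_n λ_n, which a Monte Carlo average reproduces.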

Page 11

Curse of dimensionality? Not for probability measures!

[Figure: filter error vs. model size (log-log axes) for the EnKF with N = 10 ensemble members and m = 25 data points.]

Constant covariance eigenvalues λ_n = 1 and the inverse law λ_n = 1/n do not give probability measures in the limit, because ∑_{n=1}^∞ λ_n = ∞.

The inverse square law λ_n = 1/n^2 gives a probability measure, because ∑_{n=1}^∞ λ_n < ∞.

m = 25 uniformly sampled data points from a 1D state, N = 10 ensemble members. From the thesis Beezley (2009), Fig. 4.7.

Page 12

Bayes theorem in infinite dimension

• Forecast and analysis are probability distributions, with densities p^f, p^a.

• Bayes theorem: p^a(u) = (1/C) p(d|u) p^f(u), C = ∫_U p(d|u) p^f(u) du.

• Written in terms of the measures μ^a, μ^f with densities p^a, p^f:

∫_A p^a(u) du = (1/C) ∫_A p(d|u) p^f(u) du, C = ∫_U p(d|u) dμ^f(u),

μ^a(A) = (1/C) ∫_A p(d|u) dμ^f(u), ∀A μ^f-measurable

⇔ Radon-Nikodym derivative: p(d|u) = C dμ^a/dμ^f.

• But how do we know that C = ∫_U p(d|u) dμ^f(u) > 0?

Page 13

Gaussian case

• Data likelihood: d = Hu + ε, ε ∼ N(0, R),

p(d|u) = const · e^{-(1/2)|Hu − d|^2_{R^{-1}}}

• μ^f is a Gaussian measure on U; the data d ∈ V.

• The state space U and the data space V are separable Hilbert spaces.

• p(d|·) ∈ L^1(U, μ^f) ⇒ μ^a is absolutely continuous w.r.t. μ^f.

• Feldman-Hajek theorem ⇒ μ^a and μ^f are equivalent.

• Difficulties arise when the data are infinite-dimensional...

Page 14

Infinite-dimensional data, Gaussian measure error bad

• Example: μ^f = N(0, Q), H = I ⇔ all of the state observed, d = 0, R = Q.

• p(d|u) = const · e^{-(1/2)|u|^2_{R^{-1}}} = const · e^{-(1/2)⟨R^{-1/2}u, R^{-1/2}u⟩}

• p(d|u) > 0 if u ∈ R^{1/2}(U) = dom(R^{-1/2})

• p(d|u) = e^{-∞} = 0 if u ∉ R^{1/2}(U) = Q^{1/2}(U)

• Q^{1/2}(U) is the Cameron-Martin space of the measure N(0, Q).

• But μ = N(0, Q) ⇒ μ(Q^{1/2}(U)) = 0. Thus, ∫_U p(d|u) dμ^f(u) = 0.

• Also not possible by the Feldman-Hajek theorem: μ^a and μ^f are not equivalent.

Page 15

Infinite-dimensional data, white noise error good

• All is good when the data are finite-dimensional and R is not singular.

• More generally, when the data covariance R is bounded below:

⟨u, Ru⟩ ≥ α|u|^2 ∀u, α > 0 ⇒ |u|^2_{R^{-1}} < ∞ ∀u ⇒ e^{-(1/2)|Hu − d|^2_{R^{-1}}} > 0 ∀u

⇒ ∫_U p(d|u) dμ^f(u) = const ∫_U e^{-(1/2)|Hu − d|^2_{R^{-1}}} dμ^f(u) > 0.

• But if V is not finite-dimensional, then such a data error distribution N(0, R) is not a probability measure on V: the covariance of a probability measure has finite trace, ∑_{n=1}^∞ λ_n < ∞, but the spectrum σ(R) ≥ α > 0.

• Yet the Bayes theorem μ^a(A) = (1/C) ∫_A p(d|u) dμ^f(u) is fine.

Page 16

Randomized data ensemble Kalman filter

Initial ensemble: [Y^0_1, Y^0_2, …, Y^0_N] i.i.d., Gaussian background.

Advance time by the model: X^m_k = F(Y^{m−1}_k).

Analysis step: nonlinear map [X^m_1, X^m_2, …, X^m_N] ↦ [Y^m_1, Y^m_2, …, Y^m_N] by solving the inverse problem HY ≈ d^m:

Y^m_k = X^m_k + K(Q^m_N)(D^m_k − HX^m_k),

Q^m_N = C_N[X^m_1, …, X^m_N], D^m_k ∼ N(d^m, R).
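The analysis step above, sketched in NumPy for a finite-dimensional state (the 1/N sample-covariance normalization follows the slides; the function name and test setup are illustrative):

```python
import numpy as np

def enkf_analysis(X, H, R, d, rng):
    """One randomized-data EnKF analysis step.
    X: (N, n) forecast ensemble [X^m_1, ..., X^m_N], one member per row.
    Returns Y with rows Y^m_k = X^m_k + K(Q^m_N)(D^m_k - H X^m_k)."""
    N = X.shape[0]
    QN = np.atleast_2d(np.cov(X.T, ddof=0))   # sample covariance C_N (1/N normalization)
    S = H @ QN @ H.T + R
    K = np.linalg.solve(S, H @ QN).T          # K(Q_N) = Q_N H^T (H Q_N H^T + R)^{-1}
    D = d + rng.multivariate_normal(np.zeros(len(d)), R, size=N)  # D^m_k ~ N(d, R)
    return X + (D - X @ H.T) @ K.T
```

For a linear Gaussian problem with a large ensemble, the analysis ensemble mean approaches the exact Kalman analysis mean, consistent with the convergence results discussed later.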

Page 17

Randomized data EnKF and white data noise

Y^m_k = X^m_k + K(Q^m_N)(D^m_k − HX^m_k), Q^m_N = C_N[X^m_1, …, X^m_N], D^m_k ∼ N(d^m, R).

Data space V infinite-dimensional, R ≥ αI, α > 0

⇒ N(d^m, R) is only finitely additive on the algebra of cylindrical sets, not a σ-additive measure on the σ-algebra of Borel sets of V.

The D^m_k are only weak random variables (each linear functional of them is a random variable).

But Tr(Q) < ∞ ⇒ Q^{1/2} is Hilbert-Schmidt ⇒ Q^{1/2} · (bounded operator) · D^m_k

has a σ-additive distribution and is a random element

⇒ K(Q^m_N)D^m_k = Q^m_N H^T(HQ^m_N H^T + R)^{-1} D^m_k

has a σ-additive distribution and is a random element.

Back to standard Kolmogorov probability theory.

Page 18

Tools: Lp law of large numbers on a Hilbert space

Define L^p(Ω, V) = {X : E(|X|^p) < ∞}, equipped with the norm ‖X‖_p = E(|X|^p)^{1/p}.

The L^p law of large numbers for the sample mean E_N = (1/N) ∑_{i=1}^N X_i: let the X_k ∈ L^p(Ω, V) be i.i.d.; then

‖E_N − E(X_1)‖_p ≤ (C_p/√N) ‖X_1‖_p.

But L^p laws of large numbers do not generally hold on Banach spaces (in fact, they define the Rademacher type-p spaces), and the space [V] of all bounded linear operators on a Hilbert space is not a Hilbert space. Worse, any Banach space can be embedded in [V] for some Hilbert space V. So, if something does not hold on Banach spaces, it won't be true for bounded operators on Hilbert spaces either.

Page 19

Tools: Lp law of large numbers for the sample covariance

Let the X_i ∈ L^p(Ω, V) be i.i.d. and

C_N([X_1, …, X_N]) = (1/N) ∑_{i=1}^N X_i ⊗ X_i − ((1/N) ∑_{i=1}^N X_i) ⊗ ((1/N) ∑_{i=1}^N X_i).

But [V] is not a Hilbert space. Use L^2(Ω, V_HS) with V_HS the space of Hilbert-Schmidt operators,

|A|^2_HS = ∑_{n=1}^∞ ⟨Ae_n, Ae_n⟩ < ∞, ⟨A, B⟩_HS = ∑_{n=1}^∞ ⟨Ae_n, Be_n⟩,

where (e_n) is any complete orthonormal sequence in V. In finite dimension, the Hilbert-Schmidt norm becomes the Frobenius norm

|A|_HS = (∑_{i,j} |a_ij|^2)^{1/2}.

Page 20

Tools: Lp law of large numbers for the sample covariance

Let the X_i ∈ L^{2p}(Ω, V) be i.i.d. and V a separable Hilbert space. Then, from the L^p law of large numbers in the Hilbert space V_HS of Hilbert-Schmidt operators on V,

‖C_N([X_1, …, X_N]) − Cov(X_1)‖_p ≤ ‖ |C_N([X_1, …, X_N]) − Cov(X_1)|_HS ‖_p ≤ (const(p)/√N) ‖X_1‖^2_{2p},

where the constant depends on p only.

Note that since V is separable, V_HS is also separable.
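In finite dimension the bound can be observed directly, since the Hilbert-Schmidt norm is the Frobenius norm; an illustrative NumPy check (the test covariance and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
Q = A @ A.T + np.eye(n)          # a fixed "true" covariance (arbitrary test case)

def cov_err(N, reps=100):
    """Average Frobenius (= Hilbert-Schmidt) error of C_N over reps independent ensembles."""
    total = 0.0
    for _ in range(reps):
        X = rng.multivariate_normal(np.zeros(n), Q, size=N)
        m = X.mean(axis=0)
        CN = (X.T @ X) / N - np.outer(m, m)   # C_N with the 1/N normalization, as above
        total += np.linalg.norm(CN - Q, "fro")
    return total / reps

e_small, e_large = cov_err(100), cov_err(10_000)
# the O(N^{-1/2}) rate predicts e_small / e_large to be roughly sqrt(10_000 / 100) = 10
```

The ratio of the two averaged errors should be close to the predicted factor of 10.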

Page 21

Convergence of the EnKF to mean-field limit

• Le Gland et al. (2011): the analysis step as a nonlinear transformation of probability measures.

• Nonlinear transformation of the ensemble as a vector of exchangeable random variables, [X_1, X_2, …, X_N] ↦ [Y_1, Y_2, …, Y_N].

• L^p continuity of the model. Drop the superscript m.

• Mean-field filter:

Y^{Mean field}_k = X^{Mean field}_k + K(Q)(D_k − HX^{Mean field}_k), Q = Cov(X_1).

• Randomized EnKF:

Y^{Randomized}_k = X^{Randomized}_k + K(Q_N)(D_k − HX^{Randomized}_k),

Q_N = C_N[X^{Randomized}_1, …, X^{Randomized}_N], same D_k.

Page 22

Convergence of the EnKF to mean-field limit (cont)

• Subtract; continuity of the Kalman gain:

‖K(Q) − K(Q_N)‖_p ≤ const ‖Q − Q_N‖_{2p}.

• L^p law of large numbers for C_N[X^{Mean field}_1, …, X^{Mean field}_N].

• A priori bound ‖X^m_k‖_p ≤ const(m) for all m, from

|(HQH* + R)^{-1}| ≤ 1/α, by R ≥ αI.

• Induction over m: X^{m, Randomized}_1 → X^{m, Mean field}_1 in all L^p, 1 ≤ p < ∞.
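The combination of the two ingredients, continuity of the gain and the law of large numbers for Q_N, can be illustrated numerically (an arbitrary finite-dimensional test case, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
B = rng.standard_normal((n, n))
Q = B @ B.T + np.eye(n)                     # exact forecast covariance (arbitrary test case)
H = rng.standard_normal((2, n))             # observe 2 linear functionals of the state
R = np.eye(2)                               # R >= alpha I with alpha = 1

def gain(C):
    """Kalman gain K(C) = C H^T (H C H^T + R)^{-1}."""
    return C @ H.T @ np.linalg.inv(H @ C @ H.T + R)

K_exact = gain(Q)

def gain_err(N, reps=50):
    """Average ||K(Q_N) - K(Q)|| over reps sample covariances Q_N from N draws."""
    total = 0.0
    for _ in range(reps):
        X = rng.multivariate_normal(np.zeros(n), Q, size=N)
        total += np.linalg.norm(gain(np.cov(X.T, ddof=0)) - K_exact)
    return total / reps

e_small, e_large = gain_err(100), gain_err(10_000)
# K(Q_N) -> K(Q): the error shrinks roughly like 1/sqrt(N)
```

Increasing the ensemble size by a factor of 100 reduces the averaged gain error by roughly a factor of 10, matching the O(N^{-1/2}) rate carried through the continuity bound.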

Page 23

Conclusion

• With a bit of care, a convergence proof in finite dimension carries over to an infinite-dimensional separable Hilbert space.

• White noise data error is good: it stabilizes the method.

• But the white noise distribution is not a standard σ-additive probability measure.

• Convergence in the nonlinear, non-Gaussian case to the mean-field limit.

• The square root EnKF has no randomization; convergence in the Gaussian case: Kwiatkowski and Mandel (2015).

• Computational experiments confirm that the EnKF converges uniformly for high-dimensional distributions that approximate a Gaussian measure on a Hilbert space (J. Beezley, Ph.D. thesis, 2009). The EnKF for distributions with slowly decaying eigenvalues of the covariance converges very slowly and requires large ensembles.

Page 24

References

[1] A. V. Balakrishnan. Applied functional analysis. Springer-Verlag, New York, 1976.

[2] Jonathan D. Beezley. High-Dimensional Data Assimilation and Morphing Ensemble Kalman Filters with Applications in Wildfire Modeling. PhD thesis, University of Colorado Denver, 2009.

[3] Gerrit Burgers, Peter Jan van Leeuwen, and Geir Evensen. Analysis scheme in the ensemble Kalman filter. Monthly Weather Review, 126:1719–1724, 1998.

[4] Giuseppe Da Prato. An introduction to infinite-dimensional analysis. Springer-Verlag, Berlin, 2006.

[5] Geir Evensen. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research, 99(C5):10143–10162, 1994.

[6] Evan Kwiatkowski and Jan Mandel. Convergence of the square root ensemble Kalman filter in the large ensemble limit. SIAM/ASA Journal on Uncertainty Quantification, 3(1):1–17, 2015.

Page 25

[7] Kody J. H. Law, Hamidou Tembine, and Raul Tempone. Deterministic methods for filtering, part I: Mean-field ensemble Kalman filtering. arXiv:1409.0628, 2014.

[8] F. Le Gland, V. Monbet, and V.-D. Tran. Large sample asymptotics for the ensemble Kalman filter. In Dan Crisan and Boris Rozovskii, editors, The Oxford Handbook of Nonlinear Filtering, pages 598–631. Oxford University Press, 2011.

[9] Michel Ledoux and Michel Talagrand. Probability in Banach spaces. Ergebnisse der Mathematik und ihrer Grenzgebiete (3), Vol. 23. Springer-Verlag, Berlin, 1991.

[10] Jan Mandel, Loren Cobb, and Jonathan D. Beezley. On the convergence of the ensemble Kalman filter. Applications of Mathematics, 56:533–541, 2011.

[11] A. M. Stuart. Inverse problems: a Bayesian perspective. Acta Numer.,19:451–559, 2010.