

Stochastic perturbation of scalar conservation laws∗

J. Vovelle

April 21, 2009

Abstract

In this course devoted to the study of stochastic perturbations of scalar conservation laws, we expose the first part of the paper [EKMS00]. After a short introduction to scalar conservation laws and stochastic differential equations, we give the proof of the existence of an invariant measure for the stochastic inviscid periodic Burgers’ Equation in one dimension, according to [EKMS00].

Contents

1 Scalar conservation laws with source term
  1.1 Characteristic curves
    1.1.1 From solution to characteristic curves
    1.1.2 From characteristic curves to solution
    1.1.3 The one-dimensional Burgers’ Equation
  1.2 Entropy solutions
    1.2.1 Lax-Oleinik Formula

2 Stochastic differential equations
  2.0.2 Stochastic integral
  2.0.3 Stochastic differential equation
  2.0.4 Transition semi-group
  2.0.5 Invariant measure
  2.0.6 Examples

3 Scalar conservation laws with stochastic forcing
  3.1 Entropy solution
  3.2 Burgers’ Equation

4 Invariant measure for the stochastic Burgers’ Equation
  4.1 One-sided minimizers
    4.1.1 Bound on velocities
    4.1.2 Intersection of minimizers
    4.1.3 Invariant solution

∗ Course given at the Spring School Analytical and Numerical Aspects of Evolution Equations, T.U. Berlin, 30/03/09–03/04/09

1 Scalar conservation laws with source term

Let TN be the N-dimensional torus. Let A ∈ C2(R; RN), u0 ∈ L∞(TN) and W ∈ C1(TN × R). We consider the equation

∂tu(x, t) + div(A(u))(x, t) = W (x, t), x ∈ TN , t > 0 (1)

with initial condition

u(x, 0) = u0(x), x ∈ TN . (2)

Eq. (1) is easily solved in the linear case A(u) = au, a ∈ RN. Indeed, it rewrites as the transport equation (∂t + a · ∇)u = W, whose solution

u(x, t) = u0(x − ta) + ∫_0^t W(x − (t − s)a, s) ds

satisfies (2).
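As a quick sanity check, this formula can be verified against the PDE by finite differences for a concrete (hypothetical, not from the text) choice a = 1, u0(x) = sin x, W(x, t) = cos(x − t), for which the integral evaluates in closed form; a minimal sketch:

```python
import numpy as np

def u(x, t):
    # For a = 1, u0(x) = sin x, W(x, t) = cos(x - t) (hypothetical data),
    # u(x, t) = u0(x - t a) + int_0^t W(x - (t - s) a, s) ds reduces to
    # sin(x - t) + t cos(x - t), since the integrand is constant in s.
    return np.sin(x - t) + t * np.cos(x - t)

def residual(x, t, h=1e-5):
    # Central differences for (d_t + a d_x) u - W, which should vanish.
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)
    ux = (u(x + h, t) - u(x - h, t)) / (2 * h)
    return ut + ux - np.cos(x - t)

pts = [(0.3, 0.7), (1.1, 0.2), (-0.4, 1.5)]
print(max(abs(residual(x, t)) for x, t in pts))  # of order 1e-10
```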

1.1 Characteristic curves

1.1.1 From solution to characteristic curves

Let T > 0. Suppose that u ∈ C1(TN × [0, T]) is a regular solution to the scalar conservation law (1). By the chain-rule formula, we then have

(∂t + a(u) · ∇)u = W in TN × (0, T), (3)

which is a non-linear transport equation with source term. Here a(u) := A′(u). The integral curves γ of the vector field (1, a(u))ᵀ are the curves (s(σ), ξ(σ))ᵀ satisfying the equations

ṡ(σ) = 1,  ξ̇(σ) = a(u(ξ(σ), σ)), (4)

and (3) asserts that the derivative of u along any such γ is W(γ). More technically, for x ∈ TN, t ∈ (0, T], taking s = σ (by (4)), let ξ(s; t, x) be the solution to

ξ̇(s) = a(u(ξ(s), s)), ξ(t) = x. (5)


Set ν(s) = u(ξ(s), s). By the chain-rule formula, we have

ν̇(s) = ∂tu(ξ(s), s) + ξ̇(s) · ∇u(ξ(s), s) = (∂tu + a(u) · ∇u)(ξ(s), s),

hence, by (3),

ν̇(s) = W(ξ(s), s). (6)

If, furthermore, u satisfies the initial condition (2), then ν satisfies the limit condition

ν(0; t, x) = u0(ξ(0; t, x)). (7)

Moreover, since ξ(t) = x, the value at (x, t) of u is ν(t; t, x). To sum up, we introduce the (time-dependent) vector field

Vs(x, v) = (a(v), W(x, s))ᵀ.

The equations of characteristics are then the first-order system of ordinary differential equations

(ξ̇, ν̇)ᵀ = Vs(ξ, ν), (8)

where ξ̇(s; t, x) = ∂sξ(s; t, x) and similarly for ν. The ODE has the (non-standard, since coupled) boundary conditions

ξ(t; t, x) = x, ν(0; t, x) = u0(ξ(0; t, x)), (9)

and ν(t; t, x) = u(x, t).
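In the Burgers case a(v) = v with no source (taking W ≡ 0 for simplicity; this example is not from the text), the boundary-value system (8)-(9) collapses to the implicit relation v = u0(x − tv) for v = u(x, t), solvable by fixed-point iteration as long as t‖u0′‖∞ < 1; a sketch:

```python
import math

def u0(x):
    # Hypothetical smooth initial datum
    return 0.1 * math.sin(x)

def solve_characteristic(x, t, iters=100):
    # With W = 0, nu is constant along characteristics and xi is a straight
    # line: xi(s) = x - (t - s) v with v = u0(xi(0)), i.e. v = u0(x - t v).
    v = u0(x)
    for _ in range(iters):
        v = u0(x - t * v)  # contraction since t * |u0'| <= 0.1 here
    return v

x, t = 0.5, 1.0
v = solve_characteristic(x, t)
print(v - u0(x - t * v))  # fixed point reached up to round-off
```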

1.1.2 From characteristic curves to solution

Theorem 1 (From characteristics to solution). Let u0 ∈ C1(TN) and T > 0. Assume that there exist ξ, ν ∈ C1([0, T] × [0, T] × TN) satisfying (8)-(9). Then u : (x, t) ↦ ν(t; t, x) is a C1 solution to (1) on [0, T] that satisfies the initial condition (2).

Remark: The local inversion Theorem applied to the integral form of (8)-(9),

ξ(s; t, x) = x − ∫_s^t a(ν(σ; t, x)) dσ,
ν(s; t, x) = u0(ξ(0; t, x)) + ∫_0^s W(ξ(σ; t, x), σ) dσ,

for ξ, ν ∈ C0([0, T] × [0, T] × TN), shows that, given u0 ∈ C1(TN), there exists a regular solution (ξ, ν) for T small enough (together with Theorem 1, this is precisely the local resolution of (1)-(2) by the method of characteristics).

Proof of Theorem 1: set u(x, t) = ν(t; t, x). In the coordinates (t, x, v), introduce the surface and section

S = {(t, x, v) ∈ [0, T] × TN × R; v = u(x, t)},
St = {(x, v) ∈ TN × R; v = u(x, t)}, t ∈ [0, T].

We claim that the surface S can be seen as a “free surface” over a set of particles that are driven by Vt, i.e. St = φt,0(S0), where φt,s is the flow of Vt. Indeed, one checks that

(ξ(s; t, x), ν(s; t, x))ᵀ = φs,0(z(x, t), u0(z(x, t))),  z(x, t) = πx φ0,t(x, u(x, t)),

where πx is the projection on the first component x. See Figure 1.

Figure 1: Characteristics and surface S

The condition of non-penetration of particles in the free surface then reads

∂tu − √(1 + |∇u|²) Vt · n = 0 on S, (10)

where n is the out-going unit normal to S,

n = (−∇u, 1)ᵀ / √(1 + |∇u|²).

Therefore, the condition of non-penetration (10) is

∂tu + a(v) · ∇u − W = 0 on S.

Since v = u on S, we obtain the scalar conservation law (1) in its quasilinear form.

Remark: the fact that S can be seen as a free surface over a set of particles that are driven by Vt can also be written as

(∂t + Vt · ∇x,v) 1Dt = 0,

where 1A is the characteristic function of a set A and Dt = {(x, v) ∈ TN × R; v < u(x, t)} is the subgraph of u. In other words:

(∂t + a(v) · ∇x + W(x, t)∂v) 1u>v = 0. (11)

Eq. (11) can be proven directly on the basis of (3) and the chain-rule formula that gives, for φ ∈ C2c(R),

∂tφ(u) + div( ∫^u a(v)φ′(v) dv ) − Wφ′(u) = 0,

or

⟨(∂t + a(v) · ∇x + W(x, t)∂v) 1u>v, φ′(v)⟩ = 0,

by use of the identities

∫_R 1u>v φ′(v) dv = φ(u),
∫_R 1u>v a(v)φ′(v) dv = ∫^u a(v)φ′(v) dv,
∫_R 1u>v ∂vφ′(v) dv = φ′(u).

Eq. (11) is the kinetic formulation of smooth solutions to (1); it will be extended later to non-smooth solutions.

1.1.3 The one-dimensional Burgers’ Equation

The (inviscid) Burgers’ Equation corresponds to the case A(u) = u²/2 in (1). We also assume that N = 1. Note that what follows could be extended more generally to the case of a uniformly convex flux A. Indeed, the system of ODEs (8) simplifies as a second-order equation in ξ: since a(v) = v is invertible, we obtain ξ̇ = ν and thus ξ̈ = W(ξ, t). This last equation is related to the minimization of the functional

A0,t(ξ) := ∫_0^t [ ½ ξ̇(s)² + 𝒲(ξ(s), s) ] ds,  𝒲(x, t) = ∫_0^x W(y, t) dy.

Indeed, a straightforward computation gives

DξA0,t · ζ = ∫_0^t [ ξ̇ ζ̇ + W(ξ, s)ζ ] ds = ∫_0^t ζ (−ξ̈ + W(ξ, s)) ds

for all ζ ∈ C1c(0, t). Furthermore, the coupled boundary conditions (9) can be easily incorporated in the minimization problem by considering the modified problem (minimization problem with a fixed end)

inf_{ξ(t)=x} ( A0,t(ξ) + ∫_0^{ξ(0)} u0(z) dz ), (12)

where the inf is taken over the paths ξ ∈ C1[0, t] such that ξ(t) = x. Indeed, differentiation of the functional in (12) with respect to ζ ∈ C1c[0, t) gives, at a point of minimum ξ:

0 = ∫_0^t [ ξ̇ ζ̇ + W(ξ, s)ζ ] ds + u0(ξ(0))ζ(0) = ∫_0^t ζ (−ξ̈ + W(ξ, s)) ds + [u0(ξ(0)) − ξ̇(0)]ζ(0),

hence the two equations ξ̈ = W(ξ, t) and ν(0) = ξ̇(0) = u0(ξ(0)) (recall that ν = ξ̇ here). This relation between the Burgers’ Equation and the minimization problem (known as Lax-Oleinik Formula [Lax57, Ole57], and Hopf-Lax Formula in its original context of Hamilton-Jacobi equations) will be fully exploited in the study of the Burgers’ Equation with stochastic forcing.
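With W ≡ 0 (again a hypothetical simplification), the minimizing path in (12) is a straight line, so the infimum can be computed by a one-dimensional search over the starting point y = ξ(0); the optimality condition then recovers exactly the characteristic relation u0(y) = (x − y)/t. A sketch:

```python
import math

def u0(y):
    return 0.2 * math.sin(y)  # hypothetical smooth initial datum

def value(y, x, t):
    # A straight line from y to x has action (x - y)^2 / (2 t) when W = 0;
    # the initial-condition term of (12) is int_0^y u0 = 0.2 (1 - cos y).
    return (x - y) ** 2 / (2 * t) + 0.2 * (1.0 - math.cos(y))

x, t = 0.5, 1.0
ys = [-2.0 + 4.0 * k / 200000 for k in range(200001)]  # grid search on y
ystar = min(ys, key=lambda y: value(y, x, t))
# Optimality condition of (12): u0(ystar) = (x - ystar) / t
print(abs(u0(ystar) - (x - ystar) / t))  # small
```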

1.2 Entropy solutions

As illustrated on Figure 1 (which corresponds to the Burgers’ Equation with initial datum u0(x) = min(1, max(1 − x, 0)) and x ∈ R instead of T1), φt,0(S0) can cease to be a non-parametrized surface, i.e. the graph of a function u(t) : TN → R. It is however possible to evolve under Vt while remaining a graph, as in (11), at the expense of an accurate correction.

Definition 2 (Kinetic solution to scalar conservation laws). Let T > 0 and u0 ∈ L∞(TN). We say that u ∈ L∞(TN × (0, T)) is a (weak) kinetic solution to (1)-(2) if there exists a measurable mapping m from R into M+(TN × [0, T]), the space of non-negative measures over TN × [0, T], such that

mv := m(v) = 0 for |v| > ‖u‖L∞(TN×(0,T)),
∫_0^T ∫_TN dmv(x, t) ≤ ‖u0‖L1(TN), ∀v ∈ R,

and

∫_0^T ∫_TN ∫_R 1u>v (∂t + a(v) · ∇x + W(x, t)∂v)ϕ dx dt dv + ∫_TN ∫_R 1u0>v ϕ(0) dx dv
= ∫_R ∫_0^T ∫_TN ∂vϕ dmv(x, t) dv (13)

for all ϕ ∈ C∞c(TN × [0, T) × R).


In other words, up to the initial condition, we have added the right-hand side ∂vm to (11). Note that (by taking ϕ independent of v in (13)) we have

∂tu + div(A(u)) = W

in the weak sense.

The kinetic formulation of scalar conservation laws has been developed by Lions, Perthame, Tadmor [LPT94, Per02]. They prove in particular

Theorem 3 (Resolution of Problem (1)-(2)). Let T > 0 and u0 ∈ L∞(TN). There exists a unique kinetic solution u ∈ L∞(TN × (0, T)) to (1)-(2). Besides, u ∈ C([0, T]; L1(TN)) and, if v is the kinetic solution issued from the initial datum v0 ∈ L∞(TN), we have

‖u(t) − v(t)‖L1(TN) ≤ ‖u0 − v0‖L1(TN), t ∈ [0, T].

Actually, the solution u given in Theorem 3 is the entropy solution of Problem (1)-(2), according to the following

Definition 4 (Entropy solution to scalar conservation laws). Let T > 0 and u0 ∈ L∞(TN). We say that u ∈ L∞(TN × (0, T)) is a (weak) entropy solution to (1)-(2) if for every convex η ∈ C2(R) and for every non-negative θ ∈ C∞c(TN × [0, T)),

∫_0^T ∫_TN [ η(u)∂tθ + Φ(u) · ∇xθ + W(x, t)η′(u)θ ] dx dt + ∫_TN η(u0)θ(0) dx ≥ 0 (14)

where Φ ∈ C2(R; RN), Φ′(v) = a(v)η′(v).

The two definitions above are equivalent. To go from one formulation to the other, choose ϕ(x, t, v) = θ(x, t)η′(v) in (13) to deduce (14) from (13), and define ⟨mv, θ⟩ as the left-hand side of (14) with the choice η(u) = (u − v)+ to deduce (13) from (14). The notion of entropy solution as defined here goes back to Kruzhkov [Kru70] and is anterior to the notion of kinetic solution.

As a last remark, notice that this theory can be extended to the case where A depends on (x, t): A = A(x, t, u), at least if A has a certain amount of regularity in (x, t) (see Section 3.1). We say that a function u ∈ L∞(TN × (0, T)) is a (weak) entropy solution to the equation

∂tu + div(A(x, t, u)) = W

with initial datum u0 if for every convex η ∈ C2(R) and for every non-negative θ ∈ C∞c(TN × [0, T)),

∫_0^T ∫_TN [ η(u)∂tθ + Φ(x, t, u) · ∇xθ ] dx dt
+ ∫_0^T ∫_TN [ (W(x, t) − (div A)(x, t, u))η′(u) + (div Φ)(x, t, u) ] θ dx dt
+ ∫_TN η(u0)θ(0) dx ≥ 0 (15)

where ∂vΦ(x, t, v) = a(x, t, v)η′(v).

Still anterior to the notion of entropy solution given by Kruzhkov is the notion of entropy solution given by Lax and Oleinik.

1.2.1 Lax-Oleinik Formula

We consider the one-dimensional (inviscid) Burgers’ Equation

∂tu + ∂x(u²/2) = W(x, t), x ∈ T1, t > 0. (16)

Introduce the Skorokhod space D of functions u : T1 → R which have discontinuities of the first kind only, i.e. for every x ∈ T1, u(x+) := lim_{y↓x} u(y) and u(x−) := lim_{y↑x} u(y) exist and are possibly distinct. We recall that if u ∈ D, then its set of points of discontinuity is at most countable, u is bounded, and u ∈ L∞(T1). Introduce now the action

A0,t(ξ) := ∫_0^t [ ½ ξ̇(s)² + 𝒲(ξ(s), s) ] ds,  𝒲(x, t) = ∫_0^x W(y, t) dy.

We have the following characterization of entropy solutions.

Theorem 5 (Lax-Oleinik Formula [Lax57, Ole57]). Let u0 ∈ D and let T > 0. The entropy solution u ∈ L∞(T1 × (0, T)) to (16) with initial datum u0 is given by

u(x, t) = ∂/∂x inf_{ξ(t)=x} ( A0,t(ξ) + ∫_0^{ξ(0)} u0(z) dz ),

where the infimum is taken over all piecewise C1[0, t] paths with end ξ(t) = x, and satisfies u(t) ∈ D for every t ∈ [0, T].

2 Stochastic differential equations

We give here a basic summary about stochastic differential equations. Let (Ω, F, P, (Ft)t≥0, β(t)) be a stochastic basis, i.e.

• (Ω, F, P) is a probability space,

• (Ft)t≥0 is a filtration on (Ω, F, P): Fs ⊂ Ft ⊂ F, 0 ≤ s ≤ t,

• (β(t))t≥0 is a brownian motion over R,

• each β(t) is Ft-measurable,

• for 0 ≤ s ≤ t, β(t) − β(s) and Fs are independent.

A real-valued process (X(t))t≥0 over (Ω, F, P) is adapted if each X(t) is Ft-measurable. It is said predictable if it is measurable, as a function of (t, ω), for the σ-algebra generated by the set of products (s, t] × F, F ∈ Fs, 0 ≤ s ≤ t, and by {0} × F0. Furthermore, a process (X(t))t≥0 over (Ω, F, P) is said to be a.s. C0,α (0 ≤ α < 1) if for a.e. ω ∈ Ω, t ↦ X(t, ω) is C0,α (we use the convention C0,0(R+) = C(R+)). It is said to be L2-continuous if X : R+ → L2(Ω) is continuous. An L2-continuous adapted process X has a predictable modification, i.e. there exists a predictable process X̃ such that X̃(t) = X(t) a.e., for every t ≥ 0. At last, (X(t))t≥0 is said to be a martingale (resp. a sub-martingale) if

E(X(t)|Fs) = X(s), 0 ≤ s ≤ t,

(resp. E(X(t)|Fs) ≥ X(s)), where E(X|Fs) denotes the conditional expectation of X with respect to Fs. We have the

Proposition 6 (Doob’s inequality). Let (X(t))t≥0 be an a.s. continuous martingale with E|X(T)|^p < +∞, p > 1, T > 0. Then

E( sup_{0≤t≤T} |X(t)|^p ) ≤ cp E|X(T)|^p,  cp := (p/(p − 1))^p. (17)

2.0.2 Stochastic integral

For T > 0, let L2P(Ω × [0, T]) be the space of predictable processes (X(t))t∈[0,T] in L2(Ω × [0, T]).

Theorem 7 (Ito’s stochastic integral). There exists an isometry L2P(Ω × [0, T]) → L2(Ω), denoted

Z ↦ ∫_0^T Z(s) dβ(s),

such that

∫_0^T Z(s) dβ(s) = Σ_{k=0}^{n−1} Zk (β(tk+1) − β(tk))

if

Z = Σ_{k=0}^{n−1} Zk 1(tk,tk+1]

with 0 = t0 < t1 < · · · < tn = T and Zk an Ftk-measurable random variable. In particular,

E| ∫_0^T Z(s) dβ(s) |² = E ∫_0^T |Z(s)|² ds,  E ∫_0^T Z(s) dβ(s) = 0. (18)

The process X : t ↦ ∫_0^t Z(s) dβ(s) is then an a.s. continuous martingale and, by Doob’s inequality (17) and Ito’s isometry (18),

E( sup_{0≤t≤T} |X(t)|² ) ≤ 4 E ∫_0^T |Z(s)|² ds.
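The isometry (18) can be observed numerically on a simple process with deterministic Zk (the partition and values below are hypothetical): the second moment of the discrete stochastic integral matches Σk Zk²(tk+1 − tk). A Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.array([0.0, 0.3, 0.7, 1.0])  # partition 0 = t0 < ... < tn = T
z = np.array([1.0, -2.0, 0.5])      # deterministic Z_k on (t_k, t_{k+1}]
dt = np.diff(t)

# Sample sum_k Z_k (beta(t_{k+1}) - beta(t_k)) many times
n_samples = 200_000
incr = rng.normal(0.0, np.sqrt(dt), size=(n_samples, len(dt)))
integral = incr @ z

second_moment = np.mean(integral ** 2)     # estimates E |int_0^T Z dbeta|^2
isometry_rhs = float(np.sum(z ** 2 * dt))  # = E int_0^T |Z|^2 ds = 1.975
print(second_moment, isometry_rhs)
```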

This can be extended to the case where Z is an L2-continuous adapted process, and we denote dX = Z dβ. More generally, if Y ∈ L1(Ω × [0, T]) is adapted, Z ∈ L2P(Ω × [0, T]), X0 is F0-measurable and

X(t) = X0 + ∫_0^t Y(s) ds + ∫_0^t Z(s) dβ(s),

we denote dX = Y dt + Z dβ and say that X solves the Problem

dX = Y dt + Z dβ,  X(0) = X0.

The Ito Formula can then be stated as follows. Let u ∈ C2b(R × [0, T]) and dX = Y dt + Z dβ as above. Then

du(X, t) = ( ut(X, t) + ux(X, t)Y + ½ uxx(X, t)Z² ) dt + ux(X, t)Z dβ.

All this can be generalized to the cases where β, Y, X take values in RN (resp. TN) and Z takes values in RN×N.

2.0.3 Stochastic differential equation

Let β be a brownian motion over RN. Let f ∈ Cb(RN × [0, T]; RN), g ∈ Cb(RN × [0, T]; RN×N) be Lipschitz continuous in x, uniformly with respect to t, and let X0 be an F0-measurable random variable. For the stochastic differential Problem

dX = f(X, t) dt + g(X, t) dβ,  X(0) = X0, (19)

we have the

Theorem 8 (The Cauchy Problem for stochastic differential equations). There exists a unique adapted process X ∈ C([0, T]; L2(Ω)) solution to (19). Besides, X has a.s. continuous trajectories.

2.0.4 Transition semi-group

For ϕ ∈ Cb(RN ), 0 ≤ s ≤ t, define

Pt,sϕ(x) = Eϕ(X(t, s, x)), (20)

where X(t, s, x) is the value at time t of the solution to dX = f(X, t)dt + g(X, t)dβ starting at x ∈ RN at time s. Then Pt,s is Feller, i.e. Pt,s sends Cb(RN) into Cb(RN), and we have the Chapman-Kolmogorov equation

Pt,σ Pσ,s = Pt,s, s ≤ σ ≤ t.

In the autonomous case f(X, t) = f(X), g(X, t) = g(X), we have Pt,s = Pt−s,0 =: Pt and PtPs = Pt+s, i.e. (Pt) is a semi-group over Cb(RN). Besides, (Pt) is strongly continuous (since f and g are bounded). The generator of (Pt) is

u ↦ f · Du + ½ gg∗ : D²u.

2.0.5 Invariant measure

A probability measure µ on RN is said to be invariant if

∫_RN Pt,sϕ(x) dµ(x) = ∫_RN ϕ(x) dµ(x)

for all ϕ ∈ Cb(RN) and all s ≤ t. Equivalently, P∗t,s µ = µ, where P∗t,s µ is defined by the duality ⟨P∗t,s µ, ϕ⟩ = ⟨µ, Pt,sϕ⟩, ϕ ∈ Cb(RN). Note that, if X is solution to (19), and µt is the law of X(t), then, for ϕ ∈ Cb(RN) and s ≤ t,

∫_RN ϕ(x) dµt(x) = Eϕ(X(t))
= Eϕ(X(t, s, X(s)))
= E(Pt,sϕ)(X(s))
= ∫_RN (Pt,sϕ)(x) dµs(x).

Consequently, we have the following

Proposition 9 (Invariant solution and invariant measure). Assume X is an invariant solution, i.e. the law µ of X(t) does not depend on t. Then µ is an invariant measure.

Hence, the existence of an invariant solution gives the existence of an invariant measure. Another way to exhibit invariant measures is to use the Krylov-Bogoliubov Theorem and the Prohorov Theorem.

Theorem 10 (Prohorov Theorem). Let E be a metric space. If a sequence (µn) of probability measures is tight, i.e. for all ε > 0 there exists a compact Kε in E such that µn(Kε) > 1 − ε for all n, then there is a subsequence (µnk) which is weakly converging to a probability measure µ: for all ϕ ∈ Cb(E),

lim_{k→+∞} ∫_E ϕ(x) dµnk(x) = ∫_E ϕ(x) dµ(x).

We consider the autonomous case in (19).

Theorem 11 (Krylov-Bogoliubov). Let µ be a probability measure on RN. If, for some probability measure ν and for some sequence (Tn) ↑ +∞, the sequence (1/Tn) ∫_0^{Tn} P∗t ν dt converges weakly to µ, then µ is an invariant measure.

Corollary 12. Suppose that, for some probability measure ν, the family {(1/T) ∫_0^T P∗t ν dt; T > 0} is tight. Then there exists an invariant measure.

2.0.6 Examples

1. ODE: For the autonomous equation Ẋ = f(X) (with, say, f continuously differentiable and bounded), the invariant measures are the (convex combinations of the) Dirac masses δx, x ∈ f⁻¹(0).

2. Langevin Equation: We consider the stochastic differential equation

dX = −bX dt + σ dβ, b > 0, σ ∈ R \ {0}. (21)

By Ito Formula,

½ d|X|² = −b|X|² dt + σX dβ + ½σ² dt,

and

½ E|X(T)|² + b ∫_0^T E|X(t)|² dt = ½ E|X(0)|² + ½σ² T.

In particular, taking ν = law of X(0), so that P∗t ν = law of X(t), we have, for µT := (1/T) ∫_0^T P∗t ν dt,

∫_RN |x|² dµT(x) ≤ C/b.

Considering the compact KR = B̄(0, R), we deduce

µT(KR^c) ≤ (1/R²) ∫_{|x|>R} |x|² dµT(x) ≤ C/(bR²),

hence µT(KR) ≥ 1 − C/(bR²) for all T, which shows that {µT}T>0 is tight. By Corollary 12, there exists an invariant measure for (21) (actually, this invariant measure is unique and is a Gaussian).
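The invariant Gaussian of (21) has variance σ²/(2b), and a long Euler-Maruyama trajectory (scheme and parameters chosen here for illustration, not from the text) equilibrates to it; a sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
b, sigma = 1.0, 1.0
dt, n_steps, burn = 0.01, 500_000, 10_000  # total time T = 5000

x, acc = 0.0, 0.0
for k in range(n_steps):
    # Euler-Maruyama step for dX = -b X dt + sigma dbeta
    x += -b * x * dt + sigma * np.sqrt(dt) * rng.normal()
    if k >= burn:
        acc += x * x
var_estimate = acc / (n_steps - burn)
print(var_estimate)  # close to sigma^2 / (2 b) = 0.5
```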

3. Stochastic Heat Equation: We consider the stochastic partial differential equation

dX(t) = b∆X(t) dt + dW in TN, b > 0. (22)

Let en : x ↦ e^{2πin·x}, n ∈ ZN, define the canonical Fourier basis on TN. Let (βn(t))n∈ZN be independent brownian motions over R. We assume that W is the cylindrical Wiener process defined by

W(t) = Σ_{n∈ZN} σn βn(t) en, (σn) ∈ l²(ZN), (23)

so that (22) can be seen as the countable collection of Langevin Equations for Xn := ⟨X, en⟩, X ∈ L²(TN):

dXn(t) = −4π²b|n|² Xn dt + σn dβn(t). (24)

All the following computations (and the fact that Pt is Feller) can be justified by using (24). By Ito Formula, we have

½ E‖X(T)‖²_{L²(TN)} + b ∫_0^T E‖∇X(t)‖²_{L²(TN)} dt = ½ E‖X(0)‖²_{L²(TN)} + ½ ‖σ‖²_{l²(ZN)} T. (25)

In particular, we have the bound

(1/T) ∫_0^T E‖X(t)‖²_{H1(TN)} dt ≤ C/b. (26)

Take ν = law of X(0), so that P∗t ν = law of X(t), and set µT = (1/T) ∫_0^T P∗t ν dt. We have

∫_{L²(TN)} ‖u‖²_{H1(TN)} dµT(u) ≤ C/b,

hence, for KR := {u ∈ L²(TN); ‖u‖_{H1(TN)} ≤ R},

µT(KR^c) ≤ (1/R²) ∫_{L²(TN)} ‖u‖²_{H1(TN)} dµT(u) ≤ C/(bR²)

and µT(KR) ≥ 1 − C/(bR²), which proves that {µT}T>0 is tight, and gives the existence of an invariant measure by Corollary 12.
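At stationarity, the balance in (25) reads b E‖∇X‖² = ‖σ‖²/2, and it can be checked mode by mode: each Xn in (24) is an Ornstein-Uhlenbeck process with stationary variance σn²/(8π²b|n|²). A 1D sketch with a hypothetical summable family (σn) (note the n = 0 mode is a pure Brownian motion, so σ0 = 0 is needed):

```python
import math

b = 0.5
modes = [n for n in range(-50, 51) if n != 0]  # sigma_0 = 0: drop n = 0
sigma = {n: math.exp(-abs(n)) for n in modes}  # hypothetical l^2 coefficients

# Stationary variance of the OU mode (24): sigma_n^2 / (2 * 4 pi^2 b n^2)
var = {n: sigma[n] ** 2 / (8 * math.pi ** 2 * b * n ** 2) for n in modes}

# E ||grad X||^2 = sum_n 4 pi^2 n^2 E|X_n|^2 against ||sigma||^2 / (2 b)
grad_energy = sum(4 * math.pi ** 2 * n ** 2 * var[n] for n in modes)
balance = sum(sigma[n] ** 2 for n in modes) / (2 * b)
print(grad_energy, balance)  # the two sums agree term by term
```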

4. Stochastic parabolic Equation: We consider the stochastic partial differential equation

dX(t) = div(A(X)) dt + b∆X(t) dt + dW in TN, b > 0, (27)

with W as in (23) and A ∈ C2(R; RN). Since ⟨div(A(X)), X⟩L² = 0, Ito Formula again gives (25) as above, hence the existence of an invariant measure.

In all the examples above (except the one of deterministic ODEs), the existence of an invariant measure is ensured by a dissipation mechanism contained in the ODE or PDE, as indicated by the fact that b > 0 (observe that all the bounds that give tightness, such as (26), depend on b⁻¹). What happens when b = 0?

3 Scalar conservation laws with stochastic forcing

Let (Ω, F, P, (Ft)t≥0) be a stochastic basis. We consider the first-order scalar conservation law with stochastic forcing

du + div(A(u)) dt = dW, x ∈ TN, t > 0. (28)

We suppose that the noise has the structure

W = div(𝒲),  𝒲(x, t) = Σ_{k=1}^∞ Fk(x) βk(t),

where the βk(t) are independent brownian motions on R and the Fk are regular functions, Fk ∈ C3(TN; RN), with

Σ_{k≥1} ‖Fk‖C3(TN;RN) < +∞.

The flux function A in (28) is supposed to satisfy A ∈ C2(R; RN). Eq. (28) will be appended with a (deterministic) initial condition

u(x, 0) = u0(x), x ∈ TN, u0 ∈ L∞(TN). (29)

3.1 Entropy solution

To define a notion of entropy solution to (28), we take advantage of the fact that the noise is additive (independent of u) to rewrite the equation as

d(u − W) + div(A(u)) dt = 0,

i.e.

∂tuW + div(B(x, t, uW)) = 0,  B(x, t, uW) := A(uW + W(x, t)), (30)

for the new unknown uW := u − W. For a.e. ω, Wω ∈ C^{0,1/3}_t C³_x, hence B(·, u) ∈ C^{0,1/3}_t C³_x. This is enough regularity to solve the Problem (30)-(29). By use of an adequate notion of generalized solution, one can prove the following result.

Definition 13 (Entropy solution). Let T > 0 and u0 ∈ L∞(Ω). We say that u ∈ L²(TN × (0, T) × Ω) is a solution to (28)-(29) if u(t) is adapted and if, for a.e. ω, uω − Wω is an entropy solution to (30)-(29) in the sense of (15).

Theorem 14. [VW09] For any u0 ∈ L∞(Ω), for any T > 0, there exists a unique entropy solution u ∈ L²(TN × (0, T) × Ω) to (28)-(29).

We also refer to [Kim03] and to [FN08] for the case of a noise depending on the unknown u (multiplicative noise, 1D).

3.2 Burgers’ Equation

Assume N = 1, A(u) = u²/2. Then (28) is the (inviscid) Burgers’ Equation with stochastic forcing

du + ∂x(u²/2) dt = dW(t). (31)

For (31), the Lax-Oleinik formula given in Section 1.2.1 holds. Actually, the term related to the source term in the action Aτ,t is now

∫_τ^t Σ_{k≥1} Fk(ξ(s)) dβk(s).

To avoid the use of the stochastic integral, we use integration by parts to rewrite it as

−∫_τ^t Σ_{k≥1} fk(ξ(s)) ξ̇(s)(βk(s) − βk(τ)) ds + Σ_{k≥1} Fk(ξ(t))(βk(t) − βk(τ)),  fk := F′k.

If ξ(t) is fixed to be x, then the term Fk(ξ(t))(βk(t) − βk(τ)) is independent of ξ, hence we redefine the action as

Aτ,t(ξ) = ∫_τ^t [ ½|ξ̇(s)|² − Σ_{k≥1} fk(ξ(s)) ξ̇(s)(βk(s) − βk(τ)) ] ds, (32)

for ξ ∈ C1[τ, t].

Remark: in all the analysis that follows we will consider minimizers of the action As,t for fixed ω, hence the latter should be denoted A^ω_{s,t}, which we will not do for brevity of notation.


Theorem 15. [EKMS00] Let u0 ∈ D, let t0 ∈ R and T > t0. Let the action Aτ,t be defined by (32). The entropy solution u ∈ L∞(T1 × (t0, T)) to (31) with initial datum u0 at t = t0 is given by

uω(x, t) = ∂/∂x inf_{ξ(t)=x} ( At0,t(ξ) + ∫_0^{ξ(t0)} u0(z) dz ),  t > t0,

for a.e. ω. Besides, for a.e. ω, for every t ∈ [t0, T], uω(t) ∈ D.

Notice that we have considered the solution to (31) starting at t0 ∈ R, not necessarily t0 = 0. It is straightforward to generalize the definition of entropy solution given in Def. 13 to this setting. However, it supposes that the brownian processes βk(t) are defined for −∞ < t < +∞. One can construct a Brownian motion β on R by concatenation of two independent Brownian motions β♭ and β♯: β(t) = β♭(t⁻) + β♯(t⁺), where t⁺ = max(t, 0) and t⁻ = max(−t, 0).

4 Invariant measure for the stochastic Burgers’ Equation

We will now follow §§ 1-4 in [EKMS00] to prove the following

Theorem 16. The stochastic inviscid Burgers’ Equation (31) admits an invariant measure.

We have seen above (Example 4 in Section 2.0.6) that stochastic parabolic equations admit an invariant measure because of dissipation mechanisms. For the inviscid Burgers’ Equation, the dissipation (of the energy supplied by the source term) occurs in shocks. Actually, shocks result from the non-linear character of the flux u ↦ u²/2.

4.1 One-sided minimizers

To construct an invariant measure, we will construct an invariant solution (cf. Prop. 9). To this aim, we will show that, for a.e. ω, there exists a solution uω starting from u0 ≡ 0 at t0 = −∞. This solution will be built via minimizers of the action lim_{t0→−∞} At0,0. More properly, we will consider one-sided minimizers according to the following definition.

Definition 17 (One-sided minimizer). Let t ∈ R. A piecewise C1-curve ξ : (−∞, t] → T1 is a one-sided minimizer if

As,t(ξ) ≤ As,t(ξ̃)

for any compact perturbation ξ̃ ∈ Lip((−∞, t]) such that ξ̃(t) = ξ(t), ξ̃ = ξ on (−∞, τ], and any s ≤ τ.


Remark (regularity of minimizers): for a.e. ω, βk(·, ω) ∈ C0,1/3(R) for all k. For such ω, any minimizer (among piecewise C1-curves) of the action At1,t2, t1 < t2, will actually be in C1,1/3[t1, t2]. This can be seen by observing that

¼ ∫_{t1}^{t2} |ξ̇(s)|² ds − Σ_{k≥1} ‖fk‖C(T1) ∫_{t1}^{t2} |βk(s) − βk(t1)| ds ≤ At1,t2(ξ), (33)

which shows that any minimizer is in H1(t1, t2), and then by writing the Euler equation for the minimization of At1,t2, which gives

ξ̇(τ) = ξ̇(s) + Σ_{k≥1} [ fk(ξ(τ))(βk(τ) − βk(t2)) − fk(ξ(s))(βk(s) − βk(t2)) ]
− Σ_{k≥1} ∫_s^τ f′k(ξ(σ)) ξ̇(σ)(βk(σ) − βk(t2)) dσ, (34)

for a.e. t1 < s < τ < t2.

In relation with the regularity of minimizers, we have the following

Proposition 18. Let s < τ < t and let ξ ∈ C([s, t]), with ξ ∈ C1([s, τ]) and ξ ∈ C1([τ, t]), have a corner at τ: ξ̇(τ−) ≠ ξ̇(τ+). Then, by smoothing of ξ, the action As,t(ξ) is strictly decreased.

Proof: Let ξε be a regularization of ξ around σ = τ. We can suppose that ξε and ξ have the same ends, in which case As,t(ξε) − As,t(ξ) = A′s,t(ξε) − A′s,t(ξ), where

A′s,t(ξ) = ∫_s^t [ ½|ξ̇(σ)|² − Σ_{k≥1} fk(ξ(σ)) ξ̇(σ)(βk(σ) − βk(τ)) ] dσ.

Since |βk(σ) − βk(τ)| = O(ε^{1/3}) for |σ − τ| < ε, it is the first term in A′s,t that is dominant in the comparison of A′s,t(ξε) with A′s,t(ξ). It is obvious that, by right or left smoothing of ξ, according to the sign of ξ̇(τ+) − ξ̇(τ−), one strictly decreases ‖ξ̇‖L2.

Remark (restriction of minimizers): Observe that, for s ≤ τ ≤ t, we have As,τ + Aτ,t = As,t + Rs,τ,t with

Rs,τ,t(ξ) := Σ_{k≥1} [Fk(ξ(t)) − Fk(ξ(τ))][βk(τ) − βk(s)].

In particular, if γ is a minimizer of As,t, then, for [s′, t′] ⊂ [s, t], γ is a minimizer of As′,t′ among all piecewise C1 curves having the same ends as γ at s′ and t′.


4.1.1 Bound on velocities

Lemma 19. For a.e. ω and any t ∈ R, there exist T(ω, t) and C(ω, t) such that, if γ minimizes At1,t and t1 < t − T(ω, t), then

|γ̇(t)| ≤ C(ω, t).

Remark: in the case Fk ≡ 0 (no noise), the minimizers of the action are straight lines:

ξ̇(s) = (ξ(t) − ξ(t1)) / (t − t1).

In particular, since |ξ(t) − ξ(t1)| ≤ 1, we have |ξ̇(t)| ≤ 1 as soon as t1 < t − 1. The proof of Lemma 19 rests on this simple fact together with a perturbation argument.

Proof of Lemma 19: Set

C1(ω, t) = ¼ + max_{t−1≤s≤t} Σ_{k≥1} ‖Fk‖C2(T1) |βk(s) − βk(t)|

and T(ω, t) = 1/(4C1(ω, t)), C(ω, t) = 20C1(ω, t). Set also

v∗ = |γ̇(t)|,  vM = max_{t−T≤s≤t} |γ̇(s)|,  vm = min_{t−T≤s≤t} |γ̇(s)|.

We first show a Harnack inequality:

v∗ > 16C1 ⟹ ½v∗ ≤ vm ≤ vM ≤ (3/2)v∗. (35)

Indeed, by modification of (34), we have, for s ∈ [t − T, t],

γ̇(s) = γ̇(t) + Σ_{k≥1} fk(γ(s))(βk(s) − βk(t)) + ∫_s^t γ̇(σ) Σ_{k≥1} f′k(γ(σ))(βk(σ) − βk(t)) dσ,

hence

|γ̇(s)| ≤ v∗ + C1 + C1 T vM ≤ v∗ + C1 + ¼vM.

It follows that vM ≤ v∗ + C1 + ¼vM and vM ≤ (4/3)(v∗ + C1), whence the inequality vM ≤ (3/2)v∗ in (35). Similarly, we have

|γ̇(s) − γ̇(t)| ≤ C1 + ¼vM

and, if v∗ > 16C1,

|γ̇(s) − γ̇(t)| ≤ (1/16)v∗ + (3/8)v∗ ≤ ½v∗,

which gives the lower bound vm ≥ ½v∗ in (35). Now, if ξ is a C1-curve having the same extremities as γ at t − T and t, we have

0 ≤ At−T,t(ξ) − At−T,t(γ)
= ½ ∫_{t−T}^t (|ξ̇(s)|² − |γ̇(s)|²) ds − Σ_{k≥1} ∫_{t−T}^t [fk(ξ(s))ξ̇(s) − fk(γ(s))γ̇(s)](βk(s) − βk(t)) ds
≤ (T/2)( max_{t−T≤s≤t} |ξ̇(s)|² − vm² ) + C1T( max_{t−T≤s≤t} |ξ̇(s)| + vM ).

Taking for ξ the straight line joining γ(t − T) to γ(t), we have |ξ̇(s)| ≤ T⁻¹, hence

0 ≤ At−T,t(ξ) − At−T,t(γ) ≤ ½(1/T − T vm²) + ¼(1/T + vM) ≤ 3/(4T) − (T/2)vm² + ¼vM. (36)

If v∗ > 16C1, then (35) and (36) give 0 ≤ 3/(4T) − (T/8)v∗² + ¼v∗, which implies v∗ ≤ 20C1. This concludes the proof of the lemma.

Corollary 20. For a.e. ω and any t1 < t′ < t, there exist T(ω, t) and C(ω, t, t′) such that, if γ minimizes At1,t and t1 < t − T(ω, t), then, for every s ∈ [t′, t],

|γ̇(s)| ≤ C(ω, t, t′).

Proof: By Lemma 19, we have |γ̇(t)| ≤ C(ω, t). Let ξ be the straight line joining γ(t′) to γ(t). Using At′,t(γ) ≤ At′,t(ξ) in (33) gives ‖γ̇‖L2(t′,t) ≤ C(ω, t, t′). Reporting in (34), where s = t, we obtain |γ̇(τ)| ≤ C(ω, t, t′) (possibly for a different constant), for τ ∈ [t′, t].

Lemma 21 (Limit of minimizers). For x ∈ T1, t ∈ R, let (γn) be a sequence of minimizers of the actions Atn,t with fixed end γn(t) = x. Suppose that tn → −∞ and that γn → γ ∈ C1((−∞, t]) in the C1-convergence on every compact subset of (−∞, t]. Then γ is a one-sided minimizer with end γ(t) = x.

Proof: Let ξ ∈ C1((−∞, t]) satisfy ξ(t) = x, ξ = γ on (−∞, τ]. Fix s ≤ τ and let (ξn) be a sequence of C1[s, t]-curves such that ξn and γn have the same extremities and ξn → ξ in C1[s, t]. Such a (ξn) exists since γn(s) → γ(s) = ξ(s). We have As,t(γ) = lim_{n→+∞} As,t(γn) and, for tn ≤ s, As,t(γn) ≤ As,t(ξn). Since As,t(ξn) → As,t(ξ), it follows that As,t(γ) ≤ As,t(ξ).

As a corollary of Corollary 20 and Lemma 21, we obtain the existence of one-sided minimizers.

Theorem 22 (Existence of one-sided minimizers). For a.e. ω, for every x ∈ T1, t ∈ R, there exists a one-sided minimizer γ such that γ(t) = x.

Proof: For fixed ω, x, t, let γn be a minimizer of A−n,t satisfying γn(t) = x (that γn exists results from (33)). For n > T(t, ω) − t and −n < s < t, we have ‖γ̇n‖C[s,t] ≤ C(ω, t, s) by Corollary 20, hence, up to a subsequence, there exists γ ∈ H1(s, t) such that γn → γ in C[s, t] and γ̇n ⇀ γ̇ weakly in L2(s, t). From (34), it follows that, possibly after a new extraction, γn → γ in C1[s, t]. By a diagonal process, there exists γ ∈ C1((−∞, t]) such that γn → γ in the C1-convergence on any compact of (−∞, t]. By Lemma 21, γ is a one-sided minimizer with γ(t) = x.

4.1.2 Intersection of minimizers

The following lemma is a classical fact for one-sided minimizers.

Lemma 23. Two different one-sided minimizers γ1 ∈ C1((−∞, t1]) and γ2 ∈ C1((−∞, t2]) cannot intersect each other more than once.

Proof: suppose γ1(t) = γ2(t) for t ∈ {t3, t4}, t3 < t4. Assume without loss of generality At3,t4(γ1) ≤ At3,t4(γ2). Define the curve

ξ(s) = γ2 · γ1 · γ2(s) = γ1(s) for s ∈ [t3, t4], and γ2(s) for s ∈ (−∞, t3] ∪ [t4, t2].

Then, for s < t3, one has As,t2(ξ) ≤ As,t2(γ2). Since γ̇1(t3) ≠ γ̇2(t3), ξ has a corner at t3. By smoothing ξ, we obtain (by Prop. 18) a curve ξ∗ ∈ C1((−∞, t2]) such that ξ∗(t2) = γ2(t2) and

As,t2(ξ∗) < As,t2(γ2)

for some s < t3. This contradicts the fact that γ2 is a one-sided minimizer.

The following lemma is also classical:

Lemma 24. For all η > 0, T > 0, for a.e. ω, there exists a sequence of random times tn(ω) → −∞ such that

∀n,   max_{tn−T ≤ s ≤ tn} ∑_{k≥1} ‖Fk‖C2(T1) |βk(s) − βk(tn)| ≤ η.


Proof: For m ∈ N, consider the event

Am = { max_{(−m−1)T ≤ s ≤ −mT} ∑_{k≥1} ‖Fk‖C2(T1) |βk(s) − βk(−mT)| ≤ η }.

The events Am are independent and P(Am) > 0 for every m; hence, by the second Borel–Cantelli lemma,

P(ω ∈ Am for infinitely many m) = 1.

This gives the result for tn = −mnT, where (mn) is the increasing sequence of indices m such that ω ∈ Am.
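The positivity P(Am) > 0, which drives the Borel–Cantelli argument above, can be explored numerically. The sketch below is illustrative and not part of the text: a single Brownian mode stands in for the weighted sum over k, and it estimates by Monte Carlo the probability that the increment stays below η on an interval of length T.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def small_noise_event(T, eta, n_steps=200):
    """One trial: does a discretized Brownian path on [0, T] satisfy
    max_s |B(s) - B(0)| <= eta?  (One mode stands in for the
    weighted sum over k in Lemma 24.)"""
    increments = rng.normal(0.0, np.sqrt(T / n_steps), size=n_steps)
    path = np.cumsum(increments)
    return np.max(np.abs(path)) <= eta

def estimate_prob(T, eta, trials=2000):
    """Monte Carlo estimate of P(A_m) for one noise mode."""
    hits = sum(small_noise_event(T, eta) for _ in range(trials))
    return hits / trials
```

The estimate is positive for every η > 0 (though small when η is small relative to √T), and since the events over the disjoint blocks [(−m−1)T, −mT] are independent, the second Borel–Cantelli lemma yields infinitely many quiet blocks, as in the proof.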

Lemma 24 asserts that, with probability one, the noise is arbitrarily small on an infinite number of arbitrarily long intervals. We use this result to prove the following

Lemma 25. For a.e. ω, for all ε > 0, for any one-sided minimizers γi ∈ C1(−∞, t1], i = 1, 2, there exist T = T(ε) > 0 and an infinite sequence of random times tn(ω, ε) → −∞ such that, for all n, there exists a C1-curve γ1,2 (resp. γ2,1) reconnecting γ1 to γ2 (resp. γ2 to γ1) between tn − T and tn such that

|Atn−T,tn(γi) − Atn−T,tn(ξ)| < ε,   i = 1, 2,   ξ ∈ {γ1,2, γ2,1}.

We say that a C1-curve γ1,2 reconnects γ1 to γ2 between τ and t, τ < t, if γ1,2 and γ̇1,2 coincide with γ1 and γ̇1 at τ, and with γ2 and γ̇2 at t.

Proof of Lemma 25: Fix η = η(ε) > 0, T = T(ε) > 0 and apply Lemma 24: there is a sequence tn(ω, ε) → −∞ such that

∀n,   max_{tn−T ≤ s ≤ tn} ∑_{k≥1} ‖Fk‖C2(T1) |βk(s) − βk(tn)| ≤ η.

Taking T = (4η)−1 and repeating the proofs of Lemma 19 and Corollary 20, we obtain

max_{tn−T ≤ s ≤ tn} |γ̇i(s)| ≤ c/T,   i = 1, 2,

where c is a given numerical constant. Then it is easy to construct γ1,2 and γ2,1 with

|γ̇1,2(s)|, |γ̇2,1(s)| ≤ (c + 1)/T,   s ∈ [tn − T, tn].

We then compute |Atn−T,tn(γi) − Atn−T,tn(ξ)| < c′/T, where c′ is a numerical constant, i = 1, 2, ξ = γ1,2 or γ2,1. The result follows by taking T = c′/ε (that is, η(ε) = ε/(4c′)).

Lemma 25 contains the information that (as a result of the randomness of the force) two one-sided minimizers have an effective intersection at −∞. As a result, in light of Lemma 23, we have the following


Theorem 26. For a.e. ω, let γ1 and γ2 be distinct one-sided minimizers over (−∞, t1] and (−∞, t2] respectively, and assume that γ1 and γ2 intersect at a point (x, t). Then t1 = t2 = t and γ1(t1) = γ2(t2) = x.

Proof: Suppose, for example, that t < t1. The curve

γ2 · γ1(s) :=
    γ2(s),  s ∈ (−∞, t],
    γ1(s),  s ∈ [t, t1],

has a corner at time t. Hence, for a given τ < t, it can be smoothed out according to Prop. 18, and the resulting curve γ2 ⋄ γ1 satisfies

Aτ,t1(γ2 · γ1) − Aτ,t1(γ2 ⋄ γ1) = δ > 0.

Set ε = δ/4. Let (tn) be given by Lemma 25 and take n large enough so that γ2 ⋄ γ1 = γ2 on (−∞, tn]. We have

|Atn,t(γ2)−Atn,t(γ1)| ≤ 2ε. (37)

Indeed, in the case Atn,t(γ2)−Atn,t(γ1) > 2ε for example, the curve

γ2 · γ2,1 · γ1(s) =
    γ2(s),  s ∈ (−∞, tn − T],
    γ2,1(s),  s ∈ [tn − T, tn],
    γ1(s),  s ∈ [tn, t],

is a local perturbation of γ2 with the same end at t and, for s ≤ tn − T ,

As,t(γ2) − As,t(γ2 · γ2,1 · γ1) = [Atn−T,tn(γ2) − Atn−T,tn(γ2,1)] + [Atn,t(γ2) − Atn,t(γ1)]
    ≥ −ε + 2ε = ε,

which contradicts the fact that γ2 is a one-sided minimizer. Now, the curve

ξ(s) = γ1 · γ1,2 · (γ2 ⋄ γ1)(s) =
    γ1(s),  s ∈ (−∞, tn − T],
    γ1,2(s),  s ∈ [tn − T, tn],
    (γ2 ⋄ γ1)(s),  s ∈ [tn, t1],

is a local perturbation of γ1 with the same end at t1 and, for s ≤ tn − T ,

As,t1(γ1) − As,t1(ξ) = [Atn−T,tn(γ1) − Atn−T,tn(γ1,2)] + [Atn,t(γ1) − Atn,t(γ2)]
    + [Aτ,t1(γ2 · γ1) − Aτ,t1(γ2 ⋄ γ1)]
    ≥ −ε − 2ε + δ = δ − 3ε > 0,

and this contradicts the fact that γ1 is a one-sided minimizer.


4.1.3 Invariant solution

We now define

uω+(x, t) = inf_{γ ∈ M(x,t,ω)} γ̇(t),   uω−(x, t) = sup_{γ ∈ M(x,t,ω)} γ̇(t),

where the inf (resp. sup) is taken over the set M(x, t, ω) of one-sided minimizers γ such that γ(t) = x. The invariant solution is defined by u = u+. Using the analysis in Sections 4.1.1 and 4.1.2 above, we will prove that, for all t, for a.e. ω, uω(t) ∈ D and that the map Ω → D, ω 7→ uω(t) is measurable.

Lemma 27. For a.e. ω, for every t ∈ R, the set of x ∈ T1 such that #M(x, t, ω) > 1 is at most countable.

Proof: By Lemma 23, to any x ∈ T1 with #M(x, t, ω) > 1 there corresponds a non-trivial segment I(x) = [γ1(t − 1), γ2(t − 1)], where γ1, γ2 ∈ M(x, t, ω) satisfy γ1 < γ2 on (−∞, t). By Theorem 26, the segments {I(x); #M(x, t, ω) > 1} are mutually disjoint. This gives the result.

It follows from Lemma 27 that, for a.e. ω, for every t ∈ R, uω+(x, t) = uω−(x, t), except possibly at a countable number of points x ∈ T1. Besides, we have

Lemma 28. For a.e. ω, for every t ∈ R, x ∈ T1,

lim_{y↑x} uω+(y, t) = uω−(x, t),   lim_{y↓x} uω+(y, t) = uω+(x, t),

and hence uω(t) ∈ D.

Proof: We have |uω+(y, t)| ≤ C(ω, t) for all y ∈ T1, by Lemma 19. Assume that there exists a sequence (yn) ↑ x with uω+(yn, t) → v∗ ∈ R. Pick γn ∈ M(yn, t, ω) realizing the infimum, i.e. γ̇n(t) = uω+(yn, t). By repeating the arguments of Lemma 21 and Theorem 22, we see that, up to extraction of a subsequence, there is a one-sided minimizer γ∗ ∈ M(x, t, ω) such that γn → γ∗ for the C1-convergence on any compact subset of (−∞, t]. In particular v∗ = γ̇∗(t) ≤ uω−(x, t). If v∗ = γ̇∗(t) < uω−(x, t), then there exists γ∗∗ ∈ M(x, t, ω) with γ̇∗∗(t) > γ̇∗(t). But then, for n large enough, γn must intersect γ∗∗, which contradicts Theorem 26.

We now turn to the question of measurability of u. The space D is endowed with the Skorokhod topology, and we denote by B(D) the σ-algebra of Borel sets. Any v ∈ D is redefined as a right-continuous function. If E is a dense subset of T1, then the cylinders

π_I^{−1}(A),   A ∈ B(Rk),   I = (x1, . . . , xk) ∈ Ek,   πI(v) = (v(x1), . . . , v(xk)),


generate B(D). In particular, if v : ω 7→ vω is a mapping Ω → D such that (ω, x) 7→ vω(x) is measurable as a mapping Ω × T1 → R, then v is measurable (Ω,F) → (D,B(D)). We refer to Billingsley [Bil99] on this subject. The result is the following:

Lemma 29. For all t ∈ R, the mapping (Ω,F) → (D,B(D)), ω 7→ uω+(·, t), is measurable.

Proof: Without loss of generality, suppose t = 0. Let (tn) be a sequence of negative times such that tn → −∞. Introduce, for x ∈ T1, the infimum of the action

An(x) := inf_{ξ(0)=x} Atn,0(ξ).

For all n, denote by uωn,+ the right-continuous solution to (31) starting from 0 at t = tn. By Theorem 14 and Theorem 15, un,+ : Ω → D is measurable. Set

J = {(ω, x) ∈ Ω × T1; uω+(x, 0) ≠ uω−(x, 0)}.

Admit for the moment that J is measurable. Lemma 27 implies that, for a.e. ω, the section Jω = {x ∈ T1; (ω, x) ∈ J} has Lebesgue measure 0. By Fubini's theorem, it follows that, for every x in a set E of full Lebesgue measure in T1, we have P(Jx) = 0, where Jx is the section {ω ∈ Ω; (ω, x) ∈ J}. For x ∈ E, we have uω+(x, 0) = lim_{n→+∞} uωn,+(x, 0) for a.e. ω; in particular, ω 7→ uω+(x, 0) is measurable (Ω,F) → (R,B(R)). Since E is dense in T1, u+(·, 0) is measurable (Ω,F) → (D,B(D)). To prove that J is measurable, consider the solution ξx,v to the stochastic differential equation

dξ(s) = ν(s) ds,   dν(s) = ∑_{k≥1} fk(ξ(s)) dβk(s),   s ≤ 0,

with data ξ(0) = x, ν(0) = v (such a solution exists globally in time s ≤ 0 since T1 is compact) and introduce the sets

En(k, m) = {(ω, x, v) ∈ Ω × T1 × R; |ξ̇x,v(0) − uωn,+(x, 0)| > 1/k,  Atn,0(ξx,v) − An(x) ≤ 1/m}.

Then each set En(k, m) is measurable, and so is

J = pr1,2( ∪_{k≥1} ∩_{m≥1} ∩_{p≥1} ∪_{n≥p} En(k, m) ),

where pr1,2 : Ω × T1 × R → Ω × T1 is the projection (ω, x, v) 7→ (ω, x). If (ω, x) ∈ J, then in particular there exist v ∈ R and a C1-curve ξ such that ξ(0) = x and, for all ε > 0, Atn,0(ξ) − An(x) < ε for infinitely many n. This implies that ξ is a one-sided minimizer (if γ ∈ C1(−∞, 0] is a local perturbation of ξ which decreases the action Aτ,0 by δ > 0, then we get a contradiction by taking ε = δ/2 and tn ≤ τ). Since |ξ̇x,v(0) − uωn,+(x, 0)| > 1/k for some k > 0, and since any limit of a subsequence of (uωn,+(x, 0)) is the slope at t = 0 of a one-sided minimizer taking the value x at t = 0, the set displayed above is exactly the set of (ω, x) ∈ Ω × T1 such that at least two one-sided minimizers arrive at (x, 0), that is, the set J. This concludes the proof of the lemma.

Proof of Theorem 16: For c ∈ R, let

Dc = {v ∈ D; ∫_{T1} v = c}.

Since the forcing in (31) is the spatial derivative of a potential by hypothesis, Eq. (31) preserves the “mass” ∫_{T1} u(·, t); hence the evolution takes place in a fixed Dc, c ∈ R. We consider here the case c = 0 (since we constructed a solution starting from 0), but the case of a general c can be handled as well by a suitable modification of the action [EKMS00]. Therefore, we aim at proving the following statement (a little more precise than the statement of Theorem 16): the stochastic inviscid Burgers' Equation (31) admits an invariant measure over D0. To that purpose, notice that, for u = u+, we have uω(x, t) = u^{θtω}(x, 0), where θtω(s) := ω(s + t) (the probability space Ω is taken to be the space of sequences (βk(·)) of elements of C(−∞,∞) with the Wiener measure); hence the law of u(t) is the law of u(0), and u is indeed invariant. By Prop. 9, there exists an invariant measure (the law of u(0)) on D0.
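The mass-conservation point in the proof can be checked on a toy discretization. The following sketch is purely illustrative and not from [EKMS00]: the Lax–Friedrichs scheme, the single zero-mean forcing mode F(x) = sin(2πx) and all parameter values are our own choices. Because the scheme is conservative and the forcing has zero spatial mean, the discrete mass ∫_{T1} u stays at its initial value c = 0, so the evolution indeed stays in (a discretization of) D0.

```python
import numpy as np

def lax_friedrichs_step(u, dt, dx, dW_field):
    """One conservative Lax-Friedrichs step for u_t + (u^2/2)_x = force
    on the periodic grid, plus the noise increment dW_field."""
    f = 0.5 * u ** 2
    avg = 0.5 * (np.roll(u, -1) + np.roll(u, 1))            # periodic neighbors
    flux = 0.5 * (dt / dx) * (np.roll(f, -1) - np.roll(f, 1))
    return avg - flux + dW_field

def simulate(n=128, steps=400, dt=1e-3, seed=0):
    """Evolve from u = 0 (the case c = 0 of the text) with one
    zero-mean forcing mode driven by a Brownian increment per step."""
    rng = np.random.default_rng(seed)
    dx = 1.0 / n
    x = np.arange(n) * dx
    F = np.sin(2 * np.pi * x)        # illustrative zero-mean forcing mode
    u = np.zeros(n)
    for _ in range(steps):
        dB = rng.normal(0.0, np.sqrt(dt))
        u = lax_friedrichs_step(u, dt, dx, F * dB)
    return x, u
```

The averaging and flux-difference terms sum to zero over the periodic grid, so only the mean of F (which vanishes) can change the total mass; running the simulation longer gives samples whose statistics stabilize, in line with the invariant-measure statement.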

We emphasize the fact that, additionally, the invariant measure on D0 is unique. We will not develop this point here, nor the properties of the invariant measure; see [EKMS00] on this subject.

Acknowledgment: I address all my thanks to Etienne Emmrich and Petra Wittbold, organizers of the Spring School Analytical and Numerical Aspects of Evolution Equations, T.U. Berlin, 30/03/09–03/04/09.

References

[Bil99] P. Billingsley, Convergence of probability measures, second ed., Wiley Series in Probability and Statistics, John Wiley & Sons Inc., New York, 1999.

[EKMS00] Weinan E, K. Khanin, A. Mazel, and Ya. Sinai, Invariant measures for Burgers equation with stochastic forcing, Ann. of Math. (2) 151 (2000), no. 3, 877–960.

[FN08] J. Feng and D. Nualart, Stochastic scalar conservation laws, J. Funct. Anal. 255 (2008), no. 2, 313–373.


[Kim03] J. U. Kim, On a stochastic scalar conservation law, Indiana Univ. Math. J. 52 (2003), no. 1, 227–256.

[Kru70] S. N. Kružkov, First order quasilinear equations with several independent variables, Mat. Sb. (N.S.) 81 (123) (1970), 228–255.

[Lax57] P. D. Lax, Hyperbolic systems of conservation laws. II, Comm. Pure Appl. Math. 10 (1957), 537–566.

[LPT94] P.-L. Lions, B. Perthame, and E. Tadmor, A kinetic formulation of multidimensional scalar conservation laws and related equations, J. Amer. Math. Soc. 7 (1994), no. 1, 169–191.

[Ole57] O. A. Oleĭnik, Discontinuous solutions of non-linear differential equations, Uspehi Mat. Nauk (N.S.) 12 (1957), no. 3(75), 3–73.

[Per02] B. Perthame, Kinetic formulation of conservation laws, Oxford Lecture Series in Mathematics and its Applications, vol. 21, Oxford University Press, Oxford, 2002.

[VW09] G. Vallet and P. Wittbold, On a stochastic first-order hyperbolic equation in a bounded domain.
