Martingales: Almost Sure Convergence and Doob's Inequality
Ines B. Almeida
Probability Theory Course
December 12th, 2016
Ines B. Almeida (Probability Theory Course) Martingales December 12th, 2016 1 / 24
Outline
1. Introduction
2. Predictable Sequences and Stopping Times
3. Upcrossing Inequality
4. Almost Sure Convergence
5. Doob’s Decomposition
6. Doob’s Inequality
Filtrations
Definition (Filtration)
A filtration is an increasing sequence of σ-fields: Fn ⊆ Fn+1 ∀n.
A sequence Xn is adapted to a filtration Fn if Xn ∈ Fn ∀n.
The σ-field Fn = σ(X1, ..., Xn) can be thought of as the information known at time n.
Martingales
A martingale models the fortune of a player betting on a fair game.
Definition (Martingale)
A sequence (Xn, Fn) is said to be a martingale if
1. E|Xn| <∞,
2. Xn ∈ Fn ∀n,
3. E(Xn+1|Fn) = Xn ∀n.
Property (3) may be replaced by an inequality:
- E(Xn+1|Fn) ≤ Xn ∀n: Xn is then a supermartingale,
- E(Xn+1|Fn) ≥ Xn ∀n: Xn is then a submartingale.
Predictable Sequences
Definition (Predictable Sequence)
Let Fn be a filtration. Hn is said to be a predictable sequence if Hn ∈ Fn−1 ∀n.

The value of a predictable sequence at time n can be predicted with certainty from the information available at time n − 1.
Predictable Sequences
Let’s play a game...
- flip a fair coin over and over,
- win (lose) the bet if a flip results in heads (tails),
- bet one euro each time.
Let Xm be the player’s fortune at time m. The winnings at time n are
(H · X)n = ∑_{m=1}^{n} Hm(Xm − Xm−1).
One popular strategy is to double the bet when we lose. Is this strategy any good?
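To make the bookkeeping concrete, here is a minimal Python sketch (illustrative, not part of the original notes) of the double-on-loss strategy; the flip sequence is passed in explicitly, with True meaning heads:

```python
def doubling_strategy(flips):
    """Winnings (H . X)_n of the double-on-loss strategy for a given
    sequence of coin flips (True = heads = win the current stake)."""
    stake = 1.0      # H_1 = 1 euro
    winnings = 0.0   # (H . X)_0 = 0
    path = []
    for heads in flips:
        winnings += stake if heads else -stake
        stake = 1.0 if heads else 2.0 * stake  # double after every loss
        path.append(winnings)
    return path

# lose four times, then win once: the single win recoups all losses + 1
print(doubling_strategy([False, False, False, False, True]))
# -> [-1.0, -3.0, -7.0, -15.0, 1.0]
```

A single win recoups every previous loss plus one euro, but the required stakes grow exponentially; Theorem 5.2.5 (next) explains why no bounded, nonnegative predictable strategy can produce positive expected winnings against a fair or unfavorable game.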
Predictable Sequences
Theorem (5.2.5)
Let {Xn}n≥0 be a supermartingale. If Hn ≥ 0 is a predictable sequence and each Hn is bounded, then (H · X)n is a supermartingale.
Proof:

E((H · X)n+1|Fn) = E((H · X)n|Fn) + E(Hn+1(Xn+1 − Xn)|Fn)
= (H · X)n + Hn+1 E(Xn+1 − Xn|Fn)
= (H · X)n + Hn+1 [E(Xn+1|Fn) − Xn] ≤ (H · X)n

since Hn+1 ≥ 0 and E(Xn+1|Fn) − Xn ≤ 0.
Similar results can be proved for submartingales and martingales.
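Theorem 5.2.5 can be checked exactly on a small finite probability space. The Python sketch below (illustrative; the walk, its bias, and the strategy H are assumptions of the example) enumerates all paths of a downward-biased ±1 walk, which is a supermartingale, and computes E((H · X)m) exactly for a bounded, nonnegative, predictable Hm:

```python
from itertools import product

# X_n: +/-1 walk with P(step = +1) = 0.4 (downward drift => supermartingale).
# H_m depends only on X_0..X_{m-1}: predictable, bounded, nonnegative.
p_up, n = 0.4, 6

def H(m, xs):
    # bet 2 euros when currently losing, 1 otherwise (a "chasing" strategy)
    return 2.0 if xs[m - 1] < 0 else 1.0

expectations = [0.0] * (n + 1)  # expectations[m] = E((H . X)_m)
for steps in product([1, -1], repeat=n):
    prob, xs = 1.0, [0]
    for s in steps:
        prob *= p_up if s == 1 else 1 - p_up
        xs.append(xs[-1] + s)
    hx = 0.0
    for m in range(1, n + 1):
        hx += H(m, xs) * (xs[m] - xs[m - 1])  # (H . X)_m along this path
        expectations[m] += prob * hx

print([round(e, 4) for e in expectations])
```

The printed expectations start at 0 and are nonincreasing in m, exactly as the theorem predicts.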
Stopping Time
Definition (Stopping Time)
A r.v. N taking values in {1, 2, 3, . . .} ∪ {∞} is said to be a stopping time if {N = n} ∈ Fn ∀n.
In words: if N is the time a player stops gambling, the decision to stop at time n has to be measurable w.r.t. the information available at that time.
Stopping Time
Suppose Hn = 1_{N≥n}. Since Hn is predictable ({N ≥ n} = {N ≤ n − 1}ᶜ ∈ Fn−1), we have that (H · X)n = X_{N∧n} − X0 is a supermartingale. The constant sequence X0 is also a supermartingale, and it follows that
Theorem (5.2.6)
If N is a stopping time and Xn is a supermartingale, then X_{N∧n} is also a supermartingale.
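Theorem 5.2.6 admits the same kind of exact check (an illustrative Python sketch; the walk and the hitting level are assumptions of the example): stop the downward-biased walk when it first hits −2 and verify that E(X_{N∧n}) is nonincreasing in n.

```python
from itertools import product

# X_n: +/-1 walk with P(step = +1) = 0.4, a supermartingale.
# N = first time X hits -2 (a stopping time); X_{N ^ n} freezes at -2.
p_up, n = 0.4, 6

e_stopped = [0.0] * (n + 1)  # e_stopped[m] = E(X_{N ^ m})
for steps in product([1, -1], repeat=n):
    prob, path = 1.0, [0]
    for s in steps:
        prob *= p_up if s == 1 else 1 - p_up
        path.append(path[-1] + s)
    frozen = False
    for m in range(n + 1):
        e_stopped[m] += prob * (-2 if frozen else path[m])
        if path[m] == -2:
            frozen = True

print([round(v, 4) for v in e_stopped])
```

The expectations decrease until the walk is likely absorbed, confirming that the stopped process is still a supermartingale.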
Upcrossing Inequality
Let Xn be a submartingale, a < b, N0 = −1, and
N_{2k−1} = inf{m > N_{2k−2} : Xm ≤ a},  N_{2k} = inf{m > N_{2k−1} : Xm ≥ b}.
Notice Nj are stopping times. Also, define the predictable sequence
Hm = 1 if N_{2k−1} < m ≤ N_{2k} for some k, and Hm = 0 otherwise.
From N_{2k−1} to N_{2k}, Xm crosses from below a to above b. Hm profits from these upcrossings.
Upcrossing Inequality
Let Un = sup{k : N_{2k} ≤ n} be the number of upcrossings completed up to time n.
Theorem (5.2.7 Upcrossing Inequality)
If {Xm}m≥0 is a submartingale then

(b − a) E(Un) ≤ E((Xn − a)+) − E((X0 − a)+).
In words: you cannot lose money by betting on a favorable game.
This result is useful to prove the Martingale Convergence Theorem (next).
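Both the upcrossing count Un and the inequality itself can be verified exactly by enumeration. In this illustrative Python sketch (the walk, its bias, and the levels a, b are assumptions of the example) the upward-biased walk, a submartingale, plays the role of Xm:

```python
from itertools import product

def upcrossings(path, a, b):
    """Number of completed upcrossings of [a, b] along the path."""
    count, below = 0, False
    for x in path:
        if x <= a:
            below = True
        elif x >= b and below:
            count += 1
            below = False
    return count

# Exact check of Theorem 5.2.7 for the upward-biased walk
# (P(step = +1) = 0.6, a submartingale), with a = -1, b = 1, n = 8.
p_up, n, a, b = 0.6, 8, -1, 1
e_un = e_final = 0.0
for steps in product([1, -1], repeat=n):
    prob, path = 1.0, [0]
    for s in steps:
        prob *= p_up if s == 1 else 1 - p_up
        path.append(path[-1] + s)
    e_un += prob * upcrossings(path, a, b)
    e_final += prob * max(path[-1] - a, 0)  # (X_n - a)^+
e_x0 = max(0 - a, 0)                        # (X_0 - a)^+ with X_0 = 0

print(round((b - a) * e_un, 4), round(e_final - e_x0, 4))
```

The first printed number, (b − a)E(Un), never exceeds the second, E((Xn − a)+) − E((X0 − a)+), as the theorem guarantees.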
Upcrossing Inequality
Proof: Notice that, for φ convex and increasing,

E(φ(Xn+1)|Fn) ≥ φ(E(Xn+1|Fn)) ≥ φ(Xn),

where the first inequality is Jensen's and the second uses that Xm is a submartingale and φ is increasing. Hence φ(Xn) is also a submartingale, and applying this with φ(x) = max(x, a) shows that

Ym := a + (Xm − a)+

is a submartingale.
Upcrossing Inequality
On one hand,

(H · Y)n = ∑_{m=1}^{n} Hm(Ym − Ym−1)
= ∑_{k=1}^{Un} ∑_{m=N_{2k−1}+1}^{N_{2k}} (Ym − Ym−1) + ∑_{m=n∧N_{2Un+1}+1}^{n} (Ym − Ym−1)
= ∑_{k=1}^{Un} (Y_{N_{2k}} − Y_{N_{2k−1}}) + (Yn − Y_{n∧N_{2Un+1}})
≥ Un(b − a) + (Yn − Y_{n∧N_{2Un+1}}) ≥ Un(b − a)

since Yn − Y_{n∧N_{2Un+1}} ≥ 0. It follows that

E((H · Y)n) ≥ (b − a) E(Un).
Upcrossing Inequality
On the other hand, let Km = 1 − Hm. It is clear that

Yn − Y0 = ∑_{m=1}^{n} (Ym − Ym−1) = (H · Y)n + (K · Y)n.

Since Hm, Km are predictable and Ym is a submartingale, (K · Y)n is also a submartingale. Thus,

E((K · Y)n) ≥ E((K · Y)0) = 0.

It follows that

E((H · Y)n) = E(Yn − Y0) − E((K · Y)n) ≤ E(Yn − Y0) = E((Xn − a)+) − E((X0 − a)+).
Upcrossing Inequality
We thus arrive at the desired result,
(b − a)E(Un) ≤ E((H · Y )n) ≤ E((Xn − a)+)− E((X0 − a)+).
Almost Sure Convergence
Theorem (5.2.8 Martingale Convergence Theorem)
If {Xn}n≥0 is a submartingale with sup_n E(Xn+) < ∞, then Xn −→ X a.s., with E|X| < ∞.
Theorem (5.2.9)
If Xn ≥ 0 is a supermartingale then Xn −→ X a.s. and E(X ) ≤ E(X0).
The above theorems do not guarantee convergence in Lp.
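The caveat about Lp convergence can be made concrete with a standard example (illustrative Python sketch; the sample sizes are arbitrary choices). Take Xn = U1 · · · Un with Ui iid Uniform(0, 2): a nonnegative martingale with E(Xn) = 1 for every n. Since E(log Ui) = log 2 − 1 < 0, we have Xn → 0 a.s., so the a.s. limit X = 0 satisfies E(X) = 0 ≠ 1 = lim E(Xn): convergence fails in L1.

```python
import random

random.seed(0)

# X_n = U_1 * ... * U_n, U_i iid Uniform(0, 2): a nonnegative martingale
# with E(X_n) = 1 for all n, yet log X_n drifts to -infinity
# (E log U_i = log 2 - 1 < 0), so X_n -> 0 almost surely.
def final_value(n):
    x = 1.0
    for _ in range(n):
        x *= random.uniform(0.0, 2.0)
    return x

finals = [final_value(200) for _ in range(1000)]
frac_tiny = sum(f < 1e-6 for f in finals) / len(finals)
print(frac_tiny)  # close to 1: nearly every path has collapsed toward 0
```

The expectation 1 is carried by rare, enormous paths, which is exactly the mass escaping that blocks L1 convergence.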
Almost Sure Convergence
Theorem (5.2.8 Martingale Convergence Theorem)
If {Xn}n≥0 is a submartingale with sup_n E(Xn+) < ∞, then Xn −→ X a.s., with E|X| < ∞.
Proof: We begin by showing the number of upcrossings is finite. Since (X − a)+ ≤ X+ + |a|, the Upcrossing Inequality gives

(b − a) E(Un) ≤ E((Xn − a)+) − E((X0 − a)+) ≤ E(Xn+) + |a| − E((X0 − a)+) ≤ sup_n E(Xn+) + |a| < ∞.

Letting n → ∞ and using monotone convergence, the total number of upcrossings U = lim_n Un satisfies E(U) < ∞, hence U < ∞ a.s.
Almost Sure Convergence
U < ∞ a.s. for any pair a < b. It follows that

P( ∪_{a,b∈Q, a<b} {lim inf Xn < a < b < lim sup Xn} ) = 0,

as any event in this union having positive probability would imply U = ∞ with positive probability for that pair (a, b). Off this null set, lim inf Xn = lim sup Xn, so lim Xn exists a.s. Using Fatou's Lemma, we have that

E(X+) = E(lim Xn+) ≤ lim inf E(Xn+) ≤ sup_n E(Xn+) < ∞
and, because Xn is a submartingale, E(Xn) ≥ E(X0), so

E(X−) ≤ lim inf E(Xn−) = lim inf [E(Xn+) − E(Xn)] ≤ sup_n E(Xn+) − E(X0) < ∞,

from which it follows that E|X| < ∞ and, in particular, −∞ < E(X) < ∞.
Almost Sure Convergence
Theorem (5.2.9)
If Xn ≥ 0 is a supermartingale then Xn −→ X a.s. and E(X ) ≤ E(X0).
Proof: Let Yn = −Xn ≤ 0, which is a submartingale. Since Yn+ = 0, sup_n E(Yn+) = 0 < ∞, so by the previous theorem lim Yn exists a.s. and E|Y| < ∞. It follows that lim Xn also exists a.s. with E|X| < ∞. Finally, notice that by Fatou's Lemma and the supermartingale property E(Xn) ≤ E(X0),

E(X) ≤ lim inf E(Xn) ≤ E(X0)

and we have completed the proof.
The above theorems do not guarantee convergence in Lp.
Doob’s Decomposition
Theorem (5.2.10 Doob’s Decomposition)
Any submartingale {Xn}n≥0 can be uniquely written as Xn = Mn + An, where Mn is a martingale and An is a predictable increasing sequence with A0 = 0.

Doob's Decomposition Theorem allows one to reduce questions about submartingales to questions about martingales.
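For a concrete instance (an illustrative Python sketch, not from the notes): take Xn = Sn² for a fair ±1 walk Sn. Then E(Xn|Fn−1) = S²n−1 + 1, so the predictable increment is An − An−1 = 1, i.e. An = n, and Mn = Sn² − n is the martingale part.

```python
import random

random.seed(42)

# Doob decomposition of the submartingale X_n = S_n^2, S_n a fair +/-1 walk.
# The predictable increment A_n - A_{n-1} = E(X_n | F_{n-1}) - X_{n-1}
# is computed from the information at time n-1, BEFORE the step is drawn.
n = 10
s, A = 0, 0.0
M = []  # martingale part M_m = X_m - A_m
for m in range(1, n + 1):
    # E(X_m | F_{m-1}) - X_{m-1} = 0.5(s+1)^2 + 0.5(s-1)^2 - s^2 = 1
    A += 0.5 * (s + 1) ** 2 + 0.5 * (s - 1) ** 2 - s ** 2
    s += random.choice([1, -1])
    M.append(s ** 2 - A)

print(A)                    # -> 10.0, i.e. A_n = n
print(M[-1] + A == s ** 2)  # decomposition X_n = M_n + A_n  -> True
```

Note that A is accumulated before the coin is drawn at each step, which is exactly what An ∈ Fn−1 (predictability) means.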
Doob’s Decomposition
Proof: From

E(Xn|Fn−1) = E(Mn|Fn−1) + E(An|Fn−1) = Mn−1 + An = Xn−1 − An−1 + An

we conclude that

a) An − An−1 = E(Xn|Fn−1) − Xn−1,
b) Mn = Xn − An.

Setting A0 = 0 uniquely defines both An and Mn. From a) and the submartingale property it follows that An ≥ An−1, and that An ∈ Fn−1. It is easy to verify that Mn ∈ Fn. Finally,

E(Mn|Fn−1) = E(Xn|Fn−1) − E(An|Fn−1)
= E(Xn|Fn−1) − An
= Xn−1 − An−1 = Mn−1.
Doob’s Inequality
Theorem (5.4.1)
If Xn is a submartingale and N is a stopping time with P(N ≤ k) = 1, then
E(X0) ≤ E(XN) ≤ E(Xk)
Proof: Notice X_{N∧n} is a submartingale. It follows that

E(X0) = E(X_{N∧0}) ≤ E(X_{N∧k}) = E(XN)

(since N ≤ k w.p. 1), thus proving the first inequality. Now let Km = 1_{N≤m−1}; Km is predictable and (K · X)n is a submartingale. Since (K · X)k = Xk − X_{N∧k} = Xk − XN,

E(Xk) − E(XN) = E((K · X)k) ≥ E((K · X)0) = 0,

thus proving the second inequality.
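Theorem 5.4.1 can again be checked exactly by enumeration (an illustrative Python sketch; the walk, its bias, and the stopping rule are assumptions of the example): take the upward-biased walk, a submartingale, and the bounded stopping time N = min(first visit to +1, k).

```python
from itertools import product

# E(X_0) <= E(X_N) <= E(X_k) for a submartingale and a stopping time N <= k.
# X_n: +/-1 walk with P(step = +1) = 0.6; N = min(first time X = 1, k).
p_up, k = 0.6, 6

e_xN = e_xk = 0.0
for steps in product([1, -1], repeat=k):
    prob, path = 1.0, [0]
    for s in steps:
        prob *= p_up if s == 1 else 1 - p_up
        path.append(path[-1] + s)
    N = next((m for m, x in enumerate(path) if x == 1), k)
    e_xN += prob * path[N]
    e_xk += prob * path[-1]

print(0.0, round(e_xN, 4), round(e_xk, 4))  # E(X_0) <= E(X_N) <= E(X_k)
```

Here E(X0) = 0 and E(Xk) = k(2p − 1) = 1.2; the stopped expectation E(XN) falls in between, as the theorem requires.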
Doob’s Inequality
Theorem (5.4.2 Doob’s Inequality)
Let Xm be a submartingale and define, for λ > 0,

A = { max_{0≤m≤n} Xm+ ≥ λ }.

Then

λ P(A) ≤ E(Xn 1A) ≤ E(Xn+).
In words: a tail bound for the maximum of the n + 1 variables X0+, . . . , Xn+ follows from the expected value of the last one alone.
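A direct numerical check (illustrative Python, exact enumeration over all paths; the walk, n, and λ are assumptions of the example): for the upward-biased walk, both inequalities of Theorem 5.4.2 hold with λ = 2.

```python
from itertools import product

# Exact check of Doob's inequality for the upward-biased walk
# (P(step = +1) = 0.6, a submartingale), n = 8, lam = 2:
#   lam * P(A) <= E(X_n 1_A) <= E(X_n^+),  A = {max_{0<=m<=n} X_m^+ >= lam}
p_up, n, lam = 0.6, 8, 2
pA = e_xn_on_A = e_xn_plus = 0.0
for steps in product([1, -1], repeat=n):
    prob, path = 1.0, [0]
    for s in steps:
        prob *= p_up if s == 1 else 1 - p_up
        path.append(path[-1] + s)
    if max(max(x, 0) for x in path) >= lam:   # the event A occurred
        pA += prob
        e_xn_on_A += prob * path[-1]          # contributes to E(X_n 1_A)
    e_xn_plus += prob * max(path[-1], 0)      # contributes to E(X_n^+)

print(round(lam * pA, 4), round(e_xn_on_A, 4), round(e_xn_plus, 4))
```

The three printed numbers come out in nondecreasing order, matching the chain λP(A) ≤ E(Xn 1A) ≤ E(Xn+).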
Doob’s Inequality
Proof: Let N = inf{m : Xm ≥ λ or m = n}. On A we have XN ≥ λ, so

λ P(A) = ∫_A λ dP ≤ ∫_A XN dP = E(XN 1A).

Moreover, XN = Xn on Aᶜ, so the previous theorem's E(XN) ≤ E(Xn) gives E(XN 1A) ≤ E(Xn 1A). Thus
E(Xn 1A) = ∫_A Xn dP ≤ ∫_A Xn+ dP ≤ ∫_Ω Xn+ dP = E(Xn+)

and the result follows.