3. Brownian motion (February 16, 2012)
Introduction
The French mathematician and father of mathematical finance, Louis Bachelier, initiated the mathematical theory of Brownian motion in his thesis "Théorie de la Spéculation" (1900). Later, in the mid-seventies, the Bachelier theory was developed further by the American economists Fischer Black, Myron Scholes, and Robert Merton, with an almost indescribable influence on today's derivative pricing and international economy. Here Brownian motion is still very important, as it is in many other more recent financial models.

The purpose of this chapter is to discuss some points of the theory of Brownian motion which are especially important in mathematical finance. To begin with we show that Brownian motion exists and that the Brownian paths do not possess a derivative at any point of time. Furthermore, we use abstract Lebesgue integration to show the existence of a stochastic integral
$$\int_0^T f(t,\omega)\,dW(t)$$
with respect to Brownian motion $W$ when $f$ is progressively measurable and
$$E\left[\int_0^T f^2(t)\,dt\right] < \infty.$$
For future applications in finance, it is proved that every $\sigma(W)$-measurable and square integrable mean-zero random variable may be represented as a stochastic integral of the above type. We also indicate that the stochastic integral above may be defined under the weaker integrability condition
$$\int_0^T f^2(t)\,dt < \infty \quad \text{a.s.}$$
In mathematical finance a change of variables or change of measure is a standard procedure. We give a detailed proof of the so called Cameron-Martin theorem, which deals with a deterministic change of variables, and the more general result by Girsanov is proved in a special case.
Finally in this chapter, we study some diffusion processes other than Brownian motion and, in particular, the so called square root process $(X(t))_{t\ge 0}$, which solves the (Feller or CIR (after Cox, Ingersoll, and Ross)) equation
$$dX(t) = \kappa(\theta - X(t))\,dt + \sigma\sqrt{X(t)}\,dW(t) \qquad (X(0), \kappa, \theta, \sigma \text{ positive constants}).$$
We prove that $X(t) > 0$ if $\kappa\theta \ge \sigma^2/2$ and derive its moment-generating function.

The Itô formula and the Feynman-Kac formula will not be treated at all in this chapter. Instead we hope it will increase the understanding of these important results, which the reader has met in earlier and more calculus-inspired courses.
3.1 Definition of Brownian motion
Recall that a Gaussian stochastic process $(X(t))_{t\in T}$ is a stochastic process such that each linear combination
$$\sum_{k=0}^{n} a_k X(t_k) \qquad (t_0,\dots,t_n \in T;\ a_0,\dots,a_n \in \mathbf{R} \text{ and } n \in \mathbf{N})$$
is Gaussian. If, in addition, the expectation of each $X(t)$ equals zero, the process is said to be centred.

A centred Gaussian process $(W(t))_{t\ge 0}$, starting at $0$ at time $0$, and with the covariance function
$$E[W(s)W(t)] = \min(s,t)$$
is called a standard Brownian motion. In this case, $W(s) - W(t)$ is a centred Gaussian random variable with second order moment
$$E[(W(s)-W(t))^2] = E[W^2(s) - 2W(s)W(t) + W^2(t)] = s - 2\min(s,t) + t = |\,s-t\,|$$
and, thus, $W(s) - W(t) \in N(0, |\,s-t\,|)$.
Theorem 3.1.1. A Gaussian process $X = (X(t))_{t\ge 0}$ is a standard Brownian motion if and only if the following conditions are true:

(i) $X(0) = 0$;

(ii) $X(t) \in N(0,t)$, $t \ge 0$;

(iii) the increments of $X$ are independent, that is, for any finite times $0 \le t_0 \le t_1 \le \dots \le t_n$ the random variables
$$X(t_1)-X(t_0),\ X(t_2)-X(t_1),\ \dots,\ X(t_n)-X(t_{n-1})$$
are independent (or uncorrelated, since $X$ is Gaussian).
PROOF. First suppose $X = (X(t))_{t\ge 0}$ is a standard Brownian motion. Then (i) holds and $X$ is a Gaussian process such that $X(t) \in N(0,t)$. This proves (ii). To prove (iii), let $j < k < n$ to get
$$E[(X(t_{j+1})-X(t_j))(X(t_{k+1})-X(t_k))]$$
$$= E[X(t_{j+1})X(t_{k+1})] - E[X(t_{j+1})X(t_k)] - E[X(t_j)X(t_{k+1})] + E[X(t_j)X(t_k)]$$
$$= t_{j+1} - t_{j+1} - t_j + t_j = 0.$$
This proves (iii).

Conversely, assume $X = (X(t))_{t\ge 0}$ is a Gaussian process satisfying (i)-(iii). Then $X(0) = 0$ and if $0 \le s \le t$,
$$E[X(s)X(t)] = E[X(s)(X(t)-X(s)) + X^2(s)]$$
$$= E[X(s)(X(t)-X(s))] + E[X^2(s)]$$
$$= E[X(s)]E[X(t)-X(s)] + E[X^2(s)] = s.$$
From this it follows that $E[X(s)X(t)] = \min(s,t)$ and $X$ is a standard Brownian motion.
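Conditions (i)-(iii) of Theorem 3.1.1 also suggest a direct way to simulate a standard Brownian motion: cumulate independent $N(0,\Delta t)$ increments. The following Python sketch is only an illustration (function names, grid sizes, and the seed are our choices); it checks the covariance $E[W(s)W(t)] = \min(s,t)$ by Monte Carlo.

```python
import numpy as np

def simulate_bm(n_steps, T, n_paths, rng):
    """Simulate standard Brownian motion on [0, T] by cumulating
    independent N(0, dt) increments (conditions (i)-(iii) above)."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    # prepend W(0) = 0 to every path
    return np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])

rng = np.random.default_rng(0)
W = simulate_bm(n_steps=100, T=1.0, n_paths=100_000, rng=rng)
# covariance E[W(s)W(t)] should be min(s, t); here s = 0.3, t = 0.7
cov = np.mean(W[:, 30] * W[:, 70])
```

The empirical value of `cov` should be close to $\min(0.3, 0.7) = 0.3$, up to Monte Carlo error.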
A stock price process $S = (S(t))_{t\ge 0}$ is called a geometric Brownian motion if there exists a standard Brownian motion $W = (W(t))_{t\ge 0}$ such that
$$S(t) = S(0)e^{\alpha t + \sigma W(t)}, \quad t \ge 0,$$
for appropriate parameters $\alpha \in \mathbf{R}$ and $\sigma > 0$. Since
$$E[S(t)] = S(0)e^{(\alpha + \frac{\sigma^2}{2})t}$$
it is natural to introduce a new parameter $\mu$ defined by the equation
$$\mu = \alpha + \frac{\sigma^2}{2}$$
so that
$$S(t) = S(0)e^{(\mu - \frac{\sigma^2}{2})t + \sigma W(t)}.$$
The parameters $\mu$ and $\sigma$ are called the mean rate of return and the volatility of $S$, respectively. If the time unit is years and $\sigma = 0.25$, $S$ is said to have the volatility 25 %.

Note that if $t_0 \le t_1 \le \dots \le t_n$, then the simple returns
$$\frac{S(t_1)}{S(t_0)} - 1,\ \dots,\ \frac{S(t_n)}{S(t_{n-1})} - 1$$
are independent, as are the log-returns
$$\ln\frac{S(t_1)}{S(t_0)},\ \dots,\ \ln\frac{S(t_n)}{S(t_{n-1})}.$$
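Because the log-returns over disjoint periods are i.i.d. Gaussian, a geometric Brownian motion is easy to simulate by summing them. The sketch below is an illustration only; the parameter values $\mu$, $\sigma$, $S(0)$ are hypothetical choices of ours, and the check is the formula $E[S(T)] = S(0)e^{\mu T}$.

```python
import numpy as np

# Hypothetical parameters (mu, sigma, S0 are illustrative choices).
mu, sigma, S0, T, n, paths = 0.08, 0.25, 100.0, 1.0, 252, 50_000
rng = np.random.default_rng(1)
dt = T / n
# i.i.d. log-returns: ln(S(t_j)/S(t_{j-1})) = (mu - sigma^2/2) dt + sigma dW_j
log_ret = (mu - 0.5 * sigma**2) * dt \
    + sigma * rng.normal(0.0, np.sqrt(dt), size=(paths, n))
S_T = S0 * np.exp(log_ret.sum(axis=1))
mean_ST = S_T.mean()
expected = S0 * np.exp(mu * T)   # E[S(T)] = S(0) e^{mu T}
```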
Again let $W = (W(t))_{t\ge 0}$ be a standard Brownian motion and $a > 0$. The scaled process
$$X(t) = a^{-\frac{1}{2}}W(at), \quad t \ge 0,$$
is a standard Brownian motion since the process is centred, Gaussian, and
$$E[X(s)X(t)] = a^{-1}\min(as, at) = \min(s,t).$$
Furthermore,
$$Y(t) = W(t+a) - W(a), \quad t \ge 0,$$
is a standard Brownian motion since the process is centred, Gaussian, and
$$E[Y(s)Y(t)] = E[(W(s+a)-W(a))(W(t+a)-W(a))]$$
$$= E[W(s+a)W(t+a)] - E[W(s+a)W(a)] - E[W(a)W(t+a)] + E[W(a)W(a)]$$
$$= \min(s+a, t+a) - a - a + a = \min(s,t).$$
As a mnemonic rule we say that $W$ starts afresh at each point of time. Finally, the sign-changed process
$$Z(t) = -W(t), \quad t \ge 0,$$
is a standard Brownian motion.

Thus if a stock price process $(S(t))_{t\ge 0}$ is a geometric Brownian motion and $a > 0$, then $(S(at))_{t\ge 0}$ and $(S(a+t))_{t\ge 0}$ are geometric Brownian motions. Moreover, $(1/S(t))_{t\ge 0}$ is a geometric Brownian motion.

We finish this section with a useful definition. Suppose $I$ is a subinterval of $[0,\infty[$ and $0 \in I$. A centred Gaussian process $(W(t))_{t\in I}$, starting at the point $0$ at time $0$, and with the covariance function
$$E[W(s)W(t)] = \min(s,t)$$
is called a standard Brownian motion with the time set $I$.
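The scaling invariance above can be checked numerically: simulating $W$ on $[0, aT]$ and reading off $X(t) = a^{-1/2}W(at)$ should reproduce the covariance $\min(s,t)$. The grid and sample sizes below are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)
a, n, T, paths = 4.0, 200, 1.0, 50_000
dt = a * T / n                                   # simulate W on [0, aT]
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, n)), axis=1)
W = np.hstack([np.zeros((paths, 1)), W])

def X(t):
    """X(t) = a^{-1/2} W(at), read off the simulation grid."""
    return W[:, int(round(a * t / dt))] / np.sqrt(a)

cov_X = np.mean(X(0.25) * X(0.75))               # should approximate min(0.25, 0.75)
```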
3.2 Existence of Brownian motion with continuous sample paths
In this section we will first show the existence of Brownian motion with continuous paths as a consequence of the existence of Lebesgue measure. The so called Wiener measure is the distribution law of real-valued Brownian motion with continuous sample paths.

We first start with some Hilbert space theory. Suppose $H$ is a Hilbert space. Two vectors $f$ and $g$ in $H$ are said to be orthogonal (abbrev. $f \perp g$) if $\langle f,g\rangle = 0$. Let $n$ be a non-negative integer and set $I_n = \{0,\dots,n\}$. A sequence $(e_i)_{i\in I_n}$ of unit vectors in $H$ is said to be an orthonormal sequence if $e_i \perp e_j$ for all $i \ne j$, $i,j \in I_n$. If $(e_i)_{i\in I_n}$ is an orthonormal sequence and $f \in H$, then
$$f - \sum_{i\in I_n}\langle f, e_i\rangle e_i \perp e_j \quad \text{for all } j \in I_n,$$
and it follows that
$$\Big\| f - \sum_{i\in I_n}\langle f, e_i\rangle e_i \Big\|_2 \le \Big\| f - \sum_{i\in I_n}\lambda_i e_i \Big\|_2 \quad \text{for all real } \lambda_0,\dots,\lambda_n.$$
Moreover,
$$\| f \|_2^2 = \Big\| f - \sum_{i\in I_n}\langle f, e_i\rangle e_i \Big\|_2^2 + \Big\| \sum_{i\in I_n}\langle f, e_i\rangle e_i \Big\|_2^2$$
and we get
$$\sum_{i\in I_n}\langle f, e_i\rangle^2 \le \| f \|_2^2.$$

If $H$ is of finite dimension we say that $(e_i)_{i\in I_n}$ is an orthonormal basis for $H$ if it is an orthonormal sequence and
$$f = \sum_{i\in I_n}\langle f, e_i\rangle e_i \quad \text{for every } f \in H.$$

A sequence $(e_i)_{i=0}^{\infty}$ in $H$ is said to be an orthonormal sequence if $(e_i)_{i\in I_n}$ is an orthonormal sequence for each non-negative integer $n$. In this case, for each $f \in H$,
$$\sum_{i=0}^{\infty}\langle f, e_i\rangle^2 \le \| f \|_2^2$$
and the series
$$\sum_{i=0}^{\infty}\langle f, e_i\rangle e_i$$
converges since the sequence
$$\left(\sum_{i=0}^{n}\langle f, e_i\rangle e_i\right)_{n=0}^{\infty}$$
of partial sums is a Cauchy sequence in $H$. The sequence $(e_i)_{i=0}^{\infty}$ is said to be an orthonormal basis for $H$ if it is an orthonormal sequence and
$$f = \sum_{i=0}^{\infty}\langle f, e_i\rangle e_i \quad \text{for every } f \in H.$$
Theorem 3.2.1. An orthonormal sequence $(e_i)_{i=0}^{\infty}$ in $H$ is an orthonormal basis for $H$ if
$$(\langle f, e_i\rangle = 0 \text{ for every } i \in \mathbf{N}) \Rightarrow f = 0.$$

PROOF. Let $f \in H$ and set
$$g = f - \sum_{i=0}^{\infty}\langle f, e_i\rangle e_i.$$
Then, for any $j \in \mathbf{N}$,
$$\langle g, e_j\rangle = \Big\langle f - \sum_{i=0}^{\infty}\langle f, e_i\rangle e_i,\ e_j\Big\rangle = \langle f, e_j\rangle - \sum_{i=0}^{\infty}\langle f, e_i\rangle\langle e_i, e_j\rangle = \langle f, e_j\rangle - \langle f, e_j\rangle = 0.$$
Thus $g = 0$, or
$$f = \sum_{i=0}^{\infty}\langle f, e_i\rangle e_i.$$
The theorem is proved.
In order to show the existence of Brownian motion with continuous sample paths we next construct an appropriate orthonormal basis of $L^2(\lambda)$, where $\lambda$ is Lebesgue measure on the unit interval. Set
$$v(x) = \chi_{[0,\frac{1}{2}[}(x) - \chi_{[\frac{1}{2},1]}(x), \quad x \in \mathbf{R}.$$
Moreover, define $h_{00}(x) = 1$, $0 \le x \le 1$, and for each $n \ge 1$ and $j = 1,\dots,2^{n-1}$,
$$h_{jn}(x) = 2^{\frac{n-1}{2}} v(2^{n-1}x - j + 1), \quad 0 \le x \le 1.$$
Stated otherwise, we have for each $n \ge 1$ and $j = 1,\dots,2^{n-1}$
$$h_{jn}(x) = \begin{cases} 2^{\frac{n-1}{2}}, & \frac{j-1}{2^{n-1}} \le x < \frac{j-\frac{1}{2}}{2^{n-1}}, \\[4pt] -2^{\frac{n-1}{2}}, & \frac{j-\frac{1}{2}}{2^{n-1}} \le x \le \frac{j}{2^{n-1}}, \\[4pt] 0, & \text{elsewhere in } [0,1]. \end{cases}$$
Drawing a figure it is simple to see that the sequence $h_{00}, h_{jn}$, $j = 1,\dots,2^{n-1}$, $n \ge 1$, is orthonormal in $L^2(\lambda)$. We will prove that the same sequence constitutes an orthonormal basis for $L^2(\lambda)$. To this end, suppose $f \in L^2(\lambda)$ is orthogonal to each of the functions $h_{00}, h_{jn}$, $j = 1,\dots,2^{n-1}$, $n \ge 1$. Then for each $n \ge 1$ and $j = 1,\dots,2^{n-1}$
$$\int_{\frac{j-1}{2^{n-1}}}^{\frac{j-\frac{1}{2}}{2^{n-1}}} f\,d\lambda = \int_{\frac{j-\frac{1}{2}}{2^{n-1}}}^{\frac{j}{2^{n-1}}} f\,d\lambda$$
and, in particular,
$$\int_{\frac{j-1}{2^{n-1}}}^{\frac{j}{2^{n-1}}} f\,d\lambda = \frac{1}{2^{n-1}}\int_0^1 f\,d\lambda = 0$$
since
$$\int_0^1 f\,d\lambda = \int_0^1 f h_{00}\,d\lambda = 0.$$
Thus
$$\int_{\frac{j}{2^{n-1}}}^{\frac{k}{2^{n-1}}} f\,d\lambda = 0, \quad 1 \le j \le k \le 2^{n-1},$$
and we conclude that
$$\int_0^1 \chi_{[a,b]} f\,d\lambda = \int_a^b f\,d\lambda = 0, \quad 0 \le a \le b \le 1.$$
Accordingly, $f = 0$ and the sequence
$$(h_k)_{k=0}^{\infty} = (h_{00}, h_{11}, h_{12}, h_{22}, h_{13}, h_{23}, h_{33}, h_{43}, \dots)$$
is an orthonormal basis for $L^2(\lambda)$, called the Haar basis.

Let $0 \le t \le 1$ and define for fixed $k \in \mathbf{N}$
$$a_k(t) = \int_0^1 \chi_{[0,t]}(x) h_k(x)\,dx = \int_0^t h_k\,d\lambda$$
so that
$$\chi_{[0,t]} = \sum_{k=0}^{\infty} a_k(t) h_k \quad \text{in } L^2(\lambda).$$
Now, if $0 \le s, t \le 1$,
$$\min(s,t) = \int_0^1 \chi_{[0,s]}(x)\chi_{[0,t]}(x)\,dx = \Big\langle \sum_{k=0}^{\infty} a_k(s)h_k,\ \chi_{[0,t]}\Big\rangle = \sum_{k=0}^{\infty} a_k(s)\langle h_k, \chi_{[0,t]}\rangle = \sum_{k=0}^{\infty} a_k(s)a_k(t).$$
Note that
$$t = \sum_{k=0}^{\infty} a_k^2(t).$$

From now on suppose $(G_k)_{k=0}^{\infty}$ is an i.i.d. sequence where each $G_k \in N(0,1)$, and introduce the random series
$$\sum_{k=0}^{\infty} a_k(t)G_k,$$
which converges in $L^2(P)$ (and a.s., see Corollary 1.4.1). Its sum defines a Gaussian random variable denoted by $W(t)$, and the process $(W(t))_{0\le t\le 1}$ is a centred Gaussian stochastic process with the covariance
$$E[W(s)W(t)] = \min(s,t).$$
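The Haar-series construction above can be sketched numerically: approximate the coefficient functions $a_k(t) = \int_0^t h_k\,d\lambda$ on a grid, draw the Gaussians $G_k$, and form the truncated series. The truncation level and grid below are our choices; at the dyadic point $t = \tfrac{1}{2}$ the truncated series already has the exact variance $\tfrac{1}{2}$, which the test checks.

```python
import numpy as np

def haar_schauder_bm(levels, n_grid, n_paths, rng):
    """Approximate W(t) on [0,1] by the truncated series sum_k a_k(t) G_k,
    where a_k(t) = int_0^t h_k dlambda and (h_k) is the Haar basis."""
    x = np.linspace(0.0, 1.0, n_grid + 1)
    dx = 1.0 / n_grid
    mids = (x[:-1] + x[1:]) / 2          # midpoints used to sample each h_k
    hs = [np.ones(n_grid)]               # h_00
    for n in range(1, levels + 1):
        scale = 2 ** ((n - 1) / 2)
        for j in range(1, 2 ** (n - 1) + 1):
            lo = (j - 1) / 2 ** (n - 1)
            mid = (j - 0.5) / 2 ** (n - 1)
            hi = j / 2 ** (n - 1)
            h = np.where((mids >= lo) & (mids < mid), scale,
                np.where((mids >= mid) & (mids < hi), -scale, 0.0))
            hs.append(h)
    H = np.array(hs)                                        # (K, n_grid)
    # a_k on the grid: cumulative integrals of the Haar functions
    A = np.concatenate([np.zeros((H.shape[0], 1)),
                        np.cumsum(H, axis=1) * dx], axis=1)  # (K, n_grid+1)
    G = rng.normal(size=(n_paths, H.shape[0]))
    return x, G @ A                                          # truncated paths

rng = np.random.default_rng(2)
x, W = haar_schauder_bm(levels=8, n_grid=256, n_paths=20_000, rng=rng)
var_half = W[:, 128].var()   # t = 0.5; only a_00 and a_11 are nonzero there
```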
Recall that
$$(h_{00}, h_{11}, h_{12}, h_{22}, h_{13}, h_{23}, h_{33}, h_{43}, \dots) = (h_k)_{k=0}^{\infty}.$$
We define
$$(a_{00}, a_{11}, a_{12}, a_{22}, a_{13}, a_{23}, a_{33}, a_{43}, \dots) = (a_k)_{k=0}^{\infty}$$
and
$$(G_{00}, G_{11}, G_{12}, G_{22}, G_{13}, G_{23}, G_{33}, G_{43}, \dots) = (G_k)_{k=0}^{\infty}.$$
It is important to note that for fixed $n$,
$$a_{jn}(t) = \int_0^1 \chi_{[0,t]}(x)h_{jn}(x)\,dx \ne 0 \quad \text{for at most one } j.$$
Set
$$U_0(t) = a_{00}(t)G_{00}$$
and
$$U_n(t) = \sum_{j=1}^{2^{n-1}} a_{jn}(t)G_{jn}, \quad n \in \mathbf{N}_+.$$
We know that
$$W(t) = \sum_{n=0}^{\infty} U_n(t) \quad \text{in } L^2(P)$$
for fixed $t$.

Let $C[0,1]$ be equipped with the metric
$$d(f,g) = \| f - g \|_{\infty},$$
where $\| f \|_{\infty} = \max_{0\le t\le 1} | f(t) |$. Recall that every $f \in C[0,1]$ is uniformly continuous and, as $\mathbf{R}$ is separable, it follows that the Banach space $C[0,1]$ is separable. Finally, if $f_n \in C[0,1]$, $n \in \mathbf{N}$, and
$$\sum_{n=0}^{\infty} \| f_n \|_{\infty} < \infty,$$
the series
$$\sum_{n=0}^{\infty} f_n$$
converges since the partial sums
$$s_n = \sum_{k=0}^{n} f_k, \quad n \in \mathbf{N},$$
form a Cauchy sequence.
We now define
$$\Omega^* = \Big\{\omega \in \Omega;\ \sum_{n=0}^{\infty} \| U_n \|_{\infty} < \infty\Big\}.$$
Here $\Omega^* \in \mathcal{F}$ since
$$\| U_n \|_{\infty} = \sup_{0\le t\le 1,\ t\in \mathbf{Q}} | U_n(t) |$$
for each $n$. Next we prove that $\Omega \setminus \Omega^*$ is a null set.

To this end let $n \ge 1$ and note that
$$P\big[\| U_n \|_{\infty} > 2^{-\frac{n}{4}}\big] \le P\Big[\max_{1\le j\le 2^{n-1}} \big(\| a_{jn} \|_{\infty} | G_{jn} |\big) > 2^{-\frac{n}{4}}\Big].$$
But
$$\| a_{jn} \|_{\infty} = \frac{1}{2^{\frac{n+1}{2}}}$$
and, hence,
$$P\big[\| U_n \|_{\infty} > 2^{-\frac{n}{4}}\big] \le 2^{n-1} P\big[| G_{00} | > 2^{\frac{n}{4}+\frac{1}{2}}\big].$$
Since
$$x \ge 1 \Rightarrow P[| G_{00} | \ge x] \le 2\int_x^{\infty} \frac{y e^{-y^2/2}\,dy}{x\sqrt{2\pi}} \le e^{-x^2/2}$$
we get
$$P\big[\| U_n \|_{\infty} > 2^{-\frac{n}{4}}\big] \le 2^{n-1} e^{-2^{n/2}}$$
and conclude that
$$E\Big[\sum_{n=0}^{\infty} 1_{[\| U_n \|_{\infty} > 2^{-\frac{n}{4}}]}\Big] = \sum_{n=0}^{\infty} P\big[\| U_n \|_{\infty} > 2^{-\frac{n}{4}}\big] < \infty.$$
From this and the Beppo Levi theorem (or the Borel-Cantelli lemma), $P[\Omega^*] = 1$. The trajectory $t \to W(t,\omega)$, $0 \le t \le 1$, is continuous for every $\omega \in \Omega^*$. Without loss of generality, from now on we can therefore assume that all trajectories of $W$ are continuous (by replacing $\Omega$ with $\Omega^*$ if necessary).

Finally, let $W_1$ and $W_2$ be independent Brownian motions, both with time set $[0,1]$ and continuous sample paths, and define
$$W(t) = \begin{cases} W_1(t), & 0 \le t \le 1, \\ W_1(1) + tW_2(\frac{1}{t}) - W_2(1), & t > 1. \end{cases}$$
It is readily seen that $W = (W(t))_{t\ge 0}$ is a standard Brownian motion with continuous sample paths and time set $[0,\infty[$.

From now on it is always assumed that we consider Brownian motions with continuous sample paths and, if not otherwise stated, $W$ denotes such a process. Moreover, without loss of generality, it will always be assumed that the underlying probability space is complete.
Exercises
1. Suppose $0 \le t_1 < \dots < t_n \le T < \infty$ and let $I_1,\dots,I_n$ be open subintervals of the real line. The set
$$S(t_1,\dots,t_n; I_1,\dots,I_n) = \{x \in C[0,T];\ x(t_k) \in I_k,\ k = 1,\dots,n\}$$
is called an open $n$-cell in $C[0,T]$. The $\sigma$-algebra generated by all open cells in $C[0,T]$ is denoted by $\mathcal{C}$. Prove that
$$\mathcal{C} = \mathcal{B}(C[0,T]).$$
(The construction above shows that the map
$$W : \Omega \to C[0,T],$$
which maps $\omega$ to the trajectory
$$t \to W(t,\omega), \quad 0 \le t \le T,$$
is $(\mathcal{F},\mathcal{C})$-measurable. The image measure $P_W$ is called Wiener measure in $C[0,T]$.)
3.3 Non-differentiability of Brownian paths
In a mathematical model with continuous time a stock price process $S = (S(t))_{t\ge 0}$ should not have a derivative at any point of time and, as will shortly be proved, a log-Brownian stock price fulfils this requirement.
Theorem 3.3.1. The function $t \to W(t)$, $t \ge 0$, is a.s. not differentiable at any point $t$.

PROOF. We may restrict ourselves to the unit interval $0 \le t \le 1$. Let $c, \varepsilon > 0$ and denote by $B(c,\varepsilon)$ the set of all $\omega \in \Omega$ such that
$$| W(t) - W(s) | \le c\,| t - s | \ \text{ if } t \in [s-\varepsilon, s+\varepsilon] \cap [0,1]$$
for some $s \in [0,1]$. If the set
$$\bigcup_{j=1}^{\infty}\bigcup_{k=1}^{\infty} B\big(j, \tfrac{1}{k}\big)$$
is of probability zero we are done.

From now on let $c, \varepsilon > 0$ be fixed. It is enough to prove $P[B(c,\varepsilon)] = 0$. To this end set
$$X_{n,k} = \max_{k\le j < k+3} \big| W\big(\tfrac{j+1}{n}\big) - W\big(\tfrac{j}{n}\big) \big|$$
for each integer $n > 3$ and $k \in \{0,\dots,n-3\}$. Furthermore, let $n > 3$ be so large that
$$\frac{3}{n} \le \varepsilon.$$
We claim that
$$B(c,\varepsilon) \subseteq \Big\{\min_{0\le k\le n-3} X_{n,k} \le \frac{6c}{n}\Big\}.$$
To show this claim, note that to every $\omega \in B(c,\varepsilon)$ there exists an $s \in [0,1]$ such that
$$| W(t) - W(s) | \le c\,| t - s | \ \text{ if } t \in [s-\varepsilon, s+\varepsilon] \cap [0,1],$$
and choose $k \in \{0,\dots,n-3\}$ such that
$$s \in \Big[\frac{k}{n},\ \frac{k}{n} + \frac{3}{n}\Big].$$
If $k \le j < k+3$,
$$\big| W\big(\tfrac{j+1}{n}\big) - W\big(\tfrac{j}{n}\big) \big| \le \big| W\big(\tfrac{j+1}{n}\big) - W(s) \big| + \big| W(s) - W\big(\tfrac{j}{n}\big) \big| \le \frac{6c}{n}$$
and, hence, $X_{n,k} \le \frac{6c}{n}$. Now
$$B(c,\varepsilon) \subseteq \Big\{\min_{0\le k\le n-3} X_{n,k} \le \frac{6c}{n}\Big\}$$
and it is enough to prove that
$$\lim_{n\to\infty} P\Big[\min_{0\le k\le n-3} X_{n,k} \le \frac{6c}{n}\Big] = 0.$$
But
$$P\Big[\min_{0\le k\le n-3} X_{n,k} \le \frac{6c}{n}\Big] \le \sum_{k=0}^{n-3} P\Big[X_{n,k} \le \frac{6c}{n}\Big]$$
$$= (n-2)P\Big[X_{n,0} \le \frac{6c}{n}\Big] \le nP\Big[X_{n,0} \le \frac{6c}{n}\Big] = n\Big(P\Big[\big| W\big(\tfrac{1}{n}\big) \big| \le \frac{6c}{n}\Big]\Big)^3 = n\Big(P\Big[| W(1) | \le \frac{6c}{\sqrt{n}}\Big]\Big)^3$$
$$\le n\Big(\frac{12c}{\sqrt{2\pi n}}\Big)^3,$$
where the right-hand side converges to zero as $n \to \infty$. The theorem is proved.
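A heuristic behind Theorem 3.3.1 is that the difference quotient $(W(t+h)-W(t))/h$ has mean absolute value $\sqrt{2/(\pi h)}$, which blows up as $h \to 0$. The short Python sketch below (sample sizes and $h$-values are our choices) illustrates this blow-up.

```python
import numpy as np

rng = np.random.default_rng(3)
quot = {}
for h in [1e-1, 1e-2, 1e-3]:
    # W(t+h) - W(t) ~ N(0, h), so E|increment/h| = sqrt(2/(pi*h))
    incr = rng.normal(0.0, np.sqrt(h), size=100_000)
    quot[h] = np.abs(incr / h).mean()    # mean absolute difference quotient
```

The values grow like $h^{-1/2}$, consistent with the non-existence of a derivative.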
3.4 Stochastic integrals
Suppose $W = (W(t))_{0\le t\le T}$ is a Brownian motion based on a complete probability space $(\Omega, \mathcal{F}, P)$ and let $\mathcal{F}(t)$, $0 \le t \le T$, be an increasing family of $\sigma$-algebras contained in $\mathcal{F}$ such that

(i) each $W(t)$ is $\mathcal{F}(t)$-measurable;

(ii) for $0 \le t \le u$ the increment $W(u) - W(t)$ is independent of $\mathcal{F}(t)$.
Under these assumptions the family $(\mathcal{F}(t))_{0\le t\le T}$ is said to be a filtration for $W$. Moreover, a stochastic process $X = (X(t))_{0\le t\le T}$ is said to be adapted if each $X(t)$ is $\mathcal{F}(t)$-measurable. An adapted process such that $X(t) \in L^1(P)$ and
$$0 \le s \le t \le T \Rightarrow E[X(t) \mid \mathcal{F}(s)] = X(s)$$
is called a martingale. If
$$0 \le s \le t \le T \Rightarrow E[X(t) \mid \mathcal{F}(s)] \le X(s)$$
the process $X$ is called a supermartingale. The minimum of two supermartingales is again a supermartingale. Moreover, if $X_n$, $n \in \mathbf{N}$, are non-negative supermartingales and
$$X(t) = \lim_{n\to\infty} X_n(t) \ \text{ a.s. for each } t,$$
then $X = (X(t))_{0\le t\le T}$ is a supermartingale. In fact, if $0 \le s \le t \le T$ and $A \in \mathcal{F}(s)$, by bounded convergence
$$E[\min(X(t),k)\chi_A] = \lim_{n\to\infty} E[\min(X_n(t),k)\chi_A]$$
$$\le \lim_{n\to\infty} E[\min(X_n(s),k)\chi_A] = E[\min(X(s),k)\chi_A]$$
and by letting $k \to \infty$ the claim follows by monotone convergence.

In this section the purpose is to introduce a so called stochastic integral
$$\int_0^T f(t)\,dW(t)$$
under appropriate conditions on $f = (f(t))_{0\le t<T}$. It will always be assumed that $f$ is adapted, but many more restrictions must be imposed.

A stochastic process $f = (f(t))_{0\le t\le T}$ is said to be simple if the following conditions hold:

(a) $f$ is adapted and there exists a $C \in \mathbf{R}$ such that $| f(t,\omega) | \le C$ for all $0 \le t \le T$ and $\omega \in \Omega$;

(b) there exists a partition $0 = t_0 < t_1 < \dots < t_n = T$ such that $f(t) = f(t_{j-1})$ if $t \in [t_{j-1}, t_j[$ for $j = 1,\dots,n$.
For a simple integrand $f$ we define the stochastic integral in the natural way,
$$\int_0^T f(t)\,dW(t) = \sum_{j=1}^{n} f(t_{j-1})(W(t_j) - W(t_{j-1})),$$
and have the so called Itô isometry
$$E\left[\Big(\int_0^T f(t)\,dW(t)\Big)^2\right] = E\left[\int_0^T f^2(t)\,dt\right].$$
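The Itô isometry can be checked by Monte Carlo. The sketch below (our illustration; grid and sample sizes are arbitrary choices) uses the adapted left-point integrand $f(t) = W(t_{j-1})$ on $[t_{j-1}, t_j[$ — not bounded, hence not simple in the strict sense above, but the isometry still holds for it.

```python
import numpy as np

rng = np.random.default_rng(4)
n, T, paths = 100, 1.0, 50_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
W = np.cumsum(dW, axis=1)
W_prev = np.hstack([np.zeros((paths, 1)), W[:, :-1]])  # f(t) = W(t_{j-1})
stoch_int = (W_prev * dW).sum(axis=1)   # sum f(t_{j-1})(W(t_j) - W(t_{j-1}))
lhs = (stoch_int ** 2).mean()           # E[(int f dW)^2]
rhs = ((W_prev ** 2).sum(axis=1) * dt).mean()   # E[int f^2 dt]
```

Both sides should be close to $\sum_{j} t_{j-1}\,\Delta t = (n-1)/(2n)$.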
In addition, the process
$$M_f = \Big(e^{\int_0^t f(s)\,dW(s) - \frac{1}{2}\int_0^t f^2(s)\,ds}\Big)_{0\le t\le T}$$
is a martingale. If we abide by the notation in (a) and (b) and let $A \in \mathcal{F}(t_{n-1})$, then
$$E[M_f(T)\chi_A] = E[E[M_f(T)\chi_A \mid \mathcal{F}(t_{n-1})]]$$
$$= E\Big[E\Big[e^{f(t_{n-1})(W(t_n)-W(t_{n-1})) - \frac{1}{2}f^2(t_{n-1})(t_n-t_{n-1})} \mid \mathcal{F}(t_{n-1})\Big]M_f(t_{n-1})\chi_A\Big] = E[M_f(t_{n-1})\chi_A],$$
or
$$E[M_f(T) \mid \mathcal{F}(t_{n-1})] = M_f(t_{n-1}).$$
By iterating this line of reasoning we conclude that $M_f$ is a martingale for every simple $f$.

Often it is important to have a stochastic integral under less restrictive conditions on the integrand $f$. A stochastic process $f = (f(t))_{0\le t<T}$ is said to be progressively measurable if for each $t \in [0,T]$, the restriction $f(s,\omega)$, $0 \le s \le t$, $\omega \in \Omega$, is $\mathcal{B}([0,t]) \otimes \mathcal{F}(t)$-measurable. If, in addition,
$$E\left[\int_0^T f^2(t)\,dt\right] < \infty,$$
the process $f$ is said to belong to the class $\mathbf{F}$.

Next suppose $f \in \mathbf{F}$ is bounded and $f(\cdot,\omega)$ continuous for every $\omega \in \Omega$. Set for each positive integer $n$,
$$f_n(t,\omega) = f\big(\tfrac{j-1}{n}T, \omega\big) \ \text{ if } \tfrac{j-1}{n}T \le t < \tfrac{j}{n}T, \quad j = 1,\dots,n,$$
and define $f_n(t,\omega)$ to be left-continuous at the point $t = T$. Then, if $\lambda$ stands for Lebesgue measure on $[0,T]$, by dominated convergence
$$f_n \to f \ \text{ in } L^2(\lambda \times P)$$
as $n \to \infty$. Moreover, by the Itô isometry
$$E\left[\Big(\int_0^T f_m(t)\,dW(t) - \int_0^T f_n(t)\,dW(t)\Big)^2\right] = E\left[\int_0^T (f_m(t) - f_n(t))^2\,dt\right]$$
and it follows that the sequence $(\int_0^T f_n(t)\,dW(t))_{n=1}^{\infty}$ converges in $L^2(P)$; the limit is denoted by $\int_0^T f(t)\,dW(t)$. Clearly, the Itô isometry holds for $f$.
In the next step we assume $f \in \mathbf{F}$ is bounded and define for each positive integer $n$,
$$f_n(t,\omega) = n\int_{(t-\frac{1}{n})^+}^{t} f(s,\omega)\,ds, \quad 0 \le t \le T.$$
The function $f_n(\cdot,\omega)$ is continuous. Moreover, by the Lebesgue differentiation theorem, for every fixed $\omega \in \Omega$,
$$\lim_{n\to\infty} f_n(\cdot,\omega) = f(\cdot,\omega) \ \text{ a.e. } [\lambda]$$
and hence, by dominated convergence,
$$\lim_{n\to\infty} E\left[\int_0^T (f_n(t)-f(t))^2\,dt\right] = 0.$$
As above, the Itô isometry implies that the sequence $(\int_0^T f_n(t)\,dW(t))_{n=1}^{\infty}$ converges in $L^2(P)$ and the limit is denoted by $\int_0^T f(t)\,dW(t)$. Furthermore, the Itô isometry holds for $f$.

The next step is simple. If $f \in \mathbf{F}$, define $f_n = f\chi_{[|f|\le n]}$ for every $n \in \mathbf{N}_+$ and use dominated convergence to conclude that
$$f_n \to f \ \text{ in } L^2(\lambda \times P) \ \text{ as } n \to \infty.$$
As above we can now define the stochastic integral $\int_0^T f(t)\,dW(t)$ such that the Itô isometry holds.

If $f$ is simple, the stochastic integral is defined path-wise and the map
$$t \to \int_0^t f(s)\,dW(s), \quad 0 \le t \le T,$$
is continuous and, in addition, it is readily seen to be a martingale. The definitions above imply the martingale property of the stochastic integral for every $f \in \mathbf{F}$. To prove continuity we need Doob's maximal inequality.

Suppose $f \in \mathbf{F}$ and choose simple functions $f_n$ such that
$$f_n \to f \ \text{ in } L^2(\lambda \times P)$$
as $n \to \infty$, and note that for each $0 \le t \le T$,
$$\int_0^t f_n(s)\,dW(s) \to \int_0^t f(s)\,dW(s) \ \text{ in } L^2(P)$$
as $n \to \infty$. Furthermore, choose integers $1 \le n_1 < n_2 < \dots$ such that
$$E\left[\int_0^T (f_{n_k}(t) - f_{n_{k+1}}(t))^2\,dt\right] \le 2^{-3k}.$$
If we note that the square of a bounded martingale in discrete time is a non-negative submartingale, the Doob maximal inequality and the Itô isometry yield
$$P\left[\sup_{0\le j\le 2^m} \Big|\int_0^{j/2^m} f_{n_k}(s)\,dW(s) - \int_0^{j/2^m} f_{n_{k+1}}(s)\,dW(s)\Big| \ge 2^{-k}\right]$$
$$= P\left[\sup_{0\le j\le 2^m} \Big|\int_0^{j/2^m} (f_{n_k}(s) - f_{n_{k+1}}(s))\,dW(s)\Big|^2 \ge 2^{-2k}\right]$$
$$\le 2^{2k} E\left[\int_0^T (f_{n_k}(s) - f_{n_{k+1}}(s))^2\,ds\right] \le 2^{-k}$$
and, hence,
$$P\left[\sup_{0\le t\le T} \Big|\int_0^t f_{n_k}(s)\,dW(s) - \int_0^t f_{n_{k+1}}(s)\,dW(s)\Big| \ge 2^{1-k}\right]$$
$$\le 2^{2k} E\left[\int_0^T (f_{n_k}(s) - f_{n_{k+1}}(s))^2\,ds\right] \le 2^{-k}.$$
Now the series
$$\sum_{k=1}^{\infty}\left(\int_0^{\cdot} f_{n_k}(s)\,dW(s) - \int_0^{\cdot} f_{n_{k+1}}(s)\,dW(s)\right)$$
converges absolutely in the Banach space $C[0,T]$ and is therefore convergent, with probability one. Accordingly,
$$\lim_{k\to\infty} \int_0^{\cdot} f_{n_k}(s)\,dW(s) =_{\text{def}} X$$
exists and is a continuous function with probability one. But for every $t$,
$$X(t) = \int_0^t f(s)\,dW(s) \ \text{ a.s.}$$
and we conclude that the process $(\int_0^t f(s)\,dW(s))_{0\le t\le T}$ possesses a continuous version.
Example 3.4.1. Consider the progressively measurable random function $f(t) = \int_0^t \cos s\,dW(s)$, $0 \le t \le T$. We want to compute $E[X^2]$ if
$$X = \int_0^T \left(f(t) + \int_0^t f(s)\,dW(s)\right)dW(t).$$
To this end, first note that
$$E[f^2(t)] = \int_0^t \cos^2 s\,ds = \left[\frac{1}{2}\Big(s + \frac{1}{2}\sin 2s\Big)\right]_{s=0}^{s=t} = \frac{1}{2}\Big(t + \frac{1}{2}\sin 2t\Big).$$
From this we conclude that $f \in \mathbf{F}$ since
$$E\left[\int_0^T f^2(t)\,dt\right] = \int_0^T E[f^2(t)]\,dt < \infty.$$
Thus the stochastic integral $\int_0^t f(s)\,dW(s)$ is well defined for every fixed $t$ and, by the Itô isometry,
$$E\left[\Big(\int_0^t f(s)\,dW(s)\Big)^2\right] = E\left[\int_0^t f^2(s)\,ds\right] = \int_0^t \frac{1}{2}\Big(s + \frac{1}{2}\sin 2s\Big)ds = \frac{1}{2}\Big(\frac{t^2}{2} - \frac{1}{4}\cos 2t + \frac{1}{4}\Big).$$
Moreover, $\int_0^{\cdot} f(s)\,dW(s) \in \mathbf{F}$ since
$$E\left[\int_0^T \Big(\int_0^t f(s)\,dW(s)\Big)^2 dt\right] = \int_0^T E\left[\Big(\int_0^t f(s)\,dW(s)\Big)^2\right]dt < \infty.$$
Accordingly, the stochastic integral $\int_0^T (\int_0^t f(s)\,dW(s))\,dW(t)$ is well defined and, by the Itô isometry,
$$E\left[\Big(\int_0^T \Big(\int_0^t f(s)\,dW(s)\Big)dW(t)\Big)^2\right] = E\left[\int_0^T \Big(\int_0^t f(s)\,dW(s)\Big)^2 dt\right]$$
$$= \int_0^T \frac{1}{2}\Big(\frac{t^2}{2} - \frac{1}{4}\cos 2t + \frac{1}{4}\Big)dt = \frac{1}{4}\Big(\frac{T^3}{3} - \frac{1}{4}\sin 2T + \frac{T}{2}\Big).$$
Moreover, by using the Itô isometry twice and that $f(s)$ is a mean-zero random variable, we have
$$E\left[\int_0^T f(t)\,dW(t)\int_0^T \Big(\int_0^t f(s)\,dW(s)\Big)dW(t)\right] = E\left[\int_0^T f(t)\Big(\int_0^t f(s)\,dW(s)\Big)dt\right]$$
$$= \int_0^T E\left[f(t)\int_0^t f(s)\,dW(s)\right]dt = \int_0^T E\left[\int_0^t \cos s\,dW(s)\int_0^t f(s)\,dW(s)\right]dt$$
$$= \int_0^T \Big(\int_0^t E[(\cos s)f(s)]\,ds\Big)dt = \int_0^T \Big(\int_0^t 0\,ds\Big)dt = 0.$$
Finally, noting that
$$X = \int_0^T f(t)\,dW(t) + \int_0^T \Big(\int_0^t f(s)\,dW(s)\Big)dW(t),$$
we have
$$E[X^2] = \frac{1}{2}\Big(\frac{T^2}{2} - \frac{1}{4}\cos 2T + \frac{1}{4}\Big) + \frac{1}{4}\Big(\frac{T^3}{3} - \frac{1}{4}\sin 2T + \frac{T}{2}\Big)$$
$$= \frac{T^3}{12} + \frac{T^2}{4} + \frac{T}{8} + \frac{1}{8} - \frac{1}{8}\cos 2T - \frac{1}{16}\sin 2T.$$
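The closed-form value of $E[X^2]$ can be checked by Monte Carlo, approximating the nested stochastic integrals by left-point Itô sums. The sketch below is only an illustration: step count, path count, and seed are our choices, and the time discretization introduces a small bias.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n, paths = 1.0, 200, 20_000
dt = T / n
t = np.arange(n) * dt                  # left endpoints t_{j-1}
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
# f(t) = int_0^t cos(s) dW(s), approximated by its left-point Ito sum
f = np.hstack([np.zeros((paths, 1)), np.cumsum(np.cos(t) * dW, axis=1)])[:, :-1]
# g(t) = int_0^t f(s) dW(s)
g = np.hstack([np.zeros((paths, 1)), np.cumsum(f * dW, axis=1)])[:, :-1]
X = ((f + g) * dW).sum(axis=1)         # X = int_0^T (f + g) dW
EX2 = (X ** 2).mean()
closed_form = (T**3/12 + T**2/4 + T/8 + 1/8
               - np.cos(2*T)/8 - np.sin(2*T)/16)
```

For $T = 1$ the closed form is about $0.58$, and `EX2` should agree up to Monte Carlo and discretization error.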
Our next theorem shows that stochastic integration with respect to Brownian motion leads to several new martingales and supermartingales.

Theorem 3.4.1. Suppose $f \in \mathbf{F}$. Then $(\int_0^t f(s)\,dW(s))_{0\le t\le T}$ is a martingale and
$$M_f = \Big(e^{\int_0^t f(s)\,dW(s) - \frac{1}{2}\int_0^t f^2(s)\,ds}\Big)_{0\le t\le T}$$
is a supermartingale. In particular, if $C \in \mathbf{R}$ and $| f | \le C$, then
$$E\Big[e^{\int_0^t f(s)\,dW(s)}\Big] \le e^{\frac{C^2}{2}t}, \quad 0 \le t \le T,$$
and $M_f$ is a martingale.
PROOF. Suppose $f \in \mathbf{F}$ and choose simple functions $f_n$ such that
$$f_n \to f \ \text{ in } L^2(\lambda \times P)$$
as $n \to \infty$, and note that for each $0 \le t \le T$,
$$\int_0^t f_n(r)\,dW(r) \to \int_0^t f(r)\,dW(r) \ \text{ in } L^2(P)$$
as $n \to \infty$. But if $0 \le s \le t \le T$ and $A \in \mathcal{F}(s)$,
$$E\left[\int_0^t f_n(r)\,dW(r)\,\chi_A\right] = E\left[\int_0^s f_n(r)\,dW(r)\,\chi_A\right]$$
and by letting $n \to \infty$ we have
$$E\left[\int_0^t f(r)\,dW(r)\,\chi_A\right] = E\left[\int_0^s f(r)\,dW(r)\,\chi_A\right].$$
Thus $(\int_0^t f(s)\,dW(s))_{0\le t\le T}$ is a martingale.

By passing to a subsequence, if necessary, we may assume that
$$\int_0^T (f_n - f)^2\,d\lambda \to 0 \ \text{ a.s.}$$
and it follows that there exists a set $\Omega_0$ of measure one such that
$$\int_0^t f_n^2(s)\,ds \to \int_0^t f^2(s)\,ds \ \text{ on } \Omega_0 \ \text{ for every } t \in [0,T].$$
Moreover, we may assume
$$\int_0^{\cdot} f_n(s)\,dW(s) \to \int_0^{\cdot} f(s)\,dW(s) \ \text{ a.s.}$$
in $C[0,T]$ (to see this, use the Doob maximal theorem and the Itô isometry as above). Since the process
$$M_{f_n} = \Big(e^{\int_0^t f_n(s)\,dW(s) - \frac{1}{2}\int_0^t f_n^2(s)\,ds}\Big)_{0\le t\le T}$$
is a non-negative martingale, it follows that the limit process
$$\Big(e^{\int_0^t f(s)\,dW(s) - \frac{1}{2}\int_0^t f^2(s)\,ds}\Big)_{0\le t\le T}$$
is a supermartingale.

Next suppose $f \in \mathbf{F}$ and $| f | \le C$ for an appropriate real number $C$. As
$$E\Big[e^{\int_0^t f(s)\,dW(s) - \frac{1}{2}\int_0^t f^2(s)\,ds}\Big] \le 1,$$
we have
$$E\Big[e^{\int_0^t f(s)\,dW(s) - \frac{1}{2}C^2 t}\Big] \le 1$$
and we get
$$E\Big[e^{\int_0^t f(s)\,dW(s)}\Big] \le e^{\frac{C^2}{2}t}, \quad 0 \le t \le T.$$
Now we claim that the process $M_f$ is a martingale. In fact, by Itô's lemma,
$$dM_f(t) = f(t)M_f(t)\,dW(t).$$
Note here that
$$(f(t)M_f(t))_{0\le t\le T} \in \mathbf{F}.$$
Indeed, by using the first part of the theorem already proved,
$$E\left[\int_0^T \Big(f(t)e^{\int_0^t f(s)\,dW(s) - \frac{1}{2}\int_0^t f^2(s)\,ds}\Big)^2 dt\right] \le C^2 E\left[\int_0^T e^{2\int_0^t f(s)\,dW(s)}\,dt\right] \le C^2\int_0^T e^{2C^2 t}\,dt < \infty.$$
Hence
$$M_f(t) = 1 + \int_0^t f(s)M_f(s)\,dW(s)$$
and we are done.
Example 3.4.1. Suppose $f$ is simple and suppose $\varepsilon > 0$. We will show how it is possible to control the tail probability
$$P\left[\Big|\int_0^T f(t)\,dW(t)\Big| > \varepsilon\right]$$
by the tail probability of the random variable $\int_0^T f^2(t)\,dt$.

Given $N > 0$ we will show that
$$P\left[\Big|\int_0^T f(t)\,dW(t)\Big| > \varepsilon\right] \le P\left[\int_0^T f^2(t)\,dt > N\right] + \frac{N}{\varepsilon^2}.$$
To see this, suppose
$$0 = t_0 < t_1 < \dots < t_{n-1} < t_n = T$$
and define
$$f(t) = f(t_{j-1}), \quad t_{j-1} \le t < t_j, \ j = 1,\dots,n.$$
Moreover, let $m$ be the largest integer $m \le n$ such that
$$\int_0^{t_m} f^2(t)\,dt = \sum_{j=1}^{m} f^2(t_{j-1})(t_j - t_{j-1}) \le N.$$
Note that
$$[t_m \le t] \in \mathcal{F}_t.$$
Next we define
$$f_N(t) = \begin{cases} f(t), & \text{if } t < t_m, \\ 0, & \text{otherwise in } [0,T], \end{cases}$$
and have that $f_N \in \mathbf{F}$ is bounded and
$$\int_0^T f_N^2(t)\,dt = \sum_{j=1}^{m} f^2(t_{j-1})(t_j - t_{j-1}) \le N.$$
Thus
$$E\left[\Big(\int_0^T f_N(t)\,dW(t)\Big)^2\right] \le N$$
and, furthermore,
$$f(t) = f_N(t) \ \text{ for every } t < T \ \text{ if } \int_0^T f^2(t)\,dt \le N.$$
Now
$$P\left[\Big|\int_0^T f(t)\,dW(t)\Big| > \varepsilon\right] \le P\left[\Big|\int_0^T f_N(t)\,dW(t)\Big| > \varepsilon\right] + P\left[\int_0^T f^2(t)\,dt > N\right].$$
But
$$P\left[\Big|\int_0^T f_N(t)\,dW(t)\Big| > \varepsilon\right] \le \frac{1}{\varepsilon^2}E\left[\Big(\int_0^T f_N(t)\,dW(t)\Big)^2\right] \le \frac{N}{\varepsilon^2}$$
and we are done.
A progressively measurable function $f$ such that
$$\int_0^T f^2(t)\,dt < \infty \ \text{ a.s.}$$
is said to belong to the class $\mathbf{F}_{loc}$. It is possible to define the stochastic integral $\int_0^T f(t)\,dW(t)$ for each $f \in \mathbf{F}_{loc}$ such that the process
$$\Big(\int_0^t f(s)\,dW(s)\Big)_{0\le t\le T}$$
has a continuous version (in this context Example 3.4.1 is useful; see e.g. A. Friedman, Stochastic Differential Equations and Applications, Vol. 1, Academic Press 1975).

If $f$ is a smooth function and
$$f'_x(t, W(t)) \in \mathbf{F} \ \text{ (as a function of } (t,\omega)),$$
the Itô lemma gives
$$df(t,W(t)) = f'_t(t,W(t))\,dt + f'_x(t,W(t))\,dW(t) + \frac{1}{2}f''_{xx}(t,W(t))\,dt.$$
However, the same formula turns out to be true under the much weaker condition
$$f'_x(t,W(t)) \in \mathbf{F}_{loc} \ \text{ (as a function of } (t,\omega)).$$
If $f \in \mathbf{F}_{loc}$ and $\tau$ is any random variable with values in the interval $[0,T]$, we define
$$\int_0^{\tau} f(t)\,dW(t) = I(\tau),$$
where
$$I(t) = \int_0^t f(s)\,dW(s).$$
Furthermore, if $\tau$ is a stopping time, so that the level set $\{\tau \le t\}$ is $\mathcal{F}_t$-measurable for every $t \in [0,T]$, it is possible to show that
$$\int_0^{\tau} f(t)\,dW(t) = \int_0^T 1_{[t\le\tau]}f(t)\,dW(t).$$
Exercises
1. Show that the function $W(t,\omega)$, $0 \le t \le T$, $\omega \in \Omega$, is progressively measurable.

2. Suppose $0 \le t_0 \le a \le b \le T$ and $f(t,\omega) = 1_{[a,b]}(t)W(t_0,\omega)$, $0 \le t \le T$, $\omega \in \Omega$. Show that $f$ is progressively measurable.

3. Find $E[X^2]$ if
$$X = \int_0^T \Big(\int_0^t \sin(u+t)\,dW(u)\Big)dW(t).$$

4. Suppose $G \in N(0,1)$ and
$$H_n(x,y) = E[(x + i\sqrt{y}\,G)^n], \quad n \in \mathbf{N},$$
for every $x \in \mathbf{R}$ and $y \ge 0$. Show that $H_0(x,y) = 1$, $H_1(x,y) = x$, $H_2(x,y) = x^2 - y$, $H_3(x,y) = x^3 - 3xy$, and $H_4(x,y) = x^4 - 6x^2y + 3y^2$. In addition, show that
$$\frac{\partial}{\partial x}H_n(x,y) = nH_{n-1}(x,y), \quad n = 1,2,\dots,$$
$$\frac{\partial}{\partial y}H_n(x,y) + \frac{1}{2}\frac{\partial^2}{\partial x^2}H_n(x,y) = 0, \quad n = 2,3,\dots,$$
and
$$e^{\lambda x - \frac{1}{2}\lambda^2 y} = \sum_{n=0}^{\infty}\frac{\lambda^n}{n!}H_n(x,y), \quad \lambda \in \mathbf{R}.$$

5. Prove that
$$M_{\lambda}(t) = \sum_{n=0}^{\infty}\frac{\lambda^n}{n!}H_n(W(t),t), \quad \lambda > 0$$
(with $M_{\lambda}$ as in Theorem 3.4.1 for $f \equiv \lambda$).

6. Use the Itô lemma to show that
$$H_n(W(t),t) = n\int_0^t H_{n-1}(W(s),s)\,dW(s), \quad n = 1,2,\dots.$$
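The closed forms of $H_n$ in Exercise 4 can be checked directly from the defining expectation: expand $(x + i\sqrt{y}\,G)^n$ with the binomial theorem and use the Gaussian moments $E[G^k] = (k-1)!!$ for even $k$ and $0$ for odd $k$. The Python sketch below (an illustration; names are ours) does exactly that.

```python
import numpy as np
from math import comb

def gauss_moment(k):
    """E[G^k] for G ~ N(0,1): 0 for odd k, (k-1)!! for even k."""
    if k % 2:
        return 0
    m = 1
    for j in range(1, k, 2):
        m *= j
    return m

def H(n, x, y):
    """H_n(x, y) = E[(x + i*sqrt(y)*G)^n], via the binomial theorem."""
    total = 0
    for k in range(n + 1):
        total += comb(n, k) * x**(n - k) * (1j * np.sqrt(y))**k * gauss_moment(k)
    return total.real   # imaginary part vanishes: odd moments of G are 0

x, y = 1.3, 0.7
```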
3.5 Martingale representation
Theorem 3.5.1. (Itô representation, one dimension) Suppose $\mathcal{F}(t) = \sigma(W(s);\ s \le t)$ and $X \in L^2(\Omega,\mathcal{F}(T),P)$. Then there exists a $\sigma \in \mathbf{F}$ such that
$$X = E[X] + \int_0^T \sigma(t)\,dW(t).$$

PROOF. Suppose $f \in L^2(\lambda)$ and
$$M_f(t) = e^{\int_0^t f(s)\,dW(s) - \frac{1}{2}\int_0^t f^2(s)\,ds}, \quad t \in [0,T].$$
Recall that $\int_0^T f(t)\,dW(t) \in N(0, \int_0^T f^2(t)\,dt)$. By the Tonelli theorem
$$E\left[\int_0^T (f(t)M_f(t))^2\,dt\right] = \int_0^T f^2(t)E\Big[e^{2\int_0^t f(s)\,dW(s) - \int_0^t f^2(s)\,ds}\Big]dt$$
$$= \int_0^T f^2(t)e^{\int_0^t f^2(s)\,ds}\,dt \le e^{\int_0^T f^2(s)\,ds}\int_0^T f^2(t)\,dt < \infty$$
and it follows that $fM_f \in \mathbf{F}$. Moreover, the Itô lemma yields
$$dM_f(t) = f(t)M_f(t)\,dW(t)$$
and
$$M_f(T) = 1 + \int_0^T f(t)M_f(t)\,dW(t).$$
Set
$$L_0 = \big\{Y;\ Y \in L^2(\Omega,\mathcal{F}(T),P) \text{ and } E[Y] = 0\big\}$$
and
$$L_{int} = \Big\{Z;\ Z = \int_0^T \sigma(t)\,dW(t) \text{ where } \sigma \in \mathbf{F}\Big\}.$$
Here $L_0$ and $L_{int}$ are closed subspaces of $L^2(\Omega,\mathcal{F}(T),P)$ and
$$L_{int} \subseteq L_0.$$
Now suppose $Y \in L_0 \cap L_{int}^{\perp}$. If we can prove $Y = 0$, then $L_{int} = L_0$ and we are done.

To start with, $Y \perp M_f(T)$ for every $f \in L^2(\lambda)$. To define an appropriate $f$, choose $0 = t_0 < t_1 < \dots < t_n = T$ and $\lambda_1,\lambda_2,\dots,\lambda_n \in \mathbf{R}$ and define
$$f(t) = \lambda_k, \quad t \in [t_{k-1}, t_k[, \ k = 1,\dots,n.$$
Since
$$M_f(T) = e^{\lambda_1 W(t_1) + \lambda_2(W(t_2)-W(t_1)) + \dots + \lambda_n(W(t_n)-W(t_{n-1})) + a},$$
where
$$a = -\frac{1}{2}\int_0^T f^2(\tau)\,d\tau$$
is a constant,
$$E\big[Y e^{\lambda_1 W(t_1) + \lambda_2(W(t_2)-W(t_1)) + \dots + \lambda_n(W(t_n)-W(t_{n-1}))}\big] = 0.$$
Stated otherwise, with
$$Y_+ = \max(0,Y) \quad \text{and} \quad Y_- = \max(0,-Y)$$
we have
$$E\big[Y_+ e^{\lambda_1 W(t_1) + \dots + \lambda_n(W(t_n)-W(t_{n-1}))}\big] = E\big[Y_- e^{\lambda_1 W(t_1) + \dots + \lambda_n(W(t_n)-W(t_{n-1}))}\big].$$
Thus
$$E\big[Y_+ e^{s(\lambda_1 W(t_1) + \dots + \lambda_n(W(t_n)-W(t_{n-1})))}\big] = E\big[Y_- e^{s(\lambda_1 W(t_1) + \dots + \lambda_n(W(t_n)-W(t_{n-1})))}\big]$$
for every real $s$, and by analytic continuation
$$E\big[Y_+ e^{i(\lambda_1 W(t_1) + \dots + \lambda_n(W(t_n)-W(t_{n-1})))}\big] = E\big[Y_- e^{i(\lambda_1 W(t_1) + \dots + \lambda_n(W(t_n)-W(t_{n-1})))}\big].$$
Now the uniqueness theorem for the Fourier transform yields that
$$(Y_+P)\big((W(t_1), W(t_2)-W(t_1), \dots, W(t_n)-W(t_{n-1})) \in B\big) = (Y_-P)\big((W(t_1), W(t_2)-W(t_1), \dots, W(t_n)-W(t_{n-1})) \in B\big)$$
for every $B \in \mathcal{B}(\mathbf{R}^n)$. Since $\mathcal{F}(T) = \sigma(W(s);\ s \le T)$, we conclude that $Y_-P = Y_+P$, or $YP = 0$, that is $Y = 0$. This completes the proof of the theorem.
Corollary 3.5.1. If $M = (M(t))_{0\le t\le T}$ is a square integrable martingale with respect to the filtration $\mathcal{F}(t) = \sigma(W(s);\ s \le t)$, $0 \le t \le T$, there exists a $\sigma \in \mathbf{F}$ such that
$$M(t) = E[M(0)] + \int_0^t \sigma(s)\,dW(s), \quad 0 \le t \le T.$$
Exercises
1. Find the martingale representation of the random variable $| W(T) |$.
3.6 Cameron-Martin-Girsanov theory
If $\theta \in L^2(\lambda)$ we already know from the previous section that the process
$$M_{-\theta}(t) =_{\text{def}} Z_{\theta}(t) = e^{-\int_0^t \theta(s)\,dW(s) - \frac{1}{2}\int_0^t \theta^2(s)\,ds}, \quad 0 \le t \le T,$$
is a martingale. Moreover, we have

Theorem 3.6.1. (Cameron-Martin) If $\theta \in L^2(\lambda)$, the process $(Z_{\theta}(t))_{0\le t\le T}$ is a martingale and the process
$$\tilde{W}(t) = W(t) + \int_0^t \theta(s)\,ds, \quad 0 \le t \le T,$$
is a Brownian motion with respect to the probability measure
$$\tilde{P} = Z_{\theta}(T)P.$$
PROOF. For short, write $Z_{\theta} = Z$. Fix $0 \le s \le t \le T$ and $A \in \mathcal{F}(s)$. Now, for any real $\lambda$,
$$E\Big[e^{i\lambda(\tilde{W}(t)-\tilde{W}(s))}\chi_A Z(T)\Big] = E\Big[e^{i\lambda(W(t)-W(s)+\int_s^t \theta(r)\,dr)}\chi_A Z(T)\Big] = E\Big[e^{i\lambda(W(t)-W(s)+\int_s^t \theta(r)\,dr)}\chi_A Z(t)\Big]$$
$$= E\Big[E\Big[e^{i\lambda(W(t)-W(s)+\int_s^t \theta(r)\,dr) - \int_s^t \theta(r)\,dW(r) - \frac{1}{2}\int_s^t \theta^2(r)\,dr} \mid \mathcal{F}(s)\Big]\chi_A Z(s)\Big].$$
Here, as the random variable $i\lambda(W(t)-W(s)) - \int_s^t \theta(r)\,dW(r)$ is a complex linear combination of members of a Gaussian process, we get
$$E\Big[e^{i\lambda(W(t)-W(s)+\int_s^t \theta(r)\,dr) - \int_s^t \theta(r)\,dW(r) - \frac{1}{2}\int_s^t \theta^2(r)\,dr} \mid \mathcal{F}(s)\Big]$$
$$= E\Big[e^{i\lambda(W(t)-W(s)) - \int_s^t \theta(r)\,dW(r)} \mid \mathcal{F}(s)\Big]\,e^{i\lambda\int_s^t \theta(r)\,dr - \frac{1}{2}\int_s^t \theta^2(r)\,dr}$$
$$= e^{\frac{1}{2}E[(i\lambda(W(t)-W(s)) - \int_s^t \theta(r)\,dW(r))^2 \mid \mathcal{F}(s)]}\,e^{i\lambda\int_s^t \theta(r)\,dr - \frac{1}{2}\int_s^t \theta^2(r)\,dr}$$
$$= e^{-\frac{\lambda^2}{2}(t-s) - i\lambda\int_s^t \theta(r)\,dr + \frac{1}{2}\int_s^t \theta^2(r)\,dr}\,e^{i\lambda\int_s^t \theta(r)\,dr - \frac{1}{2}\int_s^t \theta^2(r)\,dr} = e^{-\frac{\lambda^2}{2}(t-s)}.$$
Thus
$$E\Big[e^{i\lambda(\tilde{W}(t)-\tilde{W}(s))}\chi_A Z(T)\Big] = E[\chi_A Z(s)]\,e^{-\frac{\lambda^2}{2}(t-s)}.$$
Now, if $0 \le t_0 \le \dots \le t_n$, we have
$$E\Big[e^{\sum_{k=1}^{n} i\lambda_k(\tilde{W}(t_k)-\tilde{W}(t_{k-1}))}Z(T)\Big] = E\Big[e^{\sum_{k=1}^{n-1} i\lambda_k(\tilde{W}(t_k)-\tilde{W}(t_{k-1}))}Z(t_{n-1})\Big]\,e^{-\frac{\lambda_n^2}{2}(t_n-t_{n-1})}$$
$$= \dots = E[Z(t_0)]\,e^{-\frac{1}{2}\sum_{k=1}^{n}\lambda_k^2(t_k-t_{k-1})} = e^{-\frac{1}{2}\sum_{k=1}^{n}\lambda_k^2(t_k-t_{k-1})}.$$
This proves the Cameron-Martin theorem.
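The change of measure can be illustrated numerically for a constant $\theta \equiv \gamma$ (our choice): weighting expectations by $Z(T)$ should make $\tilde{W}(T) = W(T) + \gamma T$ behave like $N(0,T)$. The Python sketch below is a Monte Carlo illustration with hypothetical parameter values.

```python
import numpy as np

rng = np.random.default_rng(6)
gamma, T, paths = 0.5, 1.0, 1_000_000
W_T = rng.normal(0.0, np.sqrt(T), size=paths)
# Cameron-Martin density for constant theta = gamma
Z_T = np.exp(-gamma * W_T - 0.5 * gamma**2 * T)
W_tilde = W_T + gamma * T                  # drifted endpoint
m1 = (Z_T * W_tilde).mean()                # E-tilde[W_tilde(T)], should be ~ 0
m2 = (Z_T * W_tilde**2).mean()             # E-tilde[W_tilde(T)^2], should be ~ T
```

Under the reweighted measure the drift disappears: the first moment is near $0$ and the second near $T$, and $E[Z(T)] = 1$ confirms that $\tilde{P}$ is a probability measure.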
Suppose the distribution measure of $W$ on the Banach space $C[0,T]$ is denoted by $\mu$ and let $\mu_a(A) = \mu(A-a)$ if $a \in C[0,T]$ and $A \in \mathcal{B}(C[0,T])$. The set $H$ of all absolutely continuous functions $f : [0,T] \to \mathbf{R}$ such that $f(0) = 0$ and $f' \in L^2(\lambda)$ is called the Cameron-Martin space of $\mu$, and Theorem 3.6.1 shows that
$$\mu_a << \mu \ \text{ if } a \in H.$$
It can be proved that
$$H = \{a \in C[0,T];\ \mu_a << \mu\}$$
(see e.g. H.-H. Kuo, Gaussian Measures in Banach Spaces, Springer (1975)). If $\bar{B}(a,r)$ denotes the closed ball in $C[0,T]$ with centre $a$ and radius $r$, we have
$$\lim_{r\to 0^+}\frac{\mu(\bar{B}(a,r))}{\mu(\bar{B}(0,r))} = \begin{cases} e^{-\frac{1}{2}\int_0^T (a'(t))^2\,dt}, & \text{if } a \in H, \\ 0, & \text{if } a \in C[0,T]\setminus H \end{cases}$$
(see my paper A note on Gauss measures which agree on small balls, Ann. H. Poincaré, section B, 13, 231-238, (1977)). Thus
$$H = \left\{a \in C[0,T];\ \lim_{r\to 0^+}\frac{\mu(\bar{B}(a,r))}{\mu(\bar{B}(0,r))} > 0\right\}.$$
The quantity $\mu(\bar{B}(0,r))$ can be computed explicitly using the separation of variables method (see Section 4.5).
Theorem 3.6.2. Suppose $\theta \in \mathbf{F}_{loc}$.

(a) The process
$$Z_{\theta}(t) = e^{-\int_0^t \theta(s)\,dW(s) - \frac{1}{2}\int_0^t \theta^2(s)\,ds}, \quad 0 \le t \le T,$$
is a martingale if and only if
$$E[Z_{\theta}(T)] = 1.$$

(b) (Novikov's condition) If
$$E\Big[e^{\frac{1}{2}\int_0^T \theta^2(s)\,ds}\Big] < \infty,$$
then $E[Z_{\theta}(T)] = 1$.

(c) (Girsanov's Theorem) If $E[Z_{\theta}(T)] = 1$, the process
$$\tilde{W}(t) = W(t) + \int_0^t \theta(s)\,ds, \quad 0 \le t \le T,$$
is a Brownian motion with respect to the probability measure
$$\tilde{P} = Z_{\theta}(T)P.$$
We will not prove Theorem 3.6.2 here (for a proof see J. M. Steele, Stochastic Calculus and Financial Applications, Springer 2000; for a slightly weaker result than part (b) see A. Friedman, Stochastic Differential Equations and Applications, Vol. 1, Academic Press 1975). Let us just remark that the case with a simple bounded $\theta$ follows exactly as the Cameron-Martin theorem.
Example 3.6.1. It is possible to find a $\theta \in \mathbf{F}_{loc}$ such that $Z_{\theta}$ is not a martingale. A well known example is the following. Set
$$\tau = \min\big\{t \in [0,1];\ W^2(t) = 1-t\big\}$$
and
$$\theta(t) = \frac{2W(t)}{(1-t)^2}\,1_{[t\le\tau]}.$$
Then
$$P[0 < \tau < 1] = 1$$
and
$$\int_0^1 \theta^2(t)\,dt = 4\int_0^{\tau}\frac{W^2(t)}{(1-t)^4}\,dt < \infty \ \text{ a.s.}$$
Now by the Itô lemma, for $t < 1$,
$$d\frac{W^2(t)}{(1-t)^2} = \frac{2W^2(t)}{(1-t)^3}\,dt + \frac{2W(t)}{(1-t)^2}\,dW(t) + \frac{1}{(1-t)^2}\,dt,$$
from which
$$-\int_0^1 \theta(t)\,dW(t) - \frac{1}{2}\int_0^1 \theta^2(t)\,dt$$
$$= -\int_0^1 \frac{2W(t)}{(1-t)^2}\,1_{[t\le\tau]}\,dW(t) - \int_0^1 \frac{2W^2(t)}{(1-t)^4}\,1_{[t\le\tau]}\,dt$$
$$= -\int_0^{\tau} d\frac{W^2(t)}{(1-t)^2} + \int_0^{\tau}\frac{2W^2(t)}{(1-t)^3}\,dt + \int_0^{\tau}\frac{1}{(1-t)^2}\,dt - \int_0^{\tau}\frac{2W^2(t)}{(1-t)^4}\,dt$$
$$= -\frac{W^2(\tau)}{(1-\tau)^2} + \int_0^{\tau}\left(2W^2(t)\Big(\frac{1}{(1-t)^3} - \frac{1}{(1-t)^4}\Big) + \frac{1}{(1-t)^2}\right)dt$$
$$\le -\frac{1}{1-\tau} + \int_0^{\tau}\frac{1}{(1-t)^2}\,dt = -1.$$
Hence
$$E[Z_{\theta}(1)] \le e^{-1} < 1.$$
Example 3.6.2. Let $C$ be the Banach space of all real-valued continuous functions on the interval $[0,1]$ and equip $C$ with the Borel field $\mathcal{B}$. We claim that there is a probability measure $\tilde{P}:\mathcal{B} \to [0,1]$ such that under $\tilde{P}$, the map $f:C \to C$ defined by
$$f(x) = \left(t \mapsto x(t) + \int_0^t \sqrt{|x(s)|}\,ds,\ 0 \le t \le 1\right)$$
is a standard Brownian motion.

To prove the claim, let $P:\mathcal{B} \to [0,1]$ be Wiener measure and put $W(x) = x$, $x \in C$. Note that under $P$, the map $W$ is a real-valued standard Brownian motion. Set
$$\tilde{W}(t) = W(t) + \int_0^t \sqrt{|W(s)|}\,ds,\quad 0 \le t \le 1,$$
and
$$d\tilde{P} = e^{-\int_0^1 \sqrt{|W(s)|}\,dW(s) - \frac{1}{2}\int_0^1 |W(s)|\,ds}\,dP.$$
If the Novikov condition
$$E\!\left[e^{\frac{1}{2}\int_0^1 |W(s)|\,ds}\right] < \infty$$
is fulfilled, we know that under $\tilde{P}$, the process $\tilde{W}$ is a Brownian motion. But
$$\int_0^1 |W(s)|\,ds \le \max_{0 \le s \le 1} |W(s)|$$
and for given $y \ge 0$ we use the Bachelier double law
$$P\!\left[\max_{0 \le s \le 1} W(s) \ge y\right] = 2P[W(1) \ge y]$$
to get
$$P\!\left[\int_0^1 |W(s)|\,ds \ge y\right] \le P\!\left[\max_{0 \le s \le 1} |W(s)| \ge y\right]$$
$$\le P\!\left[\max_{0 \le s \le 1} W(s) \ge y\right] + P\!\left[\min_{0 \le s \le 1} W(s) \le -y\right] = 2P\!\left[\max_{0 \le s \le 1} W(s) \ge y\right] = 4P[W(1) \ge y] \le 2e^{-\frac{y^2}{2}}.$$
Hence, with the substitution $u = e^{y/2}$,
$$E\!\left[e^{\frac{1}{2}\int_0^1 |W(s)|\,ds}\right] = \int_0^\infty P\!\left[\int_0^1 |W(s)|\,ds \ge 2\ln u\right]du \le 1 + \frac{1}{2}\int_0^\infty P\!\left[\int_0^1 |W(s)|\,ds \ge y\right]e^{\frac{y}{2}}\,dy$$
$$\le 1 + \int_0^\infty e^{-\frac{y^2}{2}}e^{\frac{y}{2}}\,dy < \infty$$
and we are done.
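The Novikov expectation above is in fact quite modest in size. A rough Monte Carlo sketch (grid size, path count, and seed are arbitrary choices; the time integral is approximated by a left-point Riemann sum):

```python
import math
import random

def mc_novikov(n_steps=200, n_paths=20000, seed=2):
    """Estimate E[exp(0.5 * int_0^1 |W(s)| ds)] by simulating Brownian
    paths on a uniform grid."""
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    sqdt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        w = 0.0
        integral = 0.0  # left-point Riemann sum of |W| over [0, 1]
        for _ in range(n_steps):
            integral += abs(w) * dt
            w += rng.gauss(0.0, sqdt)
        total += math.exp(0.5 * integral)
    return total / n_paths
```

Since $E\int_0^1 |W(s)|\,ds = \frac{2}{3}\sqrt{2/\pi} \approx 0.53$, the estimate lands near $1.3$, comfortably finite as the tail bound above guarantees.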
The definitions and results in this section extend to Brownian motion in several dimensions, and the generalization is often only a question of notation (cf. the Shreve book).
Example 3.6.3. Let $T \in\ ]0,1[$ and let $W = (W_1(t), W_2(t))_{0 \le t \le T}$ be a 2-dimensional Brownian motion with the natural filtration $\mathcal{F}_t = \sigma(W(s);\ s \le t)$, $0 \le t \le T$. We want to find a constant $C$ such that the measure $\tilde{P}$ defined by the equation
$$d\tilde{P} = Ce^{W_1(T)W_2(T)}\,dP \quad \text{on } \mathcal{F}_T$$
is a probability measure and, moreover, we want to find a martingale $(L_t, \mathcal{F}_t)_{0 \le t \le T}$ such that
$$d\tilde{P} = L_t\,dP \quad \text{on } \mathcal{F}_t$$
for every $0 \le t \le T$. With this aim, first note that
$$L_t/C = E\!\left[e^{W_1(T)W_2(T)} \mid \mathcal{F}_t\right]$$
$$= e^{W_1(t)W_2(t)}E\!\left[e^{(W_1(T)-W_1(t))(W_2(T)-W_2(t)) + W_2(t)(W_1(T)-W_1(t)) + W_1(t)(W_2(T)-W_2(t))} \mid \mathcal{F}_t\right]$$
$$= e^{W_1(t)W_2(t)}E\!\left[e^{(T-t)XY + b\sqrt{T-t}\,X + a\sqrt{T-t}\,Y}\right]\Big|_{(a,b)=(W_1(t),W_2(t))},$$
where $X, Y \in N(0,1)$ are independent. Moreover,
$$E\!\left[e^{(T-t)XY + b\sqrt{T-t}\,X + a\sqrt{T-t}\,Y}\right] = \int_{\mathbf{R}^2} e^{-\frac{1}{2}(x^2+y^2-2(T-t)xy) + b\sqrt{T-t}\,x + a\sqrt{T-t}\,y}\,\frac{dxdy}{2\pi}$$
$$= \left[\begin{array}{l} u = x - (T-t)y \\ v = \sqrt{1-(T-t)^2}\,y \end{array}\right] = \int_{\mathbf{R}^2} e^{-\frac{1}{2}(u^2+v^2) + b\sqrt{T-t}\left(u + \frac{T-t}{\sqrt{1-(T-t)^2}}v\right) + \frac{a\sqrt{T-t}}{\sqrt{1-(T-t)^2}}v}\,\frac{dudv}{2\pi\sqrt{1-(T-t)^2}}$$
$$= \frac{1}{\sqrt{1-(T-t)^2}}\,e^{\frac{1}{2}b^2(T-t) + \frac{T-t}{2(1-(T-t)^2)}(b(T-t)+a)^2} = \frac{1}{\sqrt{1-(T-t)^2}}\,e^{\frac{a^2(T-t)}{2(1-(T-t)^2)} + \frac{ab(T-t)^2}{1-(T-t)^2} + \frac{b^2(T-t)}{2(1-(T-t)^2)}}.$$
Hence
$$L_t/C = \frac{e^{W_1(t)W_2(t)}}{\sqrt{1-(T-t)^2}}\,e^{\frac{W_1^2(t)(T-t)}{2(1-(T-t)^2)} + \frac{(T-t)^2 W_1(t)W_2(t)}{1-(T-t)^2} + \frac{W_2^2(t)(T-t)}{2(1-(T-t)^2)}}$$
and, since $L_0 = 1$, it follows that
$$C = \sqrt{1-T^2}$$
and
$$L_t = \frac{\sqrt{1-T^2}}{\sqrt{1-(T-t)^2}}\,e^{\frac{W_1^2(t)(T-t)}{2(1-(T-t)^2)} + \frac{W_1(t)W_2(t)}{1-(T-t)^2} + \frac{W_2^2(t)(T-t)}{2(1-(T-t)^2)}}.$$
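The normalization can be sanity-checked by Monte Carlo: writing $W_1(T) = \sqrt{T}X$ and $W_2(T) = \sqrt{T}Y$ for independent standard normals, the Gaussian computation above gives $E[e^{W_1(T)W_2(T)}] = E[e^{TXY}] = 1/\sqrt{1-T^2}$. A sketch (using $T = 0.3$, which keeps the estimator's variance finite):

```python
import math
import random

def mc_exp_product(T=0.3, n=200000, seed=3):
    """Estimate E[exp(W1(T) * W2(T))] = E[exp(T * X * Y)] for independent
    standard normals X, Y; the exact value is 1/sqrt(1 - T^2) for T < 1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += math.exp(T * rng.gauss(0.0, 1.0) * rng.gauss(0.0, 1.0))
    return total / n
```

The agreement with $1/\sqrt{1-0.09} \approx 1.048$ confirms that $C = \sqrt{1-T^2}$ makes $\tilde{P}$ a probability measure.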
3.7 Stochastic differential equations

Partial differential equations and stochastic differential equations are basic for most models in option pricing, and we find it natural to complement the Shreve book in various ways.

To start with, let $(W(t))_{0 \le t \le T}$ be a standard Brownian motion and $\mathcal{F}(t) = \sigma(W(s);\ 0 \le s \le t)$, $0 \le t \le T$. We will consider the stochastic differential equation
$$dX(t) = a(t, X(t))\,dt + b(t, X(t))\,dW(t),\quad 0 \le t \le T,$$
with the initial value $x_0 \in \mathbf{R}$ or, equivalently, the stochastic integral equation
$$X(t) = x_0 + \int_0^t a(s, X(s))\,ds + \int_0^t b(s, X(s))\,dW(s),\quad 0 \le t \le T,$$
where it is assumed that the process $X = (X(t))_{0 \le t \le T}$ is adapted, that is, the random variable $X(t)$ is $\mathcal{F}(t)$-measurable for every $t \in [0,T]$. Throughout it will be assumed that the functions
$$a:[0,T]\times\mathbf{R} \to \mathbf{R}$$
and
$$b:[0,T]\times\mathbf{R} \to \mathbf{R}$$
are continuous but, in general, more restrictions will be imposed.

Theorem 3.7.1. Suppose there is a constant $K$ such that
$$|a(t,x) - a(t,y)| \le K|x-y|,\quad 0 \le t \le T,\ x, y \in \mathbf{R},$$
$$|b(t,x) - b(t,y)| \le K|x-y|,\quad 0 \le t \le T,\ x, y \in \mathbf{R},$$
$$|a(t,x)| \le K(1 + |x|),\quad 0 \le t \le T,\ x \in \mathbf{R},$$
and
$$|b(t,x)| \le K(1 + |x|),\quad 0 \le t \le T,\ x \in \mathbf{R}.$$
Then the differential equation
$$dX(t) = a(t, X(t))\,dt + b(t, X(t))\,dW(t),\quad 0 \le t \le T,$$
equipped with the initial condition $X(0) = x_0$ possesses an adapted solution $(X(t))_{0 \le t \le T}$ with continuous trajectories a.s. and such that
$$\sup_{0 \le t \le T} E[X^2(t)] < \infty.$$
If $(Y(t))_{0 \le t \le T}$ is another solution with these properties, then
$$P[X(t) = Y(t)\ \text{for every}\ 0 \le t \le T] = 1.$$
For brevity we only prove uniqueness in Theorem 3.7.1 and refer to A. Friedman, Stochastic Differential Equations and Applications, Vol. 1, Academic Press 1975, for the remaining part.
PROOF OF UNIQUENESS. If $(X(t))_{0 \le t \le T}$ and $(Y(t))_{0 \le t \le T}$ are solutions,
$$X(t) - Y(t) = \int_0^t (a(s,X(s)) - a(s,Y(s)))\,ds + \int_0^t (b(s,X(s)) - b(s,Y(s)))\,dW(s).$$
But
$$(\alpha + \beta)^2 \le 2(\alpha^2 + \beta^2),\quad \alpha, \beta \in \mathbf{R},$$
and we get
$$E[(X(t)-Y(t))^2] \le 2E\!\left[\left(\int_0^t (a(s,X(s)) - a(s,Y(s)))\,ds\right)^2\right] + 2E\!\left[\left(\int_0^t (b(s,X(s)) - b(s,Y(s)))\,dW(s)\right)^2\right].$$
By applying the Cauchy-Schwarz inequality we get
$$E\!\left[\left(\int_0^t (a(s,X(s)) - a(s,Y(s)))\,ds\right)^2\right] \le tE\!\left[\int_0^t (a(s,X(s)) - a(s,Y(s)))^2\,ds\right]$$
and since
$$(a(s,X(s)) - a(s,Y(s)))^2 \le K^2(X(s) - Y(s))^2$$
it follows that
$$E\!\left[\left(\int_0^t (a(s,X(s)) - a(s,Y(s)))\,ds\right)^2\right] \le tK^2\int_0^t E[(X(s) - Y(s))^2]\,ds.$$
Moreover,
$$(b(s,X(s)) - b(s,Y(s)))^2 \le K^2(X(s) - Y(s))^2$$
and
$$E\!\left[\int_0^t (b(s,X(s)) - b(s,Y(s)))^2\,ds\right] \le K^2\int_0^t E[(X(s) - Y(s))^2]\,ds$$
$$\le 2K^2 t\left(\sup_{0 \le t \le T} E[X^2(t)] + \sup_{0 \le t \le T} E[Y^2(t)]\right) < \infty.$$
Hence
$$(b(t,X(t)) - b(t,Y(t)))_{0 \le t \le T} \in \mathcal{F}$$
and
$$E\!\left[\left(\int_0^t (b(s,X(s)) - b(s,Y(s)))\,dW(s)\right)^2\right] = E\!\left[\int_0^t (b(s,X(s)) - b(s,Y(s)))^2\,ds\right] \le K^2\int_0^t E[(X(s) - Y(s))^2]\,ds.$$
From the above, if $C = 4K^2\max(T,1)$,
$$E[(X(t)-Y(t))^2] \le C\int_0^t E[(X(s) - Y(s))^2]\,ds.$$
Consequently, by introducing the function
$$f(t) = E[(X(t)-Y(t))^2],\quad 0 \le t \le T,$$
we get
$$f(t) \le C\int_0^t f(s)\,ds,\quad 0 \le t \le T.$$
Next let
$$M = \sup_{0 \le s \le T} f(s)$$
and use induction to conclude that
$$f(t) \le \frac{MC^n}{n!}t^n,\quad n \in \mathbf{N}.$$
From these inequalities it follows that $f = 0$ or, stated otherwise,
$$P[X(t) = Y(t)] = 1\quad \text{for every}\ 0 \le t \le T.$$
Hence
$$P[X(t) = Y(t)\ \text{for every rational}\ t \in [0,T]] = 1,$$
and as the processes $(X(t))_{0 \le t \le T}$ and $(Y(t))_{0 \le t \le T}$ possess continuous sample paths with probability one, the uniqueness part of the theorem is proved.
Example 3.7.1. Suppose $\alpha$, $\beta$, $\sigma$, and $x_0$ are real numbers. The equation
$$dX(t) = (\alpha + \beta X(t))\,dt + \sigma X(t)\,dW(t),\quad t \ge 0,$$
with the initial condition $X(0) = x_0$ possesses a unique solution. In the special case $\alpha = 0$ and $x_0 = 1$ the solution equals
$$U(t) = e^{(\beta - \frac{\sigma^2}{2})t + \sigma W(t)}.$$
To treat the general case, set
$$Y(t) = \frac{X(t)}{U(t)} = X(t)V(t),$$
where
$$V(t) = e^{(-\beta + \frac{\sigma^2}{2})t - \sigma W(t)}.$$
From
$$dV(t) = V(t)\left((-\beta + \sigma^2)\,dt - \sigma\,dW(t)\right)$$
and the product rule for differentiation,
$$d(X(t)V(t)) = V(t)\,dX(t) + X(t)\,dV(t) + dX(t)\,dV(t)$$
$$= V(t)\{(\alpha + \beta X(t))\,dt + \sigma X(t)\,dW(t)\} + X(t)V(t)\{(-\beta + \sigma^2)\,dt - \sigma\,dW(t)\} - \sigma^2 X(t)V(t)\,dt = \alpha V(t)\,dt.$$
Thus
$$X(t)V(t) = x_0 + \alpha\int_0^t V(u)\,du$$
or, equivalently,
$$X(t) = e^{(\beta - \frac{\sigma^2}{2})t + \sigma W(t)}\left(x_0 + \alpha\int_0^t e^{(-\beta + \frac{\sigma^2}{2})u - \sigma W(u)}\,du\right).$$
Note that $X(t) > 0$ if $x_0 > 0$ and $\alpha > 0$.

For positive parameters $x_0$, $\kappa$, $\theta$, and $\gamma$, the process $(X(t))_{t \ge 0}$ governed by the equation
$$dX(t) = \kappa(\theta - X(t))\,dt + \gamma X(t)\,dW(t)$$
with the initial condition $X(0) = x_0$ describes the volatility in the so-called Garch diffusion model. The same equation also describes the short rate in the so-called Brennan-Schwartz model.
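The closed-form solution of the linear equation can be checked pathwise against an Euler discretization, with the integral $\int_0^t V(u)\,du$ replaced by a Riemann sum on the same grid. A sketch; the parameter values and seed are arbitrary illustrative choices:

```python
import math
import random

def linear_sde_paths(alpha=0.5, beta=0.2, sigma=0.3, x0=1.0, T=1.0,
                     n_steps=4000, seed=4):
    """Return (euler, exact) terminal values for
    dX = (alpha + beta*X) dt + sigma*X dW, driven by one Brownian path."""
    rng = random.Random(seed)
    dt = T / n_steps
    x = x0
    w = 0.0       # running Brownian path W(t)
    riemann = 0.0  # left-point sum approximating int_0^T V(u) du
    for k in range(n_steps):
        t = k * dt
        v = math.exp((-beta + 0.5 * sigma**2) * t - sigma * w)
        riemann += v * dt
        dW = rng.gauss(0.0, math.sqrt(dt))
        x += (alpha + beta * x) * dt + sigma * x * dW
        w += dW
    exact = math.exp((beta - 0.5 * sigma**2) * T + sigma * w) * (x0 + alpha * riemann)
    return x, exact
```

With a fine grid the two terminal values agree closely, and both stay positive, consistent with the remark that $X(t) > 0$ when $x_0 > 0$ and $\alpha > 0$.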
Example 3.7.2 (The stochastic Verhulst equation). Let $s_0$ and $\delta$ be positive parameters and $\lambda$ and $\sigma$ real numbers. The stochastic differential equation
$$\begin{cases} dS(t) = S(t)(\lambda - \delta S(t))\,dt + \sigma S(t)\,dW(t),\quad t \ge 0, \\ S(0) = s_0,\quad S(t) > 0,\ 0 \le t \le T, \end{cases}$$
does not meet the assumptions in Theorem 3.7.1. To solve the equation, put
$$X(t) = \frac{1}{S(t)}$$
and rewrite the stochastic differential equation in the following equivalent form:
$$dX(t) = -\frac{1}{S^2(t)}\{S(t)(\lambda - \delta S(t))\,dt + \sigma S(t)\,dW(t)\} + \frac{1}{S^3(t)}(dS(t))^2$$
$$= -(\lambda X(t) - \delta)\,dt - \sigma X(t)\,dW(t) + \sigma^2 X(t)\,dt$$
$$= (\delta + (\sigma^2 - \lambda)X(t))\,dt - \sigma X(t)\,dW(t).$$
Now, by the previous example,
$$X(t) = e^{(\frac{\sigma^2}{2} - \lambda)t - \sigma W(t)}\left(X(0) + \delta\int_0^t e^{(\lambda - \frac{\sigma^2}{2})u + \sigma W(u)}\,du\right)$$
and we get
$$S(t) = \frac{e^{(\lambda - \frac{\sigma^2}{2})t + \sigma W(t)}}{X(0) + \delta\int_0^t e^{(\lambda - \frac{\sigma^2}{2})u + \sigma W(u)}\,du}$$
or
$$S(t) = \frac{s_0 e^{(\lambda - \frac{\sigma^2}{2})t + \sigma W(t)}}{1 + \delta s_0\int_0^t e^{(\lambda - \frac{\sigma^2}{2})u + \sigma W(u)}\,du}.$$
Example 3.7.3. Suppose we want to solve the stochastic differential equation
$$dX(t) = a(X(t))\,dt + b(X(t))\,dW(t),\quad 0 \le t \le T,$$
with the initial value $X(0) = x_0 \in \mathbf{R}$. To this end we start with a partition
$$0 = t_0 < t_1 < \dots < t_n = T$$
and rewrite the integral equation
$$X(t) = x_0 + \int_0^t a(X(s))\,ds + \int_0^t b(X(s))\,dW(s),\quad 0 \le t \le T,$$
in the equivalent form
$$\begin{cases} X(0) = x_0, \\ X(t_k) = X(t_{k-1}) + \int_{t_{k-1}}^{t_k} a(X(s))\,ds + \int_{t_{k-1}}^{t_k} b(X(s))\,dW(s),\quad k = 1, \dots, n. \end{cases}$$
The standard Euler scheme reads as follows:
$$\begin{cases} X^{(n)}(0) = x_0, \\ X^{(n)}(t_k) = X^{(n)}(t_{k-1}) + a(X^{(n)}(t_{k-1}))\Delta t_k + b(X^{(n)}(t_{k-1}))\Delta W(t_k),\quad k = 1, \dots, n, \end{cases}$$
where
$$\Delta t_k = t_k - t_{k-1},\quad k = 1, \dots, n,$$
and
$$\Delta W(t_k) = W(t_k) - W(t_{k-1}),\quad k = 1, \dots, n.$$
In the so-called Milstein scheme, the term
$$\int_{t_{k-1}}^{t_k} b(X(s))\,dW(s)$$
is approximated in a slightly different way. First,
$$db(X(s)) = b'(X(s))\,dX(s) + \frac{b^2(X(s))}{2}b''(X(s))\,ds$$
$$= \left(a(X(s))b'(X(s)) + \frac{b^2(X(s))}{2}b''(X(s))\right)ds + b(X(s))b'(X(s))\,dW(s)$$
and from this
$$b(X(s)) = b(X(t_{k-1})) + \int_{t_{k-1}}^s \left(a(X(u))b'(X(u)) + \frac{b^2(X(u))}{2}b''(X(u))\right)du + \int_{t_{k-1}}^s b(X(u))b'(X(u))\,dW(u).$$
Now
$$X(t_k) - X(t_{k-1}) = \int_{t_{k-1}}^{t_k} a(X(s))\,ds + \int_{t_{k-1}}^{t_k} b(X(s))\,dW(s)$$
$$= a(X(t_{k-1}))\Delta t_k + b(X(t_{k-1}))\Delta W(t_k) + R_k,$$
where
$$R_k = \int_{t_{k-1}}^{t_k}\left[\int_{t_{k-1}}^s b(X(u))b'(X(u))\,dW(u)\right]dW(s) + R_k'.$$
Next we approximate the expression
$$\int_{t_{k-1}}^{t_k}\left[\int_{t_{k-1}}^s b(X(u))b'(X(u))\,dW(u)\right]dW(s)$$
by
$$b(X(t_{k-1}))b'(X(t_{k-1}))\int_{t_{k-1}}^{t_k}\left(\int_{t_{k-1}}^s dW(u)\right)dW(s),$$
where
$$\int_{t_{k-1}}^{t_k}\left(\int_{t_{k-1}}^s dW(u)\right)dW(s) = \int_{t_{k-1}}^{t_k}(W(s) - W(t_{k-1}))\,dW(s)$$
$$= \int_{t_{k-1}}^{t_k} W(s)\,dW(s) - W(t_{k-1})(W(t_k) - W(t_{k-1}))$$
$$= \frac{W^2(t_k) - t_k}{2} - \frac{W^2(t_{k-1}) - t_{k-1}}{2} - W(t_{k-1})(W(t_k) - W(t_{k-1}))$$
$$= \frac{1}{2}\left((\Delta W(t_k))^2 - \Delta t_k\right).$$
If we drop $R_k'$ we get the Milstein scheme:
$$X^{(n)}(0) = x_0$$
and
$$X^{(n)}(t_k) = X^{(n)}(t_{k-1}) + a(X^{(n)}(t_{k-1}))\Delta t_k + b(X^{(n)}(t_{k-1}))\Delta W(t_k) + \frac{1}{2}b(X^{(n)}(t_{k-1}))b'(X^{(n)}(t_{k-1}))\left((\Delta W(t_k))^2 - \Delta t_k\right),\quad k = 1, \dots, n.$$
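The two schemes are easy to compare numerically. The sketch below applies both to geometric Brownian motion $dX = \mu X\,dt + \sigma X\,dW$ (so $b(x) = \sigma x$ and $b'(x) = \sigma$), whose exact solution $X(T) = x_0 e^{(\mu - \sigma^2/2)T + \sigma W(T)}$ makes the strong (pathwise) error measurable; all parameter values are illustrative only:

```python
import math
import random

def strong_errors(mu=0.1, sigma=0.5, x0=1.0, T=1.0,
                  n_steps=64, n_paths=2000, seed=5):
    """Mean absolute terminal error of the Euler and Milstein schemes for
    geometric Brownian motion, both driven by the same Brownian increments."""
    rng = random.Random(seed)
    dt = T / n_steps
    err_euler = err_milstein = 0.0
    for _ in range(n_paths):
        xe = xm = x0
        w = 0.0
        for _ in range(n_steps):
            dW = rng.gauss(0.0, math.sqrt(dt))
            w += dW
            # Euler step
            xe += mu * xe * dt + sigma * xe * dW
            # Milstein step: extra 0.5 * b * b' * ((dW)^2 - dt) correction
            xm += (mu * xm * dt + sigma * xm * dW
                   + 0.5 * sigma**2 * xm * (dW * dW - dt))
        exact = x0 * math.exp((mu - 0.5 * sigma**2) * T + sigma * w)
        err_euler += abs(xe - exact)
        err_milstein += abs(xm - exact)
    return err_euler / n_paths, err_milstein / n_paths
```

With these settings the Milstein error comes out noticeably smaller than the Euler error, consistent with the strong convergence orders $1$ and $1/2$, respectively.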
Exercises

1. Suppose $X$ is the Garch diffusion process. Show that $1/X$ solves the stochastic Verhulst equation.

2. Suppose $x_0$ is a real constant. Solve the differential equation
$$dX(t) = \frac{1}{2}X(t)\,dt + \sqrt{1 + X^2(t)}\,dW(t),\quad t \ge 0,$$
with the initial value $X(0) = x_0$.

(Answer: $X(t) = \sinh(W(t) + \ln(x_0 + \sqrt{1 + x_0^2}))$.)
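The answer to Exercise 2 can be checked numerically: an Euler discretization of the equation, driven by a simulated Brownian path, should track $\sinh(W(t) + \operatorname{arsinh} x_0)$, since $\ln(x_0 + \sqrt{1+x_0^2}) = \operatorname{arsinh} x_0$. A sketch; the step count and seed are arbitrary:

```python
import math
import random

def check_sinh_solution(x0=0.5, T=1.0, n_steps=20000, seed=6):
    """Euler-simulate dX = X/2 dt + sqrt(1+X^2) dW and compare the terminal
    value with the closed form sinh(W(T) + arsinh(x0))."""
    rng = random.Random(seed)
    dt = T / n_steps
    x, w = x0, 0.0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))
        x += 0.5 * x * dt + math.sqrt(1.0 + x * x) * dW
        w += dW
    exact = math.sinh(w + math.asinh(x0))
    return x, exact
```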
3.8 A Dirichlet problem in one space dimension

In mathematical finance the connection between Brownian motion and partial differential equations is of fundamental importance and requires a course of its own. In spite of this, in this section we will make a minor excursion into this important field without being able to give complete mathematical proofs of every statement.

Consider two continuously differentiable functions $a$ and $b$ on a compact interval $[\alpha, \beta]$ and assume $\min b > 0$. The boundary value problem
$$\begin{cases} \frac{b^2(x)}{2}u'' + a(x)u' = 0, \\ u(\alpha) = A,\quad u(\beta) = B, \end{cases}$$
is simple to solve for any real $A$ and $B$ using the integrating factor
$$e^{\int_\alpha^x \frac{2a(z)}{b^2(z)}\,dz}.$$
The purpose of this section is to give a probabilistic representation of the solution $u$ of the above Dirichlet problem. To this end, consider smooth bounded extensions of $a$ and $b$ defined on $\mathbf{R}$ and, again, denote these extensions by $a$ and $b$, respectively. Furthermore, choose $x \in\ ]\alpha, \beta[$ and denote by $X$ the solution of the stochastic differential equation
$$dX(t) = a(X(t))\,dt + b(X(t))\,dW(t),\quad 0 \le t < \infty,$$
with the initial value $X(0) = x$ (to see that $X$ exists, apply Theorem 3.7.1 on each time interval $0 \le t \le n$, $n \in \mathbf{N}_+$).

Next consider a smooth extension of $u$ defined on $\mathbf{R}$. For simplicity, this extension is denoted by $u$, and without loss of generality we may assume that $u'$ is bounded. By Itô's lemma, we have
$$du(X(t)) = u'(X(t))\,dX(t) + \frac{1}{2}u''(X(t))(dX(t))^2$$
$$= \left(u'(X(t))a(X(t)) + \frac{1}{2}u''(X(t))b^2(X(t))\right)dt + u'(X(t))b(X(t))\,dW(t).$$
Hence
$$u(X(t)) = u(x) + \int_0^t \left(u'(X(s))a(X(s)) + \frac{1}{2}u''(X(s))b^2(X(s))\right)ds + \int_0^t u'(X(s))b(X(s))\,dW(s).$$
Thus, if $\tau$ is the first time $X$ hits the boundary of $[\alpha, \beta]$ and $T$ is a strictly positive number,
$$u(X(\min(\tau, T))) = u(x) + \int_0^{\min(\tau,T)} \left(u'(X(s))a(X(s)) + \frac{1}{2}u''(X(s))b^2(X(s))\right)ds + \int_0^{\min(\tau,T)} u'(X(s))b(X(s))\,dW(s),$$
and, since $u$ solves the differential equation on $[\alpha,\beta]$ and $X(s) \in [\alpha,\beta]$ for $s \le \tau$, the drift integral vanishes:
$$u(X(\min(\tau, T))) = u(x) + \int_0^{\min(\tau,T)} u'(X(s))b(X(s))\,dW(s).$$
Since the expectation of the stochastic integral on the right-hand side equals zero, we have
$$u(x) = E[u(X(\min(\tau, T)))]$$
and letting $T \to \infty$ yields the formula
$$u(x) = E[u(X(\tau))]$$
since $P[\tau = \infty] = 0$ (see I. Karatzas and S. E. Shreve, "Brownian Motion and Stochastic Calculus", Chapter 5). To emphasize that $X(0) = x$ we sometimes write
$$u(x) = E[u(X(\tau)) \mid X(0) = x]$$
or
$$u(x) = E_x[u(X(\tau))].$$
We now have
$$u(x) = AP_x[X(\tau) = \alpha] + BP_x[X(\tau) = \beta],$$
where the notation $P_x$ indicates that the process starts at the point $x$.

In the special case $A = 1$ and $B = 0$ we get
$$u(x) = 1 - \frac{\int_\alpha^x e^{-\int_\alpha^y \frac{2a(z)}{b^2(z)}\,dz}\,dy}{\int_\alpha^\beta e^{-\int_\alpha^y \frac{2a(z)}{b^2(z)}\,dz}\,dy}$$
and
$$P_x[X(\tau) = \alpha] = 1 - \frac{\int_\alpha^x e^{-\int_\alpha^y \frac{2a(z)}{b^2(z)}\,dz}\,dy}{\int_\alpha^\beta e^{-\int_\alpha^y \frac{2a(z)}{b^2(z)}\,dz}\,dy}.$$
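For constant drift $a(x) = \mu$ and $b \equiv 1$ (Brownian motion with drift), the integrals can be done in closed form, giving $P_x[X(\tau) = \alpha] = (e^{-2\mu x} - e^{-2\mu\beta})/(e^{-2\mu\alpha} - e^{-2\mu\beta})$, so the general formula can be checked by numerical quadrature. A sketch using the trapezoidal rule on a uniform grid; the parameter values are arbitrary:

```python
import math

def exit_prob_quadrature(mu=0.7, alpha=0.0, beta=2.0, x=0.5, n=20000):
    """P_x[X(tau) = alpha] for dX = mu dt + dW on [alpha, beta], computed from
    the formula 1 - int_alpha^x e^{-2 mu (y - alpha)} dy / int_alpha^beta (same)."""
    def integral(a, b):
        h = (b - a) / n
        s = 0.5 * (math.exp(-2 * mu * (a - alpha)) + math.exp(-2 * mu * (b - alpha)))
        for k in range(1, n):
            s += math.exp(-2 * mu * (a + k * h - alpha))
        return s * h
    return 1.0 - integral(alpha, x) / integral(alpha, beta)

def exit_prob_exact(mu=0.7, alpha=0.0, beta=2.0, x=0.5):
    """Closed form obtained by evaluating the integrals analytically."""
    return (math.exp(-2 * mu * x) - math.exp(-2 * mu * beta)) / \
           (math.exp(-2 * mu * alpha) - math.exp(-2 * mu * beta))
```

Note that for positive drift $\mu$ the probability of exiting at the lower boundary is small, as one would expect.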
Suppose $\theta$, $\kappa$, $\gamma$ are strictly positive parameters and consider the square root process $(X(t))_{t \ge 0}$ governed by the CIR equation
$$dX(t) = \kappa(\theta - X(t))\,dt + \gamma\sqrt{X(t)}\,dW(t)$$
with the initial condition $X(0) = x_0 > 0$. Moreover, assume
$$\kappa\theta \ge \frac{\gamma^2}{2}.$$
If $0 < \alpha < x < \beta$, it is readily seen that
$$P_x[X(\tau) = \alpha] \to 0\quad \text{as } \alpha \to 0.$$
Hence the CIR equation has a strictly positive solution for every $t \ge 0$ and $x > 0$.

The CIR equation is used in interest rate theory; moreover, it represents the volatility process in Heston's option pricing model, and the following result is of great interest.
Theorem 3.8.1. If $\lambda > 0$,
$$E_x\!\left[e^{-\lambda X(t)}\right] = \frac{1}{\left(1 + \frac{\lambda\gamma^2}{2\kappa}(1 - e^{-\kappa t})\right)^{\frac{2\kappa\theta}{\gamma^2}}}\exp\!\left(-\frac{\lambda e^{-\kappa t}x}{1 + \frac{\lambda\gamma^2}{2\kappa}(1 - e^{-\kappa t})}\right).$$

For a proof of Theorem 3.8.1 in a special case, see Exercise 6.6 in the Shreve book "Stochastic Calculus for Finance II: Continuous-Time Models".
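The formula can be sanity-checked by Monte Carlo, for instance with a full-truncation Euler scheme for the CIR dynamics. This is a sketch: the scheme and all parameter values are illustrative choices (they satisfy $\kappa\theta \ge \gamma^2/2$), not part of the theorem.

```python
import math
import random

def cir_laplace_mc(lam=1.0, kappa=2.0, theta=0.1, gamma=0.5, x0=0.1,
                   t=1.0, n_steps=400, n_paths=20000, seed=7):
    """Monte Carlo estimate of E_x[exp(-lam * X(t))] for the CIR process,
    using a full-truncation Euler scheme (negative excursions clipped)."""
    rng = random.Random(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            xp = max(x, 0.0)
            x += (kappa * (theta - xp) * dt
                  + gamma * math.sqrt(xp) * rng.gauss(0.0, math.sqrt(dt)))
        total += math.exp(-lam * max(x, 0.0))
    return total / n_paths

def cir_laplace_exact(lam=1.0, kappa=2.0, theta=0.1, gamma=0.5, x0=0.1, t=1.0):
    """The closed form from Theorem 3.8.1."""
    c = lam * gamma**2 / (2 * kappa) * (1 - math.exp(-kappa * t))
    return (1 + c) ** (-2 * kappa * theta / gamma**2) * \
           math.exp(-lam * math.exp(-kappa * t) * x0 / (1 + c))
```

The truncation introduces a small discretization bias, so only rough agreement should be expected.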
PROOF. Set
$$Lf = \frac{\gamma^2}{2}x\frac{d^2 f}{dx^2} + \kappa(\theta - x)\frac{df}{dx}$$
and
$$u(t,x) = \frac{1}{\left(1 + \frac{\lambda\gamma^2}{2\kappa}(1 - e^{-\kappa t})\right)^{\frac{2\kappa\theta}{\gamma^2}}}\exp\!\left(-\frac{\lambda e^{-\kappa t}x}{1 + \frac{\lambda\gamma^2}{2\kappa}(1 - e^{-\kappa t})}\right).$$
By direct computation we find that the function $u(t,x)$ satisfies the equation
$$Lu = \frac{\partial u}{\partial t}\quad \text{if } t > 0,\ x > 0,$$
with the initial condition $u(0,x) = e^{-\lambda x}$.

Now suppose $v(t,x)$ is a smooth function and use the Itô lemma to conclude that
$$v(\min(t,\tau), X(\min(t,\tau))) = v(0,x) + \int_0^{\min(t,\tau)} \frac{\partial v}{\partial x}(s, X(s))\,\gamma\sqrt{X(s)}\,dW(s) + \int_0^{\min(t,\tau)} \left(\frac{\partial v}{\partial t} + Lv\right)ds$$
if $\tau$ is any stopping time. Moreover, choose a fixed number $t_0 > 0$ and set $v(t,x) = u(t_0 - t, x)$ for $t < t_0$. Then $\frac{\partial v}{\partial t} + Lv = 0$ and
$$u(t_0 - \min(t,\tau), X(\min(t,\tau))) = u(t_0, x) + \int_0^{\min(t,\tau)} \frac{\partial u}{\partial x}(t_0 - s, X(s))\,\gamma\sqrt{X(s)}\,dW(s).$$
Now if $0 < \alpha < x$ and $\tau$ is the first time the process $X$ hits $\alpha$, we conclude that
$$E_x[u(t_0 - \min(t,\tau), X(\min(t,\tau)))] = u(t_0, x).$$
Noting that $\tau \to \infty$ as $\alpha \to 0$, we use dominated convergence to get
$$E_x[u(t_0 - t, X(t))] = u(t_0, x).$$
Finally, letting $t \to t_0^-$, the theorem follows.
If we differentiate with respect to $\lambda$ in the formula in Theorem 3.8.1 and let $\lambda \to 0^+$, it follows that
$$E_x[X(t)] = e^{-\kappa t}x + \theta(1 - e^{-\kappa t})$$
(for a strict proof, A. Friedman, Stochastic Differential Equations and Applications, Vol. 1, Academic Press 1975, is useful). In particular, the process
$$(\sqrt{X(t)})_{0 \le t \le T} \in \mathcal{F}.$$
In addition, it is known that
$$E_x[X^n(t)] < \infty\quad \text{if } n \in \mathbf{N}.$$
In fact much more is true. To explain this, set
$$z(\lambda) = E_x\!\left[e^{\lambda X(t)}\right]\quad \text{if } \lambda < 0,$$
where
$$z(\lambda) = \frac{1}{\left(1 - \frac{\lambda\gamma^2}{2\kappa}(1 - e^{-\kappa t})\right)^{\frac{2\kappa\theta}{\gamma^2}}}\exp\!\left(\frac{\lambda e^{-\kappa t}x}{1 - \frac{\lambda\gamma^2}{2\kappa}(1 - e^{-\kappa t})}\right)$$
is defined in the domain
$$\left\{\lambda \in \mathbf{C};\ \operatorname{Im}\lambda \ne 0\ \text{if}\ \operatorname{Re}\lambda \ge \frac{2\kappa}{\gamma^2(1 - e^{-\kappa t})}\right\}.$$
We already know that
$$z^{(n)}(0) = E_x[X^n(t)].$$
Now if
$$0 \le \lambda < \frac{2\kappa}{\gamma^2(1 - e^{-\kappa t})},$$
then
$$z(\lambda) = \sum_{n=0}^{\infty} \frac{z^{(n)}(0)}{n!}\lambda^n = \sum_{n=0}^{\infty} \frac{E_x[X^n(t)]}{n!}\lambda^n = E_x\!\left[e^{\lambda X(t)}\right].$$
In the applications below, $\lambda$ will always be strictly smaller than 1 and $\theta < 1$; since we assume
$$\kappa\theta \ge \frac{\gamma^2}{2},$$
it follows that $\kappa > \gamma^2/2$ and hence
$$\frac{2\kappa}{\gamma^2(1 - e^{-\kappa t})} > 1,$$
and $E_x\!\left[e^{\lambda X(t)}\right]$ is analytic for
$$\operatorname{Re}\lambda < \frac{2\kappa}{\gamma^2(1 - e^{-\kappa t})}.$$