
Problems in Øksendal's Book


Exercises on stochastic differential equations


1 Problems in Øksendal's book

3.2.

Proof. WLOG, we assume $t = 1$. Then

\[
B_1^3 = \sum_{j=1}^n \left(B_{j/n}^3 - B_{(j-1)/n}^3\right) = \sum_{j=1}^n \left[(B_{j/n} - B_{(j-1)/n})^3 + 3B_{(j-1)/n}B_{j/n}(B_{j/n} - B_{(j-1)/n})\right]
\]
\[
= \sum_{j=1}^n (B_{j/n} - B_{(j-1)/n})^3 + \sum_{j=1}^n 3B_{(j-1)/n}^2(B_{j/n} - B_{(j-1)/n}) + \sum_{j=1}^n 3B_{(j-1)/n}(B_{j/n} - B_{(j-1)/n})^2 := I + II + III.
\]

By Problem EP1-1 and the continuity of Brownian motion,

\[
|I| \le \left[\sum_{j=1}^n (B_{j/n} - B_{(j-1)/n})^2\right] \max_{1 \le j \le n} |B_{j/n} - B_{(j-1)/n}| \to 0 \quad \text{a.s.}
\]

To argue $II \to 3\int_0^1 B_t^2\,dB_t$ as $n \to \infty$, it suffices to show

\[
E\left[\int_0^1 \left(B_t^2 - B_t^{(n)}\right)^2 dt\right] \to 0, \quad \text{where } B_t^{(n)} = \sum_{j=1}^n B_{(j-1)/n}^2\, \mathbf{1}_{\{(j-1)/n < t \le j/n\}}.
\]

For $III$, by looking at a subsequence, we only need to prove the $L^2$-convergence. Indeed,

\[
E\left(\sum_{j=1}^n B_{(j-1)/n}\left[(B_{j/n} - B_{(j-1)/n})^2 - \frac{1}{n}\right]\right)^2 = \sum_{j=1}^n E\left(B_{(j-1)/n}^2\left[(B_{j/n} - B_{(j-1)/n})^2 - \frac{1}{n}\right]^2\right)
\]
\[
= \sum_{j=1}^n \frac{j-1}{n}\, E\left[(B_{j/n} - B_{(j-1)/n})^4 - \frac{2}{n}(B_{j/n} - B_{(j-1)/n})^2 + \frac{1}{n^2}\right] = \sum_{j=1}^n \frac{j-1}{n}\left(\frac{3}{n^2} - \frac{2}{n^2} + \frac{1}{n^2}\right) = \sum_{j=1}^n \frac{2(j-1)}{n^3} \to 0
\]

as $n \to \infty$. This completes our proof.
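The decomposition above is easy to check numerically. The sketch below (not part of the original solution; it uses only the standard library) simulates one Brownian path and verifies that the telescoping identity $B_1^3 = I + II + III$ holds path by path, while $I$ is already negligible for large $n$:

```python
import math
import random

random.seed(0)

def ito_decomposition(n):
    """Simulate one Brownian path on [0, 1] with n steps and return the
    three sums I, II, III from the decomposition of B_1^3 above."""
    dt = 1.0 / n
    B = [0.0]
    for _ in range(n):
        B.append(B[-1] + random.gauss(0.0, math.sqrt(dt)))
    dB = [B[j] - B[j - 1] for j in range(1, n + 1)]
    I = sum(d ** 3 for d in dB)
    II = sum(3 * B[j] ** 2 * dB[j] for j in range(n))
    III = sum(3 * B[j] * dB[j] ** 2 for j in range(n))
    return B, I, II, III

B, I, II, III = ito_decomposition(100_000)
# The telescoping identity B_1^3 = I + II + III is exact (up to float error),
# and I is already tiny for large n (it tends to 0 a.s.).
print(B[-1] ** 3 - (I + II + III), I)
```

Here $II/3$ approximates $\int_0^1 B_t^2\,dB_t$ and $III/3$ approximates $\int_0^1 B_t\,dt$, matching the limits identified in the proof.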

    3.18.

Proof. If $t > s$, then

\[
E\left[\frac{M_t}{M_s}\,\Big|\,\mathcal{F}_s\right] = E\left[e^{\sigma(B_t - B_s) - \frac{1}{2}\sigma^2(t-s)}\,\Big|\,\mathcal{F}_s\right] = E\left[e^{\sigma B_{t-s}}\right]e^{-\frac{1}{2}\sigma^2(t-s)} = 1.
\]

The second equality is due to the fact that $B_t - B_s$ is independent of $\mathcal{F}_s$.
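A quick Monte Carlo sanity check (not in the text) that $E[e^{\sigma B_t - \frac{1}{2}\sigma^2 t}] = 1$; the values of $\sigma$ and $t$ below are arbitrary illustrative choices:

```python
import math
import random

random.seed(1)

# Arbitrary illustrative parameters (not from the text).
sigma, t, n = 0.8, 2.0, 200_000
# Monte Carlo estimate of E[exp(sigma * B_t - sigma^2 * t / 2)].
mean = sum(
    math.exp(sigma * random.gauss(0.0, math.sqrt(t)) - 0.5 * sigma ** 2 * t)
    for _ in range(n)
) / n
print(mean)  # close to 1
```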

    4.4.

Proof. For part a), set $g(t, x) = e^x$ and use Theorem 4.12. For part b), it comes from the fundamental property of the Itô integral, i.e. the Itô integral preserves the martingale property for integrands in $\mathcal{V}$.

Comments: The power of Itô's formula is that it gives martingales, which vanish under expectation.

    4.5.

Proof.

\[
B_t^k = \int_0^t k B_s^{k-1}\,dB_s + \frac{1}{2}k(k-1)\int_0^t B_s^{k-2}\,ds.
\]

Therefore, $\beta_k(t) = E[B_t^k]$ satisfies

\[
\beta_k(t) = \frac{k(k-1)}{2}\int_0^t \beta_{k-2}(s)\,ds.
\]

This gives $E[B_t^4]$ and $E[B_t^6]$. For part b), prove by induction.
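For even moments the recursion closes: writing $E[B_t^{2m}] = a_m t^m$, it gives $a_m = (2m-1)a_{m-1}$, hence $a_m = (2m-1)!!$. A minimal sketch (the function name is ours, not the book's):

```python
def even_moment_coeff(m):
    """Coefficient a_m in E[B_t^(2m)] = a_m * t^m, obtained by iterating
    the recursion beta_k(t) = k(k-1)/2 * int_0^t beta_{k-2}(s) ds,
    which yields a_m = (2m - 1) * a_{m-1} with a_0 = 1."""
    a = 1
    for j in range(1, m + 1):
        a *= 2 * j - 1
    return a

# E[B_t^4] = 3 t^2 and E[B_t^6] = 15 t^3
print(even_moment_coeff(2), even_moment_coeff(3))  # 3 15
```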


4.6. (b)

Proof. Apply Theorem 4.12 with $g(t, x) = e^x$ and $X_t = ct + \sum_{j=1}^n \alpha_j B_j(t)$. Note that $\sum_{j=1}^n \alpha_j B_j(t)$ is a BM, up to a constant coefficient.

    5.1. (ii)

Proof. Set $f(t, x) = x/(1 + t)$; then by Itô's formula, we have

\[
dX_t = df(t, B_t) = -\frac{B_t}{(1+t)^2}\,dt + \frac{dB_t}{1+t} = -\frac{X_t}{1+t}\,dt + \frac{dB_t}{1+t}.
\]

    (iv)

Proof. $dX_t^1 = dt$ is obvious. Set $f(t, x) = e^t x$; then

\[
dX_t^2 = df(t, B_t) = e^t B_t\,dt + e^t\,dB_t = X_t^2\,dt + e^t\,dB_t.
\]

    5.9.

Proof. Let $b(t, x) = \log(1 + x^2)$ and $\sigma(t, x) = \mathbf{1}_{\{x > 0\}}x$; then

\[
|b(t, x)| + |\sigma(t, x)| \le \log(1 + x^2) + |x|.
\]

Note $\log(1 + x^2)/|x|$ is continuous on $\mathbb{R} \setminus \{0\}$ and has limit 0 as $x \to 0$ and as $x \to \infty$, so it is bounded on $\mathbb{R}$. Therefore, there exists a constant $C$ such that

\[
|b(t, x)| + |\sigma(t, x)| \le C(1 + |x|).
\]

Also, by the mean value theorem,

\[
|b(t, x) - b(t, y)| + |\sigma(t, x) - \sigma(t, y)| \le \frac{2|\xi|}{1 + \xi^2}|x - y| + |\mathbf{1}_{\{x > 0\}}x - \mathbf{1}_{\{y > 0\}}y|
\]

for some $\xi$ between $x$ and $y$. So

\[
|b(t, x) - b(t, y)| + |\sigma(t, x) - \sigma(t, y)| \le |x - y| + |x - y| = 2|x - y|.
\]

The conditions of Theorem 5.2.1 are satisfied, and we have existence and uniqueness of a strong solution.

    5.11.


Proof. First, we check by the integration-by-parts formula that

\[
dY_t = \left(-a + b - \int_0^t \frac{dB_s}{1-s}\right)dt + (1-t)\,\frac{dB_t}{1-t} = \frac{b - Y_t}{1-t}\,dt + dB_t.
\]

Set $X_t = (1-t)\int_0^t \frac{dB_s}{1-s}$; then $X_t$ is centered Gaussian, with variance

\[
E[X_t^2] = (1-t)^2 \int_0^t \frac{ds}{(1-s)^2} = (1-t) - (1-t)^2.
\]

So $X_t$ converges in $L^2$ to 0 as $t \to 1$. Since $X_t$ is continuous a.s. for $t \in [0, 1)$, we conclude 0 is the unique a.s. limit of $X_t$ as $t \to 1$.
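The variance formula can be checked by a crude Euler discretization of the stochastic integral (a simulation sketch, not part of the solution; the step and sample counts are arbitrary):

```python
import math
import random

random.seed(2)

def X_sample(t, n=400):
    """One Euler-discretized sample of X_t = (1 - t) * int_0^t dB_s / (1 - s)."""
    ds = t / n
    acc = 0.0
    for j in range(n):
        s = j * ds
        acc += random.gauss(0.0, math.sqrt(ds)) / (1.0 - s)
    return (1.0 - t) * acc

t, m = 0.5, 5000
var_est = sum(X_sample(t) ** 2 for _ in range(m)) / m
print(var_est, (1 - t) - (1 - t) ** 2)  # both close to 0.25
```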

7.8.

Proof. $\{\tau_1 \wedge \tau_2 \le t\} = \{\tau_1 \le t\} \cup \{\tau_2 \le t\} \in \mathcal{N}_t$. And since $\{\tau_i \ge t\} = \{\tau_i < t\}^c \in \mathcal{N}_t$, $\{\tau_1 \vee \tau_2 \le t\} = \{\tau_1 \le t\} \cap \{\tau_2 \le t\} \in \mathcal{N}_t$.

    7.9. a)

Proof. By Theorem 7.3.3, $A$ restricted to $C_0^2(\mathbb{R})$ is $rx\frac{d}{dx} + \frac{\alpha^2 x^2}{2}\frac{d^2}{dx^2}$. For $f(x) = x^\gamma$, $Af$ can be calculated by definition. Indeed, $X_t = xe^{(r - \frac{\alpha^2}{2})t + \alpha B_t}$, and $E^x[f(X_t)] = x^\gamma e^{(r\gamma - \frac{\alpha^2\gamma}{2} + \frac{\alpha^2\gamma^2}{2})t}$. So

\[
\lim_{t \downarrow 0} \frac{E^x[f(X_t)] - f(x)}{t} = \left(r\gamma + \frac{\alpha^2}{2}\gamma(\gamma-1)\right)x^\gamma.
\]

So $f \in \mathcal{D}_A$ and $Af(x) = \left(r\gamma + \frac{\alpha^2}{2}\gamma(\gamma-1)\right)x^\gamma$.
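A finite-difference check of this formula (all parameter values below are illustrative; `r`, `a`, `g` stand for $r$, $\alpha$, $\gamma$):

```python
# Check Af(x) = (r*g + a^2/2 * g*(g-1)) * x^g for f(x) = x^g,
# where A = r x d/dx + (a^2/2) x^2 d^2/dx^2. Parameters are illustrative.
r, a, g, x, h = 0.3, 0.5, 1.7, 2.0, 1e-5

f = lambda y: y ** g
d1 = (f(x + h) - f(x - h)) / (2 * h)            # central first derivative
d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2  # central second derivative
Af = r * x * d1 + 0.5 * a ** 2 * x ** 2 * d2
closed_form = (r * g + 0.5 * a ** 2 * g * (g - 1)) * x ** g
print(Af, closed_form)  # agree to several decimal places
```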

    b)

Proof. We choose $\rho$ such that $0 < \rho < x < R$. We choose $f_0 \in C_0^2(\mathbb{R})$ such that $f_0 = f$ on $(\rho, R)$. Define $\tau_{(\rho,R)} = \inf\{t > 0 : X_t \notin (\rho, R)\}$. Then by Dynkin's formula, and the fact that $Af_0(x) = Af(x) = \gamma_1 x^{\gamma_1}(r + \frac{\alpha^2}{2}(\gamma_1 - 1)) = 0$ on $(\rho, R)$, we get

\[
E^x[f_0(X_{\tau_{(\rho,R)} \wedge k})] = f_0(x).
\]

The condition $r < \frac{\alpha^2}{2}$ implies $X_t \to 0$ a.s. as $t \to \infty$. So $\tau_{(\rho,R)} < \infty$ a.s. Let $k \to \infty$; we conclude by the bounded convergence theorem and the fact that $\tau_{(\rho,R)} < \infty$ a.s.

c)

Proof. We consider $\rho > 0$ such that $\rho < x < R$. $\tau_{(\rho,R)}$ is the first exit time of $X$ from $(\rho, R)$. Choose $f_0 \in C_0^2(\mathbb{R})$ such that $f_0 = f$ on $(\rho, R)$. By Dynkin's formula with $f(x) = \log x$ and the fact that $Af_0(x) = Af(x) = r - \frac{\alpha^2}{2}$ for $x \in (\rho, R)$, we get

\[
E^x[f_0(X_{\tau_{(\rho,R)} \wedge k})] = f_0(x) + \left(r - \frac{\alpha^2}{2}\right)E^x[\tau_{(\rho,R)} \wedge k].
\]

Since $r > \frac{\alpha^2}{2}$, $X_t \to \infty$ a.s. as $t \to \infty$. So $\tau_{(\rho,R)} < \infty$ a.s.

8.16 a)

Proof. Let $L_t = -\int_0^t \sum_{i=1}^n \frac{\partial h}{\partial x_i}(X_s)\,dB_s^i$. Then $L$ is a square-integrable martingale. Furthermore, $\langle L \rangle_T = \int_0^T |\nabla h(X_s)|^2\,ds$ is bounded, since $h \in C_0^1(\mathbb{R}^n)$. By Novikov's condition, $M_t = \exp\{L_t - \frac{1}{2}\langle L \rangle_t\}$ is a martingale. We define $\tilde{P}$ on $\mathcal{F}_T$ by $d\tilde{P} = M_T\,dP$. Then $X$, which satisfies $dX_t = \nabla h(X_t)\,dt + dB_t$, is a BM under $\tilde{P}$. So

\[
E^x[f(X_t)] = \tilde{E}^x\left[M_t^{-1} f(X_t)\right] = \tilde{E}^x\left[e^{\int_0^t \sum_{i=1}^n \frac{\partial h}{\partial x_i}(X_s)\,dX_s^i - \frac{1}{2}\int_0^t |\nabla h(X_s)|^2\,ds}\, f(X_t)\right] = E^x\left[e^{\int_0^t \sum_{i=1}^n \frac{\partial h}{\partial x_i}(B_s)\,dB_s^i - \frac{1}{2}\int_0^t |\nabla h(B_s)|^2\,ds}\, f(B_t)\right].
\]

Applying Itô's formula to $Z_t = h(B_t)$, we get

\[
h(B_t) - h(B_0) = \int_0^t \sum_{i=1}^n \frac{\partial h}{\partial x_i}(B_s)\,dB_s^i + \frac{1}{2}\int_0^t \sum_{i=1}^n \frac{\partial^2 h}{\partial x_i^2}(B_s)\,ds.
\]

So

\[
E^x[f(X_t)] = E^x\left[e^{h(B_t) - h(B_0)}\, e^{-\int_0^t V(B_s)\,ds}\, f(B_t)\right].
\]

    b)

Proof. If $Y$ is the process obtained by killing $B_t$ at a certain rate $V$, then it has transition operator

\[
T_t^Y(g, x) = E^x\left[e^{-\int_0^t V(B_s)\,ds}\, g(B_t)\right].
\]

So the equality in part a) can be written as

\[
T_t^X(f, x) = e^{-h(x)}\, T_t^Y(f e^h, x).
\]

    9.11 a)

Proof. First assume $F$ is closed. Let $\{\phi_n\}_{n \ge 1}$ be a sequence of bounded continuous functions defined on $\partial D$ such that $\phi_n \to \mathbf{1}_F$ boundedly. This is possible due to the Tietze extension theorem. Let $h_n(x) = E^x[\phi_n(B_\tau)]$. Then by Theorem 9.2.14, $h_n \in C(\bar{D})$ and $\Delta h_n = 0$ in $D$. So by the Poisson formula, for $z = re^{i\theta} \in D$,

\[
h_n(z) = \frac{1}{2\pi}\int_0^{2\pi} P_r(t - \theta)\, h_n(e^{it})\,dt.
\]


Let $n \to \infty$; then $h_n(z) \to E^z[\mathbf{1}_F(B_\tau)] = P^z(B_\tau \in F)$ by the bounded convergence theorem, and the RHS converges to $\frac{1}{2\pi}\int_0^{2\pi} P_r(t - \theta)\,\mathbf{1}_F(e^{it})\,dt$ by the dominated convergence theorem. Hence

\[
P^z(B_\tau \in F) = \frac{1}{2\pi}\int_0^{2\pi} P_r(t - \theta)\,\mathbf{1}_F(e^{it})\,dt.
\]

Then by the $\pi$-$\lambda$ theorem and the fact that the Borel $\sigma$-field is generated by closed sets, we conclude this formula holds for any Borel subset $F$ of $\partial D$.

    b)

Proof. Let $B$ be a BM starting at 0. By Example 8.5.9, $\phi(B_t)$ is, after a change of time scale $\alpha(t)$ and under the original probability measure $P$, a BM in the plane. For $F \in \mathcal{B}(\mathbb{R})$,

$P(B$ exits $D$ from $\phi^{-1}(F))$
$= P(\phi(B)$ exits the upper half plane from $F)$
$= P((\phi(B))_{\alpha(t)}$ exits the upper half plane from $F)$
$=$ the probability that a BM starting at $i$ exits from $F$
$= \mu(F)$.

So by part a), $\mu(F) = \frac{1}{2\pi}\int_0^{2\pi} \mathbf{1}_{\phi^{-1}(F)}(e^{it})\,dt = \frac{1}{2\pi}\int_0^{2\pi} \mathbf{1}_F(\phi(e^{it}))\,dt$. This implies

\[
\int_{\mathbb{R}} f(\xi)\,d\mu(\xi) = \frac{1}{2\pi}\int_0^{2\pi} f(\phi(e^{it}))\,dt = \frac{1}{2\pi i}\oint_{\partial D} \frac{f(\phi(z))}{z}\,dz.
\]

    c)

Proof. By the change-of-variable formula,

\[
\int_{\mathbb{R}} f(\xi)\,d\mu(\xi) = \frac{1}{\pi}\int_{\partial H} \frac{f(\xi)}{|\xi - i|^2}\,d\xi = \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{f(x)}{x^2 + 1}\,dx.
\]
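As a sanity check (not in the text), the measure $\frac{dx}{\pi(x^2+1)}$ appearing here is the standard Cauchy distribution; a midpoint-rule integration confirms it has total mass 1 and puts mass $1/2$ on $[-1, 1]$:

```python
import math

def cauchy_mass(lo, hi, n=100_000):
    """Midpoint-rule integral of dx / (pi * (1 + x^2)) over [lo, hi]."""
    h = (hi - lo) / n
    return sum(h / (math.pi * (1.0 + (lo + (k + 0.5) * h) ** 2)) for k in range(n))

total = cauchy_mass(-1000.0, 1000.0)  # nearly 1: the tails beyond |x| = 1000 are tiny
half = cauchy_mass(-1.0, 1.0)         # exactly 1/2 for the Cauchy distribution
print(total, half)
```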

    d)

Proof. Let $g(z) = u + vz$; then $g$ is a conformal mapping that maps $i$ to $u + vi$ and keeps the upper half plane invariant. Using the harmonic measure on the $x$-axis of a BM starting from $i$, and arguing as in parts a)-c), we can get the harmonic measure on the $x$-axis of a BM starting from $u + iv$.

    12.1 a)


Proof. Let $\theta$ be an arbitrage for the market $\{\bar{X}_t\}_{t \in [0,T]}$. Then for the market $\{X_t\}_{t \in [0,T]}$:
(1) $\theta$ is self-financing, i.e. $dV_t^\theta = \theta_t\,dX_t$. This is (12.1.14).
(2) $\theta$ is admissible. This is clear from the fact $\bar{V}_t^\theta = e^{-\int_0^t \rho_s\,ds} V_t^\theta$ and $\rho$ being bounded.
(3) $\theta$ is an arbitrage. This is clear from the fact that $\bar{V}_t^\theta > 0$ if and only if $V_t^\theta > 0$.

So $\{X_t\}_{t \in [0,T]}$ has an arbitrage if $\{\bar{X}_t\}_{t \in [0,T]}$ has one. Conversely, if we replace $\rho$ with $-\rho$, we can conclude that $\bar{X}$ has an arbitrage from the assumption that $X$ has an arbitrage.

    12.2

Proof. By $V_t = \sum_{i=0}^n \theta_i X_i(t)$, we have $dV_t = \theta\,dX_t$. So $\theta$ is self-financing.

    12.6 (e)

Proof. Arbitrage exists, and one hedging strategy could be $\theta = \left(0,\ B_1 + B_2,\ B_1 - B_2 + \frac{1 - 3B_1 + B_2}{5},\ \frac{1 - 3B_1 + B_2}{5}\right)$. The final value would then become $B_1(T)^2 + B_2(T)^2$.

    12.10

Proof. Because we want to represent the contingent claim in terms of the original BM $B$, the measure $Q$ is the same as $P$. Solving the SDE $dX_t = \mu X_t\,dt + \sigma X_t\,dB_t$ gives us $X_t = X_0 e^{(\mu - \frac{\sigma^2}{2})t + \sigma B_t}$. So

\[
E^y[h(X_{T-t})] = E^y[X_{T-t}] = y\, e^{(\mu - \frac{\sigma^2}{2})(T-t)}\, e^{\frac{\sigma^2}{2}(T-t)} = y\, e^{\mu(T-t)}.
\]

Hence the claim's value is $e^{\mu(T-t)} X_t = X_0\, e^{\mu T - \frac{\sigma^2}{2}t + \sigma B_t}$.
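The middle equality is just $E[X_t] = X_0 e^{\mu t}$ for geometric Brownian motion, which a short Monte Carlo run confirms (parameters below are illustrative, not from the text):

```python
import math
import random

random.seed(3)

# Illustrative parameters, not from the text.
x0, mu, sigma, t, n = 1.0, 0.1, 0.3, 1.0, 200_000
est = sum(
    x0 * math.exp((mu - 0.5 * sigma ** 2) * t + sigma * random.gauss(0.0, math.sqrt(t)))
    for _ in range(n)
) / n
print(est, x0 * math.exp(mu * t))  # both close to exp(0.1)
```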

    12.11 a)

Proof. According to (12.2.12), $\sigma(t, \omega) = \sigma$, $\mu(t, \omega) = m - X_1(t)$. So $u(t, \omega) = \sigma^{-1}(m - X_1(t) - \rho X_1(t))$. By (12.2.2), we should define $Q$ by setting

\[
dQ|_{\mathcal{F}_t} = e^{-\int_0^t u_s\,dB_s - \frac{1}{2}\int_0^t u_s^2\,ds}\,dP.
\]

Under $Q$, $\tilde{B}_t = B_t + \sigma^{-1}\int_0^t (m - X_1(s) - \rho X_1(s))\,ds$ is a BM. Then under $Q$,

\[
dX_1(t) = \sigma\,d\tilde{B}_t + \rho X_1(t)\,dt.
\]

So $X_1(T)e^{-\rho T} = X_1(0) + \int_0^T \sigma e^{-\rho t}\,d\tilde{B}_t$ and $E_Q[\xi(T)F] = E_Q[e^{-\rho T} X_1(T)] = x_1$.

    b)

Proof. We use Theorem 12.3.5. From part a), the diffusion coefficient of $e^{-\rho t}X_1(t)$ is $\sigma e^{-\rho t}$. We therefore should choose $\phi_1(t)$ such that $\phi_1(t)\,\sigma e^{-\rho t} = \sigma e^{-\rho t}$. So $\phi_1 \equiv 1$, and $\phi_0$ can then be chosen as 0.


2 Extra Problems

    EP1-1.

Proof. According to the Borel-Cantelli lemma, the problem is reduced to proving that, $\forall \epsilon > 0$,

\[
\sum_{n=1}^\infty P(|S_n| > \epsilon) \le \sum_{n=1}^\infty \frac{E[S_n^4]}{\epsilon^4} \le \sum_{n=1}^\infty \frac{C\,E[X_1^4]\,n^2}{\epsilon^4} < \infty.
\]

By the integration-by-parts formula, we can easily calculate that the $2k$-th moment of $N(0, \sigma^2)$ is of order $\sigma^{2k}$. So the order of $E[X_1^4]$ is $n^{-4}$. This suffices for the Borel-Cantelli lemma to apply.

    EP1-2.

Proof. We first see that the second part of the problem is not hard, since $\int_0^t Y_s\,dB_s$ is a martingale with mean 0. For the first part, we do the following construction. We define $Y_t = 1$ for $t \in (0, 1/n]$, and for $t \in (j/n, (j+1)/n]$ ($1 \le j \le n-1$),

\[
Y_t := C_j\, \mathbf{1}_{\{B_{(i+1)/n} - B_{i/n} \le 0,\ 0 \le i \le j-1\}},
\]

where each $C_j$ is a constant to be determined.

Regarding this as a betting strategy, the intuition of $Y$ is the following. We start with one dollar. If $B_{1/n} - B_0 > 0$, we stop the game and gain $B_{1/n} - B_0$ dollars. Otherwise, we bet $C_1$ dollars on the second run. If $B_{2/n} - B_{1/n} > 0$, we then stop the game and gain $C_1(B_{2/n} - B_{1/n}) + (B_{1/n} - B_0)$ dollars (if this sum is negative, it means we actually lose money, although we win the second bet). Otherwise, we bet $C_2$ dollars on the third run, etc. So in the end our total gain/loss of this betting is

\[
\int_0^1 Y_s\,dB_s = (B_{1/n} - B_0) + \mathbf{1}_{\{B_{1/n} - B_0 \le 0\}} C_1 (B_{2/n} - B_{1/n}) + \cdots + \mathbf{1}_{\{B_{1/n} - B_0 \le 0,\ \cdots,\ B_{(n-1)/n} - B_{(n-2)/n} \le 0\}} C_{n-1}(B_1 - B_{(n-1)/n}).
\]

We now look at the conditions under which $\int_0^1 Y_s\,dB_s \le 0$. There are several possibilities:

(1) $B_{1/n} - B_0 \le 0$, $B_{2/n} - B_{1/n} > 0$, but $C_1(B_{2/n} - B_{1/n}) < |B_{1/n} - B_0|$;
(2) $B_{1/n} - B_0 \le 0$, $B_{2/n} - B_{1/n} \le 0$, $B_{3/n} - B_{2/n} > 0$, but $C_2(B_{3/n} - B_{2/n})$ does not cover the accumulated loss; etc.

The first event has probability at most $P(0 < Y < X/C_1)$, where $X$ and $Y$ are i.i.d. $N(0, 1/n)$ random variables. We can choose $C_1$ large enough so that this probability is smaller than $1/2^n$. The second event has probability smaller than $P(0 < X < Y/C_2)$, where $X$ and $Y$ are independent Gaussian random variables with mean 0 and variances $1/n$ and $(C_1^2 + 1)/n$, respectively; we can choose $C_2$ large enough so that this probability is smaller than $1/2^n$. We continue this process until we get all the $C_j$'s. Then the probability of $\int_0^1 Y_t\,dB_t \le 0$ is at most $n/2^n$. For $n$ large enough, we can have $P(\int_0^1 Y_t\,dB_t > 0) > 1 - \epsilon$ for given $\epsilon$. The process $Y$ is obviously bounded.

Comments: Different from flipping a coin, where the gain/loss is one dollar, we now have random gains/losses $(B_{j/n} - B_{(j-1)/n})$. So there is no sense in constantly checking our loss and making a new strategy. Put into real-world experience: when times are tough and the outcome of life is uncertain, don't regret your loss and estimate how much more you should invest to recover that loss. Just keep trying as hard as you can. When the opportunity comes, you may just get back everything you deserve.

    EP2-1.

Proof. This is another application of the fact hinted at in Problem EP1-1. $E[Y_n] = 0$ is obvious. And

\[
E\left[(B^1_{j/n} - B^1_{(j-1)/n})^4 (B^2_{j/n} - B^2_{(j-1)/n})^4\right] = \left(3\, E\left[(B^1_{j/n} - B^1_{(j-1)/n})^2\right]^2\right)^2 = \frac{9}{n^4} := a_n.
\]

We set $X_j = [B^1_{j/n} - B^1_{(j-1)/n}][B^2_{j/n} - B^2_{(j-1)/n}]/a_n^{1/4}$, and apply the hint in EP1-1:

\[
E[Y_n^4] = a_n\, E[(X_1 + \cdots + X_n)^4] \le \frac{9}{n^4}\,c n^2 = \frac{9c}{n^2}
\]

for some constant $c$. This implies $Y_n \to 0$ with probability one, by the Borel-Cantelli lemma.

Comments: The following simple proposition is often useful in calculation. If $X$ is a centered Gaussian random variable, then $E[X^4] = 3E[X^2]^2$. More generally, we can show $E[X^{2k}] = C_k E[X^2]^k$ for some constant $C_k$. These results can be easily proved by the integration-by-parts formula. As a consequence, $E[B_t^{2k}] = C_k t^k$ for some constant $C_k$.
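The proposition in the comments is easy to check by simulation (the standard deviation 2 below is an arbitrary choice):

```python
import random

random.seed(4)

s, n = 2.0, 200_000  # s (the standard deviation) is an arbitrary choice
xs = [random.gauss(0.0, s) for _ in range(n)]
m2 = sum(x ** 2 for x in xs) / n
m4 = sum(x ** 4 for x in xs) / n
print(m4, 3 * m2 ** 2)  # both close to 3 * s^4 = 48
```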

    EP3-1.

Proof. A short proof: for part (a), it suffices to set

\[
Y_{n+1} = E[R_{n+1} - R_n \mid X_1, \cdots, X_n, X_{n+1} = 1]
\]

(What does this really mean, rigorously?). For part (b), the answer is NO, and $R_n = \sum_{j=1}^n X_j^3$ gives the counterexample.

A long proof: we show the analysis behind the above proof and point out that if $\{X_n\}_n$ is i.i.d. and symmetrically distributed, then Bernoulli-type random variables are the only ones that have the martingale representation property.

By adaptedness, $R_{n+1} - R_n$ can be represented as $f_{n+1}(X_1, \cdots, X_{n+1})$ for some Borel function $f_{n+1} \in \mathcal{B}(\mathbb{R}^{n+1})$. The martingale property and $\{X_n\}_n$ being i.i.d. Bernoulli random variables imply

\[
f_{n+1}(X_1, \cdots, X_n, -1) = -f_{n+1}(X_1, \cdots, X_n, 1).
\]

This inspires us to set

\[
Y_{n+1} = f_{n+1}(X_1, \cdots, X_n, 1) = E[R_{n+1} - R_n \mid X_1, \cdots, X_n, X_{n+1} = 1].
\]

For part b), we just assume $\{X_n\}_n$ is i.i.d. and symmetrically distributed. If $(R_n)_n$ has the martingale representation property, then

\[
f_{n+1}(X_1, \cdots, X_{n+1})/X_{n+1}
\]

must be a function of $X_1, \cdots, X_n$. In particular, for $n = 0$ and $f_1(x) = x^3$, we would need $X_1^2$ to be constant. So Bernoulli-type random variables are the only ones that have the martingale representation property.

    EP5-1.

Proof. $A = rx\frac{d}{dx} + \frac{1}{2}x^2\frac{d^2}{dx^2}$, so we can choose $f(x) = x^{1-2r}$ for $r \ne \frac{1}{2}$ and $f(x) = \log x$ for $r = \frac{1}{2}$.
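That these choices of $f$ are harmonic for $A$ can be verified by finite differences (a sketch with illustrative values of $r$ and $x$):

```python
import math

def Af(f, x, r, h=1e-5):
    """Finite-difference evaluation of (A f)(x) for A = r x d/dx + x^2/2 d^2/dx^2."""
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
    return r * x * d1 + 0.5 * x ** 2 * d2

r, x = 0.8, 1.5  # illustrative values
a1 = Af(lambda y: y ** (1 - 2 * r), x, r)  # f(x) = x^(1-2r), r != 1/2
a2 = Af(math.log, x, 0.5)                  # f(x) = log x,    r  = 1/2
print(a1, a2)  # both close to 0
```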

    EP6-1. (a)

Proof. Assume the claim is false; then there exist $t_0 > 0$, $\epsilon > 0$ and a sequence $\{t_k\}_{k \ge 1}$ such that $t_k \to t_0$ and

\[
\left|\frac{f(t_k) - f(t_0)}{t_k - t_0} - f'_+(t_0)\right| > \epsilon.
\]

WLOG, we assume $f'_+(t_0) = 0$; otherwise we consider $f(t) - t f'_+(t_0)$. Because $f'_+$ is continuous, there exists $\delta > 0$ such that $\forall t \in (t_0 - \delta, t_0 + \delta)$,

\[
|f'_+(t) - f'_+(t_0)| = |f'_+(t)| < \frac{\epsilon}{2}.
\]

By considering $f$ or $-f$ and taking a subsequence, we can WLOG assume that all the $t_k \in (t_0 - \delta, t_0 + \delta)$, $t_k < t_0$, and

\[
\frac{f(t_k) - f(t_0)}{t_k - t_0} - f'_+(t_0) > \epsilon.
\]

Consider $h(t) = \epsilon(t - t_0) - [f(t) - f(t_0)] = (t - t_0)\left[\epsilon - \frac{f(t) - f(t_0)}{t - t_0}\right]$. Then $h(t_0) = 0$, $h'_+(t) = \epsilon - f'_+(t) > \epsilon/2$ for $t \in (t_0 - \delta, t_0 + \delta)$, and $h(t_k) > 0$. On one hand,

\[
\int_{t_k}^{t_0} h'_+(t)\,dt > \frac{\epsilon}{2}(t_0 - t_k) > 0.
\]

On the other hand, if $h$ is monotone increasing, then

\[
\int_{t_k}^{t_0} h'_+(t)\,dt \le h(t_0) - h(t_k) = 0 - h(t_k) < 0.
\]

Contradiction. So it suffices to show $h$ is monotone increasing on $(t_0 - \delta, t_0 + \delta)$. This is easily proved by showing $h$ cannot attain a local maximum in the interior of $(t_0 - \delta, t_0 + \delta)$.

    (b)

Proof. $f(t) = |t - 1|$.

    (c)

Proof. $f(t) = \mathbf{1}_{\{t \ge 0\}}$.

    EP6-2. (a)

    Proof. Since A is bounded,

For uniqueness, by part a), $f(S_{n \wedge \tau})$ is a martingale, so using the optional stopping theorem, we have

\[
f(x) = E^x[f(S_0)] = E^x[f(S_{n \wedge \tau})].
\]

Because $f$ is bounded, we can use the bounded convergence theorem and let $n \to \infty$:

\[
f(x) = E^x[f(S_\tau)] = E^x[F(S_\tau)].
\]

    (c)

Proof. Since $d \le 2$, the random walk is recurrent. So $\tau < \infty$ a.s. even if $A$ is unbounded. The existence argument is exactly the same as in part b). For uniqueness, we still have $f(x) = E^x[f(S_{n \wedge \tau})]$. Since $f$ is bounded, we can let $n \to \infty$ and get $f(x) = E^x[F(S_\tau)]$.

    (d)

Proof. Let $d = 1$ and $A = \{1, 2, 3, \ldots\}$. Then $\partial A = \{0\}$. If $F(0) = 0$, then both $f(x) = 0$ and $f(x) = x$ are solutions of the discrete Dirichlet problem. We don't have uniqueness.

    (e)

Proof. $A = \mathbb{Z}^3 \setminus \{0\}$, $\partial A = \{0\}$, and $F(0) = 0$. Let $T_0 = \inf\{n \ge 0 : S_n = 0\}$. Let $c \in \mathbb{R}$ and $f(x) = c\,P^x(T_0 = \infty)$. Then $f(0) = 0$ since $T_0 = 0$ under $P^0$. $f$ is clearly bounded. To see $f$ is harmonic, the key is to show $P^x(T_0 = \infty \mid S_1 = y) = P^y(T_0 = \infty)$. This is due to the Markov property: note $T_0 = 1 + T_0 \circ \theta_1$. Since $c$ is arbitrary, we have more than one bounded solution.

    EP6-3.

Proof.

\[
E^x[K_n - K_{n-1} \mid \mathcal{F}_{n-1}] = E^x[f(S_n) - f(S_{n-1}) \mid \mathcal{F}_{n-1}] - \Delta f(S_{n-1}) = E^{S_{n-1}}[f(S_1)] - f(S_{n-1}) - \Delta f(S_{n-1}) = \Delta f(S_{n-1}) - \Delta f(S_{n-1}) = 0.
\]

Applying Dynkin's formula is straightforward.

    EP6-4. (a)

Proof. By induction, it suffices to show that if $|y - x| = 1$, then $E^y[T_A] < \infty$. We note $T_A = 1 + T_A \circ \theta_1$ for any sample path starting in $A$. So

\[
E^x[T_A \mathbf{1}_{\{S_1 = y\}}] = E^x[T_A \mid S_1 = y]\, P^x(S_1 = y) = (E^y[T_A] + 1)\, P^x(S_1 = y).
\]

Since $E^x[T_A \mathbf{1}_{\{S_1 = y\}}] \le E^x[T_A] < \infty$ and $P^x(S_1 = y) > 0$, we get $E^y[T_A] < \infty$.

(b)

Proof. If $y \in \partial A$, then under $P^y$, $T_A = 0$. So $f(y) = 0$. If $y \in A$,

\[
\Delta f(y) = E^y[f(S_1)] - f(y) = E^y[E^{S_1}[T_A]] - f(y) = E^y[E^y[T_A - 1 \mid S_1]] - f(y) = E^y[T_A] - 1 - f(y) = -1.
\]

To see uniqueness, use the martingale in EP6-3: for any solution $f$, we get

\[
E^x[f(S_{T_A \wedge K})] = f(x) + E^x\left[\sum_{j=0}^{T_A \wedge K - 1} \Delta f(S_j)\right] = f(x) - E^x[T_A \wedge K].
\]

Let $K \to \infty$; we get $0 = f(x) - E^x[T_A]$.

    EP7-1. a)

Proof. Since $D$ is bounded, there exists $R > 0$ such that $D \subset B(0, R)$. Let $\tau_R := \inf\{t > 0 : |B_t - B_0| \ge R\}$; then $\tau \le \tau_R$. If $q \le M$, then

\[
e(x) = E^x\left[e^{\int_0^\tau q(B_s)\,ds}\right] \le E^x[e^{M\tau_R}] = E^x\left[\int_0^{\tau_R} M e^{Mt}\,dt + 1\right] = 1 + M\int_0^\infty P^x(\tau_R > t)\,e^{Mt}\,dt.
\]

For any $n \in \mathbb{N}$, $P^x(\tau_R > n) \le P^x(\cap_{k=1}^n \{|B_k - B_{k-1}| < 2R\}) = a^n$, where $a = P^x(|B_1 - B_0| < 2R) < 1$. So for $M$ small enough the integral above converges.

By scaling,

\[
P^x(\tau_\epsilon > t) = P^x\left(\sup_{s \le t}|B_s - B_0| < \epsilon\right) = P^0\left(\sup_{s \le t}|B_{s/\epsilon^2}| < 1\right) = P^x(\tau_1 > t/\epsilon^2).
\]

So

\[
e(x) = 1 + \int_0^\infty P^x(\tau_1 > u)\,(M\epsilon^2)\,e^{M\epsilon^2 u}\,du = E^x[e^{M\epsilon^2 \tau_1}].
\]

For $\epsilon$ small enough, $M\epsilon^2$ will be so small that, by what we showed in the proof of part a), $E^x[e^{M\epsilon^2 \tau_1}]$ will be finite. Obviously, $\epsilon$ depends on $M$ and $D$ only, hence on $q$ and $D$ only.

    d)

Proof. Cf. Rick Durrett's book, Stochastic Calculus: A Practical Introduction, pages 158-160.


b)

Proof. From part d), it suffices to show that for a given $x$ there is a $K = K(D, x) > 0$ such that $e(x) = \infty$ when $q \equiv K$. Choose $r > 0$ such that $B(x, r) \subset D$.

Now we assume $q \equiv K > 0$, where $K$ is to be determined. We have

\[
e(x) = E^x[e^{K\tau}] \ge E^x[e^{K\tau_r}],
\]

where $\tau_r := \inf\{t > 0 : |B_t - B_0| \ge r\}$. Similar to part a), we have

\[
E^x[e^{K\tau_r}] \ge 1 + \sum_{n=1}^\infty P^x(\tau_r \ge n)\,e^{Kn}(1 - e^{-K}).
\]

So it suffices to show there exists $\delta > 0$ such that $P^x(\tau_r \ge n) \ge \delta^n$; taking $e^K > \delta^{-1}$ then makes the series diverge. Note

\[
P^x(\tau_r > n) = P^x\left(\max_{t \le n}|B_t - B_0| < r\right) \ge P^x\left(\max_{t \le n}|B_t^i - B_0^i| < C(d)\,r,\ \forall i \le d\right),
\]

where $B^i$ is the $i$-th coordinate of $B$ and $C(d)$ is a constant depending on $d$. Set $a = C(d)r$; then by independence,

\[
P^x(\tau_r > n) \ge P^0\left(\max_{t \le n}|W_t| < a\right)^d.
\]

Here $W$ is a standard one-dimensional BM. Let

\[
\delta_0 = \inf_{|w| \le a/2} P^w\left(\max_{0 \le t \le 1}|W_t| < a,\ |W_1| < \tfrac{a}{2}\right) > 0;
\]

then we have

\[
P^0\left(\max_{t \le n}|W_t| < a\right) \ge P^0\left(\cap_{k=1}^n\left\{\max_{k-1 \le t \le k}|W_t| < a,\ |W_{k-1}| < \tfrac{a}{2},\ |W_k| < \tfrac{a}{2}\right\}\right) \ge \delta_0^n.
\]

Let $\tau_b = \inf\{t > 0 : B_t \notin (\frac{1}{2}, b)\}$. Then $Q^1(T_{1/2} = \infty) = \lim_{b \to \infty} Q^1(B_{\tau_b} = b)$. By the results in EP5-1 and Problem 7.18 in Øksendal's book, we have

(i) If $\gamma > 1/2$,
\[
\lim_{b \to \infty} Q^1(B_{\tau_b} = b) = \lim_{b \to \infty} \frac{1 - (\frac{1}{2})^{1-2\gamma}}{b^{1-2\gamma} - (\frac{1}{2})^{1-2\gamma}} = 1 - \left(\tfrac{1}{2}\right)^{2\gamma - 1} > 0.
\]
So in this case, $Q^1(T_{1/2} = \infty) > 0$.

(ii) If $\gamma < 1/2$,
\[
\lim_{b \to \infty} Q^1(B_{\tau_b} = b) = \lim_{b \to \infty} \frac{1 - (\frac{1}{2})^{1-2\gamma}}{b^{1-2\gamma} - (\frac{1}{2})^{1-2\gamma}} = 0.
\]
So in this case, $Q^1(T_{1/2} = \infty) = 0$.


(iii) If $\gamma = 1/2$,
\[
\lim_{b \to \infty} Q^1(B_{\tau_b} = b) = \lim_{b \to \infty} \frac{0 - \log\frac{1}{2}}{\log b - \log\frac{1}{2}} = 0.
\]
So in this case, $Q^1(T_{1/2} = \infty) = 0$.

    EP9-1. a)

    Proof. Fix z D, consider A = { D : D(z, ) < }. Then A is clearly open. Weshow A is also closed. Indeed, if k A and k D, then for k sufficiently large,|k | < 12dist(, D). So k and are adjacent. By definition, D(, z) < , i.e. A.

    Since D is connected, and A is both closed and open, we conclude A = D. By thearbitrariness of z, D(z, ) dist(z, D), then for close to z, dist(, D) is also close todist(z, D), and hence < dist(xk1, D). Choose such that |z | < 12dist(xk1, D)|z xk1|, we then have

    | xk1| |z |+ |z xk1| 0, such that for adjacentz, D, h(z) ch(). Indeed, let r = 14 min{dist(z, D),dist(, D)}, then by mean-value property, y B(, r), we have B(y, r) B(, 2r), so

    h() =

    B(,2r)

    h(x)dx

    V (B(, 2r))B(y,r)

    h(x)dx

    V (B(, 2r))=

    V (B(y, r))V (B(, 2r))

    h(y) =h(y)2d

    By using a sequence of small balls connecting and z, we are done.

    d)

Proof. Since $K$ is compact and $\{U_1(x)\}_{x \in U}$ is an open covering of $K$, we can find a finite sub-covering $\{U_{n_i}(x_i)\}_{i=1}^N$ of $K$. This implies $\forall z, \zeta \in K$, $D(z, \zeta) \le N$. By the result in part c), we're done.

    EP9-2. a)

Proof. We first have the following observation. Consider circles centered at 0 with radii $r$ and $2r$, respectively. Let $B$ be a BM on the plane and $\tau_{2r} = \inf\{t > 0 : |B_t| = 2r\}$. For $x \in \partial B(0, r)$, $P^x([B_0, B_{\tau_{2r}}]$ doesn't loop around $0)$ is the same for different $x$'s on $\partial B(0, r)$, by the rotational invariance of BM. $\forall \lambda > 0$, we define $B^\lambda_t = B_{\lambda t}$ and $\tau^\lambda_{2r} = \inf\{t > 0 : |B^\lambda_t| = 2r\}$. Since $B$ and $B^\lambda$ have the same trajectories,

$P^x([B_0, B_{\tau_{2r}}]$ doesn't loop around $0)$
$= P([B_0, B_{\tau_{2r}}] + x$ doesn't loop around $0)$
$= P([B^\lambda_0, B^\lambda_{\tau^\lambda_{2r}}] + x$ doesn't loop around $0)$
$= P(\frac{1}{\sqrt{\lambda}}[B^\lambda_0, B^\lambda_{\tau^\lambda_{2r}}] + \frac{x}{\sqrt{\lambda}}$ doesn't loop around $0)$.

Define $W_t = \frac{1}{\sqrt{\lambda}} B_{\lambda t}$; then $W$ is a BM under $P$. If we set $\sigma = \inf\{t > 0 : |W_t| = \frac{2r}{\sqrt{\lambda}}\}$, then $\sigma = \tau^\lambda_{2r}$. So

$P(\frac{1}{\sqrt{\lambda}}[B^\lambda_0, B^\lambda_{\tau^\lambda_{2r}}] + \frac{x}{\sqrt{\lambda}}$ doesn't loop around $0)$
$= P([W_0, W_\sigma] + \frac{x}{\sqrt{\lambda}}$ doesn't loop around $0)$
$= P^{x/\sqrt{\lambda}}([W_0, W_\sigma]$ doesn't loop around $0)$.

Note $\frac{x}{\sqrt{\lambda}} \in \partial B(0, \frac{r}{\sqrt{\lambda}})$; we conclude that for different $r$'s, the probability that a BM starting from $\partial B(0, r)$ exits $B(0, 2r)$ without looping around 0 is the same.

Now we assume $2^{-(n+1)} \le |x| < 2^{-n}$, and let $\sigma_j = \inf\{t > 0 : |B_t| = 2^{j-n-1}\}$. Then for $E_j = \{[B_{\sigma_{j-1}}, B_{\sigma_j}]$ doesn't loop around $0\}$, we have $E \subset \cap_{j=1}^n E_j$. From what we observed above, $P^{B_{\sigma_{j-1}}}([B_0, B_{\sigma_j}]$ doesn't loop around $0)$ is a constant, say $\rho$. Using the strong Markov property and induction, we have

\[
P^x\left(\cap_{j=1}^n E_j\right) = P^x\left(\cap_{j=2}^n E_j;\ P^x(E_1 \mid \mathcal{F}_{\sigma_1})\right) = \rho\, P^x\left(\cap_{j=2}^n E_j\right) = \cdots = \rho^n = 2^{n \log_2 \rho}.
\]


Set $-\log_2 \rho = \alpha$; then $P^x(E) \le 2^{-\alpha n} = 2^\alpha \cdot 2^{-\alpha(n+1)} \le 2^\alpha |x|^\alpha$. Clearly $\rho \in (0, 1)$, so $\alpha \in (0, \infty)$.

The above discussion relies on the assumption $|x| < 1/2$. However, when $1/2 \le |x| < 1$, the desired inequality is trivial: in this case $2^\alpha |x|^\alpha \ge 1$.

    b)

Proof. $\forall x \in \partial D$, WLOG we assume $x = 0$. $\forall \epsilon > 0$, let $\tilde{B}_t = B_{\epsilon^2 t}/\epsilon$, $\sigma = \inf\{t > 0 : |\tilde{B}_t| = 1\}$ and $\sigma_\epsilon := \inf\{t > 0 : |B_t| = \epsilon\}$; then $\sigma_\epsilon = \epsilon^2 \sigma$. Hence $P^0([B_0, B_{\sigma_\epsilon}]$ loops around $0) = P^0([\tilde{B}_0, \tilde{B}_\sigma]$ loops around $0)$, and by part a) this probability is 1. So

\[
P^0(B \text{ loops around } 0 \text{ before exiting } B(0, \epsilon)) = 1.
\]

This means $P(\tau_D < \epsilon) = 1$ for every $\epsilon > 0$. This is equivalent to $x$ being regular.

    EP9-3. a)

Proof. We first establish a derivative estimate for harmonic functions. Let $h$ be harmonic in $U$. Then $\frac{\partial h}{\partial z_i}$ is also harmonic. By the mean-value property and the integration-by-parts formula, $\forall z_0 \in U$ and $r > 0$ such that $B(z_0, r) \subset U$, we have

\[
\left|\frac{\partial h}{\partial z_i}(z_0)\right| = \frac{\left|\int_{B(z_0, r/2)} \frac{\partial h}{\partial z_i}\,dz\right|}{V(B(z_0, r/2))} = \frac{\left|\int_{\partial B(z_0, r/2)} h\,\nu_i\,dS\right|}{V(B(z_0, r/2))} \le \frac{2d}{r}\,\|h\|_{L^\infty(B(z_0, r/2))}.
\]

Now fix $K$. There exists $\delta > 0$ such that when $K$ is enlarged by a distance of $\delta$, the enlarged set is contained in the interior of a compact subset $K'$ of $U$. Furthermore, if $\delta$ is small enough, then $\forall z, \zeta \in K$ with $|z - \zeta| < \delta$, we have $[z, \zeta] \subset B(\zeta, \delta) \subset K'$. Denote $\sup_n \sup_{z \in K'} |h_n(z)|$ by $C$; then by the above derivative estimate, the $h_n$ are uniformly Lipschitz on $K$, hence equicontinuous there.

\[
P^x(B_1 \ge 1;\ B_t > 0\ \forall t \in [0, 1]) = P^x(B_1 \ge 1) - P^x\left(\inf_{0 \le s \le 1} B_s \le 0,\ B_1 \ge 1\right).
\]

    Bs 0, B1 1)

    Let 0 be the first passage time of BM hitting 0, then by strong Markov property

    P x(infs1

    Bs 0, B1 1)= P x(0 1, P x(B1 1|F0))= P x(0 1, PB0 (Bu 1)|u=10)= P x(0 1, P 0(Bu 1)|u=10)= P x(0 1, P 0(Bu 1)|u=10)= P x(0 1, B1 1)= P x(B1 1)

    So

    P x(B1 1;Bt > 0,t [0, 1])= P x(B1 1) P x(B1 1)

    = 1+x1x

    ey2/2

    2pi

    dy

    2x e2

    2pi

    where the last inequality is due to x < 1.

    EP10-2.

Proof. Let $F(n) = P(E_{2^n})$ and let DLA be shorthand for "doesn't loop around"; then

\[
F(n+m) = P(E_{2^{n+m}}) = P([B_0, B_{T_{2^{n+m}}}] \text{ DLA } 0) \le P\left([B_0, B_{T_{2^n}}] \text{ DLA } 0;\ P([B_{T_{2^n}}, B_{T_{2^{n+m}}}] \text{ DLA } 0 \mid \mathcal{F}_{T_{2^n}})\right)
\]
\[
= P\left([B_0, B_{T_{2^n}}] \text{ DLA } 0;\ P^{B_{T_{2^n}}}([B_0, B_{T_{2^{n+m}}}] \text{ DLA } 0)\right).
\]

By the rotational invariance of BM, $P^x([B_0, B_{T_{2^{n+m}}}] \text{ DLA } 0)$ is a constant for $x \in \partial B(0, 2^n)$. By scaling, we have

\[
P^x([B_0, B_{T_{2^{n+m}}}] \text{ DLA } 0) = P^{x/2^n}([B_0, B_{T_{2^m}}] \text{ DLA } 0) = P(E_{2^m}) = F(m).
\]

So $F(n+m) \le F(n)F(m)$. By the properties of submultiplicative functions, $\lim_n \frac{\log F(n)}{n}$ exists; we denote $\lim_n \frac{\log F(n)}{n \log 2}$ by $-\alpha$. For $m$ large enough, we can find $n$ such that $2^n \le m < 2^{n+1}$; then $P(E_{2^n}) \ge P(E_m) \ge P(E_{2^{n+1}})$. So

\[
\frac{\log P(E_{2^n})}{\log 2^n} \cdot \frac{\log 2^n}{\log m} \ge \frac{\log P(E_m)}{\log m} \ge \frac{\log P(E_{2^{n+1}})}{\log 2^{n+1}} \cdot \frac{\log 2^{n+1}}{\log m}.
\]


Let $m \to \infty$; then $\log 2^n / \log m \to 1$, as seen from $\log 2^n \le \log m < \log 2 + \log 2^n$. So $\lim_m \frac{\log P(E_m)}{\log m}$ exists and equals $-\alpha$. To see $\alpha \in (0, 1]$, note $F(1) < 1$ and $F(n) \le F(1)^n$, so $\alpha > 0$. Furthermore, we note

\[
P^x([B_0, B_{T_n}] \text{ DLA } 0) \ge P^x(B^1 \text{ exits } (0, n) \text{ by hitting } n) = \frac{x^1}{n},
\]

where $B^1$ and $x^1$ are the first coordinates of $B$ and $x$. So $\log P(E_n)/\log n \ge -1$ in the limit. Hence $\alpha \le 1$.

    EP10-3. a)

Proof. We assume $f_0(k) = 1$, $\forall k$, and $p_{j,k} > 0$, $\forall j, k = 1, \cdots, N$. We let $P$ be the $N \times N$ matrix with $P_{jk} = p_{j,k}$. Then if we regard $f_n$ as a row vector, we have $f_n = f_{n-1}P$. Define $M_n = \max_{k \le N} f_n(k)$; then

\[
f_{n+m} = f_0 P^{n+m} = (f_0 P^m) P^n = f_m P^n \le M_m f_0 P^n = M_m f_n \le M_m M_n f_0.
\]

So $M_{n+m} \le M_n M_m$. By the properties of submultiplicative functions, $\lim_n \frac{\log M_n}{n}$ exists and equals $\inf_n \frac{\log M_n}{n}$. Meanwhile, $\epsilon := \min_{j,k \le N} p_{j,k} > 0$. So

\[
M_n \ge f_n(k) \ge \epsilon \sum_{j=1}^N f_{n-1}(j) \ge \epsilon M_{n-1}.
\]

By induction, $M_n \ge \epsilon^n$. Hence $\inf_n \frac{\log M_n}{n} \ge \log \epsilon > -\infty$. Let $\lambda = \inf_n \frac{\log M_n}{n}$; then $M_n \ge e^{\lambda n}$. We set $\rho = e^\lambda$; then $M_n \ge \rho^n$. Meanwhile, there exists a constant $C \in (0, \infty)$ such that, for $m_n = \min_{k \le N} f_n(k)$, $M_n \le C m_n$. Indeed, with $K = \max_{j,k} p_{j,k}$, we have $f_n(k) = \sum_j p_{j,k} f_{n-1}(j) \le K \sum_j f_{n-1}(j)$ and $f_n(k) \ge \epsilon \sum_j f_{n-1}(j)$, so $M_n \le \frac{K}{\epsilon} m_n$. Let $C = K\epsilon^{-1}$; then

\[
f_n(k) \ge m_n \ge \frac{M_n}{C} \ge \frac{\rho^n}{C}.
\]

Similarly, we can show $m_n$ is supermultiplicative, and a similar argument gives us the upper bound.
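The stabilization of the growth rate of $M_n$ can be seen numerically; the positive $2\times 2$ matrix below is an arbitrary illustrative example (not from the text):

```python
# For f_n = f_{n-1} P with P a positive matrix, M_n = max_k f_n(k) is
# submultiplicative, so the ratio M_{n+1} / M_n stabilizes.
P = [[0.5, 0.3],
     [0.2, 0.7]]

def step(f):
    """One application of f -> f P, with f regarded as a row vector."""
    return [sum(f[j] * P[j][k] for j in range(len(f))) for k in range(len(P[0]))]

f = [1.0, 1.0]
ratios = []
M_prev = max(f)
for _ in range(60):
    f = step(f)
    M = max(f)
    ratios.append(M / M_prev)
    M_prev = M
print(ratios[-1])  # converges to the largest eigenvalue of P (about 0.8646)
```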
