BOOK-Soner-Stochastic Optimal Control in Finance


Stochastic Optimal Control in Finance

    H. Mete Soner

Koç University, Istanbul, Turkey

    [email protected]


for my son, Mehmet Ali.


    Preface

These notes are the extended version of the Cattedra Galileiana lectures I gave in April 2003 at the Scuola Normale, Pisa. I am grateful to the Society of Amici della Scuola Normale for the funding, and to Professors Maurizio Pratelli, Marzia De Donno and Paolo Guasoni for organizing these lectures and for their hospitality.

In these notes, I give a very quick introduction to stochastic optimal control and the dynamic programming approach to control. This is done through several important examples that arise in mathematical finance and economics. The theory of viscosity solutions of Crandall and Lions is also demonstrated in one example. The choice of problems is driven by my own research and the desire to illustrate the use of dynamic programming and viscosity solutions. In particular, great emphasis is given to the problem of super-replication, as it provides an unusual application of these methods.

Of course, there are a number of other very important examples of optimal control problems arising in mathematical finance, such as passport options and American options. The omission of these examples, and of the different methods for solving them, does not reflect in any way on the importance of these problems and techniques.

Most of the original work presented here was obtained in collaboration with Professor Nizar Touzi of Paris. I would like to thank him for the fruitful collaboration, his support and his friendship.

    Oxford, March 2004.


Contents

1 Examples and Dynamic Programming
  1.1 Optimal Control
  1.2 Examples
      1.2.1 Deterministic minimal time problem
      1.2.2 Merton's optimal investment-consumption problem
      1.2.3 Finite time utility maximization
      1.2.4 Merton problem with transaction costs
      1.2.5 Super-replication with portfolio constraints
      1.2.6 Buyer's price and the no-arbitrage interval
      1.2.7 Super-replication with gamma constraints
  1.3 Dynamic Programming Principle
      1.3.1 Formal Proof of DPP
      1.3.2 Examples for the DPP
  1.4 Dynamic Programming Equation
      1.4.1 Formal Derivation of the DPE
      1.4.2 Infinite horizon
  1.5 Examples for the DPE
      1.5.1 Merton Problem
      1.5.2 Minimal Time Problem
      1.5.3 Transaction costs
      1.5.4 Super-replication with portfolio constraints
      1.5.5 Target Reachability Problem

2 Super-Replication under Portfolio Constraints
  2.1 Solution by Duality
      2.1.1 Black-Scholes Case
      2.1.2 General Case
  2.2 Direct Solution


      2.2.1 Viscosity Solutions
      2.2.2 Supersolution
      2.2.3 Subsolution
      2.2.4 Terminal Condition or Face-Lifting

3 Super-Replication with Gamma Constraints
  3.1 Pure Upper Bound Case
      3.1.1 Supersolution
      3.1.2 Subsolution
      3.1.3 Terminal Condition
  3.2 Double Stochastic Integrals
  3.3 General Gamma Constraint
  3.4 Examples
      3.4.1 European Call Option
      3.4.2 European Put Option
      3.4.3 European Digital Option
      3.4.4 Up and Out European Call Option
  3.5 Guess for the Dual Formulation


    Chapter 1

Examples and Dynamic Programming

In this chapter, we will outline the basic structure of an optimal control problem. Then, this structure will be explained through several examples, mainly from mathematical finance. The analysis and solutions of these problems will be provided later.

    1.1 Optimal Control.

In very general terms, an optimal control problem consists of the following elements:

State process Z(·). This process must capture the minimal necessary information needed to describe the problem. Typically, Z(t) ∈ ℝ^d is influenced by the control, and given the control process it has a Markovian structure. Usually its time dynamics is prescribed through an equation. We will consider only state processes whose dynamics are described through an ordinary or a stochastic differential equation. Dynamics given by partial differential equations yield infinite dimensional problems, and we will not consider those in these lecture notes.

Control process ν(·). We need to describe the control set U, in which ν(t) takes values for every t. Applications dictate the choice of U. In addition to this simple restriction ν(t) ∈ U, there could be additional constraints placed on the control process. For instance, in the


stochastic setting, we will require ν to be adapted to a certain filtration, to model the flow of information. Also, we may require the state process to take values in a certain region (i.e., a state constraint). This also places restrictions on the process ν(·).

Admissible controls A. A control process satisfying the constraints is called an admissible control. The set of all admissible controls will be denoted by A, and it may depend on the initial value of the state process.

Objective functional J(Z(·), ν(·)). This is the functional to be maximized (or minimized). In all of our applications, J has an additive structure; in other words, J is given as an integral over time.

Then, the goal is to minimize (or maximize) the objective functional J over all admissible controls. The minimum value plays an important role in our analysis:

    Value function:  v = inf_{ν ∈ A} J.

The main problem in optimal control is to find the minimizing control process. In our approach, we will exploit the Markovian structure of the problem and use dynamic programming. This approach yields a certain partial differential equation satisfied by the value function v. However, in solving this equation we also obtain the optimal control in feedback form. This means that the optimal process ν*(t) is given as ν̂(Z*(t)), where ν̂ is the optimal feedback control, given as a function of the state, and Z* is the corresponding optimal state process. Both Z* and the optimal control are computed simultaneously by solving the state dynamics with the feedback control. Although dynamic programming is a powerful method, it also has its technical drawbacks. This process and the technical issues will be explained by examples throughout these notes.

    1.2 Examples

In this section, we formulate several important examples of optimal control problems. Their solutions will be given in later sections, after the necessary techniques are developed.


    1.2.1 Deterministic minimal time problem

The state dynamics is given by

    (d/dt) Z(t) = f(Z(t), ν(t)),  t > 0,
    Z(0) = z,

where f is a given vector field and ν : [0,∞) → U is the control process. We always assume that f is regular enough so that, for a given control process ν, the above equation has a unique solution Z^ν_z(·).

For a given target set T ⊂ ℝ^d, the objective functional is

    J(Z(·), ν(·)) := inf{ t ≥ 0 : Z^ν_z(t) ∈ T } =: T^ν_z  (or +∞ if this set is empty).

Let

    v(z) = inf_{ν ∈ A} T^ν_z,

where A := L^∞([0,∞); U), and U is a subset of a Euclidean space. Note that additional constraints are typically placed on the controls. In robotics, for instance, the control set U can be discrete, and the state Z(·) may not be allowed to enter certain regions, called obstacles.

1.2.2 Merton's optimal investment-consumption problem

This is a financial market with two assets: one risky asset, called the stock, and one riskless asset, called the bond. We model the price of the stock, S(t), as the solution of

    dS(t) = S(t)[α dt + σ dW],   (1.2.1)

where W is a standard one-dimensional Brownian motion, and α and σ are given constants. We also assume a constant interest rate r for the bond price B(t), i.e.,

    dB(t) = B(t) r dt.

At time t, let X(t) be the money invested in the bond, Y(t) the money invested in the stock, l(t) the rate of transfers from the bond holdings to


the stock, m(t) the rate of transfers in the opposite direction, and c(t) the rate of consumption. Assuming no transaction costs, we have the following equations for X(t), Y(t):

    dX(t) = rX(t)dt - l(t)dt + m(t)dt - c(t)dt,   (1.2.2)
    dY(t) = Y(t)[α dt + σ dW] + l(t)dt - m(t)dt.   (1.2.3)

Set

    Z(t) = X(t) + Y(t) = wealth of the investor at time t,
    π(t) = Y(t)/Z(t).

Then,

    dZ(t) = Z(t)[(r + π(t)(α - r))dt + π(t)σ dW] - c(t)dt.   (1.2.4)

In this example, the state process is Z = Z^{π,c}_z, and the controls are π(t) ∈ ℝ and c(t) ≥ 0. Since we can transfer funds between the stock holdings and the bond holdings instantaneously and without loss, it is not necessary to keep track of the holdings in each asset separately.
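Equation (1.2.4) is easy to simulate in the constant-proportion case. The sketch below is my own illustration, not from the notes: it assumes π(t) ≡ π and proportional consumption c(t) = κZ(t) for a constant κ, under which Z is again a geometric Brownian motion that can be sampled exactly in log form; all parameter values are arbitrary.

```python
import numpy as np

def simulate_wealth(z0, r, alpha, sigma, pi, kappa, T, n, seed=0):
    # Wealth under (1.2.4) with a constant fraction pi in the stock and
    # proportional consumption c(t) = kappa * Z(t).  Then
    #   dZ = Z[(r + pi*(alpha - r) - kappa) dt + pi*sigma dW],
    # a geometric Brownian motion, sampled here in exact (log) form.
    rng = np.random.default_rng(seed)
    dt = T / n
    drift = r + pi * (alpha - r) - kappa - 0.5 * (pi * sigma) ** 2
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    logZ = np.log(z0) + np.cumsum(drift * dt + pi * sigma * dW)
    return np.concatenate(([z0], np.exp(logZ)))

Z = simulate_wealth(z0=1.0, r=0.03, alpha=0.08, sigma=0.2,
                    pi=0.5, kappa=0.04, T=1.0, n=252)
assert (Z > 0).all()  # proportional policies keep Z(t) > 0, so (pi, c) is admissible
```

Setting σ = 0 reduces the path to the deterministic exponential z0 e^{(r + π(α-r) - κ)t}, a quick sanity check on the drift.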

We have an additional restriction that Z(t) ≥ 0. Thus the set of admissible controls A_z is given by

    A_z := { (π(·), c(·)) : bounded, adapted processes such that Z^{π,c}_z(t) ≥ 0 a.s. }.

The objective functional is the expected discounted utility derived from consumption:

    J = E[ ∫_0^∞ e^{-βt} U(c(t)) dt ],

where U : [0,∞) → ℝ is the utility function. The functions U(c) = c^p/p with 0 < p < 1 provide an interesting class of examples. In this case,

    v(z) := sup_{(π,c) ∈ A_z} E[ ∫_0^∞ e^{-βt} (1/p)(c(t))^p dt | Z^{π,c}_z(0) = z ].   (1.2.5)

The simplifying feature of this utility is a certain homothety. Note that, due to the linear structure of the state equation, for any λ > 0, (π, c) ∈ A_z if and only if (π, λc) ∈ A_{λz}. Therefore,

    v(λz) = λ^p v(z)  and hence  v(z) = v(1) z^p.   (1.2.6)


Thus, we only need to compute v(1) and the optimal strategy associated to it. By dynamic programming, we will see that

    c*(t) = (p v(1))^{1/(p-1)} Z*(t),   (1.2.7)
    π*(t) = (α - r) / (σ²(1 - p)).   (1.2.8)

For v(1) to be finite, and thus for the problem to have a solution, the discount rate β needs to be sufficiently large. An exact condition is known and will be calculated by dynamic programming.

    1.2.3 Finite time utility maximization

The following variant of Merton's problem often arises in finance. Let Z_{t,z}(·) be the solution of (1.2.4) with c ≡ 0 and the initial condition

    Z_{t,z}(t) = z.   (1.2.9)

Then, for all t < T and z ∈ ℝ₊, consider

    J = E[ U(Z_{t,z}(T)) ],

    v(z, t) = sup_{π ∈ A_{t,z}} E[ U(Z_{t,z}(T)) | F_t ],

where F_t is the filtration generated by the Brownian motion. Mathematically, the main difference between this and the classical Merton problem is that the value function here depends not only on the initial value z but also on t. In fact, one may think of the pair (t, Z(t)) as the state variable, but in the literature this is understood only implicitly. In the classical Merton problem, the dependence on t is trivial and thus omitted.

    1.2.4 Merton problem with transaction costs

This is an interesting modification of Merton's problem, due to Constantinides [9] and Davis & Norman [12]. We assume that whenever we move funds from the bond to the stock we pay, or lose, a fraction λ ∈ (0, 1) as a transaction fee, and similarly we lose a fraction μ ∈ (0, 1) in the opposite transfers. Then, equations (1.2.2), (1.2.3) become

    dX(t) = rX(t)dt - l(t)dt + (1 - μ)m(t)dt - c(t)dt,   (1.2.10)
    dY(t) = Y(t)[α dt + σ dW] + (1 - λ)l(t)dt - m(t)dt.   (1.2.11)


In this model, it is intuitively clear that the variable Z = X + Y is not sufficient to describe the state of the model. So it is now necessary to consider the pair Z := (X, Y) as the state process. The controls are the processes l, m and c, all assumed to be non-negative. Again,

    v(x, y) := sup_{ν=(l,m,c) ∈ A_{x,y}} E[ ∫_0^∞ e^{-βt} (1/p)(c(t))^p dt ].

The set of admissible controls is such that the solution (X^ν_x, Y^ν_y) remains in L̄ for all t ≥ 0. The liquidity set L ⊂ ℝ² is the collection of all (x, y) that can be transferred to a non-negative position in both the bond and the stock by an appropriate transaction, i.e.,

    L = { (x, y) ∈ ℝ² : ∃ (L, M) ≥ 0 s.t. (x - L + (1 - μ)M, y - M + (1 - λ)L) ∈ ℝ₊ × ℝ₊ }
      = { (x, y) ∈ ℝ² : (1 - λ)x + y ≥ 0 and x + (1 - μ)y ≥ 0 }.
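The two linear inequalities describing L translate directly into a solvency check. A minimal sketch (my own; the function name and the fee values are illustrative assumptions):

```python
def in_liquidity_set(x, y, lam, mu):
    # (x, y) can be moved to a non-negative position in both assets iff
    # (1 - lam)*x + y >= 0  and  x + (1 - mu)*y >= 0,
    # where lam, mu are the proportional transaction fees.
    return (1 - lam) * x + y >= 0 and x + (1 - mu) * y >= 0

lam, mu = 0.01, 0.01
assert in_liquidity_set(1.0, -0.5, lam, mu)      # short stock covered by the bond
assert in_liquidity_set(-0.5, 1.0, lam, mu)      # short bond covered by the stock
assert not in_liquidity_set(-1.0, 1.0, lam, mu)  # selling the stock cannot repay the bond
```

Note how, for (-1, 1), the frictionless wealth x + y = 0 would look solvent, while the fee makes the position inadmissible; this is exactly why Z = X + Y is no longer a sufficient state.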

An important feature of this problem is that it is possibly singular, i.e., the optimal processes (l(·), m(·)) can be unbounded. On the other hand, the nonlinear penalization c(t)^p does not allow c(t) to be unbounded.

Singular problems share the common feature that the control enters linearly in the state equation and is either not included in the objective functional or included only in a linear manner.

So, it is convenient to introduce the processes

    L(t) := ∫_0^t l(s) ds,   M(t) := ∫_0^t m(s) ds.

Then (L(·), M(·)) are nondecreasing adapted processes, and (dL(t), dM(t)) can be defined as random measures on [0,∞). With this notation, we rewrite (1.2.10), (1.2.11) as

    dX = rX dt - dL + (1 - μ)dM - c(t)dt,
    dY = Y[α dt + σ dW] + (1 - λ)dL - dM,

and ν = (L, M, c) ∈ A_{x,y} is admissible if the processes are adapted, (L, M) are nondecreasing, c ≥ 0, and

    (X^ν_x(t), Y^ν_y(t)) ∈ L̄  for all t ≥ 0.   (1.2.12)


    1.2.5 Super-replication with portfolio constraints

Let Z_{t,z}(·) be the solution of (1.2.4) with c ≡ 0 and the initial condition (1.2.9), and let S_{t,s}(·) be the solution of (1.2.1) with S_{t,s}(t) = s. Given a deterministic function G : ℝ → ℝ, we wish to find

    v(t, s) := inf{ z : ∃ π(·) adapted, π(t) ∈ K, and Z_{t,z}(T) ≥ G(S_{t,s}(T)) a.s. },

where T is the maturity and K is an interval containing 0, say K = [a, b]. Here a is related to a short-sell constraint and b to a borrowing constraint (or, equivalently, a constraint on short-selling the bond).

This is clearly not in the form of the previous problems, but it can be transformed into that form. Indeed, set

    χ(z, s) := { 0 if z ≥ G(s),  +∞ if z < G(s). }

Consider the objective functional

    J(t, z, s; π(·)) := E[ χ(Z_{t,z}(T), S_{t,s}(T)) | F_t ],
    u(t, z, s) := inf_{π ∈ A} J(t, z, s; π(·)),

where π ∈ A if and only if π is adapted with values in K. Then, observe that

    u(t, z, s) = { 0 if z > v(t, s),  +∞ if z < v(t, s), }

and what happens at z = v(t, s) is a subtle question. In other words,

    v(t, s) = inf{ z : u(t, z, s) = 0 }.

1.2.6 Buyer's price and the no-arbitrage interval

In the previous subsection, we considered the problem from the perspective of the writer of the option. For a potential buyer, if the quoted price z of a certain claim is low, there is a different possibility of arbitrage. She would take advantage of a low price by buying the option for the price z. She would finance this purchase by using the instruments in the market. Then, she tries to maximize her wealth (or minimize her debt), starting with initial wealth -z. If at maturity

    Z_{t,-z}(T) + G(S_{t,s}(T)) ≥ 0  a.s.,


then this provides an arbitrage. Hence, the largest of these initial data provides the lower bound of all prices that do not allow arbitrage. So we define (after observing that Z_{t,-z}(T) = -Z_{t,z}(T))

    v_*(t, s) := sup{ z : ∃ π(·) adapted, π(t) ∈ K, and Z_{t,-z}(T) ≥ -G(S_{t,s}(T)) a.s. }.

Then, the no-arbitrage interval is given by

    [ v_*(t, s), v(t, s) ],

where v(t, s) is the writer's super-replication price of the previous subsection.

In the presence of friction, there are many approaches to pricing. However, the above interval must contain all the prices obtained by any method.

    1.2.7 Super-replication with gamma constraints

To simplify, we take r = 0 and α = 0. We rewrite (1.2.4) as

    dZ(t) = n(t) dS(t),
    dS(t) = S(t) σ dW(t).

Then, n(t) = π(t)Z(t)/S(t) is the number of shares of stock held at a given time. Previously, we placed no restrictions on the rate of change of n(·), and assumed only that it is bounded and adapted. The gamma constraint restricts n(·) to be a semimartingale,

    dn(t) = dA(t) + γ(t) dS(t),

where A is an adapted process of bounded variation and γ(·) is an adapted process with values in a given interval.

Then, the super-replication problem is

    v(t, s) := inf{ z : ∃ ν = (n(t), A(·), γ(·)) ∈ A_{t,s,z} s.t. Z_{t,z}(T) ≥ G(S_{t,s}(T)) }.

The important new feature here is the singular form of the equation for the n(·) process. Notice the dA term in that equation.


    1.3 Dynamic Programming Principle

In this section, we formulate an abstract dynamic programming principle, following the recent manuscript of Soner & Touzi [20]. This principle holds for all dynamic optimization problems with a certain structure. Thus, the structure of the problem is of critical importance. We formalize this structure in the following main assumptions.

Assumption 1 We assume that for every control ν and initial data (t, z), the corresponding state process starts afresh at every stopping time τ ≥ t, i.e.,

    Z^ν_{t,z}(s) = Z^ν_{τ, Z^ν_{t,z}(τ)}(s),  s ≥ τ.

Assumption 2 The effect of ν is causal, i.e., if ν₁(s) = ν₂(s) for all s ≤ τ, where τ is a stopping time, then

    Z^{ν₁}_{t,z}(s) = Z^{ν₂}_{t,z}(s),  s ≤ τ.

Moreover, we assume that if ν is admissible at (t, z), then ν restricted to the stochastic interval [τ, T] is also admissible starting at (τ, Z^ν_{t,z}(τ)).

Assumption 3 We also assume that the concatenation of admissible controls yields another admissible control. Mathematically, for a stopping time τ and ν ∈ A_{t,z}, set θ = (τ, Z^ν_{t,z}(τ)). Suppose ν̂ ∈ A_θ and define

    ν̃(s) = { ν(s) for s ≤ τ,  ν̂(s) for s > τ. }

Then, we assume ν̃ ∈ A_{t,z}. A precise formulation is in Soner & Touzi [20].

Assumption 4 Finally, we assume an additive structure for J, i.e.,

    J = ∫_t^T L(s, ν(s), Z^ν_{t,z}(s)) ds + G(Z^ν_{t,z}(T)).

The above assumptions need to be verified in each example. Under these structural assumptions, we have the following result, which is called the dynamic programming principle, or DPP for short.


Theorem 1.3.1 (Dynamic Programming Principle) For any stopping time τ ≥ t,

    v(t, z) = inf_{ν ∈ A_{t,z}} E[ ∫_t^τ L ds + v(τ, Z^ν_{t,z}(τ)) | F_t ].

We refer to Fleming & Soner [14] and Soner & Touzi [20] for precise statements and proofs.

    1.3.1 Formal Proof of DPP

By the additive structure of the cost functional,

    v(t, z) = inf_{ν ∈ A_{t,z}} E( ∫_t^T L ds + G(Z_{t,z}(T)) | F_t )
            = inf_{ν ∈ A_{t,z}} E( ∫_t^τ L ds + E[ ∫_τ^T L ds + G(Z_{t,z}(T)) | F_τ ] | F_t ).   (1.3.13)

By Assumption 2, ν restricted to the interval [τ, T] is in A_{τ, Z_{t,z}(τ)}. Hence,

    E[ ∫_τ^T L ds + G | F_τ ] ≥ v(θ),

where θ := θ^ν_{t,z}(τ) = (τ, Z^ν_{t,z}(τ)).

Substitute the above inequality into (1.3.13) to obtain

    v(t, z) ≥ inf_{ν ∈ A_{t,z}} E[ ∫_t^τ L ds + v(τ, Z^ν_{t,z}(τ)) | F_t ].

To prove the reverse inequality, for ε > 0 and each ω ∈ Ω, choose ν^ε ∈ A_{τ, Z^ν_{t,z}(τ)} so that

    E( ∫_τ^T L(s, ν^ε(s), Z^{ν^ε}(s)) ds + G(Z^{ν^ε}(T)) | F_τ ) ≤ v(τ, Z^ν_{t,z}(τ)) + ε.

For a given ν ∈ A_{t,z}, set

    ν̃(s) := { ν(s) for s ∈ [t, τ],  ν^ε(s) for s ∈ (τ, T]. }


Here there are serious measurability questions (cf. Soner & Touzi [20]), but it can be shown that ν̃ ∈ A_{t,z}. Then, with Z̃ = Z^{ν̃}_{t,z},

    v(t, z) ≤ E( ∫_t^T L(s, ν̃(s), Z̃(s)) ds + G(Z̃(T)) | F_t )
            ≤ E( ∫_t^τ L(s, ν(s), Z̃(s)) ds + v(τ, Z̃(τ)) + ε | F_t ).

Since this holds for any ν ∈ A_{t,z} and ε > 0,

    v(t, z) ≤ inf_{ν ∈ A_{t,z}} E( ∫_t^τ L ds + v(τ, Z^ν_{t,z}(τ)) | F_t ).

The above calculation is the main thrust of a rigorous proof, but there are technical details that need to be provided. We refer to the book of Fleming & Soner [14] and the manuscript by Soner & Touzi [20].
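The principle is easiest to see in a toy discrete setting. The following sketch, a deterministic finite-state example of my own (not from the notes), computes the value function by backward induction, which is the DPP applied one step at a time, and checks it against a brute-force enumeration of all control sequences:

```python
from itertools import product

# Deterministic, finite DP: states 0..4, actions {-1, 0, 1}, horizon 3,
# running cost L(x, a) = x^2 + |a|, terminal cost G(x) = x^2,
# dynamics f(x, a) = clamp(x + a, 0, 4).
STATES, ACTIONS, T = range(5), (-1, 0, 1), 3
f = lambda x, a: min(max(x + a, 0), 4)
L = lambda x, a: x * x + abs(a)
G = lambda x: x * x

# Backward induction: v(t, x) = min_a [ L(x, a) + v(t+1, f(x, a)) ].
V = [{x: 0.0 for x in STATES} for _ in range(T + 1)]
for x in STATES:
    V[T][x] = G(x)
for t in reversed(range(T)):
    for x in STATES:
        V[t][x] = min(L(x, a) + V[t + 1][f(x, a)] for a in ACTIONS)

def brute_force(x):
    # The "inf over all admissible controls": enumerate every control sequence.
    best = float("inf")
    for seq in product(ACTIONS, repeat=T):
        z, cost = x, 0.0
        for a in seq:
            cost += L(z, a)
            z = f(z, a)
        best = min(best, cost + G(z))
    return best

assert all(abs(V[0][x] - brute_force(x)) < 1e-12 for x in STATES)
```

Backward induction and enumeration agree exactly here because the example is deterministic and finite; the measurability issues above only appear in the continuous, stochastic setting.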

    1.3.2 Examples for the DPP

Our assumptions include all the examples given above. In this section, we look at super-replication and, more generally, a target reachability problem, and deduce a geometric DPP from the above DPP. Then, we will outline an example from theoretical economics for which our method does not always apply.

Target Reachability. Let Z^ν_{t,z}, A_{t,z} be as before, and let a target set T ⊂ ℝ^d be given. Consider

    V(t) := { z ∈ ℝ^d : ∃ ν ∈ A_{t,z} s.t. Z^ν_{t,z}(T) ∈ T a.s. }.

This is a generalization of the super-replication problems considered before. So, as before, for A ⊂ ℝ^d set

    χ_A(z) := { 0 if z ∈ A,  +∞ if z ∉ A, }

and

    v(t, z) := inf_{ν ∈ A_{t,z}} E[ χ_T(Z^ν_{t,z}(T)) | F_t ].


Then,

    v(t, z) = χ_{V(t)}(z) = { 0 if z ∈ V(t),  +∞ if z ∉ V(t). }

Since, at least formally, the DPP applies to v,

    v(t, z) = χ_{V(t)}(z) = inf_{ν ∈ A_{t,z}} E( v(τ, Z^ν_{t,z}(τ)) | F_t )
            = inf_{ν ∈ A_{t,z}} E( χ_{V(τ)}(Z^ν_{t,z}(τ)) | F_t ).

Therefore, V(t) also satisfies a geometric DPP:

    V(t) = { z ∈ ℝ^d : ∃ ν ∈ A_{t,z} s.t. Z^ν_{t,z}(τ) ∈ V(τ) a.s. }.   (1.3.14)

In conclusion, this is a nonstandard example of dynamic programming, in which the principle takes the above geometric form. Later in these notes, we will show that this yields a geometric equation for the time evolution of the reachability sets.

Incentive Controls. Here we describe a problem in which dynamic programming does not always hold. The original problem of Benhabib [4] is a resource allocation problem. Two players use a resource y(t) by consuming at rates c_i(t), i = 1, 2. The equation for the resource is

    dy/dt = f(y(t)) - c₁(t) - c₂(t),

with the constraints

    y(t) ≥ 0,  c_i ≥ 0.

If at some time t₀ we have y(t₀) = 0, then after this point we require y(t) = c_i(t) = 0 for t ≥ t₀. Each player is trying to maximize

    v_i(y) := sup_{c_i} ∫_0^∞ e^{-βt} c_i(t) dt.

Set

    v(y) := sup_{c₁+c₂=c} ∫_0^∞ e^{-βt} c(t) dt,

so that, clearly, v₁(y) + v₂(y) ≤ v(y). However, each player may bring the state to bankruptcy by consuming a large amount, to the detriment of the other player and possibly of herself as well.


To avoid this problem, Rustichini [17] proposed a variation in which the state equation is

    (d/dt) X(t) = f(X(t), c(t)),

with initial condition X(0) = x. Then, the pay-off is

    J(x, c(·)) = ∫_0^∞ e^{-βt} L(t, X(t), c(t)) dt,

and c(·) ∈ A_x if

    ∫_t^∞ e^{-βs} L(s, X(s), c(s)) ds ≥ e^{-βt} D(X^c_x(t), c(t)),  t ≥ 0,

where D is a given function. Note that this condition, in general, violates the concatenation property of the set of admissible controls. Hence, dynamic programming does not always hold. However, Barucci, Gozzi & Swiech [3] overcome this difficulty in certain cases.

    1.4 Dynamic Programming Equation

This equation is the infinitesimal version of the dynamic programming principle. It is used, generally, in the following two ways:

The first approach is this:

• Derive the DPE formally, as we will do later in these notes. Obtain a smooth solution, or show that there is a smooth solution, via PDE techniques.

• Show that the smooth solution is the value function by the use of Itô's formula. This step is called the verification. As a by-product, an optimal policy is obtained in the verification.

This is the classical use of the DPE; details are given in the book of Fleming & Soner [14], and we will outline it in detail for the Merton problem. The second approach is this:


• Derive the DPE rigorously, using the theory of viscosity solutions of Crandall and Lions.

• Show uniqueness or, more generally, a comparison result between viscosity sub- and supersolutions.

This provides a unique characterization of the value function, which can then be used to obtain further results. This approach, which became available with the theory of viscosity solutions, avoids showing the smoothness of the value function. This is very desirable, as the value function is often not smooth.

    1.4.1 Formal Derivation of the DPE

To simplify the presentation, we only consider state processes which are diffusions. Let the state variable X be the unique solution of

    dX = μ(t, X(t), ν(t)) dt + σ(t, X(t), ν(t)) dW,

and consider the usual pay-off functional

    J(t, x, ν) = E[ ∫_t^T L(s, X^ν_{t,x}(s), ν(s)) ds + G(X^ν_{t,x}(T)) | F_t ],

    v(t, x) := inf_{ν ∈ A_{t,x}} J(t, x, ν).

We assume enough regularity so that the DPP holds. Use τ = t + h in the DPP to obtain

    v(t, x) = inf_{ν ∈ A_{t,x}} E[ ∫_t^{t+h} L ds + v(t + h, X_{t,x}(t + h)) | F_t ].

We assume, without justification, that v is sufficiently smooth. This part of the derivation is formal and cannot be made rigorous unless viscosity theory is invoked. Then, by Itô's formula,

    v(t + h, X_{t,x}(t + h)) = v(t, x) + ∫_t^{t+h} ( ∂v/∂t + L^{ν(s)} v ) ds + martingale,

where

    L^ν v := μ(t, x, ν) · Dv + ½ tr[ a(t, x, ν) D²v ],


with the notation

    a(t, x, ν) := σ(t, x, ν) σ(t, x, ν)^t  and  tr a := Σ_{i=1}^d a_{ii}.

In view of the DPP,

    inf_{ν ∈ A_{t,x}} E[ ∫_t^{t+h} ( ∂v/∂t + L^{ν(s)} v + L ) ds ] = 0.

We assume that the coefficients μ, a, L are continuous. Divide the above equation by h and let h go to zero to obtain

    -(∂v/∂t)(t, x) + H(x, t, Dv(t, x), D²v(t, x)) = 0,   (1.4.15)

where

    H(x, t, ξ, A) := sup{ -θ · ξ - ½ tr(aA) - l : (θ, a, l) ∈ A(t, x) },

and (θ, a, l) ∈ A(t, x) iff there exists ν ∈ A_{t,x} such that

    (θ, a, l) = lim_{h↓0} (1/h) ∫_t^{t+h} ( μ(x, ν(s), t), a(x, ν(s), t), L(x, ν(s), t) ) ds.

We should emphasize that we assumed the functions μ, a, L to be sufficiently regular, and that we also made the unjustified assumption that v is smooth. None of these assumptions is needed in the theory of viscosity solutions, as we will see later; we also refer to the book by Fleming & Soner [14].

For a large class of problems, A_{t,x} = L^∞((0,∞); U) for some set U. Then

    A(t, x) = { ( μ(x, ν, t), a(x, ν, t), L(x, ν, t) ) : ν ∈ U },

and

    H(x, t, ξ, A) = sup_{ν ∈ U} { -μ(x, ν, t) · ξ - ½ tr[ a(x, ν, t) A ] - L(x, ν, t) }.
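As a concrete sanity check of this formula, the pointwise sup over U can be computed on a grid and compared with a closed form. The one-dimensional coefficients below are my own illustrative choices (drift equal to the control, constant diffusion, quadratic running cost), not an example from the notes:

```python
import numpy as np

# 1-d illustration: mu(x, nu) = nu, a = s0^2 (constant), L(x, nu) = nu^2 / 2.
# Then H(xi, A) = sup_nu { -nu*xi - 0.5*s0^2*A - 0.5*nu^2 }
#              = 0.5*xi^2 - 0.5*s0^2*A,   attained at nu* = -xi.
def hamiltonian(xi, A, s0, U):
    # Brute-force sup over a discretized control set U.
    return max(-nu * xi - 0.5 * s0**2 * A - 0.5 * nu**2 for nu in U)

U = np.linspace(-5.0, 5.0, 100001)  # fine grid standing in for U = R
for xi, A in [(0.3, 1.0), (-1.2, 0.5), (2.0, -1.0)]:
    exact = 0.5 * xi**2 - 0.5 * 0.4**2 * A
    assert abs(hamiltonian(xi, A, 0.4, U) - exact) < 1e-6
```

The grid maximum matches the analytic sup to the discretization error, since the maximizer nu* = -xi lies inside the truncated control set for these values of xi.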


    1.4.2 Infinite horizon

An important class of problems is known as the discounted infinite horizon problems. In these problems, the state equation for X is time homogeneous and the time horizon is T = ∞. However, to ensure the finiteness of the cost functional, the running cost is exponentially discounted, i.e.,

    J(x, ν) := E[ ∫_0^∞ e^{-βt} L(X^ν_x(t), ν(t)) dt ].

Then, following the same calculation as in the finite horizon case, we derive the dynamic programming equation

    βv(x) + H(x, Dv(x), D²v(x)) = 0,   (1.4.16)

where, for A_{t,x} = L^∞((0,∞); U),

    H(x, ξ, A) = sup_{ν ∈ U} { -μ(x, ν) · ξ - ½ tr[ a(x, ν) A ] - L(x, ν) }.

    1.5 Examples for the DPE

In this section, we will obtain the corresponding dynamic programming equations for the examples given earlier.

    1.5.1 Merton Problem

We have already shown that, in the case with no transaction costs,

    v(z) = v(1) z^p.

This is a rare example of an interesting stochastic optimal control problem with a smooth and explicit solution. Hence, we will employ the first of the two approaches mentioned earlier for using the DPE.

We start with the DPE (1.4.16), which takes the following form for this problem (accounting for sup instead of inf):

    βv(z) + inf_{π ∈ ℝ, c ≥ 0} { -(r + π(α - r)) z v_z(z) - ½ π²σ²z² v_zz(z) + c v_z(z) - (1/p)c^p } = 0.


We write this as

    βv(z) - r z v_z(z) - sup_{π ∈ ℝ} { π(α - r) z v_z + ½ π²σ²z² v_zz } - sup_{c ≥ 0} { -c v_z + (1/p)c^p } = 0.

We directly calculate that (for v_z(z) > 0 > v_zz(z))

    βv(z) - r z v_z(z) + ((α - r) z v_z(z))² / (2σ²z² v_zz(z)) - H̃(v_z(z)) = 0,

where

    H̃(v_z(z)) = ((1 - p)/p) (v_z(z))^{p/(p-1)},

with maximizers

    π* = -(α - r) z v_z(z) / (σ²z² v_zz(z)),   c* = (v_z(z))^{1/(p-1)}.

We plug the form v(z) = v(1) z^p into the above equations. The result is the equations (1.2.7) and (1.2.8), together with

    v(1) [ β - rp - p(α - r)² / (2σ²(1 - p)) ] - ((1 - p)/p) (p v(1))^{p/(p-1)} = 0.

The solution is

    v(1) = ((1 - p)^{1-p} / p) [ β - rp - p(α - r)² / (2σ²(1 - p)) ]^{p-1},

and we require that

    β > rp + p(α - r)² / (2σ²(1 - p)).
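The closed-form solution can be checked numerically: plugging v(z) = v(1)z^p into the dynamic programming equation should give a zero residual whenever the finiteness condition on β holds. The sketch below does this for illustrative parameter values of my own:

```python
# Illustrative assumptions: interest rate r, stock drift alpha,
# volatility sigma, discount beta, utility exponent p.
r, alpha, sigma, beta, p = 0.03, 0.10, 0.25, 0.20, 0.5

# Finiteness condition: beta > r*p + p*(alpha - r)^2 / (2*sigma^2*(1 - p)).
beta_hat = beta - r * p - p * (alpha - r) ** 2 / (2 * sigma**2 * (1 - p))
assert beta_hat > 0

# v(1) = ((1 - p)^(1 - p) / p) * beta_hat^(p - 1)
v1 = (1 - p) ** (1 - p) / p * beta_hat ** (p - 1)

# Residual of the DPE at v(z) = v1 * z^p:
#   beta*v - r*z*v_z + ((alpha - r)*z*v_z)^2 / (2*sigma^2*z^2*v_zz) - Htilde(v_z),
# with Htilde(q) = ((1 - p)/p) * q^(p/(p - 1)).
def dpe_residual(z):
    v, v_z, v_zz = v1 * z**p, v1 * p * z ** (p - 1), v1 * p * (p - 1) * z ** (p - 2)
    Ht = (1 - p) / p * v_z ** (p / (p - 1))
    return (beta * v - r * z * v_z
            + ((alpha - r) * z * v_z) ** 2 / (2 * sigma**2 * z**2 * v_zz) - Ht)

assert all(abs(dpe_residual(z)) < 1e-10 for z in (0.5, 1.0, 3.0))
```

If β is lowered below the threshold, `beta_hat` turns non-positive and v(1) is no longer finite, which is exactly the failure mode described above.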

Although the above calculations look complete, we recall that the derivation of the DPE was formal. For that reason, we need to complete these calculations with a verification step.

Theorem 1.5.1 (Verification) The function v(1) z^p, with v(1) as above, is the value function. Moreover, the optimal feedback policies are given by the equations (1.2.7) and (1.2.8).


Proof. Set u(z) := v(1) z^p. For z > 0 and T > 0, let ν = (π(·), c(·)) ∈ A_z be any admissible investment-consumption strategy. Set Z := Z^ν_z. Apply Itô's rule to the function e^{-βt} u(Z(t)). The result is

    e^{-βT} E[ u(Z(T)) ] = u(z) + ∫_0^T e^{-βt} E[ -β u(Z(t)) + L^{π(t),c(t)} u(Z(t)) ] dt,

where L^{π,c} is the infinitesimal generator of the wealth process. By the fact that u solves the DPE, we have, for any π and c,

    β u(z) - L^{π,c} u(z) - (1/p)c^p ≥ 0.

Hence,

    u(z) ≥ E[ e^{-βT} u(Z(T)) + ∫_0^T e^{-βt} (1/p)(c(t))^p dt ].

By direct calculations, we can show that

    lim_{T→∞} E[ e^{-βT} u(Z(T)) ] = v(1) lim_{T→∞} E[ e^{-βT} (Z(T))^p ] = 0.

Also, by Fatou's lemma,

    lim_{T→∞} E[ ∫_0^T e^{-βt} (1/p)(c(t))^p dt ] = J(z; π(·), c(·)).

Since this holds for any control, we have proved that

    u(z) = v(1) z^p ≥ v(z) = value function.

To prove the opposite inequality, we use the controls (π*, c*) given by the equations (1.2.7) and (1.2.8). Let Z* be the corresponding state process. Then,

    β u(z) - L^{π*,c*} u(z) - (1/p)(c*)^p = 0.

Therefore,

    E[ e^{-βT} u(Z*(T)) ] = u(z) + ∫_0^T e^{-βt} E[ -β u(Z*(t)) + L^{π*(t),c*(t)} u(Z*(t)) ] dt
                          = u(z) - E[ ∫_0^T e^{-βt} (1/p)(c*(t))^p dt ].


Again, we let T tend to infinity. Since Z* and the other quantities can be calculated explicitly, it is straightforward to pass to the limit in the above equation. The result is

    u(z) = J(z; π*, c*).

Hence, u(z) = v(z) = J(z; π*, c*).

    1.5.2 Minimal Time Problem

For A_x = L^∞((0,∞); U), the dynamic programming equation has a simple form:

    sup_{ν ∈ U} { -f(x, ν) · Dv(x) - 1 } = 0,  x ∉ T.

This follows from our results and the simple observation that

    J = T^ν_x = ∫_0^{T^ν_x} 1 ds.

So, we may think of this problem as an infinite horizon problem with zero discount, β = 0.

In the special example U = B₁ (the closed unit ball) and f(x, ν) = ν, the above equation simplifies to the Eikonal equation

    |Dv(x)| = 1,  x ∉ T,

    together with the boundary condition,

    v(x) = 0, x T .The solution is the distance function,

    v(x) = infyT

    {|x y|} = |x y| ,

    and the optimal control is

    =y

    x

    |y x| t .As in the Merton problem, the solution is again explicitly available. How-

    ever, v(x) is not a smooth function, and the above assertions have to beproved by the viscosity theory.
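The Eikonal property of the distance function is easy to confirm numerically. A minimal sketch, with a hypothetical one-dimensional target $\mathcal{T} = [1,2]$: away from the target and its kinks, the finite-difference gradient has modulus one.

```python
import numpy as np

# Distance function to the (hypothetical) target T = [1, 2] on a 1-d grid.
x = np.linspace(-2.0, 4.0, 601)
v = np.maximum(np.maximum(1.0 - x, x - 2.0), 0.0)   # dist(x, [1, 2])

grad = np.gradient(v, x)                 # central differences
outside = (x < 0.9) | (x > 2.1)          # stay clear of the kinks at x = 1, 2
err = np.max(np.abs(np.abs(grad[outside]) - 1.0))
```

The kinks at the boundary of $\mathcal{T}$ are exactly where $v$ fails to be smooth, which is why the viscosity framework is needed.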

1.5.3 Transaction costs

Using the formulation (1.2.10) and (1.2.11), we formally obtain,
$$ \beta v(x,y) + \inf_{(l,m,c)\ge 0} \Big\{ -rx v_x - \alpha y v_y - \tfrac12 \sigma^2 y^2 v_{yy} - l\left[(1-\mu)v_y - v_x\right] - m\left[(1-\lambda)v_x - v_y\right] + c\,v_x - \tfrac1p c^p \Big\} = 0. $$
We see that, since $l$ and $m$ can be arbitrarily large,
$$ (1-\mu)v_y - v_x \le 0 \quad\text{and}\quad (1-\lambda)v_x - v_y \le 0. \qquad (1.5.17) $$
Also, dropping the $l$ and $m$ terms, we have
$$ \beta v - rx v_x - \alpha y v_y - \tfrac12\sigma^2 y^2 v_{yy} =: \beta v - \mathcal{L}v \ \ge\ H(v_x), $$
where
$$ H(v_x) := \sup_{c \ge 0}\left\{ \tfrac1p c^p - c\,v_x \right\}. $$
Moreover, if both inequalities in (1.5.17) are strict, then the above is an equality. Hence,
$$ \min\left\{ \beta v - \mathcal{L}v - H(v_x),\; v_x - (1-\mu)v_y,\; v_y - (1-\lambda)v_x \right\} = 0. \qquad (1.5.18) $$
This derivation is extremely formal, but it can be verified by the theory of viscosity solutions, cf. Fleming & Soner [14], Shreve & Soner [18]. We also refer to Davis & Norman [12], who were the first to study this problem using the first approach described earlier.

Notice also that the singular character of the problem results in a quasi-variational inequality, instead of a more standard second order elliptic equation.

We again use the homothety $\rho^p v(x,y) = v(\rho x, \rho y)$ to represent $v(x,y)$, so
$$ v(x,y) = (x+y)^p f\!\left(\frac{y}{x+y}\right), \qquad (x,y) \in L, $$
where
$$ f(u) := v(1-u, u), $$
with $u = y/(x+y)$ ranging over the interval determined by the solvency region. The DPE for $v$, namely (1.5.18), turns into an equation for the coefficient function $f$: a one-dimensional problem which can be solved numerically by standard methods.
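The homothety behind this reduction holds for any coefficient function, by construction. A small sketch with a stand-in $f$ (hypothetical, not the solution of the reduced DPE):

```python
import numpy as np

# v(x, y) = (x + y)^p f(y/(x + y)) automatically satisfies the homothety
# rho^p v(x, y) = v(rho x, rho y); f below is an arbitrary stand-in.
p = 0.5
f = lambda u: 1.0 + 0.3 * u - 0.2 * u**2

def v(x, y):
    return (x + y) ** p * f(y / (x + y))

rng = np.random.default_rng(0)
x, y = rng.uniform(0.1, 5.0, 50), rng.uniform(0.1, 5.0, 50)
ok = all(np.allclose(rho**p * v(x, y), v(rho * x, rho * y))
         for rho in (0.5, 2.0, 7.0))
```

This is why the two-dimensional DPE collapses to an ODE for $f$: the value along each ray through the origin is determined by its value at a single point of the ray.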

[Figure 1.1: The solvency region in the $(x,y)$-plane, divided by the rays $y/(x+y) = a$ and $y/(x+y) = b$ into Region I (consume, no transactions), Region II (sell bond, buy stock) and Region III (sell stock, buy bond).]

Further, we know that $v$ is concave. Using the concavity of $v$, we show that there are points
$$ a < z_{Merton} < b, $$
with $z_{Merton}$ the optimal proportion of the Merton problem, so that

In Region I: $\quad \beta v - \mathcal{L}v - H(v_x) = 0 \quad$ for $\ a \le \dfrac{y}{x+y} \le b$,

In Region II: $\quad v_x - (1-\mu)v_y = 0 \quad$ for $\ \dfrac{y}{x+y} \le a$,

In Region III: $\quad v_y - (1-\lambda)v_x = 0 \quad$ for $\ b \le \dfrac{y}{x+y}$.

So we formally expect that in Region I no transactions are made, and consumption is according to $c^* = (v_x(x,y))^{1/(p-1)}$. In Region II we sell bonds and buy stock, and in Region III we sell stock and buy bonds. Finally, the process $(X(t), Y(t))$ is kept in Region I through reflection at the two boundaries.

The constants $a$, $b$ are not explicitly available but can be computed numerically.

1.5.4 Super-replication with portfolio constraints

In the next Chapter, we will show that the DPE is (with $K = [-a,b]$)
$$ \min\left\{ -v_t - \tfrac12\sigma^2 s^2 v_{ss} - rsv_s + rv\,;\; bv - sv_s\,;\; sv_s + av \right\} = 0, $$
with the final condition
$$ v(T,s) = G(s). $$
Then, we will show that
$$ v(t,s) = E\left[\hat{G}(S_{t,s}(T)) \mid \mathcal{F}_t\right], $$
where $E$ is the risk neutral expectation and $\hat{G}$ is the minimal function satisfying

(i) $\ \hat{G} \ge G$,

(ii) $\ -a \le \dfrac{s\hat{G}_s(s)}{\hat{G}(s)} \le b$.

Examples of $\hat{G}$ will be computed in the last Chapter.

1.5.5 Target Reachability Problem

Consider
$$ dX = \mu(t,x,\nu(t))\,dt + \sigma(t,x,\nu(t))\,dW, $$
with, as before, $\mathcal{A} = L^\infty((0,\infty); U)$. The reachability set is given by
$$ V(t) := \left\{ x \mid \exists\, \nu \in \mathcal{A} \text{ such that } X^\nu_{t,x}(T) \in \mathcal{T} \text{ a.s.} \right\}, $$
where $\mathcal{T} \subset \mathbb{R}^d$ is a given target set. Then, at least formally,
$$ v(t,x) := \chi_{V(t)}(x) = \lim_{\epsilon \to 0} H_\epsilon(w(t,x)), $$
where
$$ w(t,x) = \inf_{\nu \in \mathcal{A}} E\left[ G(X^\nu_{t,x}(T)) \mid \mathcal{F}_t \right], \qquad H_\epsilon(w) = \frac{1 - \tanh(w/\epsilon)}{2}, $$
and $G : \mathbb{R}^d \to [0,\infty)$ is any smooth function which vanishes only on $\mathcal{T}$, i.e., $G(x) = 0$ if and only if $x \in \mathcal{T}$. Then, the standard DPE yields,
$$ -w_t + \sup_{\nu \in U}\left\{ -\mu(t,x,\nu)\cdot\nabla w - \tfrac12 \mathrm{tr}\!\left( a(t,x,\nu) D^2 w \right) \right\} = 0, $$
where $a := \sigma\sigma^T$.

We calculate that, with $w^\epsilon := H_\epsilon(w)$,
$$ w^\epsilon_t = H'_\epsilon(w)\, w_t, \qquad \nabla w^\epsilon = H'_\epsilon(w)\, \nabla w, \qquad D^2 w^\epsilon = H'_\epsilon(w)\, D^2 w + H''_\epsilon(w)\, \nabla w \otimes \nabla w. $$
Hence,
$$ H'_\epsilon(w)\, D^2 w = D^2 w^\epsilon - \frac{H''_\epsilon}{[H'_\epsilon]^2}\, \nabla w^\epsilon \otimes \nabla w^\epsilon. $$
This yields,
$$ -w^\epsilon_t + \sup_{\nu \in U}\left\{ -\mu\cdot\nabla w^\epsilon - \tfrac12 \mathrm{tr}\!\left(a D^2 w^\epsilon\right) + \tfrac12 \frac{H''_\epsilon}{[H'_\epsilon]^2}\, a\nabla w^\epsilon \cdot \nabla w^\epsilon \right\} = 0. $$
Here, very formally, we conjecture the following limiting equation for $v = \lim w^\epsilon$:
$$ -v_t + \sup_{\nu \in \mathcal{K}(\nabla v)}\left\{ -\mu\cdot\nabla v - \tfrac12 \mathrm{tr}\!\left(a D^2 v\right) \right\} = 0, \qquad (1.5.19) $$
$$ \mathcal{K}(q) := \left\{ \nu : \sigma^T(t,x,\nu)\, q = 0 \right\}. $$
Notice that for $\nu \in \mathcal{K}(\nabla w)$, $a\nabla w \cdot \nabla w = 0$. And if $\nu \notin \mathcal{K}(\nabla w)$, then $a\nabla w \cdot \nabla w = |\sigma^T \nabla w|^2 > 0$, and $H''_\epsilon/[H'_\epsilon]^2 \sim 1/H'_\epsilon$ will cause the nonlinear term to blow up.

The above calculation is very formal. A rigorous derivation, using viscosity solutions and different methods, is available in Soner & Touzi [20, 21].

Mean Curvature Flow. This is an interesting example of a target reachability problem, which provides a stochastic representation for a geometric flow. Indeed, consider a target reachability problem with a general target set and state dynamics
$$ dX = \sqrt{2}\,(I - \nu\nu^T)\, dW, $$
where $\nu(\cdot) \in \partial B_1$ is a unit vector in $\mathbb{R}^d$. In our previous notation, the control set $U$ is the set of all unit vectors, and $\mathcal{A}$ is the collection of all adapted processes with values in $U$. Then, the geometric dynamic programming equation (1.5.19) takes the form
$$ -v_t + \sup_{\nu \in \mathcal{K}(\nabla v)}\left\{ -\mathrm{tr}\!\left[ (I - \nu\nu^T) D^2 v \right] \right\} = 0, $$

and
$$ \mathcal{K}(\nabla v) = \left\{ \nu \in \partial B_1 : (I - \nu\nu^T)\nabla v = 0 \right\} = \left\{ \pm \frac{\nabla v}{|\nabla v|} \right\}. $$
So the equation is
$$ -v_t - \Delta v + \frac{D^2 v\, \nabla v \cdot \nabla v}{|\nabla v|^2} = 0. $$
This is exactly the level set equation for the mean curvature flow as in the work of Evans & Spruck [13] and Chen, Giga & Goto [6].

If we use
$$ dX = \sqrt{2}\,\Pi(t)\, dW, $$
where $\Pi(t)$ is a projection matrix of $\mathbb{R}^d$ onto $(d-k)$-dimensional planes, then we obtain the codimension-$k$ mean curvature flow equation as in Ambrosio & Soner [1].

Chapter 2

Super-Replication under Portfolio Constraints

In this Chapter, we will provide all the technical details for this specific problem, as an interesting example of a stochastic optimal control problem.

For this problem, two approaches are available. In the first, after a clever duality argument, the problem is transformed into a standard optimal control problem and then solved by dynamic programming; we refer to Karatzas & Shreve [15] for the details of this method. In the second approach, dynamic programming is used directly. Although, when available, the first approach provides more insight, it is not always possible to apply the dual method. The second approach is direct and applicable to all super-replication problems. The problem with a Gamma constraint is an example for which the dual method is not yet known.

2.1 Solution by Duality

Let us recall the problem briefly. We consider a market with one stock and one bond. By multiplying by $e^{r(T-t)}$ we may take $r = 0$ (or, equivalently, take the bond as the numeraire). Also, by a Girsanov transformation, we may take $\mu = 0$. So the resulting simpler equations for the stock price and wealth processes are
$$ dS(t) = \sigma S(t)\, dW(t), \qquad dZ(t) = \sigma \pi(t) Z(t)\, dW(t). $$

A contingent claim with payoff $G : [0,\infty) \to \mathbb{R}^1$ is given. The minimal super-replication cost is
$$ v(t,s) = \inf\left\{ z : \exists\, \pi(\cdot) \in \mathcal{A} \ \text{s.t.}\ Z^\pi_{t,z}(T) \ge G(S_{t,s}(T)) \ \text{a.s.} \right\}, $$
where $\mathcal{A}$ is the set of all essentially bounded, adapted processes $\pi(\cdot)$ with values in a convex set $K$.

This restriction $\pi(\cdot) \in K$ corresponds to the proportional borrowing (or, equivalently, short-selling of bond) and short-selling of stock constraints that investors typically face.

The above is the so-called writer's price. The buyer's point of view is slightly different. The appropriate target problem for this case is
$$ \bar{v}(t,s) := \sup\left\{ z : \exists\, \pi(\cdot) \in \mathcal{A} \ \text{s.t.}\ Z^\pi_{t,-z}(T) + G(S_{t,s}(T)) \ge 0 \ \text{a.s.} \right\}. $$
Then, the interval $[\bar{v}(t,s), v(t,s)]$ gives the no-arbitrage interval. That is, if the initial price of this claim is in this interval, then there is no admissible portfolio process $\pi(\cdot)$ which will result in a positive position with probability one.

2.1.1 Black-Scholes Case

Let us start with the unconstrained case, $K = \mathbb{R}^1$. Since $Z_{t,z}(\cdot)$ is a martingale, if there are $z$ and $\pi(\cdot) \in \mathcal{A}$ which are super-replicating, then
$$ z = Z_{t,z}(t) = E\left[Z_{t,z}(T) \mid \mathcal{F}_t\right] \ge E\left[G(S_{t,s}(T)) \mid \mathcal{F}_t\right]. $$
Our claim is that the above inequality is in fact an equality for $z = v(t,s)$. Set
$$ Y(u) := E\left[G(S_{t,s}(T)) \mid \mathcal{F}_u\right]. $$
By the martingale representation theorem, $Y(\cdot)$ is a stochastic integral. We choose to write it as
$$ Y(u) = E\left[G(S_{t,s}(T)) \mid \mathcal{F}_t\right] + \int_t^u \sigma \pi(\rho) Y(\rho)\, dW(\rho), $$
with an appropriate $\pi(\cdot) \in \mathcal{A}$. Then,
$$ Y(\cdot) = Z_{t,z_0}(\cdot), \qquad z_0 = E\left[G(S_{t,s}(T)) \mid \mathcal{F}_t\right]. $$

Hence, $v(t,s) \le z_0$. But we have already shown that if an initial capital supports a super-replicating portfolio, then it must be larger than $z_0$. Hence,
$$ v(t,s) = z_0 = E\left[G(S_{t,s}(T)) \mid \mathcal{F}_t\right] =: v_{BS}(t,s), $$
which is the Black-Scholes price. Note that in this case, starting with $z_0$, there always exists a replicating portfolio.

In this example, it can be shown that the buyer's price is also equal to the Black-Scholes price $v_{BS}$. Hence, the no-arbitrage interval defined in the previous subsection is the singleton $\{v_{BS}\}$. Thus, that is the only fair price.
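The representation $v_{BS}(t,s) = E[G(S_{t,s}(T))]$ can be evaluated directly by simulation. A sketch for a call $G(s) = (s-K)^+$ under $dS = \sigma S\,dW$ ($r = 0$ as in the text; strike and parameter values are hypothetical), checked against the closed-form Black-Scholes value with zero rates:

```python
import numpy as np
from math import erf, log, sqrt

def bs_call(s, K, sigma, tau):
    # Closed-form Black-Scholes value of (S(T) - K)^+ with r = 0.
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(s / K) + 0.5 * sigma**2 * tau) / (sigma * sqrt(tau))
    return s * N(d1) - K * N(d1 - sigma * sqrt(tau))

s0, K, sigma, tau = 100.0, 100.0, 0.2, 1.0
rng = np.random.default_rng(1)
Z = rng.standard_normal(400_000)
# S is a martingale: S(T) = s0 * exp(-sigma^2 tau / 2 + sigma sqrt(tau) Z).
ST = s0 * np.exp(-0.5 * sigma**2 * tau + sigma * sqrt(tau) * Z)
mc = np.maximum(ST - K, 0.0).mean()
```

With a few hundred thousand paths the Monte Carlo estimate agrees with the closed form to well within its statistical error.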

2.1.2 General Case

Let us first introduce several elementary facts from convex analysis. Set
$$ \delta_K(\nu) := \sup_{\pi \in K} (-\pi\nu), \qquad \tilde{K} := \left\{ \nu : \delta_K(\nu) < \infty \right\}. $$
In the convex analysis literature, $\delta_K$ is the support function of the convex set $K$. In one dimension, we may directly calculate these functions. However, we use this notation, as it is suggestive of the multidimensional case. Then, it is a classical fact that
$$ \pi \in K \iff \pi\nu + \delta_K(\nu) \ge 0 \quad \forall\, \nu \in \tilde{K}. $$
Let $z$, $\pi(\cdot)$ be an initial capital and, respectively, a super-replicating portfolio. For any $\nu(\cdot)$ with values in $\tilde{K}$, let $P^\nu$ be such that
$$ W^\nu(u) := W(u) + \int_t^u \nu(\rho)\sigma^{-1}\, d\rho $$
is a $P^\nu$-martingale. This measure exists, under integrability conditions on $\nu(\cdot)$, by the Girsanov theorem. Here we assume essentially bounded processes, so $P^\nu$ exists. Set
$$ Z^\nu(u) := Z_{t,z}(u)\, \exp\left( -\int_t^u \delta_K(\nu(\rho))\, d\rho \right). $$
By calculus,
$$ dZ^\nu(u) = Z^\nu(u)\left[ -\left( \delta_K(\nu(u)) + \pi(u)\nu(u) \right) du + \sigma\pi(u)\, dW^\nu(u) \right]. $$

Since $\pi(u) \in K$ and $\nu(u) \in \tilde{K}$, $\delta_K(\nu(u)) + \pi(u)\nu(u) \ge 0$ for all $u$. Therefore, $Z^\nu(u)$ is a $P^\nu$-supermartingale and
$$ E^\nu\left[Z^\nu(T) \mid \mathcal{F}_t\right] \le Z^\nu(t) = Z_{t,z}(t) = z. $$
Also $Z_{t,z}(T) \ge G(S_{t,s}(T))$ $P$-a.s., and therefore $P^\nu$-a.s. as well. Hence,
$$ Z^\nu(T) = \exp\left( -\int_t^T \delta_K(\nu(\rho))\, d\rho \right) Z_{t,z}(T) \ge \exp\left( -\int_t^T \delta_K(\nu(\rho))\, d\rho \right) G(S_{t,s}(T)) \quad P^\nu\text{-a.s.} $$
All of these together yield,
$$ v(t,s) \ge z^\nu := E^\nu\left[ \exp\left( -\int_t^T \delta_K(\nu(\rho))\, d\rho \right) G(S_{t,s}(T)) \,\Big|\, \mathcal{F}_t \right]. $$
Since this holds for any $\nu(\cdot)$ with values in $\tilde{K}$,
$$ v(t,s) \ge \sup_{\nu} z^\nu. $$
The equality is obtained through a super-martingale representation for the right hand side, cf. Karatzas & Shreve [15]. The final result is

Theorem 2.1.1 (Cvitanic & Karatzas [11]) The minimal super-replication cost $v(t,s)$ is the value function of the standard optimal control problem,
$$ v(t,s) = \sup_{\nu(\cdot)} E\left[ \exp\left( -\int_t^T \delta_K(\nu(\rho))\, d\rho \right) G(S^\nu_{t,s}(T)) \,\Big|\, \mathcal{F}_t \right], $$
where $S^\nu_{t,s}$ solves
$$ dS^\nu_{t,s}(u) = S^\nu_{t,s}(u)\left[ -\nu(u)\, du + \sigma\, dW(u) \right]. $$
Now this problem can be solved by dynamic programming. Indeed, an explicit solution was obtained by Broadie, Cvitanic & Soner [5]. We will obtain this solution by the direct approach in the next section.

2.2 Direct Solution

In this section, we will use dynamic programming directly to obtain a solution. The dynamic programming principle for this problem is
$$ v(t,s) = \inf\left\{ z : \exists\, \pi(\cdot) \in \mathcal{A} \ \text{s.t.}\ Z^\pi_{t,z}(\theta) \ge v(\theta, S_{t,s}(\theta)) \ \text{a.s.} \right\} $$
for stopping times $\theta$. We will use this to derive first the dynamic programming equation and then the solution.

2.2.1 Viscosity Solutions

We refer to the books by Barles [2], Fleming & Soner [14], and the User's Guide [10] for an introduction to viscosity solutions and for more references on the subject.

Here we briefly introduce the definition. For a locally bounded function $v$, set
$$ v^*(t,s) := \limsup_{(t',s') \to (t,s)} v(t',s'), \qquad v_*(t,s) := \liminf_{(t',s') \to (t,s)} v(t',s'). $$
Consider the partial differential equation,
$$ F(t,s,v,v_t,v_s,v_{ss}) = 0. $$
We say that $v$ is a viscosity supersolution if, for any $\varphi \in C^{1,2}$ and any minimizer $(t_0,s_0)$ of $(v_* - \varphi)$,
$$ F(t_0,s_0,v_*(t_0,s_0), \varphi_t(t_0,s_0), \varphi_s(t_0,s_0), \varphi_{ss}(t_0,s_0)) \ge 0. \qquad (2.2.1) $$
A subsolution satisfies
$$ F(t_0,s_0,v^*(t_0,s_0), \varphi_t(t_0,s_0), \varphi_s(t_0,s_0), \varphi_{ss}(t_0,s_0)) \le 0 \qquad (2.2.2) $$
at any maximizer $(t_0,s_0)$ of $(v^* - \varphi)$. Note that a viscosity solution of $F = 0$ is not necessarily a viscosity solution of $-F = 0$.

It can be checked that the distance function introduced in the minimal time problem is a viscosity solution of the Eikonal equation.

Theorem 2.2.1 The minimal super-replication cost $v$ is a viscosity solution of the DPE,
$$ \min\left\{ -v_t - \tfrac12\sigma^2 s^2 v_{ss} - rsv_s + rv\,;\; bv - sv_s\,;\; sv_s + av \right\} = 0. \qquad (2.2.3) $$
The proof will be given in the next two subsections.

2.2.2 Supersolution

Assume $G \ge 0$. Let $\varphi \in C^{1,2}$ and
$$ (v_* - \varphi)(t_0,s_0) = 0 \le (v_* - \varphi)(t,s) \quad \forall (t,s). $$
Choose $t_n$, $s_n$, $z_n$ such that
$$ (t_n,s_n) \to (t_0,s_0), \qquad v(t_n,s_n) \to v_*(t_0,s_0), \qquad v(t_n,s_n) \le z_n \le v(t_n,s_n) + \frac{1}{n^2}. $$
Then, by the dynamic programming principle, there is $\pi_n(\cdot) \in \mathcal{A}$ so that
$$ Z_n\!\left(t_n + \tfrac1n\right) \ge v\!\left(t_n + \tfrac1n, S_n\!\left(t_n + \tfrac1n\right)\right) \quad \text{a.s.}, $$
where
$$ Z_n := Z^{\pi_n}_{t_n,z_n}, \qquad S_n := S_{t_n,s_n}. $$
Since $v \ge v_* \ge \varphi$,
$$ Z_n\!\left(t_n + \tfrac1n\right) \ge \varphi\!\left(t_n + \tfrac1n, S_n\!\left(t_n + \tfrac1n\right)\right) \quad \text{a.s.} $$
We use Itô's rule and the dynamics of $Z_n(\cdot)$ to obtain,
$$ z_n + \int_{t_n}^{t_n+\frac1n} \sigma\pi_n(u) Z_n(u)\, dW(u) \ \ge\ \varphi(t_n,s_n) + \int_{t_n}^{t_n+\frac1n} (\varphi_t + \mathcal{L}\varphi)(u, S_n(u))\, du + \int_{t_n}^{t_n+\frac1n} \sigma \varphi_s(u, S_n(u)) S_n(u)\, dW(u). $$
We rewrite this as
$$ c_n + \int_{t_n}^{t_n+\frac1n} a_n(u)\, du + \int_{t_n}^{t_n+\frac1n} b_n(u)\, dW(u) \ge 0 \quad \text{a.s.}, \quad \forall n, $$
where
$$ c_n := z_n - \varphi(t_n,s_n), \qquad \text{so that } c_n \ge 0 \text{ and } c_n \to 0, $$

$$ a_n(u) = -(\varphi_t + \mathcal{L}\varphi)(u, S_n(u)), \qquad \left[ \mathcal{L}\varphi = \tfrac12\sigma^2 s^2 \varphi_{ss} \right], $$
$$ b_n(u) = \sigma\left[ \pi_n(u) Z_n(u) - \varphi_s(u, S_n(u)) S_n(u) \right]. $$
For a real number $\lambda > 0$, let $P^{\lambda,n}$ be so that
$$ W^{\lambda,n}(u) := W(u) + \lambda \int_{t_n}^u b_n(\rho)\, d\rho $$
is a $P^{\lambda,n}$-martingale. Set
$$ M_n(u) := c_n + \int_{t_n}^u a_n(\rho)\, d\rho + \int_{t_n}^u b_n(\rho)\, dW(\rho) = c_n + \int_{t_n}^u \left( a_n(\rho) - \lambda b_n^2(\rho) \right) d\rho + \int_{t_n}^u b_n(\rho)\, dW^{\lambda,n}(\rho). $$
Since $M_n(t_n + \frac1n) \ge 0$,
$$ 0 \le E^{\lambda,n}\left[ M_n\!\left(t_n + \tfrac1n\right) \right] = c_n + E^{\lambda,n}\left[ \int_{t_n}^{t_n+\frac1n} \left( a_n(\rho) - \lambda b_n^2(\rho) \right) d\rho \right]. $$
Note that $a_n(\rho) \to -(\varphi_t + \mathcal{L}\varphi)(t_0,s_0)$ as $\rho \to t_0$. We multiply the above inequality by $n$ and let $n$ tend to infinity. The result is, for every $\lambda > 0$,
$$ -(\varphi_t + \mathcal{L}\varphi)(t_0,s_0) \ \ge\ \lambda \liminf_{n\to\infty} E^{\lambda,n}\left[ n \int_{t_n}^{t_n+\frac1n} \sigma^2 \left( \pi_n(u) v_*(t_0,s_0) - s_0 \varphi_s(t_0,s_0) \right)^2 du \right]. $$
Hence,
$$ -(\varphi_t + \mathcal{L}\varphi)(t_0,s_0) \ge 0, $$
$$ \liminf_{n\to\infty} E^{\lambda,n}\left[ n \int_{t_n}^{t_n+\frac1n} \sigma^2 \left( \pi_n(u) v_*(t_0,s_0) - s_0 \varphi_s(t_0,s_0) \right)^2 du \right] = 0. $$
Moreover, since $v_* \ge 0$ and $\pi_n(\cdot) \in K = [-a, b]$,
$$ b\, v_*(t_0,s_0) - s_0 \varphi_s(t_0,s_0) \ge 0, \qquad s_0 \varphi_s(t_0,s_0) + a\, v_*(t_0,s_0) \ge 0. \qquad (2.2.4) $$

In conclusion,
$$ F(t_0,s_0,v_*(t_0,s_0), \varphi_t(t_0,s_0), \varphi_s(t_0,s_0), \varphi_{ss}(t_0,s_0)) \ge 0, \qquad (2.2.5) $$
where
$$ F(t,s,v,q,\xi,A) = \min\left\{ -q - \tfrac12\sigma^2 s^2 A\,;\; bv - s\xi\,;\; av + s\xi \right\}. $$
Here $q$ stands for $v_t$, $\xi$ for $v_s$, and $A$ for $v_{ss}$.

Thus, we proved that

Theorem 2.2.2 $v_*$ is a viscosity supersolution of
$$ F(t,s,v,v_t,v_s,v_{ss}) = \min\left\{ -v_t - \tfrac{\sigma^2}{2} s^2 v_{ss}\,;\; bv - sv_s\,;\; av + sv_s \right\} \ge 0 $$
on $(0,T) \times (0,\infty)$.

    on (0, T) (0, ).

2.2.3 Subsolution

Assume that $G \ge 0$ and $G \not\equiv 0$. Let $\varphi \in C^{1,2}$, $(t_0,s_0)$ be such that
$$ (v^* - \varphi)(t_0,s_0) = 0 \ge (v^* - \varphi)(t,s) \quad \forall (t,s). $$
By considering $\tilde\varphi = \varphi + (t-t_0)^2 + (s-s_0)^4$, we may assume the above maximum of $(v^* - \varphi)$ is strict. (Note $\tilde\varphi_t = \varphi_t$, $\tilde\varphi_s = \varphi_s$, $\tilde\varphi_{ss} = \varphi_{ss}$ at $(t_0,s_0)$.) We need to show that
$$ F(t_0,s_0,v^*(t_0,s_0), \varphi_t(t_0,s_0), \varphi_s(t_0,s_0), \varphi_{ss}(t_0,s_0)) \le 0. \qquad (2.2.6) $$
Suppose to the contrary. Since $v^* = \varphi$ at $(t_0,s_0)$, and since $F$ and $\varphi$ are smooth, there exist a neighborhood $\mathcal{N}$ of $(t_0,s_0)$ and $\epsilon > 0$ so that
$$ F(t,s,\varphi,\varphi_t,\varphi_s,\varphi_{ss}) \ge \epsilon \quad \forall (t,s) \in \mathcal{N}. \qquad (2.2.7) $$
Since $G \ge 0$ and $G \not\equiv 0$, and $s_0 \ne 0$, we have $v^*(t_0,s_0) > 0$. So $\varphi > 0$ in a neighborhood of $(t_0,s_0)$. Also, since $(v^* - \varphi)$ has a strict maximum at $(t_0,s_0)$, there is a subset of $\mathcal{N}$, denoted by $\mathcal{N}$ again, and $\eta > 0$ so that
$$ \varphi > 0 \text{ on } \mathcal{N}, \quad \text{and} \quad v(t,s) \le e^{-\eta}\varphi(t,s) \quad \forall (t,s) \in \partial\mathcal{N}. $$

Set
$$ \hat\pi(t,s) := \frac{s\varphi_s(t,s)}{\varphi(t,s)}, \qquad (t,s) \in \mathcal{N}. $$
Then, by (2.2.7), $\hat\pi \in K$. Fix $(t,s) \in \mathcal{N}$ near $(t_0,s_0)$ and set $S(u) := S_{t,s}(u)$,
$$ \theta := \inf\{u \ge t : (u, S(u)) \notin \mathcal{N}\}. $$
Let
$$ dZ(u) = Z(u)\hat\pi(u,S(u))\,\sigma\, dW(u), \quad u \in [t,\theta], \qquad Z(t) = \varphi(t,s). $$
By Itô's rule, for $t < u < \theta$,
$$ d\left[ Z(u) - \varphi(u,S(u)) \right] = -(\varphi_t + \mathcal{L}\varphi)(u,S(u))\, du + \sigma\hat\pi(u,S(u))\left[ Z(u) - \varphi(u,S(u)) \right] dW(u). $$
Since $Z(t) - \varphi(t,S(t)) = 0$ and $-(\varphi_t + \mathcal{L}\varphi) \ge 0$ on $\mathcal{N}$,
$$ Z(u) \ge \varphi(u,S(u)), \quad u \in [t,\theta]. $$
In particular,
$$ Z(\theta) \ge \varphi(\theta, S(\theta)). $$
Also, $(\theta, S(\theta)) \in \partial\mathcal{N}$, and therefore
$$ v(\theta,S(\theta)) \le e^{-\eta}\varphi(\theta,S(\theta)). $$
We combine all these to arrive at,
$$ Z(\theta) \ge \varphi(\theta,S(\theta)) \ge e^{\eta} v(\theta,S(\theta)) \ge v(\theta,S(\theta)). $$
Then,
$$ Z^{\hat\pi}_{t, e^{-\eta}\varphi(t,s)}(\theta) = e^{-\eta} Z(\theta) \ge v(\theta,S(\theta)). $$
By the dynamic programming principle (also since $\hat\pi(\cdot) \in K$),
$$ e^{-\eta}\varphi(t,s) \ge v(t,s). $$
Since the above inequality holds for every $(t,s)$, by letting $(t,s)$ tend to $(t_0,s_0)$ we arrive at
$$ e^{-\eta}\varphi(t_0,s_0) \ge v^*(t_0,s_0). $$
But we assumed that $\varphi(t_0,s_0) = v^*(t_0,s_0) > 0$, a contradiction. Hence the inequality (2.2.6) must hold, and $v^*$ is a viscosity subsolution.

In the next subsection, we will study the terminal condition and then provide an explicit solution.

2.2.4 Terminal Condition or Face-Lifting

We showed that the value function $v(t,s)$ is a viscosity solution of
$$ F(t,s,v(t,s), v_t(t,s), v_s(t,s), v_{ss}(t,s)) = 0, \qquad s > 0,\ t < T. $$
In particular,
$$ b\,v(t,s) - s v_s(t,s) \ge 0, \qquad a\,v(t,s) + s v_s(t,s) \ge 0, \qquad s > 0,\ t < T, \qquad (2.2.8) $$
in the viscosity sense. Set
$$ V^*(s) := \limsup_{s' \to s,\ t \uparrow T} v(t,s'), \qquad V_*(s) := \liminf_{s' \to s,\ t \uparrow T} v(t,s'). $$
Formally, since $v(t,\cdot)$ satisfies (2.2.8) for every $t < T$, we also expect $V^*$ and $V_*$ to satisfy (2.2.8) as well. However, the given contingent claim $G$ may not satisfy (2.2.8).

Example. Consider a call option,
$$ G(s) = (s-K)^+, $$
with the constraint set $(-\infty, b]$ for some $b > 1$. Then, for $s > K$,
$$ b\,G(s) - sG_s(s) = b(s-K) - s. $$
This expression is negative for $s$ near $K$. Note that the Black-Scholes replicating portfolio requires almost one share of the stock when the time to maturity $(T-t)$ is near zero and when $S(t) > K$ is close to $K$. Again, at these points the price of the option is near zero. Hence, to be able to finance this replicating portfolio, the investor has to borrow an amount which is an arbitrarily large multiple of her wealth. So any borrowing constraint (i.e., any $b < +\infty$) makes the replicating portfolio inadmissible.

We formally proceed and assume $V_* = V^* =: V$. Formally, we expect $V$ to satisfy (2.2.8) and also $V \ge G$. Hence,
$$ b\,V(s) - sV_s(s) \ge 0, \qquad a\,V(s) + sV_s(s) \ge 0, \qquad V(s) \ge G(s), \qquad s \ge 0. \qquad (2.2.9) $$

Since we are looking for the minimal super-replication cost, it is natural to guess that $V(\cdot)$ is the minimal function satisfying (2.2.9). So we define
$$ \hat{G}(s) := \inf\left\{ H(s) : H \text{ satisfies (2.2.9)} \right\}. \qquad (2.2.10) $$

Example. Here we compute $\hat{G}$ corresponding to $G(s) = (s-K)^+$ and the constraint set $(-\infty, b]$ for some $b > 1$. Then, $\hat{G}$ satisfies
$$ \hat{G}(s) \ge (s-K)^+, \qquad b\,\hat{G}(s) - s\hat{G}_s(s) \ge 0. \qquad (2.2.11) $$
Assuming $\hat{G}(\cdot)$ is smooth, we integrate the second inequality to conclude
$$ \hat{G}(s_0) \ge \left( \frac{s_0}{s_1} \right)^b \hat{G}(s_1), \qquad s_1 \ge s_0 > 0. $$
Again, assuming that equality holds in (2.2.11) on $[0, \hat{s}]$ for some $\hat{s}$, we get
$$ \hat{G}(s) = \left( \frac{s}{\hat{s}} \right)^b \hat{G}(\hat{s}), \qquad s \le \hat{s}. $$
Assume that $\hat{G}(s) = G(s)$ for $s \ge \hat{s}$. Then, we have the following claim:
$$ \hat{G}(s) = h(s) := \begin{cases} (s-K)^+, & s \ge \hat{s}, \\[2pt] \left( \frac{s}{\hat{s}} \right)^b (\hat{s}-K)^+, & s < \hat{s}, \end{cases} $$
where $\hat{s} > K$ is the unique point for which $h \in C^1$, i.e., $h_s(\hat{s}) = G_s(\hat{s}) = 1$. Then,
$$ \hat{s} = \frac{bK}{b-1}. $$
It is easy to show that $h$ (with $\hat{s}$ as above) satisfies (2.2.9). It remains to show that $h$ is the minimal function satisfying (2.2.9). This will be done through a general formula below.
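The claim is easy to probe numerically. For the constraint set $(-\infty,b]$, the general sup representation below reduces (in our reading) to maximizing $e^{-b\lambda} G(se^{\lambda})$ over $\lambda \ge 0$; the sketch checks on a grid that this direct "lifting" reproduces the piecewise formula for $h$ (the values $K = 1$, $b = 2$ are hypothetical).

```python
import numpy as np

# Hypothetical values: strike K = 1, borrowing bound b = 2, so s_hat = 2.
K, b = 1.0, 2.0
s_hat = b * K / (b - 1.0)

def h(s):
    # The claimed face-lifted payoff.
    return np.where(s >= s_hat, np.maximum(s - K, 0.0),
                    (s / s_hat) ** b * (s_hat - K))

# Direct lifting: sup over lam >= 0 of e^{-b*lam} * G(s * e^{lam}).
lam = np.linspace(0.0, 5.0, 20001)[None, :]
s = np.linspace(0.05, 4.0, 80)[:, None]
lifted = np.max(np.exp(-b * lam) * np.maximum(s * np.exp(lam) - K, 0.0), axis=1)
err = np.max(np.abs(lifted - h(s[:, 0])))
```

The maximizer is $\lambda = 0$ for $s \ge \hat{s}$ (no lifting needed) and $se^{\lambda} = \hat{s}$ for $s < \hat{s}$, which is exactly where the power branch of $h$ comes from.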

Theorem 2.2.3 Assume that $G$ is nonnegative, continuous, and grows at most linearly. Let $\hat{G}$ be given by (2.2.10). Then,
$$ V^*(s) = V_*(s) = \hat{G}(s). \qquad (2.2.12) $$

In particular, $v(t,\cdot)$ converges to $\hat{G}$ uniformly on compact sets. Moreover,
$$ \hat{G}(s) = \sup_{\nu \in \tilde{K}} e^{-\delta_K(\nu)}\, G(se^{-\nu}), $$
and
$$ v(t,s) = E\left[ \hat{G}(S_{t,s}(T)) \mid \mathcal{F}_t \right], $$
i.e., $v$ is the unconstrained (Black-Scholes) price of the modified claim $\hat{G}$.

This is proved in Broadie, Cvitanic & Soner [5]. A presentation is also given in Karatzas & Shreve [15] (Chapter 5.7). A proof based on the dual formulation is simpler and is given in both of the above references.

A PDE-based proof of the formula for $\hat{G}(\cdot)$ is also available. A complete proof in the case of a Gamma constraint is given in the next Chapter. Here we only prove the final statement through a PDE argument. Set
$$ u(t,s) = E\left[ \hat{G}(S_{t,s}(T)) \mid \mathcal{F}_t \right]. $$
Then,
$$ -u_t = \frac12 \sigma^2 s^2 u_{ss}, \qquad t < T,\ s > 0, $$
and
$$ u(T,s) = \hat{G}(s). $$
It is clear that $u$ is smooth. Set
$$ w(t,s) := b\,u(t,s) - s u_s(t,s). $$
Then, since $u(T,s) = \hat{G}(s)$ satisfies the constraints,
$$ w(T,s) \ge 0 \qquad \forall s \ge 0. $$
Also, using the equation satisfied by $u$, we calculate that
$$ -w_t = -b u_t + s\,\partial_s(u_t) = \frac12\sigma^2 s^2 b\, u_{ss} - \frac{s}{2}\sigma^2 \left[ s^2 u_{ss} \right]_s = \frac12\sigma^2 s^2 \left[ b\, u_{ss} - 2 u_{ss} - s u_{sss} \right] = \frac12\sigma^2 s^2 w_{ss}. $$
So, by the Feynman-Kac formula,
$$ b\,u(t,s) - s u_s(t,s) = w(t,s) = E\left[ w(T, S_{t,s}(T)) \mid \mathcal{F}_t \right] \ge 0, \qquad t \le T,\ s \ge 0. $$

Similarly, we can show that
$$ a\,u(t,s) + s u_s(t,s) \ge 0, \qquad t \le T,\ s \ge 0. $$
Therefore,
$$ F(t,s,u(t,s), u_t(t,s), u_s(t,s), u_{ss}(t,s)) = 0, \qquad t < T,\ s > 0, $$
where $F$ is as in the previous section. Since $v$, the value function, also solves this equation, and also $v(T,s) = u(T,s) = \hat{G}(s)$, by a comparison result for the above equation we can conclude that $u = v$, the value function.

Chapter 3

Super-Replication with Gamma Constraints

This chapter is taken from Cheridito, Soner & Touzi [7, 8] and Soner & Touzi [19]. For brevity of presentation, we again consider a market with only one stock, and assume that the mean return rate $\mu = 0$, by a Girsanov transformation, and that the interest rate $r = 0$, by appropriately discounting all the prices. Then, the stock price $S(\cdot)$ follows
$$ dS(t) = \sigma S(t)\, dW(t), $$
and the wealth process $Z(\cdot)$ solves
$$ dZ(t) = Y(t)\, dS(t), $$
where the portfolio process $Y(\cdot)$ is a semimartingale,
$$ dY(t) = dA(t) + \Gamma(t)\, dS(t). $$
The control processes $A(\cdot)$ and $\Gamma(\cdot)$ are required to satisfy
$$ \Gamma_*(S(t)) \le \Gamma(t) \le \Gamma^*(S(t)) \qquad \forall t,\ \text{a.s.}, $$
for given functions $\Gamma_*$ and $\Gamma^*$. The triplet $(Y(t) = y, A(\cdot), \Gamma(\cdot)) = \nu$ is the control. Further restrictions on $A(\cdot)$ will be specified later. Then, the minimal super-replication cost $v(t,s)$ is
$$ v(t,s) = \inf\left\{ z : \exists\, \nu \in \mathcal{A} \ \text{s.t.}\ Z^\nu_{t,z}(T) \ge G(S_{t,s}(T)) \ \text{a.s.} \right\}. $$

3.1 Pure Upper Bound Case

We consider the case with no lower bound, i.e., $\Gamma_* \equiv -\infty$. Since formally we expect $Y(t) = v_s(t,S(t))$ to be the optimal portfolio, we also expect $\Gamma(t) = v_{ss}(t,S(t))$. Hence, the Gamma constraint translates into a differential inequality,
$$ v_{ss}(t,s) \le \Gamma^*(s). $$
In view of the portfolio constraint example, the expected DPE is
$$ \min\left\{ -v_t - \tfrac12\sigma^2 s^2 v_{ss}\,;\; -s^2 v_{ss} + s^2\Gamma^*(s) \right\} = 0. \qquad (3.1.1) $$
We could eliminate the $s^2$ factor in the second term, but we choose to write the equation this way for reasons that will be clear later.

Theorem 3.1.1 Assume $G$ is nonnegative, lower semicontinuous, and grows at most linearly. Then, $v$ is a viscosity solution of (3.1.1).

We will prove the sub- and supersolution parts separately.

3.1.1 Supersolution

Consider a test function $\varphi$ and $(t_0,s_0) \in (0,T) \times (0,\infty)$ so that
$$ (v_* - \varphi)(t_0,s_0) = \min (v_* - \varphi) = 0. $$
Choose $(t_n,s_n) \to (t_0,s_0)$ so that $v(t_n,s_n) \to v_*(t_0,s_0)$. Further, choose controls $\nu_n = (y_n, A_n(\cdot), \Gamma_n(\cdot))$ so that, with $z_n := v(t_n,s_n) + 1/n^2$,
$$ Z^{\nu_n}_{t_n,z_n}\!\left(t_n + \tfrac1n\right) \ge v\!\left(t_n + \tfrac1n, S_{t_n,s_n}\!\left(t_n + \tfrac1n\right)\right) \quad \text{a.s.} $$
The existence of $\nu_n$ follows from the dynamic programming principle with stopping time $\theta_n := t_n + \frac1n$. Set $S_n(\cdot) := S_{t_n,s_n}(\cdot)$ and $\mathcal{L}\varphi := \varphi_t + \tfrac12\sigma^2 s^2 \varphi_{ss}$. Since $v \ge v_* \ge \varphi$, Itô's rule yields,
$$ z_n + \int_{t_n}^{\theta_n} Y^{\nu_n}_{t_n,y_n}(u)\, dS(u) \ \ge\ \varphi(t_n,s_n) + \int_{t_n}^{\theta_n} \left[ \mathcal{L}\varphi(u,S_n(u))\, du + \varphi_s(u,S_n(u))\, dS(u) \right]. \qquad (3.1.2) $$

Taking the expected value implies
$$ n \int_{t_n}^{\theta_n} E\left[ \mathcal{L}\varphi(u,S_n(u)) \right] du \ \le\ n\left[ z_n - \varphi(t_n,s_n) \right] = n\left[ v - \varphi \right](t_n,s_n) + \frac1n. $$
We may choose $(t_n,s_n)$ so that $n[v - \varphi](t_n,s_n) \to 0$. Hence, by taking the limit we obtain,
$$ \mathcal{L}\varphi(t_0,s_0) \le 0. $$
To obtain the second inequality, we return to (3.1.2), apply Itô's rule to $\varphi_s$, and use the dynamics of $Y(\cdot)$. The result is
$$ \beta_n + \int_{t_n}^{\theta_n} a_n(u)\, du + \int_{t_n}^{\theta_n}\!\!\int_{t_n}^u b_n(\rho)\, d\rho\, dS(u) + \int_{t_n}^{\theta_n}\!\!\int_{t_n}^u c_n(\rho)\, dS(\rho)\, dS(u) + \int_{t_n}^{\theta_n} d_n\, dS(u) \ \ge\ 0, $$
where
$$ \beta_n = z_n - \varphi(t_n,s_n), \qquad a_n(u) = -\mathcal{L}\varphi(u,S_n(u)), $$
$$ b_n(u)\,du = dA_n(u) - \mathcal{L}\varphi_s(u,S_n(u))\,du, \qquad c_n(u) = \Gamma_n(u) - \varphi_{ss}(u,S_n(u)), \qquad d_n = y_n - \varphi_s(t_n,s_n). $$
We now need to define the set of admissible controls in a precise way and then use our results on double stochastic integrals. We refer to the paper by Cheridito, Soner & Touzi for the definition of the set of admissible controls. Then, we can use the results of the next subsection to conclude that
$$ -\mathcal{L}\varphi(t_0,s_0) = \limsup_{u \downarrow t_0} a_n(u) \ge 0, \qquad \text{and} \qquad \limsup_{u \downarrow t_0} c_n(u) \ge 0. $$
In particular, since $\Gamma_n(u) \le \Gamma^*(S_n(u))$, this implies that
$$ \varphi_{ss}(t_0,s_0) \le \Gamma^*(s_0). $$

Hence, $v_*$ is a viscosity supersolution of
$$ \min\left\{ -v_t - \tfrac12\sigma^2 s^2 v_{ss}\,;\; -s^2 v_{ss} + s^2\Gamma^* \right\} \ge 0. $$

3.1.2 Subsolution

Consider a smooth $\varphi$ and $(t_0,s_0)$ so that
$$ (v^* - \varphi)(t_0,s_0) = \max (v^* - \varphi) = 0. $$
Suppose that
$$ \min\left\{ -\varphi_t(t_0,s_0) - \tfrac12\sigma^2 s_0^2\varphi_{ss}(t_0,s_0)\,;\; -s_0^2\varphi_{ss}(t_0,s_0) + s_0^2\Gamma^*(s_0) \right\} > 0. $$
We will obtain a contradiction, thereby proving the subsolution property. We proceed exactly as in the constrained portfolio case, using the controls
$$ y = \varphi_s(t,s), \qquad dA(u) = \mathcal{L}\varphi_s(u,S(u))\, du, \qquad \Gamma(u) = \varphi_{ss}(u,S(u)), $$
where $(t,s)$ is the starting point. Then, in a neighborhood of $(t_0,s_0)$, $Y(u) = \varphi_s(u,S(u))$ and $\Gamma(\cdot)$ satisfies the constraint. Since $-\mathcal{L}\varphi > 0$ in this neighborhood, we can proceed exactly as in the constrained portfolio problem.

3.1.3 Terminal Condition

In this subsection, we will show that the terminal condition is satisfied with a modified function $\hat{G}$. This is a result similar to the portfolio constraint case, and the chief reason for modifying the terminal data is to make it compatible with the constraints. Assume
$$ 0 \le s^2\Gamma^*(s) \le c(1 + s^2). \qquad (3.1.3) $$

Theorem 3.1.2 Under the previous assumptions on $G$,
$$ \lim_{t \uparrow T,\ s' \to s} v(t,s') = \hat{G}(s), $$
where $\hat{G}$ is the smallest function satisfying (i) $\hat{G} \ge G$ and (ii) $\hat{G}_{ss}(s) \le \Gamma^*(s)$. Let $\gamma(s)$ be a smooth function satisfying
$$ \gamma_{ss}(s) = \Gamma^*(s). $$

Then,
$$ \hat{G}(s) = h^{conc}(s) + \gamma(s), \qquad h(s) := G(s) - \gamma(s), $$
and $h^{conc}(s)$ is the concave envelope of $h$, i.e., the smallest concave function above $h$.
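The concave envelope is straightforward to compute on a grid: it is the upper hull of the sampled points. A minimal sketch (the sampled $h$ is a hypothetical stand-in for $G - \gamma$):

```python
import numpy as np

def concave_envelope(s, h):
    # Upper hull of the points (s_i, h_i): keep a chain with decreasing
    # slopes, dropping any point on or below a chord of its neighbours.
    hull = []
    for i in range(len(s)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            if (h[i1] - h[i0]) * (s[i] - s[i0]) <= (h[i] - h[i0]) * (s[i1] - s[i0]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(s, s[hull], h[hull])

s = np.linspace(0.0, 3.0, 301)
h = 0.2 * np.sin(3.0 * s) + 0.1 * s        # non-concave stand-in for G - gamma
env = concave_envelope(s, h)
```

The result dominates $h$, is concave, and leaves a concave input unchanged, which is all the theorem's formula requires of $h^{conc}$.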

Proof: Consider a sequence $(t_n,s_n) \to (T,s_0)$. Choose $\nu_n \in \mathcal{A}(t_n)$ so that
$$ Z^{\nu_n}_{t_n,\, v(t_n,s_n)+\frac1n}(T) \ge G(S_{t_n,s_n}(T)). $$
Take the expected value to arrive at
$$ v(t_n,s_n) + \frac1n \ge E\left[ G(S_{t_n,s_n}(T)) \mid \mathcal{F}_{t_n} \right]. $$
Hence, by Fatou's lemma,
$$ V_*(s_0) := \liminf_{(t_n,s_n)\to(T,s_0)} v(t_n,s_n) \ge G(s_0). $$
Moreover, since $v$ is a supersolution of (3.1.1), for every $t$,
$$ v_{ss}(t,s) \le \Gamma^*(s) \quad \text{in the viscosity sense}. $$
By the stability of the viscosity property,
$$ (V_*)_{ss}(s) \le \Gamma^*(s) \quad \text{in the viscosity sense}. $$
Hence, we have proved that $V_*$ is a viscosity supersolution of
$$ \min\left\{ V_*(s) - G(s)\,;\; -(V_*)_{ss}(s) + \Gamma^*(s) \right\} \ge 0. $$
Set $w(s) = V_*(s) - \gamma(s)$. Then, $w$ is a viscosity supersolution of
$$ \min\left\{ w(s) - h(s)\,;\; -w_{ss}(s) \right\} \ge 0. $$
Therefore, $w$ is concave and $w \ge h$. Hence, $w \ge h^{conc}$, and consequently
$$ V_*(s) = w(s) + \gamma(s) \ge h^{conc}(s) + \gamma(s) = \hat{G}(s). $$

To prove the reverse inequality, fix $(t,s)$. For $\epsilon > 0$ and a constant $C$ to be chosen, set
$$ \gamma^\epsilon(t,s) := (1-\epsilon)\gamma(s) + C(T-t)s^2/2. $$
Set
$$ \Gamma(u) = \gamma^\epsilon_{ss}(u,S(u)), \qquad dA(u) = \mathcal{L}\gamma^\epsilon_s(u,S(u))\, du, \qquad y_0 = \gamma^\epsilon_s(t,s) + p_0, $$
where $p_0$ is any point in the superdifferential of $h^{\epsilon,conc}$, $h^\epsilon := G - (1-\epsilon)\gamma$. Then,
$$ p_0(s' - s) + h^{\epsilon,conc}(s) \ge h^{\epsilon,conc}(s') \qquad \forall s'. $$
For any $z$, consider $Z_{t,z}$ with $\nu = (y_0, A(\cdot), \Gamma(\cdot))$. Note that, for $t$ near $T$, $\nu \in \mathcal{A}(t)$, and
$$ Z_{t,z}(T) = z + \int_t^T \left( \gamma^\epsilon_s(t,s) + p_0 \right) dS(u) + \int_t^T \left[ \int_t^u \mathcal{L}\gamma^\epsilon_s(r,S(r))\, dr + \int_t^u \gamma^\epsilon_{ss}(r,S(r))\, dS(r) \right] dS(u) $$
$$ = z + p_0\left( S(T) - s \right) + \int_t^T \gamma^\epsilon_s(u,S(u))\, dS(u) $$
$$ \ge z + h^{\epsilon,conc}(S(T)) - h^{\epsilon,conc}(s) + \gamma^\epsilon(T,S(T)) - \gamma^\epsilon(t,s) - \int_t^T \mathcal{L}\gamma^\epsilon(u,S(u))\, du. $$
We directly calculate that, with $c$ as in (3.1.3),
$$ -\mathcal{L}\gamma^\epsilon(t,s) = \frac{Cs^2}{2} - \frac12\sigma^2 s^2\left[ (1-\epsilon)\Gamma^*(s) + C(T-t) \right] \ge s^2\left\{ \frac{C}{2} - \frac12\sigma^2\left[ c + C(T-t) \right] \right\} - \frac12\sigma^2 c \ \ge\ -\frac12\sigma^2 c, $$
provided that $t$ is sufficiently near $T$ and $C$ is sufficiently large. Hence, with
$$ z_0 = h^{\epsilon,conc}(s) + \gamma^\epsilon(t,s) + \frac12\sigma^2 c\,(T-t), $$

we have
$$ Z_{t,z_0}(T) \ge h^{\epsilon,conc}(S(T)) + \gamma^\epsilon(T,S(T)) \ge G(S(T)) - (1-\epsilon)\gamma(S(T)) + (1-\epsilon)\gamma(S(T)) = G(S(T)). $$
Therefore,
$$ z_0 = h^{\epsilon,conc}(s) + \gamma^\epsilon(t,s) + \frac12\sigma^2 c\,(T-t) \ \ge\ v(t,s), $$
for all $\epsilon > 0$, $C$ large, and $t$ sufficiently close to $T$. Hence,
$$ \limsup_{(t,s)\to(T,s_0)} v(t,s) \le h^{\epsilon,conc}(s_0) + \gamma^\epsilon(T,s_0) = h^{\epsilon,conc}(s_0) + (1-\epsilon)\gamma(s_0) = \left( G - (1-\epsilon)\gamma \right)^{conc}(s_0) + (1-\epsilon)\gamma(s_0) =: \hat{G}^\epsilon(s_0). $$
It is easy to prove that
$$ \lim_{\epsilon \to 0} \hat{G}^\epsilon(s_0) = \hat{G}(s_0). \qquad \Box $$

3.2 Double Stochastic Integrals

In this section, we study the asymptotic behavior of certain stochastic integrals as $t \downarrow 0$. These properties were used in the derivation of the viscosity property of the value function. Here we only give some of the results and proofs; a detailed discussion is given in the recent manuscript of Cheridito, Soner & Touzi.

For a predictable, bounded process $b$, let
$$ V_b(t) := \int_0^t \int_0^u b(\rho)\, dW(\rho)\, dW(u). $$
For another predictable, bounded process $m$, let
$$ M(t) := \int_0^t m(u)\, dW(u), $$

$$ V^m_b(t) := \int_0^t \int_0^u b(\rho)\, dM(\rho)\, dM(u). $$

In the easy case when $b(t) = \beta$, $t \ge 0$, for some constant $\beta \in \mathbb{R}$, we have
$$ V_b(t) = \frac{\beta}{2}\left[ W^2(t) - t \right], \qquad t \ge 0. $$
If $\beta \ge 0$, it follows from the law of the iterated logarithm for Brownian motion that
$$ \limsup_{t \downarrow 0} \frac{2V_b(t)}{h(t)} = \beta, \qquad \text{where } h(t) := 2t \log\log\frac1t,\ t > 0, \qquad (3.2.4) $$
and the equality in (3.2.4) is understood in the almost sure sense. On the other hand, if $\beta < 0$, it can be deduced from the fact that almost all paths of a one-dimensional standard Brownian motion cross zero on all time intervals $(0,\delta]$, $\delta > 0$, that
$$ \limsup_{t \downarrow 0} \frac{2V_b(t)}{t} = -\beta. \qquad (3.2.5) $$
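The constant-$b$ identity above is easy to confirm by discretizing the iterated integral (the step count and seed below are arbitrary):

```python
import numpy as np

# For b = beta constant, V_b(t) = (beta/2)(W(t)^2 - t): compare an Euler
# (left-point, Ito) discretization of the double integral with the closed form.
rng = np.random.default_rng(2)
beta, n, t = 0.7, 200_000, 1.0
dW = rng.standard_normal(n) * np.sqrt(t / n)
W = np.concatenate(([0.0], np.cumsum(dW)))
V = np.sum(beta * W[:-1] * dW)          # int_0^t (int_0^u beta dW) dW
closed = 0.5 * beta * (W[-1] ** 2 - t)
```

The discretization error here is exactly $(\beta/2)(t - \sum_i (\Delta W_i)^2)$, which vanishes as the mesh is refined, in line with the quadratic variation of $W$.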

The purpose of this section is to derive formulae similar to (3.2.4) and (3.2.5) when $b = \{b(t),\ t \ge 0\}$ is a predictable, matrix-valued stochastic process.

Lemma 3.2.1 Let $\lambda$ and $T$ be two positive parameters with $2\lambda T < 1$, and let $\{b(t),\ t \ge 0\}$ be an $\mathcal{M}^d$-valued, $\mathbb{F}$-predictable process such that $|b(t)| \le 1$ for all $t \ge 0$. Then
$$ E\left[ \exp\left( 2\lambda V_b(T) \right) \right] \le E\left[ \exp\left( 2\lambda V_{I_d}(T) \right) \right]. $$

Proof. We prove this lemma with a standard argument from the theory of stochastic control. We define the processes
$$ Y^b(r) := Y(0) + \int_0^r b(u)\, dW(u) \qquad \text{and} \qquad Z^b(t) := Z(0) + \int_0^t (Y^b(r))^T\, dW(r), \quad t \ge 0, $$
where $Y(0) \in \mathbb{R}^d$ and $Z(0) \in \mathbb{R}$ are some given initial data. Observe that $V_b(t) = Z^b(t)$ when $Y(0) = 0$ and $Z(0) = 0$. We split the argument into three steps.

    Step 1: It can easily be checked that

    E

    exp

    2ZId(T)Ft = ft, YId(t), ZId(t) , (3.2.6)

where, for t \in [0, T], y \in \mathbb{R}^d and z \in \mathbb{R}, the function f is given by
\[
\begin{aligned}
f(t,y,z) &:= E\left[ \exp\left( 2\beta \left( z + \int_t^T (y + W(r) - W(t))^T\,dW(r) \right) \right) \right] \\
&= \exp(2\beta z)\, E\left[ \exp\left( \beta \left\{ 2 y^T W(T-t) + |W(T-t)|^2 - d(T-t) \right\} \right) \right] \\
&= \mu^{d/2} \exp\left( 2\beta z - \beta d (T-t) + 2\beta^2 \mu (T-t) |y|^2 \right) ,
\end{aligned}
\]
where \mu := [1 - 2\beta(T-t)]^{-1}. Observe that the function f is strictly convex in y and
\[
D^2_{yz} f(t,y,z) := \frac{\partial^2 f}{\partial y\, \partial z}(t,y,z) = g^2(t,y,z)\, y , \tag{3.2.7}
\]
where g^2 := 8\beta^3 \mu (T-t) f is a positive function of (t,y,z).

Step 2: For a matrix b \in M^d, we denote by L^b the Dynkin operator associated to the process (Y_b, Z_b), that is,
\[
L^b := D_t + \tfrac{1}{2}\, \mathrm{tr}\left[ b b^T D^2_{yy} \right] + \tfrac{1}{2} |y|^2 D^2_{zz} + (b y)^T D^2_{yz} ,
\]

where D_\cdot and D^2_{\cdot\cdot} denote the gradient and the Hessian operators with respect to the indexed variables. In this step, we intend to prove that for all t \in [0,T], y \in \mathbb{R}^d and z \in \mathbb{R},

\[
\max_{b \in M^d ,\ |b| \le 1} L^b f(t,y,z) = L^{Id} f(t,y,z) = 0 . \tag{3.2.8}
\]

The second equality can be derived from the fact that the process
\[
f\left( t, Y_{Id}(t), Z_{Id}(t) \right) , \quad t \in [0, T] ,
\]
is a martingale, which can easily be seen from (3.2.6). The first equality follows from the following two observations. First, note that for each b \in M^d such that |b| \le 1, the matrix Id - b b^T is in S^d_+. Therefore, there exists a \Lambda \in S^d_+ such that
\[
Id - b b^T = \Lambda^2 .
\]


Since f is convex in y, the Hessian matrix D^2_{yy} f is also in S^d_+. It follows that \Lambda D^2_{yy} f(t,y,z) \Lambda \in S^d_+, and therefore,
\[
\mathrm{tr}\left[ D^2_{yy} f \right] - \mathrm{tr}\left[ b b^T D^2_{yy} f \right]
= \mathrm{tr}\left[ (Id - b b^T)\, D^2_{yy} f \right]
= \mathrm{tr}\left[ \Lambda D^2_{yy} f\, \Lambda \right] \ge 0 . \tag{3.2.9}
\]
Secondly, it follows from (3.2.7) and the Cauchy-Schwarz inequality that, for all b \in M^d such that |b| \le 1,
\[
(b y)^T D^2_{yz} f(t,y,z) = g^2(t,y,z)\, (b y)^T y \le g^2(t,y,z)\, |y|^2 = y^T D^2_{yz} f(t,y,z) . \tag{3.2.10}
\]
Together, (3.2.9) and (3.2.10) imply the first equality in (3.2.8).

Step 3: Let \{b(t),\ t \ge 0\} be an M^d-valued, \mathbb{F}-predictable process such that |b(t)| \le 1 for all t \ge 0. We define the sequence of stopping times
\[
\tau_k := T \wedge \inf\left\{ t \ge 0 : |Y_b(t)| + |Z_b(t)| \ge k \right\} , \quad k \in \mathbb{N} .
\]
It follows from Itô's lemma and (3.2.8) that

\[
\begin{aligned}
f(0, Y(0), Z(0)) &= f\left( \tau_k, Y_b(\tau_k), Z_b(\tau_k) \right)
- \int_0^{\tau_k} L^{b(t)} f\left( t, Y_b(t), Z_b(t) \right) dt \\
&\quad - \int_0^{\tau_k} \left[ (D_y f)^T b + (D_z f)\, y^T \right]\left( t, Y_b(t), Z_b(t) \right) dW(t) \\
&\ge f\left( \tau_k, Y_b(\tau_k), Z_b(\tau_k) \right)
- \int_0^{\tau_k} \left[ (D_y f)^T b + (D_z f)\, y^T \right]\left( t, Y_b(t), Z_b(t) \right) dW(t) .
\end{aligned}
\]

Taking expected values and sending k to infinity, we get by Fatou's lemma,

\[
\begin{aligned}
E\left[ \exp\left( 2\beta Z_{Id}(T) \right) \right] &= f(0, Y(0), Z(0))
\ \ge\ \liminf_{k \to \infty} E\left[ f\left( \tau_k, Y_b(\tau_k), Z_b(\tau_k) \right) \right] \\
&\ge E\left[ f\left( T, Y_b(T), Z_b(T) \right) \right]
= E\left[ \exp\left( 2\beta Z_b(T) \right) \right] ,
\end{aligned}
\]
which proves the lemma.
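In dimension d = 1 the inner integral in Step 1 evaluates to y\,\tilde W(\tau) + \tfrac12(\tilde W(\tau)^2 - \tau) with \tau = T - t, so f reduces to an expectation over a single standard Gaussian and the closed form can be checked by quadrature. A sketch (parameter values are arbitrary, subject to 2\beta\tau < 1):

```python
import numpy as np

beta, tau, y, z = 0.3, 0.5, 0.8, 0.2   # requires 2 * beta * tau < 1

# Closed form from Step 1 with d = 1:
#   f = mu^{1/2} exp(2 beta z - beta tau + 2 beta^2 mu tau y^2),  mu = (1 - 2 beta tau)^{-1}
mu = 1.0 / (1.0 - 2.0 * beta * tau)
f_closed = np.sqrt(mu) * np.exp(2 * beta * z - beta * tau + 2 * beta**2 * mu * tau * y**2)

# Direct Gaussian expectation: with Z ~ N(0,1),
#   int_t^T (y + W_r - W_t) dW_r  =  y sqrt(tau) Z + (tau/2)(Z^2 - 1)
x, w = np.polynomial.hermite_e.hermegauss(100)   # nodes/weights for weight e^{-x^2/2}
integrand = np.exp(2 * beta * (z + y * np.sqrt(tau) * x + 0.5 * tau * (x**2 - 1)))
f_quad = np.sum(w * integrand) / np.sqrt(2 * np.pi)

print(f_closed, f_quad)
```

The two values agree to high accuracy, which also confirms the Gaussian-moment computation behind the second and third lines of the formula for f.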


Theorem 3.2.2
a) Let \{b(t),\ t \ge 0\} be an M^d-valued, \mathbb{F}-predictable process such that |b(t)| \le 1 for all t \ge 0. Then
\[
\limsup_{t \searrow 0} \frac{|2V_b(t)|}{h(t)} \le 1 .
\]
b) Let \Gamma \in S^d with largest eigenvalue \lambda^*(\Gamma) \ge 0. If \{b(t),\ t \ge 0\} is a bounded, S^d-valued, \mathbb{F}-predictable process such that b(t) \ge \Gamma for all t \ge 0, then
\[
\limsup_{t \searrow 0} \frac{2V_b(t)}{h(t)} \ge \lambda^*(\Gamma) .
\]

Proof.
a) Let T > 0 and \beta > 0 be such that 2\beta T < 1. It follows from Doob's maximal inequality for submartingales and Lemma 3.2.1 that for all \lambda \ge 0,
\[
\begin{aligned}
P\left[ \sup_{0 \le t \le T} 2V_b(t) \ge \lambda \right]
&= P\left[ \sup_{0 \le t \le T} \exp\left( 2\beta V_b(t) \right) \ge \exp(\beta\lambda) \right] \\
&\le \exp(-\beta\lambda)\, E\left[ \exp\left( 2\beta V_b(T) \right) \right]
\le \exp(-\beta\lambda)\, E\left[ \exp\left( 2\beta V_{Id}(T) \right) \right] \\
&= \exp(-\beta\lambda) \exp(-\beta d T) \left( 1 - 2\beta T \right)^{-d/2} .
\end{aligned}
\tag{3.2.11}
\]

Now, take \theta, \eta \in (0,1), and set for all k \in \mathbb{N},
\[
\lambda_k := (1+\eta)^2 h(\theta^k) \quad\text{and}\quad \beta_k := \left[ 2\theta^k (1+\eta) \right]^{-1} .
\]

It follows from (3.2.11) that for all k \in \mathbb{N},

\[
P\left[ \sup_{0 \le t \le \theta^k} 2V_b(t) \ge (1+\eta)^2 h(\theta^k) \right]
\le \exp\left( \frac{-d}{2(1+\eta)} \right) \left( 1 + \frac{1}{\eta} \right)^{d/2} \left( k \log\frac{1}{\theta} \right)^{-(1+\eta)} .
\]

Since
\[
\sum_{k=1}^{\infty} \left( k \log\tfrac{1}{\theta} \right)^{-(1+\eta)} < \infty ,
\]
it follows from the Borel-Cantelli lemma that, for almost all \omega \in \Omega, there exists a natural number K^{\theta,\eta}(\omega) such that for all k \ge K^{\theta,\eta}(\omega),
\[
\sup_{0 \le t \le \theta^k} 2V_b(t, \omega) < (1+\eta)^2 h(\theta^k) .
\]


In particular, for all t \in (\theta^{k+1}, \theta^k],
\[
2V_b(t, \omega) < (1+\eta)^2 h(\theta^k) \le (1+\eta)^2 \frac{h(t)}{\theta} .
\]
Hence,
\[
\limsup_{t \searrow 0} \frac{2V_b(t)}{h(t)} \le \frac{(1+\eta)^2}{\theta} .
\]

By letting \theta tend to one and \eta to zero along the rationals, we conclude that
\[
\limsup_{t \searrow 0} \frac{2V_b(t)}{h(t)} \le 1 .
\]
On the other hand,
\[
\liminf_{t \searrow 0} \frac{2V_b(t)}{h(t)} = -\limsup_{t \searrow 0} \frac{2V_{-b}(t)}{h(t)} \ge -1 ,
\]

and the proof of part a) is complete.
b) There exists a constant c > 0 such that for all t \ge 0,
\[
-c\, Id \le b(t) \le c\, Id , \tag{3.2.12}
\]
and there exists an orthogonal d \times d-matrix U such that

\[
\Lambda := U \Gamma U^T = \mathrm{diag}\left[ \lambda^*(\Gamma), \lambda_2, \ldots, \lambda_d \right] ,
\]
where \lambda^*(\Gamma) \ge \lambda_2 \ge \cdots \ge \lambda_d are the ordered eigenvalues of the matrix \Gamma. Let
\[
\tilde\Lambda := \mathrm{diag}\left[ 3c, c, \ldots, c \right] \quad\text{and}\quad \tilde\Gamma := U^T \tilde\Lambda U .
\]

It follows from (3.2.12) that for all t \ge 0,
\[
\tilde\Gamma - \Gamma \ \ge\ \tilde\Gamma - b(t) \ \ge\ 0 ,
\]
which implies that
\[
|\tilde\Gamma - b(t)| \le |\tilde\Gamma - \Gamma| = \lambda^*(\tilde\Gamma - \Gamma) = \lambda^*(\tilde\Lambda - \Lambda) = 3c - \lambda^*(\Gamma) .
\]

Hence, by part a),
\[
\limsup_{t \searrow 0} \frac{2V_{\tilde\Gamma - b}(t)}{h(t)} \le 3c - \lambda^*(\Gamma) . \tag{3.2.13}
\]


Note that the transformed Brownian motion,
\[
\tilde W(t) := U W(t) , \quad t \ge 0 ,
\]
is again a d-dimensional standard Brownian motion and

    lim supt0

    2V(t)

    h(t)= lim sup

    t0

    W(t)TW(t) tr()th(t)

    = lim supt0

    W(t)TW(t) tr()th(t)

    = lim supt0

    W(t)TW(t)

    h(t)

    limsupt0

    3c(W1(t))2

    h(t)= 3c .

    (3.2.14)

It follows from (3.2.14) and (3.2.13) that
\[
\limsup_{t \searrow 0} \frac{2V_b(t)}{h(t)}
\ \ge\ \limsup_{t \searrow 0} \frac{2V_{\tilde\Gamma}(t)}{h(t)} - \limsup_{t \searrow 0} \frac{2V_{\tilde\Gamma - b}(t)}{h(t)}
\ \ge\ 3c - (3c - \lambda^*(\Gamma)) = \lambda^*(\Gamma) ,
\]
which proves part b) of the theorem.

Using the above results one can prove the following lower bound for V_b^m.

Theorem 3.2.3 Assume that
\[
\sup_{u > 0} \frac{E|m(u) - m(0)|^4}{u^4} < \infty .
\]
Then,
\[
\limsup_{t \searrow 0} \frac{V_b^m(t)}{h(t)} \ge a ,
\]
for every b \in L^\infty satisfying m(0)^T b(0)\, m(0) \ge a > 0.

Theorem 3.2.4 Suppose that a \in L^\infty and E|m(u)|^4 \le c for some constant c. Then, for all \varepsilon > 0,

\[
\limsup_{t \searrow 0} \frac{\left| \int_0^t \left( \int_0^u a(\rho)\,d\rho \right) dM(u) \right|}{t^{3/2 - \varepsilon}} = 0 .
\]


    Proofs of these results can be found in Cheridito-Soner-Touzi.

Lemma 3.2.5 Suppose that b \in L^\infty and for some \varepsilon > 0
\[
\sup_{u > 0} \frac{E|b(u) - b(0)|^2}{u^{2\varepsilon}} < \infty .
\]
Then,
\[
\liminf_{t \searrow 0} \frac{V_b(t)}{t} = -\frac{1}{2} b(0) .
\]

Proof: Set \tilde b(u) := b(u) - b(0). Then, E|\tilde b(u)|^2 / u^{2\varepsilon} < \infty and
\[
V_b(t) = \frac{1}{2} b(0) W^2(t) - \frac{1}{2} b(0)\, t + V_{\tilde b}(t) .
\]
It can be shown that |V_{\tilde b}(t)| = o(t). Hence,
\[
\liminf_{t \searrow 0} \frac{V_b(t)}{t}
= -\frac{1}{2} b(0) + \liminf_{t \searrow 0} \frac{1}{2} b(0) \frac{W^2(t)}{t}
= -\frac{1}{2} b(0) .
\]

    3.3 General Gamma Constraint

Here we consider the constraint
\[
\Gamma_*(S(u)) \le Y(u) \le \Gamma^*(S(u)) .
\]
Let
\[
H(v_t, v_s, v_{ss}, s) = \min\left\{ -v_t - \tfrac{1}{2}\sigma^2 s^2 v_{ss} \,;\ -v_{ss} + \Gamma^*(s) \,;\ v_{ss} - \Gamma_*(s) \right\} ,
\]
and let \hat H be the parabolic majorant of H, i.e.,
\[
\hat H(v_t, v_s, v_{ss}, s) := \sup\left\{ H(v_t, v_s, v_{ss} + Q, s) \,:\, Q \ge 0 \right\} .
\]

Theorem 3.3.1 v is a viscosity solution of
\[
\hat H(v_t, v_s, v_{ss}, s) = 0 , \quad (t, s) \in (0, T) \times (0, \infty) .
\]
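The parabolic majorant is easy to explore numerically: fix (v_t, v_s, v_{ss}, s), sweep Q \ge 0 on a grid, and keep the largest value of H. The sketch below is illustrative only; constant bounds stand in for \Gamma_*(s), \Gamma^*(s), and \sigma, the grid size and the sample points are arbitrary choices. It checks that with \Gamma_* \equiv 0 and \Gamma^* \equiv +\infty one gets \hat H = 0 at points where G_t = G_{ss} = 0, consistent with the example at the end of this section.

```python
import numpy as np

def H(vt, vss, s, g_lo, g_hi, sigma=0.4):
    """min of: parabolic term, upper gamma slack, lower gamma slack."""
    return np.minimum.reduce([
        -vt - 0.5 * sigma**2 * s**2 * vss,
        g_hi - vss,
        vss - g_lo,
    ])

def H_hat(vt, vss, s, g_lo, g_hi, sigma=0.4, q_max=50.0, n=200_001):
    """Parabolic majorant: sup over Q >= 0 of H with vss replaced by vss + Q."""
    Q = np.linspace(0.0, q_max, n)
    return H(vt, vss + Q, s, g_lo, g_hi, sigma).max()

# Gamma_* = 0, Gamma^* = +infty, G(s) = min(s, 1): away from the kink
# G_t = G_ss = 0, and the sup is attained at Q = 0, so H_hat(G) = 0.
val = H_hat(0.0, 0.0, 1.5, g_lo=0.0, g_hi=np.inf)
assert abs(val) < 1e-3
print(val)
```

Because H is a minimum of affine functions of Q (some decreasing, one increasing), the sup is attained either at Q = 0 or at a crossing point, which is what the grid search locates.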


The supersolution property is already proved. The subsolution property is proved almost exactly as in the upper gamma constraint case; cf. [Cheridito-Soner-Touzi]. The terminal condition is the same as in the pure upper bound case. As before,
\[
\hat G(s) = (G - \phi)^{\mathrm{conc}}(s) + \phi(s) ,
\]
where \phi satisfies \phi_{ss}(s) = \Gamma^*(s) and (G - \phi)^{\mathrm{conc}} denotes the concave envelope of G - \phi. Note that the lower gamma bound does not affect \hat G.

Proposition 3.3.2
\[
\lim_{(t', s') \to (T, s)} v(t', s') = \hat G(s) .
\]
The proof is exactly as in the upper gamma bound case.

Theorem 3.3.3 Assume 0 \le G(s) \le c[1 + s]. Then, v is the unique viscosity solution of \hat H = 0 together with the boundary condition v(T, s) = \hat G(s).

The proof is given in Cheridito-Soner-Touzi.

Example. Consider the problem with \Gamma_*(s) \equiv 0, \Gamma^*(s) \equiv +\infty, and G(s) = s \wedge 1. For s \le 1, the choices z = s, y_0 = 1, A \equiv 0, \alpha \equiv 0 yield
\[
Y(u) \equiv 1 , \qquad Z_{t,s}(u) = s + \int_t^u dS(\rho) = S_{t,s}(u) .
\]
Hence, Z_{t,s}(T) = S_{t,s}(T) \ge S_{t,s}(T) \wedge 1, and for s < 1, v(t, s) \le s. For s \ge 1, let z = 1, y_0 = 0, A \equiv 0, \alpha \equiv 0. Then,
\[
Y(u) \equiv 0 , \qquad Z_{t,1}(u) \equiv 1 ,
\]
and Z_{t,1}(T) = 1 \ge S_{t,s}(T) \wedge 1. So for s \ge 1, v(t, s) \le 1. Hence, v(t, s) \le G(s). We can check that
\[
\hat H(G_t, G_s, G_{ss}) = \sup_{Q \ge 0} \min\left\{ -\tfrac{1}{2}\sigma^2 s^2 (G_{ss} + Q) \,;\ (G_{ss} + Q) \right\} = 0 .
\]
By the uniqueness, we conclude that v = G, and the buy-and-hold strategy described earlier is optimal.


    3.4 Examples

In some cases it is possible to compute the solution. In this section, we give several examples to illustrate the theory.

    3.4.1 European Call Option

G(s) = (s - K)^+ .
(a). Let us first consider the portfolio constraint:
\[
-L v \le s v_s \le U v ,
\]
where (U - 1) is the fraction of our wealth we are allowed to borrow (i.e., shortsell the money market) and L is the shortsell constraint. According to our results,
\[
v(t, s) = E\left[ \hat G(S_{t,s}(T)) \right] ,
\]

where \hat G is the smallest function satisfying i) \hat G \ge G, ii) -L \hat G \le s \hat G_s \le U \hat G. By observation, we see that only the upper bound is relevant and that there exists \bar s > K so that
\[
\hat G(s) = G(s) = (s - K) \quad \text{for } s \ge \bar s .
\]
Moreover, for s \le \bar s the constraint s \hat G_s \le U \hat G saturates, i.e.,
\[
s \hat G_s(s) = U \hat G(s) \quad \text{for } s \le \bar s .
\]
Hence
\[
\hat G(s) = \left( \frac{s}{\bar s} \right)^U \hat G(\bar s) = (\bar s - K) \left( \frac{s}{\bar s} \right)^U .
\]
Now we compute \bar s by imposing that \hat G \in C^1. So
\[
\hat G_s(\bar s) = G_s(\bar s) = 1 \quad \Longrightarrow \quad U (\bar s - K)\, \frac{1}{\bar s} = 1 .
\]
The result is
\[
\bar s = \frac{U}{U - 1}\, K .
\]
Therefore,
\[
\hat G(s) =
\begin{cases}
(s - K) , & \text{if } s \ge \bar s , \\[4pt]
\left( \dfrac{U-1}{K} \right)^{U-1} U^{-U} s^U , & \text{if } 0 \le s \le \bar s .
\end{cases}
\]
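The face-lift can be sanity-checked numerically; the sketch below (with the arbitrary sample values U = 2 and K = 1, so that \bar s = 2 and \hat G(s) = s^2/4 below \bar s) verifies the C^1 fit at \bar s and that \hat G dominates the payoff:

```python
import numpy as np

K, U = 1.0, 2.0                 # U > 1: borrowing constraint s v_s <= U v
s_bar = U * K / (U - 1.0)       # = 2.0

def G_hat(s):
    # Face-lifted call payoff: power function below s_bar, intrinsic above
    if s >= s_bar:
        return s - K
    return ((U - 1.0) / K) ** (U - 1.0) * U ** (-U) * s ** U

# Value and first derivative match at s_bar (C^1 fit)
eps = 1e-6
assert abs(G_hat(s_bar) - (s_bar - K)) < 1e-12
left = (G_hat(s_bar - eps) - G_hat(s_bar - 2 * eps)) / eps
assert abs(left - 1.0) < 1e-4

# G_hat dominates the call payoff everywhere
grid = np.linspace(0.0, 3.0, 301)
assert all(G_hat(s) >= max(s - K, 0.0) - 1e-12 for s in grid)
print(s_bar, G_hat(1.0))
```

With these sample values \hat G(1) = 0.25, while the payoff at s = 1 is zero: the gap is the face-lift premium discussed next.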


Set
\[
\hat G_{\mathrm{premium}}(s) := \hat G(s) - G(s) .
\]
Then,
\[
v(t, s) = v_{BS}(t, s) + v_{pr}(t, s) ,
\]
where v_{BS}(t, s) := E[G(S_{t,s}(T))] is the Black-Scholes price, and
\[
v_{pr}(t, s) = E\left[ (\hat G - G)(S_{t,s}(T)) \right] .
\]
Moreover, v_{pr} can be explicitly calculated (and its derivative) in terms of the error function, as in the Black-Scholes case.
(b). Now consider the gamma constraint
\[
-\Gamma_* v \le s^2 v_{ss} \le \Gamma^* v .
\]

Again, since G is convex, the lower bound is irrelevant and the modified \hat G is given as in part (a) with
\[
U(U - 1) = \Gamma^* \quad \Longleftrightarrow \quad U = \frac{1 + \sqrt{1 + 4\Gamma^*}}{2} .
\]
So, it is interesting to note that the minimal super-replication costs of both constraints agree provided that U and \Gamma^* are related as above.

3.4.2 European Put Option

G(s) = (K - s)^+ .
(a). Again let us consider the portfolio constraint first. In this case, since G is decreasing, only the lower bound is active. We compute as in the call option case:
\[
\hat G(s) =
\begin{cases}
(K - s) , & \text{if } 0 \le s \le \bar s , \\[4pt]
L^L \left( \dfrac{K}{L+1} \right)^{L+1} s^{-L} , & \text{if } s \ge \bar s ,
\end{cases}
\]
where
\[
\bar s = \frac{L}{L + 1}\, K .
\]
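As for the call, a quick numerical check of the C^1 fit (a sketch with the arbitrary sample values L = 3 and K = 1, so \bar s = 0.75):

```python
K, L = 1.0, 3.0                       # L > 0: shortsale constraint -L v <= s v_s
s_bar = L * K / (L + 1.0)             # = 0.75

def G_hat(s):
    # Face-lifted put payoff: intrinsic below s_bar, power tail above
    if s <= s_bar:
        return K - s
    return L**L * (K / (L + 1.0)) ** (L + 1.0) * s ** (-L)

eps = 1e-6
assert abs(G_hat(s_bar) - (K - s_bar)) < 1e-12          # continuous at s_bar
right = (G_hat(s_bar + 2 * eps) - G_hat(s_bar + eps)) / eps
assert abs(right + 1.0) < 1e-4                          # slope -1: C^1 fit
assert all(G_hat(x) >= max(K - x, 0.0) for x in [0.1, 0.5, 1.0, 2.0, 5.0])
print(s_bar, G_hat(1.0))
```

The power tail s^{-L} decays to zero, so unlike the unconstrained put the face-lifted payoff stays strictly positive for all s.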

(b). Consider the gamma constraint
\[
-\Gamma_* v \le s^2 v_{ss} \le \Gamma^* v .
\]
Again the lower bound is inactive, and the solution is as in the portfolio case with
\[
\Gamma^* = L(L + 1) .
\]


3.4.3 European Digital Option

\[
G(s) =
\begin{cases}
1 , & s \ge K , \\
0 , & s < K .
\end{cases}
\]
(a). Again, we first consider the portfolio constraint. It can be verified that
\[
\hat G(s) =
\begin{cases}
\left( \dfrac{s}{K} \right)^U , & 0 \le s \le K , \\[4pt]
1 , & s \ge K .
\end{cases}
\]
If we split the price as before:
\[
v(t, s) = v_{BS}(t, s) + v_{pr}(t, s) .
\]

We have
\[
v_{BS}(t, s) = P\left( S_{t,s}(T) \ge K \right) = N(d_2) := \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{d_2} e^{-x^2/2}\,dx ,
\qquad
d_2 = \frac{\ln\left( \frac{s}{K} \right) - \frac{1}{2}\sigma^2 (T-t)}{\sigma \sqrt{T-t}} ,
\]

and
\[
\begin{aligned}
v_{pr}(t, s) &= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{-d_2} \left( \frac{s}{K} \right)^U
e^{\,U\left[ \sigma\sqrt{T-t}\,x - \frac{1}{2}\sigma^2 (T-t) \right]}\, e^{-\frac{1}{2}x^2}\,dx \\
&= \left( \frac{s}{K} \right)^U e^{\,\frac{U(U-1)\sigma^2}{2}(T-t)} \left[ 1 - N\left( d_2 + U\sigma\sqrt{T-t} \right) \right] .
\end{aligned}
\]
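The closed form for v_{pr} can be checked against direct numerical integration of the defining Gaussian integral (a sketch with arbitrary sample values; N is the standard normal cdf, written here via math.erf):

```python
import math
import numpy as np

def N(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

s, K, sigma, tau, U = 0.8, 1.0, 0.2, 1.0, 2.0
d2 = (math.log(s / K) - 0.5 * sigma**2 * tau) / (sigma * math.sqrt(tau))

# Closed form: (s/K)^U e^{U(U-1) sigma^2 tau / 2} [1 - N(d2 + U sigma sqrt(tau))]
closed = (s / K) ** U * math.exp(0.5 * U * (U - 1) * sigma**2 * tau) \
         * (1.0 - N(d2 + U * sigma * math.sqrt(tau)))

# Direct integral of (S(T)/K)^U over {S(T) < K}, with
# S(T) = s exp(sigma sqrt(tau) x - sigma^2 tau / 2), x standard normal
x = np.linspace(-12.0, -d2, 400_001)
integrand = (s / K) ** U * np.exp(U * (sigma * math.sqrt(tau) * x - 0.5 * sigma**2 * tau)) \
            * np.exp(-0.5 * x**2) / math.sqrt(2.0 * math.pi)
numeric = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)))
print(closed, numeric)
```

The identity behind the second line is the standard Gaussian moment formula for a truncated exponential, the same device used for the call premium.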

We can also compute the corresponding deltas.
(b). Once again, the gamma constraint
\[
-\Gamma_* v \le s^2 v_{ss} \le \Gamma^* v
\]
yields the same \hat G as in the portfolio constraint with
\[
\Gamma^* = U(U - 1) \quad \Longleftrightarrow \quad U = \frac{1 + \sqrt{1 + 4\Gamma^*}}{2} .
\]
If there is no lower gamma bound, v = v_{BS} + v_{pr}.
(c). Suppose now that there is no upper bound,
\[
-\Gamma_* v \le s^2 v_{ss} .
\]


Then the dynamic programming equation is
\[
\sup_{Q \ge 0} \min\left\{ -v_t - \frac{\sigma^2}{2}\left( s^2 v_{ss} + Q \right) \,;\ \left( s^2 v_{ss} + Q \right) + \Gamma_* v \right\} = 0 , \quad s > 0 ,\ t < T ,
\]
\[
v(T, s) = G(s) .
\]
By calculus, we see that this is equivalent to
\[
-v_t - \frac{\sigma^2}{2}\left( s^2 v_{ss} + \Gamma_* v \right)^+ + \frac{\sigma^2}{2} \Gamma_* v = 0 .
\]

Since, for any real number \gamma,
\[
(\gamma)^+ = \sup_{0 \le a \le 1} \left\{ a^2 \gamma \right\} ,
\]
this can be rewritten as
\[
-v_t + \inf_{0 \le a \le 1} \left\{ -\frac{\sigma^2 a^2}{2} s^2 v_{ss} + \frac{\sigma^2}{2} \Gamma_* (1 - a^2)\, v \right\} = 0 .
\]
We recognize the above equation as the dynamic programming equation for the following stochastic optimal control problem:

\[
v(t, s) = \sup_{0 \le a(\cdot) \le 1} E\left[ \exp\left( -\int_t^T \frac{\Gamma_* \sigma^2}{2} (1 - a^2(u))\,du \right) G\left( S^a_{t,s}(T) \right) \right] ,
\]
where
\[
dS^a(u) = a(u)\, \sigma\, S^a(u)\, dW(u) .
\]

3.4.4 Up and Out European Call Option

This is a path-dependent option:
\[
G(S_{t,s}(\cdot)) =
\begin{cases}
(S_{t,s}(T) - K)^+ , & \text{if } \max_{t \le u \le T} S_{t,s}(u) =: M(T) < B , \\
0 , & \text{if } M(T) \ge B .
\end{cases}
\]
We can also handle this with PDE techniques. Consider the bound
\[
-\Gamma_* v \le s^2 v_{ss} \le \Gamma^* v .
\]
The upper bound describes the modified final data \hat G:


(i) If
\[
\bar s := \frac{U}{U - 1}\, K < B \quad \left( \text{with } U = \frac{1 + \sqrt{1 + 4\Gamma^*}}{2} \right) ,
\]
then \hat G is as in the usual call option case.
(ii) If, however, \bar s \ge B, then
\[
\hat G(s) = (B - K) \left( \frac{s}{B} \right)^U .
\]

We claim that at the lateral boundary s = B, the lower gamma bound saturates and
\[
s^2 v_{ss}(t, B) = -\Gamma_* v(t, B) .
\]
Using the equation, we formally guess that
\[
v_t(t, B) = -\frac{\sigma^2}{2} s^2 v_{ss}(t, B) = \frac{\sigma^2 \Gamma_*}{2} v(t, B) .
\]
We solve this ODE to obtain
\[
v(t, B) = (B - K) \exp\left( -\frac{\sigma^2 \Gamma_*}{2} (T - t) \right) , \quad t \le T . \tag{3.4.15}
\]
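Formula (3.4.15) is simply the solution of the linear ODE v_t = \tfrac12\sigma^2\Gamma_* v with terminal value v(T, B) = B - K; a backward Euler integration confirms it (illustrative parameter values only):

```python
import math

sigma, gamma_lo, T, B, K = 0.3, 2.0, 1.0, 1.2, 1.0
rate = 0.5 * sigma**2 * gamma_lo

# Integrate v_t = rate * v backwards in time from v(T, B) = B - K
n = 200_000
dt = T / n
v = B - K
for _ in range(n):
    v -= rate * v * dt          # step from t to t - dt: v(t - dt) = v(t) - v_t dt

closed = (B - K) * math.exp(-rate * T)   # formula (3.4.15) evaluated at t = 0
print(v, closed)
```

Note that as \Gamma_* \to \infty the boundary value tends to zero, recovering the usual knock-out boundary condition of the unconstrained barrier option.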

Lemma 3.4.1 The minimal super-replicating cost for the up and out call option is the unique (smooth) solution of
\[
-v_t - \frac{\sigma^2}{2} s^2 v_{ss} = 0 , \quad t < T ,\ 0 < s < B ,
\]
\[
v(T, s) = \hat G(s) , \quad 0 \le s \le B ,
\]
together with (3.4.15). In particular,
\[
v(t, s) = E\left[ \hat G\left( S_{t,s}(T) \right) 1_{\{\tau_{t,s} \ge T\}}
+ (B - K)\, e^{-\frac{\sigma^2 \Gamma_*}{2}(T - \tau_{t,s})}\, 1_{\{\tau_{t,s} < T\}} \right] ,
\]
where \tau_{t,s} := \inf\{ u \ge t : S_{t,s}(u) = B \}.


Proof: It is clear that
\[
s^2 v_{ss}(t, B) = -\Gamma_* v(t, B) , \quad\text{and}\quad s^2 v_{ss}(T, s) \ge 0 .
\]
Set
\[
w(t, s) := s^2 v_{ss}(t, s) + \Gamma_* v(t, s) .
\]
Then,
\[
-w_t - \frac{\sigma^2}{2} s^2 w_{ss} = 0 , \quad t < T ,\ 0 < s < B ,
\]
\[
w(T, s) > 0 , \qquad w(t, B) = 0 .
\]

Hence, w \ge 0. Similarly set
\[
z(t, s) := s^2 v_{ss}(t, s) - \Gamma^* v(t, s) .
\]
Then,
\[
z(T, s) \le 0 , \qquad z(t, B) < 0 .
\]
Also,
\[
-z_t - \frac{\sigma^2}{2} s^2 z_{ss} = 0 .
\]
Hence, z \le 0. Therefore, v solves \hat H(v_t, s^2 v_{ss}, v) = 0. So by the uniqueness, v is the super-replication cost.

    3.5 Guess for The Dual Formulation

As was done for the portfolio constraint, duality provides another possible approach to super-replication. We refer to the lecture notes of Rogers [16] for this method and the relevant references. However, the dual approach has not yet been successfully applied to the gamma problem. Here we describe a possible dual problem based on the results obtained through dynamic programming.

Let us first consider the upper bound case
\[
s^2 v_{ss} \le \Gamma^* .
\]


Then the dynamic programming equation is
\[
\min\left\{ -v_t - \frac{\sigma^2}{2} s^2 v_{ss} \,;\ -s^2 v_{ss} + \Gamma^* \right\} = 0 .
\]
We rewrite this as
\[
-v_t + \inf_{b \ge 1} \left\{ -\frac{\sigma^2}{2} b^2 s^2 v_{ss} + \frac{\sigma^2 \Gamma^*}{2} (b^2 - 1) \right\} = 0 .
\]

The above equation is the dynamic programming equation of the following optimal control problem,
\[
v(t, s) := \sup_{b(\cdot) \ge 1} E\left[ -\frac{\sigma^2 \Gamma^*}{2} \int_t^T (b^2(u) - 1)\,du + G\left( S^b_{t,s}(T) \right) \right] ,
\]
\[
dS^b_{t,s}(u) = b(u)\, \sigma\, S^b_{t,s}(u)\, dW(u) .
\]
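In this representation every fixed admissible control gives a lower bound for v that can be estimated by plain Monte Carlo. The sketch below (an illustration with arbitrary parameters, not from the notes) evaluates the constant control b \equiv 1, for which the running penalty vanishes and the bound is just the Black-Scholes price of G:

```python
import math
import numpy as np

s0, K, sigma, tau = 1.0, 1.0, 0.2, 1.0
rng = np.random.default_rng(1)

# Control b = 1: zero running penalty, S^b(T) is the plain Black-Scholes GBM
Z = rng.normal(size=500_000)
ST = s0 * np.exp(-0.5 * sigma**2 * tau + sigma * math.sqrt(tau) * Z)
lower_bound = np.maximum(ST - K, 0.0).mean()

# Zero-rate Black-Scholes call price for comparison
N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
d1 = (math.log(s0 / K) + 0.5 * sigma**2 * tau) / (sigma * math.sqrt(tau))
d2 = d1 - sigma * math.sqrt(tau)
bs = s0 * N(d1) - K * N(d2)
print(lower_bound, bs)
```

Controls with b > 1 trade a negative penalty against the higher diffusion coefficient b\sigma; the supremum over all such controls is the super-replication cost.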

Note the change in the diffusion coefficient of the stock price process. If we consider
\[
-\Gamma_* \le s^2 v_{ss} ,
\]
the same argument yields
\[
v(t, s) = \sup_{0 \le a(\cdot) \le 1} E\left[ -\frac{\sigma^2 \Gamma_*}{2} \int_t^T (1 - a^2(u))\,du + G\left( S^a_{t,s}(T) \right) \right] .
\]

Now consider the full constraint,
\[
-\Gamma_* \le s^2 v_{ss} \le \Gamma^* .
\]
The equation is
\[
\sup_{Q \ge 0} \min\left\{ -v_t - \frac{\sigma^2}{2}\left( s^2 v_{ss} + Q \right) \,;\ -(s^2 v_{ss} + Q) + \Gamma^* \,;\ (s^2 v_{ss} + Q) + \Gamma_* \right\} = 0 .
\]
We rewrite it as
\[
-v_t + \inf_{b \ge 1 ,\ 0 \le a \le 1} \left\{ -\frac{\sigma^2}{2} s^2 a^2 b^2 v_{ss} + \frac{\sigma^2 \Gamma^*}{2} (b^2 - 1) + \frac{\sigma^2 \Gamma_*}{2} (1 - a^2) \right\} = 0 .
\]

Hence,
\[
v(t, s) = \sup_{b(\cdot) \ge 1 ,\ 0 \le a(\cdot) \le 1} E\left[ -\frac{\sigma^2}{2} \int_t^T \left[ \Gamma^* (b^2(u) - 1) + \Gamma_* (1 - a^2(u)) \right] du + G\left( S^{a,b}_{t,s}(T) \right) \right] ,
\]
\[
dS^{a,b}(u) = a(u)\, b(u)\, \sigma\, S^{a,b}(u)\, dW(u) .
\]

It is open to prove this by direct convex analysis methods. We finish by observing that if
\[
-\Gamma_* v \le s^2 v_{ss} \le \Gamma^* v ,
\]
then
\[
v(t, s) = \sup_{b(\cdot) \ge 1 ,\ 0 \le a(\cdot) \le 1} E\left[ e^{-\int_t^T \frac{\sigma^2}{2} \left[ \Gamma^* (b^2(u) - 1) + \Gamma_* (1 - a^2(u)) \right] du}\, G\left( S^{a,b}_{t,s}(T) \right) \right] .
\]

All approaches to duality (see Rogers' lecture notes [16]) yield expressions of the form
\[
v(t, s) = \sup E\left[ B\, Y \right] ,
\]
where B = G(S(T)) in our examples. However, in the above examples S(\cdot) needs to be modified in a way that is not absolutely continuous with respect to P.


    Bibliography

[1] Ambrosio L., and Soner H.M. (1996). Level set approach to mean curvature flow in arbitrary codimension, Journal of Differential Geometry 43, 693-737.

[2] Barles G. (1994). Solutions de viscosité des équations d'Hamilton-Jacobi, Mathématiques et Applications 17, Springer-Verlag.

[3] Barucci E., Gozzi F., and Swiech A. (2000). Incentive compatibility constraints and dynamic programming in continuous time. J. Math. Econ. 34/4, 471-508.

[4] Benhabib J., and Rustichini A. (1994). A note on a new class of solutions to dynamic programming problems arising in economic growth. J. Econ. Dyn. Control, 18, 807-813.

[5] Broadie M., Cvitanic J. and Soner H.M. (1998). Optimal replication of contingent claims under portfolio constraints, The Review of Financial Studies. 11, 59-79.

[6] Chen Y.-G., Giga Y. and Goto S. (1991). Uniqueness and existence of viscosity solutions of generalized mean curvature flow equations, Journal of Differential Geometry. 33, 749-786.

[7] Cheridito P., Soner H.M. and Touzi N. (2003). The multidimensional super-replication problem under Gamma constraints, preprint.

[8] Cheridito P., Soner H.M. and Touzi N. (2003). Local path behavior of double stochastic integrals and application to stochastic control, preprint.

[9] Constantinides G.M. (1986). Capital market equilibrium with transaction costs. Journal of Political Economy, 94, 842-862.


[10] Crandall M.G., Ishii H. and Lions P.L. (1992). User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. 27, 1-67.

[11] Cvitanic J., and Karatzas I. (1993). Hedging contingent claims with constrained portfolios, Annals of Applied Probability. 3, 652-681.

[12] Davis M.H.A., and Norman A. (1990). Portfolio selection with transaction costs. Mathematics of Operations Research. 15: 676-713.

[13] Evans L.C., and Spruck J. (1991). Motion of level sets by mean curvature, Journal of Differential Geometry. 33, 635-681.

[14] Fleming W.H., and Soner H.M. (1993). Controlled Markov Processes and Viscosity Solutions, Springer-Verlag, New York, Heidelberg, Berlin.

[15] Karatzas I., and Shreve S. (1998). Methods of Mathematical Finance, Springer-Verlag, New York, Heidelberg, Berlin.

[16] Rogers L.C.G. (2003). Duality in constrained optimal investment and consumption problems: A synthesis. Lecture Notes in Mathematics. 1814, 95-131.

[17] Rustichini A. (1998). Dynamic programming solution of incentive constrained problems. Journal of Economic Theory. 78/2, 329-354.

[18] Shreve S.E., and Soner H.M. (1994). Optimal investment and consumption with transaction costs, The Annals of Applied Probability. 4 (3): 609-692.

[19] Soner H.M., and Touzi N. (2002). Super-replication under Gamma constraints, SIAM J. Control and Opt. 39, 73-96.

[20] Soner H.M., and Touzi N. (2002). Dynamic programming for stochastic target problems and geometric flows, Journal of the European Mathematical Society. 8, 201-236.

    [ ] S H M d N ( ) L l h f h