Seminars UCHI

    ABSTRACTS

    On Implied Volatility for Options -- Some Reasons to Smile and to Correct

    Songxi Chen

    Guanghua School of Business, Peking University, and Iowa State University

    We analyze the properties of the implied volatility that is obtained by inverting a single option price via

the Black-Scholes formula, which is the volatility estimator commonly used for option data. We show that

the implied volatility is subject to a systematic bias in the presence of pricing errors. The

impact of the errors can be significant even for at-the-money, short-maturity options. We propose a

kernel-smoothing-based implied volatility estimator and demonstrate that it can automatically

correct for the pricing errors. S&P 500 options data are analyzed extensively to

demonstrate the approach. This is joint work with a graduate student, Zheng Xu, Department of

Statistics, Iowa State University.
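
As a rough illustration of the two estimators being compared, here is a minimal sketch (not the authors' code; the kernel smoother is a generic Nadaraya-Watson stand-in and all function names are mine): inverting a single noisy price propagates the pricing error directly into the implied volatility, while smoothing across neighbouring strikes averages it out.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Invert a single observed call price; any pricing error feeds straight into the estimate."""
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

def smoothed_implied_vol(prices, strikes, S, T, r, k0, bandwidth):
    """Nadaraya-Watson smoothing of raw implied vols across strikes around k0 --
    an illustrative stand-in for a kernel-smoothing implied volatility estimator."""
    ivs = np.array([implied_vol(p, S, K, T, r) for p, K in zip(prices, strikes)])
    w = norm.pdf((np.asarray(strikes) - k0) / bandwidth)
    return np.sum(w * ivs) / np.sum(w)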

Multiperiod Corporate Default Prediction -- A Forward Intensity Approach

    Jin-Chuan Duan

    Risk Management Institute, National University of Singapore

    A forward intensity model for the prediction of corporate defaults over different future periods is

    proposed. Maximum pseudo-likelihood analysis is then conducted on a large sample of the US industrial

    and financial firms spanning the period 1991-2010 on a monthly basis. Several commonly used factors

    and firm-specific attributes are shown to be useful for prediction at both short and long horizons. Our

    implementation also factors in momentum in some variables and documents their importance in default

    prediction. The prediction is very accurate for shorter horizons. The accuracy deteriorates somewhat

    when the horizon is increased to two or three years, but its performance still remains reasonable. The

    forward intensity model is also amenable to aggregation, which allows for an analysis of default

    behavior at the portfolio and/or economy level. (Joint work with Jie Sun and Tao Wang.)
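
Schematically, in a forward-intensity framework of this type the conditional default probability over a horizon τ is built from the forward intensities f_t(s) evaluated at time t (the notation here is mine, not necessarily the paper's):

$$ P_t\big(\text{default in } (t, t+\tau]\big) \;=\; 1 - \exp\!\Big(-\int_t^{t+\tau} f_t(s)\,ds\Big) \;\approx\; 1 - \exp\!\Big(-\sum_{j=0}^{n-1} f_t(t+j\Delta)\,\Delta\Big), \qquad \tau = n\Delta, $$

so that horizon-specific default probabilities are obtained directly from quantities known at time t, without simulating the future evolution of the covariates.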

    Detecting Financial Bubbles in Real Time

    Philip Protter

    Columbia University

    After the 2007 credit crisis, financial bubbles have once again emerged as a topic of current concern. An

    open problem is to determine in real time whether or not a given asset's price process exhibits a bubble.

    To do this, one needs to use a mathematical theory of bubbles, which we have recently developed and

    will briefly explain. The theory uses the arbitrage-free martingale pricing technology. This allows us to

    answer this question based on the asset's price volatility. We limit ourselves to the special case of a risky

asset's price being modeled by a Brownian-driven stochastic differential equation. Such models are

    ubiquitous both in theory and in practice. Our methods use sophisticated volatility estimation

    techniques combined with the method of reproducing kernel Hilbert spaces. We illustrate these

    techniques using several stocks from the alleged internet dot-com episode of 1998 - 2001, where price

    bubbles were widely thought to have existed. Our results support these beliefs. We then consider the

    special case of the recent IPO of LinkedIn. The talk is based on several joint works with Robert Jarrow,

    Kazuhiro Shimbo, and Younes Kchia.
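
For orientation, in the one-dimensional diffusion case dS_t = σ(S_t) dW_t referred to above, the martingale theory of bubbles reduces bubble detection to a volatility growth condition; one standard form of the criterion (stated here as background, not as the talk's exact result) is

$$ \int_a^{\infty} \frac{x}{\sigma^2(x)}\,dx < \infty \ \text{ for some } a > 0 \quad\Longleftrightarrow\quad S \text{ is a strict local martingale, i.e. the price exhibits a bubble,} $$

which is why estimating how σ(x) grows for large x from observed prices is the central statistical task.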

    The Leverage Effect Puzzle: Disentangling Sources of Bias in High Frequency Inference

Yacine Aït-Sahalia

    Princeton University

The leverage effect refers to the generally negative correlation between an asset's return and the changes

    of volatility. A natural estimate consists in using the empirical correlation between the daily returns and

the changes of daily volatility estimated from high-frequency data. The puzzle lies in the fact that such an intuitively natural estimate yields nearly zero correlation for most assets tested, despite the many

    economic reasons for expecting the estimated correlation to be negative. To better understand the

    sources of the puzzle, we analyze the different asymptotic biases that are involved in high frequency

    estimation of the leverage effect, including biases due to discretization errors, to smoothing errors in

    estimating spot volatilities, to estimation error, and to market microstructure noise. This decomposition

    enables us to propose a bias correction method for estimating the leverage effect. (Joint work with

    Jianqing Fan and Yingying Li.)
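
A minimal sketch of the "natural" estimate whose near-zero value constitutes the puzzle (function and variable names are mine):

import numpy as np

def naive_leverage_effect(daily_returns, daily_realized_var):
    """Correlation between daily returns and changes in daily volatility,
    with volatility proxied by the square root of realized variance."""
    vol = np.sqrt(np.asarray(daily_realized_var))
    dvol = np.diff(vol)                      # changes of estimated daily volatility
    r = np.asarray(daily_returns)[1:]        # align returns with the volatility changes
    return np.corrcoef(r, dvol)[0, 1]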

    Complex Trading Mechanisms

    Patricia Lassus

    geodesiXs

    I will discuss mechanisms with a richer message space for bidders/traders, which allow them to express

    conditions on the size of the overall auction/trade they participate in, or on the price impact of their

    bid/order.

    These mechanisms can be used in a one-shot auction (e.g. for corporate or government debt

    underwriting) or on a continuous trading platform (e.g. for trading equities, bonds, or other asset

    classes).

    Implied Volatility Smirk under Asymmetric Dynamics

José Santiago Fajardo Barbachan

    FGV

In this paper, focusing on Lévy processes with exponential dampening controlling the skewness, we obtain

a result that allows us to relate the implied volatility skew to the asymmetric dynamics of the

underlying. Moreover, with this result in mind, we propose alternative specifications for the implied

volatility and test them using S&P 500 options data, obtaining a very good fit. Although there are in the

literature more general data-generating processes, including stochastic volatility models, by focusing on a

particular class we can gain more insight into how this particular process generates the skew.

    More exactly, the market symmetry parameter is deeply connected with the risk neutral excess of

    kurtosis, which will allow us to relate the risk neutral skewness and kurtosis with the implied volatility

skew.

On volatility matrix estimation in a multivariate semimartingale model with microstructure noise

    Markus Bibinger

Humboldt-Universität zu Berlin

    We consider a multivariate discretely observed semimartingale corrupted by microstructure noise and

    aim at estimating the (co)volatility matrix. A concise insight into state-of-the-art approaches for

integrated covolatility estimation in the presence of noise and non-synchronous observation schemes will be provided to reveal some intrinsic fundamental features of the statistical model and strategies for

    estimation. In an idealized simplified model an asymptotic equivalence result gives rise to a new local

    parametric statistical approach attaining asymptotic efficiency. We highlight that a multivariate local

    likelihood approach allows for efficiency gains for integrated volatility estimation by the information

    contained in correlated components.

    Implied volatility asymptotics in affine stochastic volatility models with jumps

    Antoine Jacquier

    TU Berlin

    Calibrating stochastic models is a fundamental issue on financial markets. The aim of this talk is to

    propose a calibration methodology based on the knowledge of the asymptotic behaviour of the implied

    volatility. We focus on the general class of affine stochastic volatility models with jumps, which

encompasses the Heston (with jumps) model, exponential Lévy models, and the Barndorff-Nielsen and

    Shephard model. Under mild conditions on the jump measures, we derive (semi) closed-form formulae

    for the implied volatility as the maturity gets large.

    Large-time asymptotics for general stochastic volatility models

    Martin Forde

    Dublin City University

    We derive large-time asymptotics for the modified SABR model, and show that the estimates are

consistent with the general result of Tehranchi (2009). We then discuss large-time asymptotics for a general

uncorrelated model, using the Donsker and Varadhan large deviation principle for the occupation measure of

an ergodic process. This is related to the recent work of Feng, Fouque and Kumar using viscosity solutions

    and non-linear homogenization theory.

    Long-term Behaviors and Implied Volatilities for General Affine Diffusions

Kyoung-Kuk Kim

    KAIST

For the past several years, studies on affine processes have been carried out by many researchers,

    regarding moment explosions, implied volatilities, and long-term behaviors. Recently, Glasserman and

    Kim, and Keller-Ressel investigated the moment explosions of the canonical affine models of Dai and

    Singleton, and general two-factor affine stochastic volatility models, respectively. They also presented

    the long-term behaviors of such processes. On the other hand, Benaim and Friz, and Lee showed that

    implied volatilities at extreme strikes are linked to the moment explosions of stock prices at given option

    maturities. In this work, we characterize the regions in which moment explosions happen for some time

or at a given time, and relate them to the long-term behavior of stock prices and to implied volatilities, extending previous works on moment explosions for affine processes. (This is joint work with Rudra P.

    Jena and Hao Xing.)

Asymptotic behavior of the implied volatility in stochastic asset price models with and without

    moment explosions

    Archil Gulisashvili

    Ohio University

The main results discussed in the talk concern asymptotic formulas with error estimates for the implied

    volatility at extreme strikes in various stochastic asset price models. These formulas are valid for

    practically any such model. It will be shown that the new formulas imply several known results, including

    Roger Lee's moment formulas and the tail-wing formulas due to Shalom Benaim and Peter Friz. We will

    also provide necessary and sufficient conditions for the validity of asymptotic equivalence in Lee's

    moment formulas. Applications will be given to various stochastic volatility models (Hull-White, Stein-

    Stein, and Heston). These are all models with moment explosions. For stochastic asset price models

    without moment explosions, the general formulas for the implied volatility can be given an especially

    simple form. Using these simplifications, we prove a modified version of Piterbarg's conjecture. The

    asymptotic formula suggested by Vladimir Piterbarg may be considered as a substitute for Lee's moment

formula for the implied volatility at large strikes in the case of models without moment explosions. We will also discuss the asymptotic behavior of the implied volatility in several special asset price models

    without moment explosions, e.g., Rubinstein's displaced diffusion model, the CEV model, the finite

    moment log-stable model of Carr and Wu, and SV1 and SV2 models of Rogers and Veraart.
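
For reference, Roger Lee's (right-wing) moment formula mentioned above links the implied-variance slope at large strikes to the critical moment p* = sup{p : E[S_T^{1+p}] < ∞}:

$$ \limsup_{k\to\infty} \frac{\sigma_{BS}^2(k)\,T}{k} \;=\; \psi(p^*), \qquad \psi(x) = 2 - 4\big(\sqrt{x^2 + x} - x\big), $$

with k the log-moneyness; models without moment explosions correspond to p* = ∞ and hence a vanishing slope, which is the regime where the Piterbarg-type formula discussed in the talk takes over.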

Volatility Forecasting Models

    Iryna Okhrin

European University Viadrina

The current research suggests a sequential procedure for monitoring the validity of the volatility model. A

    state space representation describes dynamics of the daily integrated volatility. The observation

    equation relates the integrated volatility to its measures, such as the realized volatility or bipower

variation. A control procedure, based on the corresponding forecasting errors, allows one to decide whether the chosen representation remains correctly specified. A signal indicates that the assumed volatility

    model may not be valid anymore. The performance of our approach is analyzed within a Monte Carlo

    study and illustrated in an empirical study for selected U.S. stocks.

    Properties of Hierarchical Archimedean Copulas

    Ostap Okhrin

Humboldt-Universität zu Berlin

In this paper we analyse the properties of hierarchical Archimedean copulas. This class is a

    generalisation of the Archimedean copulas and allows for general non-exchangeable dependency

    structures. We show that the structure of the copula can be uniquely recovered from all bivariate

    margins. We derive the distribution of the copula value, which is particularly useful for tests and

    constructing confidence intervals. Furthermore, we analyse dependence orderings, multivariate

dependence measures, and extreme value copulas. We pay special attention to the tail dependencies

    and derive several tail dependence indices for general hierarchical Archimedean copulas.
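
As a concrete illustration of the structures analysed (notation schematic, not taken from the paper), a three-dimensional fully nested Archimedean copula with generators φ1 and φ2 is

$$ C(u_1,u_2,u_3) = \varphi_1^{-1}\Big( \varphi_1\big( \varphi_2^{-1}(\varphi_2(u_1) + \varphi_2(u_2)) \big) + \varphi_1(u_3) \Big), $$

which is exchangeable in (u_1, u_2) but not across all three arguments; a standard sufficient condition for this to be a proper copula is that φ1 ∘ φ2^{-1} has a completely monotone derivative.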

Near-expiration behavior of implied volatility for exponential Lévy models

    Jose Figueroa-Lopez

    Purdue University

Implied volatility is the market's measure of choice to summarize the risk of an underlying

    asset as reflected by its option prices. The asymptotic behavior of option prices near expiration is a

    problem of current interest with important practical consequences. In this talk, I will present a near-

expiration expansion for out-of-the-money call options under an exponential Lévy model, using a change

    of numeraire technique via the Esscher transform. Using this result, a small-time expansion for the

implied volatility is obtained for both exponential Lévy models and time-changed Lévy models.

    Numerical implementation of our results shows that the second order approximation can significantly

    outperform the first order approximation. This talk is based on a joint work with Martin Forde.

    Financial Engineering and the Credit Crisis

Paul Embrechts

ETH Zürich

    Coming out of (for some, still being in) the worst financial crisis since the 1930s Great Depression, we all

    have to ask ourselves some serious questions. In particular, the guild of Financial Engineers has to

    answer criticisms ranging from Michel Rocard, the former French Prime Minister's "Quantitative models

are a crime against humanity" to Felix Salmon's "Recipe for Disaster: The formula that killed Wall

Street". A simple "These are ridiculous statements" does not suffice. In this talk I will present my views

    on the above and discuss ways in which quantitative finance (financial engineering, financial and

    actuarial mathematics) has an important role to play going forward. Besides a brief discussion on some

general points underlying the financial crisis, I will also discuss some more technical issues, namely

    (i) the relevance of micro-correlation for pricing CDO tranches, (ii) dependence modeling beyond linear

    correlation and (iii) model uncertainty.

    Measuring Market Speed

    Kevin Sheppard

    Oxford University

    This paper defines a new concept of market speed based on the time required for prices to be

synchronized. A new methodology is developed which allows the time required for two assets to become fully

    synchronized. The speed of a market can be estimated from ultra high-frequency data. This

methodology is applied to the constituents of the S&P 500 using data from 1996 until 2009. I find that

the time required for markets to become synchronized has dropped from more than an hour to less

    than a minute. I explore factors which affect the speed of the market, and characteristics of firms which

    lead to slower synchronization times.

    Modeling Financial Contagion Using Mutually Exciting Jump Processes

Yacine Aït-Sahalia

    Princeton University

    As has become abundantly clear during the recent financial crisis, adverse shocks to stock markets

    propagate across the world, with a jump in one region of the world seemingly causing an increase in the

    likelihood of a different jump in another region of the world. To capture this effect, we propose a model

    for asset return dynamics with a drift component, a stochastic volatility component and mutually

    exciting jumps known as Hawkes processes. In the model, a jump in one region of the world increases

    the intensity of jumps occurring both in the same region (self-excitation) as well as in other regions

(cross-excitation), generating jump clustering. Jump intensities then mean-revert until the next jump. We develop and implement a GMM-based estimation procedure for this model, and show that the

    model fits the data well. The estimates provide evidence for self-excitation both in the US and the other

    world markets, and for asymmetric cross-excitation. Implications of the model for measuring market

    stress, risk management and optimal portfolio choice are also investigated.
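
Schematically, with M regions and jump times τ_{j,k} in region j, a mutually exciting (Hawkes) intensity of the kind described takes the form (notation mine)

$$ \lambda_i(t) \;=\; \lambda_{i,\infty} \;+\; \sum_{j=1}^{M} \beta_{ij} \sum_{\tau_{j,k} < t} e^{-\alpha_i\,(t - \tau_{j,k})}, $$

where the diagonal terms β_{ii} generate self-excitation, the off-diagonal terms β_{ij} generate cross-excitation, and the exponential kernel makes each intensity mean-revert toward λ_{i,∞} between jumps.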

Generalized Method of Moments with Tail Trimming

    Eric Renault

    University of North Carolina, Chapel Hill, and Brown University

    We develop a GMM estimator for stationary heavy tailed data by trimming an asymptotically vanishing

sample portion of the estimating equations. Trimming ensures the estimator is asymptotically normal,

and self-normalization implies we do not need to know the rate of convergence. Tail-trimming,

however, ensures asymmetric models are covered under rudimentary assumptions about the

thresholds, and it implies possibly heterogeneous convergence rates below, at or above √T. Further, it

implies super-√T-consistency is achievable depending on regressor and error tail thickness and feedback,

    with a rate arbitrarily close to the largest possible rate amongst untrimmed minimum distance

    estimators for linear models with iid errors, and a faster rate than QML for heavy tailed GARCH. In the

latter cases the optimal rate is achieved with the efficient GMM weight, and by using simple rules of

    thumb for choosing the number of trimmed equations. Simulation evidence shows the new estimator

    dominates GMM and QML when these estimators are not or have not been shown to be asymptotically

    normal, and for asymmetric GARCH models dominates a heavy tail robust weighted version of QML.

    (Joint with Jonathan B. Hill.)

Prices and sensitivities of barrier options near the barrier and convergence of Carr's randomization

    Sergei Levendorskii

    University of Leicester

The leading term of the asymptotics of prices and sensitivities of barrier options and first-touch digitals near

the barrier is derived for wide classes of Lévy processes with exponential jump densities, including the Variance

Gamma model, the KoBoL (a.k.a. CGMY) model and Normal Inverse Gaussian processes. In particular, it is

proved that the option's delta is unbounded for processes of infinite variation, and for processes of finite

    variation and infinite intensity, with zero drift and drift pointing from the barrier. Two-term asymptotic

    formulas are also derived. The convergence of prices, sensitivities and the first two terms of asymptotics

    in Carr's randomization algorithm is proved. Finally, it is proved that, in each case, and for any $m\in

Z_+$, the error of Carr's randomization approximation can be represented in the form $\sum_{j=1}^m

c_j(T,x)N^{-j} + O(N^{-m-1})$, where $N$ is the number of time steps. This justifies not only Richardson

    extrapolation but extrapolations of higher order as well.
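
Given an error expansion of the form V_N = V + c_1 N^{-1} + c_2 N^{-2} + ..., Richardson-type extrapolation is mechanical; a generic sketch, independent of any particular pricing engine (names mine):

import numpy as np

def richardson(values):
    """values[k] is the approximation computed with N * 2**k time steps and is assumed
    to satisfy V + c1/N + c2/N**2 + ...; each pass eliminates one further power of 1/N."""
    table = [list(values)]
    for m in range(1, len(values)):
        prev = table[-1]
        table.append([(2**m * prev[k + 1] - prev[k]) / (2**m - 1)
                      for k in range(len(prev) - 1)])
    return table[-1][-1]          # highest-order extrapolated estimate

# toy check: V_N = 1 + 1/N + 1/N**2 for N = 100, 200, 400 -> approximately 1.0
print(richardson([1 + 1/N + 1/N**2 for N in (100, 200, 400)]))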

    A Multifrequency Theory of the Interest Rate Term Structure

    Liuren Wu

    Zicklin School of Business, Baruch College

    We develop a class of no-arbitrage dynamic term structure models that are extremely parsimonious.

    The model employs a cascade structure to provide a natural ranking of the factors in terms of their

    frequencies, with merely five parameters to describe the interest rate time series and term structure

    behavior regardless of the dimension of the state vector. The dimension-invariance feature allows us to

    estimate low and high-dimensional models with equal ease and accuracy. With 15 LIBOR and swap rate

    series, we estimate 15 models with the dimension going from one to 15. The extensive estimation

    exercise shows that the 15-factor model significantly outperforms the other lower-dimensional

    specifications. The model generates mean absolute pricing errors less than one basis point, and

overcomes several known limitations of traditional low-dimensional specifications.

    Structural adaptive smoothing using the Propagation-Separation approach

Jörg Polzehl

    Weierstrass Institute for Applied Analysis and Stochastics (WIAS), Berlin

    The talk presents a class of structural adaptive smoothing methods developed at WIAS. The main focus

    will be on the Propagation-Separation (PS) approach proposed by Polzehl and Spokoiny (2006). The

    method allows to simultaneously identify regions of homogeneity with respect to a prescribed model

    (structural assumption) and to use this information to improve local estimates. This is achieved by an

iterative procedure. The name Propagation-Separation is a synonym for the two main properties of the algorithms. In case of homogeneity, that is, if the prescribed model holds with the same parameters

    within a large region, the algorithm essentially delivers a series of nonadaptive estimates with

    decreasing variance and propagates to the best estimate from this series. Separation means that, as

    soon as in two design points (X_i) and (X_j) significant differences are detected between estimates,

observations in (X_j) will not be used to estimate the parameter in (X_i). Both points are separated. The

    power of the approach will be demonstrated using examples from imaging. Current applications range

    from nonstationary time series to the analysis of functional Magnetic Resonance and Diffusion Weighted

    Magnetic Resonance experiments in neuroscience.

Bayesian Subset Selection in Regression Models

Albert Y. Lo

Hong Kong University of Science and Technology

    The selection of predictors to include is an important problem in building a multiple regression model.

    The Bayesian approach simply converts the problem into the elementary problem of evaluating

    conditional (or posterior) distributions and is desirable. This approach often assumes a normal error,

    which is a restriction. The Bayesian mixture method can be used to relax this restriction to allow for

    seemingly more realistic errors that are unimodal and/or symmetric. The main thrust of this method

essentially reduces an infinite-dimensional stochastic analysis problem of averaging random distributions to a finite-dimensional one based on averaging random partitions. The posterior

    distribution of the parameters is an average of random partitions. Nesting a Metropolis-Hastings

    algorithm within a weighted Chinese restaurant process of sampling partitions results in an MCMC,

    which provides a stochastic approximation to the posterior mode of the parameters. Numerical

    examples are given. (Joint with Baoqian Pao.)

    A Simple Semiparametrically Efficient Rank-Based Unit Root Test

    Bas Werker

University of Tilburg

    We propose a simple rank-based test for the unit root hypothesis. Our test is semiparametrically

    efficient if the model contains a non-zero drift, as is the case for many applications. Our test always

enjoys the advantages of a rank-based test, such as distribution-freeness and exact finite-sample size. The

    test, being semiparametrically efficient, outperforms the appropriate Dickey-Fuller test, in particular

    when errors have infinite variance. (Joint with Marc Hallin and Ramon van den Akker.)

    Wiener chaos, Malliavin calculus and central limit theorems

    Mark Podolskij

    University of Aarhus

    We present some recent results on central limit theorems for functionals of Gaussian processes. New

    necessary and sufficient conditions on the contraction operator and the Malliavin derivative are

    demonstrated. Finally, we show some illustrating examples.

    Jump Activity in High Frequency Financial Data

Yacine Aït-Sahalia

    Princeton University

    We propose statistical tests to discriminate between the finite and infinite activity of jumps in a

    semimartingale discretely observed at high frequency. The two statistics allow for a symmetric

treatment of the problem: we can either take the null hypothesis to be finite activity, or infinite activity. When implemented on high frequency stock returns, both tests point towards the presence of infinite

    activity jumps in the data. We then define a degree of activity for infinitely active jump processes, and

    propose estimators of that degree of activity. (Joint work with Jean Jacod).

Carr's randomization and new FFT techniques for fast and accurate pricing of barrier options

    Dmitri Boyarchenko

    University of Chicago

    I will explain how Carr's randomization approximation can be applied to the problem of pricing a knock-

    out option with one or two barriers in a wide class of models of stock prices used in mathematical

    finance. The approximation yields a backward induction procedure, each of whose steps can be

    implemented numerically with very high speed and precision. The resulting algorithms are significantly

    more efficient than the algorithms based on other approaches to the pricing of barrier options.

    In the first part of my talk I will focus on the classical Black-Scholes model and Kou's double-exponential

    jump-diffusion model, as well as a class of models that contains those two as special cases, namely, the

    hyper-exponential jump-diffusion (HEJD) models. For HEJD models, each step in the backward induction

    procedure for pricing a single or double barrier option can be made very explicit, so that the calculation

    of an option price using our method takes a small fraction of a second.

    In the second part of my talk I will discuss other prominent examples of models used in empirical studies

    of financial markets, including the Variance Gamma model and the CGMY model. In these examples, the

    aforementioned backward induction procedure can be reduced to computing a sequence of Fourier

    transforms and inverse Fourier transforms. However, the numerical calculation of Fourier transforms via

    FFT may lead to significant errors, which are often hard or impossible to control when standard FFT

    techniques are used. I will describe a new approach to implementing FFT techniques that allows one to

    control these errors without sacrificing the computational speed.

    The material I will present is based on joint works with Svetlana Boyarchenko (University of Texas at

    Austin) and Sergei Levendorskii (University of Leicester).

Why is Financial Market Volatility so High?

Robert Engle

New York University

    Taking risk to achieve return is the central feature of finance. Volatility is a way to measure risk and

    when it is changing over time the task is especially challenging. Measures of volatility are presented

    using up to date information on US Equity markets, bond markets, credit markets and exchange rates.

    Similar measures are shown for international equities. The economic causes of volatility are discussed in

    the light of new research in a cross country study. These are applied to the current economic scene.

Long run risks are then discussed from the same perspective. Two long-run risks are discussed: climate

    change and unfunded pension funds. Some policy suggestions are made to reduce these risks and

    benefit society today as well as in the future.

    Maximization by Parts in Extremum Estimation

    Eric Renault

    University of North Carolina at Chapel Hill

    In this paper, we present various iterative algorithms for extremum estimation in cases where direct

computation of the extremum estimator or via the Newton-Raphson algorithm is difficult, if not

impossible. While the Newton-Raphson algorithm makes use of the full Hessian matrix which may be

    difficult to evaluate, our algorithms use parts of the Hessian matrix only, the parts that are easier to

compute. We establish convergence and asymptotic properties of our algorithms under regularity conditions including the information dominance conditions. We apply our algorithms to the estimation

    of Merton's structural credit risk model. (Joint work with Yanqin Fan and Sergio Pastorello).

Financial Mathematics in the Unit Disc: Complexity Bounds for Price Discovery

    Andrew Mullhaupt

    SAC Capital Management

    We consider what sorts of stochastic processes can explain asset prices using tools of complex analysis

    and system theory, and a small amount of empirical evidence. The resulting processes are not the usual

    suspects of financial mathematics.

    Arbitrage bounds on the prices of vanilla options and variance swaps

    Mark H.A. Davis

    Imperial College London

    In earlier work with David Hobson (Mathematical Finance 2007) we established necessary and sufficient

    conditions under which a given set of traded vanilla option prices is consistent with an arbitrage-free

    model. Here we ask, given that these conditions are satisfied, what are the bounds on the price of a

    variance swap written on the same underlying asset (given that its price is a continuous function of

    time). It turns out that there is a non-trivial lower bound, computed by a dynamic programming

    algorithm, but there is no upper bound unless we impose further conditions on the price process. In

    view of the well-known connection between variance swaps and the log option, appropriate conditions

    relate to left-tail information on the price S(T) at the option exercise time T, such as existence of a

    conditional inverse power moment. One can also reverse the question and ask what information a

    variance swap provides about the underlying distribution.
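
The "well-known connection" invoked here is the log-contract replication of realized variance for continuous paths: by Itô's formula,

$$ \int_0^T \frac{d\langle S\rangle_t}{S_t^{2}} \;=\; -2\,\log\frac{S_T}{S_0} \;+\; 2\int_0^T \frac{dS_t}{S_t}, $$

so the variance swap payoff is a log payoff at T plus a dynamically rebalanced position in the underlying, which is why the bounds hinge on left-tail information about S(T).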

    Skewness and the Bubble

    Eric Ghysels

University of North Carolina at Chapel Hill and Federal Reserve Bank of New York

    We use a sample of option prices, and the method of Bakshi, Kapadia and Madan (2003), to estimate the

    ex ante higher moments of the underlying individual securities' risk-neutral returns distribution. We find

    that individual securities' volatility, skewness and kurtosis are strongly related to subsequent returns.

    Specifically, we find a negative relation between volatility and returns in the cross-section. We also find

    a significant relation between skewness and returns, with more negatively (positively) skewed returns

    associated with subsequent higher (lower) returns, while kurtosis is positively related to subsequent

    returns. To analyze the extent to which these returns relations represent compensation for risk, we use

    data on index options and the underlying index to estimate the stochastic discount factor over the 1996-

    2005 sample period, and allow the stochastic discount factor to include higher moments. We find

    evidence that, even after controlling for differences in co-moments, individual securities' skewness

    matters. However, when we combine information in the risk-neutral distribution and the stochastic

    discount factor to estimate the implied physical distribution of industry returns, we find little evidence

    that the distribution of technology stocks was positively skewed during the bubble period--in fact, these

    stocks have the lowest skew, and the highest estimated Sharpe ratio, of all stocks in our sample. (Joint

    with Jennifer Conrad and Robert Dittmar.)

    A New Approach For Modelling and Pricing Equity Correlation Swaps

Sébastien Bossu

    Columbia University

A correlation swap on N underlying stocks pays the average pairwise correlation coefficient of

    daily returns observed in a given time period. The pricing and hedging of this derivative instrument is

    non-trivial. We show how the payoff can be approximated as the ratio of two types of tradable variances

    in the special case where the underlying stocks are the constituents of an equity index, and proceed to

derive pricing and hedging formulas within a two-factor 'toy model'.
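
A hedged sketch of the type of variance-ratio approximation alluded to (index variance against the squared average constituent volatility); the exact payoff approximation and the two-factor model in the paper may differ:

import numpy as np

def realized_average_correlation(returns, weights):
    """Weighted average pairwise correlation of constituent daily returns.
    returns: T x N matrix, weights: index weights of the N constituents."""
    C = np.corrcoef(returns, rowvar=False)
    w = np.asarray(weights)
    ww = np.outer(w, w)
    num = (ww * C).sum() - np.sum(w**2)          # remove the diagonal (rho_ii = 1)
    den = ww.sum() - np.sum(w**2)
    return num / den

def implied_correlation_proxy(index_variance, stock_variances, weights):
    """Ratio-of-variances proxy: index variance over the squared weighted
    average of constituent volatilities (an illustrative approximation)."""
    w = np.asarray(weights)
    avg_vol = np.sum(w * np.sqrt(stock_variances))
    return index_variance / avg_vol**2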

The Mathematics of Liquidity and Other Matters Concerning Portfolio and Risk Management

    Ranjan Bhaduri

    Alphametrix

    This talk will cover liquidity matters. A game-theoretic example will demonstrate how it is easy for

    humans to underestimate the value of liquidity. Some problems in the hedge fund space concerning

    popular analytics will be explored, and new potential solutions such as liquidity buckets, liquidity

    derivatives, liquidity duration, and liquidity indices will be introduced. Applications of the Omega

    function, and some important insights on proper due diligence will be examined. The AlternativeEdge

    Short-Term Traders Index will also be discussed.

Approximations of Risk Neutral Measures and Derivatives Pricing

    Fangfang Wang

    University of North Carolina at Chapel Hill

    Risk neutral measures are a key ingredient of financial derivative pricing. Much effort has been devoted

    to characterizing the risk neutral distribution pertaining to the underlying asset. In this talk, we revisit

the class of Generalized Hyperbolic (GH) distributions and study their applications in option pricing.

Specifically, we narrow down to three subclasses: the Normal Inverse Gaussian distribution, the Variance

    Gamma distribution and the Generalized Skewed T distribution. We do this because of their appealing

    features in terms of tail behavior and analytical tractability in terms of moment estimation. Different

    from the existing literature on applying the GH distributions to option pricing, we adopt a simple

moment-based estimation approach to the specification of the risk neutral measure, which has an intuitive appeal in terms of how volatility, skewness and kurtosis of the risk neutral distribution can

    explain the behavior of derivative prices. We provide numerical and empirical evidence showing the

    superior performance of the Normal Inverse Gaussian distribution as an approximation compared to the

    existing methods and the other two distributions.

    Quasi-Maximum Likelihood Estimation of Volatility with High Frequency Data

    Dacheng Xiu

    Princeton University

    This paper investigates the properties of the well-known maximum likelihood estimator in the presence

    of stochastic volatility and market microstructure noise, by extending the classic asymptotic results of

    quasi-maximum likelihood estimation. When trying to estimate the integrated volatility and the variance

    of noise, this parametric estimator remains consistent, efficient and robust as a quasi-estimator under

    misspecified assumptions. A variety of Monte Carlo simulations show its advantage over the

    nonparametric Two Scales Realized Volatility estimator in terms of the efficiency and the small sample

    accuracy.

    Generalized Affine Models

    Nour Meddahi

Toulouse School of Economics

    Affine models are very popular in modeling financial time series as they allow for analytical calculation

    of prices of financial derivatives like treasury bonds and options. The main property of affine models is

that the conditional cumulant function, defined as the logarithm of the conditional characteristic

    function, is affine in the state variable. Consequently, an affine model is Markovian, like an

    autoregressive process, which is an empirical limitation. The paper generalizes affine models by adding

    in the current conditional cumulant function the past conditional cumulant function. Hence, generalized

    affine models are non-Markovian, such as ARMA and GARCH processes, allowing one to disentangle the

    short term and long-run dynamics of the process. Importantly, the new model keeps the tractability of

    prices of financial derivatives. This paper studies the statistical properties of the new model, derives its

    conditional and unconditional moments, as well as the conditional cumulant function of future

    aggregated values of the state variable which is critical for pricing financial derivatives. It derives the

    analytical formulas of the term structure of interest rates and option prices. Different estimating

    methods are discussed (MLE, QML, GMM, and characteristic function based estimation methods). Three

    empirical applications developed in companion papers are presented. The first one based on Feunou

    (2007) presents a no-arbitrage VARMA term structure model with macroeconomic variables and shows

    the empirical importance of the inclusion of the MA component. The second application based on

    Feunou and Meddahi (2007a) models jointly the high-frequency realized variance and the daily asset

    return and provides the term structure of risk measures such as the Value-at-Risk, which highlights the

    powerful use of generalized affine models. The third application based on Feunou, Christoffersen,

    Jacobs and Meddahi (2007) uses the model developed in Feunou and Meddahi (2007a) to price options

    theoretically and empirically. (Joint with Bruno Feunou).
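
A schematic reading of the construction described above, writing Ψ_t(u) = log E[exp(u X_{t+1}) | F_t] for the conditional cumulant function (notation mine):

$$ \text{affine:}\quad \Psi_t(u) = a(u) + b(u)\,X_t, \qquad\qquad \text{generalized affine:}\quad \Psi_t(u) = a(u) + b(u)\,X_t + c(u)\,\Psi_{t-1}(u), $$

so the lagged cumulant term plays the role the MA/GARCH recursion plays for ARMA and GARCH models, separating short-run from long-run dynamics while keeping derivative prices computable by iterating the recursion.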

The extremogram: a correlogram for extreme events

    Thomas Mikosch

University of Copenhagen

    We consider a strictly stationary sequence of random vectors whose finite-dimensional distributions are

    jointly regularly varying (regvar) with a positive index. This class of processes includes among others

    ARMA processes with regvar noise, GARCH processes with normal or student noise, and stochastic

    volatility models with regvar multiplicative noise. We define an analog of the autocorrelation function,

    the extremogram, which only depends on the extreme values in the sequence. We also propose a

    natural estimator for the extremogram and study its asymptotic properties under strong mixing. We

    show asymptotic normality, calculate the extremogram for various examples and consider spectral

    analysis related to the extremogram.
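
A minimal sketch of a sample upper-tail extremogram, with the extreme event taken to be an exceedance of a high empirical quantile (a simplified special case of the general definition; names mine):

import numpy as np

def sample_extremogram(x, max_lag, q=0.95):
    """Estimate P(X_{t+h} > u | X_t > u) for h = 1..max_lag, where u is the
    empirical q-quantile; an analog of the autocorrelation function for extremes."""
    x = np.asarray(x)
    u = np.quantile(x, q)
    exceed = x > u
    est = []
    for h in range(1, max_lag + 1):
        joint = np.sum(exceed[:-h] & exceed[h:])
        est.append(joint / max(np.sum(exceed[:-h]), 1))
    return np.array(est)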

    Fragile Beliefs and the Price of Uncertainty

    Lars Peter Hansen

    University of Chicago

    A representative consumer uses Bayes' law to learn about parameters and to construct probabilities

with which to perform ongoing model averaging. The arrival of signals induces the consumer to alter his posterior distribution over parameters and models. The consumer copes with the specification doubts

    by slanting probabilities pessimistically. One of his models puts long-run risks in consumption growth.

    The pessimistic probabilities slant toward this model and contribute a counter-cyclical and signal-

history-dependent component to prices of risk.

    Efficient estimation for discretely sampled ergodic SDE models

Michael Sørensen

    University of Copenhagen

    Simple and easily checked conditions are given that ensure rate optimality and efficiency of estimators

    for ergodic SDE models in a high frequency asymptotic scenario, where the time between observations

    goes to zero while the observation horizon goes to infinity. For diffusion models rate optimality is

    important because parameters in the diffusion coefficient can be estimated at a higher rate than

    parameters in the drift. The focus is on approximate martingale estimating functions, which provide

    simple estimators for many SDE models observed at discrete time points. In particular, optimal

    martingale estimating functions are shown to give rate optimal and efficient estimators. Explicit optimal

    martingale estimating functions are obtained for models based on Pearson diffusions, where the drift is

    linear and the squared diffusion coefficient is quadratic, and for transformations of these processes. This

class of models is surprisingly versatile. It will be demonstrated that explicit estimating functions can also be found for integrated Pearson diffusions and stochastic Pearson volatility models.
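
For orientation, an (approximate) martingale estimating function for observations X_{t_0},...,X_{t_n} of an ergodic SDE typically has the schematic form

$$ G_n(\theta) \;=\; \sum_{i=1}^{n} a\big(\Delta_i, X_{t_{i-1}};\theta\big)\Big[ f\big(X_{t_i};\theta\big) - \mathrm{E}_\theta\big( f(X_{t_i}) \mid X_{t_{i-1}} \big) \Big], $$

with the estimator solving G_n(θ) = 0; "optimal" refers to the choice of the weight functions a(·) that minimizes the asymptotic variance within this class.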

    Volatility and Covariation of Financial Assets: A High-Frequency Analysis

    Alvaro Cartea

    Birkbeck, University of London

    Using high frequency data for the price dynamics of equities we measure the impact that market

microstructure noise has on estimates of: (i) the volatility of returns; and (ii) the variance-covariance matrix

    of n assets. We propose a Kalman-filter-based methodology that allows us to deconstruct price series

    into the true efficient price and the microstructure noise. This approach allows us to employ volatility

    estimators that achieve very low Root Mean Squared Errors (RMSEs) compared to other estimators that

    have been proposed to deal with market microstructure noise at high frequencies. Furthermore, this

    price series decomposition allows us to estimate the variance covariance matrix of n assets in a more

    efficient way than the methods so far proposed in the literature. We illustrate our results by calculating

    how microstructure noise affects portfolio decisions and calculations of the equity beta in a CAPM

    setting.
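
A minimal sketch of the kind of state-space decomposition described (efficient log-price as a random walk observed with additive microstructure noise), filtered with a textbook Kalman recursion; the paper's actual specification and parameter estimation may differ, and q, r are placeholders:

import numpy as np

def kalman_filter_prices(y, q, r):
    """y: observed log-prices; q: variance of efficient-price innovations;
    r: variance of microstructure noise. Returns filtered efficient log-prices."""
    m, P = y[0], r                    # initial state estimate and variance
    out = np.empty(len(y))
    for t, obs in enumerate(y):
        P = P + q                     # predict: random-walk efficient price
        K = P / (P + r)               # Kalman gain for y_t = m_t + noise
        m = m + K * (obs - m)
        P = (1.0 - K) * P
        out[t] = m
    return out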

Multivariate Lévy-driven Stochastic Volatility Models - OU type and COGARCH

    Technical University of Munich

Multivariate extensions of two continuous time stochastic volatility models driven by Lévy processes -

    the Ornstein-Uhlenbeck type and the COGARCH model - are introduced. First, Ornstein-Uhlenbeck type

    processes taking values in the positive semi-definite matrices are defined using matrix subordinators

(special matrix-valued Lévy processes) and a special class of linear operators. Naturally these processes

can be used to describe the random evolution of a covariance matrix over time and we therefore use

    them in order to define a multivariate stochastic volatility model for financial data which generalises the

    popular univariate model introduced by Barndorff-Nielsen and Shephard. For this model we derive

    results regarding the second order structure, especially regarding the returns and squared returns,

    which leads to a GMM estimation scheme. Finally, we discuss the tail behaviour and extensions allowing

    to model long memory phenomena. Thereafter, an alternative stochastic volatility model driven only by

a single d-dimensional Lévy process - the multivariate COGARCH process - is introduced and analysed.

Nonparametric Testing for Multivariate Volatility Models

    Wolfgang Polonik

    Department of Statistics

    University of California, Davis

    A novel nonparametric methodology is presented that facilitates the investigation of different features

    of a volatility function. We will discuss the construction of tests for (i) heteroscedasticity, for (ii) a

bathtub shape, or a "smile effect", and (iii) a parametric volatility model, where interestingly the resulting test for a smile effect can be viewed as a nonparametric generalization of the well-known LR-

    test for constant volatility versus an ARCH model. The tests can also be viewed as tests for the presence

    of certain stochastic dominance relations between two multivariate distributions. The inference based

on those tests may be further enhanced through associated diagnostic plots. We will illustrate our

    methods via simulations and applications to real financial data. The large sample behavior of our test

    statistics is also investigated.

    This is joint work with Q. Yao, London School of Economics.

    Modeling and Analyzing High-Frequency Financial Data

    Yazhen Wang

    National Science Foundation and University of Connecticut

Volatilities of asset returns are central to the theory and practice of asset pricing, portfolio allocation, and risk management. In financial economics, there is extensive research on modeling and forecasting volatility based on Black-Scholes, diffusion, GARCH, and stochastic volatility models and option pricing formulas. Nowadays, thanks to technological innovations, high-frequency financial data are available for a host of different financial instruments on markets of all locations and at scales like individual bids to buy and sell, and the full distribution of such bids. The availability of high-frequency data stimulates an upsurge of interest in statistical research on better estimation of volatility. This talk will start with a review of low-frequency financial time series and high-frequency financial data. Then I will introduce popular realized volatility computed from high-frequency financial data and present my work on wavelet methods for analyzing jump and volatility variations and the matrix factor model for handling large volatility matrices. The proposed wavelet-based methodology can cope with both jumps in the price and market microstructure noise in the data, and estimate both volatility and jump variations from the noisy data. The matrix factor model is proposed to produce good estimators of large volatility matrices by attacking the non-synchronization problem in high-frequency price data and reducing the huge dimension (or size) of volatility matrices. Parts of my talk are based on joint work with Jianqing Fan, Qiwei Yao, and Pengfei Li.

What happened to the quants in August 2007?

Amir E. Khandani and Andrew W. Lo

    MIT

During the week of August 6, 2007, a number of high-profile and highly successful quantitative long/short equity hedge funds experienced unprecedented losses. Based on empirical results from TASS hedge-fund data as well as the simulated performance of a specific long/short equity strategy, we hypothesize that the losses were initiated by the rapid unwinding of one or more sizable quantitative equity market-neutral portfolios. Given the speed and price impact with which this occurred, it was likely the result of a sudden liquidation by a multi-strategy fund or proprietary-trading desk, possibly due to margin calls or a risk reduction. These initial losses then put pressure on a broader set of long/short and long-only equity portfolios, causing further losses on August 9th by triggering stop-loss and de-leveraging policies. A significant rebound of these strategies occurred on August 10th, which is also consistent with the sudden-liquidation hypothesis. This hypothesis suggests that the quantitative nature of the losing strategies was incidental, and the main driver of the losses in August 2007 was the firesale liquidation of similar portfolios that happened to be quantitatively constructed. The fact that the source of dislocation

in long/short equity portfolios seems to lie elsewhere---apparently in a completely unrelated set of markets and instruments---suggests that systemic risk in the hedge-fund industry may have increased in recent years.

A continuous time GARCH process driven by a Lévy process

    Alexander Lindner

Technische Universität München, University of Marburg, and University of Braunschweig, Germany

A continuous time GARCH process which is driven by a Lévy process is introduced. It is shown that this

    process shares many features with the discrete time GARCH process. In particular, the stationary

    distribution has heavy tails. Extensions of this process are also discussed. We then turn attention to

    some first estimation methods for this process, with particular emphasis on a generalized method of

    moment estimator. Finally, we also report on how the continuous time GARCH process approximates

    discrete time GARCH processes, when sampled at discrete times. The talk is based on joint work with

Stephan Haug (TU Munich), Claudia Klueppelberg (TU Munich) and Ross Maller (Australian National University).

    The Price Impact of Institutional Herding

    Amil Dasgupta

London School of Economics and CEPR

    We present a simple theoretical model of the price impact of institutional herding. In our model, career-

    concerned fund managers interact with profit-motivated proprietary traders and monopolistic market

    makers in a pure dealer-market. The reputational concerns of fund managers generate endogenous

    conformism, which, in turn, impacts the prices of the assets they trade. In contrast, proprietary traders

    trade in a contrarian manner. We show that, in markets dominated by fund managers, assets

    persistently bought (sold) by fund managers trade at prices that are too high (low) and thus experience

    negative (positive) long-term returns, after uncertainty is resolved. The pattern of equilibrium trade is

    also consistent with increasing (decreasing) short-term transaction- price paths during or immediately

    after an institutional buy (sell) sequence. Our results provide a simple and stylized framework within

which to interpret the empirical literature on the price impact of institutional herding. In addition, our

paper generates several new testable implications. (Joint with Andrea Prat and Michela Verardo.)

    Signing and Nearly-Gamma Random Variables

    Dale Rosenthal

    University of Chicago

    Many financial events involve delays. I consider data delays and propose metrics for other phenomena:

    the mean time to deletion from a financial index, the weighted-average prepayment time for a loan

    portfolio, and the weighted-average default time for a loan portfolio. Under reasonable conditions,

    these are all nearly-gamma distributed; thus various small-sample approximations are examined. This

    approach also yields a metric of loan portfolio diversity similar to one used in rating collateralized debt

    obligations. Finally, the approximations are used to create a model for signing trades. The model is

    flexible enough to encompass the midpoint, tick, and EMO methods and yields probabilities of correct

    predictions.
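
For context, the midpoint and tick rules mentioned above can be sketched as a simple classifier (this is the generic textbook logic, not the paper's nearly-gamma model):

def sign_trade(price, prev_price, bid=None, ask=None):
    """Return +1 (buyer-initiated), -1 (seller-initiated) or 0 (undetermined):
    quote-midpoint rule when quotes are available, tick test otherwise."""
    if bid is not None and ask is not None:
        mid = 0.5 * (bid + ask)
        if price > mid:
            return 1
        if price < mid:
            return -1
    if price > prev_price:            # tick test against the previous trade price
        return 1
    if price < prev_price:
        return -1
    return 0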

    Pricing American-Style Options by Monte Carlo Simulation: Alternatives to Ordinary Least Squares

    Stathis Tompaidis

University of Texas

    We investigate the performance of the Ordinary Least Squares (OLS) regression method in Monte Carlo

    simulation algorithms for pricing American options. We compare OLS regression against several

    alternatives and find that OLS regression underperforms methods that penalize the size of coefficient

    estimates. The degree of underperformance of OLS regression is greater when the number of simulation

paths is small, when the number of functions in the approximation scheme is large, when European option prices are included in the approximation scheme, and when the number of exercise

    opportunities is large. Based on our findings, instead of using OLS regression we recommend an

    alternative method based on a modification of Matching Projection Pursuit. (Joint with Chunyu Yang)
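
To make the "penalize the size of coefficient estimates" point concrete, here is a hedged sketch of a single regression step of a Longstaff-Schwartz-type algorithm with a ridge penalty replacing OLS; the basis, the penalty and all names are illustrative choices, not the paper's:

import numpy as np

def continuation_value_fit(S, discounted_cashflows, strike, ridge_lambda=1e-4):
    """Fit the continuation value on in-the-money paths (put example) with a cubic
    polynomial basis; ridge_lambda = 0 recovers the usual OLS regression step."""
    itm = S < strike
    X = np.column_stack([np.ones(itm.sum()), S[itm], S[itm]**2, S[itm]**3])
    y = discounted_cashflows[itm]
    A = X.T @ X + ridge_lambda * np.eye(X.shape[1])
    beta = np.linalg.solve(A, X.T @ y)
    return itm, X @ beta              # paths used and their fitted continuation values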

    Analysing Time Series with Nonstationarity: Common Factors and Curve Series

Qiwei Yao

London School of Economics

    We introduce two methods for modelling time series exhibiting nonstationarity. The first method is in

    the form of the conventional factor model. However the estimation is carried out via expanding the

    white noise space step by step, therefore solving a high-dimensional optimization problem by many low-

    dimensional sub-problems. More significantly it allows the common factors to be nonstationary.

    Asymptotic properties of the estimation were investigated. The proposed methodology was illustrated

    with both simulated and real data sets. The second approach is to accommodate some nonstationary

features into a stationary curved (or functional) time series framework. It turns out that the stationarity,

    though defined in a Hilbert space, facilitates the estimation for the dimension of the curved series in

    terms of a standard eigenanalysis.

Estimation in time series that are both nonlinear and nonstationary

    Dag Tjostheim

    University of Bergen

    Motivated by problems in nonlinear cointegration theory I will look at estimation in time series that are

    both nonlinear and nonstationary. The models considered are nonlinear generalizations of the random

    walk. Markov recurrence theory is used to establish asymptotic distributions. The emphasis is on

    nonparametric estimation, but I will also look at parametric estimation in a nonstationary threshold

    model.

Homogeneous Groups and Multiscale Intensity Models for Multiname Credit Derivatives

    Ronnie Sircar

    Princeton University

    The pricing of basket credit derivatives is contingent upon

    1. realistic modeling of the firms' default times and the correlation between them; and

    2. efficient computational methods for computing the portfolio loss distribution from the firms' marginal

    default time distributions.

    We revisit intensity-based models and, with the aforementioned issues in mind, we propose

    improvements

    1. via incorporating fast mean-reverting stochastic volatility in the default intensity processes; and

2. by considering a hybrid of a top-down and a bottom-up model with homogeneous groups within the original set of firms.

    We present a calibration example from CDO data, and discuss the relative performance of the

    approach.

    This is joint work with Evan Papageorgiou.

Modeling and Estimation of High-Dimensional Covariance Matrix for Portfolio Allocation and Risk

    Management

    Jianqing Fan

    Princeton University

    Large dimensionality comparable to the sample size is a common feature as in modern portfolio

    allocation and risk management. Motivated by the Capital Asset Pricing Model, we propose to use a

    multi-factor model to reduce the dimensionality and to estimate the covariance matrix among those

    assets. Under some basic assumptions, we have established the rate of convergence and asymptotic

    normality for the proposed covariance matrix estimator. We identify the situations under which the

factor approach can substantially improve performance and the cases where the gains are only marginal,

compared with the sample covariance matrix. We also introduce the concept of sparse portfolio allocation

    and propose two efficient algorithms for selecting the optimal subset of the portfolio. The risk of the

optimally selected portfolio is thoroughly studied and examined. The performance, in terms of risk and utility, of the sparsely selected portfolio is compared with the classical optimal portfolio of Markowitz

    (1952).
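
A hedged sketch of the generic factor-based covariance estimator motivated here, Sigma = B' cov(f) B + diag(residual variances), with observed factor returns; the paper's exact estimator and assumptions may differ:

import numpy as np

def factor_covariance(returns, factors):
    """returns: T x p matrix of asset returns; factors: T x K matrix of factor returns.
    Returns the p x p covariance estimate implied by the multi-factor model."""
    T = returns.shape[0]
    F = np.column_stack([np.ones(T), factors])           # add an intercept
    coef, *_ = np.linalg.lstsq(F, returns, rcond=None)   # (K+1) x p regression coefficients
    B = coef[1:]                                         # K x p factor loadings
    resid = returns - F @ coef
    Sigma_f = np.atleast_2d(np.cov(factors, rowvar=False))
    return B.T @ Sigma_f @ B + np.diag(resid.var(axis=0, ddof=1))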

    Optimal dividend payments and reinvestments of diffusion processes with both fixed and

    proportional costs

    Jostein Paulsen

    University of Bergen and University of Chicago

    Assets are assumed to follow a diffusion process subject to some conditions. The owners can pay

    dividends at their discretion, but whenever assets reach zero, they have to reinvest money so that

    assets never go negative. With each dividend payment there is a fixed and a proportional cost, and so

with reinvestments. The goal is to maximize the expected value of the discounted net cash flow, i.e. dividends paid minus reinvestments. It is shown that there can be two different solutions depending on the model

    parameters and the costs.

    1. Whenever assets reach a barrier they are reduced by a fixed amount through a dividend payment,

    and whenever they reach 0 they are increased to another fixed amount by a reinvestment.

    2. There is no optimal policy, but the value function is approximated by policies of the form described in

    Item 1 for increasing barriers. We provide criteria to decide whether an optimal solution exists, and

    when not, show how to calculate the value function. We discuss how the problem can be solved

    numerically and give numerical examples. The talk is based on a paper with the same title, to appear

    in the SIAM Journal on Control and Optimization.
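
    To make the first type of solution concrete, here is a Monte Carlo sketch of a barrier-type policy under a Brownian asset model with drift; the barrier, post-dividend level, reinvestment level, cost structure and discount rate are illustrative assumptions, not the optimal values characterized in the paper.

```python
import numpy as np

# Sketch: expected discounted net cash flow (dividends minus reinvestments) under a
# barrier-type policy. When assets reach the barrier b they are reduced to level l by a
# dividend; when they reach 0 they are reinvested up to level u. Fixed and proportional
# costs apply to both actions. All parameter values are illustrative assumptions.
rng = np.random.default_rng(2)
mu, sigma, r = 0.05, 0.20, 0.04            # asset drift, volatility, discount rate
b, l, u = 2.0, 1.0, 1.0                    # barrier, post-dividend level, reinvestment level
k_d, c_d = 0.05, 0.02                      # fixed and proportional dividend costs
k_r, c_r = 0.10, 0.05                      # fixed and proportional reinvestment costs
T, dt, n_paths = 10.0, 1 / 250, 2_000

x = np.full(n_paths, 1.5)                  # initial asset level
value = np.zeros(n_paths)
for step in range(1, int(T / dt) + 1):
    x += mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    disc = np.exp(-r * step * dt)
    hit_b = x >= b                         # pay a lump dividend down to l
    value[hit_b] += disc * ((1 - c_d) * (x[hit_b] - l) - k_d)
    x[hit_b] = l
    hit_0 = x <= 0.0                       # reinvest back up to u
    value[hit_0] -= disc * ((1 + c_r) * (u - x[hit_0]) + k_r)
    x[hit_0] = u

print(f"estimated expected discounted net cash flow: {value.mean():.3f}")
```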

    Hitting Time Problems with Applications to Finance and Insurance

    Sebastian Jaimungal

    University of Toronto

    The distribution of the first hitting time of a Brownian motion to a linear boundary is well known.

    However, if the boundary is no longer linear, this distribution is in general not known explicitly. Nonetheless,

    the boundary and the distribution satisfy a variety of beautiful integral equations due to Peskir. In this talk, I

    will discuss how to generalize those equations, leading to an interesting partial solution to the inverse

    problem: given a distribution of hitting times, what is the corresponding boundary? By randomizing the starting point of the Brownian motion, I will show how a kernel estimator of the distribution with

    gamma kernels can be exactly replicated.
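
    For the linear boundary that serves as the talk's starting point, the first-passage law is explicit. The sketch below evaluates the classical formula for a standard Brownian motion hitting the boundary a + b t (equivalently, a Brownian motion with drift -b hitting the level a) and checks it against simulation; the parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def linear_boundary_hitting_cdf(t, a, b):
    """P(tau <= t), where tau is the first time a standard Brownian motion hits a + b*t (a > 0).

    Equivalent to a Brownian motion with drift -b hitting the fixed level a:
    P(tau <= t) = Phi((-b*t - a)/sqrt(t)) + exp(-2*a*b) * Phi((b*t - a)/sqrt(t)).
    """
    t = np.asarray(t, dtype=float)
    return norm.cdf((-b * t - a) / np.sqrt(t)) + np.exp(-2.0 * a * b) * norm.cdf((b * t - a) / np.sqrt(t))

# Illustrative check against a crude simulation (parameter values are assumptions).
a, b, T, dt, n_paths = 1.0, 0.5, 5.0, 0.005, 20_000
rng = np.random.default_rng(3)
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, int(T / dt)))
paths = np.cumsum(increments, axis=1)
times = dt * np.arange(1, int(T / dt) + 1)
hit = (paths >= a + b * times).any(axis=1)
print(f"analytic  P(tau <= {T}) = {linear_boundary_hitting_cdf(T, a, b):.4f}")
print(f"simulated P(tau <= {T}) = {hit.mean():.4f}  (biased low by time discretization)")
```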

    Armed with these tools, there are two natural applications: one to finance and one to insurance. In the

    financial context, the Brownian motion may drive the value of a firm and through a structural modeling

    approach I will show how CDS spread curves can be matched. In the insurance context, suppose an

    individual's health declines by one unit per annum, with fluctuations induced by a Brownian motion, and

    once their health hits zero the individual dies. I will show how life-table data can be nicely explained by

    this model and illustrate how to perturb the distribution for pricing purposes.

    This is joint work with Alex Kreinin and Angelo Valov.

    Indifference pricing for general semimartingales

    Matheus Grasselli

    McMaster University


    We prove a duality formula for utility maximization with random endowment in general semimartingale

    incomplete markets. The analysis is based on the Orlicz space $L^{\hat u}$ naturally associated with the

    utility function $u$. Using this formulation we prove several key properties of the indifference price

    $\pi(B)$ for a claim $B$ satisfying conditions weaker than those assumed in the literature. In particular,

    the price functional $\pi$ turns out to be, apart from a sign, a convex risk measure on the Orlicz space

    $L^{\hat u}$. This is joint work with S. Biagini and M. Frittelli.
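
    For reference, a standard (buyer's) formulation of the indifference price, stated under the usual conventions for admissible self-financing strategies $H$ (this is the generic textbook definition, not the paper's exact set of hypotheses): $\pi(B)$ is the cash amount solving

    $$\sup_{H}\, E\Big[u\big(x - \pi(B) + (H \cdot S)_T + B\big)\Big] \;=\; \sup_{H}\, E\Big[u\big(x + (H \cdot S)_T\big)\Big],$$

    so that the agent is indifferent between buying the claim $B$ at the price $\pi(B)$ and not trading it at all.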

    Consistent Calibration of CDO Tranches with the Generalized-Poisson Loss dynamical model

    Damiano Brigo

    We consider a dynamical model for the loss distribution of a pool of names. The model is based on the

    notion of generalized Poisson process, allowing for the possibility of more than one jump in small time

    intervals. We introduce extensions of the basic model based on piecewise-gamma, scenario-based and

    CIR random intensity in the constituent Poisson processes. The models are tractable, pricing and, in

    particular, simulation are easy, and consistent calibration to quoted index CDO tranches and tranchelets

    for several maturities is feasible, as we illustrate with detailed numerical examples.
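
    A minimal sketch of the generalized-Poisson idea: the pool's default counter is a sum of independent Poisson processes with integer jump amplitudes, so more than one default can occur in a small time interval. The amplitudes and the constant intensities below are illustrative; in the talk the intensities are made stochastic (piecewise-gamma, scenario-based or CIR).

```python
import numpy as np

# Generalized-Poisson loss sketch: Z_t = sum_j alpha_j * N_j(t), where the N_j are
# independent Poisson processes. Jump amplitudes alpha_j and intensities lambda_j are
# illustrative constants here; the model in the talk makes the intensities stochastic.
rng = np.random.default_rng(4)
alphas = np.array([1, 2, 5, 10])             # defaults per jump of each component
lambdas = np.array([1.0, 0.3, 0.05, 0.01])   # constant intensities (illustrative)
T, n_paths, n_names = 5.0, 100_000, 125

counts = rng.poisson(lambdas * T, size=(n_paths, len(alphas)))
Z_T = np.minimum(counts @ alphas, n_names)    # cap the default counter at the pool size
loss_dist = np.bincount(Z_T, minlength=n_names + 1) / n_paths
print("P(no defaults by T)      =", loss_dist[0])
print("P(more than 20 defaults) =", loss_dist[21:].sum())
```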

    Filtering with Marked Point Process Observations: Applications to Ultra-High Frequency Data

    Yong Zeng

    Ultra-high-frequency (UHF) data is naturally modeled as a marked point process (MPP). In this talk, we

    propose a general filtering model for UHF data. The statistical foundations of the proposed model -

    likelihoods, posterior, likelihood ratios and Bayes factors - are studied. They are characterized by

    stochastic differential equations such as filtering equations. Convergence theorems for consistent,

    efficient algorithms are established. Two general approaches for constructing algorithms are discussed.

    One approach is Kushner's Markov chain approximation method, and the other is the sequential Monte

    Carlo (particle filtering) method. Simulation and real data examples are provided.
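
    To make the filtering setting concrete, here is a bootstrap particle filter for a toy version of the problem: a latent value follows a random walk, trades arrive after random waiting times, and each observed trade price is the latent value plus noise. The dynamics, noise level and particle count are illustrative assumptions, not the model of the talk.

```python
import numpy as np

# Bootstrap particle filter sketch for UHF-style observations: a latent log-value V
# evolves as a random walk between trade times, and each trade reports V plus noise.
# All dynamics and parameter choices below are illustrative assumptions.
rng = np.random.default_rng(5)
n_trades, sigma_v, sigma_obs, n_particles = 500, 0.02, 0.05, 2_000

durations = rng.exponential(1.0, n_trades)               # waiting times between trades
V = np.cumsum(sigma_v * np.sqrt(durations) * rng.normal(size=n_trades))   # latent value
Y = V + sigma_obs * rng.normal(size=n_trades)             # observed (noisy) trade prices

particles = np.zeros(n_particles)
estimates = np.empty(n_trades)
for k in range(n_trades):
    # Propagate particles through the latent dynamics over the waiting time.
    particles = particles + sigma_v * np.sqrt(durations[k]) * rng.normal(size=n_particles)
    # Weight by the observation likelihood, estimate, then resample (bootstrap filter).
    w = np.exp(-0.5 * ((Y[k] - particles) / sigma_obs) ** 2)
    w /= w.sum()
    estimates[k] = np.dot(w, particles)
    particles = rng.choice(particles, size=n_particles, replace=True, p=w)

print(f"RMSE of the filtered value estimate: {np.sqrt(np.mean((estimates - V) ** 2)):.4f}")
```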

    Maximum Drawdown, Directional Trading and Market Crashes

    Jan Vecer

    Maximum drawdown (MDD) measures the worst loss of the investor who follows a simple trading

    strategy of buying and subsequently selling a given asset within a fixed time horizon. MDD is

    becoming a popular performance measure since it can capture the size of the market drops. One can

    view the maximum drawdown as a contingent claim, and price and hedge it accordingly as a derivative

    contract. Similar contracts can be written on the maximum drawup (MDU). We show that buying a

    contract on MDD or MDU is equivalent to adopting a momentum trading strategy, while selling it

    corresponds to contrarian trading. Momentum and contrarian traders represent a larger group of

    directional traders. We also discuss how pricing and hedging the MDD contract can be used to measure the

    severity of the market crashes, and how MDD influences volatility evolution during the periods of

    market shocks.
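
    Computing the maximum drawdown and drawup of an observed path is straightforward; the short sketch below does so for a simulated geometric Brownian motion path with illustrative parameters.

```python
import numpy as np

def max_drawdown_and_drawup(prices):
    """Maximum drawdown and drawup of a price path: the largest drop from a running
    maximum and the largest rise from a running minimum."""
    prices = np.asarray(prices, dtype=float)
    mdd = np.max(np.maximum.accumulate(prices) - prices)
    mdu = np.max(prices - np.minimum.accumulate(prices))
    return mdd, mdu

# Illustrative use on a simulated geometric Brownian motion path (parameters assumed).
rng = np.random.default_rng(6)
n, mu, sigma, dt = 252, 0.05, 0.2, 1 / 252
logret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
path = 100 * np.exp(np.cumsum(logret))
print("MDD = %.2f, MDU = %.2f" % max_drawdown_and_drawup(path))
```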


    Long Run Risk

    Lars Peter Hansen (University of Chicago) (joint with Jose Scheinkman, Princeton University)

    We create an analytical structure that reveals the long run risk-return relationship for nonlinear

    continuous time Markov environments. We do so by studying an eigenvalue problem associated with a

    positive eigenfunction for a conveniently chosen family of valuation operators. This family forms a

    semigroup whose members are indexed by the elapsed time between payoff and valuation dates. We

    represent the semigroup using a positive process with three components: an exponential term

    constructed from the eigenvalue, a martingale and a transient eigenfunction term. The eigenvalue

    encodes the risk adjustment, the martingale alters the probability measure to capture long run

    approximation, and the eigenfunction gives the long run dependence on the Markov state. We establish

    existence and uniqueness of the relevant eigenvalue and eigenfunction. By showing how changes in the

    stochastic growth components of cash flows induce changes in the corresponding eigenvalues and

    eigenfunctions, we reveal a long-run risk-return tradeoff.
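
    A finite-state caricature of the eigenvalue problem may help fix ideas: for a Markov chain with transition matrix P and one-period stochastic discount factors S(i, j), the valuation operator acts as (Ve)(i) = sum_j P(i, j) S(i, j) e(j), and its principal eigenvalue and positive eigenvector play the roles of the eigenvalue and eigenfunction above. The chain and discount factors below are illustrative, not the continuous-time environment of the paper.

```python
import numpy as np

# Principal eigenvalue/eigenfunction of a one-period valuation operator on a
# finite-state Markov chain: (Ve)(i) = sum_j P[i, j] * S[i, j] * e(j).
# The transition matrix and stochastic discount factors below are illustrative.
P = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])          # transition probabilities
S = np.array([[0.99, 0.97, 0.95],
              [0.98, 0.96, 0.94],
              [0.97, 0.95, 0.93]])          # one-period stochastic discount factors
M = P * S                                   # entrywise product gives the valuation operator

eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)                 # Perron root: the dominant eigenvalue
rho = eigvals[k].real
e = eigvecs[:, k].real
e = e * np.sign(e[0])                       # fix the sign so the eigenfunction is positive
print("long-run yield -log(rho):", -np.log(rho))
print("positive eigenfunction e:", e / e[0])
```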

    Accounting for Nonstationarity and Heavy Tails in Financial Time Series With Applications to Robust

    Risk Management

    Ying Chen (Humboldt University, Berlin) (joint with Vladimir Spokoiny)

    In the ideal Black-Scholes world, financial time series are assumed to be 1) stationary (time homogeneous)

    and 2) conditionally normally distributed given the past. These two assumptions are

    widely used in methods such as RiskMetrics, a risk management method regarded as an

    industry standard. However, these assumptions are unrealistic. The primary aim of the paper is to

    account for nonstationarity and heavy tails in time series by presenting a local exponential smoothing

    approach, by which the smoothing parameter is adaptively selected at every time point and the heavy-tailedness of the process is considered. A complete theory addresses both issues. In our study, we

    demonstrate the implementation of the proposed method in volatility estimation and risk management

    given simulated and real data. Numerical results show the proposed method delivers accurate and

    sensitive estimates.
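
    As a point of reference, the fixed-parameter exponential smoothing (RiskMetrics-style) volatility estimator that the paper localizes is sketched below; the smoothing constant 0.94 is the conventional RiskMetrics choice, whereas in the proposed method the parameter is selected adaptively at every time point.

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """Fixed-parameter exponentially weighted volatility (RiskMetrics-style baseline).
    The adaptive method in the talk replaces the fixed lam with a data-driven,
    time-varying choice; this sketch is only the non-adaptive benchmark."""
    var = np.empty(len(returns))
    var[0] = returns[0] ** 2
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t] ** 2
    return np.sqrt(var)

# Illustrative use on simulated heavy-tailed (Student-t) returns.
rng = np.random.default_rng(7)
rets = 0.01 * rng.standard_t(df=4, size=1000)
vol = ewma_volatility(rets)
print("last estimated daily volatility:", vol[-1])
```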

    Hedging and portfolio optimization in a Levy market model

    David Nualart (Department of Mathematics, University of Kansas)

    We consider a market model where the stock price process is a geometric Levy process. In general, this

    model is not complete and there are multiple martingale measures. We will present the completion of this market with a series of assets related to the power-jump processes of the underlying Levy process.

    On the other hand, we will discuss the maximization of the expected utility of portfolios based on these

    new assets. In some particular cases we obtain optimal portfolios based on stocks and bonds, showing

    that the new assets are superfluous for certain martingale measures that depend on the utility function

    we use.


    Nonparametric Regression Models for Nonstationary Variables with Applications in Economics and

    Finance

    Zongwu Cai

    In this talk, I will discuss how to use a nonparametric regression model to forecast

    nonstationary economic and financial data, for example, to forecast the inflation rate using the velocity

    variable in economics and to test the predictability of stock returns using the log dividend-price

    ratio and/or the log earnings-price ratio and/or the three-month T-bill and/or the long-short yield

    spread.

    A local linear approach is developed to estimate the unknown functionals. The consistency and

    asymptotic normality of the proposed estimators are obtained. Our asymptotic results show that the

    asymptotic bias is the same for all estimators of the coefficient functions, but the convergence rates are entirely

    different for stationary and nonstationary covariates. The convergence rate for the estimators of the

    coefficient functions with nonstationary covariates is faster than that with stationary covariates by a

    factor of $n^{-1/2}$. This finding appears to be new and leads to a two-stage approach to improve the

    estimation efficiency.
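
    For concreteness, the basic local linear estimator of a regression function at a point (Gaussian kernel, fixed bandwidth) is sketched below with a stationary covariate and a hand-picked bandwidth; it is only the building block that the talk adapts to functional-coefficient models with nonstationary covariates.

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel and bandwidth h:
    weighted least squares of y on (1, x - x0); the fitted intercept is m_hat(x0)."""
    u = (x - x0) / h
    w = np.exp(-0.5 * u ** 2)                 # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]

# Illustrative use: recover a smooth function from noisy data (all values assumed).
rng = np.random.default_rng(8)
x = rng.uniform(-2, 2, 400)
y = np.sin(x) + 0.2 * rng.normal(size=400)
grid = np.linspace(-1.5, 1.5, 7)
print([round(local_linear(x, y, g, h=0.3), 3) for g in grid])
```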

    When the coefficient function is a function of a nonstationary variable, our new findings are that the

    asymptotic bias term is the same as in the stationary case, but the convergence rate is different and,

    further, the asymptotic distribution is not normal but mixed normal, associated with the local time of

    a standard Brownian motion. Moreover, the asymptotic behaviors at boundaries are investigated. The

    proposed methodology is illustrated with an economic time series, which exhibits nonlinear and

    nonstationary behavior.

    This is joint work with Qi Li, Department of Economics, Texas A&M University, and Peter M. Robinson,

    Department of Economics, London School of Economics.

    Robust asset allocation using benchmarking

    Andrew Lim (University of California, Berkeley)

    In this paper, we propose and analyze a new approach to finding robust portfolios for asset allocation

    problems. It differs from the usual worst case approach in that a (dynamic) portfolio is evaluated not

    only by its performance when there is an adversarial opponent (``nature"), but also by its performance

    relative to a fully informed benchmark investor who behaves optimally given complete knowledge of the

    model (i.e. nature's decision). This relative performance approach has several important properties: (i)

    optimal decisions are less pessimistic than portfolios obtained from the usual worst case approach, (ii)

    the dynamic problem reduces to a convex static optimization problem under reasonable choices of the benchmark portfolio for important classes of models, including ambiguous jump-diffusions, and (iii) this

    static problem is dual to a Bayesian version of a single period asset allocation problem where the prior

    on the unknown parameters (for the dual problem) corresponds to the Lagrange multipliers in this

    duality relationship.

    Models for Option Prices


    Jean Jacod (Universite Paris VI) (joint with Philip Protter, Cornell University)

    In order to remove the market incompleteness inherent to a stock price model with stochastic volatility

    and/or with jumps, we suggest to model at the same time the price of the stock and the prices of

    sufficiently many options, in a way similar to what the Heath-Jarrow-Morton model achieves for interest

    rates. This gives rise to a number of mathematical difficulties, pertaining to the fact that option prices should be equal to, or closely related to, the stock price near maturity. Some of these difficulties are

    solved, under appropriate assumptions, and then we get a model for which completeness holds.

    The Kelly Criterion and its variants: theory and practice in sports, lottery, futures, and options trading

    William Ziemba (University of British Columbia)

    In capital accumulation under uncertainty, a decision-maker must determine how much capital to invest

    in riskless and risky investment opportunities over time. The investment strategy yields a stream of

    capital, with investment decisions made so that the dynamic distribution of wealth has desirable

    properties. The distribution of accumulated capital at a fixed point in time and the distribution of the first passage time to a fixed level of accumulated capital are variables controlled by the investment

    decisions. An investment strategy which has many attractive and some not attractive properties is the

    growth optimal strategy, where the expected logarithm of wealth is maximized. This strategy is also

    referred to as the Kelly strategy. It maximizes the rate of growth of accumulated capital. With the Kelly

    strategy, the first passage time to arbitrary large wealth targets is minimized, and the probability of

    reaching those targets is maximized. However, the strategy is very aggressive since the Arrow-Pratt risk

    aversion index is essentially zero. Hence, the chances of losing a substantial portion of wealth are very

    high, particularly if the estimates of the returns distribution are in error. In the time domain, the chances

    are high that the first passage to subsistence wealth occurs before achieving the established wealth

    goals. We survey the theoretical results and practical uses of the capital growth approach. Alternative

    formulations for capital growth models in discrete and continuous time are presented. Various criteria

    for performance and requirements for feasibility are related in an expected utility framework. Typically,

    there is a trade-off between growth and security with a fraction invested in an optimal growth portfolio

    determined by the risk aversion criteria. Models for calculating the optimal fractional Kelly investment

    with alternative performance criteria are formulated. The effect of estimation and modeling error on

    strategies and performance is discussed. Various applications of the capital growth approach are made

    to futures trading, lotto games, horseracing, and the fundamental problem of asset allocation between

    stocks, bonds and cash. We conclude with a discussion of some of the great investors and speculators,

    and how they used Kelly and fractional Kelly strategies in their investment programs.
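
    The simplest instance of the growth-optimal rule, for a repeated bet paying b-to-1 with win probability p, together with fractional Kelly scaling, is sketched below; the numbers in the example are illustrative.

```python
# Kelly fraction for a bet paying b-to-1 with win probability p, plus fractional Kelly.
def kelly_fraction(p, b):
    """Growth-optimal fraction of wealth to wager: f* = p - (1 - p)/b (0 if the edge is negative)."""
    return max(0.0, p - (1.0 - p) / b)

def fractional_kelly(p, b, fraction=0.5):
    """Scaled-down ('fractional Kelly') bet, trading growth for security."""
    return fraction * kelly_fraction(p, b)

# Illustrative example: 55% win probability at even odds.
p, b = 0.55, 1.0
print("full Kelly:", kelly_fraction(p, b))          # 0.10 of wealth
print("half Kelly:", fractional_kelly(p, b, 0.5))   # 0.05 of wealth
```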

    Practical experiences in financial markets using Bayesian forecasting systems

    Bluford H. Putnam, EQA Partners, L.P.

    Going from theory to practice can be exciting when real money is on the line. This presentation itemizes

    and discusses from a theoretical and practical perspective a list of lessons learned from 20 years of


    investing using Bayesian statistical forecasting techniques linked to mean-variance optimization systems

    for portfolio construction. Several simulations will be provided to illustrate some of the key points

    related to risk management, time decay of factor data, and other lessons from practical experience. The

    forecasting models focus on currencies, global government benchmark bonds, major equity indices, and

    a few commodities. The models use Bayesian inference (1) in the estimation of factor coefficients for the

    estimation of future excess returns for securities and (2) in the estimation of the forward-looking

    covariance matrix used in the portfolio optimization process. Zellner's seemingly unrelated regressions method is

    also used, as is Bayesian shrinkage. The mean-variance methodology uses a slightly modified objective

    function to go beyond the risk-return trade-off and also penalize transactions costs and size-unbalanced

    portfolios. The portfolio optimization process is not constrained except for the list of allowable securities

    in the portfolio, given the objective function. This is a multi-model approach, as experience has rejected

    the "Holy Grail" approach of building one model for all seasons, so several distinct and

    stylized models will be discussed.
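
    A stripped-down version of the modified objective described above is sketched below: mean-variance utility penalized by transaction costs relative to the current portfolio, solved with a generic numerical optimizer. The forecasts, covariance, penalty coefficients and the sum-to-one constraint are illustrative assumptions, not the firm's actual system.

```python
import numpy as np
from scipy.optimize import minimize

# Mean-variance objective with a transaction-cost penalty on trades away from the
# current portfolio w_prev. All inputs below are illustrative assumptions.
mu = np.array([0.04, 0.06, 0.02, 0.05])       # forecast excess returns
Sigma = np.diag([0.04, 0.09, 0.01, 0.06])     # forecast covariance matrix
w_prev = np.array([0.25, 0.25, 0.25, 0.25])   # current holdings
gamma, kappa = 4.0, 0.02                      # risk aversion, transaction-cost penalty

def neg_objective(w):
    return -(w @ mu - 0.5 * gamma * w @ Sigma @ w - kappa * np.abs(w - w_prev).sum())

res = minimize(neg_objective, w_prev,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("optimal weights:", np.round(res.x, 3))
```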

    Pricing Credit Derivatives and Measuring Credit Risk in Multifactor Models

    Paul Glasserman, Columbia Business School

    The Gaussian copula remains a standard model for pricing multi-name credit derivatives and measuring

    portfolio credit risk. In practice, the model is most widely used in its single-factor form, though this

    model is too simplistic to match the pattern of implied correlations observed in market prices of CDOs,

    and too simplistic for credible risk measurement. We discuss the use of multifactor versions of the

    model. An obstacle to using a multifactor model is the efficient calculation of the loss distribution. We

    develop two fast and accurate approximations for this problem. The first method is a correlation

    expansion technique that approximates multifactor models in powers of a parameter that scales

    correlation; this reduces pricing to more tractable single-factor models. The second method

    approximates the characteristic function of the loss distribution in a multifactor model and applies

    numerical transform inversion. We analyze the errors in both methods and illustrate their performance

    numerically. This talk is based on joint work with Sira Suchintabandid.
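
    As a baseline for the multifactor extensions discussed in the talk, the single-factor Gaussian copula loss distribution for a homogeneous pool can be computed by conditioning on the common factor and integrating with Gauss-Hermite quadrature; the pool size, default probability and correlation below are illustrative.

```python
import numpy as np
from scipy.stats import norm, binom

# Single-factor Gaussian copula loss distribution for a homogeneous pool of n names,
# computed by conditioning on the common factor Z and integrating over its Gaussian law
# with Gauss-Hermite quadrature. Pool size, PD and correlation are illustrative.
n_names, pd, rho = 100, 0.02, 0.3

nodes, weights = np.polynomial.hermite_e.hermegauss(60)  # quadrature for weight exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)                   # renormalize to the N(0,1) density

# Conditional default probability: p(z) = Phi((Phi^{-1}(pd) - sqrt(rho)*z) / sqrt(1 - rho)).
p_z = norm.cdf((norm.ppf(pd) - np.sqrt(rho) * nodes) / np.sqrt(1 - rho))

# Loss distribution: mixture of conditional Binomial(n, p(z)) laws over the factor.
k = np.arange(n_names + 1)
loss_pmf = np.array([np.sum(weights * binom.pmf(ki, n_names, p_z)) for ki in k])
print("P(L = 0)  =", loss_pmf[0])
print("P(L > 10) =", loss_pmf[11:].sum())
print("E[L]      =", (k * loss_pmf).sum(), "(should be near", n_names * pd, ")")
```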

    How Do Waiting Times or Duration Between Trades of Underlying Securities Affect Option Prices?

    Alvaro Cartea and Thilo Meyer-Brandis

    We propose a model for stock price dynamics that explicitly incorporates (random) waiting times, also

    known as duration, and show how option prices are calculated. We use ultra-high frequency data for

    blue-chip companies to justify a particular choice of waiting time or duration distribution and then

    calibrate risk-neutral parameters from options data. We also show that implied volatilities may be

    explained by the presence of duration between trades.
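
    A minimal simulation of the kind of price process described above: trades arrive after random waiting times and the log price moves by a Gaussian increment at each trade. The exponential duration law and all parameter values are placeholder assumptions; the paper instead fits the duration distribution to ultra-high-frequency data.

```python
import numpy as np

# Sketch of a price process with random waiting times (durations) between trades:
# the log price jumps by a Gaussian increment at each trade time. The exponential
# duration law and all parameter values are illustrative assumptions, not the paper's fit.
rng = np.random.default_rng(9)
n_trades, mean_duration, sigma_trade, s0 = 10_000, 2.0, 0.0002, 100.0

durations = rng.exponential(mean_duration, n_trades)      # seconds between trades
trade_times = np.cumsum(durations)
log_returns = sigma_trade * rng.normal(size=n_trades)
prices = s0 * np.exp(np.cumsum(log_returns))

# Realized variance per unit of calendar time depends on how trades cluster in time,
# which is one channel through which durations can show up in implied volatility.
horizon = trade_times[-1]
print("trades per second:", n_trades / horizon)
print("annualized realized vol (calendar time):",
      np.sqrt(np.sum(log_returns ** 2) / horizon * 252 * 6.5 * 3600))
```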