ESSAYS IN PORTFOLIO CREDIT RISK
A DISSERTATION
SUBMITTED TO THE DEPARTMENT OF MANAGEMENT SCIENCE AND ENGINEERING
AND THE COMMITTEE ON GRADUATE STUDIES
OF STANFORD UNIVERSITY
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
Baeho Kim
February 2010
http://creativecommons.org/licenses/by-nc/3.0/us/
This dissertation is online at: http://purl.stanford.edu/bg468jw7546
© 2010 by Baeho Kim. All Rights Reserved.
Re-distributed by Stanford University under license with the author.
This work is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License.
I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Kay Giesecke, Primary Adviser
I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
James Primbs
I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
John Weyant
Approved for the Stanford University Committee on Graduate Studies.
Patricia J. Gumport, Vice Provost Graduate Education
This signature page was generated electronically upon submission of this dissertation in electronic format. An original signed hard copy of the signature page is on file in University Archives.
Abstract
This dissertation considers the measurement and management of portfolio credit risk.
Collateralized debt obligations, which are securities with payoffs that are tied to the
cash flows in a portfolio of defaultable assets such as corporate bonds, play a signif-
icant role in the financial crisis that has spread throughout the world. Insufficient
capital provisioning due to flawed and overly optimistic risk assessments is at the
center of the problem. In the first part of the dissertation, we develop stochastic
methods to measure the risk of positions in collateralized debt obligations and re-
lated instruments tied to an underlying portfolio of defaultable assets. We propose
an adaptive point process model of portfolio default timing, a maximum likelihood
method for estimating point process models that is based on an acceptance/rejection
re-sampling scheme, and statistical tests for model validation. To illustrate these
tools, we use them to estimate the distribution of the profit or loss generated by
positions in multiple tranches of a collateralized debt obligation that references the
CDX High Yield portfolio, and the risk capital required to support these positions.
The second part of the dissertation develops maximum likelihood estimators of the
term structure of systemic risk in the U.S. financial sector, defined as the conditional
probability of failure of a large number of financial institutions. The estimators are
based on a new dynamic hazard model of failure timing that captures the influence
of time-varying macro-economic and sector-specific risk factors on the likelihood of
failures, and the impact of risk spillovers due to contagion or incomplete information
about relevant risk factors. The estimation results, which cover the period January
1987 to December 2008, provide strong evidence for the presence of failure clustering
not caused by variations in the observable explanatory covariates, which include the
trailing return on the S&P 500 index, the lagged slope of the U.S. yield curve, the
default and TED spreads, and other sector-specific variables.
Acknowledgments
I would first like to express my immeasurable gratitude to my advisor, Professor Kay
Giesecke. I have been very fortunate to have him as my principal advisor during my
graduate studies. His expertise in credit risk modeling and analysis has been truly
invaluable; this dissertation would not have been possible without him. I am also
indebted to the other members of my reading committee, Professor James A. Primbs
and Professor John Weyant, whom I thank for their helpful advice. Professor Primbs
introduced me to the field of financial engineering, and the fundamentals I learned
from him are used extensively throughout this dissertation. Professor Weyant offered
considerable economic insight, which helped me find a suitable topic for my research.
Professor Gerd Infanger served on my oral exam committee, and Professor
Antoine Toussaint served as the oral exam chair. I would like to thank Professor
Infanger for his valuable input as an expert in stochastic optimization, and Professor
Toussaint for his comments from the perspective of mathematical finance. In addition,
I benefited from the guidance of a number of professors at Stanford, whose invaluable
classes not only advanced my research but also broadened my knowledge of related
fields.
It was great fun to have wonderful friends and colleagues in the MS&E department
at Stanford. I am deeply thankful for helpful discussions with Jack Kim, Xiaowei
Ding, Shahriar Azizpour, Eymen Errais, Matt A.V. Leduc and Mohammad Mousavi,
among others. I would like to extend my gratitude to my Korean friends at Cornerstone
Community Church for their moral support. They include, but are not limited to,
Pastor Hun Sol, Taikyun Shin, Minyong Shin, Sewook Wee, Jihyung Yoo,
Junghyun Kim, Yongkyun Na, Wonyoung Lee, Younggeun Cho and many others. I
know they prayed that I would have the courage to move forward and finish my studies.
I am also grateful to the Samsung Scholarship Foundation for its financial support
during most of my studies at Stanford.
In addition, I would like to especially thank my parents Myung-hwan Kim and
Young-sook Jeun, and my sister Baeok Kim, for their unconditional love, encourage-
ment and support throughout my entire education, which led to my Ph.D. degree at
Stanford. Moreover, I would like to acknowledge and thank my wife Dawoon Jung.
Without her support, understanding and sacrifice, this dissertation would have been
an impossible task. I dedicate this dissertation to her and my lovely son, Daniel J.
Kim. Last, but not least, I praise the Lord, especially for having given me the
privilege of living my life within His glory.
Contents

Abstract
Acknowledgments
1 Introduction
2 Risk Analysis of CDOs
  2.1 Introduction
  2.2 Re-sampling based inference
    2.2.1 Preliminaries and problem
    2.2.2 Acceptance/rejection re-sampling
    2.2.3 Thinning process specification
    2.2.4 Likelihood estimators and fitness tests
    2.2.5 Portfolios without replacement
  2.3 An adaptive intensity model
    2.3.1 Re-sampling scenarios
    2.3.2 Intensity specification
    2.3.3 Event and loss simulation
    2.3.4 Likelihood estimators
    2.3.5 Testing in-sample fit
    2.3.6 Testing out-of-sample loss forecasts
  2.4 Synthetic collateralized debt obligations
  2.5 Cash collateralized debt obligations
  2.6 Conclusion
3 Systemic Risk: What Defaults Are Telling Us
  3.1 Introduction
    3.1.1 Related literature
  3.2 Measures of systemic risk
  3.3 Statistical methodology
    3.3.1 Economy-wide default timing
    3.3.2 System-wide default timing
    3.3.3 Measures of risk
  3.4 Empirical analysis
    3.4.1 Default timing data
    3.4.2 Covariates
    3.4.3 Economy-wide intensity
    3.4.4 Goodness-of-fit tests
    3.4.5 System-wide intensity
  3.5 Systemic risk
    3.5.1 Risk measures
    3.5.2 Forecast evaluation
  3.6 Sensitivity of systemic risk
  3.7 Conclusion
Appendix
  A Risk-neutral tranche loss distributions
  B Cash CDO prioritization schemes
    B1 Uniform prioritization
    B2 Fast prioritization
  C Cash CDO valuation
  D Covariate time-series model
  E Default volume model
Bibliography
List of Tables
2.1 Maximum likelihood estimates of the parameters of the intensity λ for
the CDX.HY6 portfolio, along with estimates of asymptotic (A) and
bootstrapping (B) 95% confidence intervals (10K bootstrap samples
were used). The “Median” row indicates the median of the empirical
distribution of the per-path MLEs over all re-sampling paths. The
estimates are based on I = 10K re-sampling scenarios, generated by
Algorithm 1 from the observed defaults in the universe of Moody’s
rated names from 1/1/1970 to 11/7/2008. . . . . . . . . . . . . . . . 25
2.2 Maximum likelihood estimates of the parameters of the portfolio inten-
sity λ for the CDX.HY6, along with estimates of asymptotic (A) and
bootstrapping (B) 95% confidence intervals (10K bootstrap samples
were used). The estimates are based on I = 10K re-sampling scenar-
ios for N , generated by Algorithm 1 from the observed economy-wide
defaults in Moody’s universe of rated names from 1/1/1970 to 3/27/2006. 32
2.3 Fitted annualized par coupon rates and spreads on 3/27/2006 for the
constituent bonds and debt tranches of a 5 year cash CDO referenced
on the CDX.HY6, for each of two standard prioritization schemes. The
principal values are (p1, p2, p3) = (85, 10, 5). The fitting procedure is
described in Appendix C. . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.1 Maximum likelihood estimates (MLE) of economy-wide intensity pa-
rameters, asymptotic standard errors (SE), t-statistics (t-stat), and
Bayes factor statistics (Ψ). . . . . . . . . . . . . . . . . . . . . . . . . 62
3.2 Goodness-of-fit tests of the economy-wide intensity. . . . . . . . . . . 64
3.3 Maximum likelihood estimates of the coefficients β of the covariate pro-
cess X governing the thinning process Z in (3.4), asymptotic standard
errors (SE), t-statistics, p-values, and Bayes factor statistics (Ψ). . . . 66
3.4 Out-of-sample tests of the forecast accuracy of the fitted system-wide
value at risk, for each of several horizons. The period considered is
January 1998 to June 2009. . . . . . . . . . . . . . . . . . . . . . . . 74
A1 Market data from Morgan Stanley and fitting results for 5 year index
and tranche swaps referenced on the CDX.HY6 on 3/27/2006. The
index, (15 − 25%) and (25 − 35%) contracts are quoted in terms of
a running rate S stated in basis points (10^{-4}). For these contracts
the upfront rate G is zero. The (0, 10%) and (10, 15%) tranches are
quoted in terms of an upfront rate G. For these contracts the running
rate S is zero. The values in the column Model are fitted rates based
on model (A4) and 100K replications. We report the minimum value
of the objective function MinObj and the average absolute percentage
error AAPE relative to market mid quotes. . . . . . . . . . . . . . . . 81
A2 Estimates of the parameters of the risk-neutral portfolio intensity λQ,
obtained from market rates of 5 year index and tranche swaps refer-
enced on the CDX.HY6 on 3/27/2006. . . . . . . . . . . . . . . . . . 81
E1 Fitted coefficients of VAR(1) model (D1) as of 12/31/2008. The t-
statistics are shown in parenthesis. . . . . . . . . . . . . . . . . . . . 86
E2 Fitted covariance matrix Σ of the VAR(1) error term εt as of 12/31/2008. 87
List of Figures
2.1 Left panel : Annual default rate, relative to the number of rated names
at the beginning of a year, in the universe of Moody’s rated corporate
issuers in any year between 1970 and 2008, as of 11/7/2008. Source:
Moody’s Default Risk Service. Right panel : Mean annual default rate
for the CDX.HY6 portfolio for I = 10K re-sampling scenarios. . . . . 17
2.2 Left panel : Sample path of the intensity (left scale, solid) and de-
fault process (right scale, dotted) for the adaptive model (2.10) with
θ = (0.25, 0.05, 0.4, 0.8, 2.5), values that are motivated by our estima-
tion results in Section 2.3.4. The reversion level and speed change at
each event. Algorithm 3 is used to generate the paths. Right panel :
Sample path of the intensity and default process for the Hawkes model
dλ_t = κ(c − λ_t) dt + δ dU_t, where the jump magnitudes u_n = 1 so that U = N,
λ_0 = 2.5, and κ = 2.5 and c = δ = 1.5 are chosen so that the
expected number of events over 10 years matches that of the model
(2.10), roughly 37. Note that the Hawkes intensity cannot fall below
the global reversion level c. . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3 Left panel : Fitted mean portfolio intensity λ̄ = I^{-1} ∑_{i=1}^{I} λ_θ(ω_i) vs.
[5%, 95%] percentiles (boxes), [1%, 99%] percentiles (whiskers) and the
mean number of portfolio defaults in any given year between 1970 and
2008, based on I = 10K re-sampling scenarios for the CDX.HY6. Right
panel : Fitted economy-wide intensity λ/Z∗ vs. economy-wide defaults
between 1970 and 2008, semi-annually. The fitted intensity matches
the time-series fluctuation of observed economy-wide default rates. . . 26
2.4 In-sample fitness tests: empirical distribution of time-scaled, economy-
wide inter-arrival times generated by the fitted (mean) paths of Z∗ and
λ, based on I = 10K re-sampling scenarios for the CDX.HY6. Left
panel : Empirical quantiles of time-scaled inter-arrival times vs. theo-
retical standard exponential quantiles. Right panel : Empirical distri-
bution function (solid) of time-scaled inter-arrival times vs. theoretical
standard exponential distribution function (dotted) along with 1% and
5% bands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5 Out-of-sample test of loss forecasts for a test portfolio with the same
rating composition as the CDX.HY6. We show the [25%, 75%] per-
centiles (box), the [15%, 85%] percentiles (whiskers) and the median
(horizontal line) of the fitted conditional distribution of the incremen-
tal portfolio loss L′τ+1 − L′τ given Fτ for τ varying annually between
1/1/1996 and 1/1/2007, estimated from 100K paths generated by Al-
gorithms 3 and 2, and the realized portfolio loss during [τ, τ + 1]. Left
panel : The test portfolio is selected at the beginning of each period.
Right panel : The test portfolio is selected in 1996. . . . . . . . . . . . 29
2.6 Kernel smoothed conditional distribution of the normalized 5 year cu-
mulative tranche loss for the CDX.HY6 on 3/27/2006, the Series 6
contract inception date, for each of several standard attachment point
pairs. Here, H represents the maturity date 6/27/2011. Left panel :
Actual distribution, estimated using the model and fitting methodol-
ogy developed in Sections 2.2 and 2.3, based on I = 10K re-sampling
scenarios for N , and 100K replications. Right panel : Risk-neutral dis-
tribution, estimated from the market prices paid for 5 year tranche
protection on 3/27/2006 using the method explained in Appendix A,
based on 100K replications. . . . . . . . . . . . . . . . . . . . . . . . 31
2.7 Kernel smoothed conditional distribution of the normalized cumulative
portfolio loss L′H/C for the CDX.HY6 on 3/27/2006 for horizons H
varying between 3/27/2009 and 3/27/2016. Left panel : Actual distri-
bution, estimated using the model and fitting methodology developed
in Sections 2.2 and 2.3, based on I = 10K re-sampling scenarios for
N , and 100K replications. Right panel : Risk-neutral distribution, es-
timated from the market prices paid for 5 year tranche protection on
3/27/2006 using the method explained in Appendix A, based on 100K
replications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.8 Kernel smoothed conditional distribution on 3/27/2006 of the normal-
ized cumulative loss (UH(0, 0.1)−UH(0.1, 0.15))/0.1C associated with
selling equity protection and buying mezzanine protection with match-
ing notional and maturity, along with the distribution of the normalized
cumulative equity loss UH(0, 0.1)/0.1C, for the CDX.HY6. . . . . . . 34
2.9 Cash flow “waterfall” of a sample cash CDO. . . . . . . . . . . . . . . 35
2.10 Kernel smoothed conditional distribution on 3/27/2006 of the normalized
discounted loss U_{jτ}(H, p_j, v*, c*_1, c*_2)/V_{jτ}(H, p_j, v*, c*_1, c*_2) for a
5 year cash CDO referenced on the CDX.HY6, whose maturity date H
is 6/27/2011. The initial tranche principals are p1 = 85 for the senior
tranche, p2 = 10 for the junior tranche, and p3 = 5 for the residual eq-
uity tranche. We apply the model and fitting methodology developed
in Sections 2.2 and 2.3, based on I = 10K re-sampling scenarios for
N , and 100K replications. Left panel : Uniform prioritization scheme.
Right panel : Fast prioritization scheme. . . . . . . . . . . . . . . . . . 39
2.11 Kernel smoothed conditional distribution on 3/27/2006 of the normalized
discounted profit and loss (V_{3τ}(H, 5, v*, c*_1, c*_2) − 5)/5 for the residual
equity tranche of a 5 year cash CDO referenced on the CDX.HY6,
whose maturity date H is 6/27/2011. We apply the model and fitting
methodology developed in Sections 2.2 and 2.3, based on I = 10K
re-sampling scenarios for N , and 100K replications. . . . . . . . . . . 40
2.12 Kernel smoothed conditional distribution on 3/27/2006 of the normalized
discounted profit and loss (V_{jτ}(H, p_j, v*, c*_1, c*_2) − p_j)/p_j for the debt
tranches of a 5 year cash CDO referenced on the CDX.HY6, whose ma-
turity date H is 6/27/2011. We apply the model and fitting method-
ology developed in Sections 2.2 and 2.3, based on I = 10K re-sampling
scenarios for N , and 100K replications. Left panel : Senior tranche with
principal p1 = 85. Right panel : Junior tranche with principal p2 = 10. 41
2.13 99.5% value at risk VaR_{1τ}(0.995, H, 85, v*, c*_1, c*_2) for the 5 year senior
tranche of a cash CDO referenced on the CDX.HY6 with maturity date
6/27/2011, for τ varying weekly between 3/27/2006 and 11/7/2008,
based on I = 10K re-sampling scenarios for N , and 100K replications.
Left panel : Uniform prioritization scheme. Right panel : Fast prioriti-
zation scheme. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.1 Value at risk Vt(α, T ) at level α for a given horizon T . . . . . . . . . . 51
3.2 Default timing and volume data. Left panel : 1-year economy-wide
default rate in the universe of Moody’s rated issuers. Right panel : 1-
year system-wide default rate. The defaults of Lehman Brothers and
Washington Mutual contributed to over 80% of the system-wide default
volume in 2008. Source: Moody’s Default Risk Service. . . . . . . . . 58
3.3 Time-series of explanatory covariates. Left panel : The 1-year lagged
slope of yield curve and the default spread, given by the difference
between Moody’s seasoned Baa-rated and Aaa-rated corporate bond
yields. Right panel : The trailing 1-year returns on the S&P500 index
and the banking and FIRE portfolios. . . . . . . . . . . . . . . . . . . 59
3.4 TED spreads during the sample period, along with significant events. 60
3.5 Fitted economy-wide intensity λ∗. Left panel : Yearly defaults and
fitted intensity. Right panel : Intensity decomposition: fitted baseline
hazard vs. fitted spillover hazard. . . . . . . . . . . . . . . . . . . . . 63
3.6 Left panel : Observed binary response variables Yn and fitted process
Z. Right panel : Power curve for the fitted process Z. . . . . . . . . . 67
3.7 Left panel : Fitted system-wide failure intensity λ, based on the pa-
rameter estimates reported in Tables 3.1 and 3.3. Right panel : Fitted
fraction of λ tied to the spillover hazard term, with default events
indicated. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.8 Fitted conditional distribution (kernel-smoothed) of the system-wide
6-month default rate Dt(t+ 0.5) for conditioning times t varying semi-
annually between 12/31/1997 and 12/31/2008. . . . . . . . . . . . . . 69
3.9 Fitted conditional distribution (kernel-smoothed) of the economy-wide
6-month default rate for conditioning times t varying semi-annually
between 12/31/1997 and 12/31/2008. The default rate is obtained by
normalizing the number of economy-wide defaults during (t, t+ 0.5] by
the total number of firms in the economy at t. . . . . . . . . . . . . . 70
3.10 Left Panel: Fitted value at risk Vt(α, t + 0.5) of the system-wide
default rate, for conditioning times t varying semi-annually between
12/31/1997 and 12/31/2008, versus realized default rate. Right Panel:
Fitted value at risk of the economy-wide default rate versus realized
default rate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.11 Left Panel: Term structure of systemic risk on 12/31/2008: fitted value
at risk Vt(α, t + ∆) on 12/31/2008 as a function of ∆. Right Panel:
Fitted conditional probability at t of no failures in the financial system
during (t, t + ∆], for conditioning times t varying quarterly between
12/31/1997 and 12/31/2008, for each of several horizons ∆. . . . . . . 72
3.12 Impact of a default on systemic risk. Left Panel: Absolute change
∆Vt(0.95, t + 1) for conditioning times t varying quarterly between
12/31/1997 and 12/31/2008. Right Panel: Term structure of value at
risk Vt(0.95, t+ ∆) on 12/31/2008, for different scenarios. . . . . . . . 75
D1 Realized vs. VAR(1) predicted time series (monthly) of covariate com-
ponents. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
E1 Left panel : Empirical default volume distribution vs. fitted generalized
Pareto distribution as of 12/31/2008. Right panel : Empirical quantiles
of the observed default volumes vs. quantiles of realizations of variables
from the fitted Pareto distribution. . . . . . . . . . . . . . . . . . . . 88
Chapter 1
Introduction
Credit markets in the U.S. have seen a tendency for defaults to cluster due to
correlation among firms. This correlation arises either from firm-specific sensitivity
to common risk factors or from the feedback of individual events to the aggregate level.
For example, the recent subprime mortgage crisis, which grew into a global
economic catastrophe, originated in a dramatic increase in defaults and foreclosures
on subprime loans. Even after the global financial crisis, which was sparked by the
collapse of Lehman Brothers in September 2008, correlated default risk remains
difficult to measure and its sources are poorly understood, since financial innovations
have enabled risk transfers that were not fully recognized by financial regulators
and institutions.
The first part of the dissertation considers the financial problem of credit investors
who are exposed to correlated default risk. The market for credit derivatives has
experienced impressive growth, and one of the most important developments was
the introduction of the collateralized debt obligation (CDO), an asset-backed,
structured credit product whose underlying collateral is typically a portfolio
of defaultable corporate bonds, sovereign bonds, or bank loans. These claims,
in turn, are prioritized into tranches with different levels of seniority, allowing
risk profiles to be repackaged and customized to investors' risk-return
requirements. However, insufficient capital provisioning based on inaccurate
measures of risk transfers, due to flawed and overly optimistic risk assessments, is at
the center of the recent financial crisis. To circumvent the limitations of existing
methodologies, we propose new stochastic methods to measure the risk of positions
in collateralized debt obligations and related instruments tied to an underlying port-
folio of defaultable assets under the actual probability measure. Our methodology
provides sophisticated yet practical tools allowing investors to quantify the exposure
associated with their risk positions, and to accurately estimate the amount of risk
capital required to support a position.
The second part of the dissertation extends the scope of ‘portfolio’ and focuses
on the assessment of financial sector-wide systemic risk due to direct and indirect
systemic linkages, including those to the broader economy. An understanding of sys-
temic risk and its sources is of paramount importance, since the cascading failure of
banks and other financial institutions can deprive society of capital and dramatically
increase its cost, which is the most serious consequence of a systemic failure. However,
the effective management of systemic risk in the financial sector remains a key
challenge for regulators and policy makers, since the channels and linkages through
which local financial disturbances take on systemic characteristics are by nature hard
to predict. At the root of this challenge are chains of failures: interconnectedness
leads to risk spillovers across the whole financial system. For instance, one bank's
default on an obligation to another may impair that other bank's ability to meet
its obligations to yet other banks, and so on
along the financial chain in the banking system. Moreover, the financial system is
composed of intermediaries and the infrastructure of payment, settlement and trading
mechanisms that intertwine banks and non-bank financial institutions. This
complex network of financial connections is extended through the financing needs of
all economic sectors as well. Hence, a desirable systemic risk measure should be able
to accommodate contagion channeled through trade credit or buyer/supplier relation-
ships in the real sector, and derivatives counterparty relations and interbank lending
arrangements in the financial sector. In this context, we propose a methodology that
develops maximum likelihood estimators of the term structure of systemic risk in the
financial sector, defined as the conditional probability of failure of a large number of
financial institutions, potentially as part of a larger cluster of economy-wide defaults.
The estimators are based on a new dynamic hazard model of failure timing that cap-
tures the influence of time-varying macro-economic and sector-specific risk factors
on the likelihood of failures, and the impact of risk spillovers due to contagion or
incomplete information about relevant risk factors. As a modern toolkit for financial
surveillance, the proposed framework provides a metric of potential financial failures
due to both direct and indirect systemic linkages by capturing the statistical implica-
tions of risk spillovers for failure timing without needing to be precise a priori about
the economic mechanisms behind them. It facilitates macro-prudential supervision
of financial institutions by gauging the vulnerability of the financial system at any
particular time, and thus enhances financial surveillance.
Chapter 2
Risk Analysis of Collateralized
Debt Obligations
Collateralized debt obligations, which are securities with payoffs that are tied
to the cash flows in a portfolio of defaultable assets such as corporate bonds, play a
significant role in the financial crisis that has spread throughout the world. Insufficient
capital provisioning due to flawed and overly optimistic risk assessments is at the
center of the problem. This chapter develops stochastic methods to measure the
risk of positions in collateralized debt obligations and related instruments tied to an
underlying portfolio of defaultable assets. It proposes an adaptive point process model
of portfolio default timing, a maximum likelihood method for estimating point process
models that is based on an acceptance/rejection re-sampling scheme, and statistical
tests for model validation. To illustrate these tools, we use them to estimate the
distribution of the profit or loss generated by positions in multiple tranches of a
collateralized debt obligation that references the CDX High Yield portfolio, and the
risk capital required to support these positions.
This chapter is joint work with Kay Giesecke.
2.1 Introduction
The financial crisis highlights the need for a holistic, objective and transparent ap-
proach to accurately measuring the risk of investment positions in portfolio credit
derivatives such as collateralized debt obligations (CDOs). Portfolio credit deriva-
tives are securities whose payoffs are tied, often through complex schemes, to the
cash flows in a portfolio of credit instruments such as corporate bonds, loans, or
mortgages. They facilitate the trading of insurance against the default losses in the
portfolio. An investor providing the insurance is exposed to the default risk in the
portfolio.
There is an extensive literature devoted to the valuation and hedging of portfolio
derivatives.1 The basic valuation problem is to estimate the price of default insurance,
i.e., the arbitrage-free value of the portfolio derivative at contract inception. This
value is given by the expected discounted derivative cash flows relative to a risk-
neutral pricing measure. After inception, the derivative position must be marked to
market; that is, the value of the derivative under current market conditions must
be determined. The basic hedging problem is to estimate the sensitivities of the
derivative value to changes of the default risk of the portfolio constituents. These
sensitivities determine the amount of constituent default insurance to be bought or
sold to neutralize the derivative price fluctuations due to changes of the constituent
risks.
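In generic notation (ours, not notation this chapter has introduced): writing C = (C_t) for the cumulative derivative cash flow, r for the short rate, Q for a risk-neutral pricing measure, and T for the contract maturity, the inception value and the mark-to-market value at a later time τ are, schematically,

```latex
V_0 = \mathbb{E}^{\mathbb{Q}}\!\left[ \int_0^T e^{-\int_0^t r_s \, ds} \, dC_t \right],
\qquad
V_\tau = \mathbb{E}^{\mathbb{Q}}\!\left[ \int_\tau^T e^{-\int_\tau^t r_s \, ds} \, dC_t \,\middle|\, \mathcal{F}_\tau \right],
```

where F_τ denotes the information available at time τ. The hedging sensitivities discussed above are then derivatives of V_τ with respect to the default risk of the portfolio constituents.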
The valuation and hedging problems are distinct from the risk analysis problem,
which is to measure the exposure of the derivative investor, who provides default
insurance, to potential payments due to defaults in the portfolio. More precisely, the
goal is to estimate the distribution of the investor’s cumulative cash flows over the
life of the contract. The distribution is taken under the actual measure describing
the empirical likelihood of events, rather than a risk-neutral pricing measure. The
distribution describes the risk/reward profile of a portfolio derivative position, and
1 See Arnsdorf & Halperin (2008), Brigo, Pallavicini & Torresetti (2006), Chen & Glasserman (2008), Cont & Minca (2008), Ding, Giesecke & Tomecek (2009), Duffie & Garleanu (2001), Eckner (2009), Errais, Giesecke & Goldberg (2009), Kou & Peng (2009), Longstaff & Rajan (2008), Lopatin & Misirpashaev (2008), Mortensen (2006), Papageorgiou & Sircar (2007), and many others.
CHAPTER 2. RISK ANALYSIS OF CDOS 6
is the key to risk management applications. For example, it allows the investor or
regulator to determine the amount of risk capital required to support a position.
The financial crisis indicates the significance of these applications and the problems
associated with the traditional rating-based analysis of portfolio credit derivative
positions; see SEC (2008).
The risk analysis problem has been largely ignored in the academic literature.
This chapter provides stochastic methods to address this problem. It makes several
contributions. First, it develops a maximum likelihood approach to estimating point
process models of portfolio default timing from historical default experience. Second,
it devises statistical tests to validate a fitted model. Third, it formulates, fits and
tests an adaptive point process model of portfolio default timing, and demonstrates
the utility of the estimation and validation methods on this model. Fourth, it pro-
vides an exact simulation algorithm for the adaptive model, and uses it to address
important risk management applications. For example, we estimate profit and loss
distributions for positions in multiple tranches of a CDO. These distributions quan-
tify and differentiate the risk exposure of alternative investment positions, and the
impact of complex contract features. They are preferable to agency ratings, which
are often based on the first moment only.
Estimating a stochastic point process model of portfolio default timing under the
actual probability measure presents unique challenges. Most importantly, inference
must be based on historical default timing data, rather than market derivative pricing
data. However, the default history of the reference portfolio underlying the credit
derivative is often unavailable, so direct inference is usually not feasible. We confront
this difficulty by developing an acceptance/rejection re-sampling scheme that allows
us to generate alternative portfolio default histories from the available economy-wide
default timing data. These histories are then used to construct maximum likelihood
estimators for a portfolio point process model that is specified in terms of an intensity
process. A time-scaling argument leads to testable hypotheses for the fit of a model.
The re-sampling approach is predicated on a top-down formulation of the portfolio
point process model. This formulation has become popular in the credit derivatives
pricing literature. Here, the point process intensity is specified without reference to
the portfolio constituents. In our risk analysis setting, the combination of top-down
formulation and re-sampling based inference leads to low-dimensional estimation, val-
idation and prediction problems that are highly tractable and fast to address even
for the large portfolios that are common in practice. Alternative bottom-up formu-
lations in Das, Duffie, Kapadia & Saita (2007a), Delloye, Fermanian & Sbai (2006)
and Duffie, Eckner, Horel & Saita (2009) require the specification and estimation of
default timing models for the individual portfolio constituent securities. They allow
one to incorporate firm-specific data into the estimation, but lead to high-dimensional
computational problems.
To demonstrate the effectiveness of the re-sampling approach and the appropri-
ateness of the top-down formulation, we develop and fit an adaptive intensity model.
This model extends the classical Hawkes (1971) model by including a state-dependent
drift coefficient in the intensity dynamics. The state-dependent drift involves a re-
version level and speed that are proportional to the intensity at the previous event.
While this specification is as tractable as the Hawkes model, it avoids the constraints
imposed by the constant Hawkes reversion level and speed. This helps to better fit
the regime-dependent behavior of empirical default rates. In- and out-of-sample tests
show that our adaptive model captures the clustering in the default arrival data. The
tests indicate that a parsimonious top-down model formulation is also statistically
appropriate.
The rest of this chapter is organized as follows. Section 2.2 develops the re-
sampling approach to point process model estimation and validation. Section 2.3
formulates and analyzes the adaptive point process model, and uses the re-sampling
approach to fit it. Section 2.4 applies the fitted model to the risk analysis of synthetic
CDOs, while Section 2.5 analyzes the risk of cash CDOs. Section 2.6 concludes. There
are several appendices.
2.2 Re-sampling based inference
This section develops a re-sampling approach to estimating a stochastic point process
model of default timing. It also provides a method to validate the estimators.
2.2.1 Preliminaries and problem
The uncertainty in the economy is modeled by a complete probability space (Ω,F , P ),
where P is the actual (statistical) probability measure. The information flow of
investors is described by a right-continuous and complete filtration F = (Ft)t≥0.
Consider a non-explosive counting process N with event stopping times 0 < T_1 < T_2 < ⋯. The T_n represent the ordered default times in a reference portfolio of firms, with the convention that a defaulted name is replaced with a name that has the same characteristics as the defaulter. A portfolio credit derivative is a security with cash flows that depend on the financial loss due to default in the reference portfolio.
Suppose N has a strictly positive intensity λ such that ∫_0^t λ_s ds < ∞ almost surely. The intensity represents the conditional mean default rate in the sense that E(N_{t+Δ} − N_t | F_t) ≈ λ_t Δ for small Δ > 0. This means that N − ∫_0^· λ_s ds is a local martingale relative to P and F. The process followed by λ determines the distribution of N. It is the modeling primitive, and is specified without reference to the constituent firms.
Our goal is to estimate a given model of λ. If the data consist of a path of N ,
then this is a classical statistical problem; see Ogata (1978a), for example. However,
a path of N is rarely available, so direct inference is typically not feasible. Instead,
the data consist of a path of the economy-wide default process N∗ generated by the
default stopping times 0 < T*_1 < T*_2 < ⋯ in the universe of names. We develop a
re-sampling approach to estimating λ from the realization of N∗. The basic idea is to
generate alternative portfolio default histories from N∗, and to estimate λ from these
histories.
2.2.2 Acceptance/rejection re-sampling
We propose to generate paths of N from the realization of N∗ by acceptance/rejection
sampling. Here, we randomly select an event time of N∗ as an event time of N
with a certain conditional probability. The following basic observation provides the
foundation of this mechanism, and the justification of our estimation approach. It
also facilitates the design of tests to evaluate the approach.
Proposition 2.2.1. Let Z* be a predictable process with values in [0, 1]. Select an economy-wide event time T*_n with probability Z*_{T*_n}. If the economy-wide default process N* has intensity λ*, then the counting process of the selected times has intensity Z*λ*.
Proof. Let (U*_n) be a sequence of standard uniform random variables that are independent of one another and independent of the (T*_n). Let π(du, dt) be the random counting measure with mark space [0, 1] associated with the marked point process (T*_n, U*_n). Note that N*_t = ∫_0^t ∫_0^1 π(du, ds), and that π(du, dt) has intensity λ^π(u, t) = λ*_{t−} for u ∈ [0, 1], which we choose to be predictable. This means that the process defined by ∫_0^t ∫_0^1 (π(du, ds) − λ^π(u, s) du ds) is a local martingale. Now for u ∈ [0, 1] define the process I(u, ·) by I(u, t) = 1_{u ≤ Z*_t}. Then the counting process N generated by the selected event times can be written as

N_t = Σ_{n≥1} I(U*_n, T*_n) 1_{T*_n ≤ t} = ∫_0^t ∫_0^1 I(u, s) π(du, ds).

Since Z* is predictable, I(u, ·) is predictable for u ∈ [0, 1]. The process I(u, ·) is also bounded for u ∈ [0, 1]. Therefore, a local martingale M is defined by

M_t = N_t − ∫_0^t ∫_0^1 I(u, s) λ^π(u, s) du ds.

But the definitions of I(u, s) and λ^π(u, s) yield

M_t = N_t − ∫_0^t ∫_0^{Z*_s} λ*_{s−} du ds = N_t − ∫_0^t Z*_s λ*_s ds,

which means that N has intensity Z*λ*.
Proposition 2.2.1 states that the counting process obtained by thinning N∗ ac-
cording to Z∗ has intensity given by the product of the thinning process Z∗ and the
intensity of N∗. The specification of Z∗ is subject to a mild predictability condition:
the selection probability at T ∗n can only depend on information accumulated up to but
not including time T ∗n . Proposition 2.2.1 is related to the construction of a marked
point process from a Poisson random measure through a state-dependent thinning mechanism in Proposition 3.1 of Glasserman & Merener (2003). It is a generalization of the classical, state-independent thinning scheme for the Monte Carlo simulation of time-inhomogeneous Poisson processes proposed by Lewis & Shedler (1979).

Algorithm 1 (Acceptance/Rejection Re-Sampling). Generating a sample path of the portfolio default process N from the realization of the economy-wide default process N* over the sample period [0, τ].

1: Initialize m ← 0.
2: for n = 1 to N*_τ do
3:    Draw u ∼ U(0, 1).
4:    if u ≤ Z*_{T*_n} then
5:       Assign T_{m+1} ← T*_n and update m ← m + 1.
6:    end if
7: end for
We generate paths of the portfolio default process N by thinning N* with a fixed process Z*; see Algorithm 1. Each of these paths represents an alternative event
sequence, or portfolio default history, assuming a defaulter is replaced with a name
that has the same characteristics as the defaulter. Proposition 2.2.1 implies that the
events in each alternative sequence arrive with intensity λ = Z∗λ∗. This justifies the
estimation of λ from the alternative paths of the portfolio default process N .
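Algorithm 1 is straightforward to implement. The following minimal Python sketch generates one re-sampling scenario; the `thinning_prob` callback stands in for the fitted process Z* and the constant value used in the example is purely illustrative, not a fitted quantity.

```python
import random

def resample_portfolio_path(economy_times, thinning_prob, seed=None):
    """Algorithm 1: accept each economy-wide default time T*_n as a
    portfolio default time with probability Z*_{T*_n}."""
    rng = random.Random(seed)
    portfolio_times = []
    for t_star in economy_times:
        if rng.random() <= thinning_prob(t_star):  # accept with prob. Z*
            portfolio_times.append(t_star)
    return portfolio_times

# One scenario with a constant (illustrative) thinning probability of 0.1:
economy = [0.02 * n for n in range(1, 2001)]
scenario = resample_portfolio_path(economy, lambda t: 0.1, seed=42)
```

Repeating the draw I times with independent seeds yields the collection of alternative portfolio default histories used for estimation below.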
It is important to note that this estimation approach does not require the speci-
fication of a model for the economy-wide intensity λ∗. We only need to formulate a
model for the thinning process Z∗, to which we turn next.
2.2.3 Thinning process specification
The random measure argument behind Proposition 3.1 in Giesecke, Goldberg & Ding
(2009) can be adapted to show that Z∗ takes the form
Z*_t(ω) = lim_{ε→0} E(N_{t+ε} − N_t | F_t) / E(N*_{t+ε} − N*_t | F_t) (ω)   (2.1)

in all those points (ω, t) ∈ Ω × (0, ∞] where the limit exists.2 The quotient on the
right side of equation (2.1), which is taken to be zero when the denominator vanishes,
represents the conditional probability at time t that the next defaulter is a reference
name, given that a default occurs in the economy by time t+ ε.
We specify Z∗ nonparametrically, guided by formula (2.1). Intuitively, Z∗ must
reflect the relation between the issuer composition of the economy and the issuer
composition of the reference portfolio. We propose to describe the issuer composition
in terms of the credit ratings of the respective constituent issuers. In this case, N∗ is
identified with the default process in the universe of rated issuers.
The use of ratings does not limit the applicability of our specification, because the
reference portfolios of most derivatives consist of rated names only. Moreover, the
rating agencies maintain extensive and accessible databases that record credit events
including defaults in the universe of rated names, and further attributes associated
with these events, such as recovery rates. The agencies have maintained a high firm
coverage ratio throughout the sectors of the economy, and therefore the universe of
rated names is a reasonable representation of the firms in the economy.
Let [0, τ ] be the sample period, with τ denoting the (current) analysis time. Let
R be the set of rating categories, Xτ (ρ) be the number at time τ of reference firms
with rating ρ ∈ R, and X∗t (ρ) be the number at time t ∈ [0, τ ] of ρ-rated firms in the
universe of rated names. The number of firms X∗(ρ) is an adapted process. It varies
through time because issuers enter and exit the universe of rated names. Exits can be
due to mergers or privatizations, for example. We assume that the thinning process Z* is equal to the predictable projection of the process defined for times 0 < t ≤ τ by3

X_τ(ρ*_{N*_{t−}+1}) / X*_{t−}(ρ*_{N*_{t−}+1}) · 1_{X*_{t−}(ρ*_{N*_{t−}+1}) > 0}   (2.2)
where ρ*_n ∈ F_{T*_n} is the rating at the time of default of the nth defaulter in the economy.4 We require that X_τ(ρ) ≤ X*_t(ρ) for all t ≤ τ and ρ ∈ R. Since X_τ(ρ) is a fixed integer prescribed by the portfolio composition and X*_{t−}(ρ) is predictable for fixed ρ,

Z*_t = Σ_{ρ ∈ R*_{t−}} (X_τ(ρ) / X*_{t−}(ρ)) P(ρ*_{N*_{t−}+1} = ρ | F_{t−})   (2.3)

almost surely, where R*_t is the set of rating categories ρ ∈ R for which X*_t(ρ) > 0. Formula (2.3) suggests interpreting the value Z*_t as the conditional "empirical" probability that the next defaulter is a reference name. This conditional probability respects the ratings of the reference names, as P(ρ*_{N*_{t−}+1} = ρ | F_{t−}) is the conditional probability that the next defaulter has rating ρ. Our estimator ν*_t(ρ) of this latter conditional probability is based on the ratings of the defaulters in [0, t). For ρ ∈ R, it is given by

ν*_t(ρ) = [ ( Σ_{n=1}^{N*_{t−}} 1_{ρ*_n = ρ} + α ) / ( Σ_{n=1}^{N*_{t−}} 1_{ρ*_n ∈ R*_{t−}} + α |R*_{t−}| ) ] · 1_{ρ ∈ R*_{t−}}   (2.4)

where α ∈ (0, 1] is an additive smoothing parameter guaranteeing that ν*_t(ρ) is well-defined for t < T*_1. For α = 0, equation (2.4) defines the empirical rating distribution, which treats the observations ρ*_1, …, ρ*_{N*_{t−}} as independent samples from a common distribution and ignores all other information contained in F_{t−}. Our implementation assumes α = 0.5, a value that can be justified on Bayesian grounds; see Box & Tiao (1992, pages 34–36).5

2The limit in (2.1) exists and is equal to Z*_t(ω) almost surely with respect to a certain measure on the product space Ω × (0, ∞]. See Giesecke et al. (2009) for more details.
3Here and below, if Y is a right-continuous process with left limits, Y_{t−} = lim_{s↑t} Y_s.
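A direct transcription of the estimator (2.4) is short. The sketch below takes the ratings of defaulters observed so far and the currently active rating categories as plain Python lists; the inputs in the example are hypothetical, not the Moody's data used later in the chapter.

```python
def smoothed_rating_dist(defaulter_ratings, active_categories, alpha=0.5):
    """Additive-smoothing estimator (2.4) of the conditional probability
    that the next defaulter carries a given rating; for ratings outside
    the active set the estimator is zero, per the indicator in (2.4)."""
    counts = {rho: 0 for rho in active_categories}
    hits = 0  # past defaulters whose rating is in the active set
    for r in defaulter_ratings:
        if r in counts:
            counts[r] += 1
            hits += 1
    denom = hits + alpha * len(active_categories)
    return {rho: (counts[rho] + alpha) / denom for rho in active_categories}

# Before the first default the estimator is uniform over active categories:
prior = smoothed_rating_dist([], ["Ba", "B", "C"])
# After observing defaulters rated B, B, C:
posterior = smoothed_rating_dist(["B", "B", "C"], ["Ba", "B", "C"])
```

With α = 0.5 the empty-history case returns the uniform distribution, which is exactly the role of the smoothing term for t < T*_1.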
2.2.4 Likelihood estimators and fitness tests
Based on a collection of re-sampling paths {N(ω_i) : i ≤ I} of the portfolio default process N generated by Algorithm 1, we estimate a model λ = λ^θ, where θ ∈ Θ is a parameter vector and Θ is the set of admissible parameters. The paths of N induce intensity paths {λ^θ(ω_i) : i ≤ I, θ ∈ Θ}. We fit θ by solving the log-likelihood problem6

sup_{θ∈Θ} ∫_0^τ Σ_{i=1}^{I} ( log λ^θ_{s−}(ω_i) dN_s(ω_i) − λ^θ_s(ω_i) ds ).   (2.5)

4Taking Z* to be the predictable projection of the process given by formula (2.2) guarantees that Z* is defined up to indistinguishability; see Dellacherie & Meyer (1982).
5This choice makes (2.4) the so-called expected likelihood estimator. A sensitivity analysis indicates that the model parameter estimates reported in Table 2.1 are robust with respect to variations of α.
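The objective in (2.5) is easy to evaluate numerically for a candidate θ. A minimal sketch follows, with the intensity passed in as a function of time and of the path's event history; the trapezoid grid size is an arbitrary choice, and the constant-intensity check at the end is only a sanity test, not the adaptive model.

```python
import math

def pooled_loglik(paths, tau, intensity, n_grid=1000):
    """Pooled log-likelihood (2.5): for each re-sampling path, sum the
    log-intensities at the event times (left limits in the exact model)
    and subtract the integrated intensity over [0, tau] (trapezoid rule)."""
    ll = 0.0
    h = tau / n_grid
    for events in paths:
        ll += sum(math.log(intensity(t, events)) for t in events)
        vals = [intensity(i * h, events) for i in range(n_grid + 1)]
        ll -= h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return ll

# Sanity check against the closed form for a constant intensity theta:
# log-likelihood = (number of events) * log(theta) - theta * tau.
ll = pooled_loglik([[0.2, 0.5, 0.9]], tau=1.0, intensity=lambda t, ev: 2.0)
```

Maximizing this objective over θ with any standard optimizer yields the estimator whose asymptotics are discussed in footnote 6.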
The adequacy of the fitted intensity as a model of portfolio defaults depends on
the effectiveness of the re-sampling procedure and the appropriateness of the para-
metric intensity specification. More precisely, it depends on how well the alternative
re-sampling scenarios generated by Z∗ capture the actual default clustering in the
reference portfolio, and how well the fitted model for λ replicates these clusters. We
require statistical tests to assess this. The following result allows us to design such
tests.
Proposition 2.2.2. Suppose that Z*_t > 0, almost surely. Then the economy-wide default process N* is a standard Poisson process under a change of time defined by

A*_t = ∫_0^t (1/Z*_s) λ_s ds.   (2.6)
Proof. Proposition 2.2.1 implies that the compensator A = ∫_0^· λ_s ds to N is absolutely continuous with respect to the compensator A* to N*, with density Z*. Because Z*_t > 0 almost surely, A* is also absolutely continuous with respect to A, with density 1/Z*. It follows that A* can be written as

A*_t = ∫_0^t (1/Z*_s) dA_s = ∫_0^t (1/Z*_s) λ_s ds.

The time change theorem of Meyer (1971) implies the result, since A* is continuous and increases to ∞, almost surely.
Proposition 2.2.2 expresses the compensator A* to N* in terms of Z* and λ, and states that this compensator can be used to time-scale N* into a standard Poisson process. We evaluate the joint specification of Z* and λ by testing whether the fitted (mean) paths of these processes generate a realization of A* that time-scales the observed T*_n into a standard Poisson sequence. The Poisson property can be tested with a battery of tests.

6The asymptotic properties of the maximum likelihood estimator of θ are developed by Ogata (1978a). The estimator is shown to be consistent, asymptotically normal, and efficient.
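One concrete test follows directly from Proposition 2.2.2: transform the observed event times by the fitted compensator and compare the resulting gaps against a standard exponential, for instance with a Kolmogorov–Smirnov statistic. A sketch, with the compensator passed as a callable (any tabulated realization of A* would do):

```python
import math

def time_scaled_gaps(event_times, compensator):
    """Map T*_n to A*(T*_n); under a correct joint specification of Z*
    and lambda the gaps are i.i.d. standard exponential (Prop. 2.2.2)."""
    scaled = [compensator(t) for t in event_times]
    return [b - a for a, b in zip([0.0] + scaled[:-1], scaled)]

def ks_stat_exp1(gaps):
    """One-sample Kolmogorov-Smirnov statistic against the Exp(1) cdf."""
    xs = sorted(gaps)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = 1.0 - math.exp(-x)  # Exp(1) cdf
        d = max(d, (i + 1) / n - f, f - i / n)
    return d

gaps = time_scaled_gaps([1.0, 3.0, 6.0], lambda t: t)  # identity compensator
```

The statistic is then compared with the usual KS critical values; large values reject the joint specification of Z* and λ.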
2.2.5 Portfolios without replacement
The intensity model λ estimated from the re-sampling scenarios generated by Algo-
rithm 1 is based on a portfolio with replacement of defaulters. This is without loss of
generality because we can extend the reach of the fitted model to portfolios without
replacement.
Consider event times Tn of N over some interval [τ,H], where H > τ . The event
times T ′n of the portfolio default process without replacement, N ′, can be obtained
from the Tn by removing event times due to replaced defaulters. This is done by
thinning. To formalize this, let Xt(ρ) be the number of ρ-rated reference names at
time t ≥ τ , assuming defaulters are not replaced. Thus, for fixed ρ, Xt(ρ) ≤ Xτ (ρ)
almost surely for every t ≥ τ . For fixed ρ, the process X(ρ) decreases and vanishes
when all ρ-rated reference names are in default. It suffices to specify Z, the thinning
process for N , at the event times Tn ≥ τ of N . Motivated by formula (2.3), we
suppose that
Z_{T_n} = Σ_{ρ ∈ R_τ} (X_{T_n−}(ρ) / X_τ(ρ)) P(ρ_n = ρ | F_{T_n−})   (2.7)
where R_τ is the set of rating categories ρ ∈ R for which X_τ(ρ) > 0, and ρ_n is the rating of the firm defaulting at T_n. Note that the thinning probability (2.7) vanishes when all firms in the portfolio are in default. We estimate the conditional distribution P(ρ_n = · | F_{T_n−}) by the smoothed empirical distribution ν of the ratings of the defaulters in the re-sampling scenarios {N(ω_i) : i ≤ I}, where

ν(ρ) = [ ( Σ_{i=1}^{I} Σ_{n=1}^{N_τ(ω_i)} 1_{ρ_n(ω_i) = ρ} + α ) / ( Σ_{i=1}^{I} Σ_{n=1}^{N_τ(ω_i)} 1_{ρ_n(ω_i) ∈ R_τ} + α |R_τ| ) ] · 1_{ρ ∈ R_τ}   (2.8)

for an additive smoothing parameter α ∈ [0, 1]; see formula (2.4). This estimator treats the observations ρ_n(ω_i) of all paths ω_i as independent samples from a common distribution. Algorithm 2 summarizes the steps required to generate N′ from N.

Algorithm 2 (Replacement Thinning). Generating a sample path of the portfolio default process N′ without replacement from a path of the portfolio default process N with replacement over [τ, H], for a horizon H > τ.

1: Initialize m ← 0, define T′_0 = τ, and set Y(ρ) ← X_τ(ρ) for all ρ ∈ R.
2: for n = N_τ + 1 to N_H do
3:    Draw u ∼ U(0, 1).
4:    if u ≤ Z_{T_n} then
5:       Assign T′_{m+1} ← T_n and update m ← m + 1.
6:       Draw ρ_m ∼ ν′, where

            ν′(ρ) = (Y(ρ)/X_τ(ρ)) ν(ρ) / Z_{T_n},   ρ ∈ R.   (2.9)

7:       Update Y(ρ_m) ← Y(ρ_m) − 1.
8:    end if
9: end for
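A simplified sketch of Algorithm 2 in code: the portfolio composition and the smoothed rating distribution (2.8) are passed as plain dictionaries, and the toy single-rating inputs at the end are hypothetical, chosen only so the thinning probability (2.7) is easy to check by hand.

```python
import random

def thin_to_no_replacement(event_times, x_tau, nu, seed=None):
    """Algorithm 2 sketch: thin a with-replacement default path into a
    without-replacement path. `x_tau` maps rating -> initial count in
    the portfolio; `nu` is the smoothed rating distribution (2.8)."""
    rng = random.Random(seed)
    remaining = dict(x_tau)
    kept = []
    for t in event_times:
        # Thinning probability (2.7): sum over ratings of
        # (remaining names / initial names) * nu(rho).
        z = sum(remaining[r] / x_tau[r] * nu[r] for r in x_tau)
        if z > 0 and rng.random() <= z:
            # Draw the defaulter's rating from nu' in (2.9).
            ratings = list(x_tau)
            weights = [remaining[r] / x_tau[r] * nu[r] / z for r in ratings]
            rho = rng.choices(ratings, weights=weights)[0]
            remaining[rho] -= 1
            kept.append((t, rho))
    return kept

# Toy portfolio of two B-rated names; every defaulter is rated B:
kept = thin_to_no_replacement([1.0, 2.0, 3.0, 4.0], {"B": 2}, {"B": 1.0}, seed=1)
```

Once both names have defaulted the thinning probability is zero, so the kept path can never contain more events than the portfolio has names.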
2.3 An adaptive intensity model
This section demonstrates the effectiveness of the re-sampling approach. It formu-
lates, fits and evaluates an adaptive intensity model for the CDX High Yield portfolio,
which is a standard portfolio that is referenced by a range of credit derivatives. The
fitted intensity model will be used in Sections 2.4 and 2.5 to analyze the risk of such
derivatives.
2.3.1 Re-sampling scenarios
We begin by examining the re-sampling scenarios of the default process of the CDX
High Yield Series 6 portfolio, or CDX.HY6. We adopt the rating system of Moody’s,
a leading rating agency.7 The reference portfolio consists of 2 Baa, 45 Ba, 39 B and
14 C rated firms. The realization of N∗ comes from Moody’s Default Risk Service,
and covers all Moody’s rated issuers between 1/1/1970 and 11/7/2008. We observe
a total of 1447 defaults.
Figure 2.1 shows the annual economy-wide default rate between 1970 and 2008,
along with the mean default rate for the CDX.HY6 portfolio, obtained from Algorithm
1.8 Portfolio defaults cluster heavily, indicating the positive dependence between the
defaults of reference names. The excessive default rate in 1970 is due to an exceptional
cluster of 24 railway defaults on June 21, 1970. Other major event clusters are related
to the 1987 crash and the burst of the internet bubble around 2001. These clusters
translate into substantial fluctuations of the empirical portfolio default rate that the
model portfolio intensity λ must replicate. This calls for special model features.
7We distinguish issuers in terms of their "senior rating," which is an issuer-level rating generated by Moody's from ratings of particular debt obligations using its Senior Rating Algorithm; see Hamilton (2005). We follow a common convention and subsume the categories Caa and Ca into the category C. We also subsume the numerical sub-categories Aa1, Aa2, Aa3 into the category Aa, and similarly for the other numerical sub-categories. Then the set of rating categories is R = {Aaa, Aa, A, Baa, Ba, B, C, WR}. The category WR indicates a withdrawn rating.
8Each default event in the data base has a time stamp; the resolution is one day. There are dayswith multiple events whose exact timing during the day cannot be established. Thus, the sequenceof raw event dates in the data set is not strictly increasing. Algorithm 1 requires distinct eventtimes, however. To address this issue, we assume that an event date is measured with uniformlydistributed noise. The noise is sampled when the paths of N are generated. We convert a raweconomy-wide event date to a real-valued calendar time equal to 12am on the day of the event,and draw the noise from a uniform distribution on [0, 1/365]. With this choice, the randomizationdoes not alter the original time stamp of an observed event. We have experimented with severalalternative randomization schemes but have found that the model estimation results are insensitiveto the chosen randomization scheme. Further, since the data set allows us to measure the numberof ρ-rated firms in the economy X∗t (ρ) only daily, in formula (2.3) for Z∗t we take X∗t−(ρ) as thenumber of ρ-rated firms on the day prior to the day that contains t.
[Figure: two panels plotting annual default rates (in percent) against calendar years 1970–2008.]

Figure 2.1: Left panel: Annual default rate, relative to the number of rated names at the beginning of a year, in the universe of Moody's rated corporate issuers in any year between 1970 and 2008, as of 11/7/2008. Source: Moody's Default Risk Service. Right panel: Mean annual default rate for the CDX.HY6 portfolio for I = 10K re-sampling scenarios.
2.3.2 Intensity specification
Our specification of λ is informed by the results of the empirical analysis in Azizpour
& Giesecke (2008b), which is based on roughly the same historical default data used
here. Using in- and out-of-sample tests, Azizpour & Giesecke (2008b) found that
prediction of economy-wide default activity based on past default timing outperforms
prediction based on exogenous economic covariates. Intuitively, the timing of past
defaults provides information about the timing of future defaults that is statistically
superior to the information contained in exogenous covariates. Further, if the past
default history is the conditioning information set, then the inclusion of additional
economic covariates does not improve economy-wide default forecast performance.
These empirical findings motivate the formulation of a parsimonious portfolio inten-
sity model whose conditioning information set is given by the past default history.
We assume that λt is a function of the path of N over [0, t]. More specifically, we
propose that λ evolves through time according to the equation

dλ_t = κ_t(c_t − λ_t) dt + dJ_t   (2.10)

where λ_0 > 0 is the initial intensity value, κ_t = κ λ_{T_{N_t}} is the decay rate, c_t = c λ_{T_{N_t}} is the reversion level, and J is a response jump process given by

J_t = Σ_{n≥1} max(γ, δ λ_{T_n−}) 1_{T_n ≤ t}.   (2.11)

The quantities κ > 0, c ∈ (0, 1), δ > 0 and γ ≥ 0 are parameters. We denote by θ the vector (κ, c, δ, γ, λ_0). We give a sufficient condition guaranteeing that N_t < ∞ almost surely, for all t. This condition relates the reversion level parameter c to the parameter δ, which controls the magnitude of a jump of the intensity at an event.
Proposition 2.3.1. If c(1 + δ) < 1, then the counting process N is non-explosive.
Proof. It suffices to show that ∫_0^t λ_s ds < ∞ almost surely for each t > 0. We assume that γ = 0, without loss of generality. For s ≥ 0 and h(0) > 0, let

h(s) = c h(0) + (1 − c) h(0) exp(−κ h(0) s).

The function h describes the behavior of the intensity (2.10) between events. For T_n ≤ t < T_{n+1} and h(0) = λ_{T_n}, we have that λ_t = h(t − T_n). The inverse h^{−1} to h is given by

h^{−1}(u) = (1/(κ h(0))) log( h(0)(1 − c) / (u − c h(0)) )

for c h(0) < u ≤ h(0). Consider the first hitting time

φ(h(0)) = inf{s ≥ 0 : h(s) < h(0)/(1 + δ)}.

For c(1 + δ) < 1, we have

φ(h(0)) = h^{−1}(h(0)/(1 + δ)) = (1/(κ h(0))) log( (1 + δ)(1 − c) / (1 − c(1 + δ)) ).

Let T_0 = 0. The conditional probability given F_{T_n} of the intensity jumping at an event to a value that exceeds the value taken at the previous event is

P(λ_{T_{n+1}} > λ_{T_n} | F_{T_n}) = P(T_{n+1} − T_n < φ(λ_{T_n}) | F_{T_n})
  = 1 − exp( −∫_0^{φ(λ_{T_n})} h(s) ds )
  = 1 − exp( −(c/κ) log( (1 + δ)(1 − c) / (1 − c(1 + δ)) ) − δ/(κ(1 + δ)) ) =: M,

where M is strictly less than 1 and independent of n = 0, 1, 2, … It follows that the unconditional probability P(λ_{T_{n+1}} > λ_{T_n}) = M, for any n. Further, an induction argument can be used to show that for any n and k = 1, 2, …

P(λ_{T_{n+k}} > λ_{T_{n+k−1}} > ⋯ > λ_{T_n}) = M^k.

Now letting

C^k_t = {ω : N_t(ω) ≥ k and λ_{T_{n+1}}(ω) > λ_{T_n}(ω) for n = N_t(ω) − 1, …, N_t(ω) − k}

for t > 0 and k = 1, 2, …, we have that

P(C^k_t) = P(N_t ≥ k and λ_{T_{N_t}} > λ_{T_{N_t−1}} > ⋯ > λ_{T_{N_t−k}})
  ≤ P(λ_{T_{N_t+k}} > λ_{T_{N_t+k−1}} > ⋯ > λ_{T_{N_t}})
  = M^k

independently of t. We then conclude that

P( ∫_0^t λ_s(ω) ds < ∞ ) ≥ P( sup_{s∈[0,t]} λ_s(ω) < ∞ )
  ≥ P( (⋂_{k=1}^∞ C^k_t)^c )   (2.12)
  = 1 − P( ⋂_{k=1}^∞ C^k_t )
  = 1 − lim_{k→∞} P(C^k_t)   (2.13)
  ≥ 1 − lim_{k→∞} M^k
  = 1.

Equation (2.13) is due to the fact that C^k_t ∈ F_t for all k = 1, 2, … and C^1_t ⊇ C^2_t ⊇ ⋯. To justify the inequality (2.12), we argue as follows. For given t > 0 and ω ∈ Ω, define

A_t(ω) = {k ∈ {0, 1, …, N_t(ω) − 1} : λ_{T_{k+1}}(ω) ≥ λ_{T_k}(ω)}.

Let ω ∈ (⋂_{k=1}^∞ C^k_t)^c. Then, by definition of C^k_t, either (i) N_t(ω) < ∞, or (ii) there exists some n < ∞ such that λ_{T_{n+1}}(ω) < λ_{T_n}(ω) and |A_t(ω) \ A_{T_n}(ω)| < ∞.

In case (i), thanks to the condition |A_t(ω)| ≤ N_t(ω) < ∞, each λ_{T_{k+1}}(ω) − λ_{T_k}(ω) < ∞ for all k ∈ A_t(ω). Thus, λ_s(ω) < ∞ for all s ∈ [0, t]. Hence, we conclude that ω ∈ {ω : sup_{s∈[0,t]} λ_s(ω) < ∞} in this case.

Now consider case (ii). Note that for any s ∈ [0, t], the condition c λ_{T_{N_s}}(ω) < λ_s(ω) implies that λ_s(ω) remains at infinity once λ_{T_{N_s}}(ω) achieves infinity. Hence, λ_{T_{n+1}}(ω) < λ_{T_n}(ω) implies that λ_{T_n}(ω) < ∞, and we conclude that sup_{s∈[0,T_n]} λ_s(ω) < ∞. Moreover, due to the condition |A_t(ω) \ A_{T_n}(ω)| < ∞, we conclude that sup_{s∈[T_n,t]} λ_s(ω) < ∞.

Therefore, (⋂_{k=1}^∞ C^k_t)^c ⊆ {ω : sup_{s∈[0,t]} λ_s(ω) < ∞}, and (2.12) is justified.
The intensity (2.10) follows a piece-wise deterministic process with right-continuous sample paths. It jumps at an event time T_n. The jump magnitude is random. It is equal to max(γ, δ λ_{T_n−}), and depends on the intensity just before the event, which itself is a function of the event times T_1, …, T_{n−1}. The minimum jump size is γ. From T_n onwards the intensity reverts exponentially to the level c λ_{T_n}, at rate κ λ_{T_n}. Since the reversion rate and level are proportional to the value of the intensity at the previous event, they depend on the times T_1, …, T_n and change adaptively at each default. For T_n ≤ t < T_{n+1}, the behavior of the intensity is described by the F_{T_n}-measurable function

λ_t = c λ_{T_n} + (1 − c) λ_{T_n} exp(−κ λ_{T_n}(t − T_n)).   (2.14)
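For reference, the deterministic segment (2.14) is a one-line helper in code; the parameter values in the example are the θ from the Figure 2.2 caption, used here only for illustration.

```python
import math

def intensity_between_events(t, t_n, lam_n, kappa, c):
    """Deterministic intensity (2.14) on [T_n, T_{n+1}): exponential decay
    from lam_n toward the adaptive reversion level c*lam_n, at the
    adaptive speed kappa*lam_n."""
    return c * lam_n + (1.0 - c) * lam_n * math.exp(-kappa * lam_n * (t - t_n))

# At the event the intensity equals lam_n; far from it, it approaches c*lam_n:
at_event = intensity_between_events(0.0, 0.0, 2.5, kappa=0.25, c=0.05)
long_run = intensity_between_events(1e6, 0.0, 2.5, kappa=0.25, c=0.05)
```

The two evaluations confirm the boundary behavior: λ_{T_n} at the event itself, and the adaptive level c λ_{T_n} in the long run.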
The dependence of the reversion level c_t, reversion speed κ_t and jump magnitude max(γ, δ λ_{T_n−}) on the path of the counting process N distinguishes our specification (2.10) from the classical Hawkes (1971) model. The Hawkes intensity follows a piece-wise deterministic process dλ_t = κ(c − λ_t) dt + δ dU_t, where U = u_1 + ⋯ + u_N and the jump magnitudes u_n are drawn from a fixed distribution on R_+. This model is more
rigid than our adaptive model (2.10): it imposes a global, state-independent reversion
level c and speed κ, and the magnitude of a jump in the Hawkes intensity is drawn
independently of past event times. Figure 2.2 contrasts the sample paths of (λ,N)
for the Hawkes model with those for our model (2.10). The paths exhibit different
clustering behavior, with the Hawkes model generating a more regular clustering
pattern. While Azizpour & Giesecke (2008a) found that a variant of the Hawkes
model performs well on the economy-wide default data, we had difficulty fitting this
and several other variants of the Hawkes model to the portfolio default times generated
by the re-sampling mechanism. We found the constant reversion level and speed to
be too restrictive. Our adaptive specification (2.10) relaxes these constraints while
preserving parsimony and computational tractability.
The jumps of the intensity process (2.10) are statistically important. They gener-
ate event correlation: an event increases the likelihood of further events in the near
future. This feature facilitates the replication of the event clusters seen in Figure
2.1, a fact that we establish more formally below. The intensity jumps can also be
motivated in economic terms. They represent the impact of a default on the other
firms, which is channeled through the complex web of contractual relationships in the
[Figure: two panels, each plotting an intensity path (left scale) and the cumulative number of defaults (right scale) over a ten-year horizon.]

Figure 2.2: Left panel: Sample path of the intensity (left scale, solid) and default process (right scale, dotted) for the adaptive model (2.10) with θ = (0.25, 0.05, 0.4, 0.8, 2.5), values that are motivated by our estimation results in Section 2.3.4. The reversion level and speed change at each event. Algorithm 3 is used to generate the paths. Right panel: Sample path of the intensity and default process for the Hawkes model dλ_t = κ(c − λ_t) dt + δ dU_t, where the jump magnitudes u_n = 1 so U = N, λ_0 = 2.5, and κ = 2.5 and c = δ = 1.5 are chosen so that the expected number of events over 10 years matches that of the model (2.10), roughly 37. Note that the Hawkes intensity cannot fall below the global reversion level c.
economy. The existence of these feedback phenomena is indicated by the ripple effects
associated with the default of Lehman Brothers on September 15, 2008, and is further
empirically documented in Azizpour & Giesecke (2008a), Jorion & Zhang (2007) and
others. Our jump size specification guarantees that the impact of an event increases
with the default rate prevailing at the event: the weaker the firms the stronger the
impact. An alternative motivation of the intensity jumps is Bayesian learning: an
event reveals information about the values of unobserved event covariates, and this
leads to an update of the intensity; see Collin-Dufresne, Goldstein & Helwege (2009),
Duffie et al. (2009) and others.
Algorithm 3 (Default Time Simulation). Generating a sample path of the portfolio default process N over [τ, H] for a horizon H > τ and the model (2.10).

1: Draw (N_τ, T_{N_τ}, λ_{T_{N_τ}}) uniformly from {(N_τ(ω_i), T_{N_τ}(ω_i), λ_{T_{N_τ}}(ω_i)) : i ≤ I}.
2: Initialize (n, S) ← (N_τ, τ) and λ_S ← c λ_{T_n} + (1 − c) λ_{T_n} exp(−κ λ_{T_n}(S − T_n)).
3: loop
4:    Draw E ∼ Exp(λ_S) and set T ← S + E.
5:    if T > H then
6:       Exit loop.
7:    end if
8:    λ_T ← c λ_{T_n} + (λ_S − c λ_{T_n}) exp(−κ λ_{T_n}(T − S)).
9:    Draw u ∼ U(0, 1).
10:   if u ≤ λ_T / λ_S then
11:      λ_T ← λ_T + max(γ, δ λ_T).
12:      Assign T_{n+1} ← T and update n ← n + 1.
13:   end if
14:   Set S ← T and λ_S ← λ_T.
15: end loop
2.3.3 Event and loss simulation
The piece-wise deterministic dynamics of the intensity (2.10) generate computational
advantages for model calculation. One benefit of this feature is that it allows us to
estimate the distribution of N and related quantities by exact Monte Carlo simulation
of the jump times of N . The inter-arrival times of N can be generated sequentially
by acceptance-rejection sampling from a dominating counting process with intensity
λ_{T_{N_{t−}}} ≥ λ_{t−}.
The steps are summarized in Algorithm 3. In Step 1, we draw the state at the
analysis time τ from the re-sampling scenarios of N , and the corresponding fitted
intensity scenarios. The rate λS of the dominating Poisson process, from which a
candidate default time is generated in Step 4, is not only redefined at each acceptance,
but also at each rejection of a candidate time. This improves the efficiency of the algorithm: the acceptance probability is higher, so fewer candidate times are wasted.
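To make the mechanics concrete, the following is a minimal Python sketch of Algorithm 3 for a single path started at time 0; the function and parameter names are illustrative, and the re-sampling initialization of Step 1 is replaced by a fixed initial state λ_0.

```python
import math
import random

def simulate_default_times(lam0, kappa, c, delta, gamma, horizon, seed=None):
    # Acceptance-rejection (thinning) simulation of the event times of a
    # self-exciting intensity whose reversion level c*lam_Tn and speed
    # kappa*lam_Tn are re-anchored at each event, as in Algorithm 3.
    rng = random.Random(seed)
    times = []
    S, lam_S, lam_Tn = 0.0, lam0, lam0
    while True:
        # Candidate arrival from a dominating Poisson process with rate lam_S;
        # between events the true intensity only decays, so lam_S >= lam_t.
        T = S + rng.expovariate(lam_S)
        if T > horizon:
            return times
        # Deterministic decay toward the current reversion level c*lam_Tn.
        lam_T = c * lam_Tn + (lam_S - c * lam_Tn) * math.exp(-kappa * lam_Tn * (T - S))
        if rng.random() <= lam_T / lam_S:
            # Accept: the intensity jumps by max(gamma, delta*lam_T), and the
            # reversion level and speed change with the post-event intensity.
            lam_T += max(gamma, delta * lam_T)
            lam_Tn = lam_T
            times.append(T)
        # The dominating rate is refreshed at every candidate, accepted or not.
        S, lam_S = T, lam_T
```

With the parameter values of Figure 2.2, θ = (0.25, 0.05, 0.4, 0.8, 2.5), a 10 year path contains roughly 37 events on average.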
Algorithm 3 leads to a trajectory of the default process N for a portfolio with
replacement of defaulters. A trajectory of the corresponding portfolio loss process
L = ℓ_1 + · · · + ℓ_N is obtained by drawing the loss ℓ_n at each event time T_n of N. The distribution µ_ℓ of ℓ_n is estimated by the empirical distribution of the losses associated with all defaults in the re-sampling scenarios {N(ω_i) : i ≤ I}.

Once a path of N is generated, Algorithm 2 can be used to obtain a path of the default process N′ for a portfolio without replacement of defaulters. A trajectory of the corresponding portfolio loss process L′ = ℓ′_1 + · · · + ℓ′_{N′} is obtained by drawing the loss ℓ′_n at each event time from µ_ℓ.
Algorithm 3 is easy to implement and runs fast. For example, generating 100K
paths of N over 5 years for the fitted model parameters in Table 2.1 takes 1.67 seconds.
Generating 100K paths of N ′ takes, in the same configuration, 2.14 seconds.9
2.3.4 Likelihood estimators
Another advantage of the piece-wise deterministic intensity dynamics (2.10) is that
the log-likelihood function in problem (3.3) takes a closed form that can be computed
exactly. Based on I = 10K re-sampling scenarios, we address the problem (3.3) with
the Nelder-Mead algorithm, which uses only function values. The algorithm is initial-
ized at a set of random parameter values, which are drawn from a uniform distribution
on the parameter space Θ = (0, 2) × (0, 1) × (0, 2)² × (0, 20). For each of 100 randomly
chosen initial parameter sets, the algorithm converges to the optimal parameter value
θ reported in Table 2.1.10 We also provide asymptotic and bootstrapping confidence
intervals. The left panel of Figure 2.3 shows the path of the fitted mean intensity λ̄ = I^{−1} Σ_{i=1}^{I} λ^θ(ω_i).
To provide some perspective on the parameter estimates, we employ an alternative
inference procedure. Instead of maximizing the total likelihood (3.3) associated with
9 The numerical experiments in this chapter were performed on a desktop PC with an AMD Athlon 1.00 GHz processor and 960 MB of RAM, running Windows XP Professional. The codes were written in C++ and the compiler used was Microsoft Visual C++ .NET version 7.1.3088. For random number generation, numerical optimization, and numerical root-finding, we used the GNU Scientific Library, Version 1.11.
10 In the computing environment described in Footnote 9, it takes 0.15 seconds to evaluate the log-likelihood function for I = 10K and a given parameter set. The full optimization takes 146.88 seconds when the initial parameters are set to the mid-points of the parameter space.
Parameter    κ              c              δ              γ              λ_0
MLE          0.254          0.004          0.419          0.810          8.709
95% CI (A)   (0.252,0.255)  (0.004,0.004)  (0.417,0.422)  (0.806,0.813)  (8.566,8.851)
95% CI (B)   (0.250,0.258)  (0.003,0.004)  (0.406,0.433)  (0.791,0.829)  (8.538,8.882)
Median       0.263          0.004          0.430          0.779          9.405
Table 2.1: Maximum likelihood estimates of the parameters of the intensity λ forthe CDX.HY6 portfolio, along with estimates of asymptotic (A) and bootstrapping(B) 95% confidence intervals (10K bootstrap samples were used). The “Median” rowindicates the median of the empirical distribution of the per-path MLEs over all re-sampling paths. The estimates are based on I = 10K re-sampling scenarios, generatedby Algorithm 1 from the observed defaults in the universe of Moody’s rated namesfrom 1/1/1970 to 11/7/2008.
all paths of N , we maximize the path log-likelihood
supθ(ωi)∈Θ
∫ τ
0
(log λθ(ωi)s− (ωi)dNs(ωi)− λθ(ωi)s (ωi)ds)
for each i = 1, 2, . . . , I. The last row in Table 2.1 shows the median of the empirical
distribution of the per-path MLE θ(ωi) over all paths ωi. These values are in good
agreement with the MLEs, supporting our total likelihood estimation strategy.
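Because the intensity is piecewise deterministic between events, the compensator integral in the path log-likelihood has a closed form. The following Python sketch evaluates the log-likelihood for a single path of event times; the exponential-decay dynamics and parameterization θ = (κ, c, δ, γ, λ_0) follow the model description, while the function name and interface are illustrative.

```python
import math

def path_log_likelihood(theta, event_times, tau):
    # sum_n log(lambda_{T_n-}) - integral_0^tau lambda_s ds, evaluated in
    # closed form for the piecewise-deterministic adaptive intensity.
    kappa, c, delta, gamma, lam0 = theta
    ll = 0.0
    # Last grid time, intensity there, and last post-event (anchor) intensity.
    S, lam_S, lam_n = 0.0, lam0, lam0
    for T in event_times:
        rev, speed = c * lam_n, kappa * lam_n   # current reversion level and speed
        dt = T - S
        lam_T = rev + (lam_S - rev) * math.exp(-speed * dt)  # intensity just before the event
        # Closed-form compensator increment: integral_S^T lambda_s ds.
        ll -= rev * dt + (lam_S - rev) * (1.0 - math.exp(-speed * dt)) / speed
        ll += math.log(lam_T)
        lam_T += max(gamma, delta * lam_T)      # post-event jump re-anchors the dynamics
        S, lam_S, lam_n = T, lam_T, lam_T
    # Compensator over the remaining interval (last event, tau].
    rev, speed = c * lam_n, kappa * lam_n
    dt = tau - S
    ll -= rev * dt + (lam_S - rev) * (1.0 - math.exp(-speed * dt)) / speed
    return ll
```

Maximizing this function over θ for each path with a derivative-free method such as Nelder-Mead gives the per-path MLEs whose median is reported in Table 2.1.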
2.3.5 Testing in-sample fit
Above we have stressed the computational advantages of our intensity model (2.10).
In this section and the next, we evaluate the model statistically. We show that the
model fits the default data, and that the fitted model leads to accurate event forecasts.
We evaluate the joint specification of Z∗ and λ based on Proposition 2.2.2.11
The right panel of Figure 2.3 shows the fitted path of the economy-wide intensity λ∗ = λ̄/Z∗, which defines the time change. We test the Poisson property of the time-scaled inter-arrival times W∗_n = ∫_{T∗_{n−1}}^{T∗_n} (λ_s/Z∗_s) ds using a Kolmogorov-Smirnov (KS)
11 Proposition 2.2.2 requires the process Z∗ to be strictly positive. The fitted values Z∗_t are indeed strictly positive during our sample period. We approximate the paths of the fitted process Z∗ and the fitted mean intensity λ̄ on a discrete-time grid with daily spacing. When multiple economy-wide defaults occur on the same day, then the arrival times are spaced equally.
Figure 2.3: Left panel: Fitted mean portfolio intensity λ̄ = I^{−1} Σ_{i=1}^{I} λ^θ(ω_i) vs. [5%, 95%] percentiles (boxes), [1%, 99%] percentiles (whiskers) and the mean number of portfolio defaults in any given year between 1970 and 2008, based on I = 10K re-sampling scenarios for the CDX.HY6. Right panel: Fitted economy-wide intensity λ̄/Z∗ vs. economy-wide defaults between 1970 and 2008, semi-annually. The fitted intensity matches the time-series fluctuation of observed economy-wide default rates.
test, a QQ plot, and Prahl's (1999) test. The KS test addresses the deviation of the empirical distribution of the (W∗_n) from their theoretical standard exponential distribution. Prahl's test is particularly sensitive to large deviations of the (W∗_n) from their theoretical mean 1. Prahl shows that if the (W∗_n) are independent samples from a standard exponential distribution, then

M = (1/m) Σ_{n : W∗_n < µ} (1 − W∗_n/µ)    (2.15)

is asymptotically normally distributed with mean µ_M = e^{−1} − 0.189/m and standard deviation σ_M = 0.2427/√m, where m = N∗_τ is the number of arrivals and µ is the sample mean of (W∗_n)_{n≤m}. We reject the null hypothesis of a correct joint specification of Z∗ and λ if the test statistic ∆_M = (M − µ_M)/σ_M lies outside of the interval (−1, 1).
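Prahl's statistic (2.15) and its normalization are simple to compute from the inter-arrival times; a sketch (the function name is illustrative):

```python
import math

def prahl_statistic(w):
    # Normalized Prahl test statistic Delta_M for inter-arrival times w,
    # which are i.i.d. standard exponential under the null.
    m = len(w)
    mu = sum(w) / m                              # sample mean of the gaps
    M = sum(1.0 - x / mu for x in w if x < mu) / m
    mu_M = math.exp(-1.0) - 0.189 / m            # asymptotic null mean
    sigma_M = 0.2427 / math.sqrt(m)              # asymptotic null std dev
    return (M - mu_M) / sigma_M                  # reject outside (-1, 1)
```

For example, strongly under-dispersed gaps such as an alternating sequence of short and long waits push the statistic far below −1.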
Figure 2.4 contrasts the empirical distribution of the (W∗_n) with the theoretical
standard exponential distribution. The KS test has a p-value of 0.094, indicating
Figure 2.4: In-sample fitness tests: empirical distribution of time-scaled, economy-wide inter-arrival times generated by the fitted (mean) paths of Z∗ and λ̄, based on I = 10K re-sampling scenarios for the CDX.HY6. Left panel: Empirical quantiles of time-scaled inter-arrival times vs. theoretical standard exponential quantiles. Right panel: Empirical distribution function (solid) of time-scaled inter-arrival times vs. theoretical standard exponential distribution function (dotted) along with 1% and 5% bands.
that the deviation of the empirical distribution from the theoretical distribution is not
statistically significant at standard confidence levels. The value of Prahl’s test statistic
∆M = 0.87 leads to the same conclusion. The results of these and other12 tests
suggest that the economy-wide intensity λ∗ generated by (2.3) and (2.10) replicates
the substantial time-series variation of default rates during 1970–2008. This means
that our adaptive intensity model (2.10) captures the clustering of defaults observed
during the sample period.
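The KS comparison against the standard exponential can likewise be computed directly from the time-scaled gaps; a minimal sketch (the function name is illustrative):

```python
import math

def ks_statistic_exp(w):
    # One-sample Kolmogorov-Smirnov distance between the empirical
    # distribution of the gaps w and the standard exponential cdf 1 - e^{-x}.
    w = sorted(w)
    m = len(w)
    d = 0.0
    for i, x in enumerate(w):
        F = 1.0 - math.exp(-x)
        # The empirical cdf jumps from i/m to (i+1)/m at x.
        d = max(d, abs((i + 1) / m - F), abs(i / m - F))
    return d
```

A p-value then follows from the asymptotic Kolmogorov distribution of √m·d.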
A benefit of the time-scaling tests is that they can be applied to alternative model
formulations, facilitating a direct comparison of fitting performance. Consider the
bottom-up specification of Das et al. (2007a), which appears to be much richer than
12 To detect a possible serial dependence of the W∗_n, we also consider the test statistics SC1 and SC2 described by Lando & Nielsen (2009). These tests check the dependence of the number of re-scaled events in non-overlapping time bins. Under the null of Poisson arrivals, the event counts are independent. We cannot reject the null for bin sizes 2, 4, 6, 8 and 10 at standard confidence levels.
ours: they specify firm-level intensity models with conditioning information given by
a set of firm-specific and macro-economic covariates. Das et al. (2007a) find that
the time-scaled, economy-wide times generated by their model deviate significantly
from those of a Poisson process. In particular, they find that their specification does
not completely capture the event clusters in the data. Thus, they reject this model
formulation at standard confidence levels. This may indicate that, relative to our
portfolio-level formulation with conditioning information given by past default timing,
the additional modeling and estimation effort involved in a firm-level intensity model
formulation with a large set of exogenous covariates may not translate into better fits
and forecast performance.
Finally we note that the time-scaling tests can also be used to evaluate the specification of λ directly on the re-sampling scenarios {N(ω_i) : i ≤ I}. For each path ω_i, we time-scale the portfolio default process N(ω_i) with its fitted compensator A(ω_i) = ∫_0^· λ_s(ω_i) ds, and calculate the statistics of the KS test and Prahl's test for the time-scaled inter-arrival times W_n = ∫_{T_{n−1}(ω_i)}^{T_n(ω_i)} λ^θ_s(ω_i) ds, which can be calculated exactly thanks to formula (2.14). The 95% confidence interval for the p-value of the KS test is 0.24 ± 0.01. The 95% confidence interval for Prahl's test statistic, defined for (W_n) in analogy to (2.15), is 0.81 ± 0.07. These tests also indicate that the adaptive model (2.10) is well-specified.
2.3.6 Testing out-of-sample loss forecasts
Next we assess the out-of-sample event forecast performance of the fitted model (2.10).
We contrast the loss forecast for a test portfolio without replacement with the actual
loss in the test portfolio. The firms in the test portfolio are randomly selected from the
universe of rated issuers, such that the test portfolio has the same rating composition
as the CDX.HY6.13 We apply Algorithms 3 and 2 to obtain an unbiased estimate of
the conditional distribution of the incremental test portfolio loss L′_{τ+1} − L′_τ given F_τ. We then contrast this distribution with the actual loss in the test portfolio during
13 We do not use the CDX.HY6 itself, since it was formed only in 2006, and we want to test the loss process over a longer time period, starting in 1996. The analysis would suffer from survivorship bias if we were to take the CDX portfolio itself.
Figure 2.5: Out-of-sample test of loss forecasts for a test portfolio with the same rating composition as the CDX.HY6. We show the [25%, 75%] percentiles (box), the [15%, 85%] percentiles (whiskers) and the median (horizontal line) of the fitted conditional distribution of the incremental portfolio loss L′_{τ+1} − L′_τ given F_τ for τ varying annually between 1/1/1996 and 1/1/2007, estimated from 100K paths generated by Algorithms 3 and 2, and the realized portfolio loss during [τ, τ + 1]. Left panel: The test portfolio is selected at the beginning of each period. Right panel: The test portfolio is selected in 1996.
(τ, τ + 1], for yearly analysis times τ between 1/1/1996 and 1/1/2007.
Figure 2.5 shows the results for two settings that represent common situations in
practice. In one setting (left panel), we roll over the test portfolio in each 1 year test
period. That is, for each test period, we select a new test portfolio of 100 issuers at
the beginning of the period, estimate the portfolio loss distribution based on data
available at the beginning of the period, and compare it with the realized portfolio
loss during the forecast period. In the other setting (right panel), we select the test
portfolio only once, in 1996. For each period, we estimate the loss distribution for the
portfolio of names that have survived to the beginning of the period, and compare it
with the realized portfolio loss during the forecast period. This setting is motivated
by the situation of a buy and hold investor. The graphs indicate that the portfolio
loss forecasts are very accurate.
2.4 Synthetic collateralized debt obligations
Having established the fitting and forecast performance of the adaptive model (2.10)
of portfolio default timing, we use it to estimate the exposure of an investor selling
default protection on a tranche of a synthetic CDO. A synthetic CDO is based on
a portfolio of C single-name credit swaps with notional 1 and maturity date H, the
maturity date of the CDO. The constituent credit swaps are referenced on bonds
issued by rated firms. When a bond issuer defaults, the corresponding credit swap
pays the loss associated with the event. A defaulter is not replaced.
A tranche of a synthetic CDO is a swap contract specified by a lower attachment point K ∈ [0, 1) and an upper attachment point K̄ ∈ (K, 1]. An index swap has attachment points K = 0 and K̄ = 1. The protection seller agrees to cover all losses due to default in the reference portfolio, provided these losses exceed a fraction K of the total notional C of the reference contracts but are not larger than a fraction K̄ of C. In exchange, the protection buyer pays to the protection seller an upfront fee at inception, and a quarterly fee, both of which are negotiated at contract inception.
We estimate the risk exposure of the tranche protection seller, which is measured
by the conditional distribution at contract inception of the cumulative tranche loss,
i.e., the portfolio loss allocated to the tranche over the life of the contract. With the
convention that the portfolio loss at contract inception is equal to zero, the cumulative
tranche loss at a post-inception time t ≤ H is given by the “call spread”

U_t(K, K̄) = (L′_t − CK)⁺ − (L′_t − CK̄)⁺.    (2.16)

The left panel of Figure 2.6 shows the conditional distribution of the normalized 5 year cumulative tranche loss U_H(K, K̄)/(C(K̄ − K)) for the CDX.HY6 on 3/27/2006, the Series 6 contract inception date,14 for each of several standard attachment point pairs.
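The call-spread payoff (2.16) is straightforward to evaluate along each simulated loss path; a short sketch with illustrative names:

```python
def tranche_loss(L, C, K_lo, K_hi):
    # Call spread (2.16): the portfolio loss L allocated to the tranche with
    # lower attachment K_lo and upper attachment K_hi on total notional C.
    return max(L - C * K_lo, 0.0) - max(L - C * K_hi, 0.0)
```

For instance, with C = 100, a portfolio loss of 12 allocates 2 to the (10%, 15%) mezzanine tranche and wipes out the 0–10% equity tranche.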
The maturity date H for Series 6 contracts is 6/27/2011.15 To estimate the tranche
14 Every 6 months, the CDX High Yield index portfolio is “rolled.” That is, a new portfolio with a new serial number is formed by replacing names that have defaulted since the last roll, and possibly other names. The index and tranche swaps we consider are tied to a fixed series.
15 Although the actual time to maturity is 5 years and 3 months at contract inception, here and below, we follow the market convention of referring to the contract as a “5 year contract.” At an
Figure 2.6: Kernel smoothed conditional distribution of the normalized 5 year cumulative tranche loss for the CDX.HY6 on 3/27/2006, the Series 6 contract inception date, for each of several standard attachment point pairs. Here, H represents the maturity date 6/27/2011. Left panel: Actual distribution, estimated using the model and fitting methodology developed in Sections 2.2 and 2.3, based on I = 10K re-sampling scenarios for N, and 100K replications. Right panel: Risk-neutral distribution, estimated from the market prices paid for 5 year tranche protection on 3/27/2006 using the method explained in Appendix A, based on 100K replications.
loss distribution, we generate default scenarios during 1/1/1970 – 3/27/2006 for the
CDX portfolio from Moody’s default history by the re-sampling Algorithm 1. Next
we estimate the intensity model (2.10) from these scenarios, as described in Section
2.3.4, with parameter estimates reported in Table 2.2. Then we generate event times
during the prediction interval 3/27/2006 – 6/27/2011 from the fitted intensity using
Algorithm 3, which we thin using Algorithm 2 to obtain paths of portfolio default
times and losses without replacement. The trajectories for U(K, K̄) thus obtained
lead to an unbiased estimate of the desired distribution.
The loss distributions indicate the distinctions between the risk profiles of the
different tranches. The equity tranche, which has attachment points 0 and 10%,
carries the highest exposure among all tranches. For the equity protection seller,
index roll, new “5 year contracts” are issued, and these mature in 5 years and 3 months from the roll date.
Parameter    κ              c              δ              γ              λ_0
MLE          0.260          0.003          0.439          0.856          8.494
95% CI (A)   (0.259,0.261)  (0.003,0.003)  (0.437,0.441)  (0.853,0.859)  (8.368,8.621)
95% CI (B)   (0.256,0.265)  (0.003,0.004)  (0.424,0.454)  (0.836,0.876)  (8.327,8.650)
Table 2.2: Maximum likelihood estimates of the parameters of the portfolio intensityλ for the CDX.HY6, along with estimates of asymptotic (A) and bootstrapping (B)95% confidence intervals (10K bootstrap samples were used). The estimates arebased on I = 10K re-sampling scenarios for N , generated by Algorithm 1 from theobserved economy-wide defaults in Moody’s universe of rated names from 1/1/1970to 3/27/2006.
the probability of trivial losses is roughly 15%, and the probability of losing the
full tranche notional is 20%. For the (10%, 15%)-mezzanine protection seller, the
probabilities are roughly 80% and 10%, respectively. The mezzanine tranche is less
risky, since the equity tranche absorbs the first 10% of the total portfolio loss. For
the (15%, 25%)-senior protection seller, the probabilities are roughly 90% and 5%,
respectively.
To provide some perspective on these numbers, we also estimate the risk-neutral
tranche loss distributions implied by the market prices paid for 5 year tranche protec-
tion on 3/27/2006. The estimation procedure, explained in Appendix A, is based on
the intensity model (2.10) relative to a risk-neutral pricing measure. The risk-neutral
intensity parameters are chosen such that the model prices match the market index
and tranche prices as closely as possible. The fitting errors are small, which indicates that our adaptive intensity model (2.10) also performs well under a risk-neutral pricing measure.
The right panel of Figure 2.6 shows the risk-neutral distribution of U_H(K, K̄)/(C(K̄ − K)) for the CDX.HY6, for each of several standard attachment point pairs. For the
equity protection seller, the risk-neutral probability of trivial losses is zero, and the
risk-neutral probability of losing the full tranche notional is 70%. Compare with the
actual probabilities indicated in the left panel of Figure 2.6. For any tranche, the
risk-neutral probability of losing more than any given fraction of the tranche notional
is much higher than the corresponding actual probability. The distinction between
Figure 2.7: Kernel smoothed conditional distribution of the normalized cumulative portfolio loss L′_H/C for the CDX.HY6 on 3/27/2006 for horizons H varying between 3/27/2009 and 3/27/2016. Left panel: Actual distribution, estimated using the model and fitting methodology developed in Sections 2.2 and 2.3, based on I = 10K re-sampling scenarios for N, and 100K replications. Right panel: Risk-neutral distribution, estimated from the market prices paid for 5 year tranche protection on 3/27/2006 using the method explained in Appendix A, based on 100K replications.
the probabilities reflects the risk premium the protection seller requires for bearing
exposure to the correlated corporate default risk in the reference portfolio. Figure
2.7 shows the distributions of the normalized cumulative portfolio loss L′H/C for mul-
tiple horizons H. For all horizons, the risk-neutral distribution has a much fatter
tail than the corresponding actual distribution. In other words, when pricing index
and tranche contracts, the market overestimates the probability of extreme default
scenarios relative to historical default experience.
Our tools can also be used to analyze more complex investment positions. In-
vestors often trade the CDO capital structure, i.e. they simultaneously sell and buy
protection in different tranches. A popular trade used to be the “equity-mezzanine”
trade, in which the investor sells equity protection and hedges the position by buy-
ing mezzanine protection with matching notional and maturity. Figure 2.8 con-
trasts the conditional distribution on 3/27/2006 of the normalized cumulative loss
Figure 2.8: Kernel smoothed conditional distribution on 3/27/2006 of the normalized cumulative loss (U_H(0, 0.1) − U_H(0.1, 0.15))/0.1C associated with selling equity protection and buying mezzanine protection with matching notional and maturity, along with the distribution of the normalized cumulative equity loss U_H(0, 0.1)/0.1C, for the CDX.HY6.
(U_H(0, 0.1) − U_H(0.1, 0.15))/0.1C generated by this trade with the conditional distribution of the normalized cumulative equity loss U_H(0, 0.1)/0.1C, both for the
CDX.HY6. While the mezzanine hedge does not alter the probability of trivial losses
in the equity-only position, it does reduce the probability of a total loss of notional
from 20% to virtually zero. This is because the mezzanine protection position gen-
erates cash flows when equity is wiped out. The magnitude of these cash flows is
however capped at 50% of the total equity notional, and this property generates the
point mass at 5% in the loss distribution of the equity-mezzanine position.
We can also measure the risk of positions in tranches referenced on different port-
folios. For example, an investor may sell and buy protection on several of the refer-
ence portfolios in the CDX family, including the High Yield, Investment Grade and
Crossover portfolios. In this case, we estimate default and loss processes for each of
the portfolios based on the realization of the economy-wide default process. We gen-
erate events for each portfolio, and then aggregate the corresponding position losses
as in the equity-mezzanine case.
Figure 2.9: Cash flow “waterfall” of a sample cash CDO.
2.5 Cash collateralized debt obligations
Next we use the adaptive model to estimate the exposure of an investor in a tranche
of a cash CDO. A cash CDO is based on a portfolio of corporate bonds, mortgages
or other credit obligations, which is bought by a special purpose vehicle (SPV) that
finances the purchase by issuing a collection of tranches that may pay coupons. The
interest and recovery cash flows generated by the reference bonds are collected by
the SPV in a reserve account, and are then allocated to the tranches according to a
specified prioritization scheme. Losses due to defaults are absorbed by the tranche
investors, usually in reverse priority order. Figure 2.9 illustrates a typical structure.
The reference portfolio of a cash CDO may be actively managed. In this case, the
asset manager can buy and sell collateral bonds, and replace defaulted obligations.
We analyze a basic unmanaged cash CDO, in which the reference bonds are held to maturity and defaulted bonds are not replaced. We assume that the C reference bonds are straight coupon bonds with notional 1, maturity date H equal to the CDO maturity date, issuance date t_0 equal to the CDO inception date, coupon dates (t_m)_{1≤m≤M} with t_M = H, and per-period coupon rate v. The total interest income to
the SPV in period m is therefore
W(m) = v(C − N′_{t_m}).    (2.17)
When a reference bond defaults, the SPV collects the recovery at the coupon date
following the event. The total recovery cash flow in period m is
K(m) = N′_{t_m} − N′_{t_{m−1}} − (L′_{t_m} − L′_{t_{m−1}}).
The total cash flows from coupons and recoveries are invested in a reserve account
that earns interest at the risk-free rate r. For period m, they are given by
B(m) = W(m) + K(m)                    if 1 ≤ m < M,
B(m) = W(m) + K(m) + C − N′_{t_M}     if m = M.
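Given a simulated path of the default count and loss at the coupon dates, the per-period SPV cash flows follow directly from these formulas. A sketch with illustrative names, where Np[m] = N′_{t_m} and Lp[m] = L′_{t_m} and index 0 corresponds to t_0:

```python
def spv_cashflows(v, C, Np, Lp):
    # Per-period cash collected by the SPV: coupon income W(m) from (2.17),
    # recoveries K(m) on period-m defaults (defaulted notional minus loss),
    # and the surviving principal returned at maturity m = M.
    M = len(Np) - 1
    flows = []
    for m in range(1, M + 1):
        W = v * (C - Np[m])                            # interest from survivors
        K = (Np[m] - Np[m - 1]) - (Lp[m] - Lp[m - 1])  # recovery cash flow
        B = W + K + ((C - Np[m]) if m == M else 0.0)   # principal at maturity
        flows.append(B)
    return flows
```

For example, two defaults with total loss 1.2 in the first of two periods yield a recovery of 0.8 in that period, and the surviving notional of 98 is returned at maturity.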
The SPV issues three tranches, of which two are debt tranches that promise to pay a specified coupon at times (t_m). There is one senior debt tranche, represented by a sinking-fund bond with initial principal p_1 and per-period coupon rate c_1, and a junior debt tranche that is represented by a sinking-fund bond with initial principal p_2 and per-period coupon rate c_2. The initial principal of the residual equity tranche is p_3 = C − p_1 − p_2. Each tranche has maturity date H.
For the debt tranches j = 1, 2 and coupon period m, we denote by F_j(m) the remaining principal. The scheduled interest payment is F_j(m)c_j, and the accrued unpaid interest is A_j(m − 1). If Q_j(m), the actual interest paid in period m, is less than F_j(m)c_j, then the difference is accrued at c_j to generate an accrued unpaid interest of

A_j(m) = (1 + c_j)A_j(m − 1) + c_j F_j(m) − Q_j(m).
The actual payments to the debt tranches are prioritized. We consider two prioritization schemes, the uniform and fast schemes, which were introduced by Duffie & Garleanu (2001) and which are reviewed in Appendix B for completeness. A prioritization scheme specifies the actual interest payment Q_j(m), the pre-payment of principal P_j(m), and the contractual unpaid reduction in principal J_j(m). The total
                                 Reference   Uniform Scheme         Fast Scheme
                                 Bond        Senior     Junior      Senior    Junior
Annualized Par Coupon Rate (bp)  811.23      476.58     1299.42     483.38    749.75
Par Coupon Spread (bp)           339.20      23.68      846.52      11.29     277.77
Table 2.3: Fitted annualized par coupon rates and spreads on 3/27/2006 forthe constituent bonds and debt tranches of a 5 year cash CDO referenced on theCDX.HY6, for each of two standard prioritization schemes. The principal values are(p1, p2, p3) = (85, 10, 5). The fitting procedure is described in Appendix C.
cash payment in period m is Q_j(m) + P_j(m). The remaining principal after interest payments is

F_j(m) = F_j(m − 1) − P_j(m) − J_j(m) ≥ 0.
The par coupon rate on tranche j is the scheduled coupon rate c_j with the property that the initial market value of the bond is equal to its initial face value F_j(0) = p_j. The residual equity tranche does not make scheduled coupon or principal payments, so c_3 = 0. Instead, at maturity H, after the debt tranches have been paid as scheduled, any remaining funds in the reserve account are allocated to the equity tranche.
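The detailed uniform and fast schemes are specified in Appendix B. As a rough illustration of the sequential logic only, the following sketch allocates one period's cash to the scheduled interest claims in priority order, omitting principal pre-payments and write-downs; any residual flows to the equity tranche. All names are illustrative, not the dissertation's implementation.

```python
def allocate_interest(cash, tranches):
    # tranches: list of (remaining_principal, coupon_rate, accrued_unpaid),
    # ordered from most senior to most junior. Returns the interest actually
    # paid to each debt tranche and the residual cash left for equity.
    paid = []
    for principal, coupon, accrued in tranches:
        due = coupon * principal + accrued   # scheduled plus accrued unpaid interest
        pay = min(cash, due)                 # pay what the reserve account allows
        cash -= pay
        paid.append(pay)
    return paid, cash
```

If the period's cash covers both debt claims, the residual goes to equity; in a shortfall, the junior tranche accrues the unpaid difference at its coupon rate, as in the A_j(m) recursion.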
We analyze the exposure of a tranche investor for a 5 year cash CDO referenced
on the CDX.HY6 of C = 100 names. This choice of reference portfolio allows us
to compare the results with those for the synthetic CDO. Before we can start this
analysis, we need to price the reference bonds and the debt tranches.16 The first
step is to estimate the par coupon rate of the reference bonds. This is the scheduled
coupon rate v∗ with the property that the initial market value of a reference bond is
equal to its initial face value 1. Given v∗, the second step is to estimate the par coupon
rates c∗_j of the debt tranches. Appendix C gives details on these steps, and Table 2.3
reports the annualized par coupon rates and spreads. The par coupon spread is the
difference between the par coupon rate and the (hypothetical) par coupon rate that
would be obtained if the reference bonds were not subject to default risk.
Next we estimate the conditional distribution of the discounted cumulative tranche
16 In practice, this is done by the SPV, which then offers the tranche terms to potential investors. We have to perform this task here since we do not have access to actual data for the structure we analyze.
loss, which is the loss the tranche investor faces during the life of the contract, dis-
counted at the risk-free rate r. At a post-inception time t ≤ H, the discounted
cumulative tranche loss is given by
U_jt(H, p_j, v∗, c∗_1, c∗_2) = V̄_jt(H, p_j, v∗, c∗_1, c∗_2) − V_jt(H, p_j, v∗, c∗_1, c∗_2)    (2.18)

where V_jt(H, p_j, v∗, c∗_1, c∗_2) is the present value at time t of the coupon and principal cash flows actually paid to tranche j over the life of the tranche, and V̄_jt(H, p_j, v∗, c∗_1, c∗_2) is the present value of these cash flows assuming no defaults occur during the remaining life. For a debt tranche, V̄_jt(H, p_j, v∗, c∗_1, c∗_2) represents the present value of all scheduled coupon and principal payments. For the equity tranche, this quantity represents the present value of the reserve account value after the debt tranches have been paid as scheduled.
Note that due to the complex structure of the cash flow prioritization, the tranche loss U_jt(H, p_j, v∗, c∗_1, c∗_2) does not admit a simple analytic expression in terms of the default count and loss of the reference portfolio. This leaves simulation as the only feasible tool for analyzing the cash CDO tranche loss, regardless of whether or not the models of the portfolio default count and loss processes are analytically tractable.
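Along each simulated path, the discounted quantities in (2.18) reduce to present values of quarterly cash flow vectors under the deterministic rate r; a sketch (function names are illustrative):

```python
import math

def present_value(flows, r, dt=0.25):
    # PV of quarterly cash flows, the m-th paid at time (m+1)*dt, discounted
    # continuously at the deterministic risk-free rate r.
    return sum(cf * math.exp(-r * dt * (m + 1)) for m, cf in enumerate(flows))

def discounted_tranche_loss(scheduled, actual, r):
    # U_jt in (2.18): PV of the scheduled (no-default) cash flows minus the
    # PV of the cash flows actually paid along the simulated path.
    return present_value(scheduled, r) - present_value(actual, r)
```

Averaging the loss over paths, or tabulating its empirical distribution, gives the estimates shown below.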
Figure 2.10 shows the fitted conditional distribution on 3/27/2006 of the normalized loss U_jτ(H, p_j, v∗, c∗_1, c∗_2)/V̄_jτ(H, p_j, v∗, c∗_1, c∗_2) for a 5 year structure whose maturity date H is 6/27/2011, for each of several tranches. To estimate that distribution,
we proceed as described in Section 2.4 for the synthetic CDO, and generate paths
of the fitted portfolio default and loss processes. These are then fed into the cash
flow calculator, which computes V_jt(H, p_j, v∗, c∗_1, c∗_2) for each path. We assume that
coupons are paid quarterly. The risk-free interest rate r is deterministic, and is esti-
mated from Treasury yields for multiple maturities on 3/27/2006, obtained from the
website of the Department of Treasury.
As in the synthetic CDO case, the risk profile of a cash CDO tranche depends on
the degree of over-collateralization, i.e. the location of the attachment points. It also
depends on the prioritization scheme specified in the CDO terms. With the uniform
Figure 2.10: Kernel smoothed conditional distribution on 3/27/2006 of the normalized discounted loss Ujτ(H, pj, v∗, c∗1, c∗2)/V̄jτ(H, pj, v∗, c∗1, c∗2) for a 5 year cash CDO referenced on the CDX.HY6, whose maturity date H is 6/27/2011. The initial tranche principals are p1 = 85 for the senior tranche, p2 = 10 for the junior tranche, and p3 = 5 for the residual equity tranche. We apply the model and fitting methodology developed in Sections 2.2 and 2.3, based on I = 10K re-sampling scenarios for N, and 100K replications. Left panel: Uniform prioritization scheme. Right panel: Fast prioritization scheme.
scheme, the cash flows generated by the reference bonds are allocated sequentially
to the debt tranches according to priority order, and default losses are applied to
all tranches in reverse priority order. With the fast scheme, the cash flows are used
to retire the senior tranche as soon as possible. After the senior tranche is retired,
the cash flows are used to service the junior tranche. After the junior tranche is
retired, any residual cash flows are distributed to the equity tranche. Therefore, the
senior tranche investor is less exposed under the fast scheme: the probability of zero
losses is increased from roughly 95% for the uniform scheme to roughly 98%. This
reduction in risk is reflected by the par coupon spreads reported in Table 2.3. For
the junior investor, the probability of zero losses increases from roughly 82% for the
uniform scheme to roughly 93% for the fast scheme, but the probability of a loss equal
to the present value of all scheduled coupon and principal payments increases from
zero to about 3%. Nevertheless, the junior tranche commands a much smaller spread
[Cumulative distribution of tranche P&L (%), Uniform vs. Fast scheme.]
Figure 2.11: Kernel smoothed conditional distribution on 3/27/2006 of the normalized discounted profit and loss (V3τ(H, 5, v∗, c∗1, c∗2) − 5)/5 for the residual equity tranche of a 5 year cash CDO referenced on the CDX.HY6, whose maturity date H is 6/27/2011. We apply the model and fitting methodology developed in Sections 2.2 and 2.3, based on I = 10K re-sampling scenarios for N, and 100K replications.
under the fast scheme. The risk reduction for the debt tranche investors comes at
the expense of the equity investor: while the probability of zero losses is about 15%
for both schemes, the probability of a loss equal to the present value of the reserve
account value after the debt tranches have been paid as scheduled increases from zero
to roughly 8% for the fast scheme. On the other hand, the equity investor has a
much higher upside under the fast scheme. Figure 2.11 shows the distribution of the
normalized discounted profit and loss (V3τ(H, 5, v∗, c∗1, c∗2) − 5)/5 for an equity tranche
position. For the equity investor, the probability of more than doubling the principal
p3 after discounting is roughly 75% under the fast scheme, while it is only 60% under
the uniform scheme. Figure 2.12 shows that for the debt tranches, the upside is much
more limited, for any scheme. For example, the normalized discounted profit on the
senior tranche can be at most 0.5% under the fast scheme and about 1% under the
uniform scheme.
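The two prioritization rules can be contrasted with a stylized one-period allocation. This is an illustrative simplification (one aggregate pool cash flow, hypothetical function names, no coupons, reserve account, or loss write-downs), not the chapter's cash flow calculator:

```python
def waterfall_uniform(cash, scheduled):
    """Uniform scheme: pay each debt tranche its scheduled coupon and
    principal amount in priority order (senior first); whatever cash
    remains after all scheduled payments flows to the equity tranche."""
    paid = []
    for due in scheduled:
        pay = min(cash, due)
        paid.append(pay)
        cash -= pay
    return paid, cash  # second element: residual to equity

def waterfall_fast(cash, balances):
    """Fast scheme: sweep all available cash toward retiring outstanding
    tranche principal in strict priority order; equity receives cash
    only after every debt tranche has been fully retired."""
    paid = []
    for bal in balances:
        pay = min(cash, bal)
        paid.append(pay)
        cash -= pay
    return paid, cash
```

For instance, period cash of 12 against scheduled senior/junior payments of 4 and 2 leaves 6 for equity under the uniform scheme, while the fast scheme applies all 12 to the senior balance of 85 and pays equity nothing.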
A standard measure to quantify the risk of a position is value at risk, a quantile
[Two panels: cumulative distribution of tranche P&L (%), Uniform vs. Fast scheme.]
Figure 2.12: Kernel smoothed conditional distribution on 3/27/2006 of the normalized discounted profit and loss (Vjτ(H, pj, v∗, c∗1, c∗2) − pj)/pj for the debt tranches of a 5 year cash CDO referenced on the CDX.HY6, whose maturity date H is 6/27/2011. We apply the model and fitting methodology developed in Sections 2.2 and 2.3, based on I = 10K re-sampling scenarios for N, and 100K replications. Left panel: Senior tranche with principal p1 = 85. Right panel: Junior tranche with principal p2 = 10.
of the position’s loss distribution. We estimate the value at risk at time t ≤ H of a
position in a cash CDO tranche maturing at H, given by
VaRjt(α, H, pj, v∗, c∗1, c∗2) = inf{ x ∈ R : P( Ujt(H, pj, v∗, c∗1, c∗2) / V̄jt(H, pj, v∗, c∗1, c∗2) ≤ x | Ft ) > α }
for some level of confidence α ∈ (0, 1). Figure 2.13 shows the 99.5% value at risk for
the 5 year senior tranche with maturity date H given by 6/27/2011, for analysis times
varying weekly between 3/27/2006 and 11/7/2008, for each of the two prioritization
schemes. Assuming risk capital is allocated according to the value at risk, a value in
the time series represents the amount of risk capital that is needed at that time to
support the tranche position over its remaining life. For both prioritization schemes,
the time series behavior reflects the rising trend in corporate defaults that started in
2008 and that is evidenced in Figure 2.1. The fast scheme requires less risk capital
than the uniform scheme but leads to higher volatility of risk capital.
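In the simulation framework, this quantile is estimated from the empirical distribution of the sampled normalized losses. A minimal sketch of the generic estimator (an illustrative assumption about implementation, not the dissertation's code):

```python
def value_at_risk(losses, alpha):
    """Empirical value at risk: the smallest sample value x such that
    the empirical probability of a loss <= x exceeds alpha."""
    xs = sorted(losses)
    n = len(xs)
    for i, x in enumerate(xs):
        if (i + 1) / n > alpha:  # empirical CDF evaluated at x
            return x
    return xs[-1]
```

With 95 zero-loss paths and 5 total-loss paths, for example, the 95% value at risk is the total loss, while any level below 95% returns zero.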
[Two time-series panels, Mar-06 through Sep-08, of the 99.5% Value-at-Risk (%): left, Uniform (Senior); right, Fast (Senior).]
Figure 2.13: 99.5% value at risk VaR1τ(0.995, H, 85, v∗, c∗1, c∗2) for the 5 year senior tranche of a cash CDO referenced on the CDX.HY6 with maturity date 6/27/2011, for τ varying weekly between 3/27/2006 and 11/7/2008, based on I = 10K re-sampling scenarios for N, and 100K replications. Left panel: Uniform prioritization scheme. Right panel: Fast prioritization scheme.
2.6 Conclusion
This chapter develops, implements and validates stochastic methods to measure the
risk of investment positions in collateralized debt obligations and related credit deriva-
tives tied to an underlying portfolio of defaultable assets. The ongoing financial crisis
highlights the need for sophisticated yet practical tools allowing potential and ex-
isting investors as well as regulators to quantify the exposure associated with such
positions, and to accurately estimate the amount of risk capital required to support
a position.
The key to addressing the risk analysis problem is a model of default timing that captures the default clustering in the underlying portfolio, and a method to estimate the
model parameters based on historical default experience. This chapter contributes
to each of these two sub-problems. It formulates an adaptive, intensity-based point
process model of default timing that performs well according to in- and out-of-sample
tests. Moreover, it develops a maximum likelihood approach to estimating point pro-
cess models. This approach is based on an acceptance/rejection re-sampling scheme
that generates alternative portfolio default histories from the available economy-wide
default process.
The point process model and inference method have potential applications in
other areas dealing with correlated event arrivals, within and beyond financial engi-
neering. Financial engineering examples include the pricing and hedging of securities
exposed to correlated default risk, and order book modeling. Other example areas
are insurance, healthcare, queuing, and reliability.
Chapter 3
Systemic Risk: What Defaults Are Telling Us
This chapter develops maximum likelihood estimators of the term structure of sys-
temic risk in the U.S. financial sector, defined as the conditional probability of failure
of a large number of financial institutions. The estimators are based on a new dy-
namic hazard model of failure timing that captures the influence of time-varying
macro-economic and sector-specific risk factors on the likelihood of failures, and the
impact of risk spillovers due to contagion or incomplete information about relevant
risk factors. The estimation results, which cover the period January 1987 to Decem-
ber 2008, provide strong evidence for the presence of failure clustering not caused by
variations in the observable explanatory covariates, which include the trailing return
on the S&P 500 index, the lagged slope of the U.S. yield curve, the default and TED
spreads, and other sector-specific variables.
This chapter is joint work with Kay Giesecke.
3.1 Introduction
Systemic risk in the financial sector is difficult to measure, and its sources are poorly
understood. This makes it hard for regulators and policy makers to address it effec-
tively. A major challenge is to take account of the risk spillovers that may be caused
by incomplete information about relevant risk factors or contagion in an increasingly
opaque network of interbank loans, derivative trading relationships, and other links
between firms.
This chapter develops maximum likelihood estimators of the term structure of
systemic risk, defined as the conditional probability of failure of a sufficiently large
fraction of the total population of financial institutions. Its main contribution over
prior work is to incorporate the effects of spillovers when measuring systemic risk. The
estimators provide strong evidence for the presence of risk spillovers in the U.S. finan-
cial system during 1987–2008, after controlling for the exposure of firms to observable
risk factors, a source of systemic risk that has been emphasized in the literature on
bank failure prediction. The part of systemic risk not caused by the variation of the
trailing return of the S&P 500, the lagged slope of the U.S. yield curve, the default
and TED spreads, and other observable variables can be substantial, and tends to
be higher in periods of adverse economic conditions. The results indicate that the
systemic risk in the U.S. financial sector can be much greater than would be esti-
mated under the conventional assumption that bank failure clusters arise only from
exposure to observable risk factors.
Our empirical analysis is based on a new dynamic hazard model of correlated fail-
ure timing that extends the proportional hazards specification used by Das, Duffie,
Kapadia & Saita (2007b), Duffie, Saita & Wang (2007), McDonald & Van de Gucht
(1999) and others to predict non-financial corporate default, and by Brown & Dinc
(2005), Brown & Dinc (2009), Lane, Looney & Wansley (1986), Whalen (1991),
Wheelock & Wilson (2000) and others to forecast bank failures. The distinguishing
feature of our formulation is an additional hazard term that is designed to capture
the statistical implications of risk spillovers within and between the real and finan-
cial sectors. The specification addresses the effects of Bayesian learning at defaults
about unobserved or missing risk factors, a source of spillovers emphasized by Collin-
Dufresne et al. (2009), Duffie et al. (2009) for the real sector, and by Acharya &
Yorulmazer (2008), Aharony & Swary (1983), Cooperman, Lee & Wolfe (1992) and
others for the financial sector. It also covers spillovers channeled through the com-
plex web of derivatives counterparty relations, interbank loans, trade credit chains,
parent-subsidiary relationships, and other links between firms. A traditional propor-
tional hazard formulation ignores spillover effects. It focuses on the dependence of
default timing on observable explanatory variables whose dynamics are assumed to
be unaffected by failure events.
We estimate our extended hazard model using data on economy-wide default ex-
perience in the U.S. for the period January 1987 to December 2008, and a collection of
time-varying explanatory covariates that capture the influence of economic conditions
on failure timing. To address the implications of industrial defaults for bank failures
and vice versa, we develop a two-step maximum likelihood estimation approach. In
contrast to the traditional one-step estimation method, our approach does not treat
the sequence of financial failure events in isolation, but in the context of the sequence
of defaults in the wider economy. It seeks to extract the information contained in in-
dustrial defaults relevant for predicting financial failures, and allows us to capture the
dynamic interaction between the real and financial sectors when measuring systemic
risk.
Rigorous statistical tests demonstrate the in-sample fit of our new hazard model
of correlated failure timing, the significance of the spillover hazard term, and the out-
of-sample predictive power of our fitted measures of systemic risk. For example, the
fitted measures accurately forecast the quantiles of the fraction of failures in the U.S.
financial system during 1998–2009, for each of several confidence levels and forecast
horizons. These tests validate our modeling and estimation approach.
The estimators provide time-series and term-structure perspectives of systemic
risk in the U.S. financial sector, information that is of paramount importance to reg-
ulators and policy makers, and that can be used to implement a macro-prudential
supervision of financial institutions. The estimators indicate that systemic risk in-
creased dramatically during the second half of 2008, and reached unprecedented levels
towards the end of 2008. While the magnitude of economy-wide default risk in 2008
is roughly comparable to the level during the burst of the internet bubble around
2001, we find that the systemic risk during that period is dwarfed by the magnitude
of systemic risk in 2008. During the entire sample period, the failure of a financial
firm is estimated to have a relatively greater impact on systemic risk than the default
of an industrial firm.
The remainder of this introduction discusses the related literature. Section 3.2
introduces our measures of systemic risk and discusses their properties. Section 3.3
develops the statistical methodology. Section 3.4 describes our data, the basic esti-
mation results, and their statistical evaluation. Section 3.5 analyzes the behavior of
systemic risk during 1987–2008, provides risk forecasts for future periods, and evalu-
ates these forecasts. Section 3.6 assesses the impact on systemic risk of the failure of
an industrial firm or a financial institution. Section 3.7 concludes. There are several
appendices.
3.1.1 Related literature
There is a substantial literature on bank failure prediction, which includes Brown &
Dinc (2005), Brown & Dinc (2009), Cole & Wu (2009), Lane et al. (1986), Whalen
(1991), Wheelock & Wilson (2000) and many others. These papers employ traditional
hazard models, in which the timing of bank failures is influenced by observable ex-
planatory covariates, which may be time-varying. They focus on predicting individual
bank failures, and do not directly address the correlation between failures, which is
the driving force behind systemic risk. To incorporate the different sources of this
dependence, we significantly extend the traditional hazard model. Our formulation
assumes that firms are exposed to common, time-varying risk factors. Movements of
these observable factors affect firms across the board, and induce failure clusters. Our
formulation also includes an additional “spillover hazard” term, which seeks to ad-
dress the clustering of failures not due to the variation of observable risk factors. The
spillover hazard depends on the timing of past defaults and the volume of defaulted
debt. It models the influence of past defaults on failure timing, with a role for the
size of a default. This influence can be due to informational spillovers, i.e., Bayesian
learning at defaults about risk factors that are unobserved or missing from the list of
explanatory covariates as in Aharony & Swary (1996), Collin-Dufresne et al. (2009),
Duffie et al. (2009), Giampieri, Davis & Crowder (2005), Giesecke (2004), Koopman,
Lucas & Monteiro (2008) and others. It can also be due to the contagious spread of
distress from one firm to another, as in Azizpour & Giesecke (2008b), Jorion & Zhang
(2007), Lando & Nielsen (2009) and others. Contagion may be channeled through
trade credit or buyer/supplier relationships in the real sector, and derivatives coun-
terparty relations and interbank loans in the financial sector (see Upper & Worms
(2004) and others in this regard). The goal of our dynamic hazard model is to capture
the statistical implications of risk spillovers for failure timing without needing to be
precise a priori about the economic mechanisms behind them.
Our measures of systemic risk are related to alternative measures discussed in the
literature. Adrian & Brunnermeier (2009) propose a family of quantile measures of
systemic risk that are based on the distribution of the change of the market value of
total financial assets of public financial institutions. They estimate these risk mea-
sures from time series of equity returns and balance sheet information using quantile
regressions. Acharya, Pedersen, Philippon & Richardson (2009) develop and esti-
mate expected shortfall measures of systemic risk that are based on the distribution
of market equity returns of financial institutions. Lehar (2005) takes a related struc-
tural perspective, and defines systemic risk in terms of adverse changes in the market
values of several institutions.
We propose quantile measures of systemic risk that are predicated on the condi-
tional distribution of the failure rate in the financial system, and are estimated from
actual failure experience. This is an important difference to Adrian & Brunnermeier
(2009), Acharya et al. (2009) and Lehar (2005), who assess systemic risk in terms of
adverse asset price changes across the financial sector, and use equity price data to
estimate the corresponding risk measures. By tying systemic risk to clustered failures
in the financial sector and focusing on actual failure timing data, our measures tend to
be less susceptible to equity market factors unrelated to systemic risk. Nevertheless,
they incorporate market information through the explanatory variables.
Since they summarize the relevant properties of a conditional distribution, our
risk measures are dynamic, and vary through time with the available information.
They also define a term structure of systemic risk over multiple future periods that
incorporates the dynamics of the explanatory macro-economic and sector-wide vari-
ables. The basic risk measures extend naturally to co-risk measures a la Adrian &
Brunnermeier (2009). A co-risk statistic quantifies the impact on systemic risk of an
adverse event, such as the collapse of an industrial firm or financial institution.
Avesani, Pascual & Li (2006), Bhansali, Gingrich & Longstaff (2008), Chan-Lau
& Gravelle (2005), Huang, Zhou & Zhu (2009) and others estimate alternative risk
measures from market rates of credit derivatives. These measures quantify systemic
risk relative to a risk-neutral pricing measure, and incorporate the risk premia in-
vestors demand for bearing correlated default risk. Our measures are based on actual
failure behavior rather than market prices, and do not reflect risk premia.
Elsinger, Lehar & Summer (2006) develop and estimate a static network model of
the interbank lending market to incorporate spillover phenomena induced by inter-
bank loans when quantifying systemic risk; see also Eisenberg & Noe (2001).
Staum (2009) considers the total premium required to insure all deposits in the
banking system as a measure of systemic risk. A bank’s contribution to this risk
measure is proposed as the bank’s deposit insurance premium.
3.2 Measures of systemic risk
This section discusses our definition of systemic risk, describes a measure to quantify
systemic risk, and examines the basic properties of this measure.
We define systemic risk in the financial sector as the conditional probability of
failure of a sufficiently large fraction of the total population of institutions in the
financial system. This definition targets the scenario of a failure cluster of financial
institutions, potentially as part of a larger cluster of economy-wide defaults. Such
a cluster could be due to a severe macro-economic shock, or a contagious spread of
distress from one institution to another. Financial distress can be propagated through
the informational and contractual relationships within the financial system, or the
relationships between financial institutions and other non-financial firms. Lehman
Brothers is an example of how the collapse of a single institution can induce distress
at multiple other entities.
To provide a quantitative measure of systemic risk, consider the process N count-
ing defaults in the financial system.1 The value Nt represents the number of defaults
in the financial system observed by time t. For a given horizon T , consider the con-
ditional distribution at time t < T of the default rate in the financial system, given
by Dt(T) = (NT − Nt)/Wt, where Wt denotes the number of financial institutions existing at t. This distribution gives the likelihood of failure by T of any fraction of the
population of financial institutions at t. The right tail of this distribution reflects the
magnitude of systemic risk. To measure this magnitude more precisely, we consider
statistics that summarize the information in the tail of the distribution. A standard
statistic is a quantile of the distribution, or value at risk. The value at risk Vt(α, T )
at level α ∈ (0, 1) is the smallest number x ≥ 0 such that the conditional probability
at t that the default rate Dt(T ) during (t, T ] exceeds x is no larger than (1 − α).
Figure 3.1 illustrates the value at risk.
The value at risk Vt(α, T ) of the financial system is intuitive and easily communi-
cated, relying on the popularity of value at risk in the financial industry. There are
other advantages. As indicated by the notation, Vt(α, T ) depends on the conditioning
time t, and thus changes over time as new information is revealed. This leads to a
dynamic risk measure. The value Vt(α, T ) also depends on the risk horizon T . By
varying T for fixed t we obtain a term structure of systemic risk. Further, as shown
in Section 3.6, Vt(α, T ) extends naturally to a co-risk measure that quantifies the
contribution to systemic risk of a particular event, such as the default of a financial
institution.
The quantification of systemic risk need not be predicated on the value at risk.
Our statistical methodology focuses on the entire conditional distribution of Dt(T ),
so our analysis extends to alternative downside risk measures such as the expected
shortfall or average value at risk, which is defined as the conditional mean of Dt(T )
given Dt(T ) ≥ c, where c is some high level, such as Vt(α, T ). While the value at
risk is silent about the magnitude of the failure rate in excess of Vt(α, T ), expected
shortfall provides more detailed information about the severity of large failure clusters.
1We fix a complete probability space (Ω,F , P ) with an information filtration (Ft)t≥0 that satisfies the usual conditions. Here, P denotes the actual (empirical) probability measure.
[Probability density of Dt(T) in percent, marking the quantile Vt(α, T) and the tail mass 1 − α.]
Figure 3.1: Value at risk Vt(α, T ) at level α for a given horizon T .
More generally, our analysis extends to any statistic of the conditional distribution
at t of the system-wide default rate Dt(T ), including the moments and other tail
risk measures. Moreover, our analysis extends to risk measures of the conditional
distribution of the value-weighted default rate, which takes account of the default
volume.
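Both the value at risk and the expected shortfall can be read off an empirical distribution of simulated default rates. A schematic estimator under the definitions above (a sketch, not the likelihood-based estimators developed below):

```python
def var_and_shortfall(rates, alpha):
    """Empirical V_t(alpha, T) and expected shortfall for a sample of
    simulated default rates D_t(T). VaR is the smallest sample value x
    with empirical P(D > x) <= 1 - alpha; the shortfall averages the
    sample values at or above that level."""
    xs = sorted(rates)
    n = len(xs)
    var = next(x for i, x in enumerate(xs) if (n - i - 1) / n <= 1 - alpha)
    tail = [x for x in xs if x >= var]
    return var, sum(tail) / len(tail)
```

For example, a sample of 90 rates at 1%, 9 at 5%, and one at 20% has a 95% value at risk of 5% and an expected shortfall of 6.5%.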
The measures of systemic risk we propose are distinct from the measures discussed
in the literature. The fundamental difference is the underlying distribution. While
we define systemic risk in terms of the distribution of the failure rate in the financial
system, Adrian & Brunnermeier (2009), Acharya et al. (2009) and Lehar (2005) relate
systemic risk to the distribution of the change of the market equity value of financial
institutions. Avesani et al. (2006), Chan-Lau & Gravelle (2005), Huang et al. (2009)
and others define systemic risk in terms of a risk-neutral probability, which reflects
the risk premia investors demand for bearing correlated default risk.
3.3 Statistical methodology
This section develops a likelihood approach to estimating the measures of systemic risk
proposed in Section 3.2. In a first step, we formulate and estimate a new hazard, or
intensity-based, model of economy-wide default timing. In a second step, we extract
the system-wide failure intensity from the economy-wide default intensity. The fitted
system-wide intensity then leads to estimators of our systemic risk measures.
3.3.1 Economy-wide default timing
Consider the process N∗ counting defaults in the economy. The value N∗t is the
number of defaults observed by time t. We suppose that N∗ has hazard rate or
intensity λ∗, which represents the conditional mean default rate in the economy and
is measured in events per year. We assume that the intensity evolves through time
according to the model
λ∗t = exp(β∗X∗t ) +
∫ t
0
e−κ(t−s)dJs (3.1)
where X∗ is a vector of time-varying explanatory covariates specified in Section 3.4.2,
β∗ is a vector of constant parameters, κ is a strictly positive parameter, and
Jt = ν1 + · · · + νN∗t        (3.2)
where νn = γ + δ max(0, logD∗n). Here, γ and δ are non-negative parameters, and D∗n
is the default volume, i.e. the total amount of debt outstanding at default of the n-th
defaulter, measured in million dollars.2
The intensity (3.1) is the sum of two terms. The first term, termed baseline hazard,
takes a standard Cox proportional hazards form. It models the influence on default
arrivals of explanatory covariates X∗, and captures the clustering of defaults due to
the exposure of different firms to variations in X∗. The dynamics of the variables X∗
are not affected by default arrivals. The proportional hazards formulation is used by
2We assume that each variable max(0, logD∗n) has finite mean, and that each component of X∗t is finite almost surely. Under these conditions, the process N∗ is non-explosive.
Duffie et al. (2007), McDonald & Van de Gucht (1999) and many others to predict
industrial defaults, and by Brown & Dinc (2005), Brown & Dinc (2009), Cole & Wu
(2009), Lane et al. (1986), Whalen (1991), Wheelock & Wilson (2000) and others to
predict bank failures.
The second term, called spillover hazard, is not present in the traditional propor-
tional hazards formulation. It models the influence of past defaults on current default
rates, which is not captured by the baseline hazard term. At an event, the default
rate jumps, with magnitude given by γ plus δ times the positive part of the logarithm
of the defaulter’s total outstanding debt, which is a proxy of the defaulter’s firm
size.3 Thus, the bigger a defaulter the greater the impact of the event, with minimum
impact governed by γ. After an event, the intensity decays to the baseline hazard,
exponentially at rate κ.
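Both terms of (3.1)–(3.2) are available in closed form given the event history, so the intensity path can be evaluated directly. A sketch with a scalar covariate standing in for the vector X∗ (an illustrative simplification):

```python
import math

def spillover_hazard(t, event_times, event_debts, gamma, delta, kappa):
    """Second term of (3.1): each past default at time s with debt D
    contributes a jump nu = gamma + delta * max(0, log D), decaying
    exponentially at rate kappa thereafter."""
    total = 0.0
    for s, debt in zip(event_times, event_debts):
        if s <= t:
            nu = gamma + delta * max(0.0, math.log(debt))
            total += math.exp(-kappa * (t - s)) * nu
    return total

def intensity(t, x_t, beta, event_times, event_debts, gamma, delta, kappa):
    """Economy-wide intensity (3.1): proportional-hazards baseline
    exp(beta * X*_t) plus the self-exciting spillover term."""
    base = math.exp(beta * x_t)
    return base + spillover_hazard(t, event_times, event_debts, gamma, delta, kappa)
```

At an event time the spillover term jumps by νn and then decays back toward the baseline at rate κ, reproducing the dynamics described above.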
The inclusion of the spillover hazard term is motivated by the results of the em-
pirical analyses of Aharony & Swary (1996), Azizpour & Giesecke (2008b), Collin-
Dufresne et al. (2009), Das et al. (2007b), Duffie et al. (2009), Lando & Nielsen (2009)
and others. For U.S. corporate defaults, these papers found evidence of the presence
of contagion or unobservable or missing explanatory covariates, called frailties. With
contagion, a default increases the likelihood of additional defaults, a process that may
be channeled through trade credit or buyer/supplier relationships in the real sector,
and derivatives counterparty relations and interbank loans in the financial sector.
With frailty, Bayesian updating of the conditional distribution of the unobserved
explanatory variables leads to a jump of the econometrician’s default rate.
The spillover hazard term in (3.1) seeks to capture the statistical implications of
contagion and frailty for failure timing. In particular, it is designed to replicate the
excess default clustering not caused by the variation of the observable covariates X∗
defining the baseline hazard. Owing to the reduced-form nature of the specification,
this formulation does not distinguish the potential sources of the excess clustering.
The inference problem for the default timing model (3.1)–(3.2) is addressed as
follows. Letting θ = (β∗, κ, γ, δ) be the set of parameters of the intensity λ∗ = λ∗(θ),
3For the purposes of our analysis, we found the total amount of debt outstanding at default to be a better measure of firm size than market capitalization, which was used by Shumway (2001) and others to predict non-financial corporate default.
Θ be the set of admissible parameters, and [0, t] be the sample period, we solve the
log-likelihood problem
supθ∈Θ ∫₀ᵗ ( log λ∗s−(θ) dN∗s − λ∗s(θ) ds ).        (3.3)
The calculation of the likelihood function is based on a standard measure change
argument. Since the dynamics of the covariates X∗ are assumed to be unaffected by
default arrivals, the likelihood problem for X∗ can be treated separately from that for
λ∗. Given a trajectory of X∗, the log-likelihood function takes a closed form, allowing
for computational tractability of estimation. Under technical conditions stated in
Ogata (1978b), the maximum likelihood estimator of θ is asymptotically normal and
efficient.
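For a fixed parameter vector θ, the objective in (3.3) is a sum of log-intensities over the observed event times minus the integrated intensity (the compensator). A schematic evaluation, with the integral approximated by the trapezoidal rule; `lam` is a hypothetical callable standing in for λ∗(θ), and for the self-exciting model it should return the left limit λ∗s− at event times:

```python
import math

def point_process_loglik(lam, event_times, horizon, n_grid=1000):
    """Log-likelihood (3.3): sum of log lam(T_n) over observed events
    minus a trapezoidal approximation of the integral of lam over
    [0, horizon]."""
    log_term = sum(math.log(lam(t)) for t in event_times)
    h = horizon / n_grid
    grid = [i * h for i in range(n_grid + 1)]
    integral = sum(0.5 * (lam(a) + lam(b)) * h
                   for a, b in zip(grid[:-1], grid[1:]))
    return log_term - integral
```

A constant intensity of 2 over [0, 2] with two observed events gives 2 log 2 − 4, matching the closed form for a homogeneous Poisson process.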
We have experimented with several alternative model formulations, including a
proportional hazards model in which average spillover effects are captured by a co-
variate given by the trailing 1-year default rate, as in Duffie et al. (2009). We have
also tested alternative specifications of the impact variables νn in (3.2). However,
based on the in- and out-of-sample tests described in Section 3.4.3 below, we found
these alternatives to be statistically inferior to the model (3.1)–(3.2).
3.3.2 System-wide default timing
Next we extract from the fitted economy-wide model λ∗ the dynamics of system-wide
defaults, i.e., failures in the financial system. This is based on the following result.
Proposition 3.3.1. There is a (predictable) process Z taking values in the unit in-
terval, such that the intensity λ of system-wide failures is given by λ = λ∗Z.
Proof. The system-wide failure times form a subsequence of the economy-wide default
times. The existence and uniqueness of Z follows from the Radon-Nikodym theorem
applied to the random measures associated with the time-integrals of the intensities
λ∗ and λ. The predictability of Z follows from the predictability of the processes
generated by these time-integrals.
The value Zt is the conditional probability at t that a firm in the financial system
defaults next, given a default in the economy in the next instant. For a precise
statement, see Proposition 3.1 in Giesecke et al. (2009). We formulate and estimate
a parametric model of Z, which then leads to λ via Proposition 3.3.1.
We use probit regression to estimate the process Z from the observed economy-
and system-wide default counting processes N∗ and N , respectively. Letting Yn be
a binary response variable equal to one if the n-th defaulter belongs to the financial
system and 0 otherwise, we obtain a value Yn for each economy-wide default time T ∗n
in the sample. Each Yn is a Bernoulli variable with success probability ZT∗n, where4

    Zt = Zt(β) = Φ(βXt−)    (3.4)
and where Φ is the cumulative distribution function of a standard normal variable,
Xt is a vector of time-varying explanatory covariates specified in Section 3.4.2, and
β is a vector of constant parameters. Given observations (Yn), n = 1, . . . , N∗t, and
(Xs), s ≤ t, during the sample period [0, t], we estimate β by solving the log-likelihood
problem

    sup_{β∈Σ} Σ_{n=1}^{N∗t} [ Yn log(ZT∗n(β)) + (1 − Yn) log(1 − ZT∗n(β)) ]    (3.5)
where Σ is the set of admissible parameters. The maximum likelihood estimator of
β is consistent, asymptotically normal and efficient if the covariance matrix of the
vector of regressors exists and is non-singular. See McCullagh & Nelder (1989) for
details. It can also be shown that the log-likelihood function is globally concave in
β, and therefore a standard numerical optimization routine converges quickly to the
unique maximum.
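The probit estimation step can be sketched as follows. This is a minimal illustration in Python rather than the Matlab implementation used in the dissertation; the covariates, sample size, and coefficient values are synthetic, not those of our sample.

```python
# Minimal sketch of the probit MLE in (3.4)-(3.5); all data are synthetic.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, k = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # covariates at default times
beta_true = np.array([-1.5, 0.4, -0.6])                         # illustrative parameters
Y = (rng.uniform(size=n) < norm.cdf(X @ beta_true)).astype(float)  # binary responses

def neg_loglik(beta):
    # negative of the log-likelihood in (3.5), with Z = Phi(X beta)
    z = norm.cdf(X @ beta).clip(1e-10, 1 - 1e-10)
    return -np.sum(Y * np.log(z) + (1 - Y) * np.log(1 - z))

# global concavity of the log-likelihood makes a standard optimizer reliable
beta_hat = minimize(neg_loglik, np.zeros(k), method="BFGS").x
```

Because the log-likelihood is globally concave in β, the starting point is immaterial and the routine converges to the unique maximizer.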
The two-step approach to estimating λ has a significant advantage over an al-
ternative one-step approach in which λ would be estimated directly based on the
historical default experience in the financial system. The two-step approach allows
us to extract the information contained in the observed default times of non-financial
4We experimented with several alternative link functions, including a logit model. All these alternatives were found to be statistically inferior to the probit model.
firms, which otherwise would not be utilized in the estimation process. Financial
firms are intertwined with the real sector, so defaults in that sector clearly have an
influence on financial firms, and vice versa. Our estimation approach seeks to capture
this influence. It responds to an argument made by Schwarcz (2008) and many oth-
ers that systemic risk measures should account for the relationship between financial
institutions and industrial firms.
The two-step approach has another, statistical advantage. Failures in the financial
system are relatively rare. The number of economy-wide defaults is much larger,
leading to a greater sample size and more accurate inference.5
3.3.3 Measures of risk
The intensity λ = λ∗Z governs the dynamics of the system-wide default process
N , and hence the measures of systemic risk introduced in Section 3.2. Given the
fitted models of λ∗ and Z, we estimate the entire conditional distribution at t of the
system-wide default rate Dt(T ) by exact Monte Carlo simulation of default times
during (t, T ].6 From the conditional distribution we obtain unbiased estimates of the
value at risk Vt(α, T ) or any other risk measure based on the distribution of Dt(T ) or
related quantities, including the value-weighted default rate.
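Given simulated default-rate paths, the value at risk is simply an empirical quantile of the simulated distribution. A hedged sketch follows: the simulated default counts below are placeholder Poisson draws, not output of our fitted model, and the system size is hypothetical.

```python
# Sketch: VaR of the default rate as an empirical quantile of simulated outcomes.
import numpy as np

rng = np.random.default_rng(1)
n_firms = 600                                      # hypothetical number of financial firms
sim_defaults = rng.poisson(lam=4.0, size=100_000)  # placeholder for simulated 6-month counts
sim_rates = sim_defaults / n_firms                 # system-wide default rates D_t(T)

def value_at_risk(rates, alpha):
    # smallest level v such that D_t(T) <= v with conditional probability alpha
    return np.quantile(rates, alpha)

var95 = value_at_risk(sim_rates, 0.95)
var99 = value_at_risk(sim_rates, 0.99)
```

Since the estimate is an empirical quantile of exact simulation output, it is unbiased in the sense described in the text, up to Monte Carlo error.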
The risk measure estimates take account of the idiosyncratic and clustered default
risk of financial institutions. They capture several sources of default clustering. Clus-
tering can occur due to the spread of distress within the financial sector, or between
the industrial and financial sectors. It can also occur due to the exposure of institu-
tions to the common risk factors represented by the covariate vector X∗. The risk
measure estimates reflect the time-variation of X∗ and the cross-sectional variation of
the default volume D∗n. As detailed in Appendices D and E, this is based on a vector
autoregressive time-series model of the covariates, and a generalized Pareto model of
the default volume. The importance for industrial default prediction of incorporating
the time-series dynamics of explanatory covariates was emphasized by Duffie et al.
5For our sample period 1987-2008, the number of system-wide failures is 83 while the number of economy-wide defaults is 1193.
6The simulation is based on an acceptance/rejection scheme. Details are available upon request.
(2007).
3.4 Empirical analysis
This section describes the default timing data, the data on explanatory covariates,
our basic estimation results, and their statistical evaluation.
3.4.1 Default timing data
Our sample period is 1/1/1987 to 12/31/2008.7 Data on U.S. corporate default timing
were obtained from Moody’s Default Risk Service. For our purposes, a “default”
is a credit event in any of the following Moody’s default categories: (1) A missed
or delayed disbursement of interest or principal, including delayed payments made
within a grace period; (2) Bankruptcy (Section 77, Chapter 10, Chapter 11, Chapter
7, Prepackaged Chapter 11), administration, legal receivership, or other legal blocks
to the timely payment of interest or principal; (3) A distressed exchange occurs where:
(i) the issuer offers debt holders a new security or package of securities that amount
to a diminished financial obligation; or (ii) the exchange had the apparent purpose of
helping the borrower avoid default. A repeated default by the same issuer is included
in the set of events if it was not within a year of the initial event and the issuer’s rating
was raised above Caa after the initial default. This treatment of repeated defaults is
consistent with that of Moody’s. This leaves us with 1193 economy-wide defaults.
For the purpose of analyzing systemic risk, we take the U.S. financial system to
be the set of firms classified in Moody’s industry category “Banking” or “FIRE”
(Finance, Insurance and Real Estate).8 This set includes commercial and invest-
ment banks, bank holding companies, credit unions, thrifts, investment management,
trading, leasing, mortgage and securities firms, financial guarantors, insurance and
7This period was determined by the availability of data for the covariates specified in Section 3.4.2. Default data for the period 1/1/2009 to 6/30/2009 were used for the out-of-sample analysis.
8Moody’s uses several industry classifications. Our analysis is based on the “Moody’s 11” scheme, which specifies 11 industries: 1. Banking, 2. Capital Industries, 3. Consumer Industries, 4. Energy and Environment, 5. FIRE, 6. Media and Publishing, 7. Retail and Distribution, 8. Sovereign and Public Finance, 9. Technology, 10. Transportation, and 11. Utilities.
Figure 3.2: Default timing and volume data. Left panel: 1-year economy-wide default rate in the universe of Moody's rated issuers. Right panel: 1-year system-wide default rate. The defaults of Lehman Brothers and Washington Mutual contributed to over 80% of the system-wide default volume in 2008. Source: Moody's Default Risk Service.
insurance brokerage firms, and REITs and REOCs. Figure 3.2 shows the 1-year
economy- and system-wide default rates during the sample period, along with default
volume information obtained from Moody’s Default Risk Service.9
3.4.2 Covariates
We examine the influence on systemic risk of two types of macro-economic and sector-
wide variables, which are measured monthly. These include:
(i) The trailing 1-year return on the S&P500 index, obtained from Economagic.
Duffie et al. (2007) found this variable to be a significant predictor of industrial
defaults.
(ii) The 1-year lagged slope of the yield curve, computed as the spread between
10-year and 3-month Treasury constant maturity rates, as a forward-looking
9As explained by Hamilton (2005), the volume reported by Moody's excludes debt obligations that do not reflect the fundamental default risk of the obligor, such as structured finance transactions, short-term debt (e.g., commercial paper), secured lease obligations, and so forth.
Figure 3.3: Time-series of explanatory covariates. Left panel: The 1-year lagged slope of the yield curve and the default spread, given by the difference between Moody's seasoned Baa-rated and Aaa-rated corporate bond yields. Right panel: The trailing 1-year returns on the S&P500 index and the banking and FIRE portfolios.
indicator of real economic activity. Estrella & Trubin (2006) found this variable
to have strong predictive power for future recessions. We obtain the H.15 release
of Treasury rates from the website of the Federal Reserve Bank of New York.
(iii) The default spread, defined as the yield differential between Moody's seasoned
Baa-rated and Aaa-rated corporate bonds. Chen, Collin-Dufresne & Goldstein
(2008) argue that the default spread is a measure of aggregate credit risk that is
largely unaffected by bond market frictions such as taxes and liquidity. The data
are obtained from the website of the Federal Reserve Bank of New York. The
left panel of Figure 3.3 shows the time series of the default spread and the slope
of the yield curve.
(iv) The TED (Treasury-Eurodollar) spread, defined as the difference between the 3-
month LIBOR and 3-month Treasury rates, as an indicator of credit risk in the
financial system.10 We obtain the historical LIBOR rates from Economagic.
10An increase of the TED spread is a sign that lenders believe that the risk of default on interbank loans is increasing. In that case, lenders demand a higher rate of interest, or accept lower returns on risk-free Treasuries. The 3-month LIBOR-OIS (overnight index swap) spread is a similar indicator.
Figure 3.4: TED spreads during the sample period, along with significant events: Black Monday; the Gulf War and the collapse of the Japanese asset bubble; the economic crisis in Mexico; the LTCM crisis; the burst of the internet bubble; 9/11; Ford/GM; Bear Stearns; the subprime crisis; and Lehman Brothers.
Figure 3.4 shows the TED spread during the sample period, with significant
events indicated.
(v) The trailing 1-year returns on banking and FIRE portfolios, as a proxy for
business cycle activity in the financial system. The data were obtained from
the website of Kenneth French.11 The right panel of Figure 3.3 shows the return
series.
(vi) The default ratio (Nt − Nt−h)/(N∗t − N∗t−h + 1), which for fixed h > 0 relates
the number of failures in the financial system during (t − h, t] to one plus the
number of economy-wide defaults during that period. It increases at a failure
in the financial system, and decreases at a default of a non-financial firm.
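As a concrete illustration of item (vi), the default ratio is a simple function of the two counting processes; the function and argument names below are hypothetical, chosen only for readability.

```python
def default_ratio(n_sys_t, n_sys_tmh, n_econ_t, n_econ_tmh):
    # failures in the financial system over (t - h, t], relative to one plus
    # the number of economy-wide defaults over the same window
    return (n_sys_t - n_sys_tmh) / (n_econ_t - n_econ_tmh + 1)
```

For example, 2 system-wide failures against 10 economy-wide defaults over the window gives a ratio of 2/11; the "plus one" in the denominator keeps the covariate well-defined when no economy-wide defaults occur.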
We have also considered, and rejected for lack of significance in the presence of
the above variables, a number of additional covariates, including the 3-month, 1-year,
10-year, and 30-year Treasury rates, the spread between Moody's Baa rate and the
10-year Treasury rate, the monthly VIX, and the 3-month LIBOR rate.
11http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html
3.4.3 Economy-wide intensity
We start by addressing the likelihood problem (3.3) for the economy-wide intensity
(3.1), taking the covariate vector X∗ to include a constant, the trailing return on
the S&P500, the lagged slope of the yield curve, and the default spread. We have
also considered, but rejected for lack of significance in the presence of these variables,
the other covariates discussed in Section 3.4.2. The other covariates are used for the
estimation of the process Z in Section 3.4.5 below.
Table 3.1 reports the parameter estimates, along with estimates of asymptotic
standard errors.12 The intensity is increasing in the default spread, and decreasing
in the trailing return on the S&P 500 and the lagged slope of the yield curve. The
jump of the intensity at a default, measured in events per year, is estimated to be
2.3 plus roughly one half of the logarithm of the default volume, measured in million
dollars. The impact of an event fades away exponentially with time: the fitted half
life is log(2)/6.0592 = 0.1144 years.
To develop some insight into the relative statistical importance of the baseline and
spillover hazard terms for model fit, we take a Bayesian perspective, following Duffie
et al. (2009), Eraker, Johannes & Polson (2003) and others. Specifically, we consider
the Bayes factor, given by the ratio of the likelihood of a benchmark model to the
likelihood of an alternative model, both evaluated at their respective estimators. The
test statistic Ψ is given by twice the natural logarithm of the Bayes factor. According
to Kass & Raftery (1995), a value for Ψ between 2 and 6 provides positive evidence,
a value between 6 and 10 strong evidence, and a value larger than 10 provides very
strong evidence in favor of the benchmark model. Due to the marginal nature of
the likelihoods used for computing Ψ, this criterion does not necessarily favor more
complex models.
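The Bayes factor comparison reduces to a difference of maximized log-likelihoods. A small sketch follows; the numerical log-likelihood inputs used in the test are illustrative, not the dissertation's values.

```python
# Psi = 2 * log(Bayes factor) = twice the difference of maximized log-likelihoods.
import math

def psi_statistic(loglik_benchmark, loglik_alternative):
    return 2.0 * (loglik_benchmark - loglik_alternative)

def kass_raftery_evidence(psi):
    # evidence categories of Kass & Raftery (1995) cited in the text
    if psi > 10:
        return "very strong"
    if psi > 6:
        return "strong"
    if psi > 2:
        return "positive"
    return "weak"
```

For instance, a benchmark whose maximized log-likelihood exceeds the alternative's by about 106.7 yields Psi of approximately 213.4, falling in the "very strong" category.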
We first test our model against an alternative specification that does not include
a spillover hazard term (i.e., a traditional proportional hazards model). When the
covariate set of the alternative model includes a constant, the trailing return on the
12The parameter space Θ = (−5, 5)^4 × (0, 15) × (0, 5)^2. The fmincon routine of Matlab was used to search for the optimal parameter set. We performed a search for each of 10 randomly chosen initial parameter sets. Each of these searches converged to the values reported in Table 3.1.
                     Baseline Hazard                     Spillover Hazard
          Constant   S&P500   Yield Slope   Baa-Aaa      κ        γ        δ
MLE         2.3026  −0.4410       −0.2140    0.5092   6.0592   2.3205   0.4781
SE          0.0605   0.0524        0.0336    0.0534   0.1108   0.0811   0.0233
t-stat       38.04    −8.42         −6.37      9.53    54.71    28.60    20.56
Ψ                    0.1298        3.0987    1.8310       213.4039 (joint)
Ψ for the joint baseline hazard term: 26.5308

Table 3.1: Maximum likelihood estimates (MLE) of economy-wide intensity parameters, asymptotic standard errors (SE), t-statistics (t-stat), and Bayes factor statistics (Ψ).
S&P 500, the lagged slope of the yield curve, and the default spread, then the outcome
of Ψ is 213.4, providing extremely strong evidence in favor of including the spillover
hazard term. When the alternative model is based on an unconstrained covariate
set that includes, in addition to the variables just mentioned, the TED spread, the
trailing 1-year returns of banking, financial, insurance and real-estate portfolios,13
then the outcome of Ψ is 131.4, still providing strong evidence in favor of including
the spillover hazard term. Testing our model against one that does not include the
baseline hazard term, the outcome of Ψ is 26.5, providing very strong evidence in
favor of including the baseline hazard term.
The left panel of Figure 3.5 shows the fitted economy-wide intensity against the
number of economy-wide defaults. The fitted intensity tracks the observed arrivals
well. The right panel of Figure 3.5 graphs the decomposition of the fitted intensity
into baseline and spillover hazards. The time series behavior of the components is
similar. However, during periods of higher than average default activity, the spillover
hazard represents a relatively larger fraction of the total default hazard than the
baseline hazard.
13The parameter estimates are as follows (SE in parentheses): Constant 4.1597 (0.1443), S&P 500 −2.1542 (0.3036), Yield Slope −0.2346 (0.0272), Baa-Aaa 0.4612 (0.1370), TED −0.8716 (0.3040), Banking 0.5059 (0.2606), Financial 1.5882 (0.3551), Insurance −0.7845 (0.1548), Real Estate −0.6032 (0.1054). The default ratio was found to be insignificant in the presence of these covariates.
Figure 3.5: Fitted economy-wide intensity λ∗. Left panel: Yearly defaults and fitted intensity. Right panel: Intensity decomposition: fitted baseline hazard vs. fitted spillover hazard.
3.4.4 Goodness-of-fit tests
We test the fit of the economy-wide intensity model λ∗ to the historical default tim-
ing data. The tests are based on a result of Meyer (1971), which implies that the
default arrivals follow a standard Poisson process under a change of time given by
the cumulative intensity λ∗. Thus, if λ∗ is correctly specified, then the time-scaled
inter-arrival times are independent standard exponential variables.
The properties of the time-scaled arrival times can be analyzed with a battery of
alternative tests. We use a family of tests of the binned arrival time data, following
Das et al. (2007b). For given bin size c, we denote by Un the number of observed
events in the n-th successive time interval lasting for c units of transformed time.
With a total of K bins, the null hypothesis is that the U1, . . . , UK are independent
Poisson variables with mean c. We consider bin sizes c = 2, 4, 6, 8 and 10.
We start with Fisher's dispersion test. Under the null, W = Σ_{n=1}^{K} (Un − c)²/c has
a chi-squared distribution with K − 1 degrees of freedom. Table 3.2a indicates that
there is no evidence against the null for bin sizes 4 through 10, at standard confidence
levels.
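The time change and the dispersion test can be sketched as follows. For illustration, the arrivals below are drawn directly from a unit-rate Poisson process, so the null holds by construction; with real data one would first rescale the observed default times by the fitted cumulative intensity.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
# stand-in for time-rescaled default times; under a correctly specified
# intensity these form a standard (unit-rate) Poisson process
transformed_times = np.cumsum(rng.exponential(size=1200))

def fisher_dispersion(times, c):
    K = int(times[-1] // c)                        # number of complete bins
    counts = np.histogram(times, bins=np.arange(0, (K + 1) * c, c))[0]
    W = np.sum((counts - c) ** 2) / c              # chi-squared, K - 1 df, under the null
    return W, chi2.sf(W, K - 1)

W, p = fisher_dispersion(transformed_times, c=4)
```

A small p-value signals over- or under-dispersion of the bin counts relative to the Poisson null.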
Bin Size   Number of Bins   χ² Statistic   p-Value
    2           596             838.50      0.0000
    4           298             332.75      0.0751
    6           198             207.17      0.2956
    8           149             167.38      0.1316
   10           119             125.70      0.2967

(a) Fisher's Dispersion Test

                  Mean of Tails                     Median of Tails
Bin Size    Data      Simulation   p-Value     Data      Simulation   p-Value
    2       3.9694      3.6740      0.0000     4.0000      3.0524      0.0000
    4       6.1739      6.1575      0.3092     6.0000      5.9956      0.0476
    6       8.5676      8.8643      0.6254     8.0000      8.5337      0.5419
    8      11.6667     11.3794      0.2190    11.0000     10.9284      0.0916
   10      14.0313     13.7454      0.2459    13.5000     13.2444      0.2899
  All         -           -         0.2799       -           -         0.1942

(b) Mean and Median of Default Upper Quartile Tail Test

Bin Size   Number of Bins    A (t-stat)            B (t-stat)            R²
    2           596           2.3634∗ (3.4556)     −0.1847∗ (−4.5767)   0.0341
    4           298           4.0348 (0.1321)      −0.0121 (−0.2074)    0.0001
    6           198           6.1971 (0.4250)      −0.0372 (−0.5203)    0.0014
    8           149           8.8132 (1.1613)      −0.1074 (−1.3032)    0.1115
   10           119          10.3584 (0.3650)      −0.0378 (−0.4018)    0.0014

(c) Excess Default Autocorrelation Test (t-statistics for A are for the test A = c; asterisks indicate significance at the 5% level.)

Table 3.2: Goodness-of-fit tests of the economy-wide intensity.
To examine the extent to which our intensity model captures the clustering of
defaults, we perform an upper tail test developed by Das et al. (2007b). We generate
10,000 data sets by Monte Carlo simulation, each consisting of K iid Poisson random
variables with mean c. The p-value of the test is the fraction of the simulated data
sets whose sample upper-quantile mean (or median) is above the actual sample mean
(or median). The p-values reported in Table 3.2b suggest that there is no significant
deviation of the upper-quartile tails from the theoretical Poisson tails for bin sizes 4
through 10, at standard confidence levels. Furthermore, the null hypothesis cannot
be rejected by the joint test across all bin sizes, at conventional confidence levels.
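A sketch of this simulation-based tail test follows. The bin counts here are synthetic null data, and the number of replications mirrors the 10,000 used in the text.

```python
import numpy as np

rng = np.random.default_rng(3)

def upper_tail_p_value(counts, c, n_sim=10_000, stat=np.mean):
    K = len(counts)
    observed = stat(counts[counts >= np.quantile(counts, 0.75)])
    sims = rng.poisson(c, size=(n_sim, K))
    sim_stats = np.array([stat(s[s >= np.quantile(s, 0.75)]) for s in sims])
    # fraction of simulated data sets whose upper-quartile statistic
    # exceeds the observed one
    return float(np.mean(sim_stats > observed))

counts = rng.poisson(4, size=300)   # synthetic bin counts under the null
p_mean = upper_tail_p_value(counts, c=4)
p_median = upper_tail_p_value(counts, c=4, stat=np.median)
```

A very small p-value indicates that the observed upper-quartile tail is fatter than the theoretical Poisson tail, i.e., clustering that the fitted intensity fails to capture.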
Finally we test for serial dependence of the Uk. To this end, we estimate an
autoregressive model, given by Uk = A+BUk−1 + εk for coefficients A and B. Under
the null, A = c, B = 0, and the εk are independent, demeaned Poisson random
variables. Table 3.2c shows that the fitted coefficients are not significantly different
from their theoretical values for bin sizes 4 through 10, at standard confidence levels.
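The autoregression check can be sketched by ordinary least squares on synthetic independent Poisson counts, so that the null A = c, B = 0 holds by design; with real data the Uk would be the binned, time-rescaled arrival counts.

```python
import numpy as np

rng = np.random.default_rng(4)
c = 6
U = rng.poisson(c, size=200)     # synthetic bin counts; independent under the null

# least-squares fit of U_k = A + B * U_{k-1} + eps_k
X = np.column_stack([np.ones(len(U) - 1), U[:-1]])
A, B = np.linalg.lstsq(X, U[1:], rcond=None)[0]
```

A fitted B significantly different from zero would indicate serial dependence of the bin counts, i.e., clustering left unexplained by the intensity.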
The results of these tests suggest that the fitted λ∗ time-scales most arrival times
correctly, indicating a good overall fit of our default timing model (3.1). Additional
experiments suggest that the rejections of the null for bin size 2 are due to events
arriving in very short time intervals. On the time scale of the sample period, which
stretches over 21 years, these are almost simultaneous arrivals. It is difficult to match,
at the same time, the few extremely short inter-arrival times, and the many longer
inter-arrival times that constitute the vast majority of the sample.
3.4.5 System-wide intensity
Next we address the likelihood problem (3.5) for the process Z in (3.4). The value
Zt represents the conditional probability at t that the next defaulter is a financial
firm, given that there is a default in the economy in the next instant. We take the
covariate vector X to include a constant, the 1-year lagged slope of the yield curve,
the TED spread, the trailing 1-year returns of banking and real-estate portfolios, and
the default ratio for h = 1/12.14
Table 3.3 provides the estimates of the coefficient vector β, along with asymptotic
standard errors and t-statistics. A likelihood ratio test indicates that the covariates
are informative. The coefficient linking the trailing 1-year return of the banking
portfolio to the probability Zt is positive, a sign that is unexpected under univariate
reasoning. With multiple covariates, however, the positive sign need not be evidence
that a good year in the banking sector foreshadows a higher fraction of bank defaults.
The time-series behavior of the fitted process Z, shown in the left panel of Figure
3.6, indicates the dramatic increase during the second half of 2008 of the number of
defaults in the financial sector relative to the total number of events in the economy.
14We experimented with different window sizes h, but found h = 1/12 to work best. This window size is consistent with the frequency of the observations of the other covariates.
Covariate        Coefficient      SE      t-statistic   p-value        Ψ
Constant           −2.0873     0.1484      −14.0659      0.0000
Yield Slope         0.1256     0.0585        2.1469      0.0318    4.6502
TED Spread          0.3710     0.1506        2.4632      0.0138    5.8223
Banking             0.8952     0.3462        2.5856      0.0097    6.6832
Real Estate        −0.8073     0.2973       −2.7218      0.0065    7.4439
Default Ratio       1.4171     0.4351        3.2572      0.0011   10.1015
Model fit: LR statistic (χ²) = 36.8117, p-value < 0.0001

Table 3.3: Maximum likelihood estimates of the coefficients β of the covariate process X governing the thinning process Z in (3.4), asymptotic standard errors (SE), t-statistics, p-values, and Bayes factor statistics (Ψ).
To measure how accurately the fitted model of Z distinguishes between economy-
and system-wide events out-of-sample, we construct a power curve, shown in the right
panel of Figure 3.6. The diagonal line represents an uninformative model that sorts
events randomly. The larger the area under the curve (AUC), the more accurate the
model predictions. For our model, the AUC is 0.7076, with 95% confidence interval
given by [0.6433, 0.7719]. The standardized AUC is 6.3283, implying that the area is
statistically greater than 0.5 with p-value less than 0.0001.
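The AUC has a rank-sum interpretation: it equals the probability that a randomly chosen system-wide event receives a higher fitted Z than a randomly chosen non-system-wide event. A self-contained sketch, with hypothetical scores and labels:

```python
import numpy as np

def auc(scores, labels):
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # count positive/negative pairs where the positive outscores the
    # negative; ties count one half
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 0, 0, 1]              # 1 = system-wide event (hypothetical)
scores = [0.9, 0.2, 0.6, 0.4, 0.3, 0.4]  # fitted Z at each event (hypothetical)
area = auc(scores, labels)
```

An area of 0.5 corresponds to the uninformative diagonal; values approaching 1 indicate increasingly accurate discrimination.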
3.5 Systemic risk
This section analyzes the behavior of systemic risk during the sample period, provides
risk forecasts for future periods, and evaluates these forecasts.
3.5.1 Risk measures
We start by examining the fitted system-wide intensity λt, which measures the level
of instantaneous systemic risk prevailing at time t. It is calculated as the product of
the economy-wide intensity λ∗t and the thinning variable Zt, as explained in Section
3.3. The time-series behavior of λt, shown in the left panel of Figure 3.7, indicates
that the level of instantaneous systemic risk reached unprecedented levels during the
fall of 2008. The right panel of Figure 3.7 shows the fitted fraction of λt tied to
Figure 3.6: Left panel: Observed binary response variables Yn and fitted process Z. Right panel: Power curve for the fitted process Z.
the spillover hazard term, calculated as the fitted ratio of the spillover hazard to the
total default intensity λ∗t . The estimators provide strong evidence for the presence of
failure clustering not caused by variations in the observable explanatory covariates.
The fraction of systemic risk tied to the spillover term can be substantial, and tends
to be higher in periods of adverse economic conditions. Moreover, financial firms
tend to fail when the fitted contribution of spillovers to instantaneous systemic risk
is relatively large.
Next we estimate the conditional distribution at time t of the default rate in the
financial system during the period (t, t + ∆], for given ∆. As indicated in Section
3.3.3, this is done by exact Monte Carlo simulation.15 The estimation is based on
the models for λ∗ and Z, fitted with data observed from 1/1/1987 to t. As explained
above, the estimation takes account of the time-variation of the covariates during the
forecast period, and the cross-sectional variation of the default volume.
Figure 3.8 shows the conditional distribution of the system-wide default rate
Dt(t + 0.5) for conditioning times t varying semi-annually between 12/31/1997 and
12/31/2008, for a 6-month horizon. As explained in Section 3.2, the system-wide
15The estimates are based on 100,000 Monte Carlo replications.
Figure 3.7: Left panel: Fitted system-wide failure intensity λ, based on the parameter estimates reported in Tables 3.1 and 3.3. Right panel: Fitted fraction of λ tied to the spillover hazard term, with default events indicated.
default rate is obtained by normalizing the number of system-wide defaults during
(t, t + 0.5] by the number of firms in the financial system at t. The right tail of the
distribution indicates the magnitude of systemic risk. The fatter the tail, the greater
the likelihood that a large fraction of the financial system fails. The time series be-
havior suggests that systemic risk has increased very sharply during the second half
of 2008.
We contrast the system-wide distribution with the distribution of the economy-
wide default rate, shown in Figure 3.9. While the increase of aggregate default risk
in the second half of 2008 is clearly visible, the magnitude of risk is only somewhat
greater than that during the internet bubble. This is in stark contrast to the behavior
of systemic risk shown in Figure 3.8: the systemic risk during the burst of the internet
bubble is dwarfed by the systemic risk prevailing at the end of 2008.
Next we consider the value at risk Vt(α, t + ∆) of the system-wide default rate.
For a given level of confidence α ∈ (0, 1), the conditional probability at time t that
the system-wide default rate Dt(t+∆) exceeds Vt(α, t+∆) is 1−α. For conditioning
times t varying semi-annually between 12/31/1997 and 12/31/2008, the left panel of
Figure 3.10 shows Vt(α, t+ 0.5), along with realized default rates. The right panel of
Figure 3.8: Fitted conditional distribution (kernel-smoothed) of the system-wide 6-month default rate Dt(t+0.5) for conditioning times t varying semi-annually between 12/31/1997 and 12/31/2008.
Figure 3.10 plots the value at risk for economy-wide defaults, defined similarly.
The value at risk defines a term structure of systemic risk. To illustrate this, the
left panel of Figure 3.11 plots Vt(α, t + ∆) on 12/31/2008, the end of the sample
period, as a function of ∆, for each of several α.
There are alternative measures of systemic risk that may be of interest. An ex-
ample is the conditional probability at t of no failures in the financial system during
(t, T ]. This measure does not require the choice of a confidence level. The right panel
of Figure 3.11 shows this probability during the sample period, for each of several
horizons ∆.
Figure 3.9: Fitted conditional distribution (kernel-smoothed) of the economy-wide 6-month default rate for conditioning times t varying semi-annually between 12/31/1997 and 12/31/2008. The default rate is obtained by normalizing the number of economy-wide defaults during (t, t + 0.5] by the total number of firms in the economy at t.
3.5.2 Forecast evaluation
We evaluate the out-of-sample forecast accuracy of the fitted value at risk Vt(α, t+∆)
by comparing it to the realized default rate. Our selection of tests is informed by the
results of the test performance analysis in Berkowitz, Christoffersen & Pelletier (2009).
Let n be the number of forecast periods. Further, let n1 ≤ n be the number
of periods for which the corresponding value at risk forecast was violated, i.e. the
number of periods for which the realized default rate was strictly greater than the
fitted value at risk Vt(α, t + ∆). Then, n0 = n − n1 denotes the number of periods
for which the realized rate was less than or equal to the fitted value at risk. We test
whether the actual violation rate n1/n is significantly different from the theoretical
violation rate (1 − α), as in Kupiec (1995). Fixing a level α ∈ (0, 1) and assuming
Figure 3.10: Left panel: Fitted value at risk Vt(α, t + 0.5) of the system-wide default rate, for conditioning times t varying semi-annually between 12/31/1997 and 12/31/2008, versus realized default rate. Right panel: Fitted value at risk of the economy-wide default rate versus realized default rate.
violations are independent of one another, the log-likelihood ratio test statistic
    LRUC = −2 log( α^{n0} (1 − α)^{n1} / [ (n0/n)^{n0} (n1/n)^{n1} ] )    (3.6)
has, asymptotically, a chi-squared distribution with 1 degree of freedom under the
null hypothesis of the theoretical (1− α) violation rate.16
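The statistic in (3.6) is straightforward to compute from the violation counts. A sketch, with the 0⁰ = 1 convention from the footnote handled explicitly; the counts in the usage example are illustrative:

```python
import math

def lr_uc(n1, n, alpha):
    # unconditional coverage statistic (3.6); term(p, 0) = 0 encodes 0^0 = 1
    n0 = n - n1

    def term(p, k):
        return k * math.log(p) if k > 0 else 0.0

    null = term(alpha, n0) + term(1 - alpha, n1)
    alt = term(n0 / n, n0) + term(n1 / n, n1)
    return -2.0 * (null - alt)
```

With n = 100 forecast periods, α = 0.95 and n1 = 5 violations, the statistic is zero, since the observed violation rate matches the theoretical rate exactly.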
A test based on the statistic (3.6) does not address the time-series properties
of the sequence of “hit” indicators associated with violations in different periods.
The hit indicator It for the forecast period (t, t + ∆] is equal to 1 if the realized
default rate for the period is greater than the fitted value at risk Vt(α, t+ ∆), and 0
otherwise. A more stringent conditional coverage test with higher power tests whether
the indicators are independent and identically distributed Bernoulli variables with
success probability (1 − α). We consider two alternative tests of this property, a
Markov test due to Christoffersen (1998) and the CAViaR test of Engle & Manganelli
(2004). According to the performance analysis in Berkowitz et al. (2009), the CAViaR
16In case of n1 = 0, we follow the convention 0^0 = 1 so that the test statistic is well-defined.
Figure 3.11: Left Panel: Term structure of systemic risk on 12/31/2008: fitted value at risk Vt(α, t + ∆) on 12/31/2008 as a function of ∆. Right Panel: Fitted conditional probability at t of no failures in the financial system during (t, t + ∆], for conditioning times t varying quarterly between 12/31/1997 and 12/31/2008, for each of several horizons ∆.
test has particularly high power for the relatively small sample sizes we encounter here,
for both the 99% and 95% levels.
The Markov test of Christoffersen (1998) tests the Bernoulli distribution of the
actual hit indicators and their independence. The test of the Bernoulli property
relies on the statistic (3.6). The independence is tested against an explicit first-order
Markov alternative, with log-likelihood ratio test statistic given by
LRInd = −2 log( ((1 − π1)^(n00+n10) π1^(n01+n11)) / ((1 − π01)^n00 π01^n01 (1 − π11)^n10 π11^n11) ).    (3.7)

Here, nij denotes the number of periods with a state of j following a state of i, πij = nij/(ni0 + ni1), and π1 = (n01 + n11)/(n00 + n01 + n10 + n11).17 Under the
null of the indicators forming a first-order Markov chain, this statistic has a limiting
chi-squared distribution with 1 degree of freedom. The combined test of the coverage
17 In the case n10 + n11 = 0, we set π11 = 0 so that the test statistic is well-defined.
ratio and independence is based on the statistic
LRM = LRUC + LRInd,
which has a limiting chi-squared distribution with 2 degrees of freedom.18
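For illustration, the transition counts nij and the statistic (3.7) can be computed directly from a hit sequence; a minimal sketch with our own naming, using the convention of footnote 17:

```python
import math

def christoffersen_lr_ind(hits):
    """Independence statistic (3.7), testing the hit sequence against a
    first-order Markov alternative; the first observation is ignored."""
    n = [[0, 0], [0, 0]]
    for prev, curr in zip(hits[:-1], hits[1:]):
        n[prev][curr] += 1          # n_ij: state j following state i
    n00, n01, n10, n11 = n[0][0], n[0][1], n[1][0], n[1][1]
    pi1 = (n01 + n11) / (n00 + n01 + n10 + n11)
    pi01 = n01 / (n00 + n01) if n00 + n01 else 0.0
    pi11 = n11 / (n10 + n11) if n10 + n11 else 0.0  # footnote 17 convention

    def xlogy(x, y):
        return 0.0 if x == 0 else x * math.log(y)

    log_null = xlogy(n00 + n10, 1 - pi1) + xlogy(n01 + n11, pi1)
    log_alt = (xlogy(n00, 1 - pi01) + xlogy(n01, pi01)
               + xlogy(n10, 1 - pi11) + xlogy(n11, pi11))
    return -2.0 * (log_null - log_alt)
```

A sequence whose transition probabilities equal the marginal hit probability yields a statistic of zero; strongly alternating hits inflate it.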
The CAViaR test described in Berkowitz et al. (2009), which is based on Engle &
Manganelli (2004), considers a first-order autoregression for the hit indicator:
It = γ + β1 It−∆ + β2 Vt(α, t + ∆) + εt    (3.8)
where the error term εt has a logistic distribution. We test whether the βi coefficients
are statistically significant and whether P (It = 1) = eγ/(1 + eγ) = 1 − α. Denote
the ith response variable by Yi and the corresponding vector of regressors by Xi, for
i = 1, . . . , n − 1. Also, let π̂i = e^(γ̂+β̂Xi)/(1 + e^(γ̂+β̂Xi)), where (γ̂, β̂) is the maximum likelihood estimator of (γ, (β1, β2)) obtained by logistic regression. Then, under the null of β1 = β2 = 0 and γ = log((1 − α)/α), the log-likelihood ratio test statistic
LRCAViaR = −2 log( ∏_{i=1}^{n−1} (1 − α)^Yi α^(1−Yi) / (π̂i^Yi (1 − π̂i)^(1−Yi)) )    (3.9)
has a limiting chi-squared distribution with 3 degrees of freedom.
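Given the fitted probabilities from the logistic regression (3.8), the statistic (3.9) is a simple sum of log-likelihood differences. A minimal sketch (names are ours; the logistic fit itself is left to a standard package):

```python
import math

def lr_caviar(hits, pi_hat, alpha):
    """Statistic (3.9): -2 log of the ratio of the null likelihood, which
    assigns each hit probability 1 - alpha, to the likelihood under the
    fitted logistic probabilities pi_hat."""
    lr = 0.0
    for y, p in zip(hits, pi_hat):
        lr += (y * math.log(1 - alpha) + (1 - y) * math.log(alpha)
               - y * math.log(p) - (1 - y) * math.log(1 - p))
    return -2.0 * lr
```

If the fitted probabilities all equal 1 − α the statistic is zero; it grows as the regressors in (3.8) explain the hits better than the null.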
Table 3.4 reports the test results for the system-wide value at risk Vt(α, t + ∆),
for each of several forecast horizons ∆ and confidence levels α.19 None of the null
hypotheses can be rejected at the 10% level. This suggests that the fitted measures
accurately quantify systemic risk, for each of several risk horizons and confidence
levels.
                      Uncond. Coverage     Markov               CAViaR
         ∆     Obs.   LR       p-value     LR       p-value     LR       p-value
95% VaR  1Y    11     0.3153   0.5744      0.5157   0.7727      2.3203   0.5086
         6M    23     2.3595   0.1245      2.3595   0.3074      2.2569   0.5208
         3M    45     0.9143   0.3390      0.9609   0.6185      5.5926   0.1332
         1M    133    2.6284   0.1050      2.6900   0.2605      4.5851   0.2048
99% VaR  1Y    11     0.2211   0.6382      0.2211   0.8953      0.2211   0.9741
         6M    23     0.4623   0.4965      0.4623   0.7936      0.4422   0.9314
         3M    45     0.9045   0.3416      0.9045   0.6362      0.8844   0.8292
         1M    133    0.0905   0.7636      0.1057   0.9485      0.6689   0.8805
Table 3.4: Out-of-sample tests of the forecast accuracy of the fitted system-wide value at risk, for each of several horizons. The period considered is January 1998 to June 2009.
3.6 Sensitivity of systemic risk
We show how to measure the impact of a hypothetical default event on systemic
risk. This analysis could be useful to regulatory authorities. For example, regulators
could estimate the potential impact on systemic risk of a default of a given financial
institution.
Fix a conditioning time t, horizon ∆ and confidence level α. We consider the
change ∆Vt(α, t+∆) of the value at risk Vt(α, t+∆) at t in response to a default at t,
which measures the event’s impact on systemic risk. To estimate the change, we first
estimate the time t value at risk Vt(α, t+ ∆) based on data up to t. Next we enlarge
the data set by including a hypothetical default event at t, and then re-estimate
Vt(α, t+ ∆) based on the enlarged data set. Finally we calculate ∆Vt(α, t+ ∆) as the
difference between the two risk measure estimates.
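Schematically, the procedure is generic in the underlying risk model; in the sketch below, `estimate_var` is a placeholder for the full maximum likelihood fit and VaR computation, not the chapter's actual code:

```python
def var_impact(estimate_var, history, hypothetical_default):
    """Change in fitted VaR caused by appending a hypothetical default
    at the conditioning time. `estimate_var` is any callable that
    re-fits the model on an event history and returns V_t(alpha, t + dt)."""
    base = estimate_var(history)
    stressed = estimate_var(list(history) + [hypothetical_default])
    return stressed - base

# Toy stand-in estimator: VaR proportional to the number of past events.
toy = lambda events: 0.1 * len(events)
impact = var_impact(toy, [("ind", 2007.5), ("fin", 2008.1)], ("fin", 2008.9))
```

The toy estimator is only a stand-in to show the mechanics; in the chapter, the re-estimation step re-runs the full fitting procedure on the enlarged event history.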
The change ∆Vt(α, t + ∆) reflects the influence of the hypothetical event on the
other firms in the financial system and the economy at large, including potential
spillover effects. It depends on the characteristics of the hypothetical event, including
the sector of the defaulter (industrial vs. financial) and the total debt outstanding at
18 This ignores the first observation in the hit sequence.
19 We also use 2009 default data in the tests: we validate the forecasts obtained on 12/31/2008 on the realized default rates in 2009, which are available for the first 1, 3, and 6 months of 2009.
Figure 3.12: Impact of a default on systemic risk. Left Panel: Absolute change ∆Vt(0.95, t + 1) for conditioning times t varying quarterly between 12/31/1997 and 12/31/2008. Right Panel: Term structure of value at risk Vt(0.95, t + ∆) on 12/31/2008, for different scenarios.
default, which proxies the size of the defaulter.
We calculate the change for each of two hypothetical events, a default of a financial
institution, and a default of an industrial firm. The left panel of Figure 3.12 shows
the absolute change ∆Vt(0.95, t+ 1) for each of the two events, for conditioning times
t varying quarterly between 12/31/1997 and 12/31/2008. The right panel of Figure
3.12 shows the impact of each of these events on the term structure of the value at risk
Vt(0.95, t + ∆) on 12/31/2008. The total debt outstanding at default is taken to be
the sample mean of the default volumes observed up to the conditioning time, for the
respective firm class.
The failure of a financial institution has a higher impact on systemic risk than the
default of an industrial firm. This means the financial system is more vulnerable to
the collapse of a financial firm. The impact of a financial firm default is also more
volatile during the sample period. Measured on an absolute scale, the impact of a default increased dramatically during the second half of 2008, indicating the
vulnerability of the financial system during that period.
The sensitivity analysis can be extended to measure the impact on systemic risk
of a hypothetical adverse shock to the explanatory covariates (risk factors), along the
lines of Avesani et al. (2006), Huang et al. (2009), and others.
3.7 Conclusion
This chapter provides an econometric method for estimating the term structure of
systemic risk over multiple future periods. The maximum likelihood estimators incor-
porate the dependence of failure timing on time-varying macro-economic and sector-
specific risk factors. Unlike traditional estimators, they also capture the impact of
risk spillovers due to contagion or frailty, and the influence of industrial defaults on
financial failures.
Applying our method to data on U.S. firms over 1987 to 2008, we find that the
level and shape of the term structure of systemic risk in the U.S. financial sector
depend on the timing and severity of past financial and industrial failures, as well
as the current values of observable risk factors including the trailing S&P 500 index
return, the lagged slope of the U.S. yield curve, the default and TED spreads, and
other sector-wide variables. We find that the variation of these risk factors generally
has a less significant impact on systemic risk than spillover effects associated with
failures in the past. This highlights the importance of addressing spillovers when
managing systemic risk.
Several topics are left for future research, including the estimation of premia for
systemic risk. The estimates provided in this chapter can be compared to estimates
of the risk-neutral probability of failure of a large number of financial institutions,
obtained from market rates of credit derivatives contracts. This analysis would shed
light on the magnitude of the premia investors demand for bearing systemic risk.
Our econometric method has potential applications in other subject areas requir-
ing estimates of event probabilities in situations where network or information effects
may play a role. These applications include the analysis of market transaction data,
the analysis of purchase-timing behavior of households, the analysis of unemployment
timing, and many others. The extant analyses of these applications in Engle & Rus-
sell (1998), Seetharaman & Chintagunta (2003), and Lancaster (1979), respectively,
employ standard proportional hazard formulations. Our generalized hazard model
could be used to study the implications of network and information effects in these
settings.
Appendix
A Risk-neutral tranche loss distributions
To describe the estimation of the risk-neutral tranche loss distributions discussed in
Section 2.4, it is necessary to explain the arbitrage-free valuation of index and tranche
swaps. These contracts are based on a portfolio of C credit swaps with common
notional 1, common maturity date H and common quarterly premium payment dates
(tm).
In an index swap, the protection seller covers portfolio losses as they occur (default leg), and the protection buyer pays S am(C − N′tm) at each premium date tm, where S is the swap rate and am is the day count fraction for coupon period m (premium leg). The value at time t ≤ H of the default leg is given by Dt(H, 0, 1), where
Dt(H, K, K̄) = EQ( ∫_t^H exp(−∫_t^s ru du) dUs(K, K̄) ∣ Ft ).    (A1)
Here, EQ(· | Ft) denotes the conditional expectation operator relative to a risk-neutral
pricing measure Q. The value at time t ≤ H of the premium leg is given by
Pt(S) = S ∑_{tm≥t} am EQ( exp(−∫_t^{tm} ru du) (C − N′tm) ∣ Ft ).    (A2)
The index rate at t is the solution S = St(H) to the equation Dt(H, 0, 1) = Pt(S).
In a tranche swap with upfront rate G and running rate S, the protection seller covers tranche losses (2.16) as they occur (default leg) and, for K̄ < 1, the protection buyer pays G(K̄ − K)C at inception and S am((K̄ − K)C − Utm) at each premium date tm, where
K̄ − K is the tranche width (premium leg). The value at time t ≤ H of the default leg is given by (A1). The value at time t ≤ H of the premium leg is given by

Pt(K, K̄, G, S) = G(K̄ − K)C + S ∑_{tm≥t} am EQ( exp(−∫_t^{tm} ru du) ((K̄ − K)C − Utm) ∣ Ft ).    (A3)

For a fixed upfront rate G, the running rate S is the solution S = St(H, K, K̄, G) to the equation Dt(H, K, K̄) = Pt(K, K̄, G, S). For a fixed rate S, the upfront rate G is the solution G = Gt(H, K, K̄, S) to the equation Dt(H, K, K̄) = Pt(K, K̄, G, S).
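Note that the premium leg is linear in S for fixed G (and linear in G for fixed S), so once the default leg value and the premium annuity have been estimated by Monte Carlo, the par rates follow in closed form rather than by root search. A sketch under that observation (variable names are ours):

```python
def tranche_running_rate(default_leg, upfront_rate, width_notional, annuity):
    """Running rate S solving D = G * width_notional + S * annuity,
    where width_notional is the tranche width times the portfolio notional
    and annuity is the discounted expected premium basis per unit of S."""
    return (default_leg - upfront_rate * width_notional) / annuity

def tranche_upfront_rate(default_leg, running_rate, width_notional, annuity):
    """Upfront rate G solving the same equation for a fixed S."""
    return (default_leg - running_rate * annuity) / width_notional

# Round trip: the two solvers are consistent with each other.
S = tranche_running_rate(8.0, 0.3, 10.0, 40.0)
G = tranche_upfront_rate(8.0, S, 10.0, 40.0)
```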
The valuation relations are used to estimate the risk-neutral portfolio and tranche
loss distributions from market rates of index and tranche swaps. First we formulate a
parametric model for the risk-neutral dynamics of N and L, the portfolio default and
loss processes with replacement. Our risk-neutral model parallels the model under P .
Suppose that N has risk-neutral intensity λQ with Q-dynamics

dλQt = κQt(cQt − λQt) dt + dJt    (A4)

where λQ0 > 0, κQt = κQ λQ_{TNt} is the decay rate, cQt = cQ λQ_{TNt} is the reversion level, and J is a response jump process given by

Jt = ∑_{n≥1} max(γQ, δQ λQ_{Tn−}) 1_{Tn ≤ t}.    (A5)

The quantities κQ > 0, cQ ∈ (0, 1), δQ > 0 and γQ ≥ 0 are parameters such that cQ(1 + δQ) < 1. We let θQ = (κQ, cQ, δQ, γQ, λQ0).
Since the Q-dynamics of λQ mirror the P -dynamics of the P -intensity λ, we can
apply the algorithms developed above to estimate the expectations (A1)–(A3) and to
calculate the model index and tranche rates for the model (A4). We first generate
event times Tn of N by Algorithm 3, with λ and its parameters replaced by their risk-
neutral counterparts, and with initial condition (Nτ, TNτ, λQ_{TNτ}) = (0, 0, λQτ). Then
we apply the replacement Algorithm 2 as stated to generate event times T ′n without
replacement, and the corresponding paths of N ′ and L′ required to estimate the
expectations (A1)–(A3). In this last step, we implicitly assume that the thinning probability (2.7), the rating distributions (2.8) and (2.9), and the distribution µℓ of the loss at an event are not adjusted when the measure is changed from P to Q.
The risk-free interest rate r is assumed to be deterministic, and is estimated from
Treasury yields for multiple maturities on 3/27/2006, obtained from the website of the U.S. Department of the Treasury.
The risk-neutral parameter vector θQ is estimated from a set of market index and
tranche rates by solving the nonlinear optimization problem
min_{θQ ∈ ΘQ} ∑_i ( (MarketMid(i) − Model(i, θQ)) / (MarketAsk(i) − MarketBid(i)) )^2    (A6)

where ΘQ = (0, 2) × (0, 1) × (0, 2)^2 × (0, 20) and the sum ranges over the data points.
Here, MarketMid is the arithmetic average of the observed MarketAsk and MarketBid
quotes. We address problem (A6) using an adapted simulated annealing algorithm. The algorithm
is initialized at a set of random parameter values, which are drawn from a uniform
distribution on the parameter space ΘQ. For each of 100 randomly chosen initial
parameter sets, the algorithm converges to the optimal parameter values given in
Table A2. The market data and fitting results for 5 year index and tranche swaps
referenced on the CDX.HY6 on 3/27/2006 are reported in Table A1. The model fits the data well, with an average absolute percentage error of 2.9%.
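The objective in (A6) weights each contract's squared mid-price error by its bid-ask width, so less liquid quotes are penalized less. A sketch of the objective alone (names are ours; the Model values would come from the simulation algorithms, and the outer search over ΘQ is the annealing step):

```python
def calibration_objective(market_quotes, model_rates):
    """Objective (A6). `market_quotes` maps contract id -> (bid, ask);
    `model_rates` maps contract id -> model-implied rate for the current
    candidate parameter vector theta_Q."""
    total = 0.0
    for cid, (bid, ask) in market_quotes.items():
        mid = 0.5 * (bid + ask)                            # MarketMid
        total += ((mid - model_rates[cid]) / (ask - bid)) ** 2
    return total

quotes = {"index": (332.88, 333.13)}
err = calibration_objective(quotes, {"index": 333.13})     # half a bid-ask width off
```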
B Cash CDO prioritization schemes
This appendix describes the prioritization schemes for the cash CDOs analyzed in
this chapter. These schemes were introduced by Duffie & Garleanu (2001).
Contract   MarketBid   MarketAsk   MarketMid   Model
Index      332.88      333.13      333.01      333.08
0-10%      84.50%      85.00%      84.75%      86.75%
10-15%     52.75%      53.75%      53.25%      48.19%
15-25%     393.00      403.00      398.00      398.88
25-35%     70.00       85.00       77.50       79.43
MinObj                                         33.77
AAPE                                           2.90%
Table A1: Market data from Morgan Stanley and fitting results for 5 year index and tranche swaps referenced on the CDX.HY6 on 3/27/2006. The index, (15−25%) and (25−35%) contracts are quoted in terms of a running rate S stated in basis points (10^−4). For these contracts the upfront rate G is zero. The (0, 10%) and (10, 15%) tranches are quoted in terms of an upfront rate G. For these contracts the running rate S is zero. The values in the column Model are fitted rates based on model (A4) and 100K replications. We report the minimum value of the objective function MinObj and the average absolute percentage error AAPE relative to market mid quotes.
Parameter   κQ      cQ      δQ      γQ      λQτ
Estimate    0.522   0.333   0.366   1.646   8.586

Table A2: Estimates of the parameters of the risk-neutral portfolio intensity λQ, obtained from market rates of 5 year index and tranche swaps referenced on the CDX.HY6 on 3/27/2006.
B1 Uniform prioritization
The total interest income W (m) from the reference bonds is sequentially distributed
as follows. In period m, the debt tranches receive the coupon payments
Q1(m) = min{(1 + c1)A1(m − 1) + c1F1(m), W(m)}
Q2(m) = min{(1 + c2)A2(m − 1) + c2F2(m), W(m) − Q1(m)}.
Unpaid reductions in principal from default losses, Jj(m), occur in reverse priority
order, so that the residual equity tranche suffers the reduction
J3(m) = min{F3(m − 1), ξ(m)},
where
ξ(m) = max{0, L′tm − L′tm−1 − [W(m) − Q1(m) − Q2(m)]}
is the cumulative loss since the previous coupon date minus the collected and undis-
tributed interest income. Then, the debt tranches are reduced in principal by
J2(m) = min{F2(m − 1), ξ(m) − J3(m)}
J1(m) = min{F1(m − 1), ξ(m) − J3(m) − J2(m)}.
With uniform prioritization there are no early payments of principal, so P1(m) =
P2(m) = 0 for m < M . At maturity, principal and accrued interest are treated
identically, while the remaining reserve is paid in priority order. The payments of
principal at maturity are
P1(M) = min{F1(M) + A1(M), R(M) − Q1(M)},
P2(M) = min{F2(M) + A2(M), R(M) − Q1(M) − P1(M) − Q2(M)}
where R(M) is the value of the reserve account at M . The residual equity tranche
receives
D3(M) = R(M)−Q1(M)− P1(M)−Q2(M)− P2(M).
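For concreteness, one coupon period of the interest and write-down allocation above can be sketched as follows (a simplified stand-in with our own naming; the reserve account and accrual updating are omitted):

```python
def uniform_waterfall_period(W, loss, F, A, c):
    """One period of uniform prioritization. W: interest income;
    loss: portfolio loss over the period; F = (F1, F2, F3): tranche
    principals; A = (A1, A2): accrued unpaid debt coupons;
    c = (c1, c2): debt coupon rates. Returns coupon payments (Q1, Q2)
    and principal reductions (J1, J2, J3)."""
    Q1 = min((1 + c[0]) * A[0] + c[0] * F[0], W)
    Q2 = min((1 + c[1]) * A[1] + c[1] * F[1], W - Q1)
    xi = max(0.0, loss - (W - Q1 - Q2))   # loss net of undistributed interest
    J3 = min(F[2], xi)                    # equity is written down first
    J2 = min(F[1], xi - J3)
    J1 = min(F[0], xi - J3 - J2)
    return (Q1, Q2), (J1, J2, J3)

# Debt coupons are covered; the equity tranche absorbs the residual loss.
(Q1, Q2), (J1, J2, J3) = uniform_waterfall_period(
    W=10.0, loss=5.0, F=(100.0, 50.0, 20.0), A=(0.0, 0.0), c=(0.05, 0.08))
```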
B2 Fast prioritization
The senior tranche collects interest and principal payments as quickly as possible until
maturity or until its remaining principal becomes zero, whichever is first. In period
m, the senior tranche receives interest and principal payments
Q1(m) = min{B(m), (1 + c1)A1(m − 1) + c1F1(m)}
P1(m) = min{F1(m − 1), B(m) − Q1(m)}.
As long as the senior tranche receives payments the junior tranche accrues coupons.
After the senior tranche has been retired, the junior tranche is allocated interest and
principal until maturity or until its principal is written down, whichever is first:
Q2(m) = min{(1 + c2)A2(m − 1) + c2F2(m), B(m) − Q1(m) − P1(m)}
P2(m) = min{F2(m − 1), B(m) − Q1(m) − P1(m) − Q2(m)}.
The equity tranche receives any residual cash flows. There are no contractual reduc-
tions in principal.
C Cash CDO valuation
This appendix explains the valuation of the cash CDO reference bonds and debt
tranches. At time t ≤ H, the value of the reference portfolio is
Ot(v) = ∑_{tm≥t} EQ( exp(−∫_t^{tm} ru du) B(m, v) ∣ Ft )    (C1)
where the notation B(m) = B(m, v) indicates the dependence of the cash flow B(m)
on the coupon rate v of a reference bond. The par coupon rate of a reference bond is
the number v = v∗ such that Ot0(v) = C at the CDO inception date t0.
The par coupon rates for the debt tranches are determined similarly. Let Oj_t(v∗, c1, c2) be the value at time t ≤ H of the bond representing tranche j = 1, 2 when the reference bonds accrue interest at their par coupon rate v∗, and when the coupon rates on the debt tranches are c1 and c2, respectively. Because of the complexity of the cash flow "waterfall," this quantity does not admit a simple analytic expression. The par coupon rates are given by the pair (c1, c2) = (c∗1, c∗2) such that Oj_t0(v∗, c1, c2) = pj for
j = 1, 2.
For the CDX.HY6 portfolio of reference bonds, assuming that the cash CDO is established
on 3/27/06 and has a 5 year maturity, the par coupon rates are estimated as follows.
We first generate a collection of 100K paths of N ′ and L′ under the risk neutral
measure, based on the risk-neutral intensity model (A4) with calibrated parameters
in Table A2, as described in Appendix A. Based on these paths, we can estimate
Ot0(v) for fixed v, and then solve numerically for the par coupon rate v∗. The risk-
free interest rate r is deterministic, and is estimated from Treasury yields for multiple
maturities on 3/27/2006. Given v∗, c1, c2, we can calculate the tranche cash flows for
a given path of (N ′, L′) according to the specified prioritization scheme, and then
estimate Oj_t0(v∗, c1, c2). We then numerically solve a system of two equations for
(c∗1, c∗2).
D Covariate time-series model
We formulate a vector autoregressive VAR(1) time-series model for the covariates.
This model incorporates the dynamic relationships between the different variables.
Let Φt denote the (n× 1) vector of covariate values at t. We suppose that
Φt = Π0 + Π1Φt−1 + εt (D1)
where Π0 is an (n × 1) vector, Π1 is an (n × n) coefficient matrix and εt is an (n × 1) zero-mean vector of error processes that is serially uncorrelated and has time-invariant covariance matrix Σ. Table E1 reports the estimators Π̂i of Πi for i = 0, 1, which are based on monthly observations of Φt during the sample period. Given the Φt and the Π̂i, we recover the corresponding values of εt. From these values, we estimate the covariance matrix Σ, assuming weak stationarity. The fitted Σ is reported in Table E2. An analysis of the error series indicates the appropriateness of the model (D1) for our covariates. Figure D1 visualizes the goodness-of-fit by plotting the predicted vs. the realized covariates.
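The estimation is ordinary least squares equation by equation. A minimal sketch with NumPy (our own helper, not the dissertation's code):

```python
import numpy as np

def fit_var1(Phi):
    """Least-squares estimates of (Pi0, Pi1, Sigma) in (D1) from a
    (T x n) array of monthly covariate observations Phi."""
    Y = Phi[1:]                                       # Phi_t
    X = np.hstack([np.ones((len(Y), 1)), Phi[:-1]])   # regressors [1, Phi_{t-1}]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)         # (n+1) x n coefficient block
    Pi0, Pi1 = B[0], B[1:].T
    eps = Y - X @ B                                   # fitted error series
    Sigma = eps.T @ eps / len(eps)                    # error covariance
    return Pi0, Pi1, Sigma

# Noiseless data generated from (D1) is recovered up to rounding error.
phi = [np.array([1.0])]
for _ in range(10):
    phi.append(0.1 + 0.5 * phi[-1])
Pi0, Pi1, Sigma = fit_var1(np.array(phi))
```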
Figure D1: Realized vs. VAR(1)-predicted monthly time series of the covariate components.
E Default volume model
We adopt a simple but empirically meaningful model of default volumes. We assume
that each D∗n has a generalized Pareto distribution with shape parameter ξ > 0 and
scale parameter σ > 0. We have
P(D∗n > x) = (1 + ξ x/σ)^(−1/ξ)    (E1)
for all x ≥ 0. The maximum likelihood estimators of (ξ, σ) are given by (0.5960, 225.8828),
with standard errors (0.0427, 10.9864) as of 12/31/2008. The left panel of Figure E1
contrasts the fitted Pareto distribution with the empirical distribution of default vol-
umes. The right panel of Figure E1 compares the observed default volumes to the
realizations of variables from the fitted Pareto distribution. The plots indicate the
statistical appropriateness of our model.
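Because (E1) inverts in closed form, quantiles of the default-volume distribution follow directly; a small sketch (function names are ours) evaluated at the fitted parameters:

```python
def gpd_survival(x, xi, sigma):
    """Survival function (E1): P(D* > x) for x >= 0."""
    return (1.0 + xi * x / sigma) ** (-1.0 / xi)

def gpd_quantile(p, xi, sigma):
    """x such that P(D* <= x) = p, i.e. the inverse of (E1)."""
    return sigma * ((1.0 - p) ** (-xi) - 1.0) / xi

# Median default volume under the fitted (xi, sigma) = (0.5960, 225.8828).
median = gpd_quantile(0.5, 0.5960, 225.8828)
```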
          Constant   TB(3M)     TB(10Y)    Baa        Aaa        LIBOR      S&P500     Banks      Fin        Insur      RlEst
TB(3M)    0.172      1.006      0.133      -0.128     0.031      -0.036     0.194      -0.238     0.244      -0.499     0.200
          (1.279)    (21.395)   (2.762)    (-1.776)   (0.351)    (-0.844)   (1.170)    (-1.551)   (1.907)    (-2.872)   (2.689)
TB(10Y)   -0.179     0.133      0.925      0.047      0.037      -0.127     0.622      -0.202     -0.053     -0.128     0.152
          (-1.127)   (2.393)    (16.299)   (0.559)    (0.356)    (-2.564)   (3.177)    (-1.120)   (-0.352)   (-0.623)   (1.733)
Baa       -0.002     -0.018     -0.051     0.999      0.042      0.019      0.435      -0.249     -0.001     -0.043     0.064
          (-0.018)   (-0.379)   (-1.040)   (13.576)   (0.461)    (0.445)    (2.560)    (-1.591)   (-0.006)   (-0.242)   (0.837)
Aaa       -0.088     0.080      -0.025     0.048      0.980      -0.078     0.468      -0.270     0.060      -0.032     0.079
          (-0.704)   (1.837)    (-0.562)   (0.728)    (11.958)   (-1.998)   (3.050)    (-1.904)   (0.502)    (-0.202)   (1.154)
LIBOR     0.148      0.227      0.101      -0.073     0.012      0.752      0.659      -0.564     0.255      -0.614     0.255
          (0.961)    (4.224)    (1.835)    (-0.885)   (0.121)    (15.592)   (3.465)    (-3.212)   (1.737)    (-3.089)   (2.993)
S&P500    0.126      0.020      0.023      -0.047     0.017      -0.016     0.898      0.101      -0.056     -0.080     -0.032
          (3.204)    (1.424)    (1.648)    (-2.256)   (0.662)    (-1.291)   (18.484)   (2.246)    (-1.499)   (-1.573)   (-1.477)
Banks     0.057      0.031      0.010      -0.036     0.026      -0.029     -0.033     0.954      0.016      -0.053     -0.046
          (1.098)    (1.703)    (0.552)    (-1.289)   (0.777)    (-1.812)   (-0.514)   (16.264)   (0.332)    (-0.794)   (-1.607)
Fin       0.128      0.048      0.001      -0.069     0.062      -0.044     0.147      0.111      0.795      -0.121     -0.037
          (2.096)    (2.261)    (0.068)    (-2.119)   (1.548)    (-2.313)   (1.952)    (1.592)    (13.656)   (-1.535)   (-1.106)
Insur     0.017      0.046      -0.008     -0.033     0.042      -0.039     0.040      0.016      0.034      0.801      0.003
          (0.379)    (2.987)    (-0.501)   (-1.422)   (1.446)    (-2.825)   (0.745)    (0.331)    (0.814)    (14.219)   (0.121)
RlEst     0.030      0.059      -0.008     -0.033     0.043      -0.058     0.055      0.092      -0.041     -0.067     0.921
          (0.596)    (3.410)    (-0.459)   (-1.228)   (1.309)    (-3.753)   (0.899)    (1.624)    (-0.870)   (-1.052)   (33.539)

Table E1: Fitted coefficients of the VAR(1) model (D1) as of 12/31/2008. The t-statistics are shown in parentheses.
          TB(3M)    TB(10Y)   Baa       Aaa       LIBOR     S&P500    Banks     Fin       Insur     RlEst
TB(3M)    0.0424    0.0220    0.0048    0.0083    0.0257    0.0020    0.0006    0.0036    0.0005    0.0021
TB(10Y)   0.0220    0.0589    0.0396    0.0405    0.0248    0.0010    -0.0009   0.0016    -0.0016   0.0019
Baa       0.0048    0.0396    0.0443    0.0367    0.0214    -0.0021   -0.0023   -0.0030   -0.0031   0.0002
Aaa       0.0083    0.0405    0.0367    0.0362    0.0185    -0.0009   -0.0019   -0.0014   -0.0023   0.0007
LIBOR     0.0257    0.0248    0.0214    0.0185    0.0555    -0.0006   -0.0008   0.0003    -0.0014   0.0023
S&P500    0.0020    0.0010    -0.0021   -0.0009   -0.0006   0.0036    0.0030    0.0038    0.0024    0.0019
Banks     0.0006    -0.0009   -0.0023   -0.0019   -0.0008   0.0030    0.0062    0.0053    0.0043    0.0031
Fin       0.0036    0.0016    -0.0030   -0.0014   0.0003    0.0038    0.0053    0.0087    0.0040    0.0039
Insur     0.0005    -0.0016   -0.0031   -0.0023   -0.0014   0.0024    0.0043    0.0040    0.0045    0.0024
RlEst     0.0021    0.0019    0.0002    0.0007    0.0023    0.0019    0.0031    0.0039    0.0024    0.0058

Table E2: Fitted covariance matrix Σ of the VAR(1) error term εt as of 12/31/2008.
Figure E1: Left panel: Empirical default volume distribution vs. fitted generalized Pareto distribution as of 12/31/2008. Right panel: Empirical quantiles of the observed default volumes vs. quantiles of realizations of variables from the fitted Pareto distribution.
Bibliography
Acharya, Viral, Lasse Pedersen, Thomas Philippon & Matthew Richardson (2009),
Measuring systemic risk. Working Paper, New York University.
Acharya, Viral & Tanyu Yorulmazer (2008), ‘Information contagion and bank herd-
ing’, Journal of Money, Credit and Banking 40, 215–231.
Adrian, Tobias & Markus Brunnermeier (2009), CoVaR. Working Paper, Princeton
University.
Aharony, Joseph & Itzhak Swary (1983), ‘Contagion effects of bank failures: Evidence
from capital markets’, Journal of Business 56(3), 305–317.
Aharony, Joseph & Itzhak Swary (1996), ‘Additional evidence on the information-
based contagion effects of bank failures’, Journal of Banking and Finance 20, 57–
69.
Arnsdorf, Matthias & Igor Halperin (2008), 'BSLP: Markovian bivariate spread-loss model for portfolio credit derivatives', Journal of Computational Finance 12, 77–100.
Avesani, Renzo, Antonio Garcia Pascual & Jing Li (2006), A new risk indicator and stress testing tool: A multifactor nth-to-default CDS basket. IMF Working Paper No. 105.
Azizpour, Shahriar & Kay Giesecke (2008a), Premia for correlated default risk. Work-
ing Paper, Stanford University.
Azizpour, Shahriar & Kay Giesecke (2008b), Self-exciting corporate defaults: Conta-
gion vs. frailty. Working Paper, Stanford University.
Berkowitz, Jeremy, Peter Christoffersen & Denis Pelletier (2009), Evaluating value-
at-risk models with desk-level data. Management Science, forthcoming.
Bhansali, Vineer, Robert Gingrich & Francis Longstaff (2008), ‘Systemic credit risk:
What is the market telling us’, Financial Analysts Journal 64(July/August), 16–
24.
Box, George & George Tiao (1992), Bayesian Inference in Statistical Analysis, Wiley,
New York.
Brigo, Damiano, Andrea Pallavicini & Roberto Torresetti (2006), Calibration of CDO tranches with the dynamical generalized-Poisson loss model. Working Paper, Banca IMI.
Brown, Craig & Serdar Dinc (2005), ‘The politics of bank failures: Evidence from
emerging markets’, Quarterly Journal of Economics 120(4), 1413–1444.
Brown, Craig & Serdar Dinc (2009), Too many to fail? Evidence of regulatory re-
luctance in bank failures when the banking sector is weak. Review of Financial
Studies, forthcoming.
Chan-Lau, Jorge & Toni Gravelle (2005), The END: A new indicator of financial and
nonfinancial corporate sector vulnerability. IMF Working Paper No. 231.
Chen, Long, Pierre Collin-Dufresne & Robert S. Goldstein (2008), On the relation be-
tween credit spread puzzles and the equity premium puzzle. Review of Financial
Studies, forthcoming.
Chen, Zhiyong & Paul Glasserman (2008), ‘Fast pricing of basket default swaps’,
Operations Research 56(2), 286–303.
Christoffersen, Peter (1998), ‘Evaluating interval forecasts’, International Economic
Review 39, 842–862.
Cole, Rebel & Qiongbing Wu (2009), Predicting bank failures using a simple dynamic
hazard model. Working Paper, DePaul University.
Collin-Dufresne, Pierre, Robert Goldstein & Jean Helwege (2009), How large can
jump-to-default risk premia be? Modeling contagion via the updating of beliefs.
Working Paper, Carnegie Mellon University.
Cont, Rama & Andreea Minca (2008), Recovering portfolio default intensities implied
by CDO quotes. Working Paper, Columbia University.
Cooperman, Elisabeth, Winson Lee & Glenn Wolfe (1992), ‘The 1985 Ohio thrift cri-
sis, the FSLIC’s solvency and rate contagion for retail CDs’, Journal of Finance
47(3), 919–941.
Das, Sanjiv, Darrell Duffie, Nikunj Kapadia & Leandro Saita (2007a), ‘Common
failings: How corporate defaults are correlated’, Journal of Finance 62, 93–117.
Das, Sanjiv, Darrell Duffie, Nikunj Kapadia & Leandro Saita (2007b), ‘Common
failings: How corporate defaults are correlated’, Journal of Finance 62, 93–117.
Dellacherie, Claude & Paul-André Meyer (1982), Probabilities and Potential, North-Holland, Amsterdam.
Delloye, Martin, Jean-David Fermanian & Mohammed Sbai (2006), ‘Dynamic frailties
and credit portfolio modeling’, Risk 19(1), 101–109.
Ding, Xiaowei, Kay Giesecke & Pascal Tomecek (2009), ‘Time-changed birth processes
and multi-name credit derivatives’, Operations Research 57(4), 990–1005.
Duffie, Darrell, Andreas Eckner, Guillaume Horel & Leandro Saita (2009), ‘Frailty
correlated default’, Journal of Finance 64, 2089–2123.
Duffie, Darrell, Leandro Saita & Ke Wang (2007), ‘Multi-period corporate default pre-
diction with stochastic covariates’, Journal of Financial Economics 83(3), 635–
665.
Duffie, Darrell & Nicolae Garleanu (2001), ‘Risk and valuation of collateralized debt
obligations’, Financial Analysts Journal 57(1), 41–59.
Eckner, Andreas (2009), ‘Computational techniques for basic affine models of portfolio
credit risk’, Journal of Computational Finance 15, 63–97.
Eisenberg, Larry & Thomas Noe (2001), ‘Systemic risk in financial systems’, Man-
agement Science 47(2), 236–249.
Elsinger, Helmut, Alfred Lehar & Martin Summer (2006), ‘Risk assessment for bank-
ing systems’, Management Science 52(9), 1301–1314.
Engle, Robert F. & Jeffrey R. Russell (1998), ‘Autoregressive conditional duration: A
new model for irregularly spaced transaction data’, Econometrica 66, 1127–1162.
Engle, Robert & Simone Manganelli (2004), 'CAViaR: Conditional autoregressive value at risk by regression quantiles', Journal of Business and Economic Statistics 22(4), 367–381.
Eraker, Bjorn, Michael Johannes & Nicholas Polson (2003), ‘The impact of jumps in
volatility and return’, Journal of Finance 58, 1269–1300.
Errais, Eymen, Kay Giesecke & Lisa Goldberg (2009), Affine point processes and
portfolio credit risk. Working Paper, Stanford University.
Estrella, Arturo & Mary R. Trubin (2006), ‘The yield curve as a leading indicator:
Some practical issues’, Current Issues in Economics and Finance 12(5). Federal
Reserve Bank of New York.
Giampieri, Giacomo, Mark Davis & Martin Crowder (2005), 'A hidden Markov model of default interaction', Quantitative Finance 5, 27–34.
Giesecke, Kay (2004), ‘Correlated default with incomplete information’, Journal of
Banking and Finance 28, 1521–1545.
Giesecke, Kay, Lisa Goldberg & Xiaowei Ding (2009), A top-down approach to multi-
name credit. Working Paper, Stanford University.
Glasserman, Paul & Nicolas Merener (2003), ‘Numerical solution of jump-diffusion
LIBOR market models’, Finance and Stochastics 7, 1–27.
Hamilton, David (2005), Moody's senior ratings algorithm and estimated senior ratings. Moody's Investors Service.
Hawkes, Alan G. (1971), ‘Spectra of some self-exciting and mutually exciting point
processes’, Biometrika 58(1), 83–90.
Huang, Xin, Hao Zhou & Haibin Zhu (2009), ‘A framework for assessing the sys-
temic risk of major financial institutions’, Journal of Banking and Finance
33(11), 2036–2049.
Jorion, Philippe & Gaiyan Zhang (2007), ‘Good and bad credit contagion: Evidence
from credit default swaps’, Journal of Financial Economics 84(3), 860–883.
Kass, Robert & Adrian Raftery (1995), ‘Bayes factors’, Journal of the American
Statistical Association 90, 773–795.
Koopman, Siem Jan, André Lucas & André Monteiro (2008), ‘The multi-state latent
factor intensity model for credit rating transitions’, Journal of Econometrics
142(1), 399–424.
Kou, Steven & Xianhua Peng (2009), Default clustering and valuation of collateralized
debt obligations. Working Paper, Columbia University.
Kupiec, Paul H. (1995), ‘Techniques for verifying the accuracy of risk measurement
models’, Journal of Derivatives 3(2), 73–84.
Lancaster, Tony (1979), ‘Econometric methods for the duration of unemployment’,
Econometrica 47(4), 939–956.
Lando, David & Mads Stenbo Nielsen (2009), Correlation in corporate defaults: Con-
tagion or conditional independence? Working Paper, Copenhagen Business
School.
Lane, William, Stephen Looney & James Wansley (1986), ‘An application of the Cox
proportional hazards model to bank failure’, Journal of Banking and Finance
10, 511–531.
Lehar, Alfred (2005), ‘Measuring systemic risk: A risk management approach’, Jour-
nal of Banking and Finance 29(10), 2577–2603.
Lewis, P. A. W. & G. S. Shedler (1979), ‘Simulation of nonhomogeneous Poisson
processes by thinning’, Naval Research Logistics Quarterly 26, 403–413.
Longstaff, Francis & Arvind Rajan (2008), ‘An empirical analysis of collateralized
debt obligations’, Journal of Finance 63(2), 529–563.
Lopatin, Andrei & Timur Misirpashaev (2008), ‘Two-dimensional Markovian model
for dynamics of aggregate credit loss’, Advances in Econometrics 22, 243–274.
McCullagh, Peter & John Nelder (1989), Generalized Linear Models, Chapman and
Hall, London.
McDonald, Cynthia & Linda Van de Gucht (1999), ‘High-yield bond default and call
risks’, Review of Economics and Statistics 81(3), 409–419.
Meyer, Paul-André (1971), Démonstration simplifiée d’un théorème de Knight, in
‘Séminaire de Probabilités V, Lecture Notes in Mathematics 191’, Springer-
Verlag, Berlin, pp. 191–195.
Mortensen, Allan (2006), ‘Semi-analytical valuation of basket credit derivatives in
intensity-based models’, Journal of Derivatives 13, 8–26.
Ogata, Yosihiko (1978), ‘The asymptotic behavior of maximum likelihood estimators
of stationary point processes’, Annals of the Institute of Statistical Mathematics
30(A), 243–261.
Papageorgiou, Evan & Ronnie Sircar (2007), ‘Multiscale intensity models and name
grouping for valuation of multi-name credit derivatives’, Applied Mathematical
Finance 15(1), 73–105.
Prahl, Jürgen (1999), A fast unbinned test of event clustering in Poisson processes.
Working Paper, Universität Hamburg.
Schwarcz, Steven L. (2008), ‘Systemic risk’, Georgetown Law Journal 97(1), 193–249.
SEC (2008), Summary report of issues identified in the commission staff’s exami-
nations of select credit rating agencies. Report, US Securities and Exchange
Commission.
Seetharaman, P. B. & Pradeep K. Chintagunta (2003), ‘The proportional hazard
model for purchase timing: A comparison of alternative specifications’, Journal
of Business and Economic Statistics 21(3), 368–382.
Shumway, Tyler (2001), ‘Forecasting bankruptcy more accurately: A simple hazard
model’, Journal of Business 74, 101–124.
Staum, Jeremy (2009), Systemic risk components as deposit insurance premia. Work-
ing Paper, Northwestern University.
Upper, Christian & Andreas Worms (2004), ‘Estimating bilateral exposures in the
German interbank market: Is there a danger of contagion?’, European Economic
Review 48, 827–849.
Whalen, Gary (1991), ‘A proportional hazards model of bank failure: An examination
of its usefulness as an early warning tool’, Federal Reserve Bank of Cleveland
Economic Review pp. 21–31.
Wheelock, David & Paul Wilson (2000), ‘Why do banks disappear? The determi-
nants of US bank failures and acquisitions’, Review of Economics and Statistics
82, 127–138.