
Inhomogeneous Dependence Modelling with Time Varying Copulae

Enzo Giacomini 1 , Wolfgang K. Härdle 1 and Vladimir Spokoiny 2

1 CASE – Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin, Spandauer Straße 1, 10178 Berlin, Germany

2 Weierstrass Institute for Applied Analysis and Stochastics

Mohrenstraße 39, 10117 Berlin, Germany

Abstract

Measuring dependence in a multivariate time series is tantamount to modelling its dynamic

structure in space and time. In the context of a multivariate normally distributed time series,

the evolution of the covariance (or correlation) matrix over time describes this dynamic. A wide

variety of applications, though, requires a modelling framework different from the multivariate

normal. In risk management the non-normal behaviour of most financial time series calls for

non-Gaussian dependency. The correct modelling of non-Gaussian dependencies is therefore a

key issue in the analysis of multivariate time series. In this paper we use copula functions

with adaptively estimated time varying parameters for modelling the distribution of returns,

free from the usual normality assumptions. Further, we apply copulae to the estimation of the

Value-at-Risk (VaR) of a portfolio and show that they outperform the RiskMetrics approach, a

widely used methodology for VaR estimation.

JEL classification: C14, C16, C60, C61

Keywords: Value-at-Risk, time varying copulae, adaptive estimation, nonparametric estimation


1 Introduction

Time series of financial data are high dimensional and typically exhibit non-Gaussian behavior. The

standard modelling approach based on properties of the multivariate normal distribution therefore

often fails to reproduce the stylized facts (i.e. fat tails, asymmetry) observed in returns from

financial assets.

A correct understanding of the time varying multivariate (conditional) distribution of returns is

vital to many standard applications in finance like portfolio selection, asset pricing and Value-at-

Risk. Empirical evidence of asymmetric return distributions has been reported in the recent

literature. Longin and Solnik (2001) consider the distribution of joint extremes from international

equity returns and reject multivariate normality in their lower orthant; Ang and Chen (2002) test

for conditional correlation asymmetries in U.S. equity data, rejecting multivariate normality at daily,

weekly and monthly frequencies; Hu (2006) models the distribution of index returns with mixtures

of copulae, finding asymmetries in the dependence structure across markets. Cont (2001) and Granger

(2003) give a concise survey on stylized empirical facts.

Modelling distributions with copulae has drawn attention from many researchers as it avoids the

“Procrustean bed” of normality assumptions, producing better fits of the empirical characteristics of

financial returns. A natural extension is to apply copulae in a dynamic framework with conditional

distributions modelled by copulae with time varying parameters. The question, though, is how to

steer the time varying copula parameters. This question is the focus of this paper.

A possible approach is to estimate the parameter from structurally invariant periods. There is a

broad field of econometric literature on structural breaks. Tests for unit roots in macroeconomic

series against stationarity with a structural break at a known change point have been investigated

by Perron (1989), and at an unknown change point by Zivot and Andrews (1992), Stock (1994) and

Hansen (2001); Andrews (1993) tests for parameter instability in nonlinear models; Andrews and

Ploberger (1994) construct asymptotic optimal tests for multiple structural breaks. In a different


Figure 1: Time varying dependence parameter and global parameter (horizontal line) estimated with Clayton copula. Portfolio of stocks from Allianz, Münchener Rückversicherung, BASF, Bayer, DaimlerChrysler and Volkswagen

set up, Quintos et al. (2001) test for a constant tail index coefficient in Asian equity data against a

break at an unknown point.

Time varying copulae and structural breaks are combined in Patton (2006). The dependence

structure across exchange rates is modelled with time varying copulae, with the parameter specified

to evolve as an ARMA-type process. Tests for a structural break in the copula parameter at a known

change point are performed and strong evidence of a break is found. In similar fashion, Rodriguez

(2007) models the dependence across sets of Asian and Latin American stock indexes using switching

copulae, where the time varying copula parameter follows regime-switching dynamics. Common to

these papers is that they employ a fixed (parametric) structure for the pattern of changes in the

copula parameter.

In this paper we follow a semiparametric approach, since we do not specify the parameter-changing

scheme. We rather select the time varying copula parameter locally. The choice is performed

via an adaptive estimation under the assumption of local homogeneity: for every time point there

exists an interval of time homogeneity in which the copula parameter can be well approximated by

a constant. This interval is recovered from the data using local change point analysis. This does

not imply that the model follows a change point structure: the adaptive estimation also applies


when the parameter smoothly varies from one value to another, see Spokoiny (2007).

Figure 1 shows the time varying copula parameter determined by our procedure for a portfolio

composed of daily prices of six German equities and the “global” copula parameter, shown by

a constant horizontal line. The absence of a parametric specification for time variations in the

dependence structure (the dependence dynamics is adaptively obtained from the data) allows for

more flexibility in estimating dependence shifts across time.

The Value-at-Risk (VaR) of a portfolio is determined by the dependence among assets composing

the portfolio. Using copulae with adaptively estimated dependence parameters we estimate the

VaR from DAX portfolios over time. As benchmark procedure we choose RiskMetrics, a widely

used methodology based on conditional normal distributions with a GARCH specification for the

covariance matrix. Backtesting underlines the improved performance of the proposed adaptive time

varying copulae fitting.

This paper is organized as follows: section 2 presents the basic copula definitions, and section 3

discusses the VaR and its estimation procedure. The adaptive copula estimation is presented in

section 4 and applied to simulated data in section 5. In section 6 the performance on real data

of the VaR estimation based on adaptive time varying copulae is compared with the RiskMetrics

approach and evaluated by backtesting.

2 Copulae

Copulae connect marginals to joint distributions, providing a natural way for measuring the depen-

dence structure between random variables. Copulae have been present in the literature since Sklar (1959),

although concepts related to copulae originate in Hoeffding (1940) and Fréchet (1951). Since then,

copula functions have been studied widely in the statistical literature; see Nelsen (1998), Mari and

Kotz (2001) and Franke et al. (2004). The application of copulae in finance and econometrics is


recent, see Embrechts et al. (1999), Embrechts et al. (2003b) and Patton (2004). Cherubini et al.

(2004) and McNeil et al. (2005) provide an overview of copulae for practical problems in finance

and insurance. Further applications can be found in Härdle et al. (2002) and Embrechts et al.

(2003a).

Assuming absolutely continuous distributions throughout this paper, we have from Sklar’s theorem

that for a d -dimensional distribution function F with marginals F1, . . . , Fd , there exists a unique

copula C : [0, 1]d → [0, 1] satisfying

F (x1, . . . , xd) = C{F1(x1), . . . , Fd(xd)} (2.1)

for every (x1, . . . , xd) ∈ Rd . Conversely, for a random vector X = (X1, . . . , Xd)> with distribution

FX , where Fj is the distribution of Xj , the copula of X is defined as

CX(u1, . . . , ud) = FX{F1^{−1}(u1), . . . , Fd^{−1}(ud)} (2.2)

where uj = Fj(xj) . A prominent copula is the Gaussian

CΨ^{Ga}(u1, . . . , ud) = FY {Φ^{−1}(u1), . . . , Φ^{−1}(ud)}

where Φ(u) stands for the one-dimensional standard normal cdf and FY is the distribution of

Y = (Y1, . . . , Yd)> ∼ Nd(0,Ψ) where Ψ is a correlation matrix. The Gaussian copula represents

the dependence structure of the multivariate normal distribution. In contrast, the Clayton copula

given by

Cθ(u1, . . . , ud) = ( Σ_{j=1}^{d} uj^{−θ} − d + 1 )^{−θ^{−1}} , (2.3)

for θ > 0 , expresses asymmetric dependence structures. Joe (1997) provides a summary of many

copula families and a detailed description of their properties.
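Equation (2.3) translates directly into code. The following sketch (plain Python; the name `clayton_cdf` is ours, for illustration) evaluates the Clayton copula and can be used to check its boundary behaviour, e.g. that setting all but one argument to 1 recovers the remaining margin:

```python
def clayton_cdf(u, theta):
    """Clayton copula C_theta(u_1, ..., u_d) from equation (2.3), theta > 0."""
    if theta <= 0:
        raise ValueError("theta must be positive")
    # sum of u_j^(-theta) minus (d - 1), then the outer power -1/theta
    s = sum(uj ** (-theta) for uj in u) - len(u) + 1
    return s ** (-1.0 / theta)
```

As theta approaches 0 the value approaches the independence copula, the product of the arguments; for theta > 0 the copula expresses positive lower-tail dependence.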


For estimating the copula parameter, consider a sample {xt}Tt=1 of realizations from the random

vector X = (X1, . . . , Xd)> with univariate marginal distributions Fj(xj) , j = 1, . . . , d . Using

(2.1) the log-likelihood reads as

L(θ; x1, . . . , xT ) = Σ_{t=1}^{T} log c{F1(xt,1), . . . , Fd(xt,d); θ} (2.4)

where c(u1, . . . , ud) = ∂^d C(u1, . . . , ud) / (∂u1 · · · ∂ud) is the density of the copula C . In the canonical maximum like-

lihood (CML) method for maximizing (2.4), θ maximizes the pseudo log-likelihood with empirical

marginal cdf’s

L(θ) = Σ_{t=1}^{T} log c{F̂1(xt,1), . . . , F̂d(xt,d); θ}

where

F̂j(x) = {1/(T + 1)} Σ_{t=1}^{T} 1{Xt,j ≤ x} .

Note that the previous expression differs from the usual empirical cdf by the denominator T + 1 . This guarantees that the estimated marginals lie strictly inside the unit interval, avoiding the infinite values the copula density can take on the boundary of the unit cube, see McNeil et al. (2005). Joe (1997) and Cherubini et al. (2004) provide a detailed exposition of inference methods for copulae.
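The transform with the T + 1 denominator can be sketched as follows (the helper name `pseudo_observations` is our choice; a rank-based implementation of the empirical cdfs above):

```python
def pseudo_observations(sample):
    """Map a sample of d-dimensional points to pseudo-observations
    u_{t,j} = Fhat_j(x_{t,j}) = rank(x_{t,j}) / (T + 1), so every u lies
    strictly inside (0, 1)."""
    T = len(sample)
    d = len(sample[0])
    pseudo = []
    for t in range(T):
        row = []
        for j in range(d):
            # rank of x_{t,j} within the j-th margin (ties counted <=)
            rank = sum(1 for s in range(T) if sample[s][j] <= sample[t][j])
            row.append(rank / (T + 1))
        pseudo.append(tuple(row))
    return pseudo
```

The resulting pseudo-observations are the inputs to the pseudo log-likelihood L(θ) of the CML method.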

3 Value-at-Risk and Copulae

The dependency (over time) between asset returns is especially important in risk management

since the profit and loss (P&L) function determines the Value-at-Risk. More precisely, Value-

at-Risk of a portfolio is determined by the multivariate distribution of risk factor increments. If

w = (w1, . . . , wd)> ∈ Rd denotes a portfolio of positions on d assets and St = (St,1, . . . , St,d)>

a non-negative random vector representing the prices of the assets at time t , the value Vt of the


portfolio w is given by

Vt = Σ_{j=1}^{d} wj St,j .

The random variable

Lt = (Vt − Vt−1) (3.1)

called profit and loss (P&L) function, expresses the change in the portfolio value between two

subsequent time points. Defining the log-returns Xt = (Xt,1, . . . , Xt,d)> where Xt,j = log St,j −

log St−1,j and log S0,j = 0 , j = 1, . . . , d , (3.1) can be written as

Lt = Σ_{j=1}^{d} wj St−1,j {exp(Xt,j) − 1} . (3.2)

The distribution function of Lt is given by Ft,Lt(x) = Pt(Lt ≤ x) . The Value-at-Risk at level α

from a portfolio w is defined as the α -quantile from Ft,Lt :

VaRt(α) = F^{−1}_{t,Lt}(α). (3.3)

It follows from (3.2) and (3.3) that Ft,Lt depends on the specification of the d -dimensional distri-

bution of the risk factors Xt . Thus, modelling their distribution over time is essential to obtain

the quantiles (3.3).

The RiskMetrics technique, a widely used methodology for VaR estimation, assumes that risk

factors Xt follow a conditional multivariate normal distribution L(Xt|Ft−1) =N(0,Σt) , where

Ft−1 = σ(X1, . . . , Xt−1) is the σ -field generated by the first t− 1 observations, and estimates the

covariance matrix Σt for one-period return as

Σt = λ0 Σt−1 + (1 − λ0) Xt−1 X⊤t−1

where the parameter λ0 is the so-called decay factor. λ0 = 0.94 provides the best backtesting


results for daily returns according to Morgan/Reuters (1996).
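The EWMA recursion above can be sketched in a few lines (a plain-Python illustration; the function name `riskmetrics_cov` is ours):

```python
def riskmetrics_cov(returns, lam=0.94):
    """EWMA covariance recursion Sigma_t = lam * Sigma_{t-1}
    + (1 - lam) * x_{t-1} x_{t-1}^T, started from the zero matrix.
    `returns` is a sequence of d-dimensional return vectors."""
    d = len(returns[0])
    sigma = [[0.0] * d for _ in range(d)]
    for x in returns:
        # rank-one update with the latest return outer product
        sigma = [[lam * sigma[i][j] + (1 - lam) * x[i] * x[j]
                  for j in range(d)] for i in range(d)]
    return sigma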

In the copula-based approach one first corrects for the contemporaneous volatility in the log-returns

process:

Xt,j = σt,j εt,j , j = 1, . . . , d (3.4)

where εt = (εt,1, . . . , εt,d)> are standardised innovations and

σ^2_{t,j} = E[X^2_{t,j} | Ft−1] (3.5)

is the conditional variance given Ft−1 . The innovations εt have joint distribution Fεt and εt,j have

continuous marginal distributions Ft,j , j = 1, . . . , d . The distribution of innovations is described

by

Fεt(x1, . . . , xd) = Cθ{Ft,1(x1), . . . , Ft,d(xd)} (3.6)

where Cθ is a copula belonging to a parametric family C = {Cθ, θ ∈ Θ} . For details on the above

model specification see Chen and Fan (2006a), Chen and Fan (2006b) and Chen et al. (2006). For

the Gaussian copula with Gaussian marginals we recover the conditional Gaussian RiskMetrics

framework.

To obtain the Value-at-Risk in this setup, the dependence parameter and the distribution function

of the residuals are estimated from a sample of log-returns and used to generate Monte Carlo P&L

samples. Their quantiles at different levels are the estimators for the Value-at-Risk, see Embrechts

et al. (1999). The whole procedure can be summarized as follows:

For a portfolio w ∈ Rd and a sample {xj,t}Tt=1 , j = 1, . . . , d of log-returns, the Value-at-Risk

at level α is estimated according to the following steps, see Härdle et al. (2002) and Giacomini and

Härdle (2005):

1. determination of innovations {εt}Tt=1 by e.g. deGARCHing


2. specification and estimation of marginal distributions Fj(εj)

3. specification of a parametric copula family C and estimation of the dependence parameter θ

4. generation of Monte Carlo sample of innovations ε and losses L

5. estimation of VaR(α) , the empirical α -quantile of FL .
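Steps 4 and 5 can be sketched as follows, assuming for illustration Clayton dependence with standard normal marginals and already-deGARCHed volatilities. All function names and the frailty sampling scheme are our choices for this sketch, not code from the paper:

```python
import math
import random
import statistics

def sample_clayton(d, theta, rng):
    """One draw (u_1, ..., u_d) from a d-dimensional Clayton copula, theta > 0,
    via the Marshall-Olkin frailty construction:
    V ~ Gamma(1/theta, 1), E_j ~ Exp(1), u_j = (1 + E_j / V) ** (-1/theta)."""
    v = rng.gammavariate(1.0 / theta, 1.0)
    return [(1.0 + rng.expovariate(1.0) / v) ** (-1.0 / theta) for _ in range(d)]

def var_clayton(weights, prices, sigmas, theta, alpha, n_sim=5000, seed=7):
    """Monte Carlo P&L samples from Clayton-dependent innovations with
    standard normal marginals (an assumption of this sketch); losses are
    formed as in (3.2) and the VaR is the empirical alpha-quantile."""
    rng = random.Random(seed)
    norm = statistics.NormalDist()
    losses = []
    for _ in range(n_sim):
        u = sample_clayton(len(weights), theta, rng)
        x = [s * norm.inv_cdf(uj) for s, uj in zip(sigmas, u)]  # X_t,j = sigma_t,j * eps_t,j
        losses.append(sum(w * p * (math.exp(xj) - 1.0)
                          for w, p, xj in zip(weights, prices, x)))
    losses.sort()
    return losses[int(alpha * n_sim)]
```

For a two-asset portfolio with 1% daily volatilities the 5%-quantile comes out as a small negative number, i.e. a loss, as expected.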

4 Modelling with Time Varying Copulae

Very similar to the RiskMetrics procedure, one can perform a moving (fixed length) window estimation of the copula parameter. This procedure, though, does not fine-tune local changes in dependencies. In fact, the joint distribution Ft,εt from (3.6) is modelled as Ft,εt = Cθt{Ft,1(·), . . . , Ft,d(·)} with probability measure Pθt . The moving window of fixed width will estimate a θt for each t , but has clear limitations. The choice of a small window results in high-pass filtering and hence in a very unstable estimate with huge variability. The choice of a large window leads to poor sensitivity of the estimation procedure and to a long delay in reacting to changes in the dependence measured by the parameter θt .

In order to choose an interval of homogeneity we employ a local parametric fitting approach as

introduced by Polzehl and Spokoiny (2006), Belomestny and Spokoiny (2007) and Spokoiny (2007).

The basic idea is to select for each time point t0 an interval It0 = [t0 −mt0 , t0] of length mt0 in

such a way that the time varying copula parameter θt can be well approximated by a constant

value θ . The question is of course how to select mt0 in an online situation from historical data.

The aim should be to select It0 as close as possible to the so-called “oracle” choice interval. The “oracle” choice is defined as the largest interval I = [t0 − m∗t0 , t0] for which the small modelling

bias condition (SMB):

∆I(θ) = Σ_{t∈I} K(Pθt , Pθ) ≤ ∆ (4.1)


holds. Here θ is constant and

K(Pϑ, Pϑ′) = Eϑ log{p(y, ϑ)/p(y, ϑ′)}

denotes the Kullback-Leibler divergence. In such an oracle choice interval, the parameter θt0 = θt|t=t0 can be “optimally” estimated from I = [t0 − m∗t0 , t0] . The error and risk bounds are

calculated in Spokoiny (2007). It is important to mention that the concept of local parametric

approximation allows to treat in a unified way the case of “switching regime” models with sponta-

neous changes of parameters and the “smooth transition” case when the parameter varies smoothly

in time.

The “oracle” choice of the interval of homogeneity depends of course on the unknown time varying

copula parameter θt . The next section presents an adaptive (data driven) procedure which “mim-

ics” the “oracle” in the sense that it delivers the same accuracy of estimation as the “oracle” one.

The trick is to find the largest interval in which the hypothesis of a locally constant copula parameter

is supported. The Local Change Point (LCP) detection procedure originates from Mercurio and

Spokoiny (2004) and sequentially tests the hypothesis: θt is constant (i.e. θt = θ ) within some

interval I (local parametric assumption).

4.1 LCP procedure

The LCP procedure for a given point t0 starts with a family of nested intervals I0 ⊂ I1 ⊂ I2 ⊂

. . . ⊂ IK of the form Ik = [t0 −mk, t0] . The sequence mk determines the length of these interval

“candidates”. Every interval leads to an estimate θk of the copula parameter θt0 from the interval

Ik . The procedure selects one interval I out of the given family and, therefore, the corresponding

estimate θ = θI.

The idea of the procedure is to sequentially screen each interval Tk = Ik \ Ik−1 = [t0−mk, t0−mk−1]


Figure 2: Choice of the intervals Ik and Tk

and check each point τ ∈ Tk as a possible change point location, see next section for more details.

The family of intervals Ik and Tk are illustrated in figure 2. The interval Ik is accepted if no

change point is detected within T1, . . . ,Tk . If the hypothesis of homogeneity is rejected for an

interval-candidate Ik the procedure stops and selects the latest accepted interval. The formal

description reads as follows.

Start the procedure with k = 1 and

1. test the hypothesis H0,k of no changes within Tk using the larger testing interval Ik+1 ;

2. if no change points were found in Tk , then Ik is accepted. Take the next interval Tk+1

and repeat the previous step until homogeneity is rejected or the largest possible interval

IK = [t0 −mK , t0] is accepted;

3. if H0,k is rejected for Tk , the estimated interval of homogeneity is the last accepted interval

I = Ik−1 .

4. if the largest possible interval IK is accepted we take I = IK .
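The selection loop above can be sketched as follows. This is a simplified skeleton in which the homogeneity test is a caller-supplied function and each candidate is tested as a whole; in the actual procedure every point of Tk is checked via the likelihood-ratio test of section 4.1.1. The function name and interval representation are our choices:

```python
def lcp_select(data, t0, test_interval, m0=20, c=1.25, K=10):
    """Skeleton of the LCP interval selection at time t0.

    Screens the nested candidates I_k = [t0 - m_k, t0] with m_k = [m0 * c**k]
    and returns the last accepted interval as an index pair (start, t0).
    `test_interval(data, start, t0)` must return True when a change point
    is detected on [start, t0]."""
    lengths = [int(m0 * c ** k) for k in range(K + 1)]
    accepted = (t0 - lengths[0], t0)          # the smallest interval I_0 is always accepted
    for k in range(1, K + 1):
        start = max(0, t0 - lengths[k])
        if test_interval(data, start, t0):    # homogeneity rejected for I_k:
            return accepted                   # keep the last accepted interval
        accepted = (start, t0)                # I_k accepted, try the next candidate
        if start == 0:                        # no more history available
            break
    return accepted
```

With a test that never fires, the largest candidate IK is returned; with a test that always fires, the procedure falls back to I0.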

We estimate the copula dependence parameter θ at time instant t0 from observations in I ,

assuming the homogeneous model within I , i.e. we define θt0 = θI. We also denote by Ik the

largest accepted interval after k steps of the algorithm and by θk the corresponding estimate of

the copula parameter.


It is worth mentioning that the objective of the estimation algorithm described above is not to detect

the points of change of the copula parameter, but rather to determine the current dependency

structure from historical data by selecting an interval of time homogeneity. This distinguishes our

approach from other procedures for estimating a time varying parameter by change point detection.

A visible advantage of our approach is that it equally applies to the case of spontaneous changes

in the dependency structure and in the case of smooth transition in the copula parameter. The

obtained dependence structure can be used for different purposes in financial engineering, the most

prominent being the calculation of the VaR, see also section 6.

The theoretical results from Spokoiny and Chen (2007) and Spokoiny (2007) indicate that the

proposed procedure provides rate-optimal estimation of the underlying parameter when it varies

smoothly with time. It has also been shown that the procedure is very sensitive to structural

breaks and provides the minimal possible delay in detection of changes, where the delay depends

on the size of change in terms of Kullback-Leibler divergence.

4.1.1 Test of homogeneity against a change point alternative

In the homogeneity test against a change point alternative we want to check every point of an interval T (recall figure 2), here called the tested interval, for a possible change in the dependence structure at that moment. To perform this check, we assume a larger testing interval I of the form I = [t0 − m, t0] , so that T is an internal subset of I . The null hypothesis H0 means that ∀t ∈ I , θt = θ , i.e., the observations in I follow the model with dependence parameter θ . The alternative hypothesis H1 claims that ∃τ ∈ T such that θt = θ1 for t ∈ J = [τ, t0] and θt = θ2 ≠ θ1 for t ∈ Jc = [t0 − m, τ [ , i.e. the parameter θ changes spontaneously at some point τ ∈ T . Figure 3 depicts I , T and the

subintervals J and Jc determined by the point τ ∈ T .

Let LI(θ) and LJ(θ1) + LJc(θ2) be the log-likelihood functions corresponding to H0 and H1

respectively. The likelihood ratio test for the single change point with known fixed location τ can


Figure 3: Testing interval I , tested interval T , subintervals J and Jc for a point τ ∈ T

be written as:

TI,τ = max_{θ1,θ2} {LJ(θ1) + LJc(θ2)} − max_θ LI(θ)
     = LJ(θJ) + LJc(θJc) − LI(θI)
     = LJ + LJc − LI .

The test statistic for unknown change point location is defined as

TI = max_{τ∈T} TI,τ .

The change point test compares this test statistic with a critical value zI which may depend on

the interval I . One rejects the hypothesis of homogeneity if TI > zI . The estimator of the change

point is then defined as

τ̂ = argmax_{τ∈T} TI,τ .

4.1.2 Parameters of the LCP procedure

In order to apply the LCP testing procedure for local homogeneity, we have to specify some parameters. This includes the selection of interval candidates Ik (or, equivalently, the respective tested intervals Tk ) and the choice of critical values zI . One possible parameter set that has been successfully employed in other simulations is presented below.


Selection of interval candidates I and internal points TI : it is useful to take the set of

numbers mk defining the lengths of Ik and Tk in the form of a geometric grid. We fix the value m0

and define mk = [m0 c^k] for k = 1, 2, . . . ,K and c > 1 , where [x] means the integer part of x .

We set Ik = [t0 −mk, t0] and Tk = [t0 −mk, t0 −mk−1] for k = 1, 2, . . . ,K , see figure 2.
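In code, the geometric grid and the corresponding candidate intervals read (an illustrative helper; the name `interval_grid` is ours):

```python
def interval_grid(t0, m0=20, c=1.25, K=10):
    """Geometric grid m_k = [m0 * c**k] and nested intervals
    I_k = [t0 - m_k, t0], k = 0, ..., K, as index pairs."""
    m = [int(m0 * c ** k) for k in range(K + 1)]  # [x] = integer part
    I = [(t0 - mk, t0) for mk in m]
    return m, I
```

With the paper's default m0 = 20 and c = 1.25 the first lengths are 20, 25, 31, 39, 48, so the candidates grow by roughly 25% per step.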

Choice of the critical values zI . The algorithm is in fact a multiple testing procedure. Mercurio and Spokoiny (2004) suggested selecting the critical value zk to provide a prescribed overall type I error probability of rejecting the hypothesis of homogeneity in the homogeneous situation. Here, we follow another proposal from Spokoiny and Chen (2007), which focuses on the estimation losses caused by a “false alarm” (in our case, obtaining a homogeneity interval that is too small) rather than on its probability.

In the homogeneous situation with θt ≡ θ for all t ∈ Ik+1 , the desirable behavior of the procedure is that after the first k steps the selected interval coincides with Ik and the corresponding adaptive estimate θ̂k coincides with the non-adaptive estimate θ̃k obtained by assuming homogeneity within Ik , that is, there is no “false alarm”. On the contrary, in case of a “false alarm” the selected interval is smaller than Ik , and hence the adaptive estimate θ̂k has larger variability than θ̃k . This means that a “false alarm” at the early steps of the procedure is more critical than at the final steps, as it may lead to selecting an estimate with very high variance. The difference between θ̂k and θ̃k can naturally be measured by the value LIk(θ̃k, θ̂k) = LIk(θ̃k) − LIk(θ̂k) , normalized by the risk R(θ∗) of the non-adaptive estimate θ̃k :

R(θ∗) = max_{k≥1} Eθ∗ |LIk(θ̃k, θ∗)|^{1/2} .

The conditions we impose read as:

Eθ∗ |LIk(θ̃k, θ̂k)|^{1/2} ≤ ρ R(θ∗), k = 1, . . . ,K, θ∗ ∈ Θ . (4.2)

The critical values zk are selected as minimal values providing these constraints. In total we have

K conditions to select K critical values z1, . . . , zK . These values zk can be sequentially selected


by Monte Carlo simulation where one simulates under H0 : θt = θ∗ , ∀t ∈ IK . The parameter

ρ defines how conservative the procedure is. A small ρ leads to larger critical values and hence

to a conservative, less sensitive procedure, while an increase in ρ results in a more sensitive

procedure at the cost of stability. For details, see Spokoiny and Chen (2007) or Spokoiny (2007).

5 Simulated Examples

In this section we apply the LCP procedure to simulated data with dependence structure given

by the Clayton copula. We generate sets of 6-dimensional data with a sudden jump in the

dependence parameter given by

θt =  ϑa   if −390 ≤ t ≤ 10
      ϑb   if 10 < t ≤ 210

for different values of the pair (ϑa, ϑb) : one of them is fixed at 0.1 (close to the independence

parameter) while the other is set to larger values.

The LCP procedure is set up with a geometric grid defined by m0 = 20 and c = 1.25 . The critical

values z1, . . . , zK selected according to (4.2) for different ρ and θ∗ are displayed in table 1. The

default choice for ρ , based on our experience, is ρ = 0.5 . For fixed ρ , variation in θ∗ has

very little influence on the obtained critical values; therefore we use the zk obtained with θ∗ = 1.0 and

ρ = 0.5 in our implementation.

Figure 4 shows the pointwise median and quantiles of the estimated parameter θt for distinct

values of (ϑa, ϑb) , based on 100 simulations. The detection delay δ at rule r ∈ [0, 1] for a jump of

size γ = θt − θt−1 at t is expressed by

δ(t, γ, r) = min{u ≥ t : θu = θt−1 + rγ} − t (5.1)


Figure 4: Pointwise median (full), 0.25 and 0.75 quantiles (dotted) from θ̂t . True parameter θt (dashed) with ϑa = 0.10 , ϑb = 0.50 , 0.75 and 1.00 (left, top to bottom) and ϑb = 0.10 , ϑa = 0.50 , 0.75 and 1.00 (right, top to bottom). Based on 100 simulations from Clayton copula, estimated with LCP, m0 = 20 , c = 1.25 and ρ = 0.5


           θ∗ = 0.5                θ∗ = 1.0                θ∗ = 1.5
 k   ρ=0.2  ρ=0.5  ρ=1.0    ρ=0.2  ρ=0.5  ρ=1.0    ρ=0.2  ρ=0.5  ρ=1.0
 1    3.64   3.29   2.88     3.69   3.29   2.84     3.95   3.49   2.96
 2    3.61   3.14   2.56     3.43   2.91   2.35     3.69   3.02   2.78
 3    3.31   2.86   2.29     3.32   2.76   2.21     3.34   2.80   2.09
 4    3.19   2.69   2.07     3.04   2.57   1.80     3.14   2.55   1.86
 5    3.05   2.53   1.89     2.92   2.22   1.53     2.95   2.65   1.49
 6    2.87   2.26   1.48     2.92   2.17   1.19     2.83   2.04   0.94
 7    2.51   1.88   1.02     2.64   1.82   0.56     2.62   1.79   0.31
 8    2.49   1.72   0.35     2.33   1.39   0.00     2.35   1.33   0.00
 9    2.18   1.23   0.00     2.03   0.81   0.00     2.10   0.60   0.00
10    0.92   0.00   0.00     0.82   0.00   0.00     0.79   0.00   0.00

Table 1: Critical values zk(ρ, θ∗) for m0 = 20 and c = 1.25 . Clayton copula, based on 5000 simulations

and represents the number of steps necessary for the estimated parameter to reach the r -fraction

of a jump in the real parameter. Detection delays are proportional to the probability of a type II

error, i.e., the probability of accepting homogeneity in case of a jump. Thus, tests with higher power

correspond to lower detection delays. The Kullback-Leibler divergences for upward and downward

jumps are proportional to the power of the respective homogeneity tests. Figure 5 shows that for

Clayton copulae the Kullback-Leibler divergence is higher for upward than for downward jumps.
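Definition (5.1) translates directly into code. In the sketch below (helper name ours) the pre-jump value is read off the estimated path as a proxy for θt−1, and the target is the r-fraction of the jump in the appropriate direction:

```python
def detection_delay(theta_hat, t, gamma, r):
    """Detection delay (5.1): steps after the jump of size gamma at time t
    until the estimated path first covers the r-fraction of the jump.
    Returns None if the fraction is never reached within the sample."""
    target = theta_hat[t - 1] + r * gamma  # theta_{t-1} + r * gamma
    for u in range(t, len(theta_hat)):
        if (gamma > 0 and theta_hat[u] >= target) or \
           (gamma < 0 and theta_hat[u] <= target):
            return u - t
    return None
```

Applied to the simulated paths, averaging this quantity over replications gives the mean delays reported in table 2.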

The descriptive statistics for detection delays to jumps at t = 11 for different jumps are in table 2.

The mean detection delay decreases with γ = ϑb − ϑa and is higher for downward than for upward

jumps. Figure 6 displays the mean detection delays against jump size for upward and downward

jumps.

The LCP procedure is also applied to simulated data with a smooth transition in the dependence

parameter given by


(ϑa, ϑb)        r     mean   std dev.   max   min
(0.50, 0.10)  0.25    9.06     7.28      56     0
              0.50   13.64     9.80      60     0
              0.75   21.87    14.52      89     3
(0.75, 0.10)  0.25    5.16     4.24      21     0
              0.50    8.85     5.55      25     0
              0.75   16.72    10.37      64     3
(1.00, 0.10)  0.25    4.47     2.94      12     0
              0.50    7.94     4.28      22     0
              0.75   14.79     7.38      62     5
(0.10, 0.50)  0.25    8.94     6.65      36     0
              0.50   14.21     9.06      53     0
              0.75   21.43    12.15      68     0
(0.10, 0.75)  0.25    9.00     4.80      25     0
              0.50   14.30     5.96      40     3
              0.75   21.00    10.97      75     6
(0.10, 1.00)  0.25    7.39     3.67      19     0
              0.50   13.10     4.13      22     2
              0.75   20.13     7.34      55    10

Table 2: Statistics for detection delay δ calculated as in (5.1) at rule r , based on 100 simulations from Clayton copula, m0 = 20 , c = 1.25 and ρ = 0.5


Figure 5: Kullback-Leibler divergences K(0.10, ϑ) (full) and K(ϑ, 0.10) (dashed), Clayton copula


Figure 6: Mean detection delays (dots) at rule r = 0.75 , 0.50 and 0.25 from top to bottom. Left:ϑb = 0.10 (upward jump). Right: ϑa = 0.10 (downward jump), based on 100 simulations fromClayton copula, m0 = 20 , c = 1.25 and ρ = 0.5

θt =  ϑa                             if −350 ≤ t ≤ 50
      ϑa + {(t − 50)/100}(ϑb − ϑa)   if 50 < t ≤ 150
      ϑb                             if 150 < t ≤ 350

Figure 7 depicts the pointwise median and quantiles of the estimated parameter θt and the true

parameter θt .


Figure 7: Pointwise median (full), 0.25 and 0.75 quantiles (dotted) from θ̂t and true parameter θt (dashed). Based on 100 simulations from Clayton copula, estimated with LCP, m0 = 20 , c = 1.25 and ρ = 0.5


6 Empirical Results

In this section the Value-at-Risk of portfolios of DAX stocks is estimated using RiskMetrics

(RM) and the time varying copula procedures moving window (MW) and Local Change Point

(LCP).

Two different groups of 6 stocks listed on the DAX are used to compose the portfolios. Group 1

contains stocks from DaimlerChrysler, Volkswagen, Allianz, Münchener Rückversicherung, Bayer

and BASF. The stocks from group 2 are from Siemens, ThyssenKrupp, Schering, Henkel, E.ON

and Lufthansa. The portfolio values are calculated using the daily closing prices of the stocks

from 01.01.2000 to 31.12.2004. There are 1270 observations for each stock (data available at

http://sfb649.wiwi.hu-berlin.de/fedc).

Stocks from group 1 belong to three different industries: automotive (Volkswagen and DaimlerChrysler), insurance (Allianz and Münchener Rückversicherung) and chemical-pharmaceutical (Bayer and BASF), while group 2 is composed of stocks from five distinct industries: electrotechnical (Siemens), energy (E.ON), metallurgical (ThyssenKrupp), airlines (Lufthansa) and chemical-pharmaceutical (Schering and Henkel).

The VaR estimation with time varying copulae follows the approach described in section 3. The

selected copula belongs to the Clayton family (2.3) because it has a natural interpretation and is

well advocated in risk management applications. In line with the stylized facts for financial returns,

Clayton copulae model asymmetric dependences and joint quantile exceedances in lower orthants,

which are essential for VaR calculations, see McNeil et al. (2005).

The log-returns corresponding to stock j are modelled as in (3.4) and (3.5) by

Xt,j = σt,jεt,j


and the innovations εt are distributed according to (3.6). We estimate the conditional variances

σ2t,j by exponential smoothing techniques for every time point t :

σ2t,j = (eλ − 1)

∑s<t

e−λ(t−s)X2s,j

where λ = 1/20. Our experience indicates that different specifications for the conditional mean and

conditional variance of log-returns have negligible influence on the copula parameter estimates.
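The smoothing step above can be computed with a simple recursion: A_{t+1} = e^{−λ}(A_t + X_t²) accumulates the weighted sum of squared past returns, so that σ̂²_t = (e^λ − 1) A_t. A minimal sketch (function and variable names are our own):

```python
import numpy as np

def exp_smooth_var(x, lam=1 / 20):
    """Exponentially smoothed variance estimates:
    sigma^2_t = (e^lam - 1) * sum_{s<t} e^{-lam*(t-s)} * x_s^2,
    computed via the recursion A_{t+1} = e^{-lam} * (A_t + x_t^2)."""
    x = np.asarray(x, dtype=float)
    sig2 = np.empty_like(x)
    acc = 0.0  # A_t = sum_{s<t} e^{-lam*(t-s)} x_s^2; empty sum at t = 0
    for t in range(len(x)):
        sig2[t] = (np.exp(lam) - 1.0) * acc
        acc = np.exp(-lam) * (acc + x[t] ** 2)
    return sig2
```

The weights (e^λ − 1)e^{−λk}, k ≥ 1, sum to one, so for a series with constant squared returns the estimate converges to that constant.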

The empirical distribution of the obtained residuals ε̂_{t,j} is used for the copula estimation. In the

moving window approach the size of the estimating window is fixed at 250; for the LCP procedure,

following subsection 4.1.2, we set ρ = 0.5 and take the family of interval candidates as a geometric grid

with m0 = 20 and c = 1.25. These parameters were chosen based on our simulation experience.

For details on the robustness of the reported results with respect to the choice of m0

and c we refer to Spokoiny (2007).

The performance of the VaR estimation is evaluated by backtesting. At each time t the

estimated Value-at-Risk at level α for a portfolio w is compared with the realization l_t of the

corresponding P&L function, an exceedance occurring for each l_t smaller than VaR_t(α). The ratio

of the number of exceedances to the number of observations gives the exceedance ratio

\[ \hat{\alpha}_w(\alpha) = \frac{1}{T} \sum_{t=1}^{T} \mathbf{1}\{l_t < \widehat{VaR}_t(\alpha)\}. \]

As the first 250 observations are used for estimation, t = 1 corresponds to 01.01.2001 and T = 1020 to 31.12.2004.
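The backtest then reduces to counting exceedance days. A minimal sketch (names are our own):

```python
import numpy as np

def exceedance_ratio(pnl, var):
    """Fraction of days on which the realized P&L l_t falls below the
    estimated VaR_t(alpha): (1/T) * sum_t 1{l_t < VaR_t}."""
    pnl = np.asarray(pnl, dtype=float)
    var = np.asarray(var, dtype=float)
    return float(np.mean(pnl < var))
```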

We compute the exceedance ratios at levels α for a set W = {w*, w_n; n = 1, ..., 100} of 101

portfolios. Each portfolio w_n is a realization of a random variable uniformly distributed on the simplex

\[ \mathcal{S} = \Big\{ (x_1, \ldots, x_6) \in \mathbb{R}^6 : \sum_{i=1}^{6} x_i = 1,\; x_i \geq 0 \Big\}. \]

The portfolio w^* = (w_1^*, \ldots, w_6^*)^\top is equally weighted, w_i^* = 1/6, i = 1, \ldots, 6. For the level α we calculate the average exceedance ratio



Figure 8: Estimated copula parameter θ̂_t for group 1, LCP method, m0 = 20, c = 1.25 and ρ = 0.5, Clayton copula

(AW), the squared sum of differences to level (SW) and the relative distance to level (DW) over the

portfolios in W as

\[ AW = \frac{1}{|\mathcal{W}|} \sum_{w \in \mathcal{W}} \hat{\alpha}_w, \qquad SW = \sum_{w \in \mathcal{W}} (\hat{\alpha}_w - \alpha)^2, \qquad DW = \frac{1}{\alpha} \Big\{ \sum_{w \in \mathcal{W}} (\hat{\alpha}_w - \alpha)^2 \Big\}^{1/2}. \]
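The random portfolios and the aggregate backtest metrics can be sketched as follows. A uniform draw from the simplex is a symmetric flat Dirichlet draw; variable and function names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

# 101 portfolios: the equally weighted w* plus 100 realizations uniformly
# distributed on the 6-dimensional simplex (Dirichlet(1,...,1) is uniform there).
W = np.vstack([np.full(6, 1 / 6), rng.dirichlet(np.ones(6), size=100)])

def var_metrics(alpha_hat, alpha):
    """Aggregate the exceedance ratios alpha_hat of all portfolios in W:
    average (AW), squared deviations from the nominal level (SW) and the
    relative distance to level (DW)."""
    alpha_hat = np.asarray(alpha_hat, dtype=float)
    aw = alpha_hat.mean()
    sw = np.sum((alpha_hat - alpha) ** 2)
    dw = np.sqrt(sw) / alpha  # note DW = sqrt(SW) / alpha
    return aw, sw, dw
```

Since DW is just sqrt(SW) scaled by 1/α, it makes the deviation from the nominal level comparable across the levels α = 0.05 and α = 0.01.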

The performance of the time varying copula methods and RiskMetrics in VaR estimation is compared

on the basis of DW.

The dependence parameter estimated with LCP for stocks from group 1 is shown in figure 8. The

P&L for the equally weighted portfolio w* and the VaR at level 0.05 estimated with the LCP, MW

and RM methods are shown in figure 9. The upper part of table 3 shows exceedance ratios for portfolios w*, w1 and w2;

the lower part shows the average exceedance ratio, the sum of squared differences to

level and the relative distance to level for the portfolios of W at levels α = 0.05 and 0.01.

Figure 10 shows the dependence parameter estimated with LCP for stocks from group 2. The P&L,


                     RM               MW               LCP
α (×10²)         5.00    1.00     5.00    1.00     5.00    1.00

α̂_{w*} (×10²)    6.21    1.28     5.81    0.69     5.52    0.69
α̂_{w1} (×10²)    6.40    1.28     6.40    0.69     6.31    0.79
α̂_{w2} (×10²)    5.32    1.58     5.91    0.99     5.62    0.99

AW (×10²)        5.92    1.42     5.52    0.64     5.41    0.66
SW (×10²)        0.94    0.20     0.51    0.15     0.38    0.14
DW               1.92    4.52     1.43    3.83     1.23    3.74

Table 3: Exceedance ratios for portfolios w*, w1 and w2; average, sum of squared differences and relative distance to level, across levels and methods, group 1


Figure 9: P&L (dots), Value-at-Risk at level α = 0.05 (line), exceedances (crosses), estimated with LCP (above), MW (middle) and RM (below), for equally weighted portfolio w*, group 1



Figure 10: Estimated copula parameter θ̂_t for group 2, LCP method, m0 = 20, c = 1.25 and ρ = 0.5, Clayton copula

the estimated VaR at level 0.05 and the exceedance times for the equally weighted portfolio under the LCP,

MW and RM methods are shown in figure 11. Table 4 shows exceedance ratios for portfolios w*, w1 and

w2, the average exceedance ratio, the sum of squared differences to level and the relative distance to

level, across methods and for levels α = 0.05 and 0.01.

The results should be interpreted in light of the composition of groups 1 and 2. The higher

industry concentration in group 1 is reflected in higher values of the estimated dependence parameter

compared with those obtained for group 2, as seen in figures 8 and 10. Based on DW, the

LCP procedure outperforms the MW (second) and RM (third) methods in VaR estimation at

both levels in group 1. For stocks from group 2 the time varying copula methods outperform

RM at level 0.01, while RM does better at level 0.05. Clayton copula modelling performs better

at low quantiles for both levels of concentration. The performance at higher quantiles improves

when dependence increases, reflecting the fact that the Kullback-Leibler divergence between the Clayton

and Gaussian copulae increases with dependence.
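The low-quantile advantage is consistent with the copula's tail behaviour: the Clayton copula has lower tail dependence coefficient λ_L = 2^{−1/θ}, which increases with θ, while the Gaussian copula has λ_L = 0. A one-line check (function name is our own):

```python
def clayton_lower_tail_dependence(theta):
    """Lower tail dependence coefficient of the Clayton copula,
    lambda_L = 2**(-1/theta); increasing in theta, and 0 for the Gaussian."""
    return 2.0 ** (-1.0 / theta)
```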

7 Conclusion

In this paper we modelled the dependence structure across German equity returns using time varying

copulae with adaptively estimated parameters. In contrast to Patton (2006) and Rodriguez (2007),


                     RM               MW               LCP
α (×10²)         5.00    1.00     5.00    1.00     5.00    1.00

α̂_{w*} (×10²)    5.42    1.58     4.53    0.39     4.53    0.30
α̂_{w1} (×10²)    6.01    1.67     5.22    0.69     5.02    0.69
α̂_{w2} (×10²)    5.22    1.48     4.93    0.69     4.63    0.59

AW (×10²)        5.43    1.50     4.53    0.55     4.43    0.51
SW (×10²)        0.34    0.28     0.55    0.22     0.58    0.26
DW               1.17    5.32     1.49    4.73     1.53    5.10

Table 4: Exceedance ratios for portfolios w*, w1 and w2; average, sum of squared differences and relative distance to level, across levels and methods, group 2


Figure 11: P&L (dots), Value-at-Risk at level α = 0.05 (line), exceedances (crosses), estimated with LCP (above), MW (middle) and RM (below), for equally weighted portfolio w*, group 2


we neither specified the dynamics nor assumed regime switching models for the copula parameter,

allowing more flexibility in time varying dependence modelling. The parameter choice was data

driven: the estimating intervals were determined from the data by the LCP procedure, and the whole

estimation provided oracle accuracy.

We used time varying copulae to estimate the Value-at-Risk of different portfolios of German

stocks. The method was compared by means of backtesting with RiskMetrics, a widely used VaR

estimation approach based on multivariate normality. Two groups of stocks were selected, with

higher and lower industry concentration.

For the concentrated portfolios, the adaptive copula estimation achieved the best VaR estimation

performance among the three methods. On average, however, all methods overestimated

the VaR: in terms of capital requirement, a financial institution would be required to set aside more

capital than necessary to guarantee the desired confidence level. For the weakly correlated

portfolios, RiskMetrics estimated the VaR better at the higher (0.05) quantile while the time varying

copula methods performed better at the lower quantile. The RiskMetrics method overestimated the

VaR at both levels, while the time varying copula methods underestimated it at low levels.

The standard approach, based on the normal distribution, failed to model dependence in low orthants

across the selected German stocks: the correlation structure contains nonlinearities that cannot be

captured by the multivariate normal distribution. This is in accordance with results from Longin

and Solnik (2001), Ang and Chen (2002) and Patton (2006), where empirical evidence for asymmetric

dependencies in international markets, U.S. equities and exchange rates is found, indicating that

correlations increase in market downturns.

Our results based on time varying copulae suggest that similar effects may be found in German equity

markets, i.e. dependence across assets increases with decreasing returns. In addition, our adaptive

copula estimation procedure provides a specification-free method to obtain the dynamic behaviour

of asymmetries in correlations across assets, allowing new insights into understanding dependencies

among markets and economies.

8 Acknowledgements

Financial support from the Deutsche Forschungsgemeinschaft via SFB 649 "Ökonomisches Risiko",

Humboldt-Universität zu Berlin, is gratefully acknowledged.

References

Andrews, D. W. K. (1993). Tests for parameter instability and structural change with unknown

change point. Econometrica, 61:821–856.

Andrews, D. W. K. and Ploberger, W. (1994). Optimal tests when a nuisance parameter is present

only under the alternative. Econometrica, 62:1383–1414.

Ang, A. and Chen, J. (2002). Asymmetric correlations of equity portfolios. Journal of Financial

Economics, 63:443–494.

Belomestny, D. and Spokoiny, V. (2007). Spatial aggregation of local likelihood estimates with

applications to classification. The Annals of Statistics, to appear.

Chen, X. and Fan, Y. (2006a). Estimation and model selection of semiparametric copula-based

multivariate dynamic models under copula misspecification. Journal of Econometrics, 135:125–

154.

Chen, X. and Fan, Y. (2006b). Estimation of Copula-Based Semiparametric Time Series Models.

Journal of Econometrics, 130:307–335.

Chen, X., Fan, Y., and Tsyrennikov, V. (2006). Efficient Estimation of Semiparametric Multivariate

Copula Models. Journal of the American Statistical Association, 101:1228–1240.


Cherubini, U., Luciano, E., and Vecchiato, W. (2004). Copula Methods in Finance. Wiley, Chichester.

Cont, R. (2001). Empirical properties of asset returns: stylized facts and statistical issues. Quantitative Finance, 1:223–236.

Embrechts, P., Hoeing, A., and Juri, A. (2003a). Using Copulae to Bound the Value-at-Risk for

Functions of Dependent Risks. Finance and Stochastics, 7(2):145–167.

Embrechts, P., Lindskog, F., and McNeil, A. (2003b). Modelling Dependence with Copulas and

Applications to Risk Management. Handbook of Heavy Tailed Distributions in Finance, 8:329–

384.

Embrechts, P., McNeil, A., and Straumann, D. (1999). Correlation and Dependence in Risk Management: Properties and Pitfalls. In Risk Management: Value at Risk and Beyond.

Franke, J., Härdle, W., and Hafner, C. (2004). Statistics of Financial Markets. Springer-Verlag, Heidelberg.

Fréchet, M. (1951). Sur les tableaux de corrélation dont les marges sont données. Annales de l'Université de Lyon, Sciences Mathématiques et Astronomie, 14:5–77.

Giacomini, E. and Härdle, W. (2005). Value-at-Risk Calculations with Time Varying Copulae. Bulletin of the International Statistical Institute, 55th Session Sydney, 55.

Granger, C. (2003). Time Series Concept for Conditional Distributions. Oxford Bulletin of Economics and Statistics, 65:689–701.

Hansen, B. E. (2001). The New Econometrics of Structural Change: Dating Breaks in U.S. Labor

Productivity. Journal of Economic Perspectives, 15:117–128.

Hoeffding, W. (1940). Maßstabinvariante Korrelationstheorie. Schriften des mathematischen Seminars und des Instituts für angewandte Mathematik der Universität Berlin, 5:181–233.


Härdle, W., Kleinow, T., and Stahl, G. (2002). Applied Quantitative Finance. Springer-Verlag, Heidelberg.

Hu, L. (2006). Dependence patterns across financial markets: a mixed copula approach. Applied

Financial Economics, 16:717–729.

Joe, H. (1997). Multivariate Models and Dependence Concepts. Chapman & Hall, London.

Longin, F. and Solnik, B. (2001). Extreme Correlation on International Equity Markets. The

Journal of Finance, 56:649–676.

Mari, D. and Kotz, S. (2001). Correlation and Dependence. Imperial College Press, London.

McNeil, A. J., Frey, R., and Embrechts, P. (2005). Quantitative Risk Management. Concepts,

techniques and tools. Princeton University Press, Princeton, NJ.

Mercurio, D. and Spokoiny, V. (2004). Estimation of Time Dependent Volatility via Local Change

Point Analysis with Applications to Value-at-Risk. Annals of Statistics, 32:577–602.

J.P. Morgan/Reuters (1996). RiskMetrics Technical Document. http://www.riskmetrics.com/rmcovv.html, New York.

Nelsen, R. (1998). An Introduction to Copulas. Springer-Verlag, New York.

Patton, A. (2004). On the Out-of-Sample Importance of Skewness and Asymmetric Dependence

for Asset Allocation. Journal of Financial Econometrics, 2:130–168.

Patton, A. (2006). Modelling asymmetric exchange rate dependence. International Economic

Review, 47:527–556.

Perron, P. (1989). The great crash, the oil price shock and the unit root hypothesis. Econometrica,

57:1361–1401.

Polzehl, J. and Spokoiny, V. (2006). Propagation-separation approach for likelihood estimation.

Probability Theory and Related Fields, 135:335–362.


Quintos, C., Fan, Z., and Phillips, P. C. B. (2001). Structural Change Tests in Tail Behaviour and the Asian Crisis. Review of Economic Studies, 68:633–663.

Rodriguez, J. C. (2007). Measuring Financial Contagion: A Copula Approach. Journal of Empirical

Finance, 14(3):401–423.

Sklar, A. (1959). Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris, 8:229–231.

Spokoiny, V. (2007). Local Parametric Methods in Nonparametric Estimation. Springer-Verlag,

Berlin, Heidelberg.

Spokoiny, V. and Chen, Y. (2007). Multiscale local change point detection with applications to

value-at-risk. Weierstrass Institute Berlin, preprint.

Stock, J. H. (1994). Unit Roots, Structural Breaks and Trends. Handbook of Econometrics, 4:2739–

2841.

Zivot, E. and Andrews, D. W. K. (1992). Further evidence on the great crash, the oil price shock

and the unit root hypothesis. Journal of Business & Economic Statistics, 10:251–270.
