Operational Risk

With the advent of Basel II in 2006, Operational Risk became part of the Pillar 1 capital charge for banks. This whitepaper discusses the challenges in building an advanced Operational Risk capital model.

Bastiaan Reinink
1. Introduction

With the advent of Basel II in 2006, Operational Risk became part of the Pillar 1 capital charge for banks. Basel II offers three approaches for calculating an Operational Risk charge: the Basic Indicator Approach and the Standardized Approach, which both set Operational Risk capital as a percentage of gross income, and the Advanced Measurement Approach (AMA), which allows banks to build their own models for capital calculations.
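To illustrate how simple the non-model approaches are, the Basic Indicator Approach can be sketched in a few lines. The 15% alpha factor is the Basel II value; the income figures in the example are invented for illustration only.

```python
# Sketch of the Basic Indicator Approach (BIA) capital charge.
# Under Basel II, BIA capital is alpha (15%) times the average of the
# positive annual gross income figures over the preceding three years;
# years with zero or negative gross income are excluded from both the
# numerator and the denominator.

ALPHA = 0.15  # Basel II alpha factor for the BIA


def bia_capital(gross_income_3y):
    """Capital charge from the last three years' gross income figures."""
    positive = [gi for gi in gross_income_3y if gi > 0]
    if not positive:
        return 0.0
    return ALPHA * sum(positive) / len(positive)


# Illustrative figures in EUR millions (one loss-making year is excluded)
print(bia_capital([120.0, 150.0, -30.0]))  # 0.15 * (120 + 150) / 2 = 20.25
```

The single input, gross income, is exactly why the text below calls this a crude risk indicator: nothing about the institution's actual loss experience or control environment enters the calculation.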
Alongside some restrictions (diversification benefits and capital reduction through risk mitigation are only allowed after approval by the regulator), the two most important requirements are that a model be sufficiently granular and that it use four data sources:
1. Internal data
2. External data
3. Scenarios
4. Business Environment and Internal Control Factors
Since the publication of Basel II, the Bank for International Settlements (BIS) has published a number of documents giving further guidance on the modeling of Operational Risk in an AMA setting. Basel III does not impose changes in the area of Operational Risk.
In 2014, Solvency II will also make Operational Risk part of Pillar 1 for insurers. Solvency II, however, gives very little guidance on how to calculate the exact capital charge for Operational Risk, beyond stating that it needs to be based on volumes in terms of premiums and technical provisions.
This paper describes the challenges associated with finding a modeling solution that fits
an institution’s demands for an advanced Operational Risk capital model.
2. Why an advanced model?

There are several reasons to choose a more advanced Operational Risk capital model over the simpler approaches described in Basel II.
The Basic Indicator and Standardized Approach are very simple; they use gross income as a crude risk indicator and so do not give a realistic representation of an institution's risk profile. An advanced model, specifically designed for an institution, can give the broad representation and insight that is required. It also emphasizes the low-frequency, high-impact nature of some Operational Risks, allowing more comprehensive management of Operational Risk and bringing into focus the areas where the largest gains can be achieved.
Finally, an advanced approach allows an institution to translate good risk management practices into a reduction in the Operational Risk capital charge.
White paper: Operational Risk December 2012
3. Qualities of Operational Risks

Operational Risk losses come from a wide variety of sources, covering sub-risks as diverse as fraud, damage to buildings and failing processes.
Even if losses do fall within the same broad category, their loss-generating processes can differ immensely; skimming and a cyber-attack would both be classified under "External Fraud", but other than that they have very little in common. Worse still, each Operational Risk loss is in a very real sense unique; fraud committed by person X will be very different from fraud committed by person Y. Only at a deeper level can common drivers, such as lacking controls, be identified. These underlying drivers, however, are difficult if not impossible to quantify and will only give a very limited explanation for Operational Risk losses.
Added to that, Operational Risk events are in general rare; even for broad categories of risks, the amount of data available will be limited. This means that categories must be combined to obtain enough volume for statistical analyses and modeling. And even when grouping losses together, some units of measure will still be only sparsely populated.
Also, Operational Risk loss distributions can be extremely heavy-tailed; it is not uncommon to see a maximum (observed) loss that is hundreds, thousands or even more times larger than the average loss. Adequately modeling such extreme distributions places heavy demands on the available data. It also means that the uncertainty around outcomes will be huge; a currently observed largest loss of EUR 100 million might have to give way to a EUR 1 billion loss tomorrow, which is sure to impact results severely.
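To make this heavy-tailedness concrete, a small simulation shows how far the largest simulated loss can sit above the mean. The lognormal parameters below are arbitrary illustrative choices, not calibrated to any real loss data.

```python
# Illustrative only: a heavy-tailed severity distribution makes the
# maximum observed loss dwarf the average loss.
import numpy as np

rng = np.random.default_rng(42)

# Lognormal severities with a high sigma give a very heavy right tail.
losses = rng.lognormal(mean=10.0, sigma=2.5, size=10_000)

ratio = losses.max() / losses.mean()
print(f"max / mean loss ratio: {ratio:,.0f}")
```

With these parameters the largest of 10,000 simulated losses is typically several hundred times the mean, which mirrors the pattern described above: a single new extreme observation can reshape the whole picture.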
Finally, Operational Risk losses are not independent; underlying drivers such as
the level of controls will impact multiple categories. This means that some sort
of dependence structure needs to be incorporated in an Operational Risk model.
For the bulk of losses standard methods may suffice, but the interesting question
is how to capture the dependence between the truly large (and capital driving)
losses.
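One common way to introduce such a dependence structure is a copula. The sketch below uses a Gaussian copula with an assumed correlation; the marginal severity distributions and all parameters are illustrative, not calibrated. Correlated normals are mapped to uniforms, and the uniforms are then pushed through each category's quantile function.

```python
# Sketch: inducing dependence between two loss categories via a
# Gaussian copula. All parameters are illustrative assumptions.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
rho = 0.6                        # assumed correlation between categories
cov = [[1.0, rho], [rho, 1.0]]

z = rng.multivariate_normal([0.0, 0.0], cov, size=50_000)
nd = NormalDist()
u = np.vectorize(nd.cdf)(z)      # uniform marginals, Gaussian dependence

# In practice the quantile function of a fitted severity distribution
# would be applied to u; here, illustrative lognormal marginals are used.
loss_a = np.exp(8.0 + 2.0 * np.vectorize(nd.inv_cdf)(u[:, 0]))
loss_b = np.exp(7.0 + 2.5 * np.vectorize(nd.inv_cdf)(u[:, 1]))
```

Note that the Gaussian copula has no tail dependence, so it may understate exactly the joint extreme losses highlighted above; a Student-t copula is a common alternative when dependence between the truly large losses needs to be captured.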
White paper series RiskQuest
4. Little convergence

Since Basel II there has been a steady move towards more comprehensive modeling of Operational Risk. Over the years a number of standard practices have evolved, but there is surprisingly little convergence towards a single model that is accepted industry-wide.
A Loss Distribution Approach, where the frequency of losses and the severity (amount) of losses are modeled separately, is now the most common high-level model architecture. Within this architecture, however, there are widely varying possibilities for the deeper-level modeling.
For example, there is broad debate over whether more focus should be given to observed losses, to internally developed scenarios, or whether a combination of the two serves best.
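A minimal Monte Carlo sketch of such a Loss Distribution Approach, assuming (purely for illustration) a Poisson frequency and a lognormal severity:

```python
# Minimal Loss Distribution Approach sketch: Poisson frequency and
# lognormal severity, compounded by Monte Carlo simulation, with capital
# read off as a high quantile of the annual aggregate loss.
# All parameters are illustrative assumptions, not calibrated values.
import numpy as np

rng = np.random.default_rng(1)

LAMBDA = 25            # assumed mean number of losses per year
MU, SIGMA = 9.0, 2.0   # assumed lognormal severity parameters
N_YEARS = 20_000       # number of simulated years

counts = rng.poisson(LAMBDA, size=N_YEARS)
annual = np.array([rng.lognormal(MU, SIGMA, size=n).sum() for n in counts])

# 99.9% quantile over a one-year horizon, the soundness standard
# Basel II sets for AMA models
capital = np.quantile(annual, 0.999)
print(f"99.9% annual aggregate loss: {capital:,.0f}")
```

A real model would fit frequency and severity per unit of measure, aggregate across units with a dependence structure, and blend in the other data sources; this sketch only shows the compounding mechanics.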
5. Available data

Unlike for Market Risk capital modeling, there is no large body of high-quality, long-history "market" data available for Operational Risk. And unlike for Credit Risk, there is no pool of loss-generating instruments over which statistics are easy to quantify.
Instead, there are four data sources (as identified by Basel II) that have to be used in modeling the Operational Risk capital charge. Each of these has its own strengths and drawbacks:
Internal data is the most relevant data source for an institution, but in general very little of it is available. Even if an institution started gathering data years back, it is impossible to fill broad sub-risk categories to an acceptable level. Also, due to changes in business makeup and improvements in risk management, older data tends to become obsolete at a progressive pace.
External data is in general richer, but requires serious scrutiny to assess its applicability to the institution itself. It can contain extreme biases, especially in the case of publicly gathered data.
Scenarios take large amounts of time and effort to create and maintain. Moreover, due to the heavy-tailedness of Operational Risk losses, the demands on experts are high, as loss estimates need to be made at very high quantiles (the 1-in-100-years loss at a minimum, and preferably 1-in-1000 or beyond).
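The mapping from such scenario statements to distribution quantiles is mechanical; a hypothetical helper (introduced here only for illustration) makes it explicit:

```python
# Sketch: converting a "1-in-T-years" scenario statement into the
# quantile of the annual loss distribution that it pins down.
def return_period_to_quantile(t_years: float) -> float:
    """A 1-in-T-years loss corresponds to the (1 - 1/T) quantile."""
    return 1.0 - 1.0 / t_years


print(return_period_to_quantile(100))   # 0.99
print(return_period_to_quantile(1000))  # 0.999
```

The difficulty is not the arithmetic but the elicitation: experts are being asked for estimates at quantiles far beyond anything in their direct experience.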
Business Environment and Internal Control Factors (BEICFs) are very diffuse; different institutions use widely different definitions, making the creation of best practices nearly impossible. BEICFs are generally not consistently quantified, making their direct use in modeling very difficult.
6. The challenge

Operational Risk consists of broad categories of very different sub-risks, which would require a highly granular model to capture adequately. Operational Risk loss distributions are heavy-tailed, demanding large amounts of data to model correctly, especially if dependence between large losses is to be captured adequately. Data, however, has to come from a number of sources, each with its own severe limitations.
The challenge for each institution with Operational Risk modeling ambitions, then, is finding the best way of using the available data sources to create a level of granularity that satisfies internal and external requirements, whilst giving sufficient attention to the potential heavy-tailedness of each unit of measure and capturing the dependence between large losses.
A trade-off needs to be made between the time, effort and money that an institution is willing to invest in obtaining high-quality data and how sophisticated a model can be built. An institution that has been gathering high-quality internal data for a long time can capitalize on that. For another institution, a more natural choice might be to put in the effort to roll out scenarios to a deep level.
There is no one-size-fits-all. But with some clear thought, a fitting solution can be found for any institution.
This report is prepared by RiskQuest for general guidance on matters of interest only, and is not intended to provide specific advice on any matter, nor is it intended to be comprehensive. No representation or warranty (express or implied) is given as to the accuracy or completeness of the information contained in this publication, and, to the extent permitted by law, RiskQuest does not accept or assume any liability, responsibility or duty of care for any consequences of you or anyone else acting, or refraining to act, in reliance on the information contained in this publication or for any decision based on it. If specific advice is required, or if you wish to receive further information on any matters referred to in this paper, please speak directly with your contact at RiskQuest or those listed in this publication. Our general conditions apply to services rendered from us, to our quotations, offers, propositions and calculations.
© 2012 RiskQuest. All rights reserved.
Weesperzijde 33, Amsterdam
+31 20 693 29 48
RiskQuest is an Amsterdam-based consultancy firm specialised in risk models for the financial sector. The importance of these models in measuring risk has strongly increased, supported by external regulations such as Basel II/III and Solvency II.
Advanced risk models form the basis of our service offer. These models may be employed in a front-office environment (acceptance, valuation & pricing) or in a mid-office context (risk management and measurement).
The business areas that we cover are lending, financial markets and insurance. In relation to the models, we provide advice on: strategic issues; model development; model validation; model use.