
A practitioner’s guide to the

Advanced Measurement Approach to operational risk under Basel II

Prepared by Tim Jenkins, Jason Slade and Arthur Street

Presented to the Institute of Actuaries of Australia 2005 Biennial Convention 8 May – 11 May 2005

This paper has been prepared for the Institute of Actuaries of Australia's (Institute) Biennial Convention 2005. The Institute Council wishes it to be understood that opinions put forward herein are not necessarily those of the Institute and the Council is not responsible for those opinions.

Copyright of this paper is owned by PricewaterhouseCoopers

The Institute will ensure that all reproductions of the paper acknowledge the Author/s as the author/s, and include the above copyright statement:

The Institute of Actuaries of Australia Level 7 Challis House 4 Martin Place

Sydney NSW Australia 2000 Telephone: +61 2 9233 3466 Facsimile: +61 2 9233 3446

Email: [email protected] Website: www.actuaries.asn.au



A practitioner’s guide to the Advanced Measurement Approach to operational risk under Basel II

Tim Jenkins, Jason Slade and Arthur Street 1

Abstract

The paper introduces actuaries intending to practice in this field to some of the issues and methods involved in implementing the Basel II Advanced Measurement Approach to operational risk in a bank. Although it is unavoidable that judgment-based assessments supported by only limited data must be used, Basel II nevertheless requires a bank to follow the discipline of applying formal actuarial and statistical methods. This is made all the more difficult because the bank must arrive at its operational risk capital requirement (i.e. a measure of the adverse tail of the aggregate loss distribution) with a very high degree of confidence. Recognising that the modelling involved carries an unavoidable element of uncertainty, especially in the adverse tails of loss distributions, the paper discusses how to approach the task of maintaining the necessary balance between theory and practice, a requirement that well suits the practical training and experience of the actuary.

1 1 [email protected], [email protected], [email protected]


1 Introduction

In June 2004, after extensive consultation, the Basel Committee on Banking Supervision of the Bank for International Settlements ('the Committee') released its report titled International Convergence of Capital Measurement and Capital Standards, also known as Basel II ('the Revised Framework'). This Revised Framework builds upon and retains key elements of the banking capital adequacy framework of the 1988 Accord; the basic structure of the 1996 Market Risk Amendment regarding the treatment of market risk; and the definition of eligible capital. Basel II looks to an improved banking capital adequacy framework that rests on three pillars:

1. specific risk-based minimum capital requirements
2. supervisory practice2, over a bank's total risk, including business and strategic risk, and
3. disclosure of risk measures, methods and management.

It places emphasis on fostering continuous improvement in a bank's risk management capabilities; enhanced supervision; and greater market discipline. The intention is to raise risk consciousness and to focus attention on the links between risk, capital required and management behaviour.

The existing Accord is based on the concept of a capital ratio where the numerator represents the amount of capital a bank has available and the denominator is a measure (referred to as 'risk-weighted assets') of the risks faced by the bank. The resulting capital ratio must be no less than 8%. Under the Revised Framework the definition of the numerator and the minimum ratio of 8% are unchanged, but the measurement of the risks facing the bank that is reflected in the definition of risk-weighted assets will be substantially different.

The 1988 Accord and 1996 Amendment cover two types of risk explicitly in the definition of risk-weighted assets, namely credit risk and market risk, where the latter includes interest rate risk, equity position risk and foreign exchange risk. There is no change to the treatment of market risk under the Revised Framework. On the other hand there is substantial change to the treatment of credit risk and, for the first time, there is explicit treatment of operational risk that will result in a measure of operational risk being included in the denominator of a bank’s capital ratio3.
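The aggregation described in footnote 3 can be sketched numerically. This is a minimal sketch with purely illustrative figures, not data for any real bank:

```python
# Illustrative figures only ($b): not taken from any actual institution.
credit_rwa = 200.0          # risk-weighted assets under the credit risk rules
market_risk_capital = 0.8   # capital requirement for market risk
op_risk_capital = 1.6       # capital requirement for operational risk

# Footnote 3: convert the market and operational risk capital charges to
# RWA-equivalents by multiplying by 12.5 (the reciprocal of the 8% minimum
# capital ratio) and add the result to the credit risk RWA.
total_rwa = credit_rwa + 12.5 * (market_risk_capital + op_risk_capital)

capital_held = 20.0
capital_ratio = capital_held / total_rwa   # must be no less than 8%
print(f"Total RWA: ${total_rwa:.0f}b, capital ratio: {capital_ratio:.1%}")
```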

While banks may use basic or standardised approaches, a feature of the Revised Framework is potentially greater use of risk and capital assessments based on a bank’s own systems.

2 The Supervisor in Australia is APRA.


3 Total risk-weighted assets are determined by multiplying the capital requirement for market risk and operational risk by 12.5 (i.e. the reciprocal of the minimum capital ratio of 8%) and adding the result to the risk-weighted assets determined under the rules for credit risk.


Under the basic approach to operational risk, a bank must hold capital at a uniform 15% of annual gross income (averaged over the previous 3 years), while under the standardised approach capital is the aggregate of business line specific calculations, with the percentage varying by business line as set out in Table 1.

Table 1: Basel standardised approach

Business line            Capital as % of annual gross income
Corporate finance        18%
Trading & sales          18%
Retail banking           12%
Commercial banking       15%
Payment & settlement     18%
Agency services          15%
Asset management         12%
Retail brokerage         12%
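A rough sketch of these two simpler calculations follows. It is our illustration rather than the Committee's text: it applies the 15% basic-approach percentage and the Table 1 betas to three-year average gross income per line, and glosses over the Revised Framework's detailed treatment of negative gross income:

```python
# Gross income by business line over the last three years ($m) -- illustrative.
gross_income = {
    "Corporate finance": [120, 140, 130],
    "Retail banking":    [900, 950, 1000],
    "Asset management":  [300, 310, 320],
}
betas = {"Corporate finance": 0.18, "Retail banking": 0.12, "Asset management": 0.12}

# Basic approach: 15% of total annual gross income, averaged over 3 years.
total_by_year = [sum(year) for year in zip(*gross_income.values())]
basic_capital = 0.15 * sum(total_by_year) / 3

# Standardised approach: the percentage varies by business line (Table 1).
standardised_capital = sum(
    betas[line] * sum(years) / 3 for line, years in gross_income.items()
)
print(f"Basic: ${basic_capital:.0f}m, Standardised: ${standardised_capital:.0f}m")
```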

By contrast, under the Advanced Measurement Approach (AMA) for operational risk a bank may use its own method of assessing its exposure to operational risk, so long as it is sufficiently comprehensive and systematic, and has demonstrable integrity. In particular, a bank pursuing this approach must have a measurement system that attaches a verifiable relative importance to each of internal loss data, external loss data, scenario analysis, and its business environment and control systems. The operational risk and capital measurement system must be closely integrated into the day-to-day risk management process of the bank and be capable of supporting an allocation of economic capital for operational risk in a manner that creates incentives to improve operational risk management in the business lines.

The benefit of the advanced rather than the basic or standardised approaches is that capital will better reflect a bank's own risk profile, leading to a sharper and more relevant connection between risk, capital and management behaviour. Implementation of the advanced approaches such as the AMA for operational risk is planned for year-end 2007, with an extended period of testing and parallel running ahead of that. The use of an advanced approach must be approved by the supervisor.

The aim of this paper is to introduce actuaries intending to practice in this field to some of the methods involved in implementing the Advanced Measurement Approach. Although the Revised Framework makes it clear that the Committee is not specifying the approach or distributional assumptions to be used, it is apparent from the history of the Revised Framework's development that what is contemplated involves an application of actuarial science to the derivation of an aggregate loss distribution from the respective distributions of the frequency and severity of operational loss. Good quality past loss data is scarce and often irrelevant to the prevailing position of a business line, leading to a situation in which quantitative data must be combined with judgment-based assessments of relevant loss distribution parameters made by people who know the business and the prevailing outlook for it.


Notwithstanding that the resulting 'data' includes a substantial element of judgment, the Revised Framework nevertheless requires us to follow the discipline of applying formal actuarial and statistical methods to it. This is made all the more difficult because we must arrive at a capital requirement (i.e. a measure of the worst case outcome) with a very high degree of confidence.

It is as well, therefore, to be clear from the outset about the purpose of statistical modelling in operational risk. Its purpose is twofold: to provide a conceptual and philosophical framework that draws upon what we would do if our data were strictly quantitative; and to provide a systematic calculation and information management framework to keep our approach organised and to improve the quality and relevance of data over time. However, such modelling has an unavoidable element of uncertainty and we cannot get carried away with its apparent precision, especially at the worst case outcome end of the distributions. Such models will not give a result with a known range of statistical error. Accordingly, a balance between theory and practice must be maintained in this application of actuarial science, a situation that should suit the practical training and experience of the actuary.

2 Requirements for the Advanced Measurement Approach

The Basel II AMA requirements are outlined in the Revised Framework, especially ¶ 664-683, and include the principles described below.

Operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. The definition includes legal risk, but excludes strategic and reputation risk. The loss event types within this scope are defined in the Revised Framework (Annex 7).

Regulatory capital must allow for a minimum 3 year (in due course, 5 year) observation period of reliable internal loss data, as well as allow for relevant external loss data, especially with regard to infrequent yet severe losses. The bank must also use scenario analysis based on expert opinion and informed by external loss data. The bank's risk assessment must capture a forward-looking view of factors in its business environment and control systems that can change its operational risk profile.

The bank must reasonably estimate expected loss (EL) and unexpected loss (UL) based on attaching a verifiable relative importance and relevance to each of the sources of insight (i.e. internal and external loss data, scenario analysis and factors reflecting the bank-specific business environment and internal control systems). Regulatory capital will be based on the sum of EL and UL unless the bank can demonstrate that it has measured and otherwise already accounted for its EL, in which case capital can be based on UL alone.

The measurement system must be sufficiently granular to capture the major drivers of operational risk affecting the shape of the tail of the loss estimates (i.e. the worst case outcomes), and the bank must be able to demonstrate that its approach captures these worst case loss events at a confidence level of 99.9% over a 1 year time horizon.

A bank will be allowed to recognise the risk mitigating effect of insurance, but only up to a limit of 20% of its total operational risk capital and even then subject to a number of important conditions. Operational risk estimates for different types of risk event and business line must be added for the purpose of calculating overall regulatory capital, unless risk correlation assumptions take uncertainty (particularly in times of stress) into account and can be validated.

The measurement system must also be closely integrated into the day-to-day processes of the bank and be capable of supporting an allocation of economic capital for operational risk across business lines in a manner that creates incentives to improve business line operational risk management. The system's specifications, parameters, processes and data must be credible, transparent and verifiable; and the system must be internally consistent (and avoid double counting of risk mitigants recognised elsewhere). The bank must maintain rigorous procedures for model development and validation.

While the Basel Committee recognises that analytical approaches for operational risk are still evolving, it does not appear to have recognised that the precision inherent in its requirements is incapable of being achieved because of the difficulties involved in working with what is necessarily fuzzy data.

3 Issues in the loss distribution approach to operational risk

The Revised Framework implies the use of the loss distribution approach to measuring operational risk. Frequency and severity of loss are modelled separately and then combined to arrive at the distribution of aggregate loss in a year. This is done by type of risk event and by business line, and then further combined across risk event types within business lines and across business lines to arrive at business line and enterprise-wide loss distributions respectively.

There are two main complications inherent in this. The first relates to the number of risk event types. Unlike more established analytical risk categories, operational risk springs from many sources4. Because of differences in exposure, the experience arising from each event type differs from line to line, so that each business line needs to have its own loss data collected, recorded and maintained by risk type. Moreover, each of these risk event/business line cells needs assumptions about the underlying frequency and severity distributions, leading to its own assumed aggregate loss distribution.


4 Such as internal and external fraud, flaws in employment practices and workplace safety, breakdowns in the execution of transactions and processes, business disruption, defective or wrongly sold products, bad documentation, poor business practices and so on.


The product of the number of risk event types and the number of business units gives rise to a large number of such cells and hence to difficulties in consistently and reliably classifying and interpreting loss data, to potential complexity in risk and capital management, and to exposure to surprises as new and different types of failure occur. The number of cells can also make it difficult to uncover interdependencies. Another challenge is that of remapping loss data, and maintaining its integrity and applicability, each time a business line is part of an organisational restructure.

The second complication relates to lack of objective and relevant data. Even with 3 or 5 years of loss history, loss incident databases, especially internal ones, may be too recent or too inconsistent in their classification of risk events to provide a basis for model assumptions by themselves. Also, they are often insufficiently relevant to current circumstances or future outlook to be the only source of insight about future loss experience. For these reasons, substantial reliance is also placed on incidents observed in external databases and on scenarios based on the expert judgment of business managers.

This substantial reliance on subjective judgment gives rise to difficulties of consistency between business lines and over time for the same business line, of blind spots caused by blinkered vision, and of conscious or subconscious bias (e.g. a desire to have a business line viewed in a favourable light). As discussed earlier, subjectivity also means that the data on which the mathematical foundations of a model depend includes a substantial qualitative element, and its results are therefore subject to a degree of fuzziness. This fuzziness makes it difficult to know how reliable the results are, particularly when trying to assess the worst case outcomes at the high confidence levels required.

In practice these issues are addressed by keeping a focus on materiality, by using a balanced mix of data and by placing an emphasis on validating it. These are discussed in turn below.

A focus on materiality is essential in order to avoid being overwhelmed by data and to keep attention on the main drivers of risk. This means that we usually need to concentrate on some fraction of the risk cells by performing high level risk profiling to decide which event types are material for a business line. Materiality might be decided using a set of rules based on information from a number of sources, such as a survey of management on exposures, a review of internal loss experience, and a consideration of external loss data. A balanced mix of internal loss data, external loss data and scenario analysis would then be used to estimate the operational loss distributions in material risk cells and to determine the opening operational risk capital for each business line and for the bank as a whole. The weight given to internal data, external data and scenario analysis needs to reflect the assessed credibility of each. Immaterial cells would be watched but probably not measured.

The type of scenario analysis often used as part of the 'data' is a method that relies upon expert business judgment to arrive at assessments of the parameters that define the distributions in each cell (e.g. the mean, and a percentile of loss frequency and severity that is relatively high but still capable of being readily contemplated, such as the 90th percentile). This type of scenario analysis might be called 'assessment-based quantification'. It allows a mathematical framework to be used, but with the caveat that the 'data' is not strictly quantitative. There are, however, two underlying assumptions with this type of scenario analysis: firstly, that business lines are clear about their obligations to internal and external customers and hence about the resulting risks that they are responsible for controlling; and secondly, that the shape of the loss distributions is well enough known to be able to extrapolate from an assessment at a relatively high confidence level (say the 90th percentile) to a very high confidence level (i.e. the 99.9th percentile). A sketch at the end of this section illustrates how sensitive such extrapolation can be.

Scenario analysis can also describe a somewhat different technique in which systems thinking is applied to the business value chain in order that scenarios can be developed and stress tests applied to consider to what extent rare but extreme events could happen. This method, which might be called 'scenario-based stress testing', does not easily lend itself to being incorporated into the same framework as loss data, but it is very useful (along with external loss data) when data is scarce and when losses are infrequent but potentially severe.

Operational risk capital would be broadly updated between periodic reviews by using information from risk and control indicators to adjust the assumed loss distribution parameters. Capital and the materiality of risk cells would be reassessed from first principles at occasional periodic reviews, or in circumstances such as a major organisational restructure or if the emerging loss experience suggested it to be necessary.

Given the fuzzy nature of the 'data', it is important to recognise the need to validate (to the extent that this is possible) the judgments that are a necessary part of measuring operational risk. Each step that involves judgment should have someone suitable responsible for it, have a validation process and have an independent party responsible for validation. Validation should include checks and balances that aim for reasonable consistency and the dampening of bias. All steps in validating data should be documented, and the measurement process should be periodically reviewed.

Operational risk and capital measurement should be part of the governance of operational risk management. Governance needs to provide a sound framework that encourages a culture that properly considers risk and reward, facilitates the communication of risk to top management and the Board, and provides an independent risk management function responsible for the development and monitoring of risk policy.
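As flagged above, the sensitivity of extrapolating from an assessed 90th percentile to the 99.9th can be illustrated with a minimal sketch. It assumes, purely for illustration, that severity is lognormal and that the expert assesses the median and the 90th percentile (both figures hypothetical):

```python
from math import exp, log
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard Normal quantile function

# Hypothetical expert assessments of a single loss severity ($m).
median_loss, p90_loss = 2.0, 5.0

# Lognormal assumption: ln(severity) ~ N(mu, sigma^2), so mu = ln(median)
# and sigma follows from the assessed 90th percentile.
mu = log(median_loss)
sigma = (log(p90_loss) - mu) / z(0.90)

# Extrapolated 99.9th percentile: highly leveraged on the assessed sigma.
p999 = exp(mu + z(0.999) * sigma)
print(f"sigma = {sigma:.2f}, extrapolated 99.9th percentile = ${p999:.1f}m")
```

With these inputs the extrapolated 99.9th percentile is around $18m against an assessed $5m at the 90th; a modest change in the assessed 90th percentile moves the extrapolated figure disproportionately, which is the fuzziness at high confidence levels discussed above.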


4 The mathematical foundation

A mathematical model of operational risk will have significant model and parameter uncertainty. Assuming that this uncertainty and the other issues discussed in the previous section can be addressed, we move on to consider the mathematical foundations. The Consultative Document that preceded the Revised Framework described the approach as one in which the bank estimates, for each risk/business cell, the probability distribution functions for the impact of risk and the frequency of its occurrence separately. From these impact and frequency distributions the bank then computes the combined distribution for operational losses.

The general approach to establishing the initial operational risk capital is to use Monte Carlo simulation to model loss frequency and severity in each risk cell, and then to use a copula to combine the different loss types. The techniques involved in Monte Carlo simulation and in simulating combined risk with a copula are summarised in Appendices A and B respectively. In Appendix C, the simulation method in the case of a t-copula is described and the concept of tail dependence and how it is measured is introduced. In this paper, tail dependence is modelled using the t-copula with 3 degrees of freedom.

To gain an understanding of the important elements involved in applying the simulation we will illustrate the techniques used. It will be clear from the discussion earlier that we must work with data that is scant, poor or based on a subjective assessment. Recognising the limitations that this imposes, we take a rudimentary analytical approach and use simple tools with few parameters.

The mathematical approach we take for each risk type begins with the compound Poisson process, namely the stochastic process

$X(t) = \sum_{k=1}^{N(t)} Y_k, \quad t \ge 0,$

where $\{Y_k\}$ is a sequence of independent identically distributed random variables and $N(t)$ is a Poisson process with parameter $\lambda$, independent of $\{Y_k\}$. In simple terms, for each risk type, loss events occur in accordance with a Poisson process (which can be thought of as a process like buses turning up at a bus stop) and each individual loss event has some randomly determined 'severity' associated with it.

Some important properties of the compound Poisson process are summarised in Appendix D. As described in that Appendix, it is sometimes convenient to assume that the severity sequence $\{Y_k\}$ follows an exponential distribution, which is characterised by a single parameter. However, this assumption is only appropriate when the standard deviation of the loss severity is of the same order as the mean severity and, in order to overcome this limitation, for the purpose of this paper we will use a lognormal distribution to represent severity. While other distributions could be used, and indeed may be necessary in order to represent an actual operational activity, the lognormal is sufficiently versatile to illustrate the subject. We will therefore proceed by assuming that the logarithm of the severity of an individual loss is normally distributed with mean $\mu$ and variance $\sigma^2$.
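The illustration below specifies each lognormal severity by its mean $m$ and standard deviation $s$ rather than by $\mu$ and $\sigma$ directly; the corresponding parameters follow from the standard moment-matching identities (a worked detail we add for completeness; the paper does not state it explicitly):

```latex
% Lognormal severity matched to a target mean m and standard deviation s:
\sigma^2 = \ln\!\left(1 + \frac{s^2}{m^2}\right), \qquad
\mu = \ln m - \frac{\sigma^2}{2}
```

For the FM risk below ($m = s = \$2$m) this gives $\sigma \approx 0.83$ and $\mu \approx 0.35$.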


In our illustration we will consider a bank that faces only two types of risk5:

FM risks (frequent losses with moderate impact)

• These are assumed to happen around 5 times per year, and each loss amount is generally around $2m but can be higher. Occurrence is modelled using a Poisson distribution, recognising that there is a range of possible occurrences. Impact is modelled assuming a lognormal distribution (with a mean of $2m, and a standard deviation taken to be $2m to recognise the lightly skewed nature of the risk).

RS risks (rare losses but with severe impacts)

• These are assumed to happen around once in every 10 years, and each loss amount is generally around $100m but can be much higher. Occurrence is again modelled by a Poisson distribution and impact assuming a lognormal distribution (with a mean of $100m, and a standard deviation in this case of $200m, recognising the more heavily skewed nature of the risk).

Figure 1a below shows the cumulative distribution function of the aggregate annual loss from a single FM risk. Figure 1b shows just the worst case outcomes end of the distribution.

Figure 1a CDF of single FM risk


5 These represent major loss events; the continual noise generated from minor loss events (e.g. those with expected losses less than say $1m) is ignored in our illustration.


Figure 1b CDF of single FM risk at high cumulative probabilities

Table 2 shows the annual aggregate loss corresponding to a range of selected cumulative probabilities. 50% of the time the annual loss will be less than $8.9m and 90% of the time it will be less than $18.3m. (Note these figures are aggregate losses, so that the $18.3m of aggregate loss could be made up of 6 losses of about $3m, or 9 losses of about $2m, or any other combination for that matter). The ‘worst case’ aggregate losses at the 99% and 99.9% confidence levels are $29.9m and $42.8m respectively.

Table 2: Single FM risk

Cumulative probability    Aggregate loss $m
0.5                        8.9
0.75                      13.3
0.9                       18.3
0.95                      21.8
0.99                      29.9
0.995                     33.3
0.999                     42.8
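The single-cell calculation can be sketched in a few lines. This is our reconstruction, not the authors' code; with the FM parameters it should reproduce Table 2 to within simulation error:

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_losses(freq, sev_mean, sev_sd, n_years=1_000_000):
    """Annual aggregate losses for one risk cell: Poisson(freq) loss counts,
    lognormal severities moment-matched to (sev_mean, sev_sd)."""
    sigma2 = np.log(1 + (sev_sd / sev_mean) ** 2)
    mu = np.log(sev_mean) - sigma2 / 2
    counts = rng.poisson(freq, n_years)                       # losses per year
    severities = rng.lognormal(mu, np.sqrt(sigma2), counts.sum())
    year = np.repeat(np.arange(n_years), counts)              # year of each loss
    return np.bincount(year, weights=severities, minlength=n_years)

fm = aggregate_losses(freq=5, sev_mean=2.0, sev_sd=2.0)       # FM cell, $m
for p in (0.5, 0.75, 0.9, 0.95, 0.99, 0.995, 0.999):
    print(f"{p}: {np.quantile(fm, p):.1f}")
```

The same call with freq=0.1, sev_mean=100.0 and sev_sd=200.0 corresponds to the RS cell of Table 3 below.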

Figure 2a below shows the cumulative distribution function of the aggregate annual loss from a single RS risk. Figure 2b shows just the worst case outcomes end of the distribution.


Figure 2a CDF of single RS risk

Figure 2b CDF of single RS risk at high cumulative probabilities

Table 3 shows the annual aggregate loss amount from the RS risk exposure corresponding to selected cumulative probabilities. It can be seen that for 90% of the time the aggregate annual loss is zero (i.e. the loss event does not occur). However, the 'worst case' aggregate losses at the 99% and 99.9% confidence levels are $232m and $856m respectively. Another way of looking at this is to say that once in every 10 occurrences of this type of loss event its severity is greater than $232m, and once in every 100 occurrences its severity is greater than $856m. Modelling the severity of loss events from the RS risk type using a lognormal distribution (with a mean of $100m and a standard deviation of $200m) in this way leads to the capital requirement being very sensitive to the degree of confidence targeted. In moving from a confidence level of 99% to 99.9%, the associated capital requirement more than trebles.

Table 3: Single RS risk

Cumulative probability    Aggregate loss $m
0.5                         0
0.75                        0
0.9                         0
0.95                       43.5
0.99                      232.5
0.995                     366.3
0.999                     856.5

It can be seen that the two types of risk have very different distributions, not just in size but also in shape. Both, however, broadly exemplify types of operational risk faced by banks and so are important for setting operational risk capital.

Combining risk types

We will consider how the capital requirements for a bank vary with the bank's exposure to different combinations of these risk types, working up to a combination of 10 FM risks and 2 RS risks.

Combining two FM types

We will begin by considering two FM risks together and then move on to consider two RS risks. For the purposes of establishing capital requirements, we are primarily interested in the worst case outcomes (or unexpected losses). The graphical technique we use to depict the effect of the two risks in combination is a scatter diagram of the joint simulation accompanied by a density table to quantify that scatter (with relatively greater rounding of the dominant cells). Figures 3a and 3b show results for two FM risks where there is no correlation between them and where there is no tail dependence (i.e. the two FM risks are completely independent).


Figure 3a Two uncorrelated FM risks without tail dependence

Scatter from simulation

Figure 3b Two uncorrelated FM risks without tail dependence

Density of scatter

Figures 3c and 3d below show results where there is no correlation between the risks but there is tail dependence (meaning the risk types tend to become more interrelated in extreme circumstances).


Figure 3c Two uncorrelated FM risks with tail dependence

Scatter from simulation

Figure 3d Two uncorrelated FM risks with tail dependence

Density of scatter

It can be noted from these figures that even where two FM risks are uncorrelated, tail dependence between them induces more frequent occurrence of high aggregated losses from both risks at the same time.


Figures 4a-d develop this illustration further by examining the effect of tail dependence between two risks that have a correlation coefficient of 0.5.

Figure 4a Two correlated FM risks (ρ = 0.5) without tail dependence

Scatter from simulation

Figure 4b Two correlated FM risks (ρ = 0.5) without tail dependence

Density of scatter


Figure 4c Two correlated FM risks (ρ = 0.5) with tail dependence

Scatter from simulation

Figure 4d Two correlated FM risks (ρ = 0.5) with tail dependence

Density of scatter

As before, tail dependence has a material effect on unexpected loss and this effect is amplified by the risks being correlated.


Combining two RS risks

Figures 5a-d show the results of combining two uncorrelated RS risks, both without and with tail dependence.

Figure 5a Two uncorrelated RS risks without tail dependence

Scatter from simulation

Figure 5b Two uncorrelated RS risks without tail dependence

Density of scatter


Figure 5c Two uncorrelated RS risks with tail dependence

Scatter from simulation

Figure 5d Two uncorrelated RS risks with tail dependence

Density of scatter

The frequency of joint unexpected losses in a year from RS events is remote; however, when they do happen jointly their impact is very severe. Tail dependence increases the frequency of this remote event happening, making it necessary to at least consider the impact of tail dependence when setting risk capital. Figures 6a-d below illustrate the impacts with and without tail dependence where the two risks are 50% correlated.

Figure 6a Two correlated RS risks (ρ = 0.5) without tail dependence

Scatter from simulation

Figure 6b Two correlated RS risks (ρ = 0.5) without tail dependence

Density of scatter


Figure 6c Two correlated RS risks (ρ = 0.5) with tail dependence

Scatter from simulation

Figure 6d Two correlated RS risks (ρ = 0.5) with tail dependence

Density of scatter


As would be expected, the joint effect of correlation and tail dependence between the two RS risks increases the incidence of extreme joint occurrences in a year, and these interdependencies need to be considered when setting risk capital.

Combining ten FM risks and two RS risks

Finally, as foreshadowed earlier, we now consider a more realistic situation - involving a bank that faces multiple risks - by combining 10 different FM risks and 2 different RS risks together and examining the effects of correlation and tail dependence in a portfolio of risks. To allow for the effects of correlations between the risks we have assumed serial pairwise correlation (i.e. risk A is related to risk B, B is related to C, etc.); this technique allows for a reducing degree of correlation as risks in the series get further 'apart' (see the sketch after the scenario list below). We have considered six different scenarios for levels of interrelatedness between the risks:

1, 2  No pairwise correlation (both without and with tail dependence)
3, 4  50% pairwise correlation (both without and with tail dependence)
5, 6  90% pairwise correlation (both without and with tail dependence)
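The paper does not spell out the exact decay of the serial pairwise structure; one natural reading, assumed here purely for illustration, is a correlation matrix with entries ρ^|i−j|:

```python
import numpy as np

def serial_corr_matrix(n_risks, rho):
    """Correlation matrix in which adjacent risks have correlation rho and
    correlation decays as risks get further 'apart' (assumed rho ** |i - j|)."""
    idx = np.arange(n_risks)
    return rho ** np.abs(idx[:, None] - idx[None, :])

R = serial_corr_matrix(12, 0.5)   # 10 FM + 2 RS risks, scenarios 3 and 4
print(R[0, :4])                   # [1.0, 0.5, 0.25, 0.125]
```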

Figure 7 below shows the cumulative distribution function of the combined aggregate losses at the worst case end of the distributions for each of these scenarios. (Scenarios 1 to 6 are depicted left to right).

Figure 7 CDF of portfolio of 10 FM & 2 RS risks at high cumulative probabilities


Table 4: Capital requirement for the portfolio of risks

Scenario   Coefficient of serial    Tail          Capital requirement at     Capital requirement at
           pairwise correlation     dependence    99% confidence level $b    99.9% confidence level $b
1          0.0                      No            0.48                       1.28
2          0.0                      Yes           0.52                       1.48
3          0.5                      No            0.52                       1.43
4          0.5                      Yes           0.56                       1.63
5          0.9                      No            0.65                       1.85
6          0.9                      Yes           0.68                       1.93

This table demonstrates how both the shape of the risks in the extremes and the interaction between risks have a significant impact on the resulting capital requirement. In our example, the capital requirement increases 3-fold in moving from 99% to 99.9% confidence. In addition there is a 50% difference in the capital requirements between independent risks and highly correlated risks with tail dependence. This difference would of course be even greater if we had used 100% correlation, which is the default position under the Revised Framework if the risk correlation assumptions cannot be validated. It is also interesting to note that in this illustration the effect on the capital requirement of tail dependence is similar to that of partial correlation (with a coefficient of 0.5).

To provide additional context, Figure 8 below shows the corresponding combined aggregate loss distribution when the portfolio consists of only the two RS risks. Comparing this with Figure 7 gives a sense of what proportion of the capital requirement comes from these two rare and severe risk types alone. For example, in scenario 4, the two RS risks account for about $1.3b of the total capital requirement (at the 99.9% confidence level) of $1.6b.


Figure 8 CDF of the two RS risk portfolio at high cumulative probabilities

To complete the picture, Figure 9 shows the combined aggregate loss distribution from combining just the 10 FM risks. Again, under scenario 4, the 10 FM risks contribute only around $300m of the $1.6 billion capital requirement.

Figure 9 CDF of the 10 FM risk portfolio at high cumulative probabilities


Figures 7, 8 and 9 show the importance of properly identifying the bank's exposure to RS type risks when assessing the bank's overall operational risk capital requirement under the AMA. In this example, properly representing tail dependence and correlation between the two RS risks is as important as including the 10 FM risks. Armed with the insights from this investigation, we now consider their application to the practical task of complying with the AMA under the Revised Framework.

5 Practical application

As set out in Section 3, the Revised Framework implies the use of the loss distribution approach to measuring operational risk. Under this approach, for each type of risk event in each business line, the frequency and severity of loss are modelled separately and then combined to arrive at separate aggregate loss distributions. These aggregate loss distributions are then combined across risk event types and business lines to arrive at business line and enterprise-wide loss distributions respectively.

The number of different risk event/business line combinations (risk cells) is large (e.g. for an 8 business line bank, with 20 different subcategories of risk events, there would be 160 different risk cells). Having to compute an aggregate loss distribution for each risk cell creates a major problem, for the reasons described earlier in Section 3. In particular, it is difficult to measure the individual risk cells consistently and reliably (due to the lack of appropriate data and the high reliance on judgements) and to uncover all of the interdependencies between risk cells.

The mathematical examples in the previous section provided some important insights into what drives the AMA operational risk capital requirement under the Revised Framework in circumstances where a few RS risks are present along with a larger number of FM risks. One of the major points to come out is the degree to which the capital requirement is extremely sensitive to the shape of the tails, especially of the RS risks, the parameters for which can only be estimated in a largely subjective fashion. In the example, two-thirds of the capital requirement relates to increasing the confidence level targeted (i.e. moving up the tail) from 99% to 99.9%. This demonstrates the importance of any extrapolation that is based on subjective assessments made at lower confidence levels, such as at the 90% level.

1 Care should be taken to ensure that the process of ascribing aggregate loss distributions to the risk cells is not overly mechanistic in nature and that all assumptions made are transparent (so that it is easy to understand the level of subjectivity and its impact on the resultant capital requirement).

Another related point is that the capital requirement is primarily driven by the RS type risks. In our examples (irrespective of whether correlations and/or tail dependence are allowed for) the two RS risks account for around 80% of the capital required (at the 99.9% level).

2 For the purposes of measuring capital requirements, the major effort should be concentrated on assessing the aggregate loss distributions for the RS event types.

By definition, these types of events are rare and there would be very little (if any) appropriate data. Accordingly, there is instead heavy reliance on judgement. There may also be a natural tendency among practitioners to focus most of their efforts on measuring the FM (frequent but moderate) type risks, where the level of available data is greater and therefore the aggregate loss distribution is likely to be more reliable, but this could be largely a misdirection of effort. It is important to note, however, that the suggestion to concentrate on the RS more than the FM risk cells is purely for the purposes of measuring capital under the AMA. In the ordinary course of events, it is the FM risks that should require management's day to day attention, and that provide the greater opportunity to improve an organisation's operational effectiveness and to reduce more foreseeable operational losses.

3 The judgement used to assess the RS type aggregate loss distributions should be applied as thoroughly as possible. Bearing in mind that the 90th percentile worst case for a 1 in 10 year event is only likely to happen once in 100 years, it is also necessary to recognise that subjective assessments are likely to be extremely prone to both uncertainty and unintended bias.

The implied aggregate loss distributions for the RS risk cells should be “played back” to the business line/risk event experts to ensure that the judgement they applied does not appear unreasonable (looking at different aspects of the aggregate loss distribution in detail can be used to help tune the judgement applied). Finally, the judgements made and the resulting aggregate loss distribution that are adopted should be validated (to the extent possible) by appropriate experts (independent of the process).

4 Subject of course to the particular mix of risks in each case, the corollary of point 2 is that the FM risks do not drive the capital requirement to anywhere near the same extent, and comparatively less effort should be put into assessing the tails of these risk cells.

The final major point coming out is that tail dependence and correlation have a significant impact. In the example provided, the capital requirement increases from a base amount of $1.28 billion (no correlations, no tail dependence) to $1.93 billion (90% correlated with tail dependence).


Importantly, the impact of moving from no correlations to a 50% coefficient of pairwise correlation adds about 15% of the base amount, and including allowance for tail dependence adds a further 15%, taking the capital required from $1.28 billion to $1.63 billion.

To elaborate on this point further, in this example the issue of understanding correlations and tail dependence among the RS risk cells is more important than whether or not allowance is made for the 10 FM risks at all.

5 When assessing the RS risk cells, significant attention should be given to the degree of inter-relatedness between the risks, particularly in the extreme scenarios.

As a practical example, RS risk cells with a high degree of inter-relatedness might be considered as a whole whereas others that appear to be largely unrelated might be considered separately.

6 As an extension to point 5, and to help ensure that all of the major RS events have been covered and that appropriate allowance has been made for their inter-relatedness, a "whole of bank" top down view should be considered - perhaps supported by 'scenario-based stress testing'.

7 Most important of all, it should always be remembered that the level of capital required to address operational risks is the last line of defence against such risks. Risk measurement is not the same thing as risk management. The situations that could lead to the occurrence of large losses from RS type events should be explored as best they can be, and appropriate business monitoring and risk mitigation strategies employed.

Pulling all of this together, it is our view that the majority of a bank’s capital requirement will generally be driven by its exposure to a handful of key RS risks. It is therefore important that a bank should identify each of these key risks, rank them so that the attention given to quantifying them is commensurate with their importance, understand its exposure to them (particularly in the extremes) and what drives this exposure, and explore how the key risks are related to each other. In doing this, it is important that there is full transparency of the judgments made concerning their potential occurrence and impact. Equally, it is much less important for the purposes of establishing a bank’s capital requirement to focus on the tails of the FM type risks, although for day to day risk management purposes understanding and addressing these risks is of course of critical importance.


Appendix A

Monte Carlo simulation

Monte Carlo simulation is based on the following theorem (and the converse).

Theorem A1: Let $Y$ have a uniform distribution $U(0,1)$. Let $F(x)$ be a continuous distribution function such that $F(a) = 0$ and $F(b) = 1$. Then the random variable $X = F^{-1}(Y)$ is a continuous random variable with distribution function $F(x)$.

Proof: The distribution function of $X$ is $P(X \le x) = P[F^{-1}(Y) \le x]$. However, $F^{-1}(Y) \le x$ is equivalent to $Y \le F(x)$, so $P(X \le x) = P[Y \le F(x)]$. However, $Y$ is $U(0,1)$, so $P(Y \le y) = y$ for $0 < y < 1$. Hence $P(X \le x) = P[Y \le F(x)] = F(x)$ for $0 < F(x) < 1$, so that the distribution function of $X$ is $F(x)$.

Theorem A2 (converse): Let $X$ have the continuous distribution function $F(x)$. Then the random variable $Y = F(X)$ has a distribution that is $U(0,1)$.

Proof: The distribution function of $Y$ is $P(Y \le y) = P[F(X) \le y]$, $0 < y < 1$. However, $F(X) \le y$ is equivalent to $X \le F^{-1}(y)$, so $P(Y \le y) = P[X \le F^{-1}(y)]$. Since $P(X \le x) = F(x)$, we see that $P(Y \le y) = P[X \le F^{-1}(y)] = F[F^{-1}(y)] = y$, $0 < y < 1$. In other words $Y$ is distributed $U(0,1)$.

For example, to simulate $m$ observations from the Rayleigh distribution $F(x) = 1 - \exp(-x^2/2)$, we proceed as follows.

1. Solve $u = F(x)$ for $x$, giving $x = \sqrt{-2\ln(1-u)}$, where $1-u$ is a random variable with uniform distribution
2. Generate a sequence of random numbers $u_1, u_2, \ldots, u_m$ from $U(0,1)$
3. The vector $x = \sqrt{-2\ln u}$ then has a Rayleigh distribution.
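These three steps translate directly into code (a minimal sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Steps 1-3: invert F(x) = 1 - exp(-x^2/2) and push uniforms through the inverse.
u = rng.uniform(size=100_000)        # step 2: u_1, ..., u_m from U(0,1)
x = np.sqrt(-2.0 * np.log(u))        # steps 1 and 3: x = sqrt(-2 ln u)

# Sanity check against the Rayleigh CDF at x = 1: F(1) = 1 - exp(-0.5).
print((x <= 1.0).mean(), 1 - np.exp(-0.5))
```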

Suppose now that we wish to simulate $m$ observations from a discrete random variable $x$ that takes the value $k$ with probability $p_k = P[x = k]$. In this case $F(x)$ is a function that has steps of $p_k$ at each $k$, and its inverse has steps at $p_1 + \ldots + p_k$, so that the procedure for simulating the random sequence $x_i$ becomes:

Set $x_i = k$ if $p_1 + \ldots + p_{k-1} \le u_i < p_1 + \ldots + p_k$.

For example, setting $p_k = e^{-\lambda}\lambda^k/k!$, $k = 0, 1, \ldots$, we obtain a random sequence with a Poisson distribution.

Some cumulative distribution functions are not readily invertible, notably the Normal distribution, and the approach requires modification. One such modified method for the Normal distribution is as follows6. Consider the random variables $x = r\cos\varphi$ and $y = r\sin\varphi$, $|\varphi| < \pi$, where the random variables $x$ and $y$ are $N(0,1)$ and independent. Transforming

$f(x, y) = \frac{1}{2\pi}\exp\left(-(x^2 + y^2)/2\right)$

to polar coordinates using the appropriate Jacobian, and noting that $x = r\cos\varphi$, $y = r\sin\varphi$, we get

$f(r, \varphi) = \frac{r}{2\pi}\exp(-r^2/2), \quad r > 0, \ |\varphi| < \pi.$

This is the product of separable expressions involving the random variables $r$ and $\varphi$, which are therefore independent, with

$f(r) = r\exp(-r^2/2), \qquad f(\varphi) = \frac{1}{2\pi}.$

From this it follows that if the random variables $r$ and $\varphi$ are independent, then $r$ has a Rayleigh distribution, $\varphi$ is uniform in the interval $(-\pi, \pi)$, and the random variables $x = r\cos\varphi$ and $y = r\sin\varphi$ are $N(0,1)$ and independent. Clearly $\varphi = \pi(2u - 1)$ and, inverting $f(r)$ as before, $r = \sqrt{-2\ln v}$, where $v$ is a random variable (independent of $u$) with uniform distribution in the interval $(0,1)$. It follows that

$x = r\cos\varphi = \sqrt{-2\ln v}\,\cos(\pi(2u - 1)), \qquad y = r\sin\varphi = \sqrt{-2\ln v}\,\sin(\pi(2u - 1))$

are $N(0,1)$ and independent, and either may be used to simulate the standard Normal distribution. There are, of course, readily accessible means of simulating the standard Normal distribution in practice.7

6 See (6.64), (6.72), example 6-12 and (8.158) in Probability, Random Variables and Stochastic Processes by A. Papoulis, 1991, McGraw-Hill.

7 For example, by using NORMSINV(RAND()) in Microsoft Excel.
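Both recipes from this appendix can be rendered in a few lines (a sketch: the Box-Muller pair derived above, and the step rule applied to the Poisson distribution):

```python
import math
import random

random.seed(0)

def box_muller():
    """One N(0,1) pair: x = r cos(phi), y = r sin(phi), with r = sqrt(-2 ln v)
    (Rayleigh) and phi = pi(2u - 1) (uniform on (-pi, pi))."""
    u = random.random()
    v = 1.0 - random.random()            # in (0, 1], avoids log(0)
    r, phi = math.sqrt(-2.0 * math.log(v)), math.pi * (2.0 * u - 1.0)
    return r * math.cos(phi), r * math.sin(phi)

def poisson_step_rule(lam):
    """Discrete inverse transform: x = k when p_0+...+p_{k-1} <= u < p_0+...+p_k,
    with p_k = exp(-lam) * lam**k / k! accumulated recursively."""
    u = random.random()
    k, p = 0, math.exp(-lam)
    cum = p
    while u >= cum:
        k += 1
        p *= lam / k
        cum += p
    return k

print(box_muller(), poisson_step_rule(5))
```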


Appendix B

Simulating combined risk with a copula

If we know the joint distribution function of a vector $X$ of risks we can easily derive the unique marginal distribution of each $X_j$, $j = 1, 2, \ldots, n$. If, on the other hand, we know the marginal distributions and the dependency structure between them and wish to determine the joint distribution, we are limited by the fact that there is no unique derivation. Furthermore, when allowing for dependency between the risks when setting capital requirements, we require a method that takes into account the possibility that dependency may become more marked at extreme outcomes. The copula provides a flexible way of obtaining a joint distribution that reflects both the limitation and the requirement.

A copula $C$ is a multivariate distribution function whose marginal distributions are distributed $U(0,1)$. There exists a copula $C$ that defines the dependence structure between $X_1, \ldots, X_n$ by the relationship

$F(x_1, \ldots, x_n) = C(F_1(x_1), \ldots, F_n(x_n)),$

where $F_j$ is the marginal distribution function of $X_j$ and $F$ is the joint distribution function of $X_1, \ldots, X_n$. That is, the univariate marginal distribution functions and the multivariate dependence structure can be separated, with the latter being represented by a copula, and the joint distribution of $X_1, \ldots, X_n$ can be defined using it. The converse is also true, namely that the univariate marginal distribution functions and the multivariate dependence structure can be combined, resulting in a joint distribution of $X_1, \ldots, X_n$.

Suppose now that an overall risk $X = \sum_{j=1}^{n} X_j$ whose distribution is not known to us has risk components $X_j$ and that we have sufficient information to model each of them separately. Suppose also that we then need to combine these component risks in order to obtain a model for $X$.

If we can generate a series of independent random vectors $(u_1, \ldots, u_n)$ from a suitable copula $C$, then (bearing in mind Theorem A1) each

$F_1^{-1}(u_1) + \ldots + F_n^{-1}(u_n)$

generated from this series of vectors is an independent random sample of $X$ and we can therefore simulate the distribution of $X$.

Finding a model for the overall total risk $X$ therefore comes down to selecting a suitable copula and generating random samples of the overall risk in this way. The method is illustrated in Appendix C using the Student t-distribution.
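The general pattern is independent of the particular copula chosen. A minimal sketch follows, with an independence copula as a placeholder and illustrative lognormal marginals (Appendix C supplies a concrete t-copula sampler):

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(0)

def simulate_total_risk(copula_sampler, inverse_marginals, n_sims):
    """Generate u = (u_1,...,u_n) from the copula, then sum F_j^{-1}(u_j)
    to obtain independent samples of the overall risk X = X_1 + ... + X_n."""
    U = copula_sampler(n_sims)                      # shape (n_sims, n)
    return sum(F_inv(U[:, j]) for j, F_inv in enumerate(inverse_marginals))

# Placeholder: an independence copula (each u_j independent U(0,1)).
independence = lambda n: rng.uniform(size=(n, 2))
# Two lognormal marginals via their quantile (inverse CDF) functions;
# parameters are illustrative only.
marginals = [lognorm(s=0.8, scale=2.0).ppf, lognorm(s=1.2, scale=100.0).ppf]

X = simulate_total_risk(independence, marginals, 100_000)
print(np.quantile(X, [0.99, 0.999]))
```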


Appendix C

Simulating combined risk with a t-copula

The function of a copula is to combine known marginal distributions and the dependency structure between them into a joint distribution (or in the case at hand into the distribution of the sum of component risks). In this Appendix we illustrate the simulation of overall risk by a copula. We have chosen the Student t-distribution copula for this illustration because of the need to represent tail dependence in setting a capital requirement. The Normal copula, for example, has no tail dependence, whereas the t-copula does (with the Normal copula being a special case of it when the degrees of freedom approach infinity).

If a random vector $X$ is represented by

$$X \stackrel{d}{=} \mu + \frac{\sqrt{v}}{\sqrt{s}}\, Z,$$

where $s \sim \chi^2_v$ and $Z \sim N_n(0, \Sigma)$ are independent, then $X$ has an $n$-variate $t_v$ distribution (for $v > 2$) with mean $\mu$ and covariance matrix $\frac{v}{v-2}\Sigma$.

The copula of $X$ is then

$$C_{v,R}(u) = t_{v,R}\left(t_v^{-1}(u_1), \ldots, t_v^{-1}(u_n)\right),$$

where $R_{ij} = \Sigma_{ij}/\sqrt{\Sigma_{ii}\Sigma_{jj}}$ and $t_{v,R}$ denotes the distribution function of $\frac{\sqrt{v}}{\sqrt{s}} Z_R$, where $Z_R \sim N_n(0, R)$ and $s$ and $Z_R$ are independent.

This leads to the following algorithm8 for random vector generation from the t-copula, and hence to independent random samples from the overall sum of the risks:

1. Find the Cholesky decomposition $A$ of $R$.9
2. Simulate $n$ independent random variates $z_1, \ldots, z_n \sim N(0,1)$.
3. Simulate a random variate $s \sim \chi^2_v$ independent of $z_1, \ldots, z_n$.
4. Set $x = \frac{\sqrt{v}}{\sqrt{s}} A z$.
5. Set $u = t_v(x)$, giving $C_{v,R}(u)$.
6. Calculate $F_1^{-1}(u_1) + \cdots + F_n^{-1}(u_n)$ to get an independent random sample of the sum of the different risks.
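A direct transcription of steps 1 to 6 into Python (numpy and scipy) might look as follows; the two lognormal marginals, the correlation matrix and the degrees of freedom are hypothetical values of ours.

```python
import numpy as np
from scipy import stats

def t_copula_total_risk(R, v, inverse_margins, n_sims, seed=0):
    rng = np.random.default_rng(seed)
    A = np.linalg.cholesky(R)                    # step 1: R = A A^T
    n = R.shape[0]
    z = rng.standard_normal((n_sims, n))         # step 2: z_1,...,z_n ~ N(0,1)
    s = rng.chisquare(v, size=n_sims)            # step 3: s ~ chi-squared_v
    x = np.sqrt(v / s)[:, None] * (z @ A.T)      # step 4: x = (sqrt(v)/sqrt(s)) A z
    u = stats.t.cdf(x, df=v)                     # step 5: u = t_v(x)
    return sum(f_inv(u[:, j])                    # step 6: sum of marginal quantiles
               for j, f_inv in enumerate(inverse_margins))

R = np.array([[1.0, 0.5], [0.5, 1.0]])
margins = [stats.lognorm(s=1.2, scale=np.exp(10)).ppf,
           stats.lognorm(s=0.8, scale=np.exp(11)).ppf]
total = t_copula_total_risk(R, v=4, inverse_margins=margins, n_sims=200_000)
print(np.quantile(total, 0.999))                 # e.g. a 99.9% quantile of the sum
```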

To illustrate how tail dependence is measured, the coefficient of upper tail dependence $\lambda_U$ between two marginal distribution functions $F_1(x_1)$ and $F_2(x_2)$ is given by

$$\lambda_U = \lim_{u \to 1^-} \Pr\{x_2 > F_2^{-1}(u) \mid x_1 > F_1^{-1}(u)\}.$$

In the case of the standard bivariate t-distribution with linear correlation $\rho_{ij}$, the measure of tail dependence is given by

$$\lambda_U = 2\, t_{v+1}\!\left(-\sqrt{\frac{(v+1)(1-\rho_{ij})}{1+\rho_{ij}}}\right).$$

8 This is based on algorithm 5.2 in Modelling Dependence with Copulas and Applications to Risk Management by P. Embrechts, F. Lindskog & A. McNeil, 2001, www.math.ethz.ch/finance.

9 The Cholesky decomposition of $R$ is the unique lower-triangular matrix $A$ such that $AA^T = R$. Given this, if $z_1, \ldots, z_n \sim N(0,1)$ are independent, then $\mu + Az \sim N_n(\mu, R)$.


The coefficient of upper tail dependence for the bivariate t-distribution is illustrated in the table:

Coefficient of tail dependence λ

Degrees of          Coefficient of correlation ρ
freedom v           0       0.1     0.5     0.9     1
2                   0.18    0.22    0.39    0.72    1
3                   0.12    0.14    0.31    0.67    1
4                   0.08    0.10    0.25    0.63    1
10                  0.01    0.01    0.08    0.46    1
∞                   0       0       0       0       1

The case with infinite degrees of freedom corresponds to the Normal distribution. There are, of course, many other copulas that could be used, including many capable of representing greater tail dependence than the t-copula.
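The entries can be reproduced directly from the formula above; a short check in Python, assuming scipy's Student t distribution function:

```python
import numpy as np
from scipy import stats

def upper_tail_dependence(v, rho):
    """lambda_U = 2 t_{v+1}(-sqrt((v+1)(1-rho)/(1+rho))) for the t-copula."""
    return 2.0 * stats.t.cdf(-np.sqrt((v + 1) * (1 - rho) / (1 + rho)), df=v + 1)

for v in (2, 3, 4, 10):
    print(v, [round(upper_tail_dependence(v, rho), 2) for rho in (0, 0.1, 0.5, 0.9)])
# v = 2 gives [0.18, 0.22, 0.39, 0.72], matching the first row of the table
```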


Appendix D

The Compound Poisson Process

The mathematical approach taken in the paper uses the compound Poisson process, namely the stochastic process

$$X(t) = \sum_{k=1}^{N(t)} Y_k, \quad t \ge 0,$$

where $\{Y_k\}$ is a sequence of independent identically distributed random variables having the distribution function $F$ and characteristic function $\varphi$; and $N(t)$ is a Poisson process with parameter $\lambda > 0$, independent of $\{Y_k\}$. That is, 'events' occur in accordance with a Poisson process and each event has some randomly determined 'severity' associated with it.
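Simulation of such a process is direct; a brief sketch in Python/numpy, with the function name and the pooled-sampling approach being our own choices:

```python
import numpy as np

def simulate_compound_poisson(lam, severity_sampler, t, n_sims, seed=0):
    """Draw n_sims realisations of X(t) = sum of N(t) iid severities Y_k."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam * t, size=n_sims)   # N(t) ~ Poisson(lam * t)
    severities = severity_sampler(rng, counts.sum())
    # split the pooled severities into one aggregate loss per simulation
    pieces = np.split(severities, np.cumsum(counts)[:-1])
    return np.array([p.sum() for p in pieces])

# check property 1 below with exponential severities of mean 1/upsilon
lam, upsilon, t = 10.0, 0.01, 1.0
x = simulate_compound_poisson(lam, lambda rng, n: rng.exponential(1 / upsilon, n), t, 50_000)
print(x.mean(), lam * t / upsilon)               # simulated vs E[X(t)] = lam * mu_F * t
```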

Some well-known properties of the compound Poisson process are summarised here for ease of reference:

1. The mean and variance are given by $E[X(t)] = \lambda \mu_F t$ and $\mathrm{var}[X(t)] = \lambda(\mu_F^2 + \sigma_F^2)t$, where $\mu_F$ and $\sigma_F^2$ are the mean and variance, respectively, of $Y_1$.

2. The characteristic function is given by $\varphi_{X(t)}(u) = \exp(-\lambda t(1 - \varphi(u))), \ -\infty < u < \infty$.

3. The distribution function of $X(t)$ is given by

$$\Pr\{X(t) \le x\} = \sum_{n=0}^{\infty} e^{-\lambda t} \frac{(\lambda t)^n}{n!} F^{(n)}(x),$$

where $F^{(n)}(x) = \Pr\{Y_1 + \cdots + Y_n \le x\}$, and $F^{(0)}(x) = 1$ for $x \ge 0$ and $0$ for $x < 0$.

4. The sum of two independent compound Poisson processes is itself a compound Poisson process. The Poisson 'event' rate parameter of $X(t) = X_1(t) + X_2(t)$ is $\lambda_1 + \lambda_2$; and the characteristic function of the associated 'value' is given by

$$\varphi(u) = \frac{\lambda_1}{\lambda_1 + \lambda_2}\varphi_1(u) + \frac{\lambda_2}{\lambda_1 + \lambda_2}\varphi_2(u),$$

which corresponds to that of a random variable $Y$ which assumes a value from $Y^{(1)}$ (with characteristic function $\varphi_1(u)$) with probability $\lambda_1/(\lambda_1 + \lambda_2)$ and a value from $Y^{(2)}$ (with characteristic function $\varphi_2(u)$) with probability $\lambda_2/(\lambda_1 + \lambda_2)$.

Because it leads to the only analytically tractable form of the compound Poisson distribution, it is sometimes assumed that the sequence $\{Y_k\}$ follows an exponential distribution, namely

$$\Pr\{Y_k \le x\} = 1 - e^{-\upsilon x}, \quad x > 0.$$

In turn $\sum_{k=1}^{n} Y_k$ then follows a gamma distribution, with

$$F^{(n)}(x) = 1 - \sum_{k=0}^{n-1} e^{-\upsilon x} \frac{(\upsilon x)^k}{k!}, \quad x \ge 0.$$

The distribution of the resulting compound Poisson process is therefore:

$$\Pr\{X(t) \le x\} = \sum_{n=0}^{\infty} e^{-\lambda t}\frac{(\lambda t)^n}{n!} - \sum_{n=1}^{\infty} e^{-\lambda t}\frac{(\lambda t)^n}{n!} \sum_{k=0}^{n-1} e^{-\upsilon x}\frac{(\upsilon x)^k}{k!}, \quad x \ge 0.$$
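Since $F^{(n)}$ is the gamma distribution function with shape $n$ and rate $\upsilon$, the series can be evaluated by truncation; a sketch in Python/scipy, with the truncation point and parameter values chosen by us:

```python
import numpy as np
from scipy import stats

def compound_poisson_exponential_cdf(x, lam, upsilon, t, n_max=200):
    """Pr{X(t) <= x}: Poisson weights times gamma(n, upsilon) cdfs, truncated."""
    n = np.arange(n_max + 1)
    weights = stats.poisson.pmf(n, lam * t)            # e^{-lam t}(lam t)^n / n!
    cdfs = np.concatenate(([1.0],                      # F^(0)(x) = 1 for x >= 0
                           stats.gamma.cdf(x, a=n[1:], scale=1 / upsilon)))
    return float(weights @ cdfs)

# E[X(1)] = lam / upsilon = 1000 here, so the cdf at 1500 is well above one half
print(compound_poisson_exponential_cdf(x=1500.0, lam=10.0, upsilon=0.01, t=1.0))
```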

This approach allows only two parameters, $\lambda$ and $\upsilon$, to be inferred from the data. Drawing upon the properties of the Poisson, exponential, gamma and compound Poisson distributions, some relevant measures under the exponential severity approach are summarised in the following table (using a period of 1 year):

Exponential distribution of loss severity

Variable             Mean    Variance   Coefficient of variation   Skewness
Number of events     λ       λ          1/√λ                       1/√λ
Individual severity  1/υ     1/υ²       1                          2
Aggregate loss       λ/υ     2λ/υ²      √(2/λ)                     (3/2)√(2/λ)

While the exponential severity case serves as a basic reference point, and might suit some risk cells where the standard deviation of loss severity is of the same order as the mean severity, such a rudimentary approach is not versatile enough to deal with most risk cells. To address this, we need to introduce a further parameter to achieve a richer specification. In particular, in the area of operational risk we need to use a distribution for loss severity that allows for a relatively heavy tail. For the purpose of this paper, we have responded to this need by using a lognormal distribution to represent severity. While other distributions could be used, and indeed may be necessary in order to represent an actual operational activity, the lognormal is sufficiently versatile for our purposes, namely to illustrate the subject. We have therefore proceeded by assuming that the logarithm of the severity of an individual loss is normally distributed with mean $\mu$ and variance $\sigma^2$. Based upon the properties of the lognormal distribution, this gives the following measures:

Lognormal distribution of loss severity

Variable             Mean           Variance               Coefficient of variation   Skewness
Number of events     λ              λ                      1/√λ                       1/√λ
Individual severity  e^(μ+σ²/2)     e^(2μ+σ²)(e^(σ²)−1)    √(e^(σ²)−1)                (e^(σ²)+2)√(e^(σ²)−1)
Aggregate loss       λe^(μ+σ²/2)    λe^(2(μ+σ²))           √(e^(σ²)/λ)                e^(3σ²/2)/√λ
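A quick simulation check of the aggregate-loss column (Python/numpy; the values of λ, μ and σ below are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu, sigma = 20.0, 8.0, 1.0        # hypothetical risk-cell parameters
n_sims = 100_000

counts = rng.poisson(lam, size=n_sims)                   # events in the year
agg = np.array([rng.lognormal(mu, sigma, k).sum() for k in counts])

# compare simulated moments with the table's formulas
print(agg.mean(), lam * np.exp(mu + sigma**2 / 2))       # mean: lam e^(mu+sigma^2/2)
print(agg.var(), lam * np.exp(2 * (mu + sigma**2)))      # variance: lam e^(2(mu+sigma^2))
```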