
Ruin Theory Revisited: Stochastic Models for Operational Risk

Paul Embrechts and Roger Kaufmann
Department of Mathematics, ETH Zurich, CH-8092 Zurich, Switzerland

Gennady Samorodnitsky
School of ORIE, Cornell University, Ithaca, NY, USA

April 2, 2004

Abstract
The new Basel Capital Accord has opened up a discussion con-
cerning the measurement of operational risk for banks. In our paper
we do not take a stand on the issue of whether or not a quantitatively
measured risk capital charge for operational risk is desirable; how-
ever, given that such measurement will come about, we review some
of the tools which may be useful towards the statistical analysis of
operational loss data. We also discuss the relevance of these tools for
foreign reserves risk management of central banks.

Keywords: central banks, extreme value theory, heavy tails, operational
risk, risk management, ruin probability, time-change.

Research supported by Credit Suisse Group, Swiss Re and UBS AG through
RiskLab, Switzerland. Research partially supported by NSF grant DMS-0071073
at Cornell University.

1 Introduction

In [9], the following definition of operational risk is to be found: "The risk
of losses resulting from inadequate or failed internal processes, people and
systems or from external events." In its consultative document on the New
Basel Capital Accord (also referred to as Basel II or the Accord), the Basel
Committee for Banking Supervision continues its drive to increase market
stability in the realms of market risk, credit risk and, most recently, opera-
tional risk. The approach is based on a three pillar concept where Pillar 1
corresponds to a Minimal Capital Requirement, Pillar 2 stands for a Su-
pervisory Review Process and finally Pillar 3 concerns Market Discipline.
Applied to credit and operational risk, within Pillar 1, quantitative mod-
elling techniques play a fundamental role, especially for those banks opting
for an advanced, internal measurement approach. It may well be discussed
to what extent a capital charge for operational risk (estimated at about 12%
of the current economic capital) is of importance; see Danielsson et al. [23],
Goodhart [46] and Pezier [60, 61] for detailed, critical discussions on this and
further issues underlying the Accord.
In our paper we start from the premise that a capital charge for oper-
ational risk will come about (eventually starting in 2007), and we examine
some quantitative techniques which may eventually become useful in dis-
cussing the appropriateness of such a charge, especially for more detailed
internal modelling. Independent of the final regulatory decision, the methods
discussed in our paper have a wider range of applications within quantita-
tive risk management for central banks, the financial (including insurance)
industry and supervising authorities. In particular, we are convinced that
these methods will play a role in the construction of quantitative tools for
integrated risk management, including foreign reserves risk management for
central banks. First of all, in an indirect way, central banks have a keen in-
terest in the stability of the global banking system and as such do follow
up on the quality/diversity of tools used by the financial industry. Some of

these tools used for the modelling of operational risk are discussed in the
present paper. Secondly, whether for the demand on foreign reserves one ad-
heres to the intervention model or an asset choice model (see Batten [10]),
central banks face risk management decisions akin to those of commercial banks,
albeit under different political and economic constraints. Finally, the role and
function of central banks is no doubt under discussion (see Goodhart [45]),
and therefore risk management issues which were less relevant some years
ago may become important now. In particular, operational risk ought to be
of great concern to any central bank. As discussed in Batten [10], a central
bank typically confronts two types of economic phenomena expected and
unexpected to which it makes policy responses. Faced with unanticipated
economic but also external (e.g. catastrophic environmental) events to which
it may wish to respond, it must hold additional reserves. Furthermore, like
any institution, central banks face operational losses owing to system failure
and fraud, for instance. How such losses impact on foreign reserve policy very
much depends on the portfolio model chosen (Batten [10]).
In Table 1, taken from Crouhy et al. [19], we have listed some typical
types of operational risks. It is clear from this table that some risks are
difficult to quantify (like incompetence under people risk), whereas others
lend themselves much more easily to quantification (as for instance execution error
under transaction risk). As already alluded to above, most of the techniques
discussed in this paper will have a bearing on the latter types of risk. In the
terminology of Pezier [61], this corresponds to the "ordinary" operational risks.
Clearly, the modelling of the latter type of risks is, on its own, an insufficient
basis for a full capital charge concept.
The paper is organised as follows. In Section 2 we first look at some
stylised facts of operational risk losses before formulating, in a mathematical

1. People risk:                Incompetence
                               Fraud
2. Process risk:
   A. Model risk               Model/methodology error
                               Mark-to-model error
   B. Transaction risk         Execution error
                               Product complexity
                               Booking error
                               Settlement error
                               Documentation/contract risk
   C. Operational control risk Exceeding limits
                               Security risks
                               Volume risk
3. Technology risk:            System failure
                               Programming error
                               Information risk
                               Telecommunications failure

Table 1: Types of operational risks (Crouhy et al. [19]).

form, the capital charge problem for operational risk (Pillar 1) in Section 3.
In Section 4 we present a possible theory, together with its limitations, for
analysing such losses, given that a sufficiently detailed loss database is avail-
able. We also discuss some of the mathematical research stemming from
questions related to operational risk. Most of our discussion will use lan-
guage close to Basel II and commercial banking. At several points, we shall
highlight the relevance of the tools presented for risk management issues for
central banks.

2 Data and Preliminary Stylised Facts


Typically, operational risk losses are grouped in a number of categories (as in
Table 1). In Pezier [61], these categories are further aggregated to the three
levels of nominal, ordinary and exceptional operational risks. Within each

category, losses are (or better, have to be) well-defined. Below we give an
example of historical loss information for three different loss types. These
losses correspond to transformed real data. As banks gather data, besides
reporting current losses, an effort is made to build up databases going back
about 10 years. The latter no doubt involves possible selection bias, a problem
one will have to live with until more substantial data warehouses on operational
risk become available. One possibility for the latter could be cross-bank
pooling of loss data in order to find the main characteristics of the underlying
loss distributions against which a particular bank's own loss experience can
be calibrated. Such data pooling is well-known from non-life insurance or
credit risk management. For Basel II, one needs to look very carefully into
the economic desirability of such a pooling arrangement from a regulatory,
risk management point of view. Whereas this would be most useful for the
very rare large losses (exceptional losses), at the same time, such losses are
often very specific to the institution and hence from that point of view make
pooling somewhat questionable.
For obvious reasons, operational risk data are hard to come by. This
is to some extent true for commercial banks, but considerably more so for
central banks. One reason is no doubt the issue of confidentiality, another
the relatively short period over which historical data have been consistently
gathered. From the quantifiable real data we have seen in practice, we sum-
marise below some of the stylised facts; these seem to be accepted throughout
the industry for several operational risk categories. By way of example, in
Figures 1, 2 and 3 we present loss information on three types of operational
losses, which are for the purpose of this paper referred to as Types 1, 2 and 3.
As stated above, these data correspond to modified real data. Figure 4 pools
these losses across types. For these pooled losses, Figure 5 contains quarterly

loss numbers. The stylised facts observed are:

- loss amounts very clearly show extremes, whereas

- loss occurrence times are definitely irregularly spaced in time, and also
  show (especially for Type 3, see also Figure 5) a tendency to increase
  over time. This non-stationarity can partly be explained by the already
  mentioned selection bias.
Figure 1: Operational risk losses, Type 1, n = 162.
Figure 2: Operational risk losses, Type 2, n = 80.
Figure 3: Operational risk losses, Type 3, n = 175.
Figure 4: Pooled operational risk losses, n = 417.

Any serious attempt at analytic modelling will at least have to take the above
stylised facts into account. The analytic modelling referred to is not
primarily aimed at calculating a risk-capital charge, but more at finding a
sensible quantitative summary that goes beyond the purely descriptive. Similar
approaches are well-known from the realm of reliability (see for instance
Bedford and Cooke [11]), (non-life and re)insurance (Hogg and Klugman [51])
and total quality control (as in Does et al. [26]).

Figure 5: Quarterly loss numbers for the pooled operational risk losses.
Figure 6: Fire insurance loss data, n = 417.
In order to show some similarities with property insurance loss data, in
Figure 6 we present n = 417 losses from a fire insurance loss database. For
the full set of data, see Embrechts et al. [36], Figure 6.2.12.
Clearly, the large losses are a main concern, and hence, extreme value
theory (EVT) can play a major role in analysing such data. Similar remarks
have been made before concerning operational risk; see for instance Cruz [20]
and Medova [55]. At this point, we would like to clarify a misconception which
seems to persist in the literature; see for instance Pezier [61]. In no way will
EVT be able to predict exceptional operational risk losses such as those
present in the Barings case, for instance. Indeed, the introduction to Em-
brechts et al. [36] states very clearly that EVT is not a magical tool that
can produce estimates out of thin air, but is instead one that tries to make
the best use of whatever data exist on extreme phenomena. Moreover, and

indeed equally important, EVT formulates very clearly under what condi-
tions estimates on extreme events can be worked out. Especially with regard
to exceptional losses (Pezier [61]), there is very little that statistical theory,
including EVT, can contribute. On the other hand, EVT is very useful when
it comes to analysing extreme events such as catastrophic storms or floods,
where these events occur within a specific physical or environmental model
and where numerous observations on normal events exist; see Finkenstadt
and Rootzen [40]. Clearly a case like Barings falls outside the range of EVT's
applicability. On the other hand, when data on sufficiently many normal and a few
extreme events within a well-defined class of risks exist, then EVT offers a
very powerful statistical tool allowing one to extrapolate from the normal
to the extreme. Numerous publications within financial risk management ex-
emplify this; see for instance Embrechts [30]. Specific applications of EVT
to risk management questions for central banks can be found in De Brandt
and Hartman [25] and Hartman et al. [50]. Relevant problems where EVT
technology could be applied are discussed in Borio et al. [14]. These papers
mainly concern spillover of crises between financial markets, contagion, sys-
temic risk and financial stability. An example of an EVT analysis related to
the interest rate crisis of 2000 in Turkey involving interventions at the cur-
rency level (the lira) by the Turkish central bank is discussed in Gencay and
Selcuk [44]. Embrechts [31] discusses the broader economic issues underlying
the application of EVT to financial risk management.
In the next sections we concentrate on the calculation of an operational
risk charge based on EVT and related actuarial techniques.

3 The Problem
In order to investigate the kind of methodological problems one faces when
trying to calculate a capital charge or reserve for (quantifiable) operational
risks, we introduce some mathematical notation.
A typical operational risk database will consist of realisations of random
variables

$$\big\{ Y_k^{t,i} : t = 1, \dots, T,\; i = 1, \dots, s \text{ and } k = 1, \dots, N^{t,i} \big\},$$

where

- $T$ stands for the number of years ($T = 10$, say);

- $s$ corresponds to the number of loss types (for instance $s = 6$), and

- $N^{t,i}$ is the (random) number of losses in year $t$ of type $i$.

Note that in reality $Y_k^{t,i}$ is actually thinned from below, i.e.

$$Y_k^{t,i} = \widetilde{Y}_k^{t,i}\, I_{\{\widetilde{Y}_k^{t,i} \geq d^{t,i}\}},$$

where $d^{t,i}$ is some lower threshold below which losses are disregarded. Here
$I_A(\omega) = 1$ whenever $\omega \in A$, and 0 otherwise. Hence, the total loss
amount for year $t$ becomes

$$L^t = \sum_{i=1}^{s} \sum_{k=1}^{N^{t,i}} Y_k^{t,i} = \sum_{i=1}^{s} L^{t,i},
\qquad t = 1, \dots, T. \tag{1}$$
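Purely as an illustration of the bookkeeping behind (1), and not of the authors' own implementation, the following Python sketch aggregates a small, hypothetical loss database into the yearly totals $L^{t,i}$ and $L^t$, applying a reporting threshold $d^{t,i}$; all records and the threshold value are made up.

```python
# Illustrative sketch (not the authors' code): aggregate a toy operational
# loss database into the yearly totals L^{t,i} and L^t of equation (1).
from collections import defaultdict

# hypothetical loss records: (year t, loss type i, loss amount)
losses = [
    (1, 1, 3.2), (1, 1, 0.4), (1, 2, 11.0),
    (2, 1, 0.9), (2, 3, 27.5), (2, 3, 1.3),
]
d = 0.5   # reporting threshold d^{t,i}, taken constant here for simplicity

L_ti = defaultdict(float)        # yearly totals L^{t,i} per loss type
for t, i, y in losses:
    if y >= d:                   # losses below the threshold are disregarded ("thinning")
        L_ti[(t, i)] += y

L_t = defaultdict(float)         # yearly totals L^t across all types, as in (1)
for (t, i), total in L_ti.items():
    L_t[t] += total

print(dict(L_ti))
print(dict(L_t))
```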

Denote by $F_{L^t}$, $F_{L^{t,i}}$ the distribution functions of $L^t$,
$L^{t,i}$, $i = 1, \dots, s$. One of the capital charge measures discussed by
the industry (Basel II) is the Value-at-Risk (VaR) at significance level
$\alpha$ (typically $\alpha = 0.001$ to $0.0025$ for operational risk losses)
for next year's operational loss variable $L^{T+1}$. Hence

$$\mathrm{OR\text{-}VaR}^{T+1}_{1-\alpha} = F^{\leftarrow}_{L^{T+1}}(1 - \alpha),$$

where $F^{\leftarrow}_{L^{T+1}}$ denotes the (generalised) inverse of the
distribution function $F_{L^{T+1}}$, also referred to as its quantile function.
For a discussion of generalised inverses, see Embrechts et al. [36], p. 130.
Figure 7 provides a graphical definition.

Figure 7: Calculation of operational risk VaR.

It is clear that, with any realistically available number $T$ of years' worth
of data, an in-sample estimation of VaR at this low significance level is
very difficult indeed. Moreover, at this aggregated loss level, across a wide
range of operational risk types, any statistical theory (including EVT) will
have difficulties in coming up with a scientifically sensible estimate. However,
within quantitatively well-defined subcategories, such as the examples in
Figures 1-4, one could use EVT and come up with a model for the far tail of
the loss distribution and calculate a possible out-of-sample tail fit. Based on
these tail models, one could estimate VaR and risk measures that go beyond
VaR, such as the Conditional VaR (CVaR)

$$\mathrm{OR\text{-}CVaR}^{T+1,i}_{1-\alpha}
 = E\big( L^{T+1,i} \mid L^{T+1,i} > \mathrm{OR\text{-}VaR}^{T+1,i}_{1-\alpha} \big),
 \qquad i = 1, \dots, s,$$

or more sophisticated coherent risk measures; see Artzner et al. [2]. Further-
more, based on extreme value methodology, one could estimate a conditional
loss distribution function for the operational risk category (categories) under
investigation,

$$F^{i}_{T+1, u_i}(u_i + x)
 = P\big( L^{T+1,i} - u_i \leq x \mid L^{T+1,i} > u_i \big),
 \qquad x \geq 0,\; i = 1, \dots, s,$$

where $u_i$ is typically a predetermined high loss level specific to loss
category $i$. For instance, one could take
$u_i = \mathrm{OR\text{-}VaR}^{T+1,i}_{1-\alpha}$. Section 4.1 provides more
details on this.
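For completeness, the following minimal Python sketch shows how OR-VaR and OR-CVaR could be estimated empirically from a large sample of annual loss realisations; the simulated Pareto sample is hypothetical and merely fixes ideas.

```python
# Illustrative sketch: empirical OR-VaR and OR-CVaR from a (hypothetical,
# simulated) sample of annual aggregate operational losses.
import numpy as np

rng = np.random.default_rng(0)
annual_losses = (1.0 - rng.uniform(size=100_000)) ** (-1.0 / 1.5)   # Pareto(1.5) sample

alpha = 0.001                                       # significance level
var = np.quantile(annual_losses, 1.0 - alpha)       # empirical OR-VaR_{1-alpha}
cvar = annual_losses[annual_losses > var].mean()    # empirical OR-CVaR_{1-alpha}
print(f"OR-VaR at {1.0 - alpha:.3%}: {var:.1f}, OR-CVaR: {cvar:.1f}")
```

With only $T \approx 10$ years of data such a direct empirical estimate at $\alpha = 0.001$ is of course meaningless, which is precisely why the tail models of Section 4.1 are needed.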
We reiterate the need for extensive data modelling and pooling before
risk measures of the above type can be calculated with a reasonable degree
of accuracy. In the next section we offer some methodological building blocks
which will be useful when more quantitative modelling of certain operational
risk categories is demanded. The main benefit we see lies in bank-internal
modelling, rather than in providing a solution towards a capital charge calcula-
tion. As such, the methods we introduce have already been tested and made
operational within a banking environment; see Ebnother [28] and Ebnother
et al. [29].

4 Towards a Theory
Since certain operational risk data are in many ways akin to insurance losses,
it is clear that methods from the field of (non-life) insurance can play a fun-
damental role in their quantitative analysis. In this section we discuss some
of these tools, also referred to as Insurance Analytics. For a discussion of the
latter terminology, see Embrechts [32]. A further comparison with actuar-
ial methodology can, for instance, be found in Duffy [27]. As mentioned in
the Introduction, we have also made some references to EVT applications to
specific risk management issues for central banks; the comments made below
also apply to these.

4.1 Extreme Value Theory (EVT)

Going back to the fire insurance data (denoted $X_1, \dots, X_n$) in Figure 6,
a standard EVT analysis is as follows:

(EVT-1) Plot the empirical mean excess function

$$\hat{e}_n(u) = \frac{\sum_{k=1}^{n} (X_k - u)^{+}}{\sum_{k=1}^{n} I_{\{X_k > u\}}}$$

as a function of $u$ and look for (almost) linear behaviour beyond some
threshold value. For the fire insurance data, $\hat{e}_n(u)$ is plotted in
Figure 8. A possible threshold choice is $u = 1$, i.e. for this case, a value
low in the data.

(EVT-2) Use the so-called Peaks over Threshold (POT) method to fit an
EVT model to the data above $u = 1$; plot the data (dots) and the
fitted model (solid line) on log-log scale. Linearity supports Pareto-type
power behaviour of the loss distribution $P(X_1 > x) = x^{-\alpha} h(x)$;
see Figure 9. Here $h$ is a so-called slowly varying function, i.e. for all
$x > 0$, $\lim_{t \to \infty} h(tx)/h(t) = 1$. For $h \equiv c$, a constant,
a log-log plot would be linear.

(EVT-3) Estimate risk measures such as a 99% VaR and 99% CVaR
and calculate 95% confidence intervals around these risk measures; see
Figure 9.
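A minimal Python sketch of steps (EVT-1)-(EVT-3) is given below; it is only a stand-in for the S-Plus/EVIS analysis actually used for Figures 8 and 9, the simulated sample and the threshold are hypothetical, and the profile-likelihood confidence intervals are omitted.

```python
# Illustrative POT sketch (stand-in for the EVIS/S-Plus analysis in the text).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
x = (1.0 - rng.uniform(size=20_000)) ** (-1.0 / 1.04)   # toy Pareto-type losses

def mean_excess(data, u):
    """(EVT-1) empirical mean excess function e_n(u)."""
    exceedances = data[data > u]
    return (exceedances - u).mean()

u = 1.5                                        # hypothetical threshold choice
excesses = x[x > u] - u                        # (EVT-2) excesses over u
xi, _, beta = genpareto.fit(excesses, floc=0)  # GPD shape xi and scale beta

# (EVT-3) POT tail estimator, and the implied VaR / CVaR at level p
n, n_u, p = len(x), excesses.size, 0.99
var_p = u + beta / xi * (((n / n_u) * (1.0 - p)) ** (-xi) - 1.0)
cvar_p = var_p / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)   # valid only for xi < 1

print(f"mean excess at u={u}: {mean_excess(x, u):.2f}")
print(f"xi = {xi:.2f}, beta = {beta:.2f}")
print(f"VaR_99% = {var_p:.1f}, CVaR_99% = {cvar_p:.1f}")
```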
Figure 8: Empirical mean excess function for the fire loss data.
Figure 9: Empirical and fitted distribution tails on log-log scale, including
estimates for VaR and CVaR for the fire loss data.

The estimates obtained are $\hat{\alpha} = 1.04$ with a corresponding 99% VaR
value of 120 and an estimated 99% CVaR of 2890 (note the huge difference).
Figure 9 contains the so-called profile likelihood curves with maximal values
in the estimated VaR and CVaR. A 95% confidence interval around the 99%
VaR of 120 is given by (69, 255). The right vertical axis gives the confidence
interval levels. The interval itself is obtained by cutting the profile likelihood
curves at the 95% point. A similar construction (confidence interval) can be
obtained for the CVaR; owing to a value of $\hat{\alpha}$ (= 1.04) close to 1, a very
large 95% confidence interval is obtained, which hence puts great uncertainty
on the point estimates obtained. An $\alpha$ value less than 1 would correspond to
an infinite mean model. A value between 1 and 2 yields an infinite variance,
finite mean model. By providing these (very wide) confidence intervals in this
case, EVT already warns the user that we are walking very close (or even
too close) to the edge of the available data.
The software used, EVIS (Extreme Values in S-Plus), was developed by
Alexander McNeil and can be downloaded from http://www.math.ethz.ch/~mcneil.
It is no doubt a main strength of EVT that, under precise under-
lying model assumptions, confidence intervals for the risk measures under

consideration can be worked out. The techniques used belong to the realm of
maximum likelihood theory. We would however like to stress "under precise
model assumptions". In Embrechts et al. [35] a simulation study by McNeil
and Saladin [54] is reported which estimates, in the case of independent and
identically distributed (iid) data, the number of observations needed in order
to achieve a preassigned accuracy. For instance, in the iid case and a Pareto
loss distribution with tail index 2 (a realistic assumption), in order to achieve
a reasonable accuracy for the VaR at $\alpha = 0.001$, a minimum number of 100
observations above a 90% threshold $u$ is needed (corresponding to an original
1,000 observations).^1
The basic result underlying the POT method is that the marked point
process of excesses over a high threshold $u$, under fairly general (though very
precise) conditions, can be well approximated by a compound Poisson process
(see Figure 10):

$$\sum_{k=1}^{N(u)} Y_k\, \varepsilon_{T_k},$$

where the $(Y_k)$ are iid with a generalised Pareto distribution,
$\varepsilon_{T_k}$ denotes the Dirac measure at the exceedance time $T_k$,
and $N(u)$ denotes the number of exceedances of $u$ by $(X_k)$. The
exceedance times of $u$ form (in the limit) a homogeneous Poisson process,
and excesses and exceedance times are independent. See Leadbetter [52] for
details. A consequence of the Poisson property is that inter-exceedance times
of $u$ are iid exponential. Hence such a model forms a good first guess. More
advanced techniques can be introduced taking, for instance, non-stationarity
and covariate modelling into account; see Embrechts [30], Chavez-Demoulin
and Embrechts [16] and Coles [17] for a discussion of these techniques. The
asymptotic independence between exceedance times and excesses makes
likelihood fitting straightforward.
^1 See also Embrechts et al. [36], pp. 194, 270 and 343 for the need to check
conditions for the underlying data before an EVT analysis can be used. EVIS
allows for several diagnostic checks on these conditions.
Figure 10: Stylised presentation of the POT method.
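The limiting model sketched in Figure 10 is easily mimicked by simulation. In the hypothetical sketch below, exceedance times of a threshold $u$ are generated as a homogeneous Poisson process and the excesses as iid generalised Pareto marks; all parameter values are illustrative only.

```python
# Illustrative simulation of the limiting POT model: Poisson exceedance times
# with iid generalised Pareto marks (all parameter values are hypothetical).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
lam, horizon = 5.0, 10.0           # exceedance intensity and length of observation window
xi, beta, u = 0.7, 1.0, 10.0       # GPD shape/scale of the excesses, threshold u

n_u = rng.poisson(lam * horizon)                     # number of exceedances N(u)
times = np.sort(rng.uniform(0.0, horizon, n_u))      # given N(u), Poisson points are uniform
marks = genpareto.rvs(xi, scale=beta, size=n_u, random_state=rng)  # iid GPD excesses Y_k

gaps = np.diff(times)              # inter-exceedance times: iid exponential in the limit
print(f"N(u) = {n_u}, mean gap = {gaps.mean():.3f} (theoretical value 1/lambda = {1/lam:.3f})")
print("three largest losses above u:", np.round(u + np.sort(marks)[-3:], 2))
```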
Turning to the mean excess plots for the operational risk data from Fig-
ures 1-3 (for the type-specific data) and Figure 4 (for the pooled data),
we clearly see the typical increasing (nearly linear) trends indicating
heavy-tailed, Pareto-type losses; see Figures 11-14 and compare them with
Figure 8. As a first step, we can carry out the above extreme value analysis
for the pooled data, though a refined analysis, taking non-stationarity into
account, is no doubt necessary. Disregarding the possible non-stationarity of
the data, one could be tempted to use the POT method and fit a generalised
Pareto distribution to the pooled losses above $u = 0.4$, say. Estimates for the
99% VaR and the 99% CVaR, including their 95% confidence intervals, are
given in Figure 15. For the VaR we get a point estimate of 9.1, and a 95%
confidence interval of (6.0, 18.5). The 99% CVaR beyond 9.1 is estimated
as 25.9, and the lower limit for its 95% confidence interval is 11.7. Since, as
in the fire insurance case, the tails are very heavy ($\hat{\alpha} = 1.63$), we
get a very large estimate for the upper confidence limit for CVaR.
Figure 11: Mean excess plot for operational risk losses, Type 1.
Figure 12: Mean excess plot for operational risk losses, Type 2.
Figure 13: Mean excess plot for operational risk losses, Type 3.
Figure 14: Mean excess plot for pooled operational risk losses.
Figure 15: Empirical and fitted distribution tails for pooled operational
losses on log-log scale, including estimates for VaR and CVaR.

As already discussed before, the data in Figure 4 may contain a transition
from more sparse data over the first half of the period under investigation
to more frequent losses over the second half. It also seems that the early
losses (in Figure 4, for instance) are not only more sparse, but also heavier.
Again, this may be due to the way in which operational loss databases
are built up. When gathering data for years some distance in the past, one
only remembers the larger losses. Our EVT analysis should be adjusted for
such a switch in size and/or intensity. Though Chavez-Demoulin and
Embrechts [16] contains the relevant methodology, one should however realise
that for such more advanced modelling, many more observations are needed.
In the next section, we make a more mathematical (actuarial) excursion
in the realm of insurance risk theory. Risk theory provides models for ran-
dom losses in a dynamic setting and yields techniques for the calculation of
reserves given solvency constraints; see for instance Daykin et al. [24]. This
theory has been applied in a variety of contexts and may also yield a rele-
vant toolkit, especially in combination with EVT, if central banks were to
manage their foreign reserves more actively and hence be more exposed to
market, credit, and operational risk. In particular, ruin theory provides a
longer-term view on solvency for a dynamic loss process. We discuss some of
the key aspects of ruin theory with a specific application to operational risk
below.

4.2 Ruin Theory Revisited

Given that (1) yields the total operational risk loss of s different subcate-
gories during a given year, it can be seen as resulting from a superposition
of several (namely s) compound processes. So far, we are not aware of any
studies which establish detailed features of individual processes or their in-
terdependencies. Note that in Ebnother [28] and Ebnother et al. [29], condi-
tions on the aggregated process are imposed: independence, or dependence
through a common Poisson shock model. For the moment, we summarise (1)
in a stylised way as follows:

$$L_t = \sum_{k=1}^{N(t)} Y_k,$$

where $N(t)$ is the total number of losses over a time period $[0, t]$ across
all $s$ categories and the $Y_k$'s are the individual losses. We drop the
additional indices.
From an actuarial point of view, it would now be natural to consider an
initial (risk) capital u and a premium rate c > 0 and define the cumulative
risk process
$$C_t = u + ct - L_t, \qquad t \geq 0. \tag{2}$$

In Figure 16 we have plotted such a risk process for the pooled operational
risk losses shown in Figure 4. Again, the regime switch is clearly seen,
splitting the time axis into roughly pre- and post-1998.

Figure 16: Risk process $C_t$ with $u = 50$, $c = 28$ and the loss process from
Figure 4.

Given a small $\epsilon > 0$, for the process in (2), a risk capital $u_\epsilon$
can then be calculated by setting the so-called ruin probability, i.e. the
probability for the surplus process $C_t$ to become negative over a given time
horizon $[\underline{T}, \overline{T}]$, equal to $\epsilon$, small:

$$\psi\big(u_\epsilon; \underline{T}, \overline{T}\big)
 = P\Big( \inf_{\underline{T} \leq t \leq \overline{T}}
   \big( u_\epsilon + ct - L_t \big) < 0 \Big) = \epsilon. \tag{3}$$

For $c = 0$, $\underline{T} = T$, $\overline{T} = T + 1$,

$$u_\epsilon = \mathrm{OR\text{-}VaR}^{T+1}_{1-\epsilon}.$$

The level of insolvency 0 is just chosen for mathematical convenience. One
could, for instance, see $c$ as a premium rate paid to an external insurer
taking (part of) the operational risk losses, or as a rate paid to (or accounted
for by) a bank-internal office. The rate $c$ paid and the capital $u$ calculated
would then be incorporated into the unit's overall risk capital.
Classical actuarial ruin theory concerns estimation of
$\psi(u; \underline{T}, \overline{T})$ in general and
$\psi(u, T) = \psi(u; 0, T)$, $0 < T \leq \infty$, in particular, and for a
wide range of processes. The standard assumption in the famous
Cramer-Lundberg model is that $(N(t))$ is a homogeneous Poisson($\lambda$)
process, independent of the losses $(Y_k)$, iid with distribution function $G$
and mean $\mu < \infty$. Under the so-called net-profit condition (NPC),
$c/\lambda > \mu$, one can show that, for small claims $Y_k$, a constant
$R \in (0, \infty)$ (the so-called adjustment or Lundberg constant) and a
constant $C \in (0, 1)$ exist, so that

$$\psi(u) = \psi(u, \infty) \leq e^{-Ru}, \qquad u \geq 0, \tag{4}$$

and

$$\lim_{u \to \infty} e^{Ru}\, \psi(u) = C. \tag{5}$$

The small claims condition leading to the existence of $R > 0$ can be
expressed in terms of $E(e^{RY_k})$ and typically holds for distribution
functions with exponentially bounded tails. The constant $C$ can be calculated
explicitly; see for instance Grandell [47], Asmussen [3] and Rolski et al. [63]
for details. For the case $\underline{T} = 0$, $\overline{T} = \infty$ and a
process for which (4) holds, we can solve for $u_\epsilon$ in (3), obtaining

$$u_\epsilon = \frac{1}{R} \log \frac{1}{\epsilon},$$

a quantity which can statistically be estimated given sufficient loss data.
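For light-tailed claims this programme is straightforward to carry out numerically. The following sketch, with hypothetical parameter values and exponential claims (so that the answer can be checked in closed form), solves the Lundberg equation $\lambda\,(E e^{R Y_k} - 1) = cR$ for $R$ and converts the bound (4) into the capital $u_\epsilon$.

```python
# Illustrative sketch: adjustment (Lundberg) coefficient R and the implied
# capital u_eps = (1/R) log(1/eps) for exponential claims; hypothetical values.
import numpy as np
from scipy.optimize import brentq

lam, mu, c, eps = 10.0, 1.0, 12.0, 0.001   # claim rate, mean claim size, premium rate, epsilon
assert c / lam > mu                        # net-profit condition (NPC)

def lundberg_equation(r):
    # lambda (E exp(r Y) - 1) - c r for exponential claims with mean mu
    return lam * (1.0 / (1.0 - mu * r) - 1.0) - c * r

R = brentq(lundberg_equation, 1e-9, 1.0 / mu - 1e-9)   # adjustment coefficient
u_eps = np.log(1.0 / eps) / R                          # capital implied by the bound (4)
print(f"R = {R:.4f} (closed form 1/mu - lam/c = {1.0 / mu - lam / c:.4f})")
print(f"u_eps = {u_eps:.1f}")
```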


For operational risk losses, the small claims condition underlying the
so-called Cramer-Lundberg estimates (4) and (5) is typically not satisfied.
Operational risk losses are heavy-tailed (power tail behaviour), as can be
seen from Figures 11-14. Within the Cramer-Lundberg model, the infinite
horizon ($\overline{T} = \infty$) ruin estimate for $\psi(u) = \psi(u, \infty)$
becomes (see Embrechts and Veraverbeke [39], Embrechts et al. [36]):

$$\psi(u) \sim \Big( \frac{c}{\lambda} - \mu \Big)^{-1}
   \int_u^{\infty} (1 - G(x))\, dx, \qquad u \to \infty. \tag{6}$$

Hence the ruin probability $\psi(u)$ is determined by the tail of the loss
distribution $1 - G(x)$ for $x$ large, meaning that ruin (or a given limit
excess) is typically caused by one (or a few) large claim(s). For a more
detailed discussion on this "path leading to ruin", see Embrechts et al. [36],
Section 8.3 and the references given there. The asymptotic estimate (6) holds
under very general conditions of heavy-tailedness, the simplest one being
$1 - G(x) = x^{-\alpha} h(x)$ for $h$ slowly varying and $\alpha > 1$. In this
case (6) becomes

$$\psi(u) \sim C_\alpha\, u^{1-\alpha} h(u), \qquad u \to \infty, \tag{7}$$

where $C_\alpha = \big[ (\alpha - 1) \big( \frac{c}{\lambda} - \mu \big) \big]^{-1}$.
Hence ruin decays polynomially (slowly) as a function of the initial (risk)
capital $u$. Also for lognormal claims, estimate (6) holds. In the actuarial
literature, the former result was first proved by von Bahr [7], the latter by
Thorin and Wikstad [65]. The final version for so-called subexponential claims
is due to Embrechts and Veraverbeke [39]. In contrast to the small claims
regime estimates (4) and (5), the heavy-tailed claims case (6) seems to be
robust with respect to the underlying assumptions of the claims process. The
numerical properties of (6) are however far less satisfactory.
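One way to gauge the quality of (7) without new theory is Monte Carlo based on the Pollaczeck-Khinchine (geometric compound) representation of the infinite horizon ruin probability in the Cramer-Lundberg model. The sketch below does this for Pareto claims with hypothetical parameter values and compares the result with (7); it is an illustration only, not a recommended numerical procedure.

```python
# Illustrative sketch: infinite-horizon ruin probability for Pareto claims via
# the Pollaczeck-Khinchine representation, compared with the asymptotic (7).
import numpy as np

rng = np.random.default_rng(3)
alpha, lam, c, u = 2.5, 1.0, 2.0, 50.0   # Pareto tail index, claim rate, premium rate, capital
mu = alpha / (alpha - 1.0)               # mean of a Pareto(alpha) claim on [1, inf)
rho = lam * mu / c                       # must be < 1 (net-profit condition)

def integrated_tail_sample(size):
    # inverse transform for F_I(x) = (1/mu) * int_0^x (1 - G(y)) dy, G Pareto(alpha) on [1, inf)
    v = rng.uniform(size=size)
    small = v * mu                                                           # branch x <= 1
    large = (1.0 - (alpha - 1.0) * (v * mu - 1.0)) ** (1.0 / (1.0 - alpha))  # branch x > 1
    return np.where(v <= (alpha - 1.0) / alpha, small, large)

n_sim, ruined = 100_000, 0
for _ in range(n_sim):
    k = rng.geometric(1.0 - rho) - 1     # P(K = k) = (1 - rho) rho^k, k = 0, 1, 2, ...
    if k > 0 and integrated_tail_sample(k).sum() > u:
        ruined += 1

psi_mc = ruined / n_sim
psi_asymptotic = u ** (1.0 - alpha) / ((alpha - 1.0) * (c / lam - mu))   # estimate (7), h = 1
print(f"rho = {rho:.3f}, simulated psi({u}) = {psi_mc:.5f}, asymptotic (7) = {psi_asymptotic:.5f}")
```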

Besides the classical Cramer-Lundberg model, an estimate similar to (6)
also holds for the following processes:

- Replace the homogeneous Poisson process $(N(t))$ by a general renewal
  process; see Embrechts and Veraverbeke [39]. Here the claim inter-arrival
  times are still independent, but have a general distribution function, not
  necessarily an exponential one.

- Generalisations to risk processes with dependent inter-claim times,
  allowing for possible dependence between the arrival process and the
  claim sizes, are discussed in Asmussen [3], Section IX.4. The
  generalisations contain the so-called Markov-modulated models as a special
  case; see also Asmussen et al. [6]. In these models, the underlying
  intensity follows a finite-state Markov chain, enabling for instance
  the modelling of underlying changes in the economy in general or the
  market in particular.

- Ruin estimates for risk processes perturbed by a diffusion, or by more
  general stochastic processes, are for instance to be found in Furrer [41],
  Schmidli [64] and Veraverbeke [66].

- A very general result of type (7) for the distribution of the ultimate
  supremum of a random walk with a negative drift is derived in Mikosch
  and Samorodnitsky [56]. Mathematically, these results are equivalent
  to ruin estimation for a related risk model.

For all of these models an estimate of type (7) holds. Invariably, the
derivation is based on the so-called "one large claim" heuristics; see
Asmussen [3], p. 264. These heuristics may eventually play an important role
in the analysis of operational risk data.

As discussed above, as yet there is no clear stochastic model available for
the general operational risk process (1). Consequently, it would be useful to
find a way to obtain a broad class of risk processes for which (7) holds. A
solution to this problem is presented in Embrechts and Samorodnitsky [38]
through a combination of the "one large claim" heuristics and the notion
of operational time (time-change). Below we restrict our attention to the
infinite horizon case $\psi(u)$. First of all, estimate (7) is not fine enough
for accurate numerical approximations. It rather gives a benchmark estimate of
ruin (insolvency), delimiting the heavy-tailed ("one claim causes ruin")
situation from the light-tailed estimates in (4) and (5), where most (small)
claims contribute equally and ruin is remote, i.e. has an exponentially small
probability. For a discussion on numerical ruin estimates of type (7), see
Asmussen and Binswanger [4], and Asmussen et al. [5]; the keywords here
are "rare event simulation".
Suppose that we are able to estimate ruin over an infinite horizon for
a general stochastic (loss) process $(L_t)$, a special case of which is the
classical Cramer-Lundberg total claim process in (2) or the risk processes
listed above. Suppose now that, for this general loss process $(L_t)$, we have
a ruin estimate of form (7). From $(L_t)$, more general risk processes can be
constructed using the concept of a time-change $(\Delta(t))$. The latter is a
positive, increasing stochastic process that typically (but not exclusively)
models economic or market activity. The more general process $(L_{\Delta(t)})$
is the one we are really interested in, since its added flexibility could allow
us to model the stylised facts of operational risk data as discussed in
Section 2. The step from the classical Cramer-Lundberg model $(L_t)$ to the
more general process $(L_{\Delta(t)})$ is akin to the step from the classical
Black-Scholes-Merton model to more general stochastic volatility models. We
can now look at this general time-changed process and define its corresponding
infinite horizon ruin function:

$$\psi_{\Delta}(u) = P\Big( \sup_{t \geq 0} \big( L_{\Delta(t)} - ct \big) > u \Big).$$

We then ask for conditions on the process parameters involved, as well as for
conditions on $(\Delta(t))$, under which

$$\lim_{u \to \infty} \frac{\psi_{\Delta}(u)}{\psi(u)} = 1, \tag{8}$$
meaning that, asymptotically, ruin is of the same order of magnitude in the
time-changed (more realistic) process as it is for the original (more stylised)
process. These results can be interpreted as a kind of robustness charac-
terisation for general risk processes so that the polynomial ruin probability
estimate (7) holds. In Embrechts and Samorodnitsky [38], besides general
results for (8) to hold, specific examples are discussed. Motivated by the ex-
ample of transaction risk (see Table 1), Section 3 in the latter paper discusses
the case of mixing through Markov chain switching models, also referred to
as Markov-modulated or Markov renewal processes. In the context of oper-
ational risk, it is natural to consider a class of time-change processes $(\Delta(t))$
in which time runs at a different rate in different time intervals, depending
on the state of a certain underlying Markov chain. The Markov chain stays
a random amount of time in each state, with a distribution that depends
on that state. Going back to the transaction risk case, one can think of the
Markov chain states as resulting from an underlying market volume (inten-
sity) index. These changes in volumes traded may for instance have an effect
on back office errors. The results obtained in Embrechts and Samorodnit-
sky [38] may be useful to characterise interesting classes of loss processes
where ruin behaves as in (7). Recall from Figure 5 the fact that certain oper-
ational risk losses show periods of high (and low) intensity. Future dynamic
models for subcategories of operational risk losses will have to take these
characteristics into account. The discussion above mainly aims to show that
tools for such problems are at hand and await the availability of more detailed
loss databases.
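By way of illustration only, the following sketch generates loss arrival times from a two-state Markov-modulated Poisson process: the environment switches between a quiet and a busy regime and the loss intensity depends on the current state. All rates are hypothetical; the point is merely that such a mechanism reproduces the clustering of loss occurrences visible in Figure 5.

```python
# Illustrative sketch of a Markov-modulated loss arrival process: a two-state
# environment switches between "quiet" (0) and "busy" (1); hypothetical rates.
import numpy as np

rng = np.random.default_rng(4)
switch_rate = {0: 0.5, 1: 1.0}    # rate of leaving the quiet and the busy state
loss_rate = {0: 2.0, 1: 20.0}     # loss arrival intensity in each state
horizon, t, state = 20.0, 0.0, 0
arrival_times = []

while t < horizon:
    sojourn_end = min(t + rng.exponential(1.0 / switch_rate[state]), horizon)
    n_losses = rng.poisson(loss_rate[state] * (sojourn_end - t))   # losses during this sojourn
    arrival_times.extend(rng.uniform(t, sojourn_end, n_losses))
    t, state = sojourn_end, 1 - state                              # switch regime

arrival_times = np.sort(np.array(arrival_times))
print(f"{arrival_times.size} losses over [0, {horizon}], "
      f"mean inter-arrival time = {np.diff(arrival_times).mean():.3f}")
```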
Some remarks should be made at this point. Within classical insurance
risk theory, a full solution linking heavy-tailedness of the claim distribution
to the long-tailedness of the corresponding ruin probability is discussed in
Asmussen [3]. Alternative models leading to similar distributional conclusions
can be found in the analysis of teletraffic data; see for instance Resnick and
Samorodnitsky [62] and Finkenstadt and Rootzen [40]. Whereas the basic
operational risk model in (1) may be of a more general nature than the ones
discussed above, support seems to exist for the supposition that, under fairly
general conditions, the tail behaviour of $P(L^{T+1} > x)$ will be power-like.
Furthermore, the notion of time-change may seem somewhat artificial. This
technique has however been around in insurance mathematics for a long time
and is used to transform a complicated loss process into a more standard one;
see for instance Cramer [18] or Buhlmann [15]. Within finance, these techniques
were introduced through the fundamental work of Olsen and Associates on
$\vartheta$-time; see Dacorogna et al. [21]. Further work has been done by Ane and
Geman [1], Geman et al. [42, 43] and more recently Barndorff-Nielsen and
Shephard [8]; they use time-change techniques to transform a financial time
series with randomness in the volatility to a standard Black-Scholes-Merton
model. As stated above, the situation is somewhat akin to the relationship
between a Brownian motion-based model (such as the Black-Scholes-Merton
model) and more recent models based on general semimartingales. It is a
well-known result, see Monroe [57], that any semimartingale can be written
as a time-changed Brownian motion.

4.3 Further Tools

In the previous section, we briefly discussed some (heavy-tailed) ruin-type
estimates which, in view of the data already available on operational risk,
may become useful. From the realm of insurance, several further techniques
may be used. Below we mention some of them without entering into details.
Recall from (1) that a yearly operational risk variable will typically be of
the form

$$L = \sum_{k=1}^{N} Y_k, \tag{9}$$
where $N$ is some discrete random variable counting the total number of
claims within a given period across all $s$ loss classes, say, and $Y_k$
denotes the $k$-th claim. Insurance mathematics has numerous models of
type (9), starting with the case where $N$ is a random variable independent
of the iid claims $(Y_k)$ with distribution function $G$, say. In this case,
the probability of a loss exceeding a certain level equals

$$P(L > x) = \sum_{k=1}^{\infty} P(N = k) \big( 1 - G^{k*}(x) \big), \tag{10}$$

where $G^{k*}$ denotes the $k$-th convolution of $G$. Again, in the case that
$1 - G(x) = x^{-\alpha} h(x)$ and the moment generating function of $N$ is
analytic in 1, it is shown in Embrechts et al. [36] that

$$P(L > x) \sim E(N)\, x^{-\alpha} h(x), \qquad x \to \infty.$$

Several procedures exist for numerically calculating (10) under a wide range
of conditions. These include recursive methods such as the Panjer-Euler
method for claim number distributions satisfying
$P(N = k) = (a + b/k)\, P(N = k - 1)$ for $k = 1, 2, \dots$ (see Panjer [58]),
and Fast Fourier Transform methods (see Bertram [12]). Grubel and
Hermesmeier [48, 49] are excellent review papers containing further
references. The actuarial literature contains numerous
publications on the subject; good places to start are Panjer and Willmot [59]
and Hogg and Klugman [51].
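For concreteness, a minimal implementation of the Panjer recursion for a compound Poisson sum (for which $a = 0$ and $b = \lambda$ in the recursion above) might look as follows; the discretised severity distribution is hypothetical.

```python
# Illustrative Panjer recursion for a compound Poisson sum with a hypothetical
# discretised severity pmf; for Poisson(lam), a = 0 and b = lam.
import numpy as np

lam = 3.0                                   # Poisson parameter of the claim count N
g = np.array([0.0, 0.5, 0.3, 0.15, 0.05])   # discretised severity pmf on 0, 1, 2, ... units
m = 60                                      # number of aggregate loss probabilities to compute

f = np.zeros(m)
f[0] = np.exp(lam * (g[0] - 1.0))           # P(L = 0) = exp(lam (g_0 - 1))
for s in range(1, m):
    j = np.arange(1, min(s, len(g) - 1) + 1)
    f[s] = (lam / s) * np.sum(j * g[j] * f[s - j])   # Panjer step with a = 0, b = lam

print("P(L = 0), ..., P(L = 5):", np.round(f[:6], 4))
print("P(L > 20):", round(1.0 - f[:21].sum(), 4))
```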
Finally, looking at (1), several aggregation operations are taking place,
including the superposition of the different loss frequency processes
$(N^{t,i})_{i=1,\dots,s}$ and the aggregation of the different loss size
variables $\big(Y_k^{t,i}\big)_{k=1,\dots,N^{t,i},\, i=1,\dots,s}$.
For the former, techniques from the theory of point processes are available;
see for instance Daley and Vere-Jones [22]. The issue of dependence modelling
within and across operational risk loss types will no doubt play a crucial
role; copula techniques, as introduced in risk management in Embrechts et
al. [37], can certainly be used here.
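As a final illustration, the following hypothetical sketch couples two loss-type severities through a Gaussian copula while keeping Pareto margins; the correlation value and tail indices are invented, and Embrechts et al. [37] discuss the pitfalls attached to such constructions.

```python
# Illustrative sketch: two dependent loss types via a Gaussian copula with
# Pareto margins; correlation and tail indices are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
rho, alpha1, alpha2, n = 0.6, 1.5, 2.0, 10_000

cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
u = norm.cdf(z)                               # uniform margins with Gaussian dependence
y1 = (1.0 - u[:, 0]) ** (-1.0 / alpha1)       # Pareto(alpha1) losses of type 1
y2 = (1.0 - u[:, 1]) ** (-1.0 / alpha2)       # Pareto(alpha2) losses of type 2

joint_tail = np.mean((y1 > np.quantile(y1, 0.99)) & (y2 > np.quantile(y2, 0.99)))
print("correlation of the uniform margins:", round(np.corrcoef(u[:, 0], u[:, 1])[0, 1], 3))
print("P(both types exceed their own 99% quantile):", joint_tail)
```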

5 Final Comments
As stated in the Introduction, tools from the realm of insurance as discussed
in this paper may well become relevant, conditional on the further devel-
opment and implementation of quantitative operational risk measurement
within the financial industry. Our paper aims at encouraging a better ex-
change of ideas between actuaries and risk managers. Even if one assumes
full replicability of operational risk losses within the several operational risk
subcategories, their interdependence will make detailed modelling difficult.
The theory presented in this paper is based on specific conditions and can
be applied in cases where testing has shown that these underlying assump-
tions are indeed fulfilled. The ongoing discussions around Basel II will show
at which level the tools presented will become useful. However, we strongly
doubt that a full operational risk capital charge can be based solely on sta-
tistical modelling.
Some of the tools introduced and mainly exemplified through their appli-
cation to the quantitative modelling of operational risk are no doubt useful
well beyond it, in more general risk management. Further research will have
to look more carefully at the risk management issues underlying the central
bank landscape. In particular, the issue of market liquidity under extreme
events, together with the more active management of foreign reserves and
the longer-term view, will necessitate tools that complement the existing ones
in risk management for commercial banks. Insurance analytics as presented
in this paper, and discussed in more detail in Embrechts [33], will no doubt
form part of this.

References

[1] Ane, T. and Geman, H. (2000). Order Flow, Transaction Clock and
    Normality of Asset Returns. Journal of Finance, 55, pp. 2259-84.

[2] Artzner, P., Delbaen, F., Eber, J. M., and Heath, D. (1999). Coherent
    Measures of Risk. Mathematical Finance, 9, pp. 203-28.

[3] Asmussen, S. (2000). Ruin Probabilities. World Scientific.

[4] Asmussen, S. and Binswanger, K. (1997). Simulation of Ruin Probabilities
    for Subexponential Claims. Astin Bulletin, 27(2), pp. 297-318.

[5] Asmussen, S., Binswanger, K., and Hojgaard, B. (2000). Rare Events
    Simulation for Heavy-tailed Distributions. Bernoulli, 6(2), pp. 303-22.

[6] Asmussen, S., Henriksen, L. F., and Kluppelberg, C. (1994). Large Claims
    Approximations for Risk Processes in a Markovian Environment. Stochastic
    Processes and their Applications, 54, pp. 29-43.

[7] von Bahr, B. (1975). Asymptotic Ruin Probabilities when Exponential
    Moments Do Not Exist. Scandinavian Actuarial Journal, pp. 6-10.

[8] Barndorff-Nielsen, O. E. and Shephard, N. (2002). Financial Volatility
    and Levy-based Models. Book project.

[9] Basel Committee on Banking Supervision (2001). Consultative Document.
    Overview of the New Basel Capital Accord.

[10] Batten, D. S. (1982). Central Banks' Demand for Foreign Reserves under
     Fixed and Floating Exchange Rates. Federal Reserve Bank of St. Louis.

[11] Bedford, T. and Cooke, R. M. (2001). Probabilistic Risk Analysis:
     Foundations and Methods. Cambridge University Press.

[12] Bertram, J. (1981). Numerische Berechnung von Gesamtschadenverteilungen.
     Blatter der DGVM, 15, pp. 175-94.

[13] Blum, P., Dias, A., and Embrechts, P. (2002). The ART of Dependence
     Modelling: The Latest Advances in Correlation Analysis. In: Alternative
     Risk Strategies (ed. M. Lane). Risk Waters Group, London, pp. 339-56.

[14] Borio, C., Furfine, C., and Lowe, P. (2001). Procyclicality of the
     Financial System and Financial Stability; Issues and Policy Options.
     BIS Papers No. 1.

[15] Buhlmann, H. (1970). Mathematical Methods in Risk Theory. Springer.

[16] Chavez-Demoulin, V. and Embrechts, P. (2004). Smooth Extremal Models
     in Finance and Insurance. Journal of Risk and Insurance, to appear.

[17] Coles, S. (2001). An Introduction to Statistical Modeling of Extreme
     Values. Springer.

[18] Cramer, H. (1930). On the Mathematical Theory of Risk. Skandia Jubilee
     Volume, Stockholm.

[19] Crouhy, M., Galai, D., and Mark, R. (2000). Risk Management.
     McGraw-Hill, New York.

[20] Cruz, M. G. (2002). Modelling, Measuring and Hedging Operational Risk.
     Wiley.

[21] Dacorogna, M. M., Gencay, R., Muller, U. A., Olsen, R. B., and Pictet,
     O. V. (2001). An Introduction to High-frequency Finance. Academic Press.

[22] Daley, D. J. and Vere-Jones, D. (1988). An Introduction to the Theory
     of Point Processes. Springer.

[23] Danielsson, J., Embrechts, P., Goodhart, C., Keating, C., Muennich, F.,
     Renault, O., and Shin, H. S. (2001). An Academic Response to Basel II.
     Financial Markets Group, London School of Economics, Special Paper
     No. 130.

[24] Daykin, C. D., Pentikainen, T., and Pesonen, M. (1994). Practical Risk
     Theory for Actuaries. Chapman and Hall/CRC, London.

[25] De Brandt, O. and Hartman, P. (2000). Systemic Risk: A Survey. Working
     Paper No. 35, European Central Bank, Frankfurt.

[26] Does, R. J. M. M., Roes, K. C. B., and Trip, A. (1999). Statistical
     Process Control in Industry. Kluwer Academic.

[27] Duffy, P. (2002). Operational Risk. The Actuary, October 2002,
     pp. 24-25.

[28] Ebnother, S. (2001). Quantitative Aspects in Operational Risk. Diploma
     thesis, ETH Zurich.

[29] Ebnother, S., Vanini, P., McNeil, A. J., and Antolinez-Fehr, P. (2003).
     Operational Risk: A Practitioner's View. Journal of Risk, 5(3),
     pp. 1-15.

[30] Embrechts, P. (ed.) (2000). Extremes and Integrated Risk Management.
     Risk Waters Group, London.

[31] Embrechts, P. (2003). Extremes in Economics and the Economics of
     Extremes. In: Extreme Values in Finance, Telecommunications, and the
     Environment (eds. B. Finkenstadt, H. Rootzen). Chapman and Hall/CRC,
     London, pp. 169-83.

[32] Embrechts, P. (2003). Insurance Analytics. Guest Editorial, British
     Actuarial Journal, 8(IV), pp. 639-41.

[33] Embrechts, P. (2003). Insurance Analytics: Actuarial Tools for Financial
     Risk Management. Opening Lecture at the XXXIV Astin Colloquium in
     Berlin.

[34] Embrechts, P., Frey, R., and McNeil, A. J. (2004). Quantitative Risk
     Management: Concepts, Techniques and Tools. Book manuscript, ETH Zurich.

[35] Embrechts, P., Furrer, H., and Kaufmann, R. (2003). Quantifying
     Regulatory Capital for Operational Risk. Derivatives Use, Trading and
     Regulation, 9(3), pp. 217-33.

[36] Embrechts, P., Kluppelberg, C., and Mikosch, T. (1997). Modelling
     Extremal Events for Insurance and Finance. Springer.

[37] Embrechts, P., McNeil, A. J., and Straumann, D. (2002). Correlation and
     Dependence in Risk Management: Properties and Pitfalls. In: Risk
     Management: Value at Risk and Beyond (ed. M. Dempster). Cambridge
     University Press, pp. 176-223.

[38] Embrechts, P. and Samorodnitsky, G. (2003). Ruin Problem and How Fast
     Stochastic Processes Mix. The Annals of Applied Probability, 13,
     pp. 1-36.

[39] Embrechts, P. and Veraverbeke, N. (1982). Estimates for the Probability
     of Ruin with Special Emphasis on the Possibility of Large Claims.
     Insurance: Mathematics and Economics, 1, pp. 55-72.

[40] Finkenstadt, B. and Rootzen, H. (eds.) (2003). Extreme Values in
     Finance, Telecommunications, and the Environment. Chapman and Hall/CRC,
     London.

[41] Furrer, H. (1998). Risk Processes Perturbed by α-stable Levy Motion.
     Scandinavian Actuarial Journal, pp. 59-74.

[42] Geman, H., Madan, D. B., and Yor, M. (2001). Time Changes for Levy
     Processes. Mathematical Finance, 11(1), pp. 79-96.

[43] Geman, H., Madan, D. B., and Yor, M. (2002). Stochastic Volatility,
     Jumps and Hidden Time Changes. Finance and Stochastics, 6(1), pp. 63-90.

[44] Gencay, R. and Selcuk, F. (2001). Overnight Borrowing, Interest Rates
     and Extreme Value Theory. Working Paper, Bilkent University.

[45] Goodhart, C. (1988). The Evolution of Central Banks. MIT Press.

[46] Goodhart, C. (2001). Operational Risk. Special Paper 131, Financial
     Markets Group, London School of Economics.

[47] Grandell, J. (1991). Aspects of Risk Theory. Springer.

[48] Grubel, R. and Hermesmeier, R. (1999). Computation of Compound
     Distributions I: Aliasing Errors and Exponential Tilting. Astin
     Bulletin, 29, pp. 197-214.

[49] Grubel, R. and Hermesmeier, R. (2000). Computation of Compound
     Distributions II: Discretization Errors and Richardson Extrapolation.
     Astin Bulletin, 30, pp. 309-31.

[50] Hartman, P., Straetmans, S., and de Vries, C. G. (2001). Asset Market
     Linkages in Crisis Periods. Working Paper No. 71, European Central Bank,
     Frankfurt.

[51] Hogg, R. V. and Klugman, S. A. (1984). Loss Distributions. Wiley.

[52] Leadbetter, M. R. (1991). On a Basis for "Peaks over Threshold"
     Modeling. Statistics and Probability Letters, 12, pp. 357-62.

[53] McNeil, A. J. (1997). Estimating the Tails of Loss Severity
     Distributions Using Extreme Value Theory. Astin Bulletin, 27,
     pp. 117-37.

[54] McNeil, A. J. and Saladin, T. (1997). The Peaks over Thresholds Method
     for Estimating High Quantiles of Loss Distributions. Proceedings of
     XXVIIth International Astin Colloquium, Cairns, Australia, pp. 23-43.

[55] Medova, E. (2000). Measuring Risk by Extreme Values. Risk, November
     2000, pp. 20-26.

[56] Mikosch, T. and Samorodnitsky, G. (2000). The Supremum of a Negative
     Drift Random Walk with Dependent Heavy-tailed Steps. The Annals of
     Applied Probability, 10, pp. 1025-64.

[57] Monroe, I. (1978). Processes that Can Be Embedded in Brownian Motion.
     Annals of Probability, 6(1), pp. 42-56.

[58] Panjer, H. (1981). Recursive Evaluation of a Family of Compound
     Distributions. Astin Bulletin, 12, pp. 22-26.

[59] Panjer, H. and Willmot, G. (1992). Insurance Risk Models. Society of
     Actuaries, Schaumburg, Illinois.

[60] Pezier, J. (2002). A Constructive Review of Basel's Proposals on
     Operational Risk. Working Paper, ISMA Centre, University of Reading.

[61] Pezier, J. (2002). Operational Risk Management. Working Paper, ISMA
     Centre, University of Reading.

[62] Resnick, S. and Samorodnitsky, G. (2000). A Heavy Traffic Limit Theorem
     for Workload Processes with Heavy-tailed Service Requirements.
     Management Science, 46, pp. 1236-48.

[63] Rolski, T., Schmidli, H., Teugels, J., and Schmidt, V. (1999).
     Stochastic Processes for Insurance and Finance. Wiley.

[64] Schmidli, H. (1999). Perturbed Risk Processes: A Review. Theory of
     Stochastic Processes, 5, pp. 145-65.

[65] Thorin, O. and Wikstad, N. (1977). Calculation of Ruin Probabilities
     when the Claim Distribution Is Lognormal. Astin Bulletin, 9, pp. 231-46.

[66] Veraverbeke, N. (1993). Asymptotic Estimates for the Probability of
     Ruin in a Poisson Model with Diffusion. Insurance: Mathematics and
     Economics, 13, pp. 57-62.
