Ruin Theory Revisited: Stochastic Models For Operational Risk

Paul Embrechts and Roger Kaufmann
Department of Mathematics, ETH Zurich, CH-8092 Zurich, Switzerland

Gennady Samorodnitsky
School of ORIE, Cornell University, Ithaca, NY, USA

April 2, 2004

Research supported by Credit Suisse Group, Swiss Re and UBS AG through RiskLab, Switzerland. Research partially supported by NSF grant DMS-0071073 at Cornell University.
Abstract
The new Basel Capital Accord has opened up a discussion con-
cerning the measurement of operational risk for banks. In our paper
we do not take a stand on the issue of whether or not a quantitatively
measured risk capital charge for operational risk is desirable; how-
ever, given that such measurement will come about, we review some
of the tools which may be useful towards the statistical analysis of
operational loss data. We also discuss the relevance of these tools for
foreign reserves risk management of central banks.
1 Introduction
In [9], the following definition of operational risk is to be found: The risk
of losses resulting from inadequate or failed internal processes, people and
systems or from external events.
These tools used for the modelling of operational risk are discussed in the
present paper. Secondly, whether for the demand on foreign reserves one ad-
heres to the intervention model or an asset choice model (see Batten [10]),
central banks face risk management decisions akin to commercial banks, al-
beit under different political and economic constraints. Finally, the role and
function of central banks is no doubt under discussion (see Goodhart [45]),
and therefore risk management issues which were less relevant some years
ago may become important now. In particular, operational risk ought to be
of great concern to any central bank. As discussed in Batten [10], a central
bank typically confronts two types of economic phenomena, expected and
unexpected, to which it makes policy responses. Faced with unanticipated
economic but also external (e.g. catastrophic environmental) events to which
it may wish to respond, it must hold additional reserves. Furthermore, as
any institution, central banks face operational losses owing to system failure
and fraud, for instance. How such losses impact on foreign reserve policy very
much depends on the portfolio model chosen (Batten [10]).
In Table 1, taken from Crouhy et al. [19], we have listed some typical
types of operational risks. It is clear from this table that some risks are
difficult to quantify (like incompetence under people risk), whereas others
lend themselves much more easily to quantification (as for instance execution error
under transaction risk). As already alluded to above, most of the techniques
discussed in this paper will have a bearing on the latter types of risk. In the
terminology of Pezier [61], this corresponds to the ordinary operational risks.
Clearly, the modelling of the latter type of risks is insufficient to base a full
capital charge concept on.
The paper is organised as follows. In Section 2 we first look at some
stylised facts of operational risk losses before formulating, in a mathematical
form, the capital charge problem for operational risk (Pillar 1) in Section 3.
In Section 4 we present a possible theory together with its limitations for
analysing such losses, given that a sufficiently detailed loss database is avail-
able. We also discuss some of the mathematical research stemming from
questions related to operational risk. Most of our discussion will use lan-
guage close to Basel II and commercial banking. At several points, we shall
highlight the relevance of the tools presented for risk management issues for
central banks.
category, losses are (or better, have to be) well-defined. Below we give an
example of historical loss information for three different loss types. These
losses correspond to transformed real data. As banks gather data, besides
reporting current losses, an effort is made to build up databases going back
about 10 years. The latter no doubt involves possible selection bias, a problem
one will have to live with until more substantial data warehouses on operational
risk become available. One possibility for the latter could be cross-bank
pooling of loss data in order to find the main characteristics of the underlying
loss distributions against which a particular bank's own loss experience can
be calibrated. Such data pooling is well-known from non-life insurance or
credit risk management. For Basel II, one needs to look very carefully into
the economic desirability of such a pooling arrangement from a regulatory,
risk management point of view. Whereas this would be most useful for the
very rare large losses (exceptional losses), at the same time, such losses are
often very specific to the institution and hence from that point of view make
pooling somewhat questionable.
For obvious reasons, operational risk data are hard to come by. This
is to some extent true for commercial banks, but considerably more so for
central banks. One reason is no doubt the issue of confidentiality, another
the relatively short period over which historical data have been consistently
gathered. From the quantifiable real data we have seen in practice, we sum-
marise below some of the stylised facts; these seem to be accepted throughout
the industry for several operational risk categories. By way of example, in
Figures 1, 2 and 3 we present loss information on three types of operational
losses, which are for the purpose of this paper referred to as Types 1, 2 and 3.
As stated above, these data correspond to modified real data. Figure 4 pools
these losses across types. For these pooled losses, Figure 5 contains quarterly loss numbers.
One of the stylised facts is that loss occurrence times are definitely irregularly spaced in time, and also
show (especially for Type 3, see also Figure 5) a tendency to increase
over time. This non-stationarity can partly be explained by the already
mentioned selection bias.
[Figures 1-4: loss amounts over time (1992-2002) for operational risk losses of Types 1, 2 and 3 and for the pooled losses.]
Any serious attempt at analytic modelling will at least have to take the above
stylised facts into account. The analytic modelling referred to is not primar-
[Figure 5: quarterly loss numbers for the pooled operational risk losses.]
indeed equally important, EVT formulates very clearly under what condi-
tions estimates on extreme events can be worked out. Especially with regard
to exceptional losses (Pezier [61]), there is very little that statistical theory,
including EVT, can contribute. On the other hand, EVT is very useful when
it comes to analysing extreme events such as catastrophic storms or floods,
where these events occur within a specific physical or environmental model
and where numerous observations on normal events exist; see Finkenstädt
and Rootzén [40]. Clearly a case like Barings falls outside the range of EVT's
applicability. On the other hand, when data on sufficiently many normal and a few
extreme events within a well-defined class of risks exist, then EVT offers a
very powerful statistical tool allowing one to extrapolate from the normal
to the extreme. Numerous publications within financial risk management ex-
emplify this; see for instance Embrechts [30]. Specific applications of EVT
to risk management questions for central banks can be found in De Bandt
and Hartmann [25] and Hartmann et al. [50]. Relevant problems where EVT
technology could be applied are discussed in Borio et al. [14]. These papers
mainly concern spillover of crises between financial markets, contagion, sys-
temic risk and financial stability. An example of an EVT analysis related to
the interest rate crisis of 2000 in Turkey involving interventions at the cur-
rency level (the lira) by the Turkish central bank is discussed in Gençay and
Selçuk [44]. Embrechts [31] discusses the broader economic issues underlying
the application of EVT to financial risk management.
In the next sections we concentrate on the calculation of an operational
risk charge based on EVT and related actuarial techniques.
3 The Problem
In order to investigate the kind of methodological problems one faces when
trying to calculate a capital charge or reserve for (quantifiable) operational
risks, we introduce some mathematical notation.
A typical operational risk database will consist of realisations of random
variables
$$\bigl\{\, Y_k^{t,i} : t = 1, \dots, T,\; i = 1, \dots, s \text{ and } k = 1, \dots, N^{t,i} \,\bigr\},$$
where $t$ indexes the year, $i$ the loss type and $N^{t,i}$ denotes the number of losses of type $i$ recorded in year $t$, and
where $d^{t,i}$ is some lower threshold below which losses are disregarded. Here
$I_A(x) = 1$ whenever $x \in A$, and 0 otherwise. Hence, the total loss amount
for year $t$ becomes
$$L^t = \sum_{i=1}^{s} \sum_{k=1}^{N^{t,i}} Y_k^{t,i} = \sum_{i=1}^{s} L^{t,i}, \qquad t = 1, \dots, T. \qquad (1)$$
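As a purely illustrative sketch of the bookkeeping behind (1), the snippet below aggregates a small list of hypothetical loss records (field names and numbers are made up) into per-type totals $L^{t,i}$ and yearly totals $L^t$, discarding losses below a threshold $d$:

```python
from collections import defaultdict

# Hypothetical loss records (year t, risk type i, loss amount); purely illustrative data.
records = [
    (1998, "fraud", 3.2), (1998, "system failure", 0.7),
    (1999, "fraud", 11.5), (1999, "transaction", 0.2), (1999, "system failure", 5.1),
]

d = 0.5  # lower reporting threshold d^{t,i}, taken constant here for simplicity

# L^{t,i}: total loss in year t for risk type i, ignoring losses below the threshold
L_ti = defaultdict(float)
for year, rtype, amount in records:
    if amount > d:
        L_ti[(year, rtype)] += amount

# L^t: total loss in year t across all s risk types, as in equation (1)
L_t = defaultdict(float)
for (year, rtype), total in L_ti.items():
    L_t[year] += total

print(dict(L_t))   # yearly totals, e.g. 1998 -> 3.9, 1999 -> 16.6 (up to floating point)
```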
A capital charge for the coming year can then be based on a high quantile of the distribution of $L^{T+1}$, i.e. on the operational risk Value-at-Risk
$$\mathrm{ORVaR}^{T+1}_{1-\alpha} = F^{-1}_{L^{T+1}}(1-\alpha),$$
[Figure: the distribution function of $L^{T+1}$ with $\mathrm{ORVaR}^{T+1}_{1-\alpha}$ marked as its $(1-\alpha)$-quantile; the tail area beyond it corresponds to $100\alpha\%$.]
or more sophisticated coherent risk measures; see Artzner et al. [2]. Further-
more, based on extreme value methodology, one could estimate a conditional
loss distribution function for the operational risk category (categories) under
investigation.
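Purely as an illustration of how such risk measures could be estimated from a sample of annual aggregate losses (the sample below is simulated; the level and Pareto parameters are made up), a minimal sketch:

```python
import numpy as np

def or_var_cvar(annual_losses, alpha=0.01):
    """Empirical ORVaR_{1-alpha} and CVaR for a sample of annual aggregate losses.
    A simple sketch: in practice the quantile would come from a fitted model for F_{L^{T+1}}."""
    losses = np.sort(np.asarray(annual_losses))
    var = np.quantile(losses, 1.0 - alpha)          # (1 - alpha)-quantile of the loss distribution
    tail = losses[losses > var]
    cvar = tail.mean() if tail.size else var        # mean loss beyond the VaR level
    return var, cvar

# illustrative use with a heavy-tailed (Pareto-type) sample of annual losses
rng = np.random.default_rng(1)
sample = rng.pareto(1.63, size=10_000) * 5.0
print(or_var_cvar(sample, alpha=0.01))
```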
4 Towards a Theory
Since certain operational risk data are in many ways akin to insurance losses,
it is clear that methods from the field of (nonlife) insurance can play a fun-
damental role in their quantitative analysis. In this section we discuss some
of these tools, also referred to as Insurance Analytics. For a discussion of the
latter terminology, see Embrechts [32]. A further comparison with actuar-
ial methodology can, for instance, be found in Duffy [27]. As mentioned in
the Introduction, we have also made some references to EVT applications to
specific risk management issues for central banks; the comments made below
also apply to these.
(EVT-2) Use the so-called Peaks over Threshold (POT) method to fit an
EVT model to the data above u = 1; plot the data (dots) and the
fitted model (solid line) on log-log scale. Linearity supports Pareto-type
power behaviour of the loss distribution, $P(X_1 > x) = x^{-\alpha} h(x)$;
see Figure 9. Here $h$ is a so-called slowly varying function, i.e. for all
$x > 0$, $\lim_{t \to \infty} h(tx)/h(t) = 1$. For $h \equiv c$, a constant, a log-log plot would be
linear.
(EVT-3) Estimate risk measures such as a 99% VaR and 99% CVaR
and calculate 95% confidence intervals around these risk measures; see
Figure 9.
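A minimal sketch of steps (EVT-2) and (EVT-3), using scipy's generalised Pareto distribution (the paper itself refers to the EVIS software); the data, threshold and levels below are simulated and purely illustrative, not the fire data analysed in the paper:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
losses = rng.pareto(2.0, size=5000)       # illustrative heavy-tailed loss data

u = 1.0                                   # threshold as in step (EVT-2)
excesses = losses[losses > u] - u

# Fit a generalised Pareto distribution to the excesses (location fixed at 0)
xi, loc, beta = genpareto.fit(excesses, floc=0)

# Tail estimate: P(X > x) ~ (N_u / n) * GPD survival function of (x - u), for x > u
n, n_u = losses.size, excesses.size
def tail(x):
    return (n_u / n) * genpareto.sf(x - u, xi, loc=0, scale=beta)

# Step (EVT-3): 99% VaR and CVaR implied by the fitted model
p = 0.99
var99 = u + genpareto.ppf(1 - (1 - p) * n / n_u, xi, loc=0, scale=beta)
# GPD-based conditional tail expectation beyond the VaR (finite only for xi < 1)
cvar99 = (var99 + beta - xi * u) / (1 - xi) if xi < 1 else np.inf
print(xi, var99, cvar99, tail(var99))     # tail(var99) should be close to 1 - p
```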
Figure 8: Empirical mean excess function for the fire loss data.
[Figure 9: industrial fire data, empirical and fitted distribution tails on log-log scale, with estimates for VaR and CVaR; x on log scale.]
consideration can be worked out. The techniques used belong to the realm of
maximum likelihood theory. We would however like to stress "under precise
model assumptions". In Embrechts et al. [35] a simulation study by McNeil
and Saladin [54] is reported which estimates, in the case of independent and
identically distributed (iid) data, the number of observations needed in order
to achieve a preassigned accuracy. For instance, in the iid case and a Pareto
loss distribution with tail index 2 (a realistic assumption), in order to achieve
a reasonable accuracy for the VaR at α = 0.001, a minimum number of 100
observations above a 90% threshold u is needed (corresponding to an original
1,000 observations).1
The basic result underlying the POT method is that the marked point
process of excesses over a high threshold u, under fairly general (though very
precise) conditions, can be well approximated by a compound Poisson process
(see Figure 10):
$$\sum_{k=1}^{N(u)} Y_k \, \varepsilon_{T_k},$$
where $(Y_k)$ iid have a generalised Pareto distribution, $\varepsilon_{T_k}$ denotes the point mass at the exceedance time $T_k$, and $N(u)$ denotes the
number of exceedances of $u$ by $(X_k)$. The exceedance times of $u$ form (in the
limit) a homogeneous Poisson process and both are independent. See Lead-
better [52] for details. A consequence of the Poisson property is that inter-
exceedance times of $u$ are iid exponential. Hence such a model forms a good
first guess. More advanced techniques can be introduced taking, for instance,
non-stationarity and covariate modelling into account; see Embrechts [30],
Chavez-Demoulin and Embrechts [16] and Coles [17] for a discussion of these
techniques. The asymptotic independence between exceedance times and ex-
cesses makes likelihood fitting straightforward.
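A small simulation sketch of this limiting model (all parameters are illustrative): exceedance times from a homogeneous Poisson process, iid generalised Pareto marks, and a Kolmogorov-Smirnov check that the inter-exceedance times look exponential:

```python
import numpy as np
from scipy.stats import genpareto, expon, kstest

rng = np.random.default_rng(42)

# POT limit model: exceedance times of u form a homogeneous Poisson process,
# excesses Y_k are iid generalised Pareto, and the two are independent.
lam, xi, beta, horizon = 5.0, 0.4, 1.0, 200.0   # illustrative parameters
n_exc = rng.poisson(lam * horizon)
T = np.sort(rng.uniform(0.0, horizon, size=n_exc))                # exceedance times given N(u)
Y = genpareto.rvs(xi, scale=beta, size=n_exc, random_state=rng)   # GPD excesses (marks)

# Diagnostic used as a "first guess" check: inter-exceedance times should be iid Exp(lam)
waits = np.diff(T)
print(kstest(waits, expon(scale=1.0 / lam).cdf))
```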
1 See also Embrechts et al. [36], pp. 194, 270 and 343 for the need to check conditions for the underlying data before an EVT analysis can be used. EVIS allows for several diagnostic checks on these conditions.
[Figure: sample path of $(X_n)$ with threshold $u$, exceedance times $T_1, \dots, T_{N(u)}$ and excesses $Y_1, \dots, Y_{N(u)}$.]
Figure 10: Stylised presentation of the POT method.
Turning to the mean excess plots for the operational risk data from Figures 1-3 (for the type-specific data) and Figure 4 (for the pooled data),
we clearly see the typical increasing (nearly linear) trends indicating heavy-tailed, Pareto-type losses; see Figures 11-14 and compare them with Figure 8. As a first step, we can carry out the above extreme value analysis
for the pooled data, though a refined analysis, taking non-stationarity into
account, is no doubt necessary. Disregarding the possible non-stationarity of
the data, one could be tempted to use the POT method and fit a generalised
Pareto distribution to the pooled losses above u = 0.4, say. Estimates for the
99% VaR and the 99% CVaR, including their 95% confidence intervals, are
given in Figure 15. For the VaR we get a point estimate of 9.1, and a 95%
confidence interval of (6.0, 18.5). The 99% CVaR beyond 9.1 is estimated
as 25.9, and the lower limit for its 95% confidence interval is 11.7. Since, as
in the fire insurance case, the tails are very heavy (with an estimated tail parameter of 1.63), we get a very
large estimate for the upper confidence limit for the CVaR.
As already discussed before, the data in Figure 4 may contain a transition
from more sparse data over the first half of the period under investigation
to more frequent losses over the second half.
Figure 11: Mean excess plot for operational risk losses, Type 1.
Figure 12: Mean excess plot for operational risk losses, Type 2.
Figure 13: Mean excess plot for operational risk losses, Type 3.
Figure 14: Mean excess plot for pooled operational risk losses.
Figure 15: Empirical and fitted distribution tails for pooled operational losses on log-log scale, including estimates for VaR and CVaR.
It also seems that the early losses (in Figure 4 for instance) are not only more sparse, but also heavier. Again, this may be due to the way in which operational loss databases
are built up. When gathering data for years some distance in the past, one
only remembers the larger losses. Our EVT analysis should be adjusted for
such a switch in size and/or intensity. Though Chavez-Demoulin and Embrechts [16] contains the relevant methodology, one should however realise
that for such more advanced modelling, many more observations are needed.
In the next section, we make a more mathematical (actuarial) excursion
in the realm of insurance risk theory. Risk theory provides models for ran-
dom losses in a dynamic setting and yields techniques for the calculation of
reserves given solvency constraints; see for instance Daykin et al. [24]. This
theory has been applied in a variety of contexts and may also yield a rele-
vant toolkit, especially in combination with EVT, if central banks were to
manage their foreign reserves more actively and hence be more exposed to
market, credit, and operational risk. In particular, ruin theory provides a
longer-term view on solvency for a dynamic loss process. We discuss some of
the key aspects of ruin theory with a specific application to operational risk
below.
Given that (1) yields the total operational risk loss of s different subcate-
gories during a given year, it can be seen as resulting from a superposition
of several (namely s) compound processes. So far, we are not aware of any
studies which establish detailed features of individual processes or their in-
terdependencies. Note that in Ebnöther [28] and Ebnöther et al. [29], condi-
tions on the aggregated process are imposed: independence, or dependence
through a common Poisson shock model. For the moment, we summarise (1) as
$$L_t = \sum_{k=1}^{N(t)} Y_k,$$
where $N(t)$ is the total number of losses over a time period $[0, t]$ across all
$s$ categories and the $Y_k$'s are the individual losses. We drop the additional
indices.
From an actuarial point of view, it would now be natural to consider an
initial (risk) capital $u$ and a premium rate $c > 0$ and define the cumulative
risk process
$$C_t = u + ct - L_t, \qquad t \ge 0. \qquad (2)$$
In Figure 16 we have plotted such a risk process for the pooled operational
risk losses shown in Figure 4. Again, the regime switch is clearly seen,
splitting the time axis into roughly pre- and post-1998.
Figure 16: Risk process $C_t$ with u = 50, c = 28 and the loss process from Figure 4.
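A minimal sketch of how the surplus path $C_t = u + ct - L_t$ in (2) could be evaluated on a time grid from a list of loss times and amounts; the loss data below are made up, and u = 50, c = 28 are reused from Figure 16 purely as illustration:

```python
import numpy as np

def risk_process(u, c, loss_times, loss_amounts, t_grid):
    """C_t = u + c*t - L_t, with L_t the cumulative losses up to time t (equation (2))."""
    loss_times = np.asarray(loss_times)
    loss_amounts = np.asarray(loss_amounts)
    order = np.argsort(loss_times)
    cum = np.cumsum(loss_amounts[order])
    # number of losses that occurred up to each grid point
    idx = np.searchsorted(loss_times[order], t_grid, side="right")
    L = np.where(idx > 0, cum[np.maximum(idx - 1, 0)], 0.0)
    return u + c * np.asarray(t_grid) - L

# illustrative use with made-up loss data
t_grid = np.linspace(0.0, 10.0, 1001)
C = risk_process(50.0, 28.0,
                 loss_times=[0.8, 2.5, 2.6, 7.1],
                 loss_amounts=[12.0, 40.0, 5.0, 90.0],
                 t_grid=t_grid)
print(C.min())   # the minimum of the surplus over the horizon
```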
Given a small $\varepsilon > 0$, for the process in (2), a risk capital $u$ can then be
calculated by putting the so-called ruin probability, i.e. the probability for the
surplus process $C_t$ to become negative over a given time horizon $[T, \bar{T}]$, equal
to $\varepsilon$:
$$\psi(u; T, \bar{T}) = P\Bigl( \inf_{T \le t \le \bar{T}} \bigl( u + ct - L_t \bigr) < 0 \Bigr) = \varepsilon. \qquad (3)$$
For $c = 0$, $T = T^{+}$ and $\bar{T} = T + 1$, this yields $u = \mathrm{ORVaR}^{T+1}_{1-\varepsilon}$.
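A crude Monte Carlo sketch of (3) for a compound Poisson loss process over a one-year horizon; the intensity, the loss distribution and the values of u and c are illustrative assumptions only:

```python
import numpy as np

def ruin_probability(u, c, lam, draw_loss, horizon=1.0, n_paths=20_000, rng=None):
    """Monte Carlo estimate of psi(u; 0, horizon) = P( inf_{0<=t<=horizon} (u + c t - L_t) < 0 )
    for a compound Poisson loss process with intensity lam and loss sampler draw_loss."""
    rng = rng or np.random.default_rng()
    ruined = 0
    for _ in range(n_paths):
        t, L = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)   # next loss arrival time
            if t > horizon:
                break
            L += draw_loss(rng)
            if u + c * t - L < 0:             # the surplus can only become negative at a loss instant
                ruined += 1
                break
    return ruined / n_paths

# illustrative use: Pareto-type individual losses, one-year horizon
print(ruin_probability(u=50.0, c=28.0, lam=8.0, draw_loss=lambda g: 2.0 * g.pareto(1.63)))
```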
and
$$\lim_{u \to \infty} e^{Ru} \, \psi(u) = C. \qquad (5)$$
The small claims condition leading to the existence of R > 0 can be expressed
in terms of $E(e^{R Y_k})$ and typically holds for distribution functions with expo-
nentially bounded tails. The constant C can be calculated explicitly; see for
instance Grandell [47], Asmussen [3] and Rolski et al. [63] for details. For the
case $T = 0$, $\bar{T} = \infty$ and a process for which (4) holds, we can solve for $u$
in (3), obtaining
$$u = \frac{1}{R} \log \frac{1}{\varepsilon}.$$
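A sketch of this light-tailed (small claims) calculation: solving for the adjustment coefficient R numerically and reading off the capital u = (1/R) log(1/ε) implied by the exponential (Lundberg-type) bound. Exponential losses and all numerical values are assumptions made purely for illustration:

```python
import numpy as np
from scipy.optimize import brentq

# Small-claims setting: individual losses Y ~ Exp(mean mu), Poisson intensity lam, premium rate c.
# The adjustment coefficient R > 0 solves lam * (E[exp(R*Y)] - 1) = c * R.
lam, mu, c, eps = 10.0, 1.0, 12.0, 0.001          # illustrative parameters with c > lam*mu

def lundberg_eq(r):
    m = 1.0 / (1.0 - mu * r)                      # moment generating function of Exp(mean mu), r < 1/mu
    return lam * (m - 1.0) - c * r

R = brentq(lundberg_eq, 1e-9, 1.0 / mu - 1e-9)    # for exponential claims R = 1/mu - lam/c in closed form
u_eps = np.log(1.0 / eps) / R                     # capital from the exponential ruin estimate
print(R, 1.0 / mu - lam / c, u_eps)
```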
Hence the ruin probability $\psi(u)$ is determined by the tail of the loss distribution
$1 - G(x)$ for $x$ large, meaning that ruin (or a given limit excess) is caused
by typically one (or a few) large claim(s). For a more detailed discussion of
this path leading to ruin, see Embrechts et al. [36], Section 8.3 and the
references given there. The asymptotic estimate (6) holds under very general
conditions of heavy-tailedness, the simplest one being $1 - G(x) = x^{-\alpha} h(x)$
for $h$ slowly varying and $\alpha > 1$. In this case (6) becomes a power-law (polynomial) estimate for $\psi(u)$, referred to as (7) below.
A very general result of type (7) for the distribution of the ultimate
supremum of a random walk with a negative drift is derived in Mikosch
and Samorodnitsky [56]. Mathematically, these results are equivalent
to ruin estimation for a related risk model.
For all of these models an estimate of type (7) holds. Invariably, the derivation
is based on the so-called "one large claim" heuristics; see Asmussen [3],
p. 264. These heuristics may eventually play an important role in the analysis
of operational risk data.
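A small simulation sketch of these heuristics (compound Poisson surplus with Pareto losses; every number is illustrative): whenever ruin occurs, the single largest claim tends to account for a disproportionate share of the total claim amount at the ruin time:

```python
import numpy as np

rng = np.random.default_rng(7)
lam, c, u, horizon = 5.0, 20.0, 200.0, 400.0       # illustrative; premium c above the mean loss rate

ratios = []                                        # share of the largest claim in total claims at ruin
for _ in range(2_000):
    t, claims = 0.0, []
    while t < horizon:
        t += rng.exponential(1.0 / lam)
        if t >= horizon:
            break
        claims.append(rng.pareto(1.5) + 1.0)       # Pareto losses with tail index 1.5 (infinite variance)
        if u + c * t - sum(claims) < 0:            # ruin: surplus below zero at a claim instant
            ratios.append(max(claims) / sum(claims))
            break

print(len(ratios), np.mean(ratios) if ratios else None)
```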
changed process and define its corresponding infinite horizon ruin function:
$$\psi_{\Delta}(u) = P\Bigl( \sup_{t \ge 0} \bigl( L(\Delta(t)) - ct \bigr) > u \Bigr).$$
We then ask for conditions on the process parameters involved, as well as for
conditions on $(\Delta(t))$, under which
$$\lim_{u \to \infty} \frac{\psi_{\Delta}(u)}{\psi(u)} = 1, \qquad (8)$$
meaning that, asymptotically, ruin is of the same order of magnitude in the
time-changed (more realistic) process as it is for the original (more stylised)
process. These results can be interpreted as a kind of robustness charac-
terisation for general risk processes so that the polynomial ruin probability
estimate (7) holds.
estimate (7) holds. In Embrechts and Samorodnitsky [38], besides general
results for (8) to hold, specific examples are discussed. Motivated by the ex-
ample of transaction risk (see Table 1), Section 3 in the latter paper discusses
the case of mixing through Markov chain switching models, also referred to
as Markov-modulated or Markov renewal processes. In the context of operational risk, it is natural to consider a class of time-change processes $(\Delta(t))$
in which time runs at a different rate in different time intervals, depending
on the state of a certain underlying Markov chain. The Markov chain stays
a random amount of time in each state, with a distribution that depends
on that state. Going back to the transaction risk case, one can think of the
Markov chain states as resulting from an underlying market volume (inten-
sity) index. These changes in volumes traded may for instance have an effect
on back office errors. The results obtained in Embrechts and Samorodnit-
sky [38] may be useful to characterise interesting classes of loss processes
where ruin behaves as in (7). Recall from Figure 5 the fact that certain oper-
ational risk losses show periods of high (and low) intensity. Future dynamic
models for subcategories of operational risk losses will have to take these
characteristics into account. The discussion above mainly aims to show that
tools for such problems are at hand and await the availability of more detailed
loss databases.
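As a sketch of the kind of loss process this describes, the snippet below simulates a two-state Markov-modulated Poisson process in which the loss intensity depends on the state of an underlying chain (think of a market-volume index); the states, switching rates and intensities are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-state Markov-modulated Poisson loss process: the chain stays an exponential
# amount of time in each state, and the loss intensity depends on the state
# (state 0 = "quiet" market volume, state 1 = "busy"); all numbers are illustrative.
switch_rate = {0: 0.5, 1: 1.0}       # rate of leaving each state
intensity = {0: 2.0, 1: 20.0}        # loss intensity while in each state

def simulate(horizon=50.0):
    t, state, loss_times = 0.0, 0, []
    while t < horizon:
        stay = min(rng.exponential(1.0 / switch_rate[state]), horizon - t)  # sojourn in current state
        n = rng.poisson(intensity[state] * stay)                            # losses during that sojourn
        loss_times.extend(np.sort(t + rng.uniform(0.0, stay, size=n)))
        t += stay
        state = 1 - state
    return np.array(loss_times)

times = simulate()
print(len(times))   # bursts of losses during "busy" periods, sparse losses otherwise
```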
Some remarks should be made at this point. Within classical insurance
risk theory, a full solution linking heavy-tailedness of the claim distribution
to the long-tailedness of the corresponding ruin probability is discussed in
Asmussen [3]. Alternative models leading to similar distributional conclusions
can be found in the analysis of teletraffic data; see for instance Resnick and
Samorodnitsky [62] and Finkenstädt and Rootzén [40]. Whereas the basic
operational risk model in (1) may be of a more general nature than the ones
discussed above, support seems to exist for the supposition that under fairly
general conditions, the tail behaviour of $P(L^{T+1} > x)$ will be power-like. Furthermore, the notion of time-change may seem somewhat artificial. This technique has however been around in insurance mathematics for a long time and
is used to transform a complicated loss process into a more standard one; see
for instance Cramér [18] or Bühlmann [15]. Within finance, these techniques
were introduced through the fundamental work of Olsen and Associates on
ϑ-time; see Dacorogna et al. [21]. Further work has been done by Ané and
Geman [1], Geman et al. [42, 43] and more recently Barndorff-Nielsen and
Shephard [8]; they use time-change techniques to transform a financial time
series with randomness in the volatility to a standard Black-Scholes-Merton
model. As stated above, the situation is somewhat akin to the relationship
between a Brownian motion-based model (such as the Black-Scholes-Merton
model) and more recent models based on general semimartingales. It is a
well-known result, see Monroe [57], that any semimartingale can be written
as a time-changed Brownian motion.
Several procedures exist for numerically calculating (10) under a wide range
of conditions. These include recursive methods such as the Panjer-Euler
method for claim number distributions satisfying $P(N = k) = \bigl(a + \tfrac{b}{k}\bigr)\,P(N = k-1)$ for $k = 1, 2, \dots$ (see Panjer [58]), and Fast Fourier Transform methods
(see Bertram [12]). Grübel and Hermesmeier [48, 49] are excellent review papers containing further references. The actuarial literature contains numerous
publications on the subject; good places to start are Panjer and Willmot [59]
and Hogg and Klugman [51].
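A hedged sketch of the recursive approach for the compound Poisson case (a = 0, b = λ in the recursion above), on a severity distribution discretised to the positive integers; the numbers are illustrative and this is not the paper's own implementation:

```python
import numpy as np

def panjer_compound_poisson(lam, severity, n_points):
    """Distribution of L = Y_1 + ... + Y_N, N ~ Poisson(lam), for a severity on 1, 2, ..., m.

    Uses the recursion for P(N = k) = (a + b/k) P(N = k - 1) with a = 0, b = lam (Poisson case):
        g_s = (lam / s) * sum_{j=1..min(s,m)} j * f_j * g_{s-j},   g_0 = exp(-lam).
    """
    f = np.asarray(severity, dtype=float)        # f[j-1] = P(Y = j), j = 1..m
    m = f.size
    g = np.zeros(n_points + 1)
    g[0] = np.exp(-lam)
    for s in range(1, n_points + 1):
        jmax = min(s, m)
        j = np.arange(1, jmax + 1)
        g[s] = (lam / s) * np.sum(j * f[:jmax] * g[s - j])
    return g                                     # g[s] = P(L = s)

# illustrative use: lam = 3 losses per year, losses of size 1, 2 or 3 units
g = panjer_compound_poisson(3.0, severity=[0.5, 0.3, 0.2], n_points=50)
print(g.sum(), (np.cumsum(g) >= 0.99).argmax())  # total mass captured and the 99% quantile (in units)
```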
Finally, looking at (1), several aggregation operations are taking place, including the superposition of the different loss frequency processes $(N^{t,i})_{i=1,\dots,s}$
and the aggregation of the different loss size variables $\bigl(Y_k^{t,i}\bigr)_{k=1,\dots,N^{t,i},\, i=1,\dots,s}$.
For the former, techniques from the theory of point processes are available;
see for instance Daley and Vere-Jones [22]. The issue of dependence modelling
within and across operational risk loss types will no doubt play a crucial role;
copula techniques, as introduced in risk management in Embrechts et al. [37],
can no doubt be used here.
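A minimal sketch of the copula idea for two loss types: a Gaussian copula (the correlation and the Pareto-type margins are chosen arbitrarily for illustration) couples the two marginal loss distributions, after which the pooled loss and its quantiles can be examined:

```python
import numpy as np
from scipy.stats import norm, pareto, spearmanr

rng = np.random.default_rng(11)

# Gaussian copula linking two loss types; the correlation and the margins are illustrative.
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)
u = norm.cdf(z)                                   # copula sample on the unit square

# Pareto-type marginal loss distributions for the two types (shape parameters made up)
loss_type1 = pareto.ppf(u[:, 0], b=1.5)
loss_type2 = pareto.ppf(u[:, 1], b=2.5)

pooled = loss_type1 + loss_type2
print(spearmanr(loss_type1, loss_type2)[0], np.quantile(pooled, 0.99))
```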
5 Final Comments
As stated in the Introduction, tools from the realm of insurance as discussed
in this paper may well become relevant, conditional on the further devel-
opment and implementation of quantitative operational risk measurement
within the financial industry. Our paper aims at encouraging a better ex-
change of ideas between actuaries and risk managers. Even if one assumes
full replicability of operational risk losses within the several operational risk
subcategories, their interdependence will make detailed modelling difficult.
The theory presented in this paper is based on specific conditions and can
be applied in cases where testing has shown that these underlying assump-
tions are indeed fulfilled. The ongoing discussions around Basel II will show
at which level the tools presented will become useful. However, we strongly
doubt that a full operational risk capital charge can be based solely on sta-
tistical modelling.
Some of the tools introduced and mainly exemplified through their appli-
cation to the quantitative modelling of operational risk are no doubt useful
well beyond it, in more general risk management. Further research will have
to look more carefully at the risk management issues underlying the central
bank landscape. In particular, the issue of market liquidity under extreme
events, together with the more active management of foreign reserves and
the longer time horizon, will necessitate tools that complement the existing ones
in risk management for commercial banks. Insurance analytics as presented
in this paper, and discussed in more detail in Embrechts [33], will no doubt
form part of this.
References
[1] Ané, T. and Geman, H. (2000). Order Flow, Transaction Clock and Normality of Asset Returns. Journal of Finance, 55, pp. 2259-84.
[2] Artzner, P., Delbaen, F., Eber, J. M., and Heath, D. (1999). Coherent Measures of Risk. Mathematical Finance, 9, pp. 203-28.
[5] Asmussen, S., Binswanger, K., and Højgaard, B. (2000). Rare Events Simulation for Heavy-tailed Distributions. Bernoulli, 6(2), pp. 303-22.
[10] Batten, D. S. (1982). Central Banks' Demand for Foreign Reserves under Fixed and Floating Exchange Rates. Federal Reserve Bank of St. Louis.
[13] Blum, P., Dias, A., and Embrechts, P. (2002). The ART of Dependence Modelling: The Latest Advances in Correlation Analysis. In: Alternative Risk Strategies (ed. M. Lane). Risk Waters Group, London, pp. 339-56.
[14] Borio, C., Furfine, C., and Lowe, P. (2001). Procyclicality of the Financial System and Financial Stability: Issues and Policy Options. BIS Papers No. 1.
[19] Crouhy, M., Galai, D., and Mark, R. (2000). Risk Management. McGraw-Hill, New York.
[21] Dacorogna, M. M., Gençay, R., Müller, U. A., Olsen, R. B., and Pictet, O. V. (2001). An Introduction to High-frequency Finance. Academic Press.
[23] Daníelsson, J., Embrechts, P., Goodhart, C., Keating, C., Muennich, F., Renault, O., and Shin, H. S. (2001). An Academic Response to Basel II. Financial Markets Group, London School of Economics, Special Paper No. 130.
[24] Daykin, C. D., Pentikäinen, T., and Pesonen, M. (1994). Practical Risk Theory for Actuaries. Chapman and Hall/CRC, London.
[26] Does, R. J. M. M., Roes, K. C. B., and Trip, A. (1999). Statistical Process Control in Industry. Kluwer Academic.
[27] Duffy, P. (2002). Operational Risk. The Actuary, October 2002, pp. 24-25.
[29] Ebnöther, S., Vanini, P., McNeil, A. J., and Antolinez-Fehr, P. (2003). Operational Risk: A Practitioner's View. Journal of Risk, 5(3), pp. 1-15.
[34] Embrechts, P., Frey, R., and McNeil, A. J. (2004). Quantitative Risk Management: Concepts, Techniques and Tools. Book manuscript, ETH Zurich.
[35] Embrechts, P., Furrer, H., and Kaufmann, R. (2003). Quantifying Regulatory Capital for Operational Risk. Derivatives Use, Trading and Regulation, 9(3), pp. 217-33.
[36] Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance. Springer.
[42] Geman, H., Madan, D. B., and Yor, M. (2001). Time Changes for Lévy Processes. Mathematical Finance, 11(1), pp. 79-96.
[43] Geman, H., Madan, D. B., and Yor, M. (2002). Stochastic Volatility, Jumps and Hidden Time Changes. Finance and Stochastics, 6(1), pp. 63-90.
[50] Hartmann, P., Straetmans, S., and de Vries, C. G. (2001). Asset Market Linkages in Crisis Periods. Working Paper No. 71, European Central Bank, Frankfurt.
[54] McNeil, A. J. and Saladin, T. (1997). The Peaks over Thresholds Method for Estimating High Quantiles of Loss Distributions. Proceedings of the XXVIIth International ASTIN Colloquium, Cairns, Australia, pp. 23-43.
[63] Rolski, T., Schmidli, H., Teugels, J., and Schmidt, V. (1999). Stochastic Processes for Insurance and Finance. Wiley.