Modeling Very Large Losses

9 Pages · Posted: 8 May 2018

Henryk Gzyl

Instituto de Estudios Superiores de Administración (IESA)

Date Written: April 30, 2018

Abstract

In this paper, we present a simple probabilistic model for aggregating very large losses into a loss collection. This supposes that “standard” losses come in various possible sizes – small, moderate and large – which, fortunately, seem to occur with decreasing frequency. Standard modeling allows us to infer a probability distribution describing their occurrence. From the historical record, we know that very large losses do occur, albeit very rarely, yet they are not usually included in the available data sets. Such losses should be made part of the distribution for computation purposes. For example, to a bank they may be helpful in the computation of economic or regulatory capital, while to an insurance company they may be useful in the computation of premiums for losses due to catastrophic events. We develop a simple modeling procedure that allows us to include very large losses in a loss distribution obtained from moderately sized loss data. We say that a loss is large when it exceeds the value-at-risk (VaR) at a high confidence level. The original and extended distributions will have the same VaR but quite different values of tail VaR (TVaR).
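The effect described above can be illustrated with a minimal numerical sketch. The code below is not the paper's procedure; the lognormal body, the Pareto tail, the confidence level and all parameter values are assumptions made purely for illustration. It keeps a fitted "standard" loss distribution below its VaR and replaces the tail mass beyond the VaR with a heavier tail, so the VaR is preserved while the expected shortfall (TVaR) changes.

import numpy as np
from scipy import stats

# Hypothetical sketch: every parameter below is assumed for illustration only.
alpha = 0.99                                    # confidence level defining a "large" loss
body = stats.lognorm(s=1.0, scale=np.exp(10))   # assumed fitted "standard" severity
var_alpha = body.ppf(alpha)                     # VaR_alpha of the original distribution

# Extended model: keep the body below VaR_alpha (total mass alpha) and give the
# remaining mass (1 - alpha) to a heavier Pareto tail starting at VaR_alpha,
# so the VaR is unchanged while the tail expectation grows.
tail = stats.pareto(b=1.5, scale=var_alpha)     # assumed tail index > 1 (finite mean)

def sample_extended(n, rng=None):
    """Draw losses from the extended distribution (same VaR, heavier tail)."""
    if rng is None:
        rng = np.random.default_rng(0)
    u = rng.uniform(size=n)
    below = u < alpha
    out = np.empty(n)
    # body, conditioned to lie below VaR_alpha
    out[below] = body.ppf(rng.uniform(0.0, alpha, size=below.sum()))
    # very large losses drawn from the Pareto tail beyond VaR_alpha
    out[~below] = tail.rvs(size=(~below).sum(), random_state=rng)
    return out

losses = sample_extended(1_000_000)
emp_var = np.quantile(losses, alpha)                            # ~ var_alpha, unchanged
emp_es = losses[losses >= emp_var].mean()                       # TVaR / ES of extended model
orig_es = body.expect(lambda x: x, lb=var_alpha) / (1 - alpha)  # TVaR / ES of original model
print(f"VaR_{alpha}: original {var_alpha:,.0f}, extended {emp_var:,.0f}")
print(f"ES_{alpha}:  original {orig_es:,.0f}, extended {emp_es:,.0f}")

In this toy setup the empirical VaR of the extended model matches the original VaR, while its expected shortfall is substantially larger, which is the qualitative behavior described in the abstract.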

Keywords: modeling very large losses, loss distribution, value-at-risk (VaR), expected shortfall (ES)

Suggested Citation

Gzyl, Henryk, Modeling Very Large Losses (April 30, 2018). Journal of Operational Risk, Forthcoming. Available at SSRN: https://ssrn.com/abstract=3170965

Henryk Gzyl (Contact Author)

Instituto de Estudios Superiores de Administración (IESA)

Ave. IESA, San Bernardino
Caracas, 1010
Venezuela

HOME PAGE: http://www.iesa.edu.ve

