Making Density Forecasting Models Statistically Consistent

34 Pages Posted: 24 Jan 2006

Michael Carney

Trinity College (Dublin) - School of Business Studies

Pádraig Cunningham

Trinity College (Dublin) - School of Business Studies

Brian M. Lucey

Trinity Business School, Trinity College Dublin; Jiangxi University of Finance and Economics; Abu Dhabi University - College of Business Administration; Ho Chi Minh City University of Economics and Finance

Date Written: January 2006

Abstract

We propose a new approach to density forecast optimisation and apply it to Value-at-Risk estimation. Existing density forecasting models optimise the distribution of returns based solely on the predicted density at each observation. We argue that probabilistic predictions should be judged on more than this accuracy score, and that the statistical consistency of the probability estimates should also be optimised during training. Statistical consistency refers to the property that if a predicted density function assigns probability P to an event, the event should truly occur with probability P. We describe a quality score, based on the probability integral transform (Diebold et al., 1998b), that ranks probability density forecasts by statistical consistency. We then describe a framework that can optimise any density forecasting model against any set of objective functions. The framework uses a multi-objective evolutionary algorithm to determine a set of trade-off solutions known as the Pareto front of optimal solutions. Using this framework we develop an algorithm for optimising density forecasting models and implement it for GARCH (Bollerslev, 1986) and GJR (Glosten et al., 1993) models; we call the resulting models Pareto-GARCH and Pareto-GJR. To determine whether multi-objective optimisation of density forecasting models improves on standard GARCH and GJR estimation, we compare the resulting models empirically on a Value-at-Risk application. Our evaluation shows that the Pareto models produce superior results out-of-sample.
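The probability-integral-transform idea underlying the consistency score can be illustrated with a short sketch. If a sequence of predictive densities is correctly calibrated, the PIT values z_t = F_t(y_t) are i.i.d. uniform on (0,1), so one natural consistency score is the Kolmogorov-Smirnov distance of the PIT values from U(0,1). The Gaussian predictive form, the KS-based score, and the `pareto_front` helper below are illustrative assumptions for exposition, not the paper's exact formulation:

```python
import numpy as np
from scipy.stats import norm


def pit_values(y, mu, sigma):
    """Probability integral transform of observations y under Gaussian
    predictive densities N(mu_t, sigma_t^2) (an illustrative choice)."""
    return norm.cdf((y - mu) / sigma)


def consistency_score(z):
    """Kolmogorov-Smirnov distance of PIT values from U(0,1).

    Smaller is better: 0 would indicate a perfectly calibrated
    (statistically consistent) sequence of density forecasts.
    """
    z = np.sort(np.asarray(z))
    n = len(z)
    ecdf_hi = np.arange(1, n + 1) / n  # ECDF just after each point
    ecdf_lo = np.arange(0, n) / n      # ECDF just before each point
    return max(np.max(ecdf_hi - z), np.max(z - ecdf_lo))


def pareto_front(points):
    """Return the objective vectors (minimisation) that no other
    vector dominates, i.e. the Pareto front of trade-off solutions."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front
```

A multi-objective optimiser along the lines described in the abstract would then search for parameter vectors whose objective pairs, e.g. (negative log-likelihood, consistency score), lie on the Pareto front, rather than minimising a single criterion as in standard GARCH estimation.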

Keywords: Density Forecasting, Statistical Consistency, Calibration, Value at Risk

JEL Classification: C15, C52

Suggested Citation

Carney, Michael and Cunningham, Pádraig and Lucey, Brian M., Making Density Forecasting Models Statistically Consistent (January 2006). IIIS Discussion Paper Series, Available at SSRN: https://ssrn.com/abstract=877629 or http://dx.doi.org/10.2139/ssrn.877629

Michael Carney

Trinity College (Dublin) - School of Business Studies ( email )

College Green
Dublin 2
Ireland

Pádraig Cunningham

Trinity College (Dublin) - School of Business Studies ( email )

College Green
Dublin 2
Ireland

Brian M. Lucey (Contact Author)

Trinity Business School, Trinity College Dublin ( email )

The Sutherland Centre, Level 6, Arts Building
Dublin 2
Ireland
+353 1 608 1552 (Phone)
+353 1 679 9503 (Fax)

Jiangxi University of Finance and Economics ( email )

South Lushan Road
Nanchang, Jiangxi 330013
China

Abu Dhabi University - College of Business Administration ( email )

PO Box 59911
Abu Dhabi, Abu Dhabi 59911
United Arab Emirates

Ho Chi Minh City University of Economics and Finance ( email )

59C Nguyen Dinh Chieu
6th Ward, District 3
Ho Chi Minh City, Ho Chi Minh 70000
Vietnam