On the Selection of Error Measures for Comparisons Among Forecasting Methods
Journal of Forecasting, Vol. 14, pp. 67-71, 1995
Abstract
Clements and Hendry (1993) proposed the Generalized Forecast Error Second Moment (GFESM) as an improvement over the Mean Square Error for comparing forecasting performance across data series. They based their conclusion on the fact that rankings based on the GFESM remain unaltered when the series are linearly transformed. In this paper, we argue, first, that this evaluation ignores other important criteria; second, that their conclusions were illustrated by a simulation study whose relationship to real data was not obvious; and third, that prior empirical studies show the Mean Square Error to be an inappropriate basis for comparisons among forecasting methods. This undermines the claims made for the GFESM.
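The scale-dependence concern behind this argument can be illustrated with a minimal sketch (not code or data from the paper): when Mean Square Error is averaged across series to rank methods, the ranking can flip if one series is linearly rescaled (for example, a change of units), whereas a unit-free measure such as the mean absolute percentage error (MAPE) yields the same ranking either way. All series values, forecasts, and method names below are hypothetical.

```python
import numpy as np

def mse(y, f):
    """Mean Square Error: carries the (squared) units of the series."""
    return float(np.mean((y - f) ** 2))

def mape(y, f):
    """Mean Absolute Percentage Error: unit-free."""
    return float(np.mean(np.abs((y - f) / y)) * 100)

# Two hypothetical series with very different scales, forecast by methods A and B.
series = {
    "series_1": {"actual": np.array([10.0, 12.0]),
                 "A": np.array([12.0, 14.0]),      # errors of 2 (large in % terms)
                 "B": np.array([10.5, 12.5])},     # errors of 0.5 (small in % terms)
    "series_2": {"actual": np.array([1000.0, 1200.0]),
                 "A": np.array([1020.0, 1220.0]),  # errors of 20 (small in % terms)
                 "B": np.array([1100.0, 1300.0])}, # errors of 100 (large in % terms)
}

def average_errors(series, rescale_series_2=1.0):
    """Average each error measure over the series for methods A and B."""
    totals = {"A": {"mse": 0.0, "mape": 0.0}, "B": {"mse": 0.0, "mape": 0.0}}
    for name, data in series.items():
        scale = rescale_series_2 if name == "series_2" else 1.0
        y = data["actual"] * scale
        for method in ("A", "B"):
            f = data[method] * scale
            totals[method]["mse"] += mse(y, f) / len(series)
            totals[method]["mape"] += mape(y, f) / len(series)
    return totals

# Original units, then series_2 expressed in thousands (a linear transformation).
for scale in (1.0, 0.001):
    t = average_errors(series, rescale_series_2=scale)
    print(f"series_2 scale={scale}: "
          f"avg MSE  A={t['A']['mse']:.4f}, B={t['B']['mse']:.4f}; "
          f"avg MAPE A={t['A']['mape']:.2f}%, B={t['B']['mape']:.2f}%")
```

In this constructed example, method A is ranked better by average MSE in the original units (the large-scale series dominates), but worse once series_2 is rescaled, while the MAPE ranking is unchanged; this is the kind of cross-series scale dependence that motivates questioning MSE as a basis for comparison.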
Keywords: Accuracy; Forecast evaluation; Loss functions