A Commentary on Error Measures
Posted: 8 Feb 2005
Is the choice of an error measure to identify the most accurate forecasting method a question of personal taste? It appears that this may be the case, although the papers by Armstrong and Collopy and by Fildes argue that it should not be. Carbone and Armstrong (1982) found that Root Mean Square Error (RMSE) was the most preferred measure of forecast accuracy. This is despite the fact that it is widely accepted that unit-free measures are necessary for comparisons among forecasting methods, and RMSE is not unit-free. Mean Absolute Percentage Error (MAPE) was the most widely used unit-free accuracy measure, but it is relevant only for ratio-scaled data (as Armstrong and Collopy point out, this is not a problem with most economic and demographic data) and is consistent with a loss function linear in percentage errors, not absolute errors. This may not be appropriate in some forecasting applications.

As a rough check on the relevance of these findings to my own area, population forecasting, I surveyed seventeen papers dealing with population forecasts (national, state, county, and city). See the papers referenced in Ahlburg (1987) and Land (1986) and the papers by Ahlburg (1982), Smith (1987), and Smith and Sincich (1991). I found that ten used MAPE, four used RMSE, three used Root Mean Square Percentage Error (RMSPE), and three used Theil's U (four used several measures). None justified the choice of error measure.

Given this state of affairs, Armstrong and Collopy and Fildes set out to assist the forecaster in the task of choosing the most appropriate measure of forecast accuracy for selecting the best forecasting method.
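To make the contrast between the measures concrete, here is a minimal sketch of the three most common ones from the survey above, using their standard textbook definitions (Theil's U is omitted because it has several published variants). Note how RMSE carries the units of the data while MAPE and RMSPE are unit-free but undefined when an actual value is zero:

```python
import math

def rmse(actual, forecast):
    """Root Mean Square Error: scale-dependent, expressed in the units of the data."""
    n = len(actual)
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)

def mape(actual, forecast):
    """Mean Absolute Percentage Error: unit-free, but requires ratio-scaled
    data with no zero actuals; penalizes errors linearly in percentage terms."""
    n = len(actual)
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / n

def rmspe(actual, forecast):
    """Root Mean Square Percentage Error: unit-free, but penalizes large
    percentage errors more heavily than MAPE does."""
    n = len(actual)
    return 100 * math.sqrt(sum(((a - f) / a) ** 2 for a, f in zip(actual, forecast)) / n)

# Illustrative populations (in thousands) and forecasts -- hypothetical data.
actual = [100.0, 200.0, 400.0]
forecast = [110.0, 190.0, 420.0]
print(rmse(actual, forecast), mape(actual, forecast), rmspe(actual, forecast))
```

Because RMSE is scale-dependent, multiplying both series by a constant changes it proportionally, whereas MAPE and RMSPE are unchanged; this is why unit-free measures are preferred for comparisons across series.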
Keywords: Forecasting, error measures, Root Mean Square Error, Mean Absolute Percentage Error