On the Selection of Forecasting Models
60 Pages Posted: 15 Mar 2003
Date Written: February 2003
It is standard in applied work to select forecasting models by ranking candidate models by their prediction mean squared error (PMSE) in simulated out-of-sample (SOOS) forecasts. Alternatively, forecast models may be selected using information criteria (IC). We compare the asymptotic and finite-sample properties of these methods in terms of their ability to minimize the true out-of-sample PMSE, allowing for possible misspecification of the forecast models under consideration. We first study a covariance stationary environment. We show that under suitable conditions the IC method will be consistent for the best approximating model among the candidate models. In contrast, under standard assumptions the SOOS method will select overparameterized models with positive probability, resulting in excessive finite-sample PMSEs. We also show that in the presence of unmodelled structural change both methods will be inadmissible in the sense that they may select a model with strictly higher PMSE than the best approximating model among the candidate models.
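The two selection methods contrasted in the abstract can be illustrated with a minimal numerical sketch. The code below is not the paper's formal setup: the AR(1) data-generating process, the candidate order set 1..4, the sample split, and all function names are illustrative assumptions. It ranks candidate AR orders once by the Schwarz (BIC) information criterion and once by recursive pseudo-out-of-sample MSE.

```python
import numpy as np

def fit_ar(y, p):
    """OLS estimates of an AR(p) with intercept; returns (beta, residuals)."""
    n = len(y)
    # Regressor matrix: rows t = p..n-1, columns [1, y_{t-1}, ..., y_{t-p}].
    X = np.column_stack([np.ones(n - p)] +
                        [y[p - j - 1:n - j - 1] for j in range(p)])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return beta, y[p:] - X @ beta

def bic(y, p):
    """Schwarz information criterion for an AR(p) fitted to the full sample."""
    _, resid = fit_ar(y, p)
    T = len(resid)
    return T * np.log(np.mean(resid**2)) + (p + 1) * np.log(T)

def soos_mse(y, p, R):
    """Recursive simulated out-of-sample MSE of one-step AR(p) forecasts,
    using observations 0..R-1 as the initial estimation window."""
    errs = []
    for t in range(R, len(y)):
        beta, _ = fit_ar(y[:t], p)
        x = np.r_[1.0, y[t - p:t][::-1]]  # [1, y_{t-1}, ..., y_{t-p}]
        errs.append(y[t] - x @ beta)
    return np.mean(np.square(errs))

# Illustrative DGP (an assumption, not from the paper): a stationary AR(1),
# y_t = 0.5 y_{t-1} + e_t, nested by every candidate order 1..4.
rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()

orders = range(1, 5)
p_ic = min(orders, key=lambda p: bic(y, p))
p_soos = min(orders, key=lambda p: soos_mse(y, p, R=150))
print("IC choice:", p_ic, " SOOS choice:", p_soos)
```

Under the paper's stationary-environment result, the IC ranking is consistent for the best approximating order, while the SOOS ranking retains positive probability of picking an overparameterized order; repeating the simulation across many seeds would make that contrast visible in the selection frequencies.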
Keywords: Model selection, Forecast accuracy, Structural change, Information criteria, Simulated out-of-sample method
JEL Classification: C22, C52, C53