Robust Forecast Superiority Testing with an Application to Assessing Pools of Expert Forecasters
42 Pages · Posted: 10 Mar 2020 · Last revised: 24 Aug 2020
Date Written: August 24, 2020
Abstract
We develop a forecast superiority testing methodology that is robust to the choice of loss function. Following Jin, Corradi and Swanson (JCS, 2017), we rely on a mapping between forecast evaluation under generic loss functions and stochastic dominance principles. However, unlike the JCS tests, which are not uniformly valid and have correct asymptotic size only under the least favorable case, our tests are uniformly asymptotically valid and non-conservative. These properties are derived by first establishing uniform convergence (over the error support) of HAC variance estimators and of their bootstrap counterparts, and by extending the asymptotic validity of generalized moment selection tests to the case of non-vanishing recursive parameter estimation error. Monte Carlo experiments indicate good finite sample performance of the new tests, and an empirical illustration suggests that prior forecast accuracy matters in the Survey of Professional Forecasters. In particular, at our longest forecast horizon (4 quarters ahead), selecting pools of expert forecasters based on prior accuracy yields ensemble forecasts that are superior to simple averages and medians formed from the entire panel of experts.
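To illustrate the kind of mapping the abstract refers to, the sketch below is a minimal Python example, not the authors' procedure: it checks whether forecaster A's absolute errors are (first-order) stochastically smaller than forecaster B's on a grid, and calibrates a sup-type statistic with a simple recentred moving-block bootstrap. That recentring corresponds to a least-favorable-case calibration, i.e. precisely the conservative approach the paper improves upon with uniformly valid generalized moment selection, and it ignores parameter estimation error. All names (superiority_test, block length, grid size) are illustrative assumptions.

```python
import numpy as np


def block_bootstrap_indices(n, block_len, rng):
    """Indices for a circular moving-block bootstrap of length n."""
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n, size=n_blocks)
    idx = (starts[:, None] + np.arange(block_len)[None, :]) % n
    return idx.ravel()[:n]


def superiority_test(err_a, err_b, grid_size=100, n_boot=499,
                     block_len=4, seed=0):
    """
    Illustrative test of the null that forecaster A is weakly superior to B
    for every loss function increasing in |error|, approximated here by
    first-order stochastic dominance: F_|A|(u) >= F_|B|(u) for all u.
    Returns the sup statistic and a block-bootstrap p-value.
    """
    rng = np.random.default_rng(seed)
    a, b = np.abs(np.asarray(err_a)), np.abs(np.asarray(err_b))
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    grid = np.linspace(0.0, max(a.max(), b.max()), grid_size)

    def cdf_gap(x, y):
        # F_|B|(u) - F_|A|(u): under the null of A's superiority this
        # difference is <= 0 at every grid point.
        fa = (x[:, None] <= grid[None, :]).mean(axis=0)
        fb = (y[:, None] <= grid[None, :]).mean(axis=0)
        return fb - fa

    stat = np.sqrt(n) * cdf_gap(a, b).max()

    boot_stats = np.empty(n_boot)
    for j in range(n_boot):
        idx = block_bootstrap_indices(n, block_len, rng)
        # Recentre at the sample gap: a least-favorable-case calibration.
        d = cdf_gap(a[idx], b[idx]) - cdf_gap(a, b)
        boot_stats[j] = np.sqrt(n) * d.max()
    p_value = (boot_stats >= stat).mean()
    return stat, p_value
```

A call such as superiority_test(errors_a, errors_b) returns the sup statistic and its bootstrap p-value; a small p-value is evidence against the superiority of forecaster A under this simplified setup.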