Bias Reduction and Likelihood Based Almost-Exactly Sized Hypothesis Testing in Predictive Regressions using the Restricted Likelihood
48 Pages Posted: 10 Sep 2013
Date Written: August 2009
Difficulties with inference in predictive regressions are generally attributed to strong persistence in the predictor series. We show that the major source of the problem is actually the nuisance intercept parameter and propose basing inference on the Restricted Likelihood, which is free of such nuisance location parameters and also possesses small curvature, making it suitable for inference. The bias of the Restricted Maximum Likelihood (REML) estimates is shown to be approximately 50% less than that of the OLS estimates near the unit root, without loss of efficiency. The error in the chi-square approximation to the distribution of the REML based Likelihood Ratio Test (RLRT) for no predictability is shown to be (3/4 − ρ²)n⁻¹(G₃(·) − G₁(·)) + O(n⁻²), where |ρ| < 1 is the correlation of the innovation series and Gₛ(·) is the c.d.f. of a χ²ₛ random variable. This very small error, which is free of the AR parameter, suggests that the RLRT for predictability has very good size properties even when the regressor is strongly persistent. The Bartlett corrected RLRT achieves an O(n⁻²) error. Power under local alternatives is obtained, and extensions to more general univariate regressors and vector AR(1) regressors, where OLS may no longer be asymptotically efficient, are provided. In simulations the RLRT maintains size well, is robust to non-normal errors, and has uniformly higher power than the Jansson-Moreira test, with gains that can be substantial. The Campbell-Yogo Bonferroni Q test is found to have size distortions and can be significantly oversized.
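As a rough numerical illustration (not code from the paper), the leading error term of the chi-square approximation quoted in the abstract, read here as (3/4 − ρ²)n⁻¹(G₃(x) − G₁(x)), can be evaluated directly; the function name and the specific values of x, ρ, and n below are illustrative choices, not quantities from the paper:

```python
# Sketch: evaluate the abstract's stated leading-order error of the
# chi-square(1) approximation to the RLRT null distribution,
# (3/4 - rho^2) * n^{-1} * (G3(x) - G1(x)),
# where G_s is the c.d.f. of a chi-square random variable with s d.o.f.
from scipy.stats import chi2

def rlrt_chisq_error(x, rho, n):
    """Leading-order approximation error at point x (illustrative helper)."""
    return (3 / 4 - rho**2) / n * (chi2.cdf(x, df=3) - chi2.cdf(x, df=1))

# The term is O(1/n) and does not involve the AR parameter of the regressor,
# consistent with the abstract's claim of very good size properties.
for n in (50, 200, 1000):
    print(n, rlrt_chisq_error(x=3.84, rho=0.9, n=n))
```

Even for moderate n and strong innovation correlation, the term is tiny in absolute value, which is the sense in which the RLRT is "almost exactly sized" before any Bartlett correction.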
Keywords: Bartlett correction, likelihood ratio test, curvature