Tests for Differences between Least Squares and Robust Regression Coefficients
48 Pages
Posted: 17 Aug 2021
Date Written: October 5, 2012
At present there is no well-accepted test for determining whether or not robust regression parameter estimates are significantly different from least squares estimates. Thus, we propose and demonstrate the efficacy of two Wald-like statistical tests for this purpose, using MM-estimators for robust regression. The two tests have corresponding appropriate null and alternative hypotheses, the latter of which allows for bias in both the least squares and robust estimates. The tests are designed to detect significant differences between least squares and robust estimates arising both from inefficiency of least squares under fat-tailed non-normality and from large biases of least squares relative to robust regression coefficient estimators under bias-inducing distributions. The asymptotic normality of the test statistics is established, and the finite-sample level and power of the two tests are evaluated by Monte Carlo simulation, which yields promising test performance results. Our simulation studies focus on two well-known MM-estimator loss functions: the bisquare loss function due to J. W. Tukey, as introduced in Beaton and Tukey (1974), and an optimal bias-robust loss function due to Yohai and Zamar (1997). While one of the two tests is preferred if only one test is used, we show that using both tests can be helpful in diagnosing the reason for rejection of a null hypothesis.
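The paper's actual test statistics are not given in the abstract, but the kind of comparison it describes can be sketched. The following minimal Python sketch, written under stated assumptions, fits least squares and a bisquare M-estimate via iteratively reweighted least squares (a simplification of a full MM-estimator, which would start from a high-breakdown S-estimate of scale), then forms a Hausman-style Wald-like statistic on the coefficient difference. The variance approximation `V_ols - V_rob` is an assumption of this sketch, not the paper's asymptotic variance, and all function names are illustrative.

```python
import numpy as np

C = 4.685  # Tukey bisquare tuning constant (about 95% normal efficiency)

def bisquare_psi(r, c=C):
    """Tukey bisquare psi function: r*(1-(r/c)^2)^2 for |r| <= c, else 0."""
    u = (r / c) ** 2
    return np.where(u <= 1.0, r * (1.0 - u) ** 2, 0.0)

def bisquare_psi_prime(r, c=C):
    """Derivative of psi: (1-u)*(1-5u) with u=(r/c)^2 for |r| <= c, else 0."""
    u = (r / c) ** 2
    return np.where(u <= 1.0, (1.0 - u) * (1.0 - 5.0 * u), 0.0)

def robust_fit(X, y, c=C, max_iter=200, tol=1e-10):
    """Bisquare M-estimate by IRLS, with scale fixed at the normalized MAD of
    OLS residuals -- a simplification relative to a true MM-estimator."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    res = y - X @ beta
    s = 1.4826 * np.median(np.abs(res - np.median(res)))
    for _ in range(max_iter):
        r = (y - X @ beta) / s
        rr = np.where(np.abs(r) < 1e-12, 1e-12, r)  # guard the 0/0 limit
        w = bisquare_psi(rr, c) / rr                # IRLS weights psi(r)/r
        Xw = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta, s

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
eps = rng.standard_t(df=3, size=n)   # fat-tailed errors
y = 1.0 + 2.0 * x + eps
y[:25] -= 15.0                       # 5% vertical outliers bias the OLS intercept

# OLS fit with the classical covariance estimate
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
res_ols = y - X @ beta_ols
XtX_inv = np.linalg.inv(X.T @ X)
V_ols = (res_ols @ res_ols / (n - 2)) * XtX_inv

# Robust fit with the standard M-estimator asymptotic covariance
beta_rob, s = robust_fit(X, y)
r = (y - X @ beta_rob) / s
V_rob = s**2 * np.mean(bisquare_psi(r) ** 2) \
    / np.mean(bisquare_psi_prime(r)) ** 2 * XtX_inv

# Hausman-style Wald-like statistic on the coefficient difference; the
# variance V_ols - V_rob is a crude stand-in, not the paper's construction.
d = beta_ols - beta_rob
W = float(d @ np.linalg.pinv(V_ols - V_rob) @ d)
print("OLS coefficients:   ", beta_ols)
print("robust coefficients:", beta_rob)
print("Wald-like W =", round(W, 2), "(chi-square(2) 5% critical value: 5.99)")
```

Under this contaminated design the robust fit stays near the true coefficients (1, 2) while the OLS intercept is dragged downward, which is exactly the kind of discrepancy such a test is meant to flag.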
Keywords: robust regression, M-estimator, MM-estimator, redescending psi function, bias test
JEL Classification: C13, C61