55 Pages Posted: 14 Jul 2017
Date Written: June 30, 2017
Research into predictive accuracy testing remains at the forefront of the forecasting field. One reason for this is that rankings of predictive accuracy across alternative models, which under misspecification are loss function dependent, are widely used to assess the usefulness of econometric models. A second reason, which corresponds to the objective of this paper, is that researchers are currently devoting considerable attention to so-called big data, and to the new (and old) tools available for analyzing such data. One objective in this field is to assess whether big data lead to improvements in forecast accuracy. In this survey paper, we discuss some of the latest (and most interesting) methods currently available for analyzing and utilizing big data when the objective is improved prediction. Our discussion includes a summary of various so-called dimension reduction, shrinkage, and machine learning methods, as well as a summary of recent tools that are useful for ranking the prediction models associated with these methods. We also provide a brief empirical illustration of big data in action, showing that big data are indeed useful when predicting the term structure of interest rates.
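To make the shrinkage methods mentioned above concrete, the following is a minimal illustrative sketch (not taken from the paper; all data and parameter values are synthetic assumptions) of why a penalized estimator such as the lasso can help in a "big data" setting where many candidate predictors are available but only a few are informative:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

# Synthetic "big data" setting: many candidate predictors, few informative.
# All values here are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
n, p = 100, 50                      # 100 observations, 50 candidate predictors
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [1.5, -2.0, 1.0]         # only the first 3 predictors matter
y = X @ beta + 0.5 * rng.standard_normal(n)

# Shrinkage (lasso) vs. unpenalized least squares
lasso = Lasso(alpha=0.1).fit(X, y)  # alpha is an illustrative penalty choice
ols = LinearRegression().fit(X, y)

# The lasso shrinks most irrelevant coefficients exactly to zero,
# while OLS retains (noisy) nonzero estimates for all 50 predictors.
n_nonzero_lasso = int(np.sum(np.abs(lasso.coef_) > 1e-8))
n_nonzero_ols = int(np.sum(np.abs(ols.coef_) > 1e-8))
print(n_nonzero_lasso, n_nonzero_ols)
```

The sparser lasso fit is the basic mechanism by which shrinkage can improve out-of-sample forecast accuracy when the predictor set is large relative to the sample.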
Keywords: Convex loss function, Empirical processes, Forecast superiority, General loss function, Big data, Shrinkage, Latent factors
JEL Classification: C12, C22, C53
Suggested Citation:
Swanson, Norman R. and Xiong, Weiqi, Big Data Analytics in Economics: What Have We Learned So Far, and Where Should We Go from Here? (June 30, 2017). Available at SSRN: https://ssrn.com/abstract=2998299