Econometrics at Scale: Spark Up Big Data in Economics
49 Pages Posted: 18 Aug 2018 Last revised: 13 Feb 2020
Date Written: February 6, 2020
Abstract
This paper provides an overview of how to use “big data” for economic research. We investigate the performance and ease of use of different Spark applications running on a distributed file system to enable the handling and analysis of data sets that were previously unusable due to their size. More specifically, we explain how to use Spark to (i) explore big data sets that exceed the memory of retail-grade computers and (ii) run typical econometric tasks, including microeconometric, panel data, and time series regression models, which are prohibitively expensive to evaluate on stand-alone machines. By bridging the gap between the abstract concept of Spark and ready-to-use examples that can easily be altered to suit the researcher's needs, we provide economists, and social scientists more generally, with the theory and practice needed to handle the ever-growing data sets available. The ease of reproducing the examples in this paper makes this guide a useful reference for researchers with a limited background in data handling and distributed computing.
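To give a flavor of the kind of workflow the abstract describes, the following is a minimal PySpark sketch, not taken from the paper: it reads a data set too large for a single machine's memory and fits a distributed linear regression with Spark ML. The file path and the column names y, x1, and x2 are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

# Start a Spark session; on a cluster the master URL and resources would differ.
spark = SparkSession.builder.appName("econometrics-example").getOrCreate()

# Read a large CSV from a distributed file system (path is a placeholder).
df = spark.read.csv("hdfs:///data/panel.csv", header=True, inferSchema=True)

# Spark ML expects regressors assembled into a single feature vector.
assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
model_input = assembler.transform(df).select("y", "features")

# Fit a linear regression, distributed across the cluster's workers.
lr = LinearRegression(featuresCol="features", labelCol="y")
fit = lr.fit(model_input)
print(fit.coefficients, fit.intercept)

spark.stop()
```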
Keywords: Time Series Econometrics, Distributed Computing, Apache Spark
JEL Classification: C53, C55