Nailing Prediction: Experimental Evidence on Tools and Skills in Predictive Model Development
74 Pages · Posted: 14 Dec 2022 · Last revised: 15 Dec 2022
Date Written: December 2, 2022
Abstract
As information technology (IT) and artificial intelligence (AI) continue to reshape workplace productivity, a principal puzzle has emerged: why does technology substitute for some skills while complementing others? To answer this question, we propose a framework centered on tools, which are specific implementations of a technology. The framework allows us to distinguish between low-level "baseline" skills, which the tool automates, and high-level "derived" skills, which build on the tool's abstraction. We validate this framework through a field experiment built around a prediction competition. The experiment restricted access to software libraries for machine learning models and measured the impact of this restriction on the accuracy of participants' final predictive models. Beyond estimating and benchmarking a large treatment effect, we document significant heterogeneity in the treatment effect that depends strongly on the type of skill considered: a one-standard-deviation increase in derived skills raises the treatment effect by 62%, whereas a one-standard-deviation increase in baseline skills lowers it by 72%. Our results also show that the unrestricted group exhibits significantly lower variation in model accuracy than the restricted group, yielding more equal productive outcomes across the total population.
Keywords: Economics of AI, Predictive Model Development, Skills, Tools