Rethinking Probabilistic Prediction in the Wake of the 2016 U.S. Presidential Election
19 Pages. Posted: 17 Apr 2017
Date Written: April 15, 2017
To many statisticians and citizens, the outcome of the most recent U.S. presidential election represents a failure of data-driven methods on the grandest scale. This impression has led to much debate and discussion about how the election predictions went awry (Were the polls inaccurate? Were the models wrong? Did we misinterpret the probabilities?) and how they went right (perhaps the analyses were correct even though the predictions were wrong; that is just the nature of probabilistic forecasting). With this in mind, we analyze the election outcome with respect to a core set of effectiveness principles. Regardless of whether and how the election predictions were right or wrong, we argue that they were ineffective in conveying the extent to which the data were informative about the outcome and the level of uncertainty in making these assessments. Among other things, our analysis sheds light on the shortcomings of the classical interpretations of probability and its communication to consumers in the form of predictions. We present here an alternative approach, based on a notion of validity, which offers two immediate insights for predictive inference. First, the predictions are more conservative, arguably more realistic, and come with certain guarantees on the probability of an erroneous prediction. Second, our approach easily and naturally reflects the (possibly substantial) uncertainty about the model by outputting plausibilities instead of probabilities. Had these simple steps been taken by the popular prediction outlets, the election outcome may not have been so shocking.
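To give a rough sense of the distinction drawn above, the following sketch contrasts a single-model win probability with a plausibility computed as an upper envelope over a class of candidate models. This is an illustration only, not the paper's method: the normal model for the true vote margin, the polling-bias range, and all numerical values are assumptions made for the example.

```python
import math

def win_probability(poll_margin, bias, sigma=0.03):
    """Probability that candidate A wins, under an assumed normal model
    where the true margin ~ N(poll_margin - bias, sigma^2)."""
    mean = poll_margin - bias
    # P(true margin > 0) via the normal CDF
    return 0.5 * (1 + math.erf(mean / (sigma * math.sqrt(2))))

# A single model commits to one bias value and reports one probability.
single_model_prob = win_probability(0.03, bias=0.0)

# A plausibility instead treats the polling bias as uncertain and reports
# the upper envelope over all candidate models, so model uncertainty is
# carried through rather than averaged away.
candidate_biases = [b / 100 for b in range(-3, 4)]  # biases from -3% to +3%
plaus_win = max(win_probability(0.03, b) for b in candidate_biases)
plaus_lose = max(1 - win_probability(0.03, b) for b in candidate_biases)

print(round(single_model_prob, 3))
print(round(plaus_win, 3), round(plaus_lose, 3))
```

Note that the two plausibilities can sum to more than one: when the model is genuinely uncertain, both outcomes can be highly plausible at once, which is exactly the more conservative message a single probability cannot convey.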
Keywords: Interpretation of Probability, Plausibility, Prediction, Statistical Modeling, Validity