Extremizing and Anti-Extremizing in Bayesian Ensembles of Binary-Event Forecasts
29 Pages · Posted: 27 Mar 2017 · Last revised: 24 Apr 2021
Date Written: August 25, 2020
Probability forecasts of binary events are often gathered from multiple models or experts and averaged to provide inputs regarding uncertainty in important decision-making problems. Averages of well-calibrated probabilities are underconfident, and methods have been proposed to make them more extreme. To aggregate probabilities, we introduce a class of ensembles that are generalized additive models. These ensembles are based on Bayesian principles and can help us learn why and when extremizing is appropriate. Extremizing is typically viewed as shifting the average probability farther from one-half; we emphasize that it is more suitable to define extremizing as shifting it farther from the base rate. We introduce the notion of anti-extremizing to identify instances where it might be beneficial to make average probabilities less extreme.
Analytically, we find that our Bayesian ensembles often extremize the average forecast, but sometimes anti-extremize instead. On several publicly available datasets, we demonstrate that our Bayesian ensemble performs well and anti-extremizes in 18% to 73% of cases. It anti-extremizes much more often when the probabilities being aggregated bracket the base rate than when they do not, suggesting that bracketing is a promising indicator of when we should consider anti-extremizing.
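To make the abstract's distinction concrete, the following sketch contrasts extremizing with anti-extremizing relative to a base rate in probit space. This is an illustrative toy, not the paper's Bayesian ensemble: the forecasts, the base rate, and the scaling parameter `alpha` are hypothetical, and the paper's method learns when each behavior is appropriate rather than fixing `alpha` by hand.

```python
from statistics import NormalDist

def adjust_average(probs, base_rate, alpha):
    """Average forecasts in probit space, then scale the result's distance
    from the base rate: alpha > 1 extremizes (moves the average farther
    from the base rate), alpha < 1 anti-extremizes (moves it closer).
    Illustrative only; not the paper's fitted ensemble."""
    nd = NormalDist()
    z0 = nd.inv_cdf(base_rate)                        # base rate in probit space
    z_bar = sum(nd.inv_cdf(p) for p in probs) / len(probs)
    return nd.cdf(z0 + alpha * (z_bar - z0))          # back to a probability

forecasts = [0.55, 0.65, 0.60]   # hypothetical expert probabilities
base = 0.30                      # hypothetical base rate of the event

p_avg  = adjust_average(forecasts, base, 1.0)   # plain probit-average
p_ext  = adjust_average(forecasts, base, 1.5)   # extremized
p_anti = adjust_average(forecasts, base, 0.7)   # anti-extremized
```

Note that because all three forecasts sit above the base rate (no bracketing), extremizing pushes the aggregate upward past the plain average, while anti-extremizing pulls it back toward 0.30; with one-half as the reference instead of the base rate, the same `alpha` values would shift the aggregate differently, which is the definitional point the abstract makes.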
Keywords: Forecast aggregation; linear opinion pool; generalized linear model; extremizing and anti-extremizing; bracketing; probit ensemble