Extremizing and Anti-Extremizing in Bayesian Ensembles of Binary-Event Forecasts
28 Pages. Posted: 27 Mar 2017. Last revised: 8 Oct 2019.
Date Written: March 27, 2018
Probability forecasts of binary events are often gathered from multiple models and averaged to provide inputs regarding uncertainty in important decision-making problems. Averages of well-calibrated probabilities are underconfident, and methods have been proposed to make them more extreme. To aggregate probabilities, we introduce a large class of ensembles that are generalized additive models. These ensembles are based on Bayesian principles and can help us learn why and when extremizing is appropriate. Extremizing is typically viewed as shifting the average probability farther from one-half; we argue it is more suitable to define extremizing as shifting it farther from the base rate. We also introduce the notion of anti-extremizing, to learn whether it might sometimes be beneficial to make average probabilities less extreme. Analytically, we find that our Bayesian ensembles often extremize the average forecast but sometimes anti-extremize instead. On two publicly available datasets, we demonstrate that our Bayesian ensemble performs well and anti-extremizes in about 20% of the cases. It anti-extremizes much more often when the probabilities being aggregated bracket the base rate than when they do not, suggesting that bracketing is a promising indicator of when to consider anti-extremizing.
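To make the base-rate-centered notion of extremizing concrete, the sketch below is a minimal illustration (not the paper's exact ensemble): forecasts are mapped to probit scores, averaged, and the average score is pushed away from the base rate's score when a hypothetical tuning parameter `alpha` exceeds 1 (extremizing) or pulled toward it when `alpha` is below 1 (anti-extremizing). The function name and `alpha` are assumptions introduced for illustration only.

```python
from statistics import NormalDist

def aggregate_probit(probs, base_rate, alpha):
    """Illustrative probit-style aggregation (hypothetical helper).

    alpha > 1 extremizes the average forecast relative to the base rate;
    alpha < 1 anti-extremizes; alpha = 1 reproduces the probit average.
    """
    nd = NormalDist()  # standard normal: cdf is Phi, inv_cdf is Phi^{-1}
    z_base = nd.inv_cdf(base_rate)                       # base rate on probit scale
    z_bar = sum(nd.inv_cdf(p) for p in probs) / len(probs)  # average probit score
    # Shift the average score away from (or toward) the base rate's score.
    return nd.cdf(z_base + alpha * (z_bar - z_base))

forecasts = [0.6, 0.7, 0.8]   # hypothetical individual probability forecasts
p_avg = aggregate_probit(forecasts, base_rate=0.3, alpha=1.0)
p_ext = aggregate_probit(forecasts, base_rate=0.3, alpha=1.5)   # extremized
p_anti = aggregate_probit(forecasts, base_rate=0.3, alpha=0.5)  # anti-extremized
```

Here `p_ext` lies farther from the base rate of 0.3 than the plain probit average `p_avg`, while `p_anti` lies between the base rate and `p_avg`, matching the paper's distinction between extremizing and anti-extremizing.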
Keywords: Forecast aggregation; linear opinion pool; generalized linear model; extremizing and anti-extremizing; bracketing; probit ensemble