Getting more Wisdom out of the Crowd: The Case of Competence-Weighted Aggregates
49 Pages · Posted: 23 May 2022 · Last revised: 28 Sep 2022
Date Written: September 26, 2022
Abstract
This paper shows that group discussions can serve as an instrument to improve individuals’ calibration, which in turn strongly increases the accuracy of competence-weighted statistical aggregates. We conduct an experiment in which participants estimate quantities and report their self-perceived competence for various judgment problems. In addition, they engage in group discussions with other judges on unrelated judgment tasks. We find that, prior to the group discussions, judges’ self-perceived competence and their estimation accuracy are poorly aligned, causing competence weighting to perform worse than prediction markets and simple averaging. However, the information exchange facilitated by the group discussions improves judges’ calibration, raising the accuracy of competence-weighted aggregates on subsequent judgment problems to prediction-market levels and beyond.
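To make the aggregation methods concrete, a competence-weighted aggregate can be read as a weighted mean in which each judge’s estimate is scaled by their normalized self-reported competence, in contrast to the simple (unweighted) average. The Python sketch below is illustrative only: the function names, the competence scale, and the example values are assumptions for exposition, not the paper’s exact procedure.

```python
import numpy as np

def simple_average(estimates):
    """Unweighted wisdom-of-the-crowd aggregate: the plain mean of all estimates."""
    return float(np.mean(estimates))

def competence_weighted_average(estimates, competence):
    """Weight each judge's estimate by their self-reported competence.

    Competence ratings are normalized to sum to one, so judges who rate
    themselves as more competent contribute more to the aggregate.
    """
    weights = np.asarray(competence, dtype=float)
    weights = weights / weights.sum()
    return float(np.dot(weights, estimates))

# Hypothetical example: five judges estimate a quantity and rate their
# own competence on an assumed 1-7 scale.
estimates = [120.0, 95.0, 150.0, 110.0, 200.0]
competence = [6, 3, 5, 7, 1]

print(simple_average(estimates))                          # 135.0
print(competence_weighted_average(estimates, competence)) # pulled toward the high-competence judges
```

The accuracy gain from competence weighting hinges on calibration: if self-ratings track actual accuracy poorly, the weights amplify noise and the simple average can win, which is the pre-discussion pattern the paper reports.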
Keywords: estimation accuracy, wisdom of the crowd, calibration, competence weighting, prediction markets