Bounding Human Effort in Augmented Intelligence Systems
36 Pages · Posted: 13 Jan 2025 · Last revised: 12 Jan 2025
Date Written: January 11, 2025
Abstract
Augmented intelligence systems leverage human-algorithm collaboration to improve decision-making in a variety of settings. The performance of such systems hinges on both the algorithm's accuracy and the effort humans must expend to correct the algorithm's mistakes. Yet human correction effort has thus far been ignored in the design of augmented intelligence systems. In this work, we propose a framework that provably endows algorithms in augmented intelligence systems with bounds on the expected human effort required to correct their mistakes. We collaborate with a large social media firm to expand their advertising interest taxonomy using augmented intelligence, and show that bounding human correction effort (while keeping algorithmic accuracy fixed) significantly improves the overall performance of the augmented intelligence system by increasing taxonomists' decision-making speed and accuracy. More generally, our research reveals a novel human-centric dimension of optimization for the imperfect algorithms commonly embedded in augmented intelligence systems.
Keywords: Human-Algorithm Hybrids, Augmented Intelligence, Ad Interest Taxonomies