Addressing Automation Bias Through Verifiability
EWAF 2023: Proceedings of the 2nd European Workshop on Algorithmic Fairness
6 Pages Posted: 4 Aug 2023
Date Written: July 4, 2023
Abstract
The phenomenon of human bias finds a new facet in the hybrid human-machine interaction of today's digitized decision-making systems: automation bias describes the tendency of human decision-makers to place excessive trust in machine-generated decision proposals, sometimes against their better knowledge. Although human involvement in hybrid human-machine systems is the practical rule in institutionalized decision-making processes, as opposed to fully automated systems, it is not clear how such involvement can be operationalized in a safe, adequate, and legally compliant way. In its current legislated form, human interaction does not ensure meaningful human involvement; it also offers a systemic avenue for manufacturers and deployers of decision support systems to shirk responsibility. In this paper, we analyze the literature on human performance in automated systems and on automation bias, and identify verification behavior as the key variable ameliorating automation bias. Based on the empirical evidence for automation bias and its cognitive-behavioral correlates, we propose verifiability as a minimum necessary requirement for meaningful human involvement. We argue that verifiability can be subdivided into 1) the intrinsic verification complexity of a system, 2) factors relating to the verification propensity of a user, and 3) the contextual factors influencing verification.
Keywords: automation bias, human centered computing, algorithmic regulation, human-in-the-loop, verification