When Do Citizens Resist The Use of AI Algorithms in Public Policy? Theory and Evidence
65 Pages · Posted: 23 Jan 2023 · Last revised: 7 Feb 2025
Date Written: January 18, 2023
Abstract
In recent years, there has been a significant rise in the use of algorithmic decision-making systems (ADS) to assist or replace human decision-making in a wide range of policy areas such as policing, criminal sentencing, and social welfare assistance. How do citizens view the incorporation of this technology in guiding high-stakes decisions? I introduce a new theory to explain the conditions under which citizens view ADS as legitimate, fair, and accurate, and test it using data from original experiments embedded in a national U.S. survey. I show that across a wide range of policy domains, citizens are averse to the use of ADS in decisions that are seen as designed to sanction rather than to assist, and in decisions that require inferences about individuals rather than collectives. Evidence from a second experiment suggests that employing ADS in such contexts can significantly undermine the legitimacy of the policy actions they inform. Taken together, the study offers a framework for identifying where AI-based tools will be deemed appropriate and where they might trigger backlash, underscoring the importance of accounting for citizens’ values in AI development and implementation to maintain legitimacy and democratic accountability.
Keywords: AI; algorithmic decision-making; public policy; legitimacy; public opinion; experimental evidence