When Do Citizens Resist The Use of Algorithmic Decision-making in Public Policy? Theory and Evidence

60 Pages · Posted: 23 Jan 2023 · Last revised: 14 Apr 2023

Date Written: January 18, 2023

Abstract

In recent years, there has been a significant rise in the use of algorithmic decision-making systems (ADSs) to assist or replace human decision-making in a wide range of policy contexts, including decisions on policing, criminal sentencing, and social welfare assistance. How do citizens view the incorporation of this technology into high-stakes public policy decisions? I introduce a new theory to explain the conditions under which citizens view ADSs as legitimate, fair, and accurate, and test it using a series of original experiments embedded in a national U.S. survey. Using evidence on a wide range of decisions and policy domains, I show that citizens are averse to the use of ADSs in decisions that are seen as designed to sanction rather than to assist, and in decisions that require inferences about individuals rather than collectives. Evidence from a second experiment suggests that employing ADSs in such contexts can undermine the legitimacy of policy decisions. Overall, the theory and evidence I present provide novel insights into the way ADSs can be used in public policy and the political implications of this growing phenomenon.

Keywords: AI, algorithmic decision-making, public policy, legitimacy, public opinion, experimental evidence

Suggested Citation

Raviv, Shir, When Do Citizens Resist The Use of Algorithmic Decision-making in Public Policy? Theory and Evidence (January 18, 2023). Available at SSRN: https://ssrn.com/abstract=4328400 or http://dx.doi.org/10.2139/ssrn.4328400

Shir Raviv (Contact Author)

Columbia University

Northwest Corner, 550 W 120th St
New York City, NY 10027
United States
