Artificial Intelligence and the Problem of Autonomy
Notre Dame Journal on Emerging Technologies, 1 (2020), 210–250
41 pages. Posted: 16 September 2019. Last revised: 14 April 2020.
Date Written: April 7, 2020
Artificial intelligence (AI) systems are routinely said to operate autonomously, exposing gaps in regulatory regimes that assume the centrality of human actors. Yet surprisingly little attention has been given to what precisely is meant by “autonomy” and how it relates to those gaps. Driverless vehicles and autonomous weapon systems are the most widely studied examples, but related issues arise in algorithms that allocate resources or determine eligibility for programs in the private or public sector. This article develops a novel typology of autonomy that distinguishes three discrete regulatory challenges posed by AI systems: the practical difficulty of managing the risks associated with new technologies, the morality of certain functions being undertaken by machines at all, and the legitimacy gap that opens when public authorities delegate their powers to algorithms.
Keywords: artificial intelligence, autonomy, autonomous vehicles, autonomous weapon systems, algorithmic decision-making
JEL Classification: K24, O32, O34, O38