Artificial Intelligence and the Problem of Autonomy
Notre Dame Journal on Emerging Technologies, vol. 1, Forthcoming
41 Pages Posted: 16 Sep 2019
Date Written: September 9, 2019
Artificial intelligence (AI) systems are routinely said to operate autonomously, exposing gaps in regulatory regimes that assume the centrality of human actors. Yet surprisingly little attention has been given to what is meant by "autonomy" and how it relates to those gaps. Driverless vehicles and autonomous weapon systems are the most widely studied examples, but related issues arise in algorithms that allocate resources or determine eligibility for programs in the private or public sector. This article develops a novel typology of autonomy that distinguishes three discrete regulatory challenges posed by AI systems: the practical difficulty of managing the risks associated with new technologies, the morality of certain functions being undertaken by machines at all, and the legitimacy gap that opens when public authorities delegate their powers to algorithms.
Keywords: artificial intelligence, autonomy, autonomous vehicles, autonomous weapon systems, algorithmic decision-making