Artificial Intelligence and the Problem of Autonomy

Notre Dame Journal on Emerging Technologies, vol. 1, Forthcoming

NUS Law Working Paper No. 2019/016

41 Pages Posted: 16 Sep 2019

Simon Chesterman

National University of Singapore (NUS) - Faculty of Law

Date Written: September 9, 2019

Abstract

Artificial intelligence (AI) systems are routinely said to operate autonomously, exposing gaps in regulatory regimes that assume the centrality of human actors. Yet surprisingly little attention is given to precisely what is meant by “autonomy” and its relationship to those gaps. Driverless vehicles and autonomous weapon systems are the most widely studied examples, but related issues arise in algorithms that allocate resources or determine eligibility for programs in the private or public sector. This article develops a novel typology of autonomy that distinguishes three discrete regulatory challenges posed by AI systems: the practical difficulties of managing risk associated with new technologies, the morality of certain functions being undertaken by machines at all, and the legitimacy gap when public authorities delegate their powers to algorithms.

Keywords: artificial intelligence, autonomy, autonomous vehicles, autonomous weapon systems, algorithmic decision-making

Suggested Citation

Chesterman, Simon, Artificial Intelligence and the Problem of Autonomy (September 9, 2019). Notre Dame Journal on Emerging Technologies, vol. 1, Forthcoming; NUS Law Working Paper No. 2019/016. Available at SSRN: https://ssrn.com/abstract=3450540 or http://dx.doi.org/10.2139/ssrn.3450540

Simon Chesterman (Contact Author)

National University of Singapore (NUS) - Faculty of Law

469G Bukit Timah Road
Eu Tong Sen Building
Singapore, 259776
Singapore

HOME PAGE: www.SimonChesterman.com

Paper statistics: 111 downloads; 416 abstract views; download rank 250,966.