On the Morality of Artificial Agents

29 Pages
Posted: 20 May 2021

Luciano Floridi

Yale University - Digital Ethics Center; University of Bologna - Department of Legal Studies

J. W. Sanders

University of Oxford

Date Written: August 01, 2004

Abstract

Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations, for they can be conceived of as moral patients (entities that can be acted upon for good or evil) and as moral agents (entities that can themselves perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of the morality and the responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for a concept of moral agent that does not necessarily exhibit free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on ‘mind-less morality’, we are able to side-step that question and many of the concerns of Artificial Intelligence. A vital component of our approach is the ‘Method of Abstraction’ for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The Method of Abstraction is explained in terms of an ‘interface’: the set of features, or observables, available at a given LoA. Agenthood, and in particular moral agenthood, depends on the LoA adopted. Our criteria for agenthood, at a given LoA, are: interactivity (response to stimulus by change of state), autonomy (the ability to change state without stimulus) and adaptability (the ability to change the ‘transition rules’ by which state is changed). Morality may then be thought of as a ‘threshold’ defined on the observables in the interface that determines the LoA under consideration. An agent is morally good if all its actions respect that threshold, and morally evil if some action violates it. This view is particularly informative when the agent is a software or digital system and the observables are numerical. Finally, we review the consequences of our approach for Computer Ethics. In conclusion, the approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without having to display free will, emotions or mental states, and in social contexts, where systems such as organizations can play the role of moral agents. The primary ‘cost’ of this facility is the extension of the class of agents and moral agents to embrace AAs.
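The abstract's criteria lend themselves to a compact illustration. The following Python sketch is not from the paper: it is a minimal model, under assumed names and numbers, of an agent observed at a fixed LoA, with the three agenthood criteria (interactivity, autonomy, adaptability) marked in comments and morality treated as a numerical threshold on the observed states. Every identifier here (`Agent`, `step`, `adapt`, `morally_good`, the threshold of 10.0) is a hypothetical construction for illustration, not the authors' formalism.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Transition rule: maps (current state, optional stimulus) to a new state.
Transition = Callable[[float, Optional[float]], float]

@dataclass
class Agent:
    state: float            # the observable at this (assumed) LoA
    transition: Transition  # the current 'transition rule'

    def step(self, stimulus: Optional[float] = None) -> None:
        # Interactivity: the agent responds to a stimulus by changing state.
        # Autonomy: it can also change state when no stimulus is given.
        self.state = self.transition(self.state, stimulus)

    def adapt(self, new_rule: Transition) -> None:
        # Adaptability: the agent can change the very rule by which
        # its state changes.
        self.transition = new_rule

def morally_good(trace: List[float], threshold: float) -> bool:
    # Morality as a threshold on the numerical observables at this LoA:
    # morally good if every observed state respects the threshold,
    # morally evil if some action violates it.
    return all(s <= threshold for s in trace)

# Usage: an agent that drifts upward on its own (autonomy) and adds
# whatever stimulus it receives (interactivity).
agent = Agent(state=0.0, transition=lambda s, x: s + 1.0 + (x or 0.0))
trace = []
for stimulus in (None, 2.0, None):
    agent.step(stimulus)
    trace.append(agent.state)

print(morally_good(trace, threshold=10.0))  # True: no state crossed 10.0
```

Note that the threshold is defined on the observables of the chosen interface: describing the same system at a different LoA could yield a different trace, and hence a different moral evaluation, which is exactly the LoA-dependence the abstract emphasises.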

Keywords: Artificial agents, Computer ethics, Levels of abstraction, Moral responsibility

Suggested Citation

Floridi, Luciano and Sanders, J. W., On the Morality of Artificial Agents (August 01, 2004). Available at SSRN: https://ssrn.com/abstract=3848388 or http://dx.doi.org/10.2139/ssrn.3848388

Luciano Floridi (Contact Author)

Yale University - Digital Ethics Center (email)

85 Trumbull Street
New Haven, CT 06511
United States
203-432-6473 (Phone)

University of Bologna - Department of Legal Studies (email)

Via Zamboni 22
Bologna, BO 40100
Italy

HOME PAGE: http://www.unibo.it/sitoweb/luciano.floridi/en

J. W. Sanders

University of Oxford

Mansfield Road
Oxford, Oxfordshire OX1 4AU
United Kingdom

Paper statistics

Downloads: 820
Abstract Views: 2,815
Rank: 61,075