Ascribing Moral Responsibility for the Actions of Autonomous Weapons Systems – Taking a Moral Gambit
28 Pages Posted: 26 Sep 2022 Last revised: 23 Nov 2022
Date Written: September 25, 2022
In this article we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS. This is a moral responsibility attributed to individuals in a justified and fair way, and one which individuals accept as an assessment of their own moral character. We argue that, given the unpredictability of AWS, meaningful moral responsibility can only be discharged by human agents who are willing to take a moral gambit: they decide to design, develop, or deploy AWS despite the uncertainty about the effects an AWS may produce, hoping that unintended, unwanted, or unforeseen outcomes never occur, but also accepting that they will be held responsible should such outcomes occur. We argue that, while a moral gambit is permissible for the use of non-lethal AWS, this is not the case for the actions of lethal autonomous weapons systems.
Keywords: Artificial Intelligence, Autonomous Weapons Systems, Lethal Autonomous Weapons Systems, Meaningful Moral Responsibility, Moral Gambit, Moral Responsibility, Responsibility Gap
Suggested Citation