The Concept of 'The Human' in the Critique of Autonomous Weapons

14 Harvard National Security Journal (Forthcoming, 2023)



Kevin Jon Heller

University of Copenhagen (Centre for Military Studies)

Date Written: January 30, 2023

Abstract

The idea that using “killer robots” in armed conflict is unacceptable because they are not human is at the heart of nearly every critique of autonomous weapons. Some of those critiques are deontological, such as the claim that the decision to use lethal force requires a combatant to suffer psychologically and risk sacrifice, which is impossible for machines. Other critiques are consequentialist, such as the claim that autonomous weapons will never be able to comply with international humanitarian law (IHL) because machines lack human understanding and the ability to feel compassion.

This article challenges anthropocentric critiques of autonomous weapons systems (AWS). Such critiques, whether deontological or consequentialist, are uniformly based on a very specific concept of “the human” who goes to war: namely, the Enlightenment subject, who perceives the world accurately, reasons rationally, is impervious to negative emotions, and reliably translates thought into action. Decades of research in cognitive psychology indicate, however, that the Enlightenment subject does not exist. On the contrary, human decision-making is profoundly distorted by cognitive and social biases, negative emotions, and physiological limitations, particularly when humans find themselves in dangerous and uncertain situations like combat. Given those flaws, and in light of rapid improvement in sensor and AI technology, it is only a matter of time until autonomous weapons are able to comply with IHL better than human soldiers ever have or ever will.

The article is divided into five sections. Section I critiques deontological objections to autonomous weapons. It shows that those objections wrongly anthropomorphize AWS by assuming they “decide” on targets in a manner similar to humans, overstate the inclination of humans to think about the consequences of killing and to risk sacrificing themselves for others, and are predicated on a romanticized and anachronistic view of war in which most killing takes place face-to-face.

Sections II and III critique consequentialist objections to autonomous weapons that focus on the jus in bello. Section II addresses the common argument that IHL compliance requires human understanding, particularly the ability to discern the intentions of potential targets and to make fact-sensitive, context-dependent determinations. The section begins by demonstrating that such understanding is far less necessary to IHL than AWS critics assume. It then explains why, in those situations where understanding is necessary, well-documented limits on human decision-making undermine the idea that human soldiers are more likely to comply with IHL than autonomous weapons. The section ends by discussing why the concept of “meaningful human control” is an undesirable solution to the supposed problems of AWS and should give way to the superior concept of “meaningful human certification.”

Section III responds to the claim that autonomous weapons should be prohibited because machines cannot feel compassion, an emotion that is both normatively and legally required on the battlefield. It makes three arguments. The first is that compassion is irrelevant to IHL compliance. The second is that the potential benefits of compassion in combat are far outweighed by the costs of negative emotions such as stress and anger. And the third is that compassion can lead to negative outcomes in combat as well as positive ones.

Section IV focuses on international criminal law, addressing the argument that the non-human nature of autonomous weapons makes it difficult, if not impossible, to hold humans responsible for war crimes AWS may commit. The section shows not only that the problem of “accountability gaps” is significantly overstated, but also that there is no significant difference between human soldiers and autonomous weapons in terms of criminal responsibility.

Finally, Section V explores a consequentialist objection to autonomous weapons that focuses on the jus ad bellum: namely, that replacing human soldiers with non-human machines will reduce the number of casualties during an armed conflict, making it easier for democratic states to go to war. The section argues that this is the most persuasive objection to AWS, and one that is actually understated, because it ignores the potential for such weapons to minimize civilian casualties, another factor that affects a state’s willingness to use armed force. As the section notes, however, the jus ad bellum critique is less an objection to AWS than to modern warfare itself, because most of the weapons developed over the past century have had precisely the same effect.

Keywords: Autonomous Weapons, LAWS, AWS, Killer Robots, AI, IHL, Laws of War, LARS

Suggested Citation

Heller, Kevin Jon, The Concept of 'The Human' in the Critique of Autonomous Weapons (January 30, 2023). 14 Harvard National Security Journal (Forthcoming, 2023). Available at SSRN: https://ssrn.com/abstract=4342529 or http://dx.doi.org/10.2139/ssrn.4342529


