Making Autonomous Weapons Accountable: Command Responsibility for Computer-Guided Lethal Force in Armed Conflicts

Research Handbook on Remote Warfare, Edward Elgar Press, Jens David Ohlin ed., 2016, Forthcoming

Roger Williams Univ. Legal Studies Paper No. 166

28 Pages Posted: 21 Feb 2016

Peter Margulies

Roger Williams University School of Law

Date Written: February 19, 2016

Abstract

Autonomous weapons systems, in which a computer makes targeting decisions without specific human authorization, pose challenges for international humanitarian law (IHL). The most salient challenge is accountability for autonomous IHL violations. An autonomous weapons system (AWS) that violates IHL cannot be a defendant in a war crimes trial or a subject of military discipline. Moreover, accountability for IHL violations would be ill-served by human combatants who shrugged off their own role in an AWS’s IHL violations, lamely claiming to be “outside the loop” of the computer’s autonomous decisions. To fill the AWS accountability gap, this paper relies on the doctrine of command responsibility. A human in command should have responsibility for autonomous decisions, just as a commander is currently held responsible for an unreasonable failure to prevent a subordinate’s IHL violations. Holding commanders responsible for an AWS is a logical refinement of current law, since it imposes liability on an individual with power and access to information who benefits most concretely from the AWS’s capabilities in war-fighting.

Accountability requires what I call dynamic diligence, a three-pronged approach entailing a flexible human/machine interface, periodic assessment, and parameters tailored to IHL compliance. The model features a dedicated AWS command, staffed by officers familiar with the capabilities of autonomous weapons. A dynamic human/machine interface will not require human authorization or real-time monitoring of targeting, but must enable humans to override an AWS’s decisions. Dynamic assessment should include regular reviews of the AWS’s learning process, to ensure that an AWS in the field does not learn behavior that violates IHL. Dynamic parameters encourage interpretability of AWS targeting decisions: a substantive, verbal explanation, rather than hidden layers of computer calculations, will facilitate effective review. Constrained by dynamic diligence, a commander can harness an AWS’s freedom from flawed human emotions, while furnishing “meaningful human control” to ensure fidelity to IHL principles.

Suggested Citation

Margulies, Peter, Making Autonomous Weapons Accountable: Command Responsibility for Computer-Guided Lethal Force in Armed Conflicts (February 19, 2016). Research Handbook on Remote Warfare, Edward Elgar Press, Jens David Ohlin ed., 2016, Forthcoming; Roger Williams Univ. Legal Studies Paper No. 166. Available at SSRN: https://ssrn.com/abstract=2734900

Peter Margulies (Contact Author)

Roger Williams University School of Law ( email )

10 Metacom Avenue
Bristol, RI 02809
United States
