The Lawful Use of Autonomous Weapon Systems for Targeted Strikes (Part 3): Evaluating the Outer Limits
26 Pages. Posted: 26 Apr 2019
Date Written: February 28, 2019
Abstract
Lethal Autonomous Weapon Systems (LAWS) are, in essence, weapon systems that, once activated, can select and engage targets without further human intervention. While no such systems are currently fielded or officially part of any nation’s defence strategy, there is ample evidence that many States and defence contractors are developing LAWS for future deployment. The main international humanitarian law (IHL) problem posed by these weapon systems is that lethal action against specific targets may be taken by sensory hardware and control software, rather than by human operators exercising deliberative reasoning and judgment. In the case of targeted strikes, however, commanders and their battle staffs do select specific targets during the targeting cycle, restricting machine discretion to the choice of munition and the timing of weapons release. This ameliorates a significant concern often raised by ban proponents: the lack of meaningful human control over weapons autonomy.
Part 1 of this sub-series of three articles explained the concepts, advantages and technologies of LAWS, in order to lay down a basic understanding of how and why autonomous weapons (as opposed to remotely piloted systems) will be used in targeted strikes. Part 2 built on this by explaining the US/NATO Joint Targeting Cycles and demonstrating that their (largely) human-led processes afford ‘meaningful human control’ (MHC). The corollary is that this precludes any prohibition on autonomous targeted strikes within an armed conflict. Part 2 also considered the legal basis for specifically permitting autonomous targeted strikes in an armed conflict, drawing on potential legal transplants from the Convention on Cluster Munitions.
This Part 3 article discusses four problems which should arguably confine autonomous targeted strikes to ‘areas of active hostilities’, or ‘hot’ battlefields: one legal (human rights-based) and three non-legal, arising from the ‘non-obvious’ nature of drone strikes, namely psychological harms to civilians, agency-denial and potential international disorder. The article then considers whether there may be an exception to these limitations, in light of the rising use of social media by terrorist groups and recent official statements on the meaning of ‘imminence’ in ad bellum self-defence. In very narrow factual circumstances, it may thus be both lawful and militarily useful to deploy autonomous drones outside the immediate boundaries of an armed conflict (or ‘areas of active hostilities’).
This sub-series of three articles is the final part of a four-part series, which comprises: 1) assessing the sense and scope of autonomy; 2) whether and how LAWS can be designed and deployed in compliance with IHL; 3) issues relating to the explosive remnants of war; and 4) the lawful use of LAWS for targeted strikes (the current sub-series of three articles).