Banning Autonomous Killing
Chapter in: The American Way of Bombing: How Legal and Ethical Norms Change, Matthew Evangelista & Henry Shue, eds. Ithaca, NY: Cornell University Press (2013, Forthcoming)
30 Pages. Posted: 22 Aug 2013. Last revised: 5 Aug 2015.
Date Written: August 21, 2013
Abstract
Scientific research on fully autonomous weapons systems is moving rapidly. At the current pace of discovery, such systems will be available to military arsenals within a few decades, if not a few years. These systems operate through computer programs that both select and attack a target without human involvement once the program is activated. In light of the law governing resort to military force, relevant ethical considerations, and the practical experience of ten years of killing with unmanned systems (drones), the time is ripe to discuss a major multilateral treaty banning fully autonomous killing. Current legal and ethical principles presume a human conscience bearing on decisions to kill. Fully autonomous systems will have the capacity to remove the human conscience not only to an extreme distance from the target, as drones do now, but also to remove it from the target in time: the computer of a fully autonomous system may be programmed years before a lethal operation is carried out. Without nearer-term decisions by human beings, accountability becomes problematic, and without accountability, the capacity of law and ethics to restrain is lost.
Keywords: fully autonomous weapon systems, robot weapons, drones, arms control, lethal autonomous robotics (LARs), unmanned systems, international law
JEL Classification: K33