Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can
Kenneth Anderson
American University - Washington College of Law
Matthew C. Waxman
Columbia Law School
April 10, 2013
Stanford University, The Hoover Institution (Jean Perkins Task Force on National Security and Law Essay Series), 2013
American University, WCL Research Paper 2013-11
Columbia Public Law Research Paper 13-351
Public debate is heating up over the future development of autonomous weapon systems. Some concerned critics portray that future, often invoking science-fiction imagery, as a plain choice between a world in which those systems are banned outright and a world of legal void and ethical collapse on the battlefield. Yet an outright ban on autonomous weapon systems, even if it could be made effective, trades whatever risks autonomous weapon systems might pose in war for the real, if less visible, risk of failing to develop forms of automation that might make the use of force more precise and less harmful for civilians caught near it. Grounded in a more realistic assessment of technology — acknowledging what is known and what is yet unknown — as well as the interests of the many international and domestic actors involved, this paper outlines a practical alternative: the gradual evolution of codes of conduct based on traditional legal and ethical principles governing weapons and warfare.
This essay revises and significantly extends "Law and Ethics for Robot Soldiers," a short 2012 Policy Review article by Anderson & Waxman, also posted to SSRN at http://ssrn.com/abstract=2046375.
Number of Pages in PDF File: 33
Keywords: Law of war, laws of armed conflict, international humanitarian law, IHL, Human Rights Watch, HRW, robot, autonomous weapon system, autonomous lethal weapon, automation, autonomy, machine learning, artificial intelligence, weapons review, Article 36, indiscriminate, proportionality, distinction
JEL Classification: K33
Date posted: April 14, 2013; Last revised: July 3, 2014