Algorithmic Choice and Superior Responsibility: Closing the Gap Between Liability and Lethal Autonomy by Defining the Line Between Actors and Tools
41 Pages Posted: 29 Feb 2016
Date Written: 2015
Lethal Autonomous Weapons Systems (LAWS), those that select and engage targets without requiring human intervention or authorization, are posited to threaten the tenets of international humanitarian law, leading to increased international conflict and immoral or “unjust” decisions on the battlefield. A weapons system acting of its own accord, opponents argue, is a weapons system that severs the chain of command — where does the buck stop when an autonomous machine commits a war crime? Among the potential candidates for culpability (e.g., manufacturers, programmers, the State, commanders, or the machine itself), one legal construct should come to the foreground: superior responsibility. This doctrine lays the foundation for adhering to the conventions of international humanitarian law by allowing a commander to serve as a bridge between accountability and “use.” That a commander is held responsible for subordinates’ actions — including those actions committed by an autonomous machine — does not displace the bedrock principle of the superior’s role. The decentralized institution of war has long been fogged by the chasm between subordinates’ actions and superiors’ review. Lethal autonomy should not be deemed to alter this historic precept: actors (commanders), not tools (LAWS), remain culpable.
Keywords: Lethal Autonomous Weapons Systems, Accountability, Superior Responsibility, Preemptive Prohibition, Killer Robots