Amoral Machines, Or: How Roboticists Can Learn to Stop Worrying and Love the Law
20 Pages · Posted: 24 Feb 2017 · Last revised: 4 Feb 2019
Date Written: February 17, 2017
The media and academic dialogue surrounding high-stakes decision-making by robotics applications has been dominated by a focus on morality. But the tendency to do so while overlooking the role that legal incentives play in shaping the behavior of profit-maximizing firms risks “marginalizing the entire field” of robotics and rendering many of the deepest challenges facing today’s engineers utterly intractable. This Essay attempts both to halt this trend and to offer a course correction. Invoking Oliver Wendell Holmes’ canonical analogy of a “bad man...who cares nothing for...ethical rules,” it demonstrates why philosophical abstractions like the trolley problem — in their classic framing — provide a poor means of understanding the real-world constraints faced by robotics engineers. Using insights gleaned from the economic analysis of law, it argues that profit-maximizing firms designing autonomous decision-making systems will be less concerned with esoteric questions of right and wrong than with concrete questions of predictive legal liability. And until the conversation surrounding so-called “moral machines” is revised to reflect this fundamental distinction between morality and law, the thinking on this topic by philosophers, engineers, and policymakers alike will remain hopelessly mired. Step aside, roboticists — lawyers have got this one.
Keywords: robot, law, legal, lawyer, robotics, roboticist, machine, ethics, artificial, intelligence, AI, moral, morality, philosophy, roboethics, economics, Holmes, Hand, profit, maximize, maximization, trolley, problem, liability, autonomous, vehicle, self-driving, driverless, Google, Tesla, Waymo, Uber