Robot Criminals

45 Pages. Posted: 26 Aug 2018. Last revised: 11 Apr 2019.

Ying Hu

Yale University, Law School; National University of Singapore (NUS) - Faculty of Law

Date Written: March 7, 2018


When a robot harms humans, are there any grounds for holding it criminally liable for its misconduct? Yes, provided that the robot is capable of making, acting on, and communicating the reasons behind its moral decisions. If such a robot fails to observe the minimum moral standards that society requires of it, labeling it as a criminal can effectively fulfill criminal law’s function of censuring wrongful conduct and alleviating the emotional harm that may be inflicted on human victims.

Imposing criminal liability on robots does not absolve robot manufacturers, trainers, or owners of their own criminal liability; nor is robot liability rendered redundant by human liability, since it is possible that no human is sufficiently at fault in causing a robot to commit a particular morally wrongful action. Additionally, imposing criminal liability on robots might sometimes have significant instrumental value, such as helping to identify culpable individuals and serving as a self-policing device for individuals who interact with robots. Finally, treating robots that satisfy the above-mentioned conditions as moral agents appears much more plausible if we adopt a less human-centric account of moral agency.

Keywords: Robot, Legal Personhood, Criminal Law, Law and Technology

JEL Classification: K14, K39

Suggested Citation

Hu, Ying, Robot Criminals (March 7, 2018). 52 U. Mich. J. L. Reform 487 (2019). Available at SSRN.

Ying Hu (Contact Author)

Yale University, Law School

127 Wall Street
New Haven, CT 06511
United States

National University of Singapore (NUS) - Faculty of Law

469G Bukit Timah Road
Eu Tong Sen Building
Singapore, 259776
