Algorithms Acting Badly: A Solution from Corporate Law

53 Pages · Posted: 28 Feb 2020 · Last revised: 30 Jul 2020

Date Written: February 27, 2020


Sometimes algorithms do things that hurt people. Clearly, algorithms offer many social benefits in terms of efficiency and data analysis. But when they discriminate in lending, manipulate stock markets, or violate expectations of privacy, they can injure us on a massive scale. In a recent survey of technologists, only a third predicted that artificial intelligence will be a net positive for society.

The law can help make algorithms work for us by imposing liability when they work against us. The problem is that algorithms fit poorly into existing conceptions of liability. Liability requires injurious acts, but what does it mean for an algorithm to act? Only people act, but algorithms are not people under the law. Some ambitious scholars have argued that the law should change its perspective and recognize sophisticated algorithms as people. However, the philosophical puzzles (are algorithms really people?), practical obstacles (how do you punish an algorithm?), and unexpected consequences (could algorithmic “people” sue us back?) have proven insurmountable.

This Article proposes a less direct but more grounded approach to algorithmic liability. Corporations currently design and run the algorithms that have the most significant social impacts. Longstanding principles of corporate liability already recognize that corporations are “people” capable of acting injuriously. Corporate law stipulates that corporations act through their employees because corporations have control over and benefit from employee conduct. When employees misbehave, corporations are in the best position to discipline and correct them. But there is no reason to say that corporations can only act through their employees. This Article argues that the same control- and benefit-based rationales extend to corporate algorithms. If the law were to recognize that algorithmic conduct should largely qualify as corporate action, the whole framework of corporate liability would kick in. By exercising the control it already has over corporations, the law can help ensure that corporate algorithms work largely in our favor.

Keywords: Artificial Intelligence, Corporate Law, Algorithmic Misconduct, Algorithmic Accountability Gap, Respondeat Superior, Uber, Philosophy of Law, Legal Theory

Suggested Citation

Diamantis, Mihailis, Algorithms Acting Badly: A Solution from Corporate Law (February 27, 2020), 89 Geo. Wash. L. Rev. __ (forthcoming 2020), U Iowa Legal Studies Research Paper No. 2020-12. Available at SSRN.

Mihailis Diamantis (Contact Author)

University of Iowa - College of Law

Melrose and Byington
Iowa City, IA 52242
United States

