Regulating the Risks of AI

65 Pages. Posted: 23 Aug 2022. Last revised: 13 Jan 2024.

Margot E. Kaminski

University of Colorado Law School; Yale University - Yale Information Society Project; University of Colorado at Boulder - Silicon Flatirons Center for Law, Technology, and Entrepreneurship

Date Written: August 19, 2022

Abstract

Companies and governments now use Artificial Intelligence (“AI”) in a wide range of settings. But using AI leads to well-known risks that arguably present challenges for a traditional liability model. It is thus unsurprising that lawmakers in both the United States and the European Union (“EU”) have turned to the tools of risk regulation in governing AI systems.

This Article describes the growing convergence around risk regulation in AI governance. It then addresses the question: what does it mean to use risk regulation to govern AI systems? The primary contribution of this Article is to offer an analytic framework for understanding the use of risk regulation as AI governance. It aims to surface the shortcomings of risk regulation as a legal approach, and to enable readers to identify which type of risk regulation is at play in a given law. The theoretical contribution of this Article is to encourage researchers to think about what is gained and what is lost by choosing a particular legal tool for constructing the meaning of AI systems in the law.

Whatever the value of using risk regulation, constructing AI harms as risks is a choice with consequences. Risk regulation comes with its own policy baggage: a set of tools and troubles that have emerged in other fields. Risk regulation tends to try to fix problems with a technology so that it may be used, rather than contemplating whether it might sometimes not be appropriate to use it at all. Risk regulation works best on quantifiable problems and struggles with hard-to-quantify harms. It can cloak what are really policy decisions as technical decisions. Risk regulation typically is not structured to make injured people whole. And the version of risk regulation typically deployed to govern AI systems lacks the feedback loops of tort liability. Thus, the choice to use risk regulation in the first place channels the law toward a particular approach to AI governance, one that makes implicit tradeoffs and carries predictable shortcomings.

The second, more granular observation this Article makes is that not all risk regulation is the same. That is, once regulators choose to deploy risk regulation, there are still significant variations in what type of risk regulation they might use. Risk regulation is a legal transplant with multiple possible origins. This Article identifies at least four models for AI risk regulation that meaningfully diverge in how they address accountability.

Keywords: AI, data privacy, regulation, governance, risk regulation, risks, artificial intelligence

Suggested Citation

Kaminski, Margot E., Regulating the Risks of AI (August 19, 2022). Boston University Law Review, Vol. 103:1347, 2023, U of Colorado Law Legal Studies Research Paper No. 22-21, Available at SSRN: https://ssrn.com/abstract=4195066 or http://dx.doi.org/10.2139/ssrn.4195066

