AI Regulation Has Its Own Alignment Problem: The Technical and Institutional Feasibility of Disclosure, Registration, Licensing, and Auditing

80 Pages · Posted: 30 Nov 2023

Neel Guha

Stanford University

Christie Lawrence

Stanford University

Lindsey A. Gailmard

Stanford University

Kit Rodolfa

Stanford University

Faiz Surani

University of California, Santa Barbara

Rishi Bommasani

Stanford University

Inioluwa Raji

affiliation not provided to SSRN

Mariano-Florentino Cuéllar

Carnegie Endowment for International Peace; Stanford Law School

Colleen Honigsberg

Stanford Law School

Percy Liang

Stanford University - Department of Computer Science

Daniel E. Ho

Stanford Law School

Date Written: November 15, 2023

Abstract

Calls for regulating artificial intelligence (AI) are widespread, but there remains little consensus on both the specific harms that regulation can and should address and the appropriate regulatory actions to take. Computer scientists propose technical solutions that may be infeasible or illegal; lawyers propose regulation that may be technically impossible; and commentators propose policies that may backfire. AI regulation, in that sense, has its own alignment problem, where proposed interventions are often misaligned with societal values. In this Essay, we detail and assess the alignment and technical and institutional feasibility of four dominant proposals for AI regulation in the United States: disclosure, registration, licensing, and auditing. Our caution against the rush to heavily regulate AI without addressing regulatory alignment is underpinned by three arguments. First, AI regulatory proposals tend to suffer from both regulatory mismatch (i.e., vertical misalignment) and value conflict (i.e., horizontal misalignment). Clarity about a proposal's objectives, feasibility, and impact may highlight that the proposal is mismatched with the harm it is intended to address. In fact, the impulse for AI regulation may in some instances be better addressed by non-AI regulatory reform. And the more concrete the proposed regulation, the more it will expose tensions and tradeoffs between different regulatory objectives and values. Proposals that purportedly address all that ails AI (safety, trustworthiness, bias, accuracy, and privacy) ignore the reality that many goals cannot be jointly satisfied. Second, the dominant AI regulatory proposals face common technical and institutional feasibility challenges: who in government should coordinate and enforce regulation, how can the scope of regulatory interventions avoid ballooning, and what standards and metrics operationalize trustworthy AI values given the lack of, and unclear path to achieve, technical consensus? Third, the federal government can, to varying degrees, reduce AI regulatory misalignment by designing interventions to account for feasibility and alignment considerations. We thus close with concrete recommendations to minimize misalignment in AI regulation.

Keywords: artificial intelligence regulation, regulation, artificial intelligence policy, technology policy, technology law

Suggested Citation

Guha, Neel and Lawrence, Christie and Gailmard, Lindsey A. and Rodolfa, Kit and Surani, Faiz and Bommasani, Rishi and Raji, Inioluwa and Cuéllar, Mariano-Florentino and Honigsberg, Colleen and Liang, Percy and Ho, Daniel E., AI Regulation Has Its Own Alignment Problem: The Technical and Institutional Feasibility of Disclosure, Registration, Licensing, and Auditing (November 15, 2023). George Washington Law Review, Forthcoming. Available at SSRN: https://ssrn.com/abstract=4634443

Neel Guha (Contact Author)

Stanford University ( email )

Stanford, CA
United States

Christie Lawrence

Stanford University ( email )

Stanford, CA 94305
United States

Lindsey A. Gailmard

Stanford University ( email )

Stanford, CA 94305
United States

Kit Rodolfa

Stanford University ( email )

Stanford, CA 94305
United States

Faiz Surani

University of California, Santa Barbara ( email )

South Hall 5504
Santa Barbara, CA 93106
United States

HOME PAGE: http://faizsurani.com

Rishi Bommasani

Stanford University ( email )

Stanford, CA 94305
United States

Inioluwa Raji

affiliation not provided to SSRN

Mariano-Florentino Cuéllar

Carnegie Endowment for International Peace ( email )

1779 Massachusetts Avenue, N.W.
Washington, DC 20036
United States

Stanford Law School ( email )

559 Nathan Abbott Way
Stanford, CA 94305-8610
United States
650-723-9216 (Phone)
650-725-0253 (Fax)

Colleen Honigsberg

Stanford Law School ( email )

559 Nathan Abbott Way
Stanford, CA 94305
United States

Percy Liang

Stanford University - Department of Computer Science ( email )

Gates Computer Science Building
353 Serra Mall
Stanford, CA 94305-9025
United States

Daniel E. Ho

Stanford Law School ( email )

559 Nathan Abbott Way
Stanford, CA 94305-8610
United States
650-723-9560 (Phone)

HOME PAGE: http://dho.stanford.edu

Paper statistics

Downloads: 846
Abstract Views: 2,364
Rank: 58,062