Auditing Large Language Models: A Three-Layered Approach

29 Pages · Posted: 17 Feb 2023

Jakob Mökander

University of Oxford - Oxford Internet Institute; Princeton University - Center for Information Technology Policy

Jonas Schuett

Centre for the Governance of AI; Goethe University Frankfurt; Institute for Law & AI

Hannah Rose Kirk

University of Oxford - Oxford Internet Institute

Luciano Floridi

Yale University - Digital Ethics Center; University of Bologna - Department of Legal Studies

Date Written: February 16, 2023

Abstract

The emergence of large language models (LLMs) represents a major advance in artificial intelligence (AI) research. However, the widespread use of LLMs also raises significant ethical and social challenges. Previous research has pointed towards auditing as a promising governance mechanism to help ensure that AI systems are designed and deployed in ways that are ethical, legal, and technically robust. Yet existing auditing procedures fail to address the governance challenges posed by LLMs, which are adaptable to a wide range of downstream tasks. To help bridge that gap, this article offers three contributions. First, by analysing the affordances and constraints of existing auditing procedures, we establish the need for new procedures that capture the risks posed by LLMs. Second, drawing on best practices from IT governance and systems engineering, we outline a blueprint for auditing LLMs in feasible and effective ways. Specifically, we propose a three-layered approach, whereby governance audits, model audits, and application audits complement and inform each other. Finally, we discuss the limitations not only of our three-layered approach but also of the prospect of auditing LLMs at all. Ultimately, this article seeks to expand the methodological toolkit available to technology providers and policymakers who wish to analyse and evaluate LLMs from technical, ethical, and legal perspectives.
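Although the paper itself contains no code, the proposed structure can be made concrete with a short sketch. The following Python sketch models, under stated assumptions, how findings might flow between the three layers; the paper proposes only the layered structure, so every identifier and example entity below (Finding, AuditTrail, the audit functions, and the named provider, model, and application) is a hypothetical illustration, not the authors' method.

```python
# A minimal, hypothetical sketch of the three-layered audit flow described
# in the abstract. All names below are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Finding:
    layer: str     # "governance", "model", or "application"
    subject: str   # the entity audited at that layer
    issue: str     # the risk or gap identified
    severity: str  # "low", "medium", or "high"


@dataclass
class AuditTrail:
    """Shared record through which the three audit layers inform each other."""
    findings: list = field(default_factory=list)

    def record(self, finding: Finding) -> None:
        self.findings.append(finding)

    def from_layer(self, layer: str) -> list:
        return [f for f in self.findings if f.layer == layer]


def governance_audit(provider: str, trail: AuditTrail) -> None:
    # Layer 1: evaluate the technology provider's organisational procedures.
    trail.record(Finding("governance", provider,
                         "no documented incident-response process", "medium"))


def model_audit(model: str, trail: AuditTrail) -> None:
    # Layer 2: evaluate the LLM itself before release; upstream governance
    # findings inform how strictly model-level issues are rated.
    severity = "high" if trail.from_layer("governance") else "medium"
    trail.record(Finding("model", model,
                         "harmful completions on adversarial prompts", severity))


def application_audit(app: str, trail: AuditTrail) -> None:
    # Layer 3: evaluate a downstream application built on the model,
    # inheriting unresolved model-level findings.
    inherited = [f.issue for f in trail.from_layer("model")]
    trail.record(Finding("application", app,
                         f"unmitigated upstream risks: {inherited}", "high"))


if __name__ == "__main__":
    trail = AuditTrail()
    governance_audit("ExampleAI Inc.", trail)     # hypothetical provider
    model_audit("example-llm-v1", trail)          # hypothetical model
    application_audit("example-chat-app", trail)  # hypothetical application
    for f in trail.findings:
        print(f"[{f.layer}] {f.subject}: {f.issue} (severity: {f.severity})")
```

The single shared AuditTrail is the point of the sketch: it is one plausible way to operationalise the abstract's claim that the three audit types "complement and inform each other", since each layer scopes and rates its checks against what upstream layers have recorded.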

Keywords: Artificial intelligence, auditing, ethics, foundation models, governance, large language models, natural language processing, policy, risk management

Suggested Citation

Mökander, Jakob and Schuett, Jonas and Kirk, Hannah Rose and Floridi, Luciano, Auditing Large Language Models: A Three-Layered Approach (February 16, 2023). Available at SSRN: https://ssrn.com/abstract=4361607 or http://dx.doi.org/10.2139/ssrn.4361607

Jakob Mökander (Contact Author)

University of Oxford - Oxford Internet Institute

Princeton University - Center for Information Technology Policy

C231A E-Quad
Olden Street
Princeton, NJ 08540
United States

Jonas Schuett

Centre for the Governance of AI

Oxford
United Kingdom

HOME PAGE: http://governance.ai

Goethe University Frankfurt

Frankfurt
Germany

Institute for Law & AI

1427 Cambridge St
#5
Cambridge, MA 02139
United States

Hannah Rose Kirk

University of Oxford - Oxford Internet Institute

Luciano Floridi

Yale University - Digital Ethics Center

85 Trumbull Street
New Haven, CT 06511
United States
203-432-6473 (Phone)

University of Bologna - Department of Legal Studies

Via Zamboni 22
Bologna, BO 40100
Italy

HOME PAGE: http://www.unibo.it/sitoweb/luciano.floridi/en

Paper statistics

Downloads: 1,213
Abstract Views: 4,487
Rank: 35,602