Artificial Meaning?

69 Pages. Posted: 8 Nov 2024. Last revised: 14 Oct 2024.

Thomas R. Lee

Brigham Young University - J. Reuben Clark Law School

Jesse Egbert

Northern Arizona University

Date Written: October 01, 2024

Abstract

The textualist turn is increasingly an empirical one: an inquiry into ordinary meaning in the sense of what is commonly or typically ascribed to a given word or phrase. Such an inquiry is inherently empirical. And empirical questions call for replicable evidence produced by transparent methods, not bare human intuition or an arbitrary preference for one dictionary definition over another.

Both scholars and judges have begun to make this turn. They have started to adopt the tools of corpus linguistics, a field that studies language usage by examining large databases (corpora) of naturally occurring language.

This turn is now being challenged by a proposal to use a simpler, now-familiar tool: AI-driven large language models (LLMs) like ChatGPT. The proposal began with two recent law review articles. And it caught fire, drawing a wave of media attention, with a concurring opinion by Eleventh Circuit Judge Kevin Newsom in a case called Snell v. United Specialty Insurance Co. The Snell concurrence proposed to use ChatGPT and other LLMs to generate empirical evidence relevant to the question whether the installation of in-ground trampolines falls under the ordinary meaning of “landscaping” as used in an insurance policy. It developed a case for relying on such evidence, and for rejecting the methodology of corpus linguistics, based in part on recent legal scholarship. And it offered a series of AI queries and responses as “datapoints” to be considered “alongside” dictionaries and other evidence of ordinary meaning.
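To make the proposal concrete, here is a minimal, hypothetical sketch of the kind of querying the Snell concurrence describes: posing the same ordinary-meaning question to a chat model several times over. The model name, prompt wording, and trial count are illustrative assumptions, not the concurrence's actual protocol.

```python
# Hypothetical sketch of LLM querying in the style the Snell concurrence
# describes; model, prompt, and trial count are illustrative assumptions.
# Requires the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
PROMPT = (
    'Is installing an in-ground trampoline "landscaping" in the '
    "ordinary sense of that word? Answer yes or no, then explain briefly."
)

# Repeating the query surfaces the replicability concern noted above:
# with default sampling, the same question can yield different answers.
for trial in range(5):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,      # default-style sampling; output may vary
    )
    print(f"Trial {trial + 1}: {resp.choices[0].message.content}\n")
```

Any one transcript can be reported as a “datapoint,” but a reader who reruns the query may get a different answer, which is the nub of the reliability objection developed below.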

The proposal is alluring. And in some ways it seems inevitable that AI tools will be part of the future of the empirical analysis of ordinary meaning. But existing AI tools are not up to the task. They are engaged in a form of artificial rationalism, not empiricism. And they are in no position to produce reliable datapoints on questions like the one in Snell.

We respond to the counter-position developed in Snell and the articles it relies on. We show how AI tools fall short, and how corpus tools deliver, on the core components of the empirical inquiry. We present a transparent, replicable means of developing data relevant to the Snell issue. And we explore the elements of a future in which the strengths of AI-driven LLMs could be deployed in a corpus analysis, and the strengths of the corpus inquiry could be implemented in an inquiry involving AI tools.
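By way of contrast, the sketch below illustrates, under stated assumptions, the kind of transparent and replicable corpus query the abstract alludes to; it is not the authors' actual method. It counts collocates of the disputed term within a fixed token window in a plain-text corpus, so any reader with the same corpus and settings gets identical counts. The corpus file name and window size are hypothetical placeholders.

```python
# Hypothetical corpus-linguistics sketch (not the authors' method): count
# collocates of "landscaping" within a fixed window in a plain-text corpus.
import re
from collections import Counter

CORPUS_PATH = "sample_corpus.txt"  # placeholder corpus file
NODE = "landscaping"               # the disputed term from Snell
WINDOW = 8                         # collocation window, in tokens

def collocates(path: str, node: str, window: int) -> Counter:
    """Count tokens appearing within `window` tokens of `node`."""
    with open(path, encoding="utf-8") as fh:
        tokens = re.findall(r"[a-z']+", fh.read().lower())
    counts: Counter = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t for t in tokens[lo:hi] if t != node)
    return counts

if __name__ == "__main__":
    # The most frequent collocates hint at the activities speakers
    # ordinarily associate with the term.
    for word, n in collocates(CORPUS_PATH, NODE, WINDOW).most_common(20):
        print(f"{word}\t{n}")
```

Because every step is explicit, the same query over the same corpus always returns the same counts, which is the replicability the abstract contrasts with one-off LLM responses.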

Keywords: artificial intelligence, ordinary meaning, textualism, interpretation, corpus linguistics

Suggested Citation

Lee, Thomas R. and Egbert, Jesse, Artificial Meaning? (October 01, 2024). BYU Law Research Paper No. 24-26. Available at SSRN: https://ssrn.com/abstract=4973483 or http://dx.doi.org/10.2139/ssrn.4973483

Thomas R. Lee (Contact Author)

Brigham Young University - J. Reuben Clark Law School ( email )

430 JRCB
Brigham Young University
Provo, UT 84602
United States

Jesse Egbert

Northern Arizona University ( email )

PO Box 15066
Flagstaff, AZ 86011
United States

HOME PAGE: http://oak.ucc.nau.edu/jae89/
