Large Language Models for Legal Interpretation? Don't Take Their Word for It
Georgetown Law Journal, Vol. 114 (forthcoming)
71 Pages | Posted: 2 Apr 2025 | Last revised: 19 Feb 2025
Date Written: February 3, 2025
Abstract
Recent breakthroughs in statistical language modeling have impacted countless domains, including the law. Chatbot applications such as ChatGPT, Claude, and DeepSeek – which incorporate ‘large’ neural network–based language models (LLMs) trained on vast swathes of internet text – process and generate natural language with remarkable fluency. Recently, scholars have proposed adding AI chatbot applications to the legal interpretive toolkit. These suggestions are no longer theoretical: in 2024, a U.S. judge queried LLM chatbots to interpret a disputed insurance contract and the U.S. Sentencing Guidelines.
We assess this emerging practice from technical, linguistic, and legal perspectives. This Article explains the design features and product development cycles of LLM-based chatbot applications, with a focus on properties that may promote their unintended misuse – or intentional abuse – by legal interpreters. Next, we argue that legal practitioners run the risk of inappropriately relying on LLMs to resolve legal interpretive questions. We conclude with guidance on how such systems – and the language models that underpin them – can be responsibly employed alongside other tools to investigate legal meaning.
Keywords: legal interpretation, large language model, statutory interpretation, textualism, LLMs, artificial intelligence, law and technology