Predicting Consumer Contracts
68 Pages · Posted: 13 May 2021 · Last revised: 8 Feb 2023
Date Written: May 12, 2021
Abstract
This Article empirically examines whether a computational language model can read and understand consumer contracts. In recent years, language models have heralded a paradigm shift in artificial intelligence, characterized by unprecedented machine capabilities and new societal risks. These models, which are trained on immense quantities of data to predict the next word in a sequence, can perform a wide range of complex tasks. In the legal domain, language models can interpret statutes, draft transactional documents, and, as this Article will explore, inform consumers of their contractual rights and obligations.
To showcase the opportunities and challenges of using language models to read consumer contracts, this Article studies the performance of GPT-3, the world’s first commercial language model. The case study evaluates the model’s ability to understand consumer contracts by testing its performance on a novel dataset comprising questions relating to online terms of service. Although the results are not definitive, they offer several important insights. First, the model appears to be able to exploit subtle informational cues when answering questions about consumer contracts. Second, the model performs poorly in answering certain questions about contractual provisions that favor the rights and interests of consumers, suggesting that the model may contain an anti-consumer bias. Third, the model is brittle in unexpected ways. Performance in the case study was highly sensitive to the wording of questions, but surprisingly indifferent to variations in contractual language.
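For readers unfamiliar with the mechanics of such a case study, the sketch below shows one way a terms-of-service question might be posed to a GPT-3-style completion API. It is illustrative only: the endpoint, model name, prompt format, and yes/no framing are assumptions for exposition, not the Article's actual experimental setup or dataset.

```python
# Illustrative sketch (not the Article's experimental code): posing a yes/no
# question about a terms-of-service clause to a GPT-3-style completion endpoint.
# Assumes the OpenAI completions REST API and an API key in OPENAI_API_KEY.
import os
import requests

API_URL = "https://api.openai.com/v1/completions"

def ask_about_contract(clause: str, question: str) -> str:
    """Send a contract clause and a question to the model; return its answer."""
    prompt = (
        "Terms of service excerpt:\n"
        f"{clause}\n\n"
        f"Question: {question}\n"
        "Answer (Yes or No):"
    )
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "davinci-002",  # hypothetical stand-in for the GPT-3 model studied
            "prompt": prompt,
            "max_tokens": 5,
            "temperature": 0,        # deterministic output for evaluation
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"].strip()

# Example: does the clause let the company change the terms unilaterally?
clause = "We may modify these Terms at any time without notice to you."
print(ask_about_contract(clause, "Can the company change the terms without notifying the user?"))
```

Comparing the model's answers against human-coded answers for each question would then yield the kind of accuracy and error patterns the case study reports.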
These preliminary findings suggest that while language models have the potential to empower consumers, they also have the potential to provide misleading advice and entrench harmful biases. Leveraging the benefits of language models in performing legal tasks, such as reading consumer contracts, and confronting the associated challenges requires a combination of thoughtful engineering and governance. Before language models are deployed in the legal domain, policymakers should explore technical and institutional safeguards to ensure that language models are used responsibly and align with broader social values.
Keywords: Artificial Intelligence, Consumer Contracts, ChatGPT, GPT-3, Language Models, Bias, Governance
JEL Classification: K12, K23, K24, O31, O33, O38