Predicting Consumer Contracts
54 Pages. Posted: 13 May 2021. Last revised: 13 Aug 2021.
Date Written: May 12, 2021
This Article empirically examines whether a computational language model can read and understand consumer contracts. Language models are able to perform a wide range of complex tasks by predicting the next word in a sequence. In the legal domain, language models can summarize laws, translate legalese into plain English, and, as this Article will explore, inform consumers of their contractual rights and obligations.
To showcase the opportunities and challenges of using language models to read consumer contracts, this Article studies the performance of GPT-3, the world’s first commercial language model. The case study employs a novel dataset comprising questions about the terms of service of popular U.S. websites. Although the results are not definitive, they offer several important insights. First, owing to its immense training data, the model can exploit subtle informational cues embedded in questions. Second, the model performed poorly on contractual provisions that favor the rights and interests of consumers, suggesting that it may contain an anti-consumer bias. Third, the model is brittle in unexpected ways: performance was highly sensitive to the wording of questions, yet surprisingly indifferent to variations in contractual language.
While language models could potentially empower consumers, they could also provide misleading legal advice and entrench harmful biases. Leveraging the benefits of language models in reading consumer contracts and confronting the challenges they pose requires a combination of engineering and governance. Policymakers, together with developers and users of language models, should explore technical and institutional safeguards to ensure that language models are used responsibly and align with broader social values.
Keywords: Contract Law, Consumer Law, Machine Learning, Natural Language Processing, Language Models, GPT-3, Smart Readers, Access to Justice, Legal Automation, Algorithmic Bias, Governance of AI
JEL Classification: K12, K23, K24, O31, O33, O38