Testing Ordinary Meaning

80 Pages · Posted: 6 Nov 2018 · Last revised: 18 Dec 2020

Kevin Tobia

Georgetown University Law Center; Georgetown University - Department of Philosophy

Date Written: December 1, 2020


Within legal scholarship and practice, among the most pervasive tasks is the interpretation of texts. And within legal interpretation, perhaps the most pervasive inquiry is the search for “ordinary meaning.” Jurists often treat ordinary meaning analysis as an empirical inquiry, aiming to discover a fact about how people understand language. When evaluating ordinary meaning, interpreters rely on dictionary definitions or patterns of common usage, increasingly via “legal corpus linguistics” approaches. However, the most central question about these popular methods remains open: Do they reliably reflect ordinary meaning? This Article presents a series of experiments that assess whether (a) dictionary definitions and (b) common usage data reflect (c) how people actually understand language today.

The Article elaborates the implications of two main experimental results. First, neither the dictionary nor the legal corpus linguistics method reliably tracks ordinary people’s judgments about meaning. This shifts the argumentative burden to jurists who rely on these tools to identify “ordinary meaning” or “original public meaning”: proponents of these views must articulate and demonstrate a reliable method of analysis. Moreover, this divergence illuminates several interpretive fallacies. For example, advocates of legal corpus linguistics often contend that the nonappearance of a specific use in a corpus indicates that the use is not part of the relevant term’s ordinary meaning. The experiments reveal this claim to be a “Nonappearance Fallacy.” Ordinary meaning exceeds datasets of common usage — even very large ones.

Second, dictionary and legal corpus linguistics verdicts diverge dramatically from each other. Part of that divergence is explained by the finding that broad dictionary definitions tend to direct interpreters to extensive interpretations, while data on common usage tend to point interpreters to more prototypical cases. This suggests two different criteria that are often relevant in interpretation: a more extensive criterion and a narrower, prototypical criterion. Although dictionaries and corpus linguistics might, in some cases, help us identify these criteria, a hard legal-philosophical question remains: Which of these two criteria should guide the interpretation of terms and phrases in legal texts? Insofar as there is no compelling case to prefer one, the results suggest that dictionary definitions, legal corpus linguistics, and even other more scientific measures of meaning may not be equipped in principle to deliver simple and unequivocal answers to inquiries about the so-called “ordinary meaning” of legal texts.

Keywords: legal interpretation, law and language, contracts, statutory interpretation, legal theory, legal philosophy, experimental jurisprudence, empirical legal studies, formalism, textualism, dictionaries, corpus linguistics

Suggested Citation

Tobia, Kevin, Testing Ordinary Meaning (December 1, 2020). Harvard Law Review, Vol. 134, 2020. Available at SSRN: https://ssrn.com/abstract=3266082 or http://dx.doi.org/10.2139/ssrn.3266082

Kevin Tobia (Contact Author)

Georgetown University Law Center

600 New Jersey Avenue, NW
Washington, DC 20001
United States

HOME PAGE: http://www.law.georgetown.edu/faculty/kevin-tobia/

Georgetown University - Department of Philosophy

37th and O Streets, N.W.
Washington, DC 20007
United States
