GPT-4 Passes the Bar Exam

382 Philosophical Transactions of the Royal Society A (2024)

35 Pages · Posted: 15 Mar 2023 · Last revised: 3 Apr 2024

Daniel Martin Katz

Illinois Tech - Chicago Kent College of Law; Bucerius Center for Legal Technology & Data Science; Stanford CodeX - The Center for Legal Informatics; 273 Ventures

Michael James Bommarito

273 Ventures; Licensio, LLC; Stanford Center for Legal Informatics; Michigan State College of Law; Bommarito Consulting, LLC

Shang Gao

Casetext

Pablo Arredondo

Casetext; Stanford CodeX

Date Written: March 15, 2023

Abstract

In this paper, we experimentally evaluate the zero-shot performance of a preliminary version of GPT-4 against prior generations of GPT on the entire Uniform Bar Examination (UBE), including not only the multiple-choice Multistate Bar Examination (MBE), but also the open-ended Multistate Essay Exam (MEE) and Multistate Performance Test (MPT) components. On the MBE, GPT-4 significantly outperforms both human test-takers and prior models, demonstrating a 26% increase over ChatGPT and beating humans in five of seven subject areas. On the MEE and MPT, which have not previously been evaluated by scholars, GPT-4 scores an average of 4.2/6.0 as compared to much lower scores for ChatGPT. Graded across the UBE components, in the manner in which a human test-taker would be, GPT-4 scores approximately 297 points, significantly in excess of the passing threshold for all UBE jurisdictions. These findings document not just the rapid and remarkable advance of large language model performance generally, but also the potential for such models to support the delivery of legal services in society.

Keywords: GPT, GPT-4, Bar Exam, Legal Data, NLP, Legal NLP, Legal Analytics, natural language processing, natural language understanding, evaluation, machine learning, artificial intelligence, artificial intelligence and law, Neural NLP, Legal Tech, ChatGPT

JEL Classification: C45, C55, K49, O33, O30

Suggested Citation

Katz, Daniel Martin and Bommarito, Michael James and Gao, Shang and Arredondo, Pablo, GPT-4 Passes the Bar Exam (March 15, 2023). 382 Philosophical Transactions of the Royal Society A (2024), Available at SSRN: https://ssrn.com/abstract=4389233 or http://dx.doi.org/10.2139/ssrn.4389233

Daniel Martin Katz (Contact Author)

Illinois Tech - Chicago Kent College of Law ( email )

565 W. Adams St.
Chicago, IL 60661-3691
United States

HOME PAGE: http://www.danielmartinkatz.com/

Bucerius Center for Legal Technology & Data Science ( email )

Jungiusstr. 6
Hamburg, 20355
Germany

HOME PAGE: http://legaltechcenter.de/

Stanford CodeX - The Center for Legal Informatics ( email )

559 Nathan Abbott Way
Stanford, CA 94305-8610
United States

HOME PAGE: http://law.stanford.edu/directory/daniel-katz/

273 Ventures ( email )

HOME PAGE: http://273ventures.com

Licensio, LLC ( email )

Okemos, MI 48864
United States

Stanford Center for Legal Informatics ( email )

559 Nathan Abbott Way
Stanford, CA 94305-8610
United States

Michigan State College of Law ( email )

318 Law College Building
East Lansing, MI 48824-1300
United States

Bommarito Consulting, LLC ( email )

MI 48098
United States

Shang Gao

Casetext ( email )

United States

HOME PAGE: http://casetext.com

Pablo Arredondo

Casetext ( email )

United States

Stanford CodeX ( email )

559 Nathan Abbott Way
Stanford, CA 94305-8610
United States

Paper Statistics

Downloads: 9,863
Abstract Views: 37,068
Rank: 1,096