Large Language Models as Corporate Lobbyists

9 Pages · Posted: 4 Jan 2023 · Last revised: 30 Jan 2023

John Nay

Stanford University - CodeX - Center for Legal Informatics; New York University (NYU); Brooklyn Artificial Intelligence Research; Brooklyn Investment Group (BKLN.com)

Date Written: January 2, 2023

Abstract

We demonstrate a proof of concept of a large language model conducting corporate lobbying-related activities. An autoregressive large language model (OpenAI's text-davinci-003) determines whether proposed U.S. Congressional bills are relevant to specific public companies and provides explanations and confidence levels. For the bills the model deems relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of novel ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model. It outperforms the baseline of predicting the most common outcome of irrelevance. We also benchmark the performance of the previous OpenAI GPT-3 model (text-davinci-002), which was the state-of-the-art model on many academic natural language tasks until text-davinci-003 was recently released. The performance of text-davinci-002 is worse than the simple baseline. These results suggest that, as large language models continue to exhibit improved natural language understanding capabilities, performance on lobbying-related tasks will continue to improve. Longer term, if AI begins to influence law in a manner that is not a direct extension of human intentions, this threatens the critical role that law as information could play in aligning AI with humans. Initially, AI is being used simply to augment human lobbyists for a small portion of their daily tasks. However, firms have an incentive to use less and less human oversight over automated assessments of policy ideas and over the written communication to regulatory agencies and Congressional staffers. The core question raised is where to draw the line between human-driven and AI-driven policy influence.
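The relevance-screening step described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the prompt wording, field names, and the `build_prompt` helper are all assumptions, and only the use of OpenAI's text-davinci-003 completion endpoint comes from the paper.

```python
def build_prompt(bill_title: str, bill_summary: str,
                 company_name: str, company_description: str) -> str:
    """Assemble a prompt asking the model whether a bill is relevant to a company.

    The exact wording here is illustrative; the paper's prompts differ.
    """
    return (
        f"Company name: {company_name}\n"
        f"Company business description: {company_description}\n"
        f"Bill title: {bill_title}\n"
        f"Bill summary: {bill_summary}\n\n"
        "Is this bill potentially relevant to this company? "
        "Answer YES or NO, then give a confidence level (0-100) "
        "and a brief explanation."
    )

# With the legacy openai Python SDK (<1.0), the completion call the paper's
# pipeline relies on would look roughly like this (not run here, since it
# requires an API key and the now-deprecated text-davinci-003 model):
#
#   import openai
#   response = openai.Completion.create(
#       model="text-davinci-003",
#       prompt=build_prompt(bill_title, bill_summary,
#                           company_name, company_description),
#       temperature=0,
#       max_tokens=256,
#   )
#   answer = response["choices"][0]["text"]
```

Bills the model answers YES to would then feed a second prompt that drafts the persuasion letter to the bill's sponsor, mirroring the two-stage pipeline the abstract describes.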

Keywords: Artificial Intelligence, AI, Machine Learning, Natural Language Processing, NLP, Self-Supervised Learning, Large Language Models, GPT, Foundation Models, AI Safety, AI Alignment, AI & Law, AI Policy, Computational Legal Studies, Computational Law, Law-Making, Public Policy, Policy-Making, Lobbying

JEL Classification: C45, C55, K49, O30

Suggested Citation

Nay, John, Large Language Models as Corporate Lobbyists (January 2, 2023). Available at SSRN: https://ssrn.com/abstract=4316615 or http://dx.doi.org/10.2139/ssrn.4316615

John Nay (Contact Author)

Stanford University - CodeX - Center for Legal Informatics

HOME PAGE: http://law.stanford.edu/directory/john-nay/

New York University (NYU)

Bobst Library, E-resource Acquisitions
20 Cooper Square 3rd Floor
New York, NY 10003-711
United States

HOME PAGE: http://nyu.edu

Brooklyn Artificial Intelligence Research

HOME PAGE: http://bkln.com

Brooklyn Investment Group (BKLN.com)

370 Jay St
Brooklyn, NY 11201
United States

HOME PAGE: http://bkln.com
