Large Language Models as Fiduciaries: A Case Study Toward Robustly Communicating With Artificial Intelligence Through Legal Standards

27 Pages. Posted: 25 Jan 2023. Last revised: 13 Apr 2023.

John Nay

Stanford University - CodeX - Center for Legal Informatics; New York University (NYU)

Date Written: January 23, 2023


Artificial Intelligence (AI) is taking on increasingly autonomous roles, e.g., browsing the web as a research assistant and managing money. But specifying goals and restrictions for AI behavior is difficult. Similar to how parties to a legal contract cannot foresee every potential “if-then” contingency of their future relationship, we cannot specify desired AI behavior for all circumstances. Legal standards facilitate robust communication of inherently vague and underspecified goals. Instructions (in the case of language models, “prompts”) that employ legal standards will allow AI agents to develop shared understandings of the spirit of a directive that generalize expectations regarding acceptable actions to take in unspecified states of the world. Standards have built-in context that is lacking from other goal specification languages, such as plain language and programming languages. Through an empirical study on thousands of evaluation labels we constructed from U.S. court opinions, we demonstrate that large language models (LLMs) are beginning to exhibit an “understanding” of one of the most relevant legal standards for AI agents: fiduciary obligations. Performance comparisons across models suggest that, as LLMs continue to exhibit improved core capabilities, their legal standards understanding will also continue to improve. OpenAI’s latest LLM has 78% accuracy on our data, their previous release has 73% accuracy, and a model from their 2020 GPT-3 paper has 27% accuracy (worse than random). Our research is an initial step toward a framework for evaluating AI understanding of legal standards more broadly, and for conducting reinforcement learning with legal feedback (RLLF).
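The abstract's model comparison reduces to measuring agreement between each model's predicted labels and the gold evaluation labels constructed from court opinions. The following is a minimal illustrative sketch of that accuracy computation; the labels, model names, and outputs below are invented for illustration and are not the paper's data.

```python
# Hypothetical sketch of an accuracy comparison on binary evaluation labels.
# The gold labels and model outputs here are illustrative only, not the
# dataset constructed from U.S. court opinions in the paper.

def accuracy(predictions, gold):
    """Fraction of labels a model predicts correctly."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Toy gold labels: 1 = conduct breaches a fiduciary duty, 0 = it does not.
gold = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]

model_outputs = {
    "model_a": [1, 0, 1, 1, 0, 0, 1, 0, 0, 0],  # mostly agrees with gold
    "model_b": [0, 1, 0, 0, 1, 1, 0, 1, 0, 1],  # systematically wrong, below chance
}

for name, preds in model_outputs.items():
    print(f"{name}: {accuracy(preds, gold):.0%}")
```

With binary labels, chance performance is roughly 50%, which is why an accuracy of 27% in the abstract is described as "worse than random."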

Keywords: Artificial Intelligence, AI, Machine Learning, Natural Language Processing, NLP, Self-Supervised Learning, Reinforcement Learning, RL, Large Language Models, Foundation Models, AI Safety, AI Alignment, AI & Law, AI Policy, Computational Legal Studies, Computational Law, Standards, Prompt Engineering

Suggested Citation

Nay, John, Large Language Models as Fiduciaries: A Case Study Toward Robustly Communicating With Artificial Intelligence Through Legal Standards (January 23, 2023). Available at SSRN.

John Nay (Contact Author)

Stanford University - CodeX - Center for Legal Informatics


New York University (NYU)

Bobst Library, E-resource Acquisitions
20 Cooper Square 3rd Floor
New York, NY 10003-711
United States
