Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans

53 Pages. Posted: 15 Sep 2022. Last revised: 25 Sep 2022.


John Nay

New York University (NYU); Stanford University - CodeX - Center for Legal Informatics; Brooklyn Artificial Intelligence Research; Brooklyn Investment Group

Date Written: September 13, 2022


Artificial Intelligence (AI) capabilities are rapidly advancing, and highly capable AI could cause radically different futures depending on how it is developed and deployed. We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Specifying the desirability (value) of an AI system taking a particular action in a particular state of the world is unwieldy beyond a very limited set of value-action-states. The purpose of machine learning is to train on a subset of states and have the resulting agent generalize an ability to choose high value actions in unencountered circumstances. But the function ascribing values to an agent’s actions during training is inevitably an incredibly incomplete encapsulation of the breadth of human values, and the training process is unavoidably a sparse exploration of states pertinent to all possible futures. Therefore, after training, AI is deployed with a coarse map of human preferred territory and will often choose actions unaligned with our preferred paths.

Law-making and legal interpretation form a computational engine that converts opaque human values into legible and enforceable directives. Law Informs Code is the research agenda attempting to capture that complex computational process of human law and embed it in AI. Just as parties to a legal contract cannot foresee every potential “if-then” contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify “if-then” rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems: a language of alignment. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations, i.e., to generalize expectations regarding actions taken to unspecified states of the world. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), when law is leveraged as an expression of how humans communicate their goals and of what society values, Law Informs Code.

We describe how the data generated by legal processes and the theoretical constructs and practices of law (methods of law-making, statutory interpretation, contract drafting, applications of standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals for AI. This helps with human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment, harnessing public law as an up-to-date knowledge base of democratically endorsed values ascribed to state-action pairs. Although law is partly a reflection of historically contingent political power – and thus not a perfect aggregation of citizen preferences – if properly parsed, its distillation offers a legitimate computational comprehension of human goals and societal values. If law eventually informs powerful AI, engaging in the deliberative human political process to improve law takes on even more meaning.

Keywords: Artificial Intelligence, Machine Learning, Deep Learning, Neural Networks, Natural Language Processing, Reinforcement Learning, Self-Supervised Learning, Large Language Models, Foundation Models, AI Safety, AI Alignment, AI Ethics, AI & Law, AI Policy, Computational Legal Studies, Computational Law

Suggested Citation

Nay, John, Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans (September 13, 2022). Northwestern Journal of Technology and Intellectual Property, Volume 20, Forthcoming. Available at SSRN.

John Nay (Contact Author)

New York University (NYU)

Bobst Library, E-resource Acquisitions
20 Cooper Square 3rd Floor
New York, NY 10003-711
United States


Stanford University - CodeX - Center for Legal Informatics


Brooklyn Artificial Intelligence Research


Brooklyn Investment Group

370 Jay St
Brooklyn, NY 11201
United States


