Aligning Artificial Intelligence with Humans through Public Policy

10 Pages Posted: 2 Jun 2022


John Nay

New York University (NYU); Stanford Center for Legal Informatics; Brooklyn Artificial Intelligence Research; Brooklyn Investment Group

James Daily

Washington University in St. Louis - Center for Empirical Research in the Law

Date Written: May 4, 2022


Given that Artificial Intelligence (AI) increasingly permeates our lives, it is critical that we systematically align AI objectives with human goals and values. The human-AI alignment problem stems from the impracticality of explicitly specifying the rewards that AI models should receive for all the actions they could take in all relevant states of the world. One possible solution, then, is to leverage the capabilities of AI models to learn those rewards implicitly from a rich source of data describing human values in a wide range of contexts. The democratic policy-making process produces just such data by developing specific rules, flexible standards, interpretable guidelines, and generalizable precedents that synthesize citizens’ preferences over potential actions taken in many states of the world. Computationally encoding public policies to make them legible to AI systems should therefore be an important part of a socio-technical approach to the broader human-AI alignment puzzle.

Legal scholars are exploring AI, but most research has focused on how AI systems fit within existing law rather than on how AI systems may understand the law. This Essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks. As a demonstration of the ability of AI to comprehend policy, we provide a case study of an AI system that predicts the relevance of proposed legislation to a given publicly traded company and its likely effect on that company. We believe this represents the “comprehension” phase of AI and policy, but leveraging policy as a key source of human values to align AI requires “understanding” policy. We outline what we believe will be required to move toward that goal and describe two example research projects in that direction.

Solving the alignment problem is crucial to ensuring that AI is beneficial both individually (to the person or group deploying the AI) and socially. As AI systems are given increasing responsibility in high-stakes contexts, integrating democratically determined policy into those systems could align their behavior with human goals in a way that is responsive to a constantly evolving society.
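The case study described above — predicting whether a proposed bill is relevant to a given publicly traded company — can be illustrated with a deliberately minimal sketch. The essay's actual system is not specified here; the function names (`cosine_similarity`, `relevance`), the bag-of-words representation, and the threshold value are all hypothetical stand-ins for what would, in practice, be a learned language model scoring bill text against a company profile.

```python
from collections import Counter
import math


def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts (toy representation)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


def relevance(bill_text: str, company_profile: str, threshold: float = 0.1) -> bool:
    """Flag a bill as potentially relevant to a company.

    The threshold is an illustrative assumption, not a calibrated value.
    """
    return cosine_similarity(bill_text, company_profile) >= threshold


# Hypothetical example inputs:
bill = "A bill to regulate emissions from electric vehicle battery manufacturing"
profile = "Automaker producing electric vehicle battery systems"
print(relevance(bill, profile))  # shared terms make this bill score as relevant
```

A production system would replace the bag-of-words scorer with a model trained on labeled bill-company pairs, but the interface — text in, relevance judgment out — is the same shape as the "comprehension" task the essay describes.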

Keywords: Artificial Intelligence, Machine Learning, Natural Language Processing, Law, Policy, Human-AI Alignment, AI Ethics, Computational Social Science, Computational Policy

Suggested Citation

Nay, John and Daily, James, Aligning Artificial Intelligence with Humans through Public Policy (May 4, 2022). Available at SSRN.

John Nay (Contact Author)

New York University (NYU) ( email )

Bobst Library, E-resource Acquisitions
20 Cooper Square 3rd Floor
New York, NY 10003-711
United States


Stanford Center for Legal Informatics ( email )


Brooklyn Artificial Intelligence Research ( email )


Brooklyn Investment Group ( email )

370 Jay St
Brooklyn, NY 11201
United States


James Daily

Washington University in St. Louis - Center for Empirical Research in the Law ( email )

One Brookings Drive
St. Louis, MO 63130
United States


