Building a Foundation for Data-Driven, Interpretable, and Robust Policy Design using the AI Economist

41 Pages Posted: 7 Sep 2021

Date Written: August 5, 2021

Abstract

Optimizing economic and public policy is critical to addressing socioeconomic issues and trade-offs, e.g., improving equality, productivity, or wellness, and poses a complex mechanism-design problem.

A policy designer needs to consider multiple objectives, policy levers, and behavioral responses from strategic actors who optimize for their individual objectives.
Moreover, real-world policies should be explainable and robust to simulation-to-reality gaps, e.g., due to calibration issues.

Existing approaches are often limited, for example, to a narrow set of policy levers or to objectives that are hard to measure; they may not yield explicit optimal policies or account for strategic behavior.
Hence, it remains challenging to optimize policy in real-world scenarios.

Here we show that the AI Economist framework enables effective, flexible, and interpretable policy design using two-level reinforcement learning (RL) and data-driven simulations.

We validate our framework on optimizing the stringency of US state policies and Federal subsidies during a pandemic, e.g., COVID-19, using a simulation fitted to real data.

We find that log-linear policies trained using RL significantly improve social welfare, measured by both public health and economic outcomes, relative to historical outcomes.

Their behavior can be explained, e.g., well-performing policies respond strongly to changes in recovery and vaccination rates.

They are also robust to calibration errors, e.g., infection rates that are over- or underestimated.
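To make the "log-linear policy" notion concrete, here is a minimal sketch of such a rule: the stringency level is a logistic function of a weighted sum of log-transformed indicators, so each weight directly reveals how strongly the policy reacts to a given rate. All names, weights, and the exact feature set are hypothetical illustrations, not the paper's actual parameterization.

```python
import numpy as np

def log_linear_policy(features, weights, bias=0.0):
    """Map observed indicators (e.g., infection, recovery, and
    vaccination rates) to a stringency level in [0, 1] via a
    log-linear rule: logit(stringency) = bias + weights . log(features).

    Hypothetical sketch, not the authors' exact model. The weights
    are directly interpretable: a positive weight means the policy
    tightens as that indicator rises, a negative weight means it
    relaxes.
    """
    z = bias + np.dot(weights, np.log(features))
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid squashes to [0, 1]

# Illustrative call with made-up numbers: the negative weight on the
# vaccination rate (third entry) means higher vaccination coverage
# lowers the recommended stringency.
rates = np.array([0.02, 0.10, 0.30])   # infection, recovery, vaccination
weights = np.array([0.8, -0.5, -0.6])  # signs chosen for illustration
stringency = log_linear_policy(rates, weights)
print(round(float(stringency), 3))
```

Because the mapping is monotone in each log-feature, reading off the sign and magnitude of a trained weight explains the policy's behavior, which is the interpretability property the abstract highlights.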
To date, real-world policymaking has seen little adoption of machine learning methods, including RL and AI-driven simulations.

Our results show the potential of AI to guide policy design and improve social welfare amidst the complexity of the real world.

Note: Funding: The authors acknowledge that they received no funding in support for this research.

Declaration of Interests: The authors declare no competing interests.

Keywords: Machine Learning, Reinforcement Learning, Economics, Policy Design, COVID-19, Interpretability

Suggested Citation

Trott, Alexander and Srinivasa, Sunil and van der Wal, Douwe and Haneuse, Sebastien and Zheng, Stephan, Building a Foundation for Data-Driven, Interpretable, and Robust Policy Design using the AI Economist (August 5, 2021). Available at SSRN: https://ssrn.com/abstract=3900237 or http://dx.doi.org/10.2139/ssrn.3900237

Alexander Trott

Salesforce

United States

Sunil Srinivasa

Salesforce

United States

Douwe Van der Wal

Salesforce

United States

Sebastien Haneuse

Harvard University

Stephan Zheng (Contact Author)

Salesforce

United States

