Can Socially-Minded Governance Control the AGI Beast?

15 Pages Posted: 5 Jan 2024

Joshua S. Gans

University of Toronto - Rotman School of Management; NBER

Date Written: November 28, 2023

Abstract

This paper robustly concludes that it cannot. A model is constructed under idealised conditions that presume the risks associated with artificial general intelligence (AGI) are real, that safe AGI products are possible, and that there exist socially-minded funders who are interested in funding safe AGI even if this does not maximise profits. It is demonstrated that a socially-minded entity formed by such funders would not be able to minimise harm from AGI that might be created by unrestricted products released by for-profit firms. The reason is that a socially-minded entity has neither the incentive nor the ability to minimise the use of unrestricted AGI products in ex post competition with for-profit firms, and it cannot preempt the AGI developed by for-profit firms ex ante.
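
To fix ideas, the following is a minimal, hypothetical toy sketch of the ex post competition logic described in the abstract; it is not the paper's model. It assumes a single period, a free safe (restricted) product from the socially-minded entrant, a common safe-product value v_safe, and a uniformly distributed extra taste delta for unrestricted capability; all of these names, parameters, and distributional choices are illustrative assumptions. The point it illustrates is that even when the safe product is given away, consumers who value unrestricted capability highly still adopt the for-profit firm's unrestricted product, so unrestricted usage (the harm proxy here) is not driven to its minimum.

```python
# Hypothetical toy illustration (NOT the paper's model): one period, one
# for-profit firm selling an unrestricted AGI product, and a socially-minded
# entrant that may give away a safe (restricted) product for free.
# "Harm" is proxied by the number of unrestricted adopters.

import numpy as np

rng = np.random.default_rng(0)
n_consumers = 100_000
v_safe = 1.0                              # common value of the safe product (assumed)
delta = rng.uniform(0, 2, n_consumers)    # extra value of unrestricted capability (assumed)

def unrestricted_users(price_unrestricted, safe_available):
    """Count consumers who buy the unrestricted product at a given price."""
    # Outside option is the free safe product (if offered) or nothing.
    outside_surplus = v_safe if safe_available else 0.0
    # Buy unrestricted iff its surplus beats the outside option.
    return np.sum(v_safe + delta - price_unrestricted > outside_surplus)

def profit_maximising_price(safe_available, grid=np.linspace(0.01, 2.0, 200)):
    """For-profit firm picks the price that maximises its revenue ex post."""
    revenue = [p * unrestricted_users(p, safe_available) for p in grid]
    return grid[int(np.argmax(revenue))]

for safe_available in (False, True):
    p_star = profit_maximising_price(safe_available)
    users = unrestricted_users(p_star, safe_available)
    print(f"safe product offered={safe_available}: "
          f"price={p_star:.2f}, unrestricted users={users}")
```

In this toy run a sizeable share of consumers still adopts the unrestricted product even when the safe alternative is free, which conveys the flavour of the abstract's ex post argument; the paper's actual mechanism and conclusions should be taken from the model itself.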

Keywords: artificial general intelligence, existential risk, governance, social objectives

JEL Classification: O36

Suggested Citation

Gans, Joshua S., Can Socially-Minded Governance Control the AGI Beast? (November 28, 2023). Available at SSRN: https://ssrn.com/abstract=4646953 or http://dx.doi.org/10.2139/ssrn.4646953

Joshua S. Gans (Contact Author)

University of Toronto - Rotman School of Management

Canada

HOME PAGE: http://www.joshuagans.com

NBER

1050 Massachusetts Avenue
Cambridge, MA 02138
United States

Paper statistics

Downloads: 108
Abstract Views: 377
Rank: 457,613