Can Socially-Minded Governance Control the AGI Beast?

25 Pages · Posted: 5 Jan 2024 · Last revised: 3 Dec 2024

Joshua S. Gans

University of Toronto - Rotman School of Management; NBER

Date Written: December 28, 2023

Abstract

This paper robustly concludes that it cannot. A model is constructed under idealised conditions that presume the risks associated with artificial general intelligence (AGI) are real, that safe AGI products are possible, and that there exist socially-minded funders who are interested in funding safe AGI even if doing so does not maximise profits. It is demonstrated that a socially-minded entity formed by such funders would not be able to minimise the harm that unrestricted AGI products released by for-profit firms might create. The reason is that a socially-minded entity can only curtail the use of unrestricted AGI products through ex post competition with for-profit firms, and only at a prohibitive financial cost, so it does not preempt the AGI developed by for-profit firms ex ante.
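A minimal sketch of the abstract's logic, using notation that is assumed here rather than taken from the paper: let B denote the socially-minded entity's budget, pi_U the profit a for-profit firm earns from an unrestricted AGI product, and C the financial cost the socially-minded entity must incur to displace that product through ex post competition (for example, by subsidising a safe substitute until the unrestricted product loses its market). The argument can then be caricatured as:

  C \;\geq\; \pi_U
  \qquad\text{(displacing the product ex post requires outlays at least matching the profit it generates)}

  C > B
  \;\Longrightarrow\;
  \text{no credible ex post threat}
  \;\Longrightarrow\;
  \text{for-profit firms release unrestricted AGI, and there is no ex ante preemption.}

This is only an illustrative inequality consistent with the abstract's wording; the paper's actual model and conditions may differ.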

Keywords: artificial general intelligence, existential risk, governance, social objectives

JEL Classification: O36

Suggested Citation

Gans, Joshua S., Can Socially-Minded Governance Control the AGI Beast? (December 28, 2023). Available at SSRN: https://ssrn.com/abstract=4646953 or http://dx.doi.org/10.2139/ssrn.4646953

Joshua S. Gans (Contact Author)

University of Toronto - Rotman School of Management

Canada

HOME PAGE: http://www.joshuagans.com

NBER

1050 Massachusetts Avenue
Cambridge, MA 02138
United States
