Can Socially-Minded Governance Control the AGI Beast?

16 Pages · Posted: 4 Dec 2023

Joshua Gans

University of Toronto - Rotman School of Management

Date Written: November 2023

Abstract

This paper robustly concludes that it cannot. A model is constructed under idealised conditions that presume the risks associated with artificial general intelligence (AGI) are real, that safe AGI products are possible, and that there exist socially-minded funders who are interested in funding safe AGI even if this does not maximise profits. It is demonstrated that a socially-minded entity formed by such funders would not be able to minimise harm from AGI that might be created by unrestricted products released by for-profit firms. The reason is that a socially-minded entity has neither the incentive nor the ability to minimise the use of unrestricted AGI products in ex post competition with for-profit firms, and it cannot preempt the AGI developed by for-profit firms ex ante.

Institutional subscribers to the NBER working paper series and residents of developing countries may download this paper without additional charge at www.nber.org.

Suggested Citation

Gans, Joshua, Can Socially-Minded Governance Control the AGI Beast? (November 2023). NBER Working Paper No. w31924. Available at SSRN: https://ssrn.com/abstract=4652397

Joshua Gans (Contact Author)

University of Toronto - Rotman School of Management ( email )

105 St. George Street
Toronto, Ontario M5S 3E6
Canada


Paper statistics

Downloads: 6
Abstract Views: 146