Can Socially-Minded Governance Control the AGI Beast?
25 Pages · Posted: 5 Jan 2024 · Last revised: 3 Dec 2024
Date Written: December 28, 2023
Abstract
This paper robustly concludes that it cannot. A model is constructed under idealised conditions that presume the risks associated with artificial general intelligence (AGI) are real, that safe AGI products are possible, and that there exist socially-minded funders willing to fund safe AGI even when doing so does not maximise profits. It is demonstrated that a socially-minded entity formed by such funders would not be able to minimise the harm from unrestricted AGI products released by for-profit firms. The reason is that such an entity can limit the use of unrestricted AGI products only through ex post competition with for-profit firms, at a prohibitive financial cost, and therefore cannot preempt the AGI developed by for-profit firms ex ante.
Keywords: artificial general intelligence, existential risk, governance, social objectives
JEL Classification: O36