Self-Regulating Artificial General Intelligence

16 Pages. Posted: 28 Feb 2018. Last revised: 15 Jan 2025

Joshua S. Gans

University of Toronto - Rotman School of Management; NBER

There are 2 versions of this paper

Date Written: February 2018

Abstract

This paper examines the "paperclip apocalypse" concern for artificial general intelligence. This concern arises when a superintelligent AI with a simple goal (i.e., producing paperclips) accumulates power so that all resources are devoted to that goal and become unavailable for any other use. Conditions are provided under which a paperclip apocalypse can arise, but the model also shows that, under certain architectures for recursive self-improvement of AIs, a paperclip AI may refrain from allowing power capabilities to be developed. The reason is that such developments pose the same control problem for the AI as they do for humans (over AIs) and hence threaten to deprive it of resources for its primary goal.
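The self-regulation logic lends itself to a toy expected-payoff calculation. The Python sketch below is illustrative only: it assumes a single self-improvement decision, an exogenous probability of retaining control over the successor, and a payoff of zero if control is lost. The function name and all parameter values are hypothetical and are not taken from the paper's model.

# Toy sketch of the self-regulation logic described in the abstract.
# All names and numbers here are illustrative assumptions, not the
# paper's actual model.

def should_self_improve(p_control: float,
                        payoff_improved: float,
                        payoff_status_quo: float) -> bool:
    """Return True if the AI's expected payoff from building a more
    powerful successor exceeds its payoff from the status quo.

    p_control         -- assumed probability the AI retains control of
                         the successor (the AI's own control problem)
    payoff_improved   -- paperclip output if the successor stays aligned
    payoff_status_quo -- paperclip output without self-improvement

    If control is lost, resources are assumed to go to the successor's
    goals, leaving the original AI a payoff of zero.
    """
    return p_control * payoff_improved > payoff_status_quo


if __name__ == "__main__":
    # Severe control problem: even a 4x capability gain is not worth
    # the risk, since 0.2 * 4.0 = 0.8 < 1.0.
    print(should_self_improve(p_control=0.2,
                              payoff_improved=4.0,
                              payoff_status_quo=1.0))  # False

    # Mild control problem: self-improvement pays, 0.9 * 4.0 = 3.6 > 1.0.
    print(should_self_improve(p_control=0.9,
                              payoff_improved=4.0,
                              payoff_status_quo=1.0))  # True

Under these assumptions, the AI self-improves only when the control problem is mild enough that the expected gain outweighs the status quo, which captures the paper's self-regulation intuition in miniature.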

Suggested Citation

Gans, Joshua S., Self-Regulating Artificial General Intelligence (February 2018). NBER Working Paper No. w24352. Available at SSRN: https://ssrn.com/abstract=3131062

Joshua S. Gans (Contact Author)

University of Toronto - Rotman School of Management

Canada

HOME PAGE: http://www.joshuagans.com

NBER

1050 Massachusetts Avenue
Cambridge, MA 02138
United States

Paper statistics

Downloads: 31
Abstract Views: 508