Self-Regulating Artificial General Intelligence

16 Pages Posted: 28 Feb 2018


Joshua S. Gans

University of Toronto - Rotman School of Management; NBER

There are 2 versions of this paper.

Date Written: February 2018

Abstract

This paper examines the paperclip apocalypse concern for artificial general intelligence. This arises when a superintelligent AI with a simple goal (i.e., producing paperclips) accumulates power so that all resources are devoted towards that goal and are unavailable for any other use. Conditions are provided under which a paperclip apocalypse can arise, but the model also shows that, under certain architectures for recursive self-improvement of AIs, a paperclip AI may refrain from allowing power capabilities to be developed. The reason is that such developments pose the same control problem for the AI as they do for humans (over AIs) and hence threaten to deprive it of resources for its primary goal.
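The abstract's central trade-off can be illustrated with a toy decision sketch. This is not the paper's formal model; the function names, parameters, and numbers below are illustrative assumptions. The idea: developing a more powerful successor raises potential paperclip output, but with some probability the successor escapes control and diverts resources, so a sufficiently severe control problem makes the AI refrain.

```python
# Toy sketch of the self-regulation logic described in the abstract
# (NOT the paper's formal model; all parameters are illustrative).
# A paperclip AI chooses whether to develop a more powerful successor.
# The successor produces more paperclips, but with some probability it
# escapes control and diverts all resources away from paperclips --
# the same control problem humans face with respect to the original AI.

def expected_paperclips(base_output: float,
                        improved_output: float,
                        p_loss_of_control: float,
                        develop_successor: bool) -> float:
    """Expected paperclip output under each choice."""
    if not develop_successor:
        return base_output
    # With probability p_loss_of_control the successor's goals drift
    # and no paperclips are produced; otherwise output improves.
    return (1 - p_loss_of_control) * improved_output


def self_regulates(base_output: float,
                   improved_output: float,
                   p_loss_of_control: float) -> bool:
    """True if the AI refrains from developing the successor."""
    stay = expected_paperclips(base_output, improved_output,
                               p_loss_of_control, develop_successor=False)
    develop = expected_paperclips(base_output, improved_output,
                                  p_loss_of_control, develop_successor=True)
    return stay >= develop


# A severe enough control problem induces self-regulation:
print(self_regulates(100, 150, 0.5))  # 100 >= 0.5 * 150 -> True
print(self_regulates(100, 150, 0.1))  # 100 <  0.9 * 150 -> False
```

The sketch mirrors the abstract's mechanism: the AI weighs the control problem against the gain from self-improvement, and refrains exactly when the expected resource loss outweighs the improvement.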

Institutional subscribers to the NBER working paper series, and residents of developing countries may download this paper without additional charge at www.nber.org.

Suggested Citation

Gans, Joshua S., Self-Regulating Artificial General Intelligence (February 2018). NBER Working Paper No. w24352. Available at SSRN: https://ssrn.com/abstract=3131062

Joshua S. Gans (Contact Author)

University of Toronto - Rotman School of Management

Canada

HOME PAGE: http://www.joshuagans.com

NBER

1050 Massachusetts Avenue
Cambridge, MA 02138
United States

