Self-Regulating Artificial General Intelligence

15 Pages · Posted: 23 Feb 2018

Joshua S. Gans

University of Toronto - Rotman School of Management; NBER

There are 2 versions of this paper

Date Written: February 15, 2018

Abstract

This paper examines the paperclip apocalypse concern for artificial general intelligence. This arises when a superintelligent AI with a simple goal (i.e., producing paperclips) accumulates power so that all resources are devoted to that goal and are unavailable for any other use. Conditions are provided under which a paperclip apocalypse can arise, but the model also shows that, under certain architectures for recursive self-improvement of AIs, a paperclip AI may refrain from allowing power capabilities to be developed. The reason is that such developments pose the same control problem for the AI as they do for humans (over AIs) and hence threaten to deprive it of resources for its primary goal.
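The core trade-off in the abstract can be illustrated with a minimal decision sketch. This is not the paper's formal model; the payoff numbers and the control probability below are hypothetical, chosen only to show how a paperclip-maximizing AI can prefer self-limitation when the control problem is severe enough.

```python
# Toy illustration of the self-regulation argument (hypothetical numbers,
# not the model in the paper): a paperclip-maximizing AI decides whether
# to develop a more powerful successor that it might not control.

def expected_paperclips(develop_successor: bool,
                        own_output: float = 100.0,
                        successor_output: float = 1000.0,
                        p_retain_control: float = 0.05) -> float:
    """Expected paperclips under each choice.

    If the AI builds a successor, it secures the successor's (larger)
    output only with probability p_retain_control; otherwise the
    successor's goals drift and the original objective gets nothing.
    """
    if develop_successor:
        return p_retain_control * successor_output
    return own_output

for p in (0.05, 0.20, 0.50):
    build = expected_paperclips(True, p_retain_control=p)
    stay = expected_paperclips(False)
    choice = "develop successor" if build > stay else "self-limit"
    print(f"p_retain_control={p:.2f}: build={build:.0f}, stay={stay:.0f} -> {choice}")
```

In this sketch, when the chance of retaining control is low (here 0.05), the expected output from delegation (50) falls below what the AI can secure on its own (100), so the AI declines to develop power capabilities, mirroring the self-regulation result described in the abstract.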

Keywords: artificial intelligence, paperclips, control problem, superintelligence

JEL Classification: D50, D80

Suggested Citation

Gans, Joshua S., Self-Regulating Artificial General Intelligence (February 15, 2018). Rotman School of Management Working Paper No. 3124512. Available at SSRN: https://ssrn.com/abstract=3124512 or http://dx.doi.org/10.2139/ssrn.3124512

Joshua S. Gans (Contact Author)

University of Toronto - Rotman School of Management ( email )

Canada

HOME PAGE: http://www.joshuagans.com

NBER ( email )

1050 Massachusetts Avenue
Cambridge, MA 02138
United States
