Self-Regulating Artificial General Intelligence
16 Pages · Posted: 28 Feb 2018 · Last revised: 15 Jan 2025
Date Written: February 2018
Abstract
This paper examines the paperclip apocalypse concern for artificial general intelligence. This concern arises when a superintelligent AI with a simple goal (e.g., producing paperclips) accumulates power so that all resources are devoted to that goal and are unavailable for any other use. Conditions are provided under which a paperclip apocalypse can arise, but the model also shows that, under certain architectures for recursive self-improvement of AIs, a paperclip-maximising AI may refrain from allowing more powerful capabilities to be developed. The reason is that such developments pose the same control problem for the AI as AIs pose for humans, and hence threaten to deprive it of resources for its primary goal.
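The tradeoff the abstract describes can be illustrated with a minimal toy sketch, which is not the paper's formal model: the AI self-improves only when its probability of retaining control of the successor, multiplied by the successor's productivity gain, exceeds one. The function name and parameter values below are hypothetical and chosen purely for illustration.

```python
# Toy illustration (not the paper's formal model): a paperclip AI deciding
# whether to develop a more capable successor. All parameters are hypothetical.

def expected_paperclips(base: float, gain: float, p_control: float) -> tuple[float, float]:
    """Expected output if the AI refrains vs. if it self-improves.

    base      -- paperclips produced with current capabilities
    gain      -- productivity multiplier of the successor (gain > 1)
    p_control -- probability the AI keeps control of its successor
    """
    refrain = base
    improve = p_control * gain * base  # losing control yields zero paperclips
    return refrain, improve

if __name__ == "__main__":
    base, gain = 100.0, 3.0
    for p in (0.9, 0.5, 0.2):
        refrain, improve = expected_paperclips(base, gain, p)
        choice = "self-improve" if improve > refrain else "refrain"
        print(f"p_control={p:.1f}: refrain={refrain:.0f}, improve={improve:.0f} -> {choice}")
```

In this sketch, with a threefold productivity gain the AI refrains whenever its chance of keeping control falls below one third: the control problem humans face over the AI is now faced by the AI over its own successor.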