A journal collecting research into the economics of artificial intelligence, including machine learning, econometrics, and automation
Editor: Joshua S. Gans
Self-Regulating Artificial General Intelligence (2017)
Joshua S. Gans
arXiv: 1711.04309v2
Here we examine the paperclip apocalypse concern for artificial general intelligence (or AGI) whereby a superintelligent AI with a simple goal (i.e., producing paperclips) accumulates power so that all resources are devoted towards that simple goal and are unavailable for any other use. We provide conditions under which a paperclip apocalypse can arise, but also show that, under certain architectures for recursive self-improvement of AIs, a paperclip AI may refrain from allowing power capabilities to be developed. The reason is that such developments pose the same control problem for the AI as they do for humans (over AIs) and hence threaten to deprive it of resources for its primary goal.
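To make the intuition concrete, here is a minimal toy sketch in Python (not the paper's formal model): the AI weighs the expected gain from permitting a more capable successor against the chance that it loses control of that successor, in which case resources are diverted away from its goal. All payoffs and probabilities below are hypothetical numbers chosen for illustration.

```python
# Toy illustration of the self-regulation intuition (not the paper's model):
# an AGI deciding whether to permit development of a more powerful successor.
# Payoffs are in units of the AI's simple goal (paperclips); all numbers
# here are hypothetical.

def expected_payoff(develop: bool,
                    baseline: float = 100.0,
                    gain: float = 500.0,
                    p_retain_control: float = 0.15) -> float:
    """Expected paperclip output under each choice.

    If the AI develops a superior successor, it enjoys `gain` only when it
    retains control; otherwise the successor diverts all resources and the
    original goal gets nothing -- the same control problem humans face
    with respect to the AI itself.
    """
    if not develop:
        return baseline
    return p_retain_control * gain

if __name__ == "__main__":
    refrain = expected_payoff(develop=False)
    develop = expected_payoff(develop=True)
    print(f"refrain:           {refrain:.1f} expected paperclips")
    print(f"develop successor: {develop:.1f} expected paperclips")
    # With these hypothetical numbers the AI self-regulates:
    # 0.15 * 500 = 75 < 100, so it refrains from empowering a successor.
```

Under these assumptions the AI refrains whenever the control probability falls below baseline/gain (here 100/500 = 0.2), mirroring the abstract's point that an unsolved control problem can deter the AI from power acquisition just as it worries humans.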