What Would It Mean for a Machine to Have a Self?
37 Pages
Posted: 15 Sep 2022
Date Written: September 1, 2022
What would it mean for autonomous AI agents to have a ‘self’? One proposal for a minimal notion of self is a representation of one’s body spatio-temporally located in the world, with a tag of that representation as the agent taking actions in the world. This turns self-representation into a constructive inference process of self-orienting, and raises a challenging computational problem that any agent must solve continually. Here we construct a series of novel ‘self-finding’ tasks modeled on simple video games—in which players must identify themselves when there are multiple self-candidates—and show through quantitative behavioral testing that humans are near optimal at self-orienting. In contrast, well-known Deep Reinforcement Learning algorithms, which excel at learning much more complex video games, are far from optimal. We suggest that self-orienting allows humans to navigate new settings, and that this is a crucial target for engineers wishing to develop flexible agents.
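The abstract's 'self-finding' setup can be illustrated with a minimal sketch (a hypothetical illustration, not the authors' tasks or code): several candidate sprites appear on a 1-D track, exactly one of which the agent controls, and the agent must infer which candidate is itself by issuing probe actions and keeping only candidates whose movement matches every command — the constructive inference the abstract calls self-orienting.

```python
import random

def make_task(n_candidates=4, length=10, seed=0):
    # Place candidates at distinct positions; secretly pick which one is "me".
    rng = random.Random(seed)
    positions = rng.sample(range(length), n_candidates)
    self_idx = rng.randrange(n_candidates)  # hidden ground truth
    return positions, self_idx, rng

def step(positions, self_idx, rng, action):
    # The agent's avatar obeys the action; distractors drift randomly.
    return [p + (action if i == self_idx else rng.choice([-1, 0, 1]))
            for i, p in enumerate(positions)]

def self_orient(positions, self_idx, rng):
    # Issue probe moves and eliminate candidates whose displacement
    # ever disagrees with the commanded action.
    consistent = set(range(len(positions)))
    while len(consistent) > 1:
        action = rng.choice([-1, 1])
        new = step(positions, self_idx, rng, action)
        consistent = {i for i in consistent
                      if new[i] - positions[i] == action}
        positions = new
    return consistent.pop()

positions, self_idx, rng = make_task()
assert self_orient(positions, self_idx, rng) == self_idx
```

Because the true avatar matches every probe while each distractor matches only by chance, the consistent set shrinks to the self almost surely after a few probes — a toy version of the inference problem the paper argues humans solve near-optimally.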