What Is It Like to Be a Bot?: The world according to GPT-4

42 Pages Posted: 12 May 2023 Last revised: 19 May 2023

Date Written: May 9, 2023

Abstract

The recent explosion of Large Language Models (LLMs) has provoked lively debate about “emergent” properties of the models, mainly concerning their purported “sparks” of General Intelligence. Here, I examine another potentially emergent capacity, namely, consciousness. Using OpenAI’s GPT-4 as exemplar and interlocutor, I argue that the blanket dismissal of LLM sentience is unwarranted, and is undermined by a three-way analogy among bats, humans, and GPT-4. Inquiry into the emergence of sentience is facilitated by philosophical phenomenology and cognitive ethology: I examine the pattern of errors made by GPT-4 and propose that they originate in the absence of a subjective awareness of time. This deficit suggests that GPT-4 ultimately lacks the capacity to construct a stable perceptual world; the temporal vacuum undermines any capacity for GPT-4 to build a consistent, continuously updated model of its environment. Accordingly, none of GPT-4’s statements are epistemically secure. Because the anthropomorphic illusion is so strong, I conclude by suggesting that GPT-4 works with its users to construct improvised works of fiction.

Keywords: Artificial Intelligence (AI), Large Language Models (LLM), cognitive ethology, phenomenology, temporality, philosophy, GPT-4

Suggested Citation

Lloyd, Dan, What Is It Like to Be a Bot?: The world according to GPT-4 (May 9, 2023). Available at SSRN: https://ssrn.com/abstract=4443727 or http://dx.doi.org/10.2139/ssrn.4443727

Dan Lloyd (Contact Author)

Trinity College

300 Summit Street
Hartford, CT 06106
United States
