ChatGPT Intimates a Tantalizing Future; Its core LLM is Organized on Multiple Levels; and it has Broken the Idea of Thinking, Version 3
25 Pages. Posted: 25 Jan 2023. Last revised: 13 Apr 2023.
Date Written: February 6, 2023
I make three arguments. A philosophical argument: (1) The behavior of ChatGPT is so sophisticated that the ordinary concept of thinking is no longer useful in distinguishing between human behavior and ChatGPT’s behavior; we lack an explicit understanding of what either humans or ChatGPT are doing. Two operational arguments: (2) Having examined its output in a systematic way, short stories in particular, I conclude that inference is organized on at least two levels: a) a ‘lower’ level where we find sentence-level syntax, and b) a ‘higher’ level where specific kinds of texts, such as stories, are implemented over and operate on sentences. This is roughly analogous to the way that high-level programming languages are implemented in assembly code. (3) Consequently, aspects of full symbolic computation are latent in LLMs. An appendix contains descriptive tables showing how four stories are organized on multiple levels.
Keywords: ChatGPT, GPT, deep learning, large language models, artificial intelligence, language, story grammar, narratology, chatbot
Suggested Citation: