GPT-3: Waterloo or Rubicon? Here be Dragons, Version 4.1

Working Paper, Version 2, 2020

38 Pages Posted: 12 Oct 2020 Last revised: 9 May 2022

Date Written: May 7, 2022

Abstract

GPT-3 is an AI engine that generates text in response to a prompt given to it by a human user. It does not understand the language it produces, at least not as philosophers understand such things. And yet its output is in many cases astonishingly like human language. How is this possible? Think of the mind as a high-dimensional space of signifieds, that is, meaning-bearing elements. Correlatively, text consists of one-dimensional strings of signifiers, that is, linguistic forms. GPT-3 builds a language model by examining the distances and ordering of signifiers in a large collection of text strings, computing over them so as to reverse-engineer the trajectories that texts take through the space of signifieds. Peter Gärdenfors’ semantic geometry provides a way of thinking about the dimensionality of mental space and the multiplicity of phenomena in the world, about how the mind mirrors the world. Yet artificial systems lack a sensorimotor system that has evolved over millions of years, and they therefore face inherent limits.
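The distributional idea gestured at here, that the statistics of how signifiers are ordered and spaced in text can stand in for meaning, can be sketched in a few lines of Python. This is a toy illustration only, not the paper's argument or GPT-3's actual training procedure; the corpus, window size, and similarity measure are all invented for the example:

```python
# Toy distributional semantics: represent each word by the counts of the
# words that appear near it, then compare words by the cosine of the
# angle between their count vectors. Words used in similar contexts
# ("cat", "dog") end up close together in this space.
from collections import defaultdict
import math

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

window = 2  # how many neighbors on each side count as "context"
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                cooc[w][words[j]] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vector(word):
    """A word's co-occurrence counts over the whole vocabulary."""
    return [cooc[word][v] for v in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# "cat" and "dog" occupy similar slots in these sentences, so their
# vectors are closer to each other than "cat" is to "on".
sim_cat_dog = cosine(vector("cat"), vector("dog"))
sim_cat_on = cosine(vector("cat"), vector("on"))
print(sim_cat_dog, sim_cat_on)
```

Modern language models replace these raw counts with dense learned embeddings and add attention over word order, but the underlying intuition, meaning recovered from the distribution of forms, is the same.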

Keywords: GPT-3, Transformers, NLP, Corpus Linguistics, AI, Artificial Intelligence, Computational Linguistics, Distributional Semantics

Suggested Citation

Benzon, William L., GPT-3: Waterloo or Rubicon? Here be Dragons, Version 4.1 (May 7, 2022). Working Paper, Version 2, 2020, Available at SSRN: https://ssrn.com/abstract=3667608
