Are We Tripping? The Mirage of AI Hallucinations

9 Pages · Posted: 13 Feb 2025 · Last revised: 13 Feb 2025

Date Written: February 06, 2025

Abstract

There is a deep disorder in the discourse of generative artificial intelligence (AI). When AI seems to make things up or distort reality (adding extra fingers to human hands, inventing nonexistent court cases, or generating surreal advertisements), we commonly describe these outputs as AI hallucinations. But the hallucination metaphor reinforces the misconception that AI is conscious; it implies that AI experiences reality and sometimes becomes delirious. We need a new way to talk about AI outputs when they don't match our expectations for realism or facticity. For this paper, we analyzed the implications of more than 80 alternative terms suggested by scholars, educators, and commentators. Ultimately, we chose a more fitting term: AI mirage. Just as a desert mirage is an artifact of physical conditions, an AI mirage is an artifact of how systems process training data and prompts. In both cases, a human can mistake the mirage for reality or see it for what it really is. We propose the general use of the term AI mirage in place of AI hallucination because it can help build AI literacies, prompting us to explore how AI generates outputs and how humans decide what those outputs mean.

Keywords: AI Literacies, Hallucination, LLM, Large Language Model, Mirage, Terminology, AI, Generative Artificial Intelligence

Suggested Citation

Mills, Anna and Angell, Nate, Are We Tripping? The Mirage of AI Hallucinations (February 06, 2025). Available at SSRN: https://ssrn.com/abstract=5127162 or http://dx.doi.org/10.2139/ssrn.5127162

Anna Mills

College of Marin

835 College Avenue
Kentfield, CA 94904
United States

Nate Angell (Contact Author)

Nudgital

Portland, OR
United States

HOME PAGE: http://nudgital.com


Paper statistics

Downloads: 629
Abstract Views: 8,685
Rank: 91,056