Are We Tripping? The Mirage of AI Hallucinations
9 Pages · Posted: 13 Feb 2025 · Last revised: 13 Feb 2025
Date Written: February 06, 2025
Abstract
There is a deep disorder in the discourse of generative artificial intelligence (AI). When AI seems to make things up or distort reality — adding extra fingers to human hands, inventing nonexistent court cases, or generating surreal advertisements — we commonly describe these outputs as AI hallucinations. But the metaphor of hallucination reinforces the misconception that AI is conscious; it implies that AI experiences reality and sometimes becomes delirious. We need a new way to talk about AI outputs when they don't match our expectations for realism or facticity. For this paper, we analyzed the implications of more than 80 alternative terms suggested by scholars, educators, and commentators. Ultimately, we chose a more fitting term: AI mirage. Just as a desert mirage is an artifact of physical conditions, an AI mirage is an artifact of how systems process training data and prompts. In both cases, a human can mistake a mirage for reality or see it for what it really is. We propose the general use of the term AI mirage in place of AI hallucination because it can help build AI literacies, prompting us to explore how AI generates outputs and how humans decide what those outputs mean.
Keywords: AI Literacies, Hallucination, LLM, Large Language Model, Mirage, Terminology, AI, Generative Artificial Intelligence