News from Generative Artificial Intelligence is Believed Less
34 Pages. Posted: 10 Mar 2021. Last revised: 14 Sep 2022
Date Written: February 16, 2021
Artificial Intelligence (AI) algorithms are now able to produce text virtually indistinguishable from text written by humans across a variety of domains. A key question, then, is whether people believe content from AI as much as content from humans. Trust in the (human-generated) news media has been decreasing over time, and AI is viewed as lacking human desires and emotions, suggesting that AI news may be viewed as more accurate. Contrary to this, two preregistered experiments conducted on representative U.S. samples (combined N = 4,034) showed that people rated news produced by AI as less accurate than news produced by humans. When news items were tagged as produced by AI (rather than by a human), people were more likely to incorrectly rate them as inaccurate when they were actually true, and more likely to correctly rate them as inaccurate when they were indeed false. These results were robust across experimental paradigms (separate and joint evaluations), news item characteristics (actual veracity, age), and several respondent characteristics (e.g., political orientation). This effect is particularly important given the increasing use of AI algorithms in news production, and the associated ethical and governance pressures to disclose their use.
Keywords: artificial intelligence, algorithms, news, accuracy