The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Stories Increases Perceived Accuracy of Stories Without Warnings
45 pages. Posted: 14 Sep 2017. Last revised: 10 Dec 2017.
Date Written: December 8, 2017
What can be done to combat political misinformation? One widely employed intervention involves attaching warnings to news stories that have been disputed by third-party fact-checkers. Prior work shows that the impact of such warnings may be undermined by politically motivated reasoning. We raise another possible negative consequence: an “implied truth” effect, whereby false stories that fail to get tagged are considered validated and thus are seen as more accurate. Such an effect is particularly important given that it is much easier to produce misinformation than to debunk it. Across five experiments (N = 5,271), we find that while warnings lead to a modest reduction in the perceived accuracy of fake news relative to a control condition, we also observe the hypothesized implied truth effect: the presence of warnings caused untagged stories to be seen as more accurate than in the control. Furthermore, the implied truth effect was larger (a) for fake headlines that were more plausible at baseline, and (b) among subgroups who were more likely to believe fake news at baseline (Trump supporters and young adults). The implied truth effect presents a major challenge to the policy of using warning tags to fight misinformation.
Note: A previous version of this working paper was titled “Assessing the effect of ‘disputed’ warnings and source salience on perceptions of fake news accuracy”. To allow for a more detailed treatment of both issues, the source salience aspect of the previous manuscript (former Study 2) has been removed from this updated version and will be re-posted as part of a separate paper investigating source effects.
Keywords: fake news, news media, social media, fact-checking, misinformation, source credibility