Fooled Twice – People Cannot Detect Deepfakes But Think They Can
Abstract
Hyper-realistic manipulation of audio-visual content, i.e., deepfakes, presents a new challenge for establishing the veracity of online content. Research on the human impact of deepfakes, addressing both behaviors in response to and cognitive processing of deepfakes, remains sparse. In a pre-registered behavioral experiment (N = 210), we show that (a) people cannot reliably detect deepfakes, and (b) neither raising awareness nor introducing financial incentives improves their detection accuracy. Zeroing in on the underlying cognitive processes, we find that (c) people are biased towards mistaking deepfakes for authentic videos (rather than vice versa) and (d) overestimate their own detection abilities. Together, these results suggest that people adopt a "seeing-is-believing" heuristic for deepfake detection while being overconfident in their (low) detection abilities. This combination renders people particularly susceptible to being influenced by inauthentic deepfake content.
Keywords: Deepfake detection, Experiment, Machine Behavior, Overconfidence