The Rise and Fall of 'Social Bot' Research
19 Pages · Posted: 29 Mar 2021 · Last revised: 9 Apr 2021
Date Written: March 28, 2021
The idea that social media platforms like Twitter are inhabited by vast numbers of “social bots” has become widely accepted in recent years. “Social bots” are assumed to be automated social media accounts operated by malicious actors with the goal of manipulating public opinion. They are credited with the ability to produce content autonomously and to interact with human users. “Social bot” activity has been reported in many different political contexts, including Donald Trump’s election and the Brexit referendum in 2016. However, the relevant publications either use crude and questionable heuristics to discriminate between supposed “social bots” and humans or—in the vast majority of cases—rely entirely on the output of automatic bot detection tools, most commonly Botometer. We point out fundamental theoretical flaws in these approaches. We also closely and systematically inspected hundreds of accounts that had been counted or even presented as “social bots” in peer-reviewed studies. We were unable to find a single “social bot”. Instead, we found mostly accounts undoubtedly operated by human users, the vast majority of them using Twitter in an inconspicuous and unremarkable fashion without the slightest trace of automation. We conclude that studies claiming to investigate the prevalence or influence of “social bots” have, in reality, merely investigated false positives and artifacts of the flawed detection methods employed.
Keywords: Social bots, Bot detection, Botometer, False positives