Algorithms Patrolling Content: Where’s the Harm?
22 Pages · Posted: 26 Mar 2021 · Last revised: 18 Jul 2022
Date Written: July 17, 2022
At the heart of this paper is an examination of the colloquial concept of a ‘shadow ban’. The paper reveals ways in which algorithms on the Facebook platform have the effect of suppressing content distribution without specifically targeting it for removal, and examines the consequent stifling of users’ speech. It shows how the Facebook shadow ban is implemented by blocking the dissemination of content in News Feed. The decision-making criteria are based on ‘behaviour’, a term that refers to activity of the Page identifiable through patterns in the data. This technique is rooted in computer security, and it raises questions about the balance between security and freedom of expression.
The paper is situated in the field of research that addresses the responsibility and accountability of online platforms with regard to content moderation. Working through the lens of the user, it studies the experience of the shadow ban on 20 UK-based Facebook Pages over the period from November 2019 to January 2021. The potential harm was evaluated using human rights standards and a comparative metric derived from Facebook Insights data.
This revision of the paper connects the empirical research to recent developments in law and policy. It examines the UK’s Online Safety Bill and the EU’s Digital Services Act, both of which acknowledge shadow bans as a means of restricting content. The paper concludes that shadow bans constitute an interference with freedom of expression and outlines stronger safeguards for users as a vital step towards protecting public discourse.
Keywords: online platforms, content moderation, shadow bans, social media, platform accountability, intermediary liability, Facebook, big tech, algorithms, freedom of expression, human rights law, internet law, Online Safety Bill, Digital Services Act, online harms, transparency, free speech