Algorithms Patrolling Content: Where’s the Harm?

24 Pages Posted: 26 Mar 2021

Date Written: February 22, 2021


This paper reveals ways in which algorithms on the Facebook platform suppress content distribution without specifically targeting it for removal, and examines the consequent stifling of users’ speech. At its heart is an examination of the colloquial concept of a ‘shadow ban’: a term referring to the specific scenario in which users’ content is hidden or deprioritised without their knowledge. The paper shows how the Facebook shadow ban works by blocking dissemination in News Feed, which is Facebook’s recommender system for curating content for users and also the name of the algorithm that encodes the process. The decision-making criteria are based on ‘behaviour’, a term that refers to activity of the Page that is identifiable through patterns in the data. It is a technique rooted in computer security, and it raises questions about the balance between security and freedom of expression.
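To make the mechanism concrete, the following is a purely hypothetical sketch, not Facebook’s actual code or criteria: it illustrates how behaviour-based signals of the kind borrowed from computer security might demote a Page’s posts in a feed-ranking pipeline rather than remove them. The signal names and thresholds are invented for illustration.

```python
# Hypothetical sketch (not Facebook's real system): behaviour-based
# criteria demote content in a ranking pipeline instead of removing it.

def behaviour_score(posts_per_hour: float, repeated_link_share: float) -> float:
    """Toy anomaly score built from activity patterns; thresholds are invented."""
    score = 0.0
    if posts_per_hour > 5:          # unusually high posting frequency
        score += 0.5
    if repeated_link_share > 0.8:   # mostly the same link, a spam-like pattern
        score += 0.5
    return score

def distribution_weight(score: float) -> float:
    """The post stays up, but its ranking weight shrinks:
    suppression without removal, and without notifying the user."""
    return max(0.0, 1.0 - score)

print(distribution_weight(behaviour_score(8.0, 0.9)))  # 0.0 - fully demoted
print(distribution_weight(behaviour_score(1.0, 0.1)))  # 1.0 - unaffected
```

The point of the sketch is that nothing is deleted: the content remains visible on the Page itself, while its distribution weight silently approaches zero.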

The paper is situated in the field of research that addresses the responsibility and accountability of large online platforms with regard to content moderation. It examines the impact of the Facebook shadow ban through the lens of the user. Users, whether acting as speakers or as recipients of information, have positive rights that must be protected; they should not be treated as passive victims. The user experience was studied over the period of a year, from November 2019 to November 2020, across 20 Facebook Pages from the UK. Data provided to the Pages via Facebook Insights was analysed to produce a comparative metric, and the paper considers how the shadow ban could be assessed under human rights standards.
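The paper does not publish its exact formula in this abstract, so the following is a plausible sketch of one comparative metric that could be derived from Facebook Insights exports: a Page’s organic reach normalised by follower count, compared between a baseline period and an observed period. All numbers shown are illustrative, not data from the study.

```python
# Hypothetical sketch of a comparative metric from Insights-style figures;
# the study's actual metric may differ.

def reach_ratio(organic_reach: int, followers: int) -> float:
    """Average post reach as a fraction of the Page's follower base."""
    if followers == 0:
        raise ValueError("page has no followers")
    return organic_reach / followers

def comparative_metric(baseline: float, observed: float) -> float:
    """Relative change in reach ratio between two periods; a strongly
    negative value would be consistent with suppressed distribution."""
    return (observed - baseline) / baseline

# Illustrative numbers only (not data from the study):
before = reach_ratio(organic_reach=4200, followers=10000)  # 0.42
after = reach_ratio(organic_reach=600, followers=10000)    # 0.06
print(round(comparative_metric(before, after), 2))         # -0.86
```

A metric of this comparative form lets very differently sized Pages be assessed on a common scale, which is what a cross-Page study of this kind would need.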

The paper concludes with a recommendation for quality controls on Facebook’s internal processes, potentially including a form of triage to identify genuine, lawful content that has been caught up in the security net. More broadly, an improved understanding of the automated processes and algorithms used in content moderation should be developed. This is a vital step towards safeguarding online platforms as a forum for public discourse.

Keywords: Online platforms, content moderation, social media, platform accountability, intermediary liability, Facebook, big tech, algorithms, freedom of expression, human rights law, internet law, digital services act, online harms, transparency, free speech.

Suggested Citation

Horten, Monica, Algorithms Patrolling Content: Where’s the Harm? (February 22, 2021). Available at SSRN.
