Content Moderation at the Infrastructure Layer: Evidence from Parler
40 Pages · Posted: 10 Oct 2022 · Last revised: 27 Nov 2023
Date Written: September 29, 2022
Moderation of problematic individuals and communities, particularly those rife with misinformation, has become a significant policy challenge. Social media platforms wield considerable editorial control over their users' content but do not always have the incentives or technological capabilities to moderate harmful content. These platforms, however, rely on third-party technology infrastructure providers, who are increasingly intervening in content moderation by withdrawing support from platforms that fail to self-regulate. Despite the rise of such initiatives, little is known about their spillover effects beyond the focal platform. In January 2021, Parler, a social media platform, was taken offline by its web and app hosting services for its purported role in fomenting the Capitol riots. Using detailed user-level mobile usage data, we demonstrate that following the intervention, a segment of Parler users migrated to Telegram, an alternative "free-speech" platform. Many other fringe platforms saw momentary surges that were not sustained over time, and a large fraction of Parler users did not migrate to any alternative platform. The migration was driven predominantly by highly engaged Parler users with high partisan media consumption prior to January 2021. We also document an unintended consequence of the Parler deplatforming: when Parler users moved to Telegram, their exposure to misinformation and partisan content increased. Our results also serve as the first long-run evidence of cross-category platform substitution, which can guide competition policy for social media applications.
Keywords: Deplatforming, Content Moderation, Echo Chamber, Polarization, Misinformation, Fake News, Censorship, Radicalization, Free Speech, Parler
JEL Classification: L86, M37, M38