Regulating Online Content Moderation
35 Pages · Posted: 26 Aug 2017 · Last revised: 10 May 2018
Date Written: August 1, 2017
The Supreme Court held in 2017 that “the vast democratic forums of the Internet in general, and social media in particular,” are “the most important places…for the exchange of views.” Yet within these forums, speakers are subject to the closest and swiftest regime of censorship the world has ever known. This censorship comes not from the government, but from a small number of private corporations – Facebook, Twitter, Google – and a vast corps of human and algorithmic content moderators. The content moderators’ work is indispensable; without it, social media users would drown in spam and disturbing imagery. At the same time, content moderation practices correspond only loosely to First Amendment values. Recently leaked internal training manuals from Facebook reveal that its content moderation practices are rushed, ad hoc, and at times incoherent.
The time has come to consider legislation that would guarantee meaningful speech rights in online spaces. This Article evaluates a range of possible approaches to the problem. These include 1) an administrative monitoring-and-compliance regime to ensure that content moderation policies hew closely to First Amendment principles; 2) a “personal accountability” regime handing control over content moderation to users; and 3) a relatively simple requirement that companies disclose their moderation policies. Each carries serious pitfalls, but none is as dangerous as option 4: continuing to entrust online speech rights to the private sector.
Keywords: First Amendment, Social Media