Hate Speech on Social Media: Content Moderation in Context

48 Pages. Posted: 14 Sep 2020. Last revised: 11 Mar 2021.

Richard Ashby Wilson

University of Connecticut School of Law; Department of Anthropology, University of Connecticut

Molly K. Land

University of Connecticut School of Law

Date Written: September 10, 2020


For all practical purposes, the decision of social media companies to prohibit hate speech on their platforms means that the longstanding debate in the United States about whether to limit hate speech in the public square has been resolved in favor of greater regulation. Nonetheless, revisiting these debates provides several insights essential for developing more empirically based and narrowly tailored policies regarding online hate.

First, a central issue in the hate speech debate is the extent to which hate speech contributes to violence. Those in favor of more robust regulation claim a connection to violence, while others dismiss these arguments as too tenuous to support regulation. The data generated by social media, however, now allow researchers to begin testing empirically whether hate speech results in visible, measurable harms. These data can assist in developing evidence-based policies that address the most significant harms of hate speech while avoiding overbroad regulation inconsistent with international standards.

Second, reexamining the U.S. debate about hate speech also reveals the serious missteps of social media policies that prohibit hate speech without regard to context. The policies that social media companies have developed attempt to define hate speech solely with respect to the content of the message. As the early advocates of limits on hate speech made clear, the meaning, force, and consequences of speech acts are deeply contextual, and it is impossible to understand the harms of hate speech without reference to local political realities and the power asymmetries between social groups. Regulation that is abstracted from this context will inevitably be overbroad.

This Article revisits these hate speech debates and considers how they map onto the platform law of content moderation, where emerging evidence indicates a correlation between hate speech online, virulent nationalism, and violence against minorities and activists. It then concludes by developing specific recommendations to bring greater consideration of context into the policies and procedures of social media content moderation.

Keywords: hate speech, social media, content moderation, human rights, law and technology, cyberlaw, first amendment, constitutional law

Suggested Citation

Wilson, Richard Ashby and Land, Molly K., Hate Speech on Social Media: Content Moderation in Context (September 10, 2020). 52 Connecticut Law Review 1029 (2021), available at SSRN: https://ssrn.com/abstract=3690616

Richard Ashby Wilson (Contact Author)

University of Connecticut School of Law

65 Elizabeth Street
Hartford, CT 06105
United States

HOME PAGE: http://law.uconn.edu/person/richard-a-wilson/

Department of Anthropology, University of Connecticut

354 Mansfield Road
Storrs, CT 06269-1176
United States

HOME PAGE: http://anthropology.uconn.edu/person/richard-ashby-wilson/

Molly K. Land

University of Connecticut School of Law

65 Elizabeth Street
Hartford, CT 06105
United States
