Leveraging CDA 230 to Counter Online Extremism

George Washington University Program on Extremism Legal Perspectives on Tech Series, 2019

12 Pages · Posted: 13 Mar 2020

Annemarie Bridy

Google; Yale University - Yale Information Society Project; Stanford Law School Center for Internet and Society

Date Written: September 1, 2019

Abstract

Current events make it plain that social media platforms have become vectors for the global spread of extremism, including the most virulent forms of racial and religious hatred. As offline violence with demonstrable links to online extremism escalates, regulators have made it clear that they expect the world’s largest social media platforms to more actively police harmful online speech, including that of terrorist organizations and organized hate groups.

Among US tech companies, Facebook has been the most receptive to the idea of increased government regulation. In an unusual op-ed in The Washington Post, Mark Zuckerberg explicitly asked Congress to tell him what to do about hate speech and terrorist propaganda on his services. Such regulation would be a significant departure from past US policy concerning online speech. Since the early days of the internet, US-based online services have benefited from a policy that gives them broad discretion to set and enforce their own guidelines defining acceptable (and unacceptable) user speech. That policy, codified in section 230 of the Communications Decency Act (CDA), was adopted in large part to help foster the internet’s growth as a diverse forum for civic (and civil) discourse.

In recent years, section 230 has come under fire from all directions. Some critics believe its grant of broad discretion allows social media platforms to be too permissive about their users' speech. Others believe it lets them be too restrictive. At a moment of great uncertainty for the future of section 230, this article explains the positive role it can play in platforms' efforts to take greater responsibility for regulating hate speech and extremist content. I argue that the scope of immunity in section 230 need not be narrowed, but that the statute could be productively amended to better safeguard free speech as the world's largest social media platforms turn to automated tools to comply with new, speech-restrictive European regulations.

Keywords: Section 230, Online Speech, Online Extremism, Hate Speech, Social Media, Intermediary Liability, NetzDG, Terrorist Content Regulation

Suggested Citation

Bridy, Annemarie, Leveraging CDA 230 to Counter Online Extremism (September 1, 2019). George Washington University Program on Extremism Legal Perspectives on Tech Series, 2019, Available at SSRN: https://ssrn.com/abstract=3538919

Annemarie Bridy (Contact Author)

Google ( email )

25 Massachusetts Ave. NW #900
Washington, DC 20001
United States

Yale University - Yale Information Society Project

127 Wall Street
New Haven, CT 06511
United States

HOME PAGE: http://law.yale.edu/annemarie-bridy

Stanford Law School Center for Internet and Society

Palo Alto, CA
United States

HOME PAGE: http://cyberlaw.stanford.edu/about/people/annemarie-bridy
