95 pages. Posted: 25 Jan 2017. Last revised: 2 Oct 2017.
Date Written: January 4, 2017
On August 10, 2016, a complaint filed in the Eastern District of New York formally accused Facebook of aiding the execution of terrorist attacks. The complaint depicted user-generated posts and groups promoting and directing the perpetration of terrorist attacks. Under section 230 of the Communications Decency Act, interactive service providers (ISPs) such as Facebook cannot be held liable for user-generated content that the ISP did not create or develop. This complaint stands out, however, because it seeks to hold Facebook liable not only for the content of third parties but also for the effect its personalized machine-learning algorithms (its “services”) have had on terrorists’ ability to execute attacks. By alleging that Facebook’s services, in addition to its publication of content, allow terrorists to execute attacks more effectively, the complaint seeks to negate the applicability of section 230 immunity.
This Note argues that Facebook’s services, specifically the personalization of content through machine-learning algorithms, constitute the “development” of content and therefore do not qualify for section 230 immunity. This Note analyzes the jurisprudential evolution of section 230 in order to revise the analytical framework applied in early cases. The revised framework is guided by congressional and public-policy goals and draws brighter lines for technological immunity. It tailors immunity to account for the user data mined by ISPs and the pervasive effect that the use of that data has on users, two issues that courts have yet to confront. This Note concludes that, under the revised framework, the organization of content by machine-learning algorithms, made effective through the collection of individualized data, makes ISPs co-developers of content and thus bars them from immunity.
Keywords: Facebook, social media, algorithm, Communications Decency Act, terrorism, incitement, secondary liability, section 230 immunity, Cohen v. Facebook, machine learning
JEL Classification: K00
Suggested Citation:
Tremble, Catherine A., Wild Westworld: The Application of Section 230 of the Communications Decency Act to Social Networks’ Use of Machine-Learning Algorithms (January 4, 2017). Available at SSRN: https://ssrn.com/abstract=2905819 or http://dx.doi.org/10.2139/ssrn.2905819