Who Moderates the Moderators?: A Law and Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet
67 Pages · Posted: 24 Nov 2021 · Last revised: 1 Dec 2022
Date Written: November 15, 2021
The salient objection to Section 230 reform that would saddle online platforms with any form of indirect liability for user-generated content is not one of principle, but of practicality: are there effective reforms that would meaningfully reduce the incidence of unlawful or tortious online content without destroying (or excessively damaging) the vibrant Internet ecosystem by imposing punishing, open-ended legal liability? Properly analyzed, there are reasons to be optimistic about the possibility of effective reform.
In brief, this paper suggests that Section 230(c)(1)’s intermediary-liability protections for illegal or tortious conduct by third parties can and should be conditioned on taking reasonable steps to curb such conduct, subject to certain procedural constraints that will prevent a tide of unmeritorious litigation.
This basic principle is not without its strenuous and thoughtful detractors. A common set of objections to Section 230 reform has grown out of legitimate concerns that the economic and speech gains that have accompanied the rise of the Internet over the last three decades would be undermined or reversed if Section 230’s liability shield were weakened.
As the paper discusses, while many objections to Section 230 reform are well-founded, they also frequently suffer from overstatement or insufficiently supported suppositions about the magnitude of harm. At the same time, some of the expressed concerns are either simply misplaced or serve instead as arguments for broader civil-procedure reform (or decriminalization), rather than as defenses of the particularized immunity afforded by Section 230 itself.
This paper thus establishes a proper framework for evaluating online intermediary liability and assesses the implications of the common objections to Section 230 reform within that context. Our approach is rooted in the well-established law & economics analysis of liability rules and civil procedure, which we use to introduce a framework for understanding the tradeoffs faced by online platforms under differing legal standards imposing varying degrees of liability for the behavior and speech of third-party users.
Of central importance to the approach taken in this paper is the recognition that holding platform users directly responsible is often complicated by the fact that platforms may shield users from accountability. It also means acknowledging that, while direct deterrence is the normal strategy for enforcing legal norms, direct enforcement is sometimes impractical or ineffective. Alternative measures, including indirect liability, are justified when they lower the total costs of direct enforcement and residual misconduct.
This analysis is bolstered by a discussion of common law and statutory antecedents that allow us to understand how courts and legislatures have been able to develop appropriate liability regimes for the behavior of third parties in different, but analogous, contexts. Ultimately, and drawing on this analysis, we describe the contours of our recommended duty-of-care standard, along with a set of necessary procedural reforms that would help to ensure that we retain as much of the value of user-generated content as possible, while encouraging platforms to better police illicit and tortious content on their services.
Keywords: Section 230, online intermediary liability, collateral enforcement, indirect liability, online platforms, online intermediaries, Communications Decency Act
JEL Classification: K13, K20, K41, K42, L51, L86