Guarding the Guardians: Content Moderation by Online Intermediaries and the Rule of Law
10 Pages. Posted: 26 Feb 2020
Date Written: February 23, 2020
Online intermediaries have become a focal point of content moderation. They may enable or disable access by removing or blocking controversial content, or by terminating users’ accounts altogether. Consequently, governments, rightholders, and users around the world are pressing online intermediaries to hone their gatekeeping functions and censor content amounting to hate speech, incitement, or copyright infringement. The rising pressure on online platforms to block, remove, monitor, or filter illegitimate content is fostering the deployment of technological measures to identify potentially objectionable content, which might otherwise expose platforms to legal liability or provoke public outcry. As a result, online intermediaries are effectively performing three roles at once: they act like a legislature, defining what constitutes legitimate content on their platforms; like judges, determining the legitimacy of content in particular instances; and like administrative agencies, acting on these adjudications to block illegitimate content. Thus, in content moderation, public enforcement power and adjudication power converge in the hands of a small number of private mega-platforms.
Existing platform-liability regimes have demonstrated that platforms often err in distinguishing legitimate from illegitimate content, and holding them liable for users’ content may therefore effectively silence lawful speech. Yet pervasive power must be restrained in order to ensure civil liberties and the rule of law. Hence, we argue that even if online intermediaries are not liable for illegitimate content made available by their subscribers, they must still be held accountable for content moderation.
Traditional legal rights and processes, however, are ill-equipped to oversee the robust, opaque, and relatively effective algorithmic content moderation practiced by online intermediaries. We currently lack sufficient safeguards against both over-enforcement against protected speech and under-enforcement of illicit content. Allowing such unchecked power to escape traditional schemes of constitutional restraint is potentially game-changing for democracy, as it raises serious challenges to the rule of law and to notions of trust and accountability.
This chapter describes three ways in which content moderation by online intermediaries challenges the rule of law: it blurs the distinction between private interests and public responsibilities; it delegates the power to make social choices about content legitimacy to opaque algorithms; and it circumvents the constitutional safeguard of the separation of powers. The chapter further discusses the barriers to accountability in online content moderation by intermediaries: the dynamic nature of algorithmic content moderation based on machine learning; barriers arising from the incompleteness of data on the one hand and data floods on the other; and trade secrecy, which shields the algorithmic decision-making process from scrutiny. Finally, the chapter proposes a strategy for overcoming these barriers to the accountability of online intermediaries, namely ‘black box tinkering’: a reverse-engineering methodology that could be used by governmental agencies, as well as social activists, as a check on private content moderation. After describing the benefits of black box tinkering, the chapter explains what regulatory steps should be taken to promote the adoption of this oversight strategy.