The Case against Social Media Content Regulation: Reaffirming Congress’ Duty to Protect Online Bias, 'Harmful Content,' and Dissident Speech from the Administrative State
CEI, Issue Analysis 2020 No. 4
34 Pages Posted: 21 Jul 2020
Date Written: June 28, 2020
Abstract
As defenders of free speech have long noted, popular opinions need no protection; it is the commitment to protecting dissident expression that marks an open society. At the same time, no one has the right to force others to agree with one's ideas, much less to transmit them.
However, the flouting of these principles is now commonplace across the political spectrum. Government regulation of media content has gained currency among politicians and pundits of both left and right. In March 2019, for example, President Trump issued an executive order directing colleges that receive federal research or education grants to promote free inquiry. In May 2020 he issued another addressing alleged censorship and bias by purportedly monopolistic social media companies.
In this political environment, policy makers, pressure groups, and even some technology sector leaders — whose enterprises have benefited greatly from free expression — are pursuing the idea of online content and speech standards, along with other policies that, if compulsory, would seriously burden their emerging competitors.
The current social media debate centers on competing interventionist agendas. Conservatives want social media titans regulated to remain "neutral," while liberals tend to want them to eradicate harmful content and address other alleged societal ills. Meanwhile, some maintain that Internet service should be regulated as a public utility.
Blocking or compelling speech in reaction to governmental pressure would not only violate the Constitution’s First Amendment — it would require immense expansion of constitutionally dubious administrative agencies. These agencies would either enforce government-affirmed social media and service provider de-platforming — the denial to certain speakers of the means to communicate their ideas to the public — or coerce platforms into carrying any message by actively policing that practice.
When it comes to protecting free speech, the brouhaha over social media power and bias boils down to one thing: The Internet — and any future communications platforms — needs protection from both the bans on speech sought by the left and the forced conservative ride-along speech sought by the right.
In the social media debate, the problem is not that big tech's power is unchecked. Rather, it is that social media regulation, whether from the left or the right, would make it so. Like the banks, social media giants are not too big to fail, but close regulation could make them that way.
American values strongly favor a marketplace of ideas where debate and civil controversy can thrive. Therefore, the creation of new regulatory oversight bodies and filing requirements to exile politically disfavored opinions on the one hand, and efforts to force the inclusion of conservative content on the other, should both be rejected.
Much of the Internet’s spectacular growth can be attributed to the immunity from liability for user-generated content afforded to social media platforms — and other Internet-enabled services such as discussion boards, review and auction sites, and commentary sections — by Section 230 of the 1996 Communications Decency Act. Host take-down or retention of undesirable or controversial content by “interactive computer services,” in the Act’s words, can be contentious, biased, or mistaken. But Section 230 does not require neutrality in the treatment of user-generated content in exchange for immunity.
In fact, Section 230 explicitly protects non-neutral moderation, provided it is exercised in "good faith." Its broad liability protection accelerated a decades-long trend of courts narrowing liability for publishers, republishers, and distributors.
Changes have already been made to Section 230, such as the 2018 sex-trafficking exception, but deeper, riskier change is in the air today, advocated by both Republicans and Democrats. Some content removals may indeed occur in bad faith, and companies may violate their own terms of service, but addressing such cases individually would be a more fruitful approach. Section 230 notwithstanding, laws against misrepresentation and deceptive business practices already impose legal discipline on companies.
Regime-changing regulation of dominant tech firms — whether via imposing online sales taxes, privacy mandates, or speech codes — is likely not to discipline them, but to make them stronger and more impervious to displacement by emerging competitors.
The vast energy expended on accusing purveyors of information, either on mainstream or social media, of bias or of inadequate removal of harmful content should be redirected toward the development of tools that empower users to better customize the content they choose to access. Existing social media firms want rules they can live with — which translates into rules that future social networks cannot live with. Government cannot create new competitors, but it can prevent their emergence by imposing barriers to market entry.
At risk in the regulatory fervor, too, is the right to political, as opposed to commercial, anonymity online. Government has a duty to protect dissent, not regulate it, yet future dissident platforms would be a likely casualty of regulation.
The Section 230 special immunity must remain intact for others beyond today's slate of big tech players, lest Congress turn social media’s economic power into genuine coercive political power. Competing biases are preferable to pretended objectivity. Given that reality, Congress should acknowledge the inevitable presence of bias, protect competition in speech, and defend the conditions that would allow future platforms and protocols to emerge in service of the public.
The priority is not that Facebook or Google or any other platform should remain politically neutral, but that citizens remain free to choose alternatives that might emerge and grow with the same Section 230 exemptions from which the modern online giants have long benefited. Policy makers must avoid creating an environment in which Internet giants benefit from protective regulation that prevents the emergence of new competitors in the decentralized infrastructure of the marketplace of ideas.
Keywords: Social Media, Monopoly, Speech, Free Speech, Hate Speech, Harmful Content, Sec. 230, Communications Decency Act
JEL Classification: K2