Platform Governance with Algorithm-based Content Moderation: An Empirical Study on Reddit
39 Pages. Posted: 10 Mar 2021. Last revised: 25 Mar 2024.
Date Written: January 17, 2021
Abstract
With increasing volumes of participation in social media and online communities, content moderation has become an integral component of platform governance. Volunteer (human) moderators have thus far been the essential workforce for content moderation. Because volunteer-based moderation struggles to remain scalable, consistent, and sustainable, many online platforms have recently adopted algorithm-based content moderation tools (bots). When bots are introduced into platform governance, it is unclear how volunteer moderators react in terms of their community-policing and community-nurturing efforts. To understand the impacts of these increasingly popular bot moderators, we conduct an empirical study with data collected from 156 communities (subreddits) on Reddit. Based on a series of econometric analyses, we find that bots augment volunteer moderators by stimulating them to moderate a larger quantity of posts, and these effects are more pronounced in larger communities. Specifically, volunteer moderators perform 20.9% more community policing, particularly over subjective rules. In larger communities, volunteers also exert greater effort in offering explanations and suggestions after their community adopts bots. Notably, these increases in activity are primarily driven by the greater need for nurturing efforts to accompany the growth in subjective policing. Introducing bots to content moderation also improves the retention of volunteer moderators. Overall, we show that introducing algorithm-based content moderation into platform governance is beneficial for sustaining digital communities.
Keywords: Platform governance, human-machine interaction, content moderation, human moderators, bot moderators