Trade-offs in Automating Platform Regulatory Compliance By Algorithm: Evidence from the COVID-19 Pandemic
33 Pages · Posted: 20 May 2020 · Last revised: 29 Oct 2021
Date Written: October 29, 2021
Abstract
In a static environment, algorithms can help platforms achieve regulatory compliance more quickly and easily. In a dynamic context, however, the rigidity of complying with regulations through pre-specified algorithmic input parameters may pose challenges. We draw on the literature on the trade-offs between algorithmic and human decision-making to study the effects of algorithmic regulation of ad content in times of rapid change. To comply with local US laws, digital ad venues need to identify sensitive ads likely to be subject to more restrictive policies and practices. However, in periods of rapid change, when there is no consensus about which ads are sensitive and should be subject to previously drafted policies, using algorithms to identify sensitive content can be problematic. We collect data on European and American ads published in the Facebook Ad Library. We show that algorithmic determination of what constitutes an issue of national importance resulted in COVID-19-related ads being disqualified for lacking an appropriate disclaimer. Our results show that ads run by governmental organizations to inform the public about COVID-19 issues are more likely to be banned by Facebook's algorithm than similar ads run by non-governmental organizations. We suggest that algorithmic inflexibility in categorization during unpredictable shifts in the environment worsens the problems large digital platforms face in achieving regulatory compliance through algorithms.
Keywords: Algorithmic Decision-Making, Ad Ban, COVID-19, Human Intervention, IS and Crisis
JEL Classification: M3, K2