Silencing Bad Bots: Global, Legal and Political Questions for Mean Machine Communication

23 Comm. L. & Pol'y 159-195 (2018)

Posted: 24 Mar 2018

Meg Leta Jones

Georgetown University - Communication, Culture, and Technology

Date Written: March 20, 2017

Abstract

As digital automation expands across social contexts, how legal systems should respond when algorithms produce lies and hate is a pressing policy problem. Search results, autofill suggestions, and intelligent personal assistants generate seemingly objective information for users in order to be helpful, efficient, or fun, but, as social technologies, they can also produce prejudicial and false content. Chatbots and trending lists have made headlines for turning quickly from sweet to spiteful and from political to inaccurate. As humans increasingly engage with and rely on machine communication, the legality of algorithmically created information that harms the reputation or dignity of an individual, entity, or group is a policy question posed and answered differently around the world. This article compares various defamation and hate speech laws through the lens of algorithmic content production – mean machine communication – and presents a set of outstanding issues that will require international and interdisciplinary attention.

Suggested Citation

Jones, Meg, Silencing Bad Bots: Global, Legal and Political Questions for Mean Machine Communication (March 20, 2017). 23 Comm. L. & Pol'y 159-195 (2018). Available at SSRN: https://ssrn.com/abstract=3144790

Meg Jones (Contact Author)

Georgetown University - Communication, Culture, and Technology

3520 Prospect St NW
Suite 311
Washington, DC 20057
United States
