Supervising Automated Journalists in the Newsroom: Liability for Algorithmically Produced News Stories
Forthcoming in Revue du Droit des Technologies de l'Information (Summer 2016)
21 Pages · Posted: 23 Apr 2016
Date Written: April 13, 2016
Algorithmic processes that convert data into narrative news texts allow newsrooms to publish stories with little to no human intervention. This trend creates many opportunities, but also raises significant legal questions. Beyond the financial benefits, further refinement could make these smart algorithms capable of writing less standardised, perhaps even opinion, pieces. The responsible human merely needs to define clear instructions about what the algorithm should discuss in the article and in what manner. But how does this square with the traditional rules of publishing and editorial control?
This working paper analyses the question of authorship of algorithmic output and the liability issues that could arise when that output includes inaccurate, harmful or even illegal content. Authorship and liability are assessed against the existing relevant Belgian legislation and case law on copyright and press liability. Furthermore, the paper addresses the question of how publishers should prevent the creation of inaccurate content by the algorithms they use. Parallels are drawn with the judgment of the European Court of Human Rights in Delfi v. Estonia. The paper assesses whether an obligation on a responsible human to monitor all output of the automated journalist is feasible, or whether it would rather defeat the purpose of having such smart algorithms at his/her disposal.
Keywords: Automated Journalism, Legal liability, Authorship of Automated Journalism, Ethical responsibilities, Belgian constitutional cascade system