Against the Dehumanisation of Decision-Making – Algorithmic Decisions at the Crossroads of Intellectual Property, Data Protection, and Freedom of Information

9 (2018) JIPITEC 3 para 1

32 Pages Posted: 17 Jun 2018

Date Written: May 31, 2018

Abstract

Nowadays, algorithms can decide whether one can get a loan, is allowed to cross a border, or must go to prison. Artificial intelligence techniques (chiefly natural language processing and machine learning) enable private and public decision-makers to analyse big data in order to build profiles, which are then used to make decisions in an automated way.

This work presents ten arguments against algorithmic decision-making. These revolve around the concepts of ubiquitous discretionary interpretation, holistic intuition, algorithmic bias, the three black boxes, psychology of conformity, power of sanctions, civilising force of hypocrisy, pluralism, empathy, and technocracy.

The lack of transparency of the algorithmic decision-making process does not stem merely from the characteristics of the techniques used, which can make it impossible to access the rationale of the decision. It also depends on the abuse of, and overlap between, intellectual property rights (the “legal black box”). In the US, nearly half a million patented inventions concern algorithms; more than 67% of the algorithm-related patents were issued over the last ten years, and the trend is increasing.

To counter the increased monopolisation of algorithms by means of intellectual property rights (with trade secrets leading the way), this paper presents three legal routes that enable citizens to ‘open’ the algorithms.

First, copyright and patent exceptions, as well as trade secrets are discussed.

Second, the GDPR is critically assessed. In principle, data controllers are not allowed to use algorithms to take decisions that have legal effects on the data subject’s life or that similarly significantly affect them. Where they are allowed to do so, however, the data subject still has the right to obtain human intervention, to express their point of view, and to contest the decision. Additionally, the data controller shall provide meaningful information about the logic involved in the algorithmic decision.

Third, this paper critically analyses the first known case of a court using the access right under the freedom of information regime to grant an injunction to release the source code of the computer program that implements an algorithm.

Only an integrated approach – one which takes into account intellectual property, data protection, and freedom of information – may provide the citizen affected by an algorithmic decision with an effective remedy, as required by the Charter of Fundamental Rights of the EU and the European Convention on Human Rights.

Keywords: Algorithmic decision-making, Data Protection Act 2018, GDPR, algorithmic accountability, algorithmic bias, algorithmic governance, algorithmic transparency, freedom of information request, patent infringement defences, right not to be subject to an algorithmic decision, software copyright exceptions

JEL Classification: K12, K13, K42, O34

Suggested Citation

Noto La Diega, Guido, Against the Dehumanisation of Decision-Making – Algorithmic Decisions at the Crossroads of Intellectual Property, Data Protection, and Freedom of Information (May 31, 2018). 9 (2018) JIPITEC 3 para 1. Available at SSRN: https://ssrn.com/abstract=3188080

Guido Noto La Diega (Contact Author)

Northumbria University ( email )

Pandon Building
208, City Campus East-1
Newcastle-Upon-Tyne, Newcastle NE1 8ST
United Kingdom
7708928768 (Phone)
7708928768 (Fax)

HOME PAGE: https://northumbria.academia.edu/GuidoNotoLaDiega
