Does the GDPR Help or Hinder Fair Algorithmic Decision-Making?

LLM dissertation, Innovation, Technology & The Law, University of Edinburgh, 2017

43 Pages Posted: 27 Mar 2018

Date Written: Aug 21, 2017

Abstract

New information and communication technologies have facilitated the collection of data, both personal and non-personal, at a large scale, creating what some call the “data-driven society”. This data collection has facilitated the growth of automated algorithmic tools for making or informing decisions about people in a wide range of areas with legal or societal impact, including employment, policing and the media.

The new General Data Protection Regulation (GDPR) has been mooted as a potential regulatory lever for addressing the harms of algorithmic decision-making (ADM), largely because of the existence of Article 22 on “Automated individual decision-making, including profiling”. This has been argued to provide a “right to explanation” to data subjects who have had decisions made about them in an automated fashion.

This dissertation proposes that Article 22, and the other provisions for data subjects’ rights, are insufficient for regulating the use of ADM on the “natural persons” who are intended to be protected by the GDPR. I first consider the types of decisions that have raised concern, using examples from both academic and grey literature (Chapter 2). This is accompanied by a survey of the algorithms that are used in ADM and how algorithm designers are attempting to address potential harms (Chapter 2.2). With these concerns and technologies in mind, I analyse the various articles of the GDPR that have been proposed in the literature to address ADM (Chapter 3). Concluding that the provisions of the GDPR that are most promising for decision-making are those on certification, I then discuss the benefits, drawbacks, and challenges of creating such certification bodies (Chapter 4).

The conclusion of this work is that the current debate over Articles 15 and 22 neglects to consider how algorithmic classification works, and in particular the need to consider classifications as a collective, i.e., looking at entire input and output datasets rather than just the personal data that pertain to an individual data subject. This calls into question the appropriateness of legislation, such as the GDPR, that focuses on individual data subjects’ rights and access to individual personal data. With respect to the question raised by the dissertation’s title, I argue that while the GDPR does not hinder the regulation of ADM, the Regulation’s focus on data protection means that it inherently has a narrower scope than is required, and so interaction with other regulatory levers and bodies is needed. The certification provisions of the GDPR may offer one mechanism for this interaction.

Keywords: data protection, GDPR, automated decision-making, certification

Suggested Citation

Henderson, Tristan, Does the GDPR Help or Hinder Fair Algorithmic Decision-Making? (Aug 21, 2017). LLM dissertation, Innovation, Technology & The Law, University of Edinburgh, 2017. Available at SSRN: https://ssrn.com/abstract=3140887 or http://dx.doi.org/10.2139/ssrn.3140887

Tristan Henderson (Contact Author)

University of St Andrews ( email )

North St
Saint Andrews, Fife KY16 9AJ
United Kingdom
