Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies Against Algorithmic Discrimination Under EU Law
35 Pages · Posted: 5 May 2018 · Last revised: 21 Jun 2018
Date Written: April 18, 2018
Empirical evidence is mounting that artificial intelligence applications driven by machine learning threaten to discriminate against legally protected groups. As ever more decisions are subjected to algorithmic processes, discrimination by algorithms is rightly recognized by policymakers around the world as a key challenge for contemporary societies. This article suggests that algorithmic bias raises intricate questions for EU law. The existing categories of EU anti-discrimination law do not map easily onto algorithmic decision making, and the statistical basis of machine learning generally offers companies a fast track to justification. Furthermore, victims will typically be unable to prove their case without access to the data and the algorithms, which they generally lack. To remedy these problems, this article suggests an integrated vision of anti-discrimination and data protection law to enforce fairness in the digital age. More precisely, it shows how the concepts of anti-discrimination law may be combined with the enforcement tools of the GDPR to unlock the algorithmic black box. In doing so, the law should harness a growing literature in computer science on algorithmic fairness that seeks to ensure equal protection at the data and code level. The interplay of anti-discrimination law, data protection law and algorithmic fairness thereby facilitates “equal protection by design”. In the end, however, recourse to technology does not spare the law from making hard normative choices about the implementation of formal or substantive concepts of equality. Understood in this way, the deployment of artificial intelligence not only raises novel risks, but also harbors novel opportunities for consciously designing fair market exchange.
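The computer-science literature on algorithmic fairness that the abstract refers to can be made concrete with a small illustration. The sketch below computes the statistical parity difference, one common fairness metric from that literature; the metric choice, function name, and loan-decision data are hypothetical examples, not drawn from the article:

```python
def statistical_parity_difference(decisions, groups, protected="B"):
    """Difference in favourable-decision rates between the protected
    group and everyone else; a value of 0.0 indicates demographic parity,
    negative values indicate the protected group is favoured less often."""
    prot = [d for d, g in zip(decisions, groups) if g == protected]
    rest = [d for d, g in zip(decisions, groups) if g != protected]
    return sum(prot) / len(prot) - sum(rest) / len(rest)

# Hypothetical loan decisions (1 = approved) for applicants in groups A and B.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(statistical_parity_difference(decisions, groups))  # -> -0.5
```

A persistent negative value for a protected group would be the kind of data-level disparity that, under the article's argument, anti-discrimination concepts and GDPR enforcement tools could be combined to detect and remedy.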
Keywords: Algorithmic Discrimination, Algorithmic Fairness, Law Enforcement, Data Protection Audits, Data Protection Impact Assessments
JEL Classification: K12, K20, K31, K42