Protected Grounds and the System of Non-Discrimination Law in the Context of Algorithmic Decision-Making and Artificial Intelligence
67 Pages
Posted: 7 Jan 2021
Date Written: November 2, 2020
Algorithmic decision-making and similar types of artificial intelligence (AI) may lead to improvements in all sectors of society, but can also have discriminatory effects. While current non-discrimination law offers people some protection, AI decision-making presents the law with several challenges. For instance, AI can generate new categories of people based on seemingly innocuous characteristics, such as web browser preference or house number, or on more complicated categories combining many data points. Such new types of differentiation could evade non-discrimination law, as browser type and house number are not protected characteristics, but the differentiation could still be unfair, for instance if it reinforces social inequality.
This paper addresses the following question: What system of non-discrimination law can best be applied to AI, knowing that AI can differentiate on the basis of characteristics that do not correlate with protected grounds of discrimination such as ethnicity or gender, and in light of the particular characteristics of the different systems of non-discrimination law? To answer this question, the paper analyses the current loopholes in the protection offered by non-discrimination law and explores the best way for lawmakers to approach AI-driven differentiation. While we focus on Europe, the paper's conceptual and theoretical approach makes it useful for scholars and policymakers in other regions as well, as they encounter similar problems with AI.
Keywords: Discrimination, Artificial Intelligence, Machine Learning, Big Data, Non-Discrimination Law, European Law, Algorithmic Decision-Making, Algorithms, Automated Decision-Making
JEL Classification: K12, K00, D10, D11, D20, D30, D40, D60, D70, L00, L11, L20, L51