Discriminatory AI and the Law – Legal Standards for Algorithmic Profiling.
Draft Chapter, in: Silja Vöneky, Philipp Kellmeyer, Oliver Müller and Wolfram Burgard (eds.), Responsible AI, Cambridge University Press (Forthcoming)
35 Pages · Posted: 8 Jul 2021 · Last revised: 17 Aug 2021
Date Written: June 29, 2021
Abstract
Artificial Intelligence is increasingly used to assess people (profiling): it helps employers to find qualified employees, internet platforms to distribute information or to sell goods, and security authorities to single out suspects. Apart from being more efficient than humans at processing huge amounts of data, intelligent algorithms – free of human prejudices and stereotypes – would also prevent discriminatory decisions, or so the story goes. However, many studies show that the use of AI can lead to discriminatory outcomes. From a legal point of view, this raises the question whether the law as it stands prohibits objectionable forms of differential treatment and detrimental impact. In the legal literature on automated profiling, some authors have suggested that we need a “right to reasonable inferences”, i.e. a right that AI algorithms affecting humans follow a certain methodology. This paper takes up this idea with respect to discriminatory AI and claims that such a right already exists in antidiscrimination law. It argues that the need to justify differential treatment and detrimental impact implies that profiling methods must meet certain standards. It is now a major challenge for lawyers as well as data and computer scientists to develop and establish those methodological standards in order to guarantee compliance with antidiscrimination law (and other legal regimes), as the paper outlines.
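To make the idea of a methodological standard concrete, the following Python sketch implements one well-known screen for detrimental impact, the “four-fifths rule” from US disparate-impact doctrine: a selection procedure is flagged when a group’s selection rate falls below 80 per cent of the most favoured group’s rate. The choice of this particular rule, the function names, and the sample data are illustrative assumptions of this sketch, not taken from the paper.

# Illustrative sketch (assumptions, not the paper's method): a simple
# four-fifths-rule screen for detrimental impact in selection decisions.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; selected is a bool."""
    totals, selected = Counter(), Counter()
    for group, picked in decisions:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_violations(decisions, threshold=0.8):
    """Return groups whose selection rate falls below `threshold` times
    the highest group's rate (the classic disparate-impact screen)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

if __name__ == "__main__":
    # Hypothetical hiring decisions: (applicant group, was selected)
    sample = [("A", True)] * 48 + [("A", False)] * 52 \
           + [("B", True)] * 30 + [("B", False)] * 70
    print(four_fifths_violations(sample))  # {'B': 0.3}, since 0.3 < 0.8 * 0.48

A screen of this kind only flags a statistical disparity; whether the disparity amounts to unlawful detrimental impact still depends on the legal justification analysis the paper discusses.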
Keywords: Artificial intelligence, algorithm, profiling, discrimination, differential treatment, detrimental impact, bias, automated decision-making, data protection, proportionality, reasonable inference
JEL Classification: K00, K10, K20, K30