The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law

50 Pages Posted: 6 May 2022 Last revised: 10 Nov 2022

Sandra Wachter

University of Oxford - Oxford Internet Institute

Date Written: February 15, 2022


Artificial Intelligence (AI) and machine learning algorithms are increasingly used to make important decisions about people. Decisions based on socially defined groups can have harmful consequences, producing unequal, discriminatory, and unfair outcomes grounded in irrelevant or unacceptable differences. Equality and anti-discrimination laws aim to protect against these types of harms.

While issues of AI bias and proxy discrimination are well explored, less attention has been paid to the harms created by profiling based on groups that do not map to or correlate with legally protected groups such as sex or ethnicity. Groups like dog owners, sad teens, video gamers, single parents, gamblers, or the poor are routinely used to allocate resources and make decisions such as which advertisement to show, price to offer, or public service to fund. AI also creates seemingly incomprehensible groups defined by parameters that defy human understanding, such as pixels in a picture, clicking behavior, electronic signals, or web traffic. These algorithmic groups feed into important automated decisions, such as loan or job applications, that significantly impact people’s lives.

A technology that groups people in unprecedented ways and makes decisions about them naturally raises a question: are our existing equality laws, at their core, fit for purpose to protect against emergent, AI-driven inequality?

This paper examines the legal status of algorithmic groups in North American and European anti-discrimination doctrine, law, and jurisprudence. I propose a new theory of harm to close the gap between legal doctrine and emergent forms of algorithmic discrimination. Algorithmic groups do not currently enjoy legal protection unless they can be mapped onto an existing protected group. Such linkage is rare in practice. In response, this paper examines three possible pathways to expand the scope of anti-discrimination law to include algorithmic groups.

The first possibility is to show that algorithmic groups meet the law’s requirements to be considered a protected ground. A novel taxonomy of characteristics that make groups worthy of protection is developed, covering (1) immutability and choice, (2) relevance, arbitrariness, and merit, (3) historical oppression, stigma, and structural disadvantage, and (4) social saliency. Algorithmic groups typically do not meet these requirements in practice.

The second possibility is to examine the law’s theoretical account of the “moral wrongness” of discrimination. I will show that algorithmic groups do not invoke the same moral wrongfulness that the law wants to prevent.

The third possibility is to find an opening in the fundamental doctrines and remit of anti-discrimination law. Many scholars agree that the law aims to eliminate differences between certain groups, or to create a ‘level playing field’. This approach creates a problem for algorithmic groups because it may be impossible to establish the existence of oppression, or to show that certain groups receive better treatment than others.

A new theory of harm is required if algorithmic groups ought to be protected under the law. Different theories of equality share common ground in their commitment to protect basic rights, liberties, freedoms, and access to goods and services. This common ground provides an opportunity to bring algorithmic groups within the scope of the law. The harms and interests facing these algorithmic groups are the same as those targeted by the law; only the mode, perpetrator, and process of bringing about the harms are different.

The paper closes with two contributions to drive legal reform and judicial reinterpretation. Firstly, I propose four key elements of good decision criteria reflecting the law’s fundamental aims: stability, transparency, empirical coherence, and ethical and normative acceptability. AI makes these criteria effectively impossible to meet in practice.

Secondly, I propose a new theory of harm, the “theory of artificial immutability,” that aims to bring AI groups within the scope of the law. My theory describes how algorithmic groups act as de facto immutable characteristics in practice. I propose five sources of artificial immutability in AI: opacity, vagueness, instability, involuntariness and invisibility, and a lack of social concept. Each of these erodes the key elements of good decision criteria.

To remedy this, greater emphasis needs to be placed on whether people have control over decision criteria and whether they are able to achieve important goals and steer their path in life. I conclude with reflections on how the law can be reformed to account for artificial immutability, making use of a fruitful overlap with prior work on the “right to reasonable inferences.”

Keywords: law, legal theory, artificial intelligence, machine learning, bias, fairness, discrimination, equality, anti-discrimination, non-discrimination, protected groups, protected classes, computer science

Suggested Citation

Wachter, Sandra, The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law (February 15, 2022). Tulane Law Review, Forthcoming. Available at SSRN.

Sandra Wachter (Contact Author)

University of Oxford - Oxford Internet Institute

1 St. Giles
Oxford, Oxfordshire OX1 3JS
United Kingdom
