AI, on the Law of Intentionality: Toward Proof of Intentional Discrimination by Artificial Intelligence

Posted: 14 Aug 2020 Last revised: 29 Sep 2020

Emile Loza de Siles

Duquesne University School of Law; Technology & Cybersecurity Law Group; University of Maryland Global Campus

Date Written: 2020

Abstract

The U.S. Supreme Court in McDonnell Douglas Corp. v. Green, 411 U.S. 792, 93 S. Ct. 1817 (1973), established the procedural order and burden-shifting framework by which to prove intentional discrimination through circumstantial evidence. Courts have widely adopted the McDonnell Douglas framework.

Intentional discrimination turns on whether the actor’s challenged conduct was motivated, in whole or in part, by the plaintiff’s protected class, actual or perceived. Intentionality arises from the actor’s state of mind, but what if the actor has no mind?

In this age of ever more ubiquitous exposure to government and private sector artificial intelligence (“AI”) systems, decisions about people are increasingly automated. AI systems, with data as their rocket fuel, abound, automating decisions about access to credit; hiring, promotion, and other employment matters; child welfare placements; elder abuse interventions; subsidized housing lotteries; tenant selection; sentencing, parole, and other criminal and juvenile justice determinations; and a host of other matters.

Algorithmic bias and discrimination have been widely reported. The decisional actors, however, are computational, not human. Those actors have no mind (at least currently) and, therefore, have no capacity for intentionality.

Machine learning further compounds the complexity of ascribing intentionality to discrimination by a computational actor. These iteratively learning AI systems function with increasing degrees of autonomy in finding the statistical models to apply to the given data and decisional questions. The humans who design and create such systems may not know the systems’ decisional “rationale.” The veil of machine learning intentionality is even more impenetrable for those who merely choose or use these AI systems. Even if such AI decision rationales were explainable, barriers to discoverability and to government transparency, on trade secret and other grounds, further confound the search for proof of intentional discrimination.

How, then, does McDonnell Douglas apply where the discriminatory actors are advanced analytics engines that are nevertheless mindless, rendering intentionality an impossibility? Agency theory, consumer protection principles applied to require reasonable AI practices, and other theories may intersect with the McDonnell Douglas evidentiary framework to reveal ways forward on the pressing question of proving intentional discrimination by artificial intelligence.

Keywords: artificial intelligence, law, discrimination, intentionality, machine learning, algorithmic bias, evidence, expert testimony, McDonnell Douglas, race, disability, age, national origin

Suggested Citation

Loza de Siles, Emile, AI, on the Law of Intentionality: Toward Proof of Intentional Discrimination by Artificial Intelligence (2020). Available at SSRN: https://ssrn.com/abstract=3663393

Emile Loza de Siles (Contact Author)

Duquesne University School of Law ( email )

600 Forbes Avenue
Pittsburgh, PA 15282
United States

Technology & Cybersecurity Law Group ( email )

Washington, DC
United States

University of Maryland Global Campus ( email )

3501 University Boulevard East
Adelphi, MD 20783
United States

