Technological Tethereds: Potential Impact of Untrustworthy Artificial Intelligence in Criminal Justice Risk Assessment Instruments

78 Pages. Posted: 31 Jul 2020. Last revised: 27 May 2021.

Sonia Gipson Rankin

University of New Mexico - School of Law

Abstract

Issues of racial inequality and violence are front and center in today’s society, as are issues surrounding artificial intelligence (AI). This Article, written by a law professor who is also a computer scientist, examines how and why hacked and rogue AI produces unlawful and unfair outcomes, particularly for persons of color.

Black Americans are disproportionately represented in the criminal justice system, and their stories are obfuscated. The seemingly endless back-to-back murders of George Floyd, Breonna Taylor, Ahmaud Arbery, and, heartbreakingly, countless others have finally shaken the United States out of its slumber and toward intentional criminal justice reform. Myths about Black crime and criminality are embedded in the data on which AI relies and do not tell the truth about race and crime. Yet the number of Black people harmed by hacked and rogue AI will dwarf all historical records, and the gravity of that harm is incomprehensible.

The lack of technical transparency and legal accountability leaves wrongfully convicted defendants without legal remedies if they are unlawfully detained based on a cyberattack, faulty or hacked data, or rogue AI. Scholars and engineers acknowledge that the artificial intelligence making recommendations to law enforcement, prosecutors, judges, and parole boards lacks the common sense of an 18-month-old child. This Article reviews the ways AI is used in the legal system and the courts’ responses to that use. It describes the design of the proprietary risk assessment instruments used in the criminal justice system, outlines potential legal theories for victims, and recommends legal and technical remedies for those harmed by hacked data in those instruments. It concludes that, with proper oversight, AI can increase fairness in the criminal justice system, but that, without such oversight, AI-based products will hasten the extinguishment of the liberty interests enshrined in the Constitution.

According to anti-lynching advocate Ida B. Wells-Barnett, “The way to right wrongs is to turn the light of truth upon them.” Transparency is therefore vital to safeguarding equity in AI design and must be the first step. The Article seeks ways to provide that transparency for the benefit of all Americans, but particularly persons of color, who are far more likely to be affected by AI’s deficiencies. It also suggests legal reforms that would help plaintiffs recover when AI goes rogue.

Suggested Citation

Gipson Rankin, Sonia, Technological Tethereds: Potential Impact of Untrustworthy Artificial Intelligence in Criminal Justice Risk Assessment Instruments, Washington and Lee Law Review, Vol. 78, No. 2 (Spring 2021); UNM School of Law Research Paper No. 2020-06. Available at SSRN: https://ssrn.com/abstract=3662761 or http://dx.doi.org/10.2139/ssrn.3662761

Sonia Gipson Rankin (Contact Author)

University of New Mexico - School of Law

1117 Stanford, N.E.
Albuquerque, NM 87131
United States

Paper statistics

Downloads: 488
Abstract Views: 3,116
Rank: 122,793