Algorithmic Destruction

29 Pages. Posted: 27 Mar 2022. Last revised: 10 May 2023.


Tiffany C. Li

University of New Hampshire School of Law (formerly Franklin Pierce Law Center); Yale Law School - Information Society Project; University of San Francisco - School of Law

Date Written: 2022

Abstract

Contemporary privacy law does not go far enough to protect our privacy interests, particularly where artificial intelligence and machine learning are concerned. While many have written on problems of algorithmic bias or deletion, this article introduces the novel concept of the “algorithmic shadow,” the persistent imprint of data in a trained machine learning model, and uses the algorithmic shadow as a lens through which to view the failures of data deletion in dealing with the realities of machine learning. This article is also the first to substantively critique the novel privacy remedy of algorithmic disgorgement, also known as algorithmic destruction.

What is the algorithmic shadow? Simply put, when you train a machine learning model on a set of data, that data shapes the resulting model. Even if you later delete the data from the training data set, the already-trained model still contains a persistent “shadow” of the deleted data. The algorithmic shadow is this persistent imprint: the trace left in a machine learning model by the data used to train and refine it.
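The shadow can be made concrete with a minimal sketch (my own illustration, not from the article; all data and names here are hypothetical): fit a simple least-squares model, then delete one record from the stored data set. The deployed model's weights are unchanged by the deletion; only retraining from scratch on the remaining data would remove the record's imprint.

```python
import numpy as np

# Illustrative sketch only: a least-squares model trained on 100 records,
# one of which is later "deleted" from the stored data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

# Train on the full data set; w_full is the deployed model's weights.
w_full, *_ = np.linalg.lstsq(X, y, rcond=None)

# Honor a deletion request by removing record 0 from the stored data...
X_remaining, y_remaining = X[1:], y[1:]

# ...but the deployed model is untouched: w_full still encodes record 0.
# Retraining from scratch on the remaining data yields different weights.
w_retrained, *_ = np.linalg.lstsq(X_remaining, y_remaining, rcond=None)

# The two weight vectors differ: deleting the raw record did not
# delete its "shadow" in the trained model.
print(np.max(np.abs(w_full - w_retrained)))
```

The gap between `w_full` and `w_retrained` is exactly the article's point: data deletion alone leaves the trained model, and thus the shadow, intact.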

The failure of data deletion to resolve the privacy losses caused by algorithmic shadows highlights the ineffectiveness of data deletion as a right and a remedy. Algorithmic destruction (deletion of models or algorithms trained on misbegotten data) has emerged as an alternative, or perhaps supplement, to data deletion. While algorithmic destruction or disgorgement may resolve some of the failures of data deletion, this remedy and potential right has drawbacks of its own.

This article has three goals: First, the article introduces and defines the concept of the algorithmic shadow, a novel concept that has so far evaded significant legal scholarly discussion, despite its importance in future discussions of artificial intelligence and privacy law. Second, the article explains why the algorithmic shadow exposes and exacerbates existing problems with data deletion as a privacy right and remedy. Finally, the article examines algorithmic destruction as a potential algorithmic right and algorithmic remedy, comparing it with data deletion, particularly in light of algorithmic shadow harms.

Suggested Citation

Li, Tiffany, Algorithmic Destruction (2022). SMU Law Review, Volume 75, Issue 3, 2022, Available at SSRN: https://ssrn.com/abstract=4066845 or http://dx.doi.org/10.2139/ssrn.4066845

Tiffany Li (Contact Author)

University of New Hampshire School of Law (formerly Franklin Pierce Law Center)

Two White Street
Concord, NH 03301
United States

Yale Law School - Information Society Project

127 Wall Street
New Haven, CT 06511
United States

University of San Francisco - School of Law

2130 Fulton Street
San Francisco, CA 94117
United States


Paper statistics

Downloads: 1,265
Abstract Views: 6,044
Rank: 29,968