Data Privacy, Human Rights, and Algorithmic Opacity

61 Pages · Posted: 10 Jan 2022 · Last revised: 7 Jun 2024

Sylvia Lu

University of Michigan Law School

Date Written: December 31, 2022

Abstract

Decades ago, it was difficult to imagine a reality in which artificial intelligence (AI) could penetrate every corner of our lives, monitoring our innermost selves for commercial interests. Within just a few decades, the private sector has seen a wild proliferation of AI systems, many of which are more powerful and pervasive than anticipated. In many cases, AI systems have become “the power behind the throne,” tracking user activities and making fateful decisions through predictive analysis of personal information. Despite the growing power of AI, proprietary algorithmic systems can be technically complex, legally shielded as trade secrets, and managerially invisible to outsiders, creating an opacity that hinders oversight. Accordingly, many AI-based services and products have been found to be invasive, manipulative, and biased, eroding data privacy rules, human rights, and democratic norms in modern society.

The emergence of AI systems has thus generated a deep tension between algorithmic secrecy and data privacy. Yet in today’s policy debate, algorithmic transparency in the privacy context, though equally important, remains managerially disregarded, commercially evaded, and legally unactualized. This paper is the first to illustrate how regulators should rethink data privacy strategies through the interplay of human rights, algorithmic disclosures, and whistleblowing systems. As the world increasingly looks to the European Union’s (EU) data protection law, the General Data Protection Regulation (GDPR), as a regulatory frame of reference, this piece assesses the effectiveness of the GDPR’s response to the data protection issues raised by opaque AI systems. Through a case study of Google’s AI applications and privacy disclosures, it demonstrates that even the EU has failed to enforce data protection rules against the problems caused by algorithmic opacity.

The author argues that, because algorithmic opacity has become a primary barrier to oversight and enforcement, regulators in the EU, the United States, and elsewhere should not overprotect the secrecy of every aspect of AI applications that implicates public concerns. Rather, policymakers should consider imposing on firms deploying AI a duty of algorithmic disclosure, effectuated through sustainability reporting and whistleblower protections, to maximize the effective enforcement of data privacy laws, human rights, and other democratic values.

Keywords: privacy, data protection, information privacy, human rights, artificial intelligence, algorithmic disclosure, algorithmic transparency, algorithmic opacity, trade secrets, GDPR

Suggested Citation

Lu, Sylvia, Data Privacy, Human Rights, and Algorithmic Opacity (December 31, 2022). California Law Review, Vol. 110, 2023, Available at SSRN: https://ssrn.com/abstract=4004716

Sylvia Lu (Contact Author)

University of Michigan Law School

625 South State Street
Ann Arbor, MI 48109-1215
United States

Paper statistics

Downloads: 359
Abstract Views: 2,335
Rank: 161,345