Data Privacy, Human Rights, and Algorithmic Opacity

97 Pages · Posted: 10 Jan 2022 · Last revised: 9 May 2022

Sylvia Lu

University of California, Berkeley - School of Law

Date Written: May 6, 2022

Abstract

Decades ago, it was difficult to imagine a reality in which artificial intelligence (AI) could penetrate every corner of our lives to monitor our innermost selves for commercial interests. In the decades since, the private sector has seen a wild proliferation of AI systems, many of them more powerful and penetrating than anticipated. In many cases, machine-learning-based AI systems have become “the power behind the throne,” tracking user activities and making fateful decisions through predictive analysis of personal information. Yet machine-learning algorithms can be technically complex and legally claimed as trade secrets, creating an opacity that hinders oversight of AI systems. As a result, many AI-based services and products have been found to be invasive, manipulative, and biased, eroding privacy rules and human rights in modern society.

The emergence of advanced AI systems thus generates a deeper tension between algorithmic secrecy and data privacy. Yet in today’s policy debate, algorithmic transparency in the privacy context, though equally important, remains managerially disregarded, commercially evaded, and legally unrealized. This Note illustrates how regulators should rethink transparency strategies for privacy protection through the interplay of human rights, disclosure regulations, and whistleblowing systems. It discusses how machine-learning algorithms threaten privacy protection through algorithmic opacity, assesses the effectiveness of the EU’s response to privacy issues raised by opaque AI systems, demonstrates the GDPR’s inadequacy in addressing privacy harms caused by algorithmic opacity, and proposes new algorithmic transparency strategies for privacy protection, along with a broad array of policy implications and suggested reforms. The analysis indicates that in a world where algorithmic opacity has become a strategic tool for firms to escape accountability, regulators in the EU, the US, and elsewhere should adopt a human-rights-based approach that imposes a social transparency duty on firms deploying high-risk AI techniques.

Keywords: privacy, data protection, information privacy, human rights, artificial intelligence, algorithmic disclosure, algorithmic transparency, algorithmic opacity, trade secrets, GDPR

Suggested Citation

Lu, Sylvia Si-Wei, Data Privacy, Human Rights, and Algorithmic Opacity (May 6, 2022). California Law Review, Vol. 110 (forthcoming 2022). Available at SSRN: https://ssrn.com/abstract=4004716

Sylvia Si-Wei Lu (Contact Author)

University of California, Berkeley - School of Law

215 Boalt Hall
Berkeley, CA 94720-7200
United States
