Politics of Adversarial Machine Learning

Towards Trustworthy ML: Rethinking Security and Privacy for ML Workshop, Eighth International Conference on Learning Representations (ICLR) 2020

6 Pages · Posted: 27 Mar 2020 · Last revised: 11 May 2020

Kendra Albert

Harvard Law School

Jon Penney

Harvard Law School; Harvard University - Berkman Klein Center for Internet & Society; Citizen Lab, University of Toronto

Bruce Schneier

Harvard University - Berkman Klein Center for Internet & Society; Harvard University - Harvard Kennedy School (HKS)

Ram Shankar Siva Kumar

Microsoft Corporation

Date Written: 2020

Abstract

In addition to their security properties, adversarial machine-learning attacks and defenses have political dimensions. They enable or foreclose certain options both for the subjects of machine learning systems and for those who deploy them, creating risks for civil liberties and human rights. In this paper, we draw on insights from science and technology studies, anthropology, and the human rights literature to show how defenses against adversarial attacks can be used to suppress dissent and to limit attempts to investigate machine learning systems. To make this concrete, we use real-world examples of how attacks such as perturbation, model inversion, or membership inference can be used for socially desirable ends. Although the predictions of this analysis may seem dire, there is hope. Efforts to address human rights concerns in the commercial spyware industry provide guidance for similar measures to ensure ML systems serve democratic, not authoritarian, ends.

Keywords: Artificial Intelligence, AI, Machine Learning, ML, Security, Socio-Technical Systems, Adversarial Machine Learning, Privacy, Human Rights, Spyware, Politics of Technology, Politics of Machine Learning

JEL Classification: K1, K23, K42, O31, O32

Suggested Citation

Albert, Kendra and Penney, Jonathon and Schneier, Bruce and Siva Kumar, Ram Shankar, Politics of Adversarial Machine Learning (2020). Towards Trustworthy ML: Rethinking Security and Privacy for ML Workshop, Eighth International Conference on Learning Representations (ICLR) 2020, Available at SSRN: https://ssrn.com/abstract=3547322 or http://dx.doi.org/10.2139/ssrn.3547322

Kendra Albert

Harvard Law School ( email )

1563 Massachusetts Ave
Cambridge, MA 02138
United States

Jonathon Penney (Contact Author)

Harvard Law School ( email )

1575 Massachusetts
Hauser 406
Cambridge, MA 02138
United States

Harvard University - Berkman Klein Center for Internet & Society ( email )

Harvard Law School
23 Everett, 2nd Floor
Cambridge, MA 02138
United States

Citizen Lab, University of Toronto ( email )

Munk School of Global Affairs
University of Toronto
Toronto, Ontario M5S 3K7
Canada

Bruce Schneier

Harvard University - Berkman Klein Center for Internet & Society ( email )

Harvard Law School
Cambridge, MA 02138
United States

Harvard University - Harvard Kennedy School (HKS) ( email )

79 John F. Kennedy Street
Cambridge, MA 02138
United States

Ram Shankar Siva Kumar

Microsoft Corporation ( email )

One Microsoft Way
Redmond, WA 98052
United States

Paper statistics

Downloads: 33
Abstract Views: 293