Legal Risks of Adversarial Machine Learning Research

International Conference on Machine Learning (ICML) 2020 Workshop on Law & Machine Learning

14 Pages

Posted: 29 Jul 2020

Ram Shankar Siva Kumar

Microsoft Corporation; Harvard University - Berkman Klein Center for Internet & Society

Jonathon Penney

Osgoode Hall Law School; Harvard University - Berkman Klein Center for Internet & Society; Citizen Lab, University of Toronto

Bruce Schneier

Harvard University - Berkman Klein Center for Internet & Society; Harvard University - Harvard Kennedy School (HKS)

Kendra Albert

Harvard Law School

Date Written: July 3, 2020

Abstract

Adversarial machine learning is the systematic study of how motivated adversaries can compromise the confidentiality, integrity, and availability of machine learning (ML) systems through targeted or blanket attacks. Attacks on ML systems are so prevalent that CERT, the federally funded research and development center tasked with studying attacks, issued a broad vulnerability note warning that most ML classifiers are vulnerable to adversarial manipulation. Google, IBM, Facebook, and Microsoft have committed to investing in securing machine learning systems, and the US and EU have likewise made the security and safety of AI systems a top priority.

Research on adversarial machine learning is now booming, but it is not without risk. Studying or testing the security of any operational system may violate the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute creating liability for hacking. Critics argue that the CFAA’s broad scope, rigid requirements, and heavy penalties have a chilling effect on security research, and adversarial ML security research is likely no different. However, prior work on adversarial ML research and the CFAA is sparse and narrowly focused. In this article, we help address that gap in the literature. For legal practitioners, we describe the complex and confusing legal landscape that results from applying the CFAA to adversarial ML. For adversarial ML researchers, we describe the potential legal risks of conducting such research. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA’s application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the Court is likely to adopt a narrow construction of the CFAA, and that this will in fact lead to better adversarial ML security outcomes in the long term.
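
Although the paper is a legal analysis and contains no code, a minimal sketch of the kind of adversarial manipulation it discusses may help orient readers: the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015), which perturbs an input just enough to raise a classifier's loss and often flip its prediction. The model, inputs, and epsilon budget below are illustrative assumptions, not anything drawn from the paper.

    # Minimal FGSM sketch (illustrative only; assumes a PyTorch classifier
    # and inputs scaled to [0, 1]).
    import torch
    import torch.nn as nn

    def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                     epsilon: float = 0.03) -> torch.Tensor:
        """Return a copy of batch x perturbed to increase the model's loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the gradient-sign direction that raises the loss,
        # bounded in L-infinity norm by epsilon.
        perturbed = x_adv + epsilon * x_adv.grad.sign()
        return perturbed.clamp(0.0, 1.0).detach()  # keep inputs in valid range

    # Hypothetical usage against some pretrained classifier:
    #   x_adv = fgsm_perturb(model, images, labels, epsilon=8 / 255)

Running probes of this kind against a deployed, operational system without authorization is precisely the activity whose CFAA exposure the paper analyzes.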

Keywords: Adversarial machine learning, security, machine learning, artificial intelligence, law, AI, computer fraud and abuse act, CFAA, legal risks, chilling effects, ML

JEL Classification: K00

Suggested Citation

Siva Kumar, Ram Shankar and Penney, Jonathon and Schneier, Bruce and Albert, Kendra, Legal Risks of Adversarial Machine Learning Research (July 3, 2020). International Conference on Machine Learning (ICML) 2020 Workshop on Law & Machine Learning, Available at SSRN: https://ssrn.com/abstract=3642779

Ram Shankar Siva Kumar

Microsoft Corporation

One Microsoft Way
Redmond, WA 98052
United States

Harvard University - Berkman Klein Center for Internet & Society

Harvard Law School
23 Everett, 2nd Floor
Cambridge, MA 02138
United States

Jonathon Penney (Contact Author)

Osgoode Hall Law School

4700 Keele Street
Toronto, Ontario M3J 1P3
Canada

Harvard University - Berkman Klein Center for Internet & Society

Harvard Law School
23 Everett, 2nd Floor
Cambridge, MA 02138
United States

Citizen Lab, University of Toronto

Munk School of Global Affairs
University of Toronto
Toronto, Ontario M5S 3K7
Canada

Bruce Schneier

Harvard University - Berkman Klein Center for Internet & Society

Harvard Law School
Cambridge, MA 02138
United States

Harvard University - Harvard Kennedy School (HKS)

79 John F. Kennedy Street
Cambridge, MA 02138
United States

Kendra Albert

Harvard Law School

1563 Massachusetts Ave
Cambridge, MA 02138
United States
