Legal Risks of Adversarial Machine Learning Research
International Conference on Machine Learning (ICML) 2020 Workshop on Law & Machine Learning
14 Pages
Posted: 29 Jul 2020
Date Written: July 3, 2020
Abstract
Adversarial machine learning is the systematic study of how motivated adversaries can compromise the confidentiality, integrity, and availability of machine learning (ML) systems through targeted or blanket attacks. The problem of attacking ML systems is so prevalent that CERT, the federally funded research and development center tasked with studying attacks, issued a broad vulnerability note warning that most ML classifiers are vulnerable to adversarial manipulation. Google, IBM, Facebook, and Microsoft have committed to investing in securing machine learning systems, and the US and EU are likewise making the security and safety of AI systems a top priority.
Research on adversarial machine learning is now booming, but it is not without legal risks. Studying or testing the security of any operational system may violate the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. The CFAA’s broad scope, rigid requirements, and heavy penalties, critics argue, have a chilling effect on security research, and adversarial ML security research is likely no different. However, prior work on adversarial ML research and the CFAA is sparse and narrowly focused. In this article, we help address this gap in the literature. For legal practitioners, we describe the complex and confusing legal landscape of applying the CFAA to adversarial ML. For adversarial ML researchers, we describe the potential legal risks of conducting such research. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA’s application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the Court is likely to adopt a narrow construction of the CFAA, and that such a reading would lead to better adversarial ML security outcomes in the long term.
Keywords: Adversarial machine learning, security, machine learning, artificial intelligence, law, AI, Computer Fraud and Abuse Act, CFAA, legal risks, chilling effects, ML
JEL Classification: K00