Clearing Opacity through Machine Learning
Duke Law School Public Law & Legal Theory Series No. 2020-13
38 Pages · Posted: 8 Mar 2020 · Last revised: 12 Feb 2021
Date Written: February 12, 2020
Abstract
Artificial intelligence and machine learning provide powerful tools in fields ranging from criminal justice to human biology to climate change. Part of the power of these tools arises from their ability to make predictions and glean useful information about complex real-world systems without any need to understand the workings of those systems.
But these machine-learning tools are often as opaque as the underlying systems, whether because they are complex, nonintuitive, deliberately kept secret, or a synergistic combination of those three factors. A burgeoning literature addresses challenges arising from the opacity of machine-learning systems. This literature has largely focused on the benefits and difficulties of providing information to lay individuals, such as citizens impacted by algorithm-driven government decisions.
In this Article, we explore the potential of machine learning to clear opacity: that is, to help drive scientific understanding of the frequently complex and nonintuitive real-world systems that machine-learning algorithms examine. Drawing on multiple examples from cutting-edge scientific research, we argue that machine-learning algorithms can advance fundamental scientific knowledge and that deliberate secrecy around machine-learning tools restricts that learning enterprise. We examine why developers are likely to keep machine-learning systems secret, and we weigh the costs and benefits of that secrecy. Finally, we draw on the innovation policy toolbox to suggest ways to reduce secrecy so that machine learning can help us not only to interact with complex, nonintuitive real-world systems but also to understand them.
Keywords: artificial intelligence, secrecy, opacity, machine learning, basic research, innovation policy, trade secrets