Beyond Algorithmic Disclosure For AI
Columbia Science and Technology Law Review, forthcoming 2024
18 Pages · Posted: 18 Jun 2024
Date Written: June 12, 2024
Abstract
One of the most commonly recommended policy interventions for algorithms in general, and artificial intelligence ("AI") systems in particular, is greater transparency, often focused on disclosing the variables an algorithm employs and the weights assigned to those variables. This Essay argues that any meaningful transparency regime must also provide information on other critical dimensions. For example, such a regime must include key information about the data on which the algorithm was trained, including the data's source, scope, quality, and inner correlations, subject to constraints imposed by copyright, privacy, and cybersecurity law. Disclosures about prerelease testing likewise play a critical role in understanding an AI system's robustness and its susceptibility to specification gaming. Finally, because AI, like all complex systems, tends to exhibit emergent phenomena, such as proxy discrimination, interactions among multiple agents, the effects of adverse environments, and the well-known tendency of generative AI to hallucinate, ongoing post-release evaluation is a critical component of any system of AI transparency.
Keywords: artificial intelligence, AI system, policies, algorithms, algorithmic transparency, algorithmic disclosure, algorithmic discrimination, bias, training data