Fair’s Fair: How Public Benefit Considerations in the Fair Use Doctrine Can Patch Bias in Artificial Intelligence Systems
11 Indiana Journal of Law & Social Equality 229 (2023)
16 Pages
Posted: 24 May 2022
Last revised: 26 Sep 2023
Date Written: December 23, 2021
Abstract
The impact of artificial intelligence (AI) expands relentlessly despite well-documented examples of bias in AI systems, from facial recognition failing to differentiate between darker-skinned faces to hiring tools discriminating against female candidates. These biases can be introduced into AI systems in a variety of ways; however, a major source of bias lies in training datasets, the collections of images, text, audio, or other information used to build and train AI systems.
This Article first grapples with the pressure copyright law exerts on AI developers and researchers to use biased training data to build algorithms, focusing on the potential risk of copyright infringement. Second, it examines how the fair use doctrine, particularly its public benefit consideration, can be applied to AI systems to begin addressing the algorithmic bias problem afflicting many of today’s systems. Ultimately, the Article concludes that the social utility and human rights benefits of diversifying AI training data justify the fair use of copyrighted works.
Keywords: artificial intelligence, technology, bias, copyright, intellectual property, fair use, discrimination, public benefit, algorithms, algorithmic bias, social utility, human rights
JEL Classification: K30, O32, O34