Will User-Contributed AI Training Data Eat Its Own Tail?

11 Pages · Posted: 2 Aug 2024

Joshua S. Gans

University of Toronto - Rotman School of Management; NBER

Date Written: July 04, 2024

Abstract

This paper examines this question and finds that the answer is likely to be no. The environment examined starts with users who contribute based on their motive to create a public good. Their contributions determine the quality of that public good but also create a free-rider problem. When AI is trained on that data, it can generate contributions to the public good similar to those of human users. It is shown that this increases the incentive of human users to provide contributions that are more costly to supply. Thus, the overall quality of contributions from both AI and humans rises compared with human-only contributions. In situations where platform providers want to elicit more contributions using explicit incentives, the rate of return on such incentives is shown to be lower in this environment.

Keywords: training data, user contributions, prediction, artificial intelligence

JEL Classification: O31, D70, H44

Suggested Citation

Gans, Joshua S., Will User-Contributed AI Training Data Eat Its Own Tail? (July 04, 2024). Available at SSRN: https://ssrn.com/abstract=4885662

Joshua S. Gans (Contact Author)

University of Toronto - Rotman School of Management

Canada

HOME PAGE: http://www.joshuagans.com

NBER

1050 Massachusetts Avenue
Cambridge, MA 02138
United States
