Variety and Risk-Taking in Content Creation: Evidence from a Field Experiment Using Image Recognition Techniques
Posted: 9 Jun 2022
Date Written: June 3, 2022
Abstract
Social networks rely on the sharing of content that engages their users. Because continued generation of user-generated content is critical to their success, these platforms have built a variety of tools to motivate creators to produce and share new content, to help users discover that content, and to direct attention and recognition to the best content their creators share. Past research has shown that such attention and recognition increase the volume of content shared on these networks. But how do they affect the nature of the content created and shared? Do creators respond by sharing content similar to the content that received attention and recognition, or do they take risks and create content that differs from it? These are the questions we ask in this paper. Our empirical context is an image-sharing social network on which creators share various types of digital art and photographs. We leverage exogenous variation in the attention and recognition given to specific pieces of content, induced through a randomized controlled experiment. We employ a machine learning algorithm, trained with a transfer learning approach, to convert images into a set of lower-level features. Our main finding is that creators create content that differs from the content that received attention and recognition, and this result is robust to a variety of ways of classifying image content. Our results illustrate how tools designed to confer attention and recognition shape the creation and sharing of diverse content on social networks, and they offer insight into creators' motivations for sharing content in the first place.
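For readers unfamiliar with the transfer learning approach mentioned in the abstract, the sketch below illustrates one common way to convert images into lower-level feature vectors: a pretrained convolutional network (here, torchvision's ResNet-50, an assumption for illustration; the paper does not specify its architecture or framework) is used as a fixed encoder whose penultimate-layer activations serve as image features.

```python
# Minimal, illustrative sketch of transfer-learning feature extraction.
# The specific model, framework, and pipeline used in the paper are assumptions here.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing expected by torchvision's pretrained weights.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Load the pretrained network and drop its classification head,
# keeping everything up to the global-average-pooled embedding.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
encoder = torch.nn.Sequential(*list(resnet.children())[:-1])
encoder.eval()

def image_features(path: str) -> torch.Tensor:
    """Return a 2048-dimensional feature vector for one image."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        feats = encoder(batch)             # shape: (1, 2048, 1, 1)
    return feats.flatten()                 # shape: (2048,)

# Example use: compare two images via cosine similarity of their features.
# v1, v2 = image_features("a.jpg"), image_features("b.jpg")
# similarity = torch.nn.functional.cosine_similarity(v1, v2, dim=0)
```

Feature vectors of this kind can then feed downstream analyses, such as measuring how different a creator's new content is from the content that previously received attention and recognition.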
Keywords: Awards and Recognition, Content Creation, Image Recognition, Machine Learning, Large Scale Field Experiment
JEL Classification: M30, C93