Scaling Human Effort in Idea Screening and Content Evaluation
61 Pages · Posted: 18 Sep 2020 · Last revised: 22 Jan 2021
Date Written: September 3, 2020
Abstract
Brands and advertisers often tap into the crowd to generate ideas for new products and ad creatives by hosting ideation contests. Content evaluators then winnow thousands of submitted ideas before a separate stakeholder, such as a manager or client, decides on a small subset to pursue. We demonstrate the information value of the data generated by content evaluators in past contests and propose a proof-of-concept machine learning approach to efficiently surface the best submissions in new contests with less human effort. The approach combines ratings from different evaluators based on their correlation with past stakeholder choices, controlling for submission characteristics and textual content features. Using field data from a crowdsourcing platform, we demonstrate that the approach improves performance by identifying nonlinear transformations of individual ratings and efficiently reweighting evaluators. Implementing the proposed approach can affect the optimal assignment of internal experts to ideation contests: two evaluators whose votes were a priori equally correlated with sponsor choices may provide substantially different incremental information for improving the model-based idea ranking. We provide additional support for our findings using simulations based on a product design survey.
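To give a concrete sense of the kind of pipeline the abstract describes, the sketch below trains a nonlinear classifier on hypothetical evaluator ratings and past sponsor choices, then ranks held-out submissions by their predicted probability of being selected. The synthetic data, variable names, and the choice of gradient-boosted trees are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): combine evaluator ratings
# to predict past stakeholder ("sponsor") choices, then rank new submissions.
# All data and variable names below are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training data from past contests: each row is a submission,
# each of the first columns is one evaluator's rating (e.g., 1-5 stars),
# followed by stylized submission characteristics / text features.
n_submissions, n_evaluators = 2000, 5
ratings = rng.integers(1, 6, size=(n_submissions, n_evaluators)).astype(float)
features = rng.normal(size=(n_submissions, 3))  # submission characteristics
X = np.hstack([ratings, features])

# Hypothetical label: 1 if the sponsor ultimately selected the submission.
# Evaluators contribute unequally, and one rating enters nonlinearly.
chosen = (0.6 * ratings[:, 0]
          + 0.3 * ratings[:, 1] ** 2 / 5
          + rng.normal(scale=1.0, size=n_submissions)) > 4.5
y = chosen.astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A tree ensemble can pick up nonlinear transformations of individual ratings
# and implicitly reweight evaluators by their predictive value for sponsor picks.
model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Rank held-out submissions by predicted selection probability, so human
# reviewers can concentrate effort on the top of the list.
scores = model.predict_proba(X_test)[:, 1]
top_k = np.argsort(scores)[::-1][:20]
print("Indices of the 20 highest-ranked submissions:", top_k)
```

The specific learner is interchangeable; the point of the sketch is that evaluator ratings are treated as features whose weights and transformations are estimated from past sponsor decisions rather than fixed in advance.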
Keywords: Crowdsourcing, Innovation, Content Evaluation, Wisdom of Crowds, Machine Learning
