Examining Multi-Phased Online Crowdsourcing Contests Using Quality Level and Quality Ambiguity

43 Pages · Posted: 5 Sep 2015 · Last revised: 20 Mar 2019

Nirup M. Menon

George Mason University - Department of Information Systems and Operations Management

Shun Ye

George Mason University - Department of Information Systems and Operations Management

Date Written: September 1, 2017

Abstract

When a platform crowdsources a multi-phased project as a series of contests, it posts the focal phase contest after the contests for prior phases are completed. Because a prior phase's winning solution becomes an input to the focal phase contest, this multi-phased setup can be leveraged to objectively gauge characteristics of the focal contest's problem formulation, which in turn helps the platform and the solution seeker predict how successful the contest will be in attracting high-quality contestants and generating high-quality outcomes. This study develops two measures for the winning solution of a prior phase contest: quality level, the degree of completeness with which the solution meets user requirements, and quality ambiguity, the degree of lack of clarity about that quality level. Under the premise that an intrinsic motivation of contestants on crowdsourcing platforms is to solve challenging problems, we develop a theoretical model of the relationships between the two measures and the contest outcome, mediated by the quality of the contestants. Data from TopCoder, a popular software crowdsourcing platform, are used to test hypotheses on the direct, interaction, and nonlinear effects of the quality level and quality ambiguity of a prior phase contest's winning solution on the outcome of the focal phase contest. Results show an inverted U-shaped relationship between the quality ambiguity of the prior phase's winning solution and the quality of the contestants in the focal phase contest, highlighting a sweet spot for quality ambiguity. Our findings have important implications for the design of online crowdsourcing contests.
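To illustrate the kind of empirical specification the abstract describes, the sketch below regresses contestant quality on quality level and quality ambiguity, including a quadratic ambiguity term to probe an inverted U shape. The variable names, the simulated data, and the use of ordinary least squares are assumptions made for illustration only and are not taken from the paper.

```python
# Illustrative sketch (not the authors' actual specification): probing an
# inverted U-shaped effect of quality ambiguity on contestant quality with
# a quadratic term. All variable names and the simulated data are assumed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Hypothetical measures for each focal-phase contest:
# quality_level     - completeness of the prior phase's winning solution (0-1)
# quality_ambiguity - lack of clarity about that quality level (0-1)
quality_level = rng.uniform(0, 1, n)
quality_ambiguity = rng.uniform(0, 1, n)

# Simulated contestant quality with an inverted U in ambiguity
# (peak around ambiguity = 0.5) plus noise.
contestant_quality = (
    1.0
    + 0.5 * quality_level
    + 2.0 * quality_ambiguity
    - 2.0 * quality_ambiguity ** 2
    + rng.normal(0, 0.2, n)
)

# Regression with the quadratic term; an inverted U is suggested by a
# positive linear and a negative squared coefficient.
X = np.column_stack([quality_level, quality_ambiguity, quality_ambiguity ** 2])
X = sm.add_constant(X)
model = sm.OLS(contestant_quality, X).fit()
print(model.summary())

# Implied turning point ("sweet spot") of the inverted U: -b1 / (2 * b2)
b = model.params
print("Estimated sweet spot for quality ambiguity:", -b[2] / (2 * b[3]))
```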

Keywords: Crowdsourcing, contest, software development, sequential interdependency, uncertainty

Suggested Citation

Menon, Nirup M. and Ye, Shun, Examining Multi-Phased Online Crowdsourcing Contests Using Quality Level and Quality Ambiguity (September 1, 2017). Available at SSRN: https://ssrn.com/abstract=2655909 or http://dx.doi.org/10.2139/ssrn.2655909

Nirup M. Menon (Contact Author)

George Mason University - Department of Information Systems and Operations Management ( email )

4400 University Drive
MS 5F4
Fairfax, VA 22030
United States

Shun Ye

George Mason University - Department of Information Systems and Operations Management ( email )

4400 University Drive
Fairfax, VA 22030
United States
