Seventeenth Americas Conference on Information Systems, August 4th-7th 2011, Detroit, Michigan
10 pages. Posted: 16 Jan 2013. Last revised: 19 Mar 2013.
Date Written: 2011
Crowds can be used to generate and evaluate design solutions. To increase a crowdsourcing system's effectiveness, we propose and compare two evaluation methods: one using five-point Likert scale ratings and the other prediction voting. Our results indicate that although the two evaluation methods correlate, they serve different goals: whereas prediction voting focuses evaluators on identifying the very best solutions, rating focuses evaluators on the entire range of solutions. Thus, prediction voting is appropriate when many poor-quality solutions need to be filtered out, while rating is better suited when all solutions are reasonable and distinctions must be made across the full set. The crowd also prefers participating in prediction voting. These results have pragmatic implications, suggesting that evaluation methods should be assigned according to the distribution of quality present at each stage of crowdsourcing.
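To make the contrast concrete, here is a minimal hypothetical sketch (not the authors' code; the data and names are invented for illustration) of how the two evaluation methods aggregate differently: Likert ratings produce a score for every solution, yielding a full ordering, while prediction voting concentrates votes on predicted winners, yielding a shortlist that filters out the rest.

```python
# Hypothetical illustration of the two evaluation methods described above.
# All data below is invented; it is not from the paper's experiments.
from statistics import mean

# Five-point Likert ratings: every evaluator scores every solution.
likert_ratings = {
    "A": [5, 4, 5],
    "B": [3, 3, 4],
    "C": [2, 1, 2],
}

# Prediction voting: evaluators vote only for the solutions they
# predict will win, so most solutions receive no votes at all.
prediction_votes = {"A": 3, "B": 0, "C": 0}

# Rating discriminates across the entire range of solutions.
rating_scores = {s: mean(r) for s, r in likert_ratings.items()}
ranking_by_rating = sorted(rating_scores, key=rating_scores.get, reverse=True)

# Prediction voting acts as a filter: keep only solutions with votes.
shortlist = [s for s, v in prediction_votes.items() if v > 0]

print(ranking_by_rating)  # full ordering: ['A', 'B', 'C']
print(shortlist)          # only the predicted best: ['A']
```

This mirrors the paper's practical advice: when most submissions are weak, the vote-based filter discards them cheaply; when all submissions are plausible, the rating-based ordering distinguishes among them.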
Keywords: crowdsourcing, creativity, human-based genetic algorithms, evaluation, Mechanical Turk
Suggested Citation:
Bao, Jin and Sakamoto, Yasuaki and Nickerson, Jeffrey V., Evaluating Design Solutions Using Crowds (2011). Seventeenth Americas Conference on Information Systems, August 4th-7th 2011, Detroit, Michigan; Howe School Research Paper No. 2013-5. Available at SSRN: https://ssrn.com/abstract=2201651 or http://dx.doi.org/10.2139/ssrn.2201651