Reply to MTurk, Prolific or panels? Choosing the right audience for online research
18 Pages | Posted: 18 Feb 2021
Date Written: January 28, 2021
Abstract
In a recent paper published on SSRN, Peer et al. (2021) compared data quality across five participant recruitment platforms commonly used for research in the behavioral sciences. After finding evidence to suggest that Prolific data is superior to the alternatives, the authors, who are themselves primarily members of Prolific, state that using other platforms “appears to reflect a market failure and an inefficient allocation or even misuse of scarce research budgets” (p. 21). Such an assertion could change how research funds are allocated, in potentially harmful ways. We therefore sought to interrogate the claims made by Peer et al. We found surprising methodological decisions, undisclosed in their paper, that severely limit the inferences that can be drawn from their data. Most notably, when the researchers gathered data with the CloudResearch MTurk Toolkit, they chose to turn off the recommended data quality filters, including filters that are on by default and were designed to address known data quality issues on MTurk. When we replicated their study with these recommended options enabled, we found CloudResearch data superior to that of Prolific. After presenting our findings, we discuss several theoretical factors that are crucial for evaluating the strengths and weaknesses of different online platforms, and we encourage researchers to adopt a “fit for purpose” view when evaluating platforms for online data collection.