Data Quality of Platforms and Panels for Online Behavioral Research
http://link.springer.com/article/10.3758/s13428-021-01694-3
Posted: 9 Mar 2021 Last revised: 30 Sep 2021
Date Written: January 10, 2021
Abstract
We examine key aspects of data quality for online behavioral research across selected platforms (Amazon Mechanical Turk, CloudResearch, and Prolific) and panels (Qualtrics and Dynata). To identify the key aspects of data quality, we first engaged with the behavioral research community to discover which aspects are most critical to researchers; these include attention, comprehension, honesty, and reliability. We then explored differences in these data quality aspects in two studies (N ≈ 4,000), with or without data quality filters (approval rating). We found considerable differences between audiences, especially in comprehension, attention, and dishonesty. In Study 1 (without filters), we found that only Prolific provided high data quality on all measures. In Study 2 (with filters), we found high data quality among CloudResearch and Prolific; MTurk showed alarmingly low data quality even with data quality filters. We also found that while reputation (approval rating) did not predict data quality, frequency and purpose of usage did, especially on MTurk: the lowest data quality came from participants who reported using the site as their main source of income yet spent few hours on it per week. We provide a framework for future investigation into the ever-changing nature of data quality in online research, and into how the evolving set of platforms and panels performs on these key aspects.