Assessing the Efficacy of a Participant-Vetting Procedure to Improve Data Quality on Amazon’s Mechanical Turk

Authors

  • Emilio D. Rivera
  • Benjamin M. Wilkowski
  • Aaron J. Moss
  • Cheskie Rosenzweig
  • Leib Litman

Abstract

In recent years, Amazon’s Mechanical Turk (MTurk) has become a pivotal source of participants for many social-science fields. More recently, however, concerns about data quality have arisen. In response, CloudResearch developed an intensive prescreening procedure to vet the full participant pool available on MTurk and exclude those providing low-quality data. To assess its efficacy, we compared three MTurk samples that completed identical measures: Sample 1 was collected before the prescreening procedure was implemented, Sample 2 was collected shortly after its implementation, and Sample 3 was collected nearly a full year after its implementation. Results indicated that the reliability and validity of scales improved once the prescreening procedure was in place, and that these improvements were especially apparent with more recent versions of the procedure. Thus, this prescreening procedure appears to be a valuable tool for ensuring the collection of high-quality data on MTurk.