Abstract: Using participants who have previously been exposed to experimental stimuli (a problem referred to as non-naïveté) can reduce effect sizes. The workforce of Amazon's Mechanical Turk is particularly vulnerable to this problem, and existing solutions are typically costly, time-consuming, and of mixed effectiveness. In response to this problem and its currently underwhelming solutions, we tested several participant recruitment strategies designed to recruit participants naïve to frequently used experimental stimuli. We collected samples using maximum HIT restrictions (50 for Experiments 1 and 2, 500 for Experiment 2) and TurkPrime's (now CloudResearch) naïveté feature, and compared them to samples recruited with standard restrictions (≥95% HIT approval rating). In these comparisons, we replicated past findings that using non-naïve (vs. naïve) participants reduces effect sizes and affects performance on a variety of tasks (e.g., the Cognitive Reflection Test, a Public Goods Game). We demonstrate that restricting recruitment by the maximum number of approved HITs substantially reduces the number of "experienced" research subjects in samples, but necessitates some sacrifice in data quality and collection speed. We discuss the pragmatics of our method, its limitations, and future directions for solving the problem of non-naïveté on Mechanical Turk. For researchers looking to avoid this issue, we recommend setting a maximum HIT restriction of 50 when recruiting participants.
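For readers unfamiliar with how a maximum HIT restriction is applied in practice, the sketch below shows one way to implement the recommended 50-HIT cap when posting a task via boto3's MTurk client, using MTurk's built-in "Number of HITs Approved" system qualification. This is an illustrative example, not the authors' code; the survey URL, title, reward, and other HIT parameters are hypothetical placeholders.

```python
# Minimal sketch: post a HIT visible only to workers with fewer than
# 50 approved HITs, using MTurk's system qualification for approved HITs.
# Assumes AWS credentials are configured; all HIT details are placeholders.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# MTurk's built-in system qualification "Number of HITs Approved".
NUMBER_HITS_APPROVED = "00000000000000000040"

# ExternalQuestion pointing at a (hypothetical) externally hosted survey.
external_question = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

response = mturk.create_hit(
    Title="Short decision-making study",
    Description="Answer a brief series of questions (about 10 minutes).",
    Reward="1.00",
    MaxAssignments=100,
    AssignmentDurationInSeconds=30 * 60,
    LifetimeInSeconds=7 * 24 * 60 * 60,
    Question=external_question,
    QualificationRequirements=[
        {
            # The maximum HIT restriction: recruit only workers with
            # fewer than 50 approved HITs (i.e., relatively naïve workers).
            "QualificationTypeId": NUMBER_HITS_APPROVED,
            "Comparator": "LessThan",
            "IntegerValues": [50],
            # Hide the HIT entirely from ineligible workers.
            "ActionsGuarded": "DiscoverPreviewAndAccept",
        }
    ],
)
print("HIT created:", response["HIT"]["HITId"])
```

Note that this replaces, rather than supplements, the standard ≥95% approval-rating requirement; researchers who want both restrictions would add a second entry to `QualificationRequirements` using the approval-rate system qualification.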