Asymdystopia: The Threat of Small Biases in Evaluations of Education Interventions that Need to be Powered to Detect Small Impacts

Published: Oct 03, 2017
Publisher: Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance
Authors

Thomas Wei

Key Findings

  • For RCTs, evaluators must either achieve much lower rates of missing data than has been typical in past studies or offer a strong justification for why missing data are unlikely to be related to study outcomes.
  • For RDDs, state-of-the-art statistical methods can protect against inaccuracies from incorrect regression models, but this protection comes at a cost: much larger sample sizes are needed to detect small effects when using these methods.

Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as “small.” While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may create a new challenge for researchers: the need to guard against smaller biases. The purpose of this paper is twofold. First, we examine the potential for small biases to increase the risk of making false inferences as studies are powered to detect smaller impacts, a phenomenon we refer to as asymdystopia. We examine this potential for two of the most rigorous designs commonly used in education research—randomized controlled trials (RCTs) and regression discontinuity designs (RDDs). Second, we recommend strategies researchers can use to avoid or mitigate these biases.
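To make the abstract's core argument concrete, here is a minimal back-of-the-envelope sketch. It is not taken from the report: it assumes a standard two-sided z-test framework, and the significance level, power, and the illustrative bias of 0.02 standard deviations are hypothetical values chosen for exposition. The point it illustrates is that a fixed bias, expressed in standard-error units, looms larger as the minimum detectable effect size (MDES) shrinks.

```python
# Illustrative sketch (not from the report): how a fixed small bias
# inflates the Type I error rate as a study is powered to detect
# smaller impacts. Assumes a two-sided z-test; alpha, power, and the
# 0.02 SD bias are hypothetical values chosen for illustration.
from scipy.stats import norm

ALPHA, POWER, BIAS = 0.05, 0.80, 0.02  # bias in student SD units

z_alpha = norm.ppf(1 - ALPHA / 2)  # 1.96 for a two-sided 5% test
z_power = norm.ppf(POWER)          # 0.84 for 80% power

for mdes in (0.20, 0.10, 0.05):    # minimum detectable effect sizes
    se = mdes / (z_alpha + z_power)  # standard error the design must achieve
    shift = BIAS / se                # bias in standard-error units
    # Probability of falsely rejecting a true null effect of zero,
    # given the test statistic is shifted by the bias:
    false_positive = norm.sf(z_alpha - shift) + norm.cdf(-z_alpha - shift)
    print(f"MDES={mdes:.2f}: SE={se:.3f}, Type I error ~ {false_positive:.2f}")
```

Under these assumed numbers, a bias of 0.02 standard deviations barely moves the false-positive rate when the MDES is 0.20 but roughly quadruples it when the MDES is 0.05, which is the sense in which powering studies to detect smaller impacts demands guarding against smaller biases.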
