Asymdystopia: The Threat of Small Biases in Evaluations of Education Interventions that Need to be Powered to Detect Small Impacts
Publisher: Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance
Oct 03, 2017
- For RCTs, evaluators must either achieve much lower rates of missing data than before or offer a strong justification for why missing data are unlikely to be related to study outcomes.
- For RDDs, state-of-the-art statistical methods can protect against inaccuracies from incorrect regression models, but this protection comes at a cost: much larger sample sizes are needed to detect small effects when using these methods.
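The sample-size pressure behind both points can be sketched with a standard power calculation. The snippet below uses the usual normal-approximation formula for a two-arm RCT (two-sided alpha = 0.05, power = 0.80); it is an illustrative sketch, not a formula taken from the report, and the specific effect sizes (0.10 and 0.05 standard deviations) are chosen only for the example.

```python
from math import ceil

def n_per_arm(mde, alpha_z=1.96, power_z=0.84, sd=1.0):
    """Approximate sample size per arm for a two-arm RCT detecting
    a standardized effect `mde`, using the normal-approximation
    formula n = 2 * (z_alpha + z_beta)^2 * sd^2 / mde^2."""
    return ceil(2 * ((alpha_z + power_z) ** 2) * sd ** 2 / mde ** 2)

# Halving the minimum detectable effect quadruples the required sample,
# which is why studies powered for small impacts are so sensitive to
# even small biases (e.g., from missing data or model misspecification).
small = n_per_arm(0.10)  # effect of 0.10 SD per arm
tiny = n_per_arm(0.05)   # effect of 0.05 SD per arm
print(small, tiny)
```

Because required sample size grows with the inverse square of the detectable effect, a bias of just a few hundredths of a standard deviation can be as large as the impact such a study is designed to detect.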