Guidelines for Impact Evaluation in Education Using Experimental Design
There has been extensive research and investment in strategies that aim to improve the quality of education around the world. Yet despite a growing body of rigorous evidence, the question of what to do in a specific context usually remains only partially answered at best. As a result, numerous impact evaluations are being conducted with the goal of learning what works. The goal of the guidelines presented here is to summarize the evidence and gather references in a single document, saving time for those implementing education impact evaluations through randomized controlled trials. The guidelines focus on supporting evaluations that help policymakers determine how to allocate limited resources to improve the quality of education. The document has five sections. The first reviews empirical findings to provide a general idea of why education is important, which inputs matter, and how to conduct an impact evaluation. The second section provides guidance on how to define the impact evaluation hypothesis. The third section presents a methodology for selecting the sample and setting up a randomized impact evaluation. The fourth section provides information on what data to collect for the evaluation, and the final section discusses how to analyze the data and how to adjust the analysis for deviations from the original design. The guidelines are not exhaustive: their main contribution is to present a methodology for designing an impact evaluation, outline the inputs needed to begin such a design, and provide a structured source of rigorous references.
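To make the randomization step concrete, below is a minimal sketch of stratified random assignment, a common way to set up a randomized impact evaluation. The function name, field names, and data are illustrative assumptions, not part of the guidelines themselves: units (here, schools) are grouped by a stratification variable (here, region) and split evenly into treatment and control within each stratum.

```python
import random

def assign_treatment(units, strata_key, seed=0):
    """Randomly assign units to treatment or control within strata.

    `units` is a list of dicts with an "id" field; `strata_key` names
    the field to stratify on (e.g., region). All names are illustrative.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    strata = {}
    for u in units:
        strata.setdefault(u[strata_key], []).append(u)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)          # random order within the stratum
        half = len(members) // 2      # first half treated, rest control
        for i, u in enumerate(members):
            assignment[u["id"]] = "treatment" if i < half else "control"
    return assignment

# Hypothetical sample: eight schools in two regions of four schools each.
schools = [{"id": i, "region": "north" if i < 4 else "south"} for i in range(8)]
assignment = assign_treatment(schools, "region")
print(assignment)
```

Stratifying before randomizing guarantees balance on the stratification variable by construction, which is especially useful in the small samples typical of education evaluations.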