
Building Evidence for Educator Effectiveness

Evaluation life cycle: Plan and Design; Identify and Recruit Participants; Collect and Store Data; Analyze Data; Report and Use Findings.

Welcome to the 5-minute evaluation series, designed to provide you with clear, easy-to-access information on how to build evidence on programs that support educator effectiveness. The series includes videos and resources written in plain language and designed to be watched or read in 5 minutes or less.

To learn more, watch the first video, Evaluation for educators.

To view all videos in the series, visit our YouTube channel.

  • Evaluation for educators

    Watch the Evaluation for educators video or read the illustrated video script [PDF 1 MB]. This video is designed for people who are not evaluators but want to understand important evaluation issues as they consider evaluating an education program or policy.
    Review the user’s guide [PDF 1 MB] to get the most out of the 5-minute evaluation resource series.
  • Life cycle of an evaluation

    Watch the Life cycle of an evaluation video or read the illustrated video script [PDF 1 MB]. This video provides an overview of the evaluation life cycle and What Works Clearinghouse (WWC) standards for rigorous education evaluation.
    View the one-page infographic Life cycle of an evaluation [PDF 1 MB], which provides definitions for each stage.
  • What Works Clearinghouse (WWC)

    What is the What Works Clearinghouse? [PDF 1 MB] This WWC resource provides the why, who, what, how, and where of the U.S. Department of Education’s trusted source for scientific evidence on what works in education.
    WWC e-brochure [PDF 1 MB] This WWC publication describes what the WWC does and why it is important.
  • Introductory tools for understanding rigorous evaluations

    Life cycle of an evaluation [PDF 1 MB] is a one-page infographic that provides definitions for each stage of an evaluation.
    Group evaluation comparison chart [PDF 1 MB] This infographic provides a side-by-side presentation of three group evaluation designs.
    Evaluation terms for a non-technical audience [PDF 1 MB] This glossary uses non-technical language to define terms often used in evaluation literature.
  • More resources

    Database for Evaluation Resources This database, designed for both non-technical and technical audiences, can be filtered by each step and stage of the evaluation life cycle. It can also be filtered by focus on educator effectiveness, length, audience, and type of resource.
    User’s guide for the database [PDF 1 MB] A guide to getting the most out of the Database for Evaluation Resources.
    Definitions of fields for the database A list of definitions for the filter fields of the database.

  • Pathway to effective evaluation – Stage 1: Plan and Design

    Pathway to evaluation [PDF 2 MB] Guidance on the four steps of the plan and design stage of the evaluation life cycle. Includes resources for more information about each step.
  • Step 1. Determine your research question

    Evaluation programs for strengthening teaching and leadership [PDF 5 MB] In the context of educator effectiveness, pages 18-30 of this resource provide an in-depth explanation of how to develop your research question, including types of outcomes and example research questions.
    How to develop the right research question for program evaluation [PDF 1 MB] A PowerPoint slide deck created by the Corporation for National and Community Service that provides detailed steps for developing appropriate research questions.
  • Step 2. Develop your logic model

    Theory of change [PDF 1 MB] This two-page excerpt from Resch, A., Berk, J., & Akers, L. explains the importance of developing a theory of change at the beginning of a study and provides an example.
    Evaluation programs for strengthening teaching and leadership [PDF 5 MB] Pages 3-17 of this resource provide an in-depth explanation of logic model development, including descriptions of logic model aspects and an example logic model from the TIF program.
  • Step 3. Carefully describe the people and context of your program

    Video: The case for comparison groups and its illustrated video script [PDF 1 MB] This video discusses the importance of choosing the right comparison groups and outlines the most important factors in these decisions.
    Identifying a valid comparison group [PDF 1 MB] This resource discusses how to choose appropriate comparison groups for your study and why this is so important.
  • Step 4. Select your research design

    Group evaluation comparison chart [PDF 1 MB] This infographic provides a side-by-side presentation of three group evaluation designs.
    Randomized controlled trials (RCTs) [PDF 1 MB] This resource describes the steps for conducting randomized controlled trials and what types of studies they are appropriate for.
    Regression discontinuity design (RDD) [PDF 1 MB] An infographic that explains regression discontinuity design using an example study.
    Matched-comparison group design [PDF 1 MB] This resource describes which studies are appropriate for a matched-comparison group design and how to effectively design such a study.
    Difference-in-differences design [PDF 1 MB] An infographic that explains difference-in-differences design using an example study.
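As a quick illustration of the difference-in-differences logic mentioned above, the sketch below works through the arithmetic with made-up outcome means (the numbers are hypothetical, not drawn from any of the linked resources):

```python
# Illustrative difference-in-differences estimate with made-up numbers.
# Mean outcomes (e.g., average test scores) before and after a program:
treated_pre, treated_post = 70.0, 78.0        # group that received the program
comparison_pre, comparison_post = 71.0, 74.0  # similar group that did not

# Change within each group over time
treated_change = treated_post - treated_pre            # 8.0
comparison_change = comparison_post - comparison_pre   # 3.0

# The difference-in-differences estimate nets out the shared trend
# captured by the comparison group's change over the same period.
did_estimate = treated_change - comparison_change
print(did_estimate)  # → 5.0
```

The comparison group's 3-point gain stands in for what would have happened without the program, so only the remaining 5 points are attributed to it.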
  • More resources

    Database for Evaluation Resources This database, designed for both non-technical and technical audiences, can be filtered by each step and stage of the evaluation life cycle. It can also be filtered by focus on educator effectiveness, length, audience, and type of resource.
    Don’t forget Suite 1 resources that provide an overview of the 5-minute Evaluation Resource Series and of the entire evaluation life cycle. Access Suite 1 by opening the previous section.

  • Pathway to effective evaluation – Stage 2: Identify and Recruit Participants

    Pathway to evaluation – Stage 2 [PDF 2 MB] Guidance on the five interrelated steps of the ‘identify and recruit participants’ stage of the evaluation life cycle. Includes resources for more information on each step.
  • Step 5. Identify your comparison groups

    Identifying a valid comparison group [PDF 1 MB] This resource discusses how to choose appropriate comparison groups for your study and why this is so important.
    WWC standards brief: Baseline equivalence [PDF 1 MB] This two-page What Works Clearinghouse (WWC) brief discusses why baseline equivalence is essential for a study to meet WWC standards.
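To make the idea concrete, the sketch below computes a standardized baseline difference, the kind of quantity a baseline-equivalence check examines (the WWC brief linked above gives the actual standards; all numbers here are hypothetical):

```python
import math

# Hypothetical baseline means and standard deviations on an outcome measure
intervention_mean, intervention_sd, n_int = 52.0, 10.0, 100
comparison_mean, comparison_sd, n_comp = 50.5, 10.0, 100

# Pooled standard deviation across the two groups
pooled_sd = math.sqrt(
    ((n_int - 1) * intervention_sd**2 + (n_comp - 1) * comparison_sd**2)
    / (n_int + n_comp - 2)
)

# Standardized baseline difference: an effect-size-style gap that exists
# BEFORE the program starts, which equivalence checks try to rule out.
baseline_diff = (intervention_mean - comparison_mean) / pooled_sd
print(round(baseline_diff, 3))  # → 0.15
```

A baseline gap this large would need to be addressed (for example, by statistical adjustment) before differences at the end of the study could credibly be attributed to the program.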
  • Step 6. Determine your sample size

    Video: The power of sample size – part 1 and its illustrated video script [PDF 1 MB] This 6-minute video explains why an adequate sample size is important to ensure that your results are meaningful, and describes the role that the minimum detectable effect (MDE) plays in estimating the sample size you will need.

    Video: The power of sample size – part 2 and its illustrated video script [PDF 1 MB] This 6-minute video describes how decisions related to cluster experimental designs and the availability of pre-intervention data affect the sample size you will need.
    The power of sample size [PDF 1 MB] This two-page brief summarizes some of the key points of the two videos.
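The sample-size intuition can be sketched with a common textbook approximation (an illustration, not taken from the videos or from WWC guidance): for a two-group comparison of means with equal group sizes and individual random assignment, the required sample per group grows rapidly as the minimum detectable effect (MDE) shrinks.

```python
import math

def n_per_group(mde, z_alpha=1.96, z_power=0.84):
    """Approximate sample size per group to detect a standardized
    effect of size `mde`, assuming a two-sided 5% significance level
    (z = 1.96) and 80% power (z = 0.84). Textbook approximation only."""
    return math.ceil(2 * (z_alpha + z_power) ** 2 / mde ** 2)

# Halving the effect you want to detect roughly quadruples the sample:
print(n_per_group(0.50))  # → 63 per group
print(n_per_group(0.25))  # → 251 per group
```

This is why the videos stress deciding on the MDE early: it largely determines how many participants the study must recruit.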
  • Step 7. Recruit your study participants

    Excerpt from Recognizing and conducting opportunistic experiments in education: a guide for policymakers and researchers [PDF 1 MB], U.S. Department of Education, REL (2014). This one-page excerpt provides suggestions and other considerations related to recruiting participants in opportunistic studies (where there already happens to be an intervention taking place).
  • Step 8. Address changes in your sample

    WWC standards brief: Attrition standards [PDF 1 MB], U.S. Department of Education, REL (2014). This two-page What Works Clearinghouse brief discusses two types of attrition, overall and differential (when the proportion of participants leaving the study differs between the intervention group and the comparison group), and provides the WWC’s standards related to attrition.
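The two attrition concepts can be illustrated with made-up participant counts (the WWC brief linked above defines the actual standards; these numbers are purely hypothetical):

```python
# Illustrative overall and differential attrition with made-up counts.
intervention_start, intervention_end = 200, 170   # 30 participants left
comparison_start, comparison_end = 200, 184       # 16 participants left

intervention_attrition = (intervention_start - intervention_end) / intervention_start
comparison_attrition = (comparison_start - comparison_end) / comparison_start

# Overall attrition: share of the combined starting sample that left
overall = ((intervention_start - intervention_end)
           + (comparison_start - comparison_end)) / (intervention_start + comparison_start)

# Differential attrition: gap between the two groups' attrition rates
differential = abs(intervention_attrition - comparison_attrition)

print(round(overall, 3), round(differential, 3))  # → 0.115 0.07
```

Even when overall attrition looks modest, a large gap between the groups can bias results, which is why the WWC standards consider the two measures together.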
  • Step 9. Reduce bias in your comparison group

    WWC standards brief: Confounding factors [PDF 1 MB] Confounding factors make it impossible to determine whether observed changes in your intervention group were due to your program or to something else. This three-page What Works Clearinghouse brief defines confounding factors, discusses why they matter, and provides examples.
  • More resources

    Database for Evaluation Resources This database, designed for both non-technical and technical audiences, can be filtered by each step and stage of the evaluation life cycle. It can also be filtered by focus on educator effectiveness, length, audience, and type of resource.
    Don’t forget Suite 1 resources: The Evaluation Life Cycle and Suite 2 resources: Plan and Design. You can access these resources above by opening their previous sections.