
Evaluation Resources

Below, we offer a set of resource documents to help you understand, design, and implement evaluation methods for educator development programs in a way that is consistent with the U.S. Department of Education's What Works Clearinghouse (WWC) standards.


Each entry below lists the resource number and title, the author(s), the hosting or publishing organization, the year of publication, the resource type, the length, and the intended audience, followed by a short description.
Resource 1: Checklist For Reviewing an RCT of a Social Program or Project, To Assess Whether It Produced Valid Evidence
Author(s): Coalition for Evidence-Based Policy | Organization: Coalition for Evidence-Based Policy | Year: 2010 | Type: Tool | Length: 8 pages | Audience: R
The Coalition for Evidence-Based Policy developed this tool to identify key items to help researchers assess whether the results of a randomized controlled trial of a social program, project, or strategy ("intervention") produced valid evidence on the intervention's effectiveness. The tool is a (non-exhaustive) checklist that addresses designing a study, gauging the equivalence of the intervention and control groups, selecting outcome measures, reporting the intervention's effects, and determining whether there is enough evidence to assert that the intervention is effective. This checklist is a valuable aid, but good judgment may still be needed to gauge, for example, whether a deviation from one or more checklist items is serious enough to undermine the study's findings.

Resource 2: Demystifying the What Works Clearinghouse: A Webinar for Developers and Researchers
Author(s): Constantine, J.; Cody, S.; Seftor, N.; Lesnick, J.; and McCallum, D. | Organization: Institute of Education Sciences: What Works Clearinghouse at the U.S. Department of Education | Year: 2013 | Type: Webinar | Length: 15 slides, 1 hour | Audience: R
The What Works Clearinghouse (WWC) developed this webinar with materials to increase researchers' knowledge of the key features of the WWC review process, including how they identify research studies and evaluate them against WWC standards. This discussion also provided details on: the role of the WWC, why it is important to have standards for research, how WWC reviews are conducted, what is done to ensure the quality and accuracy of WWC reviews and reports, how to submit questions to the WWC, how to submit a study for review, and WWC resources for researchers and developers. This webinar is a valuable resource for researchers who want to understand the features of high quality research, especially if they intend to submit their work for WWC review.

Resource 3: Designing Strong Studies: Developing Studies Consistent with What Works Clearinghouse Evidence Standards
Author(s): Agodini, R.; Constantine, J.; Seftor, N.; & Neild, R.C. | Organization: Institute of Education Sciences | Year: 2014 | Type: Webinar | Length: 88 min., 34 slides | Audience: R
An Institute of Education Sciences webinar to help researchers design and execute studies more likely to meet What Works Clearinghouse's (WWC) standards for research studies. The webinar focuses on several key aspects of successfully conducting randomized controlled trials and quasi-experimental study designs in an educational setting, including such issues as attrition. The webinar draws on WWC's resources to explain how to identify and maintain a research sample, how to collect the necessary data, how to conduct analysis consistent with WWC standards, and strategies to address some of the common pitfalls in effectiveness research.

Resource 4: Gold-Standard Program Evaluations, on a Shoestring Budget
Author(s): Baron, J. | Organization: Education Week | Year: 2011 | Type: Brief or Summary | Length: 2 pages | Audience: P
An Education Week article advocating for the use of low-cost randomized controlled trials (RCT) in education research. The article describes two examples of recent RCTs that were conducted at low cost, yet produced findings of policy and practical importance. The author suggests that with improvements in information technology and the availability of high-quality administrative data, it is now more feasible to conduct gold-standard randomized evaluations on a shoestring budget.

Resource 5: Toward School Districts Conducting Their Own Rigorous Program Evaluations: Final Report on the "Low Cost Experiments to Support Local School District Decisions" Project
Author(s): Newman, D. | Organization: Empirical Education Inc. | Year: 2008 | Type: Methods Report | Length: 47 pages | Audience: R
Empirical Education Inc. developed this report to help school systems increase the efficiency and lower the cost of randomized experiments. The report focuses on how to design and conduct a randomized experiment, on school system decision-making processes, and on the challenges to using research findings. This report is targeted to three audiences: "1) educators with little training or interest in research methodologies but who are in decision-making positions, such as senior central office administrators, 2) district research managers with some, but often rusty or outdated, understanding of research methodology and statistical analysis, and 3) technical experts who may be called upon to review the evidence." (p. 2)

Resource 6: When Is It Possible to Conduct a Randomized Controlled Trial in Education at Reduced Cost, Using Existing Data Sources? A Brief Overview
Author(s): Coalition for Evidence-Based Policy | Organization: Council for Excellence in Government | Year: 2007 | Type: Guide; Brief or Summary | Length: 13 pages | Audience: P
The Council for Excellence in Government developed this guide to advise researchers, policymakers and others on when it's possible to conduct a low-cost, high-quality randomized controlled trial (RCT) in education. The guide includes two main sections: 1) describing the conditions that enable researchers to conduct an RCT at reduced cost (e.g. high-quality administrative data and an active partnership with senior school or district officials), and 2) providing examples of well-designed RCTs conducted at a reduced cost. This guide is a valuable resource for policymakers, researchers, and others who automatically discard RCTs as an approach because they believe such studies are always too costly and too burdensome on schools to be practical.

Resource 7: Which Study Designs Are Capable of Producing Valid Evidence About A Program's Effectiveness? A Brief Overview
Author(s): Coalition for Evidence-Based Policy | Organization: Coalition for Evidence-Based Policy | Year: 2014 | Type: Guide; Brief or Summary | Length: 8 pages | Audience: P
The Coalition for Evidence-Based Policy developed this guide to help policy officials, program providers, and researchers identify effective social programs and ways to assess a program's effectiveness. The guide provides a brief overview of which studies (randomized controlled trials and quasi-experiments, such as matched comparison groups or regression discontinuity designs) can produce valid evidence about a program's effectiveness. The final section identifies resources for readers seeking more detailed information or assistance. This guide is a valuable resource for policy officials, program providers, and researchers who want to understand the conditions under which different evaluation approaches are most likely to provide an accurate assessment of a program's effectiveness.

Resource 8: What Works Clearinghouse Procedures and Standards Handbook Version 3.0
Author(s): What Works Clearinghouse | Organization: Institute of Education Sciences | Year: 2014 | Type: Guide | Length: 91 pages | Audience: R
The Institute of Education Sciences developed this guide to help researchers design studies that meet the standards to claim causal inference. The guide focuses on randomized controlled trials and quasi-experimental designs (it also briefly introduces standards for regression discontinuity and single-case designs). The guide defines the basic steps used to develop a review protocol, identify the relevant literature, assess research quality, and summarize evidence of effectiveness. It also briefly describes the set of products that summarize the results. This guide is a valuable resource for researchers and practitioners who want to understand why studies meet or do not meet the standards and why education programs, policies, and practices are deemed effective or not.

Resource 9: Rigorous Program Evaluations on a Budget: How Low-Cost Randomized Controlled Trials Are Possible in Many Areas of Social Policy
Author(s): Coalition for Evidence-Based Policy | Organization: Coalition for Evidence-Based Policy | Year: 2012 | Type: Guide; Brief or Summary | Length: 10 pages | Audience: P
The Coalition for Evidence-Based Policy developed this guide to illustrate the feasibility and value of low-cost randomized controlled trials (RCT) for policy officials and researchers, by providing concrete examples from diverse program areas. The guide provides a brief overview of what RCTs are and how to reduce their cost. It then presents five examples of quality, low-cost RCTs in four areas: criminal justice, child welfare, a community-wide parenting intervention, and a teacher-incentive program. This guide is a valuable resource for policymakers, researchers, and others who automatically discard RCTs as an approach because of the perception that such studies are always too costly and too administratively burdensome on schools to be practical.

Resource 10: Making the Most of Opportunities to Learn What Works: A School District's Guide
Author(s): Akers, L.; Resch, A.; & Berk, J. | Organization: Regional Education Laboratory Program, Institute of Education Sciences | Year: 2014 | Type: Guide | Length: 6 pages | Audience: P
The Regional Educational Laboratory program developed this guide to help school district leaders assess the effectiveness of programs through a randomized controlled trial (RCT). The guide provides details on conducting an RCT, describing three key steps: 1) identifying opportunities to conduct an RCT, while minimizing the time and resources required, 2) gauging the feasibility of conducting an RCT, and 3) following the key steps in conducting an RCT.

Resource 11: Addressing Attrition Bias in Randomized Controlled Trials: Considerations for Systematic Evidence Reviews
Author(s): Deke, J.; Sama-Miller, E.; & Hershey, A. | Organization: U.S. Department of Health and Human Services | Year: 2015 | Type: Methods Report | Length: 21 pages | Audience: R
This U.S. Department of Health and Human Services paper discusses whether an attrition standard based on information from education research is appropriate for use with research that the Home Visiting Evidence of Effectiveness Review (HomVEE) examines. The paper also provides an example of how to assess whether the attrition standard for one systematic evidence review fits other systematic reviews, along with considerations for adopting or modifying the standard for alternative contexts.

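To make the attrition concepts in this resource concrete, here is a minimal sketch (not taken from the paper) of the two quantities such standards are built on: overall attrition and differential attrition between the treatment and control groups. All sample counts below are hypothetical, and whether a given combination counts as "high" attrition depends on the boundary adopted by the specific evidence review.

```python
def attrition_rates(n_assigned_t, n_analyzed_t, n_assigned_c, n_analyzed_c):
    """Return (overall, differential) attrition as proportions."""
    overall = 1 - (n_analyzed_t + n_analyzed_c) / (n_assigned_t + n_assigned_c)
    attr_t = 1 - n_analyzed_t / n_assigned_t
    attr_c = 1 - n_analyzed_c / n_assigned_c
    differential = abs(attr_t - attr_c)
    return overall, differential

# Illustrative numbers only: 500 students assigned per arm, 430 and 450 analyzed.
overall, differential = attrition_rates(500, 430, 500, 450)
print(f"overall attrition: {overall:.1%}, differential attrition: {differential:.1%}")
# Judging this combination against a liberal or conservative boundary is the step
# the paper examines when asking whether an education-based standard fits HomVEE.
```
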
Resource 12: Estimating causal effects using experimental and observational designs
Author(s): Schneider, B.; Carnoy, M.; Kilpatrick, J.; Schmidt, W.; & Shavelson, R. | Organization: American Educational Research Association | Year: 2007 | Type: Methods Report | Length: 158 pages | Audience: R
This American Educational Research Association (AERA) report is intended to help researchers, educators, and policymakers understand causal estimation by describing the logic of causal inference and reviewing designs and methods that allow researchers to draw causal inferences about the effectiveness of educational interventions. The purpose of the report is to explain the value of quasi-experimental techniques that can be used to approximate randomized experiments. The report does not address many of the nuances of experimental and quasi-experimental designs.

Resource 13: Limitations of Experiments in Education Research
Author(s): Schanzenbach, D. | Organization: Association for Education Finance and Policy | Year: 2012 | Type: Brief or Summary | Length: 14 pages | Audience: P
The Association for Education Finance and Policy developed this brief as a caution regarding the growing traction of randomized experiments in education circles in recent years. The brief succinctly describes significant limitations of experiments, including feasibility, a program's complexity, time, generalizability of results, lack of insight into what makes a program work, behavior changes due to participation in an experiment, cost, ethics, fidelity of implementation, the temptation to mine the data to obtain desired results, and the fact that experiments may not be superior to other evaluation approaches. This brief also suggests ways to mitigate these limitations when designing a study.

Resource 14: New Empirical Evidence for the Design of Group Randomized Trials in Education
Author(s): Jacob, R.; Zhu, P.; & Bloom, H.S. | Organization: MDRC | Year: 2009 | Type: Methods Report | Length: 57 pages | Audience: R
This MDRC working paper is a practical guide for designing group randomized studies to measure the impacts of educational interventions. The paper provides information about the values of parameters that influence the precision of impact estimates (intra-class correlations and R-squared values), and includes outcomes other than standardized test scores and data with a three-level, rather than a two-level, structure. The paper also discusses the role of error in estimates of design parameters and the implications error has for design decisions.

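As a rough companion to this paper, the sketch below shows how the design parameters it reports (intra-class correlations and R-squared values) feed into precision, using the standard two-level approximation of the minimum detectable effect size (MDES) for a cluster-randomized trial. This is illustrative code written for this page, not code from the paper, and every input value is hypothetical.

```python
import math

def mdes_cluster_rct(icc, n_clusters, n_per_cluster, r2_cluster=0.0, r2_student=0.0,
                     p_treat=0.5, multiplier=2.8):
    """Approximate minimum detectable effect size (in standard deviation units) for a
    two-level cluster-randomized trial. A multiplier of about 2.8 corresponds to 80%
    power at alpha = .05, two-tailed, when the number of clusters is reasonably large."""
    denom = p_treat * (1 - p_treat) * n_clusters
    var_between = icc * (1 - r2_cluster) / denom
    var_within = (1 - icc) * (1 - r2_student) / (denom * n_per_cluster)
    return multiplier * math.sqrt(var_between + var_within)

# Illustrative values only: 40 schools with 60 students each, ICC = .15, and a
# school-level pretest covariate explaining half of the between-school variance.
print(round(mdes_cluster_rct(icc=0.15, n_clusters=40, n_per_cluster=60, r2_cluster=0.5), 3))
```
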
Resource 15: The core analytics of randomized experiments for social research
Author(s): Bloom, H. | Organization: MDRC | Year: 2006 | Type: Methods Report | Length: 41 pages | Audience: R
This MDRC paper on research methodology examines the core analytic elements of randomized experiments for social research. Its goal is to provide a compact discussion for researchers of the design and analysis of randomized experiments. Design issues considered include choosing the size of a study sample and its allocation to experimental groups, using covariates or blocking to improve the precision of impact estimates, and randomizing intact groups instead of individuals.

Resource 16: Understanding Variation in Treatment Effects in Education Impact Evaluations: An Overview of Quantitative Methods
Author(s): Schochet, P.Z.; Puma, M.; & Deke, J. | Organization: Institute of Education Sciences | Year: 2014 | Type: Methods Report | Length: 50 pages | Audience: R
An Institute of Education Sciences-developed report to help education researchers evaluate programs using randomized controlled trials or quasi-experimental designs with treatment and comparison groups. This report summarizes the research literature on quantitative methods for assessing the impacts of educational interventions on instructional practices and student learning and provides technical guidance about the use and interpretation of these methods. This report is a valuable resource for education researchers with an intermediate to advanced knowledge of quantitative research methods who could benefit from summary information and references to recent papers on quantitative methods. The report examines variations in treatment effects across students, educators, and sites in education evaluations.

Resource 17: Recognizing and Conducting Opportunistic Experiments in Education: A Guide for Policymakers and Researchers. REL 2014-037
Author(s): Resch, A.; Berk, J.; & Akers, L. | Organization: Institute of Education Sciences National Center for Education Evaluation and Regional Assistance | Year: 2014 | Type: Guide | Length: 28 pages | Audience: P
An Institute of Education Sciences-developed guide designed to help researchers recognize and conduct opportunistic experiments. The authors begin by defining opportunistic experiments and providing examples, then discuss key steps and issues to consider when choosing potential experiments. The authors also outline the critical steps to successfully completing an opportunistic experiment and the potential to conduct such studies at a relatively low cost.

Resource 18: Designing Quasi-Experiments: Meeting What Works Clearinghouse Standards Without Random Assignment
Author(s): Lesnick, J.; Seftor, N.; & Knab, J. | Organization: Institute of Education Sciences: What Works Clearinghouse at the U.S. Department of Education | Year: 2015 | Type: Webinar | Length: 1 hour, 13 slides | Audience: R
This What Works Clearinghouse webinar responds to questions about WWC standards and procedures for studies that do not use random assignment; its intent is to inform high-quality non-experimental research design. Within a QED, baseline equivalence must be established for the treatment and control groups. Under the WWC standards, a baseline difference of 0.05 standard deviations or less satisfies baseline equivalence, a difference between 0.05 and 0.25 satisfies it only with statistical adjustment, and a difference greater than 0.25 does not satisfy baseline equivalence. The webinar covers the What Works Clearinghouse and its standards, the characteristics of quasi-experimental designs (QEDs) along with tips and cautions, resources, and ways to stay informed. The webinar closes with questions and answers.

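For readers who want to see the baseline-equivalence rule in practice, the sketch below computes a standardized baseline difference from summary statistics and classifies it against the thresholds described above. The summary statistics are invented for illustration, and the code is not part of the webinar materials.

```python
import math

def baseline_equivalence(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized baseline difference (in pooled-SD units) and the category it
    falls into under the thresholds described in the webinar entry above."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    es = abs(mean_t - mean_c) / pooled_sd
    if es <= 0.05:
        category = "satisfies baseline equivalence"
    elif es <= 0.25:
        category = "requires statistical adjustment"
    else:
        category = "does not satisfy baseline equivalence"
    return es, category

# Hypothetical pretest summary statistics for treatment and comparison groups.
print(baseline_equivalence(mean_t=51.0, sd_t=10.0, n_t=120, mean_c=49.5, sd_c=9.5, n_c=115))
```
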
Resource 19: Which comparison-group ("quasi-experimental") study designs are most likely to produce valid estimates of a program's impact: A brief overview and sample review form
Author(s): Coalition for Evidence-Based Policy | Organization: Coalition for Evidence-Based Policy | Year: 2014 | Type: Guide | Length: 7 pages | Audience: P
The Coalition for Evidence-Based Policy developed this guide to help researchers select comparison-group (quasi-experimental) studies that are most likely to provide the same level of rigor as a randomized controlled trial. The guide describes examples of these studies and how to identify comparison groups in a way that increases the likelihood of producing valid results. It identifies regression-discontinuity designs as an example of approaches that can meet the required conditions for validity of results. The guide also provides a form that can be used to help determine whether a comparison-group study produces scientifically valid estimates of a program's impact.

Resource 20: A Practical Guide to Regression Discontinuity
Author(s): Jacob, R.; Zhu, R.; Somers, M-A.; & Bloom, H. | Organization: MDRC | Year: 2012 | Type: Methods Report | Length: 100 pages | Audience: R
This MDRC methodology paper is intended to serve as a practitioners' guide to implementing regression discontinuity (RD) designs. It uses approachable language and offers best practices and general guidance to those attempting an RD analysis. The paper discusses in detail: 1) approaches to estimation, 2) assessing the internal validity of the design, 3) assessing the precision of an RD design, and 4) determining the generalizability of the findings. In addition, the paper illustrates the various RD techniques available to researchers and explores their strengths and weaknesses, using a simulated dataset.

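The sketch below mirrors, in a very simplified form, the kind of simulated regression discontinuity exercise the guide walks through: generate data with a known effect at a rating cutoff, then estimate that effect with a local linear regression within a bandwidth. It is an illustration under assumed functional form and bandwidth choices, not the guide's own estimator or dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, cutoff, true_effect, bandwidth = 5000, 50.0, 0.4, 10.0

rating = rng.uniform(0, 100, n)             # running variable
treat = (rating < cutoff).astype(float)     # sharp RD: treatment assigned below the cutoff
outcome = 0.02 * rating + true_effect * treat + rng.normal(0, 1, n)

# Local linear regression within the bandwidth: centered rating, treatment indicator,
# and their interaction (allows different slopes on each side of the cutoff).
in_bw = np.abs(rating - cutoff) <= bandwidth
centered = rating[in_bw] - cutoff
X = sm.add_constant(np.column_stack([treat[in_bw], centered, treat[in_bw] * centered]))
fit = sm.OLS(outcome[in_bw], X).fit()
print(f"estimated effect at the cutoff: {fit.params[1]:.3f}")  # compare to true_effect
```
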
Resource 21: Modern Regression Discontinuity Analysis
Author(s): Bloom, H. | Organization: Journal of Research on Educational Effectiveness | Year: 2009 | Type: Methods Report | Length: 62 pages | Audience: R
MDRC developed this paper to provide researchers with a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis. The paper briefly chronicles the history of RD analysis and summarizes its past applications. It explains how, in theory, an RD analysis can identify an average effect of treatment for a population, and how different types of RD analyses ("sharp" versus "fuzzy") can identify average treatment effects for different conceptual subpopulations. It introduces graphical methods, parametric statistical methods, and nonparametric statistical methods for estimating treatment effects from RD data, plus validation tests and robustness tests for assessing these estimates. It considers generalizing RD findings and presents several views on and approaches to the issue, and it notes some important issues to pursue in future research about or applications of RD analysis. This paper is a valuable resource for researchers who want to use RD analysis for estimating the effects of interventions or treatments.

Resource 22: What Works Clearinghouse. Single-Case Design Technical Documentation
Author(s): Kratochwill, T.R.; Hitchcock, J.; Horner, R.H.; Levin, J.R.; Odom, S.L.; Rindskopf, D.M.; & Shadish, W.R. | Organization: Institute of Education Sciences: What Works Clearinghouse at the U.S. Department of Education | Year: 2010 | Type: Guide; Methods Report | Length: 34 pages | Audience: R
The What Works Clearinghouse (WWC) developed this guide to help researchers implement effective single-case designs (SCDs). The guide focuses on the features of SCDs, threats to internal validity, standards for SCDs, the visual analysis of the data on which SCD researchers traditionally have relied to answer research questions, and the estimation of effect sizes. This guide is a valuable resource for researchers who want to understand how to design, implement, and evaluate an SCD that meets WWC standards.

Resource 23: Program Evaluation: A Variety of Rigorous Methods Can Help Identify Effective Interventions. Report to Congressional Requesters. GAO-10-30
Author(s): Kingsbury, N. | Organization: U.S. Government Accountability Office | Year: 2009 | Type: Methods Report | Length: 49 pages | Audience: R
The U.S. Government Accountability Office (GAO) initially developed this report to help congressional requesters choose which federal social programs to fund. After the private, nonprofit Coalition for Evidence-Based Policy undertook the Top Tier Evidence initiative to help federal programs identify interventions for which randomized experiments (REs) show sizable, sustained benefits to participants or society, GAO was asked to examine (1) the validity and transparency of the Coalition's process, (2) how its process compared to that of six federally supported efforts to identify effective interventions, (3) the types of interventions best suited for assessment with randomized experiments, and (4) alternative rigorous methods used to assess effectiveness. The report assesses the Top Tier initiative process and standards, confirms that REs can provide the most credible evidence of effectiveness under certain conditions, and describes available rigorous alternatives (quasi-experimental comparison groups including regression discontinuity designs, statistical analyses of observational data including interrupted time-series, and in-depth case studies). This report is a valuable resource for researchers who aim to demonstrate program effectiveness using a variety of rigorous methods, especially as they seek to understand when an RE is necessary versus when other methods are suitable for demonstrating program effectiveness.

Resource 24: 90-Day Cycle Handbook
Author(s): Park, S.; & Takahashi, S. | Organization: Carnegie Foundation for the Advancement of Teaching | Year: 2013 | Type: Guide | Length: 32 pages | Audience: P
The Carnegie Foundation for the Advancement of Teaching developed this handbook as an introduction to 90-day, short-duration research cycles. The handbook provides a broad overview of the purpose and essential characteristics of a 90-day cycle, followed by a discussion of the processes and steps involved in conducting a 90-day cycle. In addition, the handbook includes tips and examples to facilitate their execution. The handbook concludes with a description of the roles and responsibilities of participants in a 90-day cycle.

Resource 25: A Conceptual Framework for Studying the Sources of Variation in Program Effects
Author(s): Weiss, M.J.; Bloom, H.S.; & Brock, T. | Organization: MDRC | Year: 2013 | Type: Methods Report | Length: 59 pages | Audience: R
This MDRC paper presents a conceptual framework for designing and interpreting research on variation in program effects and the sources of this variation. The goal of the framework is to enable researchers to offer better guidance to policymakers and program operators on the conditions and practices that are associated with larger and more positive results. Throughout the paper, the authors include concrete empirical examples to illustrate points.

Resource 26: An Environmental Scan of Tools and Strategies that Measure Progress in School Reform
Author(s): Griffin, P.; Woods, K.; & Nguyen, C. | Organization: Victorian (AU) Department of Education and Training | Year: 2005 | Type: Guide | Length: 87 pages | Audience: P
The Victorian (AU) Department of Education and Training developed this guide to help professionals evaluate school reform. The guide begins with an overview of tools and methods for measuring the progress and impact of school reform initiatives in the U.S. and other countries. It reviews the trend for "evidence-based" reform in education and discusses the importance of rigor in the design of evaluation studies, sampling and data collection procedures, and the analysis and interpretation of evidence. After focusing on what not to do, the guide switches to providing detailed guidelines for evaluation that can be applied to a range of reform programs. One section discusses the evaluation of the implementation, processes, and outcomes of learning standards in Victoria. The final section briefly describes three case studies in which measurement and monitoring of student outcomes have been incorporated into program evaluation. This guide is a valuable resource for evaluators who need to identify appropriate indicators of successful implementation, including student academic outcomes.

Resource 27: Checklist for Assessing USAID Evaluation Reports
Author(s): USAID | Organization: USAID | Year: 2013 | Type: Tool | Length: 7 pages | Audience: P
The U.S. Agency for International Development Evaluation Report Checklist helps evaluators review and strengthen draft evaluation reports. The checklist details 15 critical factors (noted in boldface type) that should be addressed in early drafts of the evaluation report. The authors recommend assessing the final report against all 76 factors to ensure high technical quality, a strong executive summary, and the targeting of recommendations for decision-making purposes.

Resource 28: Approaches to Evaluating Teacher Preparation Programs in Seven States
Author(s): Meyer, S.J.; Brodersen, R.M.; & Linick, M.A. | Organization: Regional Education Laboratory Program, Institute of Education Sciences | Year: 2014 | Type: Guide | Length: 40 pages | Audience: P
The Regional Educational Laboratory Central conducted this qualitative study to inform policymakers, researchers, educators, and others who are concerned with the quality of preparing teachers for their profession. The report focuses on how seven central states evaluate their teacher-preparation programs and the changes they are making to improve the states' approaches to evaluation. Most changes involve paying more attention to the performance of program graduates, developing common data-collection tools and data systems, and developing new ways to report evaluation data. This report is a valuable resource for policymakers and practitioners and provides a good example of a qualitative research study that can inform decision-making.

Resource 29: Forming a Team to Ensure High-Quality Measurement in Education Studies
Author(s): Kisker, E.; & Boller, K. | Organization: Regional Education Laboratory Program, Institute of Education Sciences | Year: 2014 | Type: Guide | Length: 18 pages | Audience: P
The Regional Educational Laboratory (REL) program developed this brief to help identify the expertise needed to formulate an appropriate and effective measurement strategy and to collect high-quality data. The primary audience for this brief is research teams – such as those at RELs – who work in partnership with school districts or states. The brief provides tips for finding staff and consultants with the needed expertise, and outlines their main responsibilities. It also outlines typical measurement tasks and discusses how measurement team members can work together to complete these tasks successfully.

Resource 30: A Guide for Monitoring District Implementation of Educator Evaluation Systems
Author(s): Cherasaro, T.; Yanoski, D.; & Swackhamer, L. | Organization: Regional Education Laboratory Program, Institute of Education Sciences | Year: 2015 | Type: Guide | Length: 38 pages | Audience: P
This Regional Educational Laboratory (REL) Central-developed guide walks users through a three-step process that states can use to monitor district implementation of educator evaluation systems to comply with the requirements of the Elementary and Secondary Education Act flexibility requests. The guide also offers example tools developed together with the Missouri Department of Elementary and Secondary Education that states can adapt to provide districts with greater clarity on expectations about educator evaluation systems.

Resource 31: Webinar Series on Developing a Successful Professional Development Program Evaluation: Webinar 1 of 5: Critical Elements of Professional Development Planning & Evaluation
Author(s): REL Southeast | Organization: Regional Education Laboratory Program, Institute of Education Sciences | Year: 2016 | Type: Webinar; Slide presentation | Length: 35 slides, 62 minutes | Audience: P
The Regional Educational Laboratory (REL) Southeast developed a series of five recorded webinars with materials designed to help professionals rigorously evaluate professional-development programs. The series focuses on key aspects of the evaluation process, including planning and design, identifying and measuring target outcomes, analyzing results, interpreting results, and presenting findings. This series is a valuable resource for program directors who want to understand how to design and implement a high-quality evaluation. The series is also useful for seasoned evaluators who want to review important evaluation issues. This first webinar introduces practitioners to What Works Clearinghouse standards for randomized controlled trials and quasi-experimental designs. It also provides recommendations so practitioners have the resources necessary to design an evaluation aligned to best practices for making causal conclusions.

Resource 32: Webinar Series on Developing a Successful Professional Development Program Evaluation: Webinar 2 of 5: Going Deeper into Planning the Design
Author(s): REL Southeast | Organization: Regional Education Laboratory Program, Institute of Education Sciences | Year: 2016 | Type: Webinar; Slide presentation | Length: 22 slides, 49 minutes | Audience: P
The Regional Educational Laboratory Southeast developed a series of five recorded webinars with materials designed to help professionals rigorously evaluate professional-development programs. The series focuses on key aspects of the evaluation process, including planning and design, identifying and measuring target outcomes, analyzing results, interpreting results, and presenting findings. This series is a valuable resource for program directors who want to understand how to design and implement a high-quality evaluation. The series is also useful for seasoned evaluators who want to review important evaluation issues. This second webinar builds on the first in the series and provides an in-depth discussion of the critical features of randomized controlled trials and quasi-experimental designs, including the random assignment process, attrition, baseline equivalence, confounding factors, outcome eligibility, and power analysis.

Resource 33: Webinar Series on Developing a Successful Professional Development Program Evaluation: Webinar 3 of 5: Going Deeper into Identifying & Measuring Target Outcomes
Author(s): REL Southeast | Organization: Regional Education Laboratory Program, Institute of Education Sciences | Year: 2016 | Type: Webinar; Slide presentation | Length: 20 slides, 49 minutes | Audience: P
The Regional Educational Laboratory Southeast developed a series of five recorded webinars with materials designed to help professionals rigorously evaluate professional-development programs. The series focuses on key aspects of the evaluation process, including planning and design, identifying and measuring target outcomes, analyzing results, interpreting results, and presenting findings. This series is a valuable resource for program directors who want to understand how to design and implement a high-quality evaluation. The series is also useful for seasoned evaluators who want to review important evaluation issues. This third webinar mainly focuses on identifying evaluation outcomes and describes an example from Mississippi, which created two measures (a classroom observation tool and a teacher knowledge measure) to evaluate professional development around a literacy initiative. The third webinar also discusses participant recruitment.

Resource 34: Webinar Series on Developing a Successful Professional Development Program Evaluation: Webinar 4 of 5: Going Deeper into Analyzing Results
Author(s): REL Southeast | Organization: Regional Education Laboratory Program, Institute of Education Sciences | Year: 2016 | Type: Webinar | Length: 31 slides, 38 minutes | Audience: P
The Regional Educational Laboratory Southeast developed a series of five recorded webinars with materials designed to help professionals rigorously evaluate professional-development programs. The series focuses on key aspects of the evaluation process, including planning and design, identifying and measuring target outcomes, analyzing results, interpreting results, and presenting findings. This series is a valuable resource for program directors who want to understand how to design and implement a high-quality evaluation. The series is also useful for seasoned evaluators who want to review important evaluation issues. This fourth webinar mainly focuses on considerations for quantitative and qualitative analyses. For quantitative analysis, the webinar focuses on calculating attrition, calculating baseline equivalence, and statistical adjustments. For qualitative analysis, the webinar focuses on triangulating data by using multiple methods to gather data on the same question, using different interviewers to avoid the biases that arise when data collectors and interviewers work alone, and using multiple perspectives to interpret a set of data.

Resource 35: Webinar Series on Developing a Successful Professional Development Program Evaluation: Webinar 5 of 5: Going Deeper into Interpreting Results & Presenting Findings
Author(s): REL Southeast | Organization: Regional Education Laboratory Program, Institute of Education Sciences | Year: 2016 | Type: Webinar; Slide Presentation | Length: 15 slides, 24 minutes | Audience: P
The Regional Educational Laboratory (REL) Southeast developed a series of five recorded webinars with materials designed to help professionals rigorously evaluate professional-development programs. The series focuses on key aspects of the evaluation process, including planning and design, identifying and measuring target outcomes, analyzing results, interpreting results, and presenting findings. This series is a valuable resource for program directors who want to understand how to design and implement a high-quality evaluation. The series is also useful for seasoned evaluators who want to review important evaluation issues. This fifth webinar mainly focuses on interpreting results, especially using effect sizes and statistical significance. It also briefly addresses presenting findings to different groups of practitioners and mentions a REL Northeast and Islands resource for those interested in the details of this topic.

Resource 36: Partially Nested Randomized Controlled Trials in Education Research: A Guide to Design and Analysis
Author(s): Lohr, S.; Schochet, P.Z.; & Sanders, E. | Organization: Institute of Education Sciences | Year: 2014 | Type: Methods Report | Length: 179 pages | Audience: R
An Institute of Education Sciences report that provides guidance to applied education researchers on how to recognize, design, and analyze data from partially nested randomized controlled trials to rigorously assess whether an intervention is effective. The paper addresses design issues, such as possibilities for random assignment, cluster formation, statistical power, and confounding factors that may mask the contribution of the intervention. The paper also discusses basic statistical models that adjust for the clustering of treatment students within intervention clusters, associated computer code for estimation, and a step-by-step guide, using examples, on how to estimate the models and interpret the output.

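As a loose illustration of the kind of clustering-aware model the report discusses, the sketch below fits a random-intercept model to simulated data in which only treatment students are grouped into intervention clusters; control students are given singleton "clusters," a common workaround for partially nested data. This is a simplification of the report's own specifications and code, and every value in the simulation is invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated data: 20 intervention clusters of 10 students, plus 200 unclustered controls.
t_cluster = np.repeat(np.arange(20), 10)
treat_students = pd.DataFrame({
    "treat": 1,
    "cluster": [f"t{c}" for c in t_cluster],
    "y": 0.3 + rng.normal(0, 0.4, 20)[t_cluster] + rng.normal(0, 1, 200),
})
control_students = pd.DataFrame({
    "treat": 0,
    "cluster": [f"c{i}" for i in range(200)],   # singleton groups for unclustered controls
    "y": rng.normal(0, 1, 200),
})
data = pd.concat([treat_students, control_students], ignore_index=True)

# Random intercept for "cluster"; the coefficient on `treat` is the intervention effect.
result = smf.mixedlm("y ~ treat", data, groups=data["cluster"]).fit()
print(result.summary())
```
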
Resource 37: Precision Gains from Publically Available School Proficiency Measures Compared to Study-Collected Test Scores in Education Cluster-Randomized Trials
Author(s): Deke, J.; Dragoset, L.; & Moore, R. | Organization: Institute of Education Sciences | Year: 2010 | Type: Methods Report | Length: 44 pages | Audience: R
The Institute of Education Sciences developed this report to help researchers determine whether pre-test scores or publicly available, school-level proficiency data yield the same precision gains in randomized controlled trials. The paper defines the key measures used and describes the data sources. It compares the precision gains associated with study-collected data of students' test scores to the gains associated with school-level proficiency data. It explores approaches to reducing the costs of collecting baseline test scores. Finally, it examines the potential implications for attrition bias of not collecting pre-test data. This report is a valuable resource for those who fund and commission research, as publicly available, school-level proficiency data may be available at lower cost than pre-test scores.

Resource 38: Statistical Power Analysis in Education Research
Author(s): Hedges, L.V.; & Rhoads, C. | Organization: Institute of Education Sciences | Year: 2010 | Type: Methods Report | Length: 88 pages | Audience: R
An Institute of Education Sciences paper that provides an introduction to the computation of statistical power for education field studies and discusses research-design parameters that directly affect statistical power. The paper shows how to use the concepts of operational effect sizes and sample sizes to compute statistical power. It also shows how clustering structure, described by intra-class correlation coefficients, influences operational effect size and, therefore, statistical power. In addition, the paper details how the use of covariates can increase power by increasing the operational effect size.

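To illustrate the covariate point in this paper, the sketch below uses the standard approximation for an individually randomized two-group study to show how a covariate R-squared shrinks the minimum detectable effect size for a fixed sample. The multiplier and sample sizes are illustrative assumptions written for this page, not values from the paper.

```python
import math

def mdes_individual_rct(n_total, r2_covariates=0.0, p_treat=0.5, multiplier=2.8):
    """Approximate minimum detectable effect size (SD units) for an individually
    randomized two-group study; a multiplier of about 2.8 corresponds to 80% power
    at alpha = .05, two-tailed."""
    return multiplier * math.sqrt((1 - r2_covariates) / (p_treat * (1 - p_treat) * n_total))

# Illustrative only: with 400 students, a pretest covariate with R^2 = .5 shrinks the
# MDES by a factor of sqrt(1 - .5), i.e., by roughly 29%.
print(round(mdes_individual_rct(400), 3), round(mdes_individual_rct(400, r2_covariates=0.5), 3))
```
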
Resource 39: City Connects: Building an Argument for Effects on Student Achievement with a Quasi-Experimental Design
Author(s): Walsh, M.; Raczek, A.; Sibley, E.; Lee-St. John, T.; An, C.; Akbayin, B.; Dearing, E.; & Foley, C. | Organization: Society for Research on Educational Effectiveness | Year: 2015 | Type: Brief or Summary | Length: 6 pages | Audience: R
This Society for Research on Educational Effectiveness (SREE) report shares the results from three analyses that provide different pieces of evidence for a causal relationship between a City Connects intervention and student achievement. The authors provide an overview of a multi-year effort for the evaluation of a large-scale, school-based intervention for which there are rich, longitudinal data and many opportunities to exploit design features to ask: does the intervention promote the achievement of children in high-poverty, urban schools? The authors recommended that other school-based interventions that cannot feasibly use a randomized control design instead employ a range of methods aimed at reducing endogeneity and getting closer to understanding whether a causal relationship might exist between the intervention and its outcomes.

Resource 40: Designing and Conducting Strong Quasi-Experiments in Education. Version 2
Author(s): Scher, L.; Kisker, E.; & Dynarski, M. | Organization: Institute of Education Sciences | Year: 2015 | Type: Guide; Tool | Length: 27 pages | Audience: R
The Institute of Education Sciences (IES) developed this guide to help researchers design and implement strong quasi-experimental designs (QED) when assessing the effectiveness of policies, programs, or practices. The guide first discusses the issues researchers face when choosing to conduct a QED, as opposed to a more rigorous randomized controlled trial design. Next, it documents four sets of best practices in designing and implementing QEDs, including: 1) considering unobserved variables, 2) selecting appropriate matching strategies for creating comparison groups, 3) following general guidelines for sound research, and 4) addressing the What Works Clearinghouse (WWC) standards. The guide then presents 31 frequently asked questions related to QEDs meeting WWC standards with reservations, and discusses common pitfalls that cause studies not to meet WWC standards. Topics covered in the frequently-asked questions section include: study design/group formation, outcomes, confounding factors, baseline equivalence, sample loss and power, and analytic techniques. A detailed checklist provides valuable items for researchers to consider when designing a strong QED study, and a table clarifies why each issue is important and how it relates to WWC standards. The guide is also useful for researchers looking for additional detail, as it provides links to related IES resources.

41 Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates Fortson, K.; Verbitsky-Savitz, N.; Kopa, E.; Gleason, P. Institute of Education Sciences This National Center for Education Evaluation and Regional Assistance (NCEE) report estimates the impacts of charter schools using the four common comparison group methods and assesses whether these nonexperimental impact estimates are the same as the experimental impact estimates. The authors find that the use of pre-intervention baseline data that are strongly predictive of the key outcome measures considerably reduces but might not completely eliminate bias. Regression-based nonexperimental impact estimates are significantly different from experimental impact estimates, though the magnitude of difference is modest. 2012 Link to Resource Methods Report 68 pages R False True True False False True True False False True False False True True False False False False True True False False True False False False False DEPARTMENT OF EDUCATION Emma Kopa Natalya Verbitsky-Savitz ad justed experimental Supplemental Tables Working Paper group comparison treatment students group test experimental nonexperimental impact baseline estimates 3 True False False False True False False False False
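The report’s central finding, that adjusting for strongly predictive baseline measures reduces but may not eliminate bias, can be illustrated with a small simulation. The sketch below is hypothetical and is not the report’s method: it generates data in which selection into treatment depends on a baseline score, then contrasts a raw difference in means with a regression-adjusted impact estimate.

```python
# Illustrative simulation (hypothetical data): a raw mean difference versus a
# regression-adjusted impact estimate that controls for a baseline test score.
import numpy as np

rng = np.random.default_rng(0)
n = 500
baseline = rng.normal(50, 10, n)
treated = (baseline + rng.normal(0, 5, n)) > 52                  # nonrandom selection
outcome = 0.8 * baseline + 3.0 * treated + rng.normal(0, 5, n)   # true impact = 3

raw_diff = outcome[treated].mean() - outcome[~treated].mean()

# OLS of the outcome on a constant, the treatment indicator, and the baseline score.
X = np.column_stack([np.ones(n), treated.astype(float), baseline])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"Unadjusted difference in means: {raw_diff:.2f}")
print(f"Regression-adjusted impact estimate: {coef[1]:.2f} (true effect: 3.00)")
```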
42 The Role of Between-Case Effect Size in Conducting, Interpreting, and Summarizing Single-Case Research. NCER 2015-002 Shadish, W.R.; Hedges, L.V.; Horner, R.H.; & Odom, S.L. National Center for Education Research The National Center for Education Research developed this report to increase professionals’ knowledge of, and interest in, single-case design (SCD) as a strong experimental-design option. The report addresses when to use an SCD, the different forms it can take, its ability to yield causal inference, analysis issues, and how to calculate, use, and report on effect sizes. This paper adds a tool to the repertoire of SCD researchers. Those who review literatures to identify what works (SCD researchers, other academic scholars, research firms, or governmental bodies interested in effective interventions) will benefit from statistical recommendations about effect-size calculations and their use in meta-analysis and pertinent examples to encourage the inclusion of SCD studies in evidence-based practice reviews. Suggestions for future directions are useful for those in policy positions with resources and an interest in improving the use of SCDs both as a method in their own right and as one of many methods that can contribute to knowledge of what works. Finally, the report can help scholars with a statistical and methodological bent to tackle the theoretical issues raised by the analysis and meta-analysis of SCD research. 2015 Link to Resource Methods Report 109 pages R False True False False False False False False True False False False False False False False False False True True True False False False False False False Effect Size, Case Studies, Research Design, Observation, Inferences, Computation, Computer Software, Meta Analysis, Ruling Out Thousand Oaks shed light Mean Difference Sir Ronald Van den Noortgate Heterogeneity Testing Pivotal Response effect case size sizes scd studies treatment analysis scds 3 False False False False True False False False False
43 The Role of Effect Size in Conducting, Interpreting, and Summarizing Single-Case Research William R. Shadish, Larry V. Hedges, Robert H. Horner, Samuel L. Odom IES The National Center for Education Research (NCER) and National Center for Special Education Research (NCSER) commissioned a paper by leading experts in methodology and SCD. Authors William Shadish, Larry Hedges, Robert Horner, and Samuel Odom contend that the best way to ensure that SCD research is accessible and informs policy decisions is to use good standardized effect size measures—indices that put results on a scale with the same meaning across studies—for statistical analyses. Included in this paper are the authors’ recommendations for how SCD researchers can calculate and report on standardized between-case effect sizes, the ways in which various audiences (including policymakers) can use these effect sizes to interpret findings, and how they can be used across studies to summarize the evidence base for education practices. 2016 Link to Resource Methods Report 109 pages R False True True False False False False False False False False False False False True False True True False False True True False False False False False Education Research Thomas, Special Education Research Joan McLaughlin, University of California, Institute of Education Sciences Ruth Neild, Northwestern University, University of Oregon, National Center, University of North Carolina, report, Department of Education Arne Duncan, Summarizing Single-Case Research, single-case design research, Case Effect, Project Officer, Meredith Larson, Phill Gagné, Kimberley Sprague, public domain, Deputy Director, Shadish, use of effect sizes, Odom, Commissioner, Chapel Hill, Policy, Interpreting, Robert, Horner, Larry, Hedges, paper, William, Merced, Samuel, Contract ED-IES, Conducting, Delegated Duties, Authors, views, Brock, Secretary, positions, NCSER, NCER, Westat, Role, opinions, December, Disclaimer, Authorization 3 False False False False True False False False False
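As a rough illustration of what a between-case effect size is doing, the sketch below uses hypothetical AB-design data and is not the authors’ estimator, which is a formal multilevel model with small-sample corrections. It only shows the basic idea of averaging the baseline-to-treatment change across cases and dividing by a pooled standard deviation so that results from different studies sit on a common scale.

```python
# Greatly simplified sketch (hypothetical data; not the commissioned estimator):
# average phase change across cases, standardized by the pooled baseline SD.
import statistics

cases = {
    "case_1": {"baseline": [4, 5, 3, 4], "treatment": [9, 10, 8, 9]},
    "case_2": {"baseline": [6, 7, 6, 5], "treatment": [11, 12, 10, 11]},
    "case_3": {"baseline": [3, 2, 4, 3], "treatment": [7, 8, 7, 6]},
}

# Change from the baseline phase to the treatment phase for each case.
changes = [statistics.mean(c["treatment"]) - statistics.mean(c["baseline"])
           for c in cases.values()]
mean_change = statistics.mean(changes)

# Crude standardizer: SD of all baseline observations pooled across cases,
# which mixes within-case and between-case variation.
all_baseline = [y for c in cases.values() for y in c["baseline"]]
effect_size = mean_change / statistics.stdev(all_baseline)

print(f"Average phase change: {mean_change:.2f}")
print(f"Simplified between-case effect size: {effect_size:.2f}")
```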
45 An educator’s guide to questionnaire development Harlacher, J. Institute of Education Sciences This Regional Educational Laboratory Central guide designed for education administrators offers a five-step, questionnaire-development process: determining the goal(s) of the questionnaire, defining the information needed to address each goal, writing the questions, reviewing the questionnaire for alignment with goals and adherence to research-based guidelines for writing questions, and organizing and formatting the questionnaire. 2016 Link to Resource Tool 22 pages P False False False True False False False False False False False False False False False True True False False False False False False True False True False New York emotionally charged released publicly Authors compilation Public Instruction Retrieved September Sage Publications Thousand Oaks personality traits mutually exclusive questions questionnaire information respondents example question response responses data may 3 True False False True False False False False False
46 Challenges and Strategies for Assessing Specialized Knowledge for Teaching Orrill, C.H.; Kim, O.K.; Peters, S.A.; Lischka, A.E.; Jong, C.; Sanchez, W.B.; & Eli, J.A. Mathematics Teacher Education and Development Journal The authors of this Journal of Mathematics Teacher Education article wrote it to help policy makers, grant-funding agencies, mathematics teachers, and others to understand the knowledge necessary for effective teaching of mathematics and how to assess it. The guide begins with an overview of what is known about measuring teachers’ knowledge. It then highlights the challenges inherent in creating assessment items that focus specifically on measuring teachers’ specialized knowledge for teaching: creating items with appropriate difficulty levels, creating items for the target constructs, using precise language, incorporating pedagogical concerns appropriately, and writing clear stems and distracters. The article also offers insights into three practices that the authors have found valuable in their own work: creating an interdisciplinary team to write the assessment, using interviews to understand whether an item is measuring the appropriate construct and whether it is at an appropriate level of difficulty by providing insight into participants’ thinking, and adopting an iterative process to item writing. This guide is a valuable resource for those who create assessment items for measuring teachers’ knowledge for teaching. 2015 Link to Resource Guide 18 pages P False False False True False False False False False False False False False False False True False False False False False False False False False False False Teacher Knowledge, Mathematics’ Teacher Knowledge, Measurement, Item Development, TEDS-M Testing Service United States high-quality Ferrini-Mundy subject matter rational numbers concerns appropriately story problem Elementary School knowledge teacher mathematics item items teachers teaching teachers’ assessment education 2 False False True False False False False False False
47 Conducting Implementation-Informed Evaluations: Practical Applications and Lessons from Implementation Science REL Mid-Atlantic Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Mid-Atlantic developed this webinar to help districts and schools define new initiatives and measure changes needed to ensure that new skills are learned and used. It begins with a reminder of why implementation is difficult and which three components are required for educational outcomes to change. The focus then turns to two active-implementation frameworks to change a system and effectively implement evidence-based practices. The first is implementation drivers: the critical supports and infrastructure needed to make change happen. The second is improvement cycles: how to create more hospitable environments, solve problems efficiently, and improve over time. The webinar also explores practical applications and examples of an implementation-informed, decision-support-data system, including fidelity assessments. It is a valuable resource for educators who want to identify, implement, and evaluate initiatives with a high level of fidelity and meet the goal of improving student outcomes. The webinar is useful for identifying how improvement cycles (rapid-cycle problem solving and usability testing) can be used to put effective practices into operation. 2015 Link to Resource Webinar Slide Presentation 71 slides, 87 minutes P False True False False False False False False False False False False False False False True True False False False False False False False False False False Administration Decision Support Data System Change Integrated Improved Student Infrastructure Improved Intervention Facilitative Administration Decision Support Selection Systems Intervention Facilitative Coaching Training Selection Systems Regional Educational next set Division H 3 False False False False False False True True False
48 Educator Evaluation System Data: What Exists and How do we use it? Lemke, M.; Livinston, T.; & Rainey, K. Regional Education Laboratory Program, Institute of Education Sciences This Regional Educational Laboratory (REL) Midwest webinar presents research on different designs of teacher evaluation systems and provides examples of how educator evaluation data is being used to support decision-making and inform professional development. The presenters also share insights on the implementation of evaluation systems and use of data at the state and district levels. 2015 Link to Resource Webinar Slide presentation 68 slides, 78 minutes P False True False True True False False False False False False False False False False False False False False False False False False False False False False Educator Effectiveness implementation research data assessment effective leads cohort practices circle 3 False False False False False False True True False
49 Forum Guide to Decision Support Systems: A Resource for Educators. NFES 2006-807 U.S. Department of Education U.S. Department of Education This document was developed by educators for educators to remedy the lack of reliable, objective information available to the education community about decision support systems. The authors hope it will help readers better understand what decision support systems are, how they are configured, how they operate, and how they might be implemented in an education institution. The Decision Support System Literacy Task Force hopes that readers will find this document useful, and that it helps improve data-driven decision making in schools, school districts, and state education agencies across the nation. This document addresses the following broad questions: (Part I) What is a Decision Support System, and how does a decision support system differ from a data warehouse and a data mart? (Part II) What components, features, and capabilities commonly comprise a decision support system? How does each broad category of these components and features contribute to system operation? (Part III) How does an education organization buy or develop a decision support system, and how are stakeholders trained to use it? 2006 Link to Resource Guide 34 pages P False False False True False False False False False False False False False False True False True True True False True False False False False False False Department of Education, Education Statistics website, secondary education, NATIONAL COOPERATIVE EDUCATION STATISTICS SYSTEM, National Forum, National Center, local education agencies, early childhood education, state, Forum World Wide, Cooperative System, data, NCES World Wide Web Home Page, NCES World Wide Web Electronic Catalog, uniform information, local levels, principles of good practice, meeting, activities, Publications, policymaking, purpose, products, views, endeavors, formal review, resources, opinions, September 3 False False True False False False False False False
50 Key Items to Get Right When Conducting Randomized Controlled Trials of Social Programs Evidence-Based Policy Team of the Laura and John Arnold Foundation Laura and John Arnold Foundation The Laura and John Arnold Foundation developed this checklist to help researchers and sponsors of research ensure that items critical to the success of a randomized controlled trial (RCT) in producing valid findings about a social program’s effectiveness are included. Items in this checklist are categorized according to the following phases of an RCT: planning the study, carrying out random assignment, measuring outcomes for the study sample, and analyzing the study results. This tool is a valuable resource for researchers who need a list of key items to get right when conducting an RCT to evaluate a social program or practice. 2016 Link to Resource Tool 12 pages P False False False False False True False False False True True True True True False True True False True True True False False False False False False Evaluating Public Experimental Methods With Experimental Larry L. Orr Programs With Public Programs get right into account take into evaluation educator teacher student data teachers learning district effectiveness system 2 False False False True False False False False False
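One of the checklist’s phases, carrying out random assignment, can be made concrete with a minimal sketch. The example below is hypothetical (the roster, seed, and variable names are invented) and simply shows a documented, reproducible assignment of participants to two groups; it is not taken from the checklist itself.

```python
# Minimal sketch (hypothetical roster): reproducible random assignment of
# study participants to treatment and control groups.
import random

participants = [f"teacher_{i:03d}" for i in range(1, 61)]

rng = random.Random(20160901)          # record the seed in the study file
shuffled = participants[:]
rng.shuffle(shuffled)

midpoint = len(shuffled) // 2
assignment = {pid: ("treatment" if i < midpoint else "control")
              for i, pid in enumerate(shuffled)}

print(sum(v == "treatment" for v in assignment.values()), "assigned to treatment")
print(sum(v == "control" for v in assignment.values()), "assigned to control")
```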
51 Performance Management: Aligning Resources to Reforms Reform Support Network Reform Support Network The Reform Support Network (RSN) developed this document as part of a series designed to help education agencies pursue performance management of their key education reforms. Performance management is a systematic approach to ensure quality and progress toward organizational goals by aligning structures, processes, and routines through a set of reinforcing activities that enable an agency to methodically and routinely monitor the connection between the work underway and the outcomes sought. This brief addresses “alignment of resources,” i.e., how to direct or redirect resources (time, money, technology and people) to priority efforts that produce results and establish clear roles and responsibilities. Alignment of resources is the second of the four elements of performance management described in a Sustainability Rubric that RSN created to help education agencies improve their performance management practices (the rubric’s three categories are system capacity, performance management, and context for sustaining reform; the performance management category includes four elements: clarity of outcomes and theory of action, alignment of resources, collection and use of data, and accountability for results). The rubric offers a template through which agencies can identify key elements of sustainability and assess strengths and weaknesses to address in their sustainability planning. While developed for state education agencies (SEAs), the rubric is a valuable resource for policymakers and practitioners who want to gauge how well they are answering key questions related to aligning resources for student improvement. The brief also describes early, promising work from Hawaii and Tennessee that embodies the basic elements of performance management. 2014 Link to Resource Guide 4 pages P False False False False False False False False False False False False False False False False False False False False False False False False False False False effective teachers This brief must have produce results complex area staff members education agencies Harris Top States complex areas program study control group treatment sample members assignment random outcomes 1 False False True False False False False False False
52 Performance Management: Setting Outcomes and Strategies to Improve Student Achievement Reform Support Network Reform Support Network The Reform Support Network (RSN) developed this document as part of a series designed to help education agencies pursue performance management of their key education reforms. Performance management is a systematic approach to ensure quality and progress toward organizational goals by aligning structures, processes, and routines through a set of reinforcing activities that enable an agency to methodically and routinely monitor the connection between the work underway and the outcomes sought. This brief addresses “clarity of outcomes and theory of action,” i.e., how to establish and widely communicate priorities and set ambitious, clear, and measurable goals and outcomes with aligned strategies and activities. Clarity of outcomes and theory of action are the first of the four elements of performance management described in a Sustainability Rubric that RSN created to help education agencies improve their performance management practices (the rubric’s three categories are system capacity, performance management, and context for sustaining reform; the performance management category includes four elements: clarity of outcomes and theory of action, alignment of resources, collection and use of data, and accountability for results). The rubric offers a template through which agencies can identify key elements of sustainability and assess strengths and weaknesses to address in their sustainability planning. While developed for state education agencies (SEAs), the rubric is a valuable resource for policymakers and practitioners who want to gauge how well they are answering key questions related to clarifying expected outcomes for student improvement. The brief also describes early, promising work from Massachusetts and Tennessee that embodies the basic elements of performance management. 2014 Link to Resource Guide 6 pages P False False False False False False False False False False False False False False False False False False False False False False False False False False False Delivery Unit Strategic Plan after high This brief grade reading grade look at Kang aligned strategies must have outcomes plan strategic student performance four management goals Tennessee education 2 False False True False False False False False False
53 Professional Experiences of Online Teachers in Wisconsin: Results from a Survey about Training and Challenges Zweig, J.; Stafford, E.; Clements, M.; & Pazzaglia, A. M. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Midwest developed this report to increase policymakers’ and educators’ knowledge of the range of professional learning experiences available to online teachers and the challenges they face teaching online. The survey instrument (Appendix A) is a valuable resource for state online learning programs, state departments of education, and employers of online teachers who want to examine the training and professional development provided to online teachers in their jurisdictions. The data and methodology section (Appendix D) is a useful example of how to develop a survey, administer it, and collect and analyze the data. The body of the report focuses on the results of the survey, which was administered to a virtual school in Wisconsin, and information about how online teacher training can vary in timing, duration, format, and content. In the current climate, in which few states have specific requirements about training for online teachers, these findings may interest policymakers; district and school administrators; and parents in other states, districts, and schools. 2015 Link to Resource Tool Guide 40 pages P False False False True False False False False False False False False False False False True True False False True True False False False True False False Online Courses, Teaching Experience, Teacher Surveys, Training, Teacher Competencies, Barriers, Teacher Education, Faculty Development, Student Behavior, Educational Methods, Elementary School Teachers, Secondary School Teachers, Technology Uses in Education, Academic Persistence, Student Responsibility, Act 222 Advanced Placement development little guidance grade levels professional working conditions online checkbox professional development percent teachers school training virtual Wisconsin 3 True False True True False False False False False
54 Protocol for a Systematic Review: Teach For America (TFA) for Improving Math, Language Arts, and Science Achievement of Primary and Secondary Students in the United States: A Systematic Review Turner, H.; Boruch, R.; Ncube, M.; & Turner, A. The Campbell Collaboration This Campbell Collaboration protocol provides researchers with an example of a proposal for a systematic review of a program. The purpose of a systematic review is to sum up the best available research on a question, which is done by synthesizing the results of several studies. A systematic review uses transparent procedures to find, evaluate, and synthesize the results of relevant research. Procedures are explicitly defined in advance to ensure that the exercise is transparent and can be replicated. This practice is also designed to minimize bias. Studies included in a review are screened for quality, so that the findings of a large number of studies can be combined. Peer review is a key part of the process; qualified independent researchers check the author’s methods and results. This protocol is a valuable resource for researchers who want guidance on structured guidelines and standards for summarizing research evidence. This example focuses on the Teach For America teacher preparation program and its effect on student academic outcomes. As such, the protocol is also useful for professionals interested in the strategies and goals of interventions that aim to recruit teachers and improve their effectiveness. 2016 Link to Resource Guide 33 pages R False False False True False False False False False False False False False False False True False False True True True False True False True False False Language Arts Postal Code USA Phone accept responsibility minimum dosage United States absolute value confidence intervals take place Mackson Ncube tfa studies review effect study analysis outcome research teacher outcomes 3 True False True False False False False False False
55 Rapid Cycle Evaluation: Race to the Top-District Tools & Strategies Ceja, B.; & Resch, A. Reform Support Network The Reform Support Network (RSN) developed this webinar to help practitioners evaluate personalized learning tools using Rapid Cycle Evaluation and Opportunistic Experiments. The webinar first recognizes how these quick turnaround methods can be used to build knowledge in a context of scarce resources. It defines these methods, the conditions under which they are appropriate, and the identification of comparison groups. The webinar then works through an example using Cognitive Tutor, software intended to improve high school students’ math achievement. It focuses on an opportunity for Race to the Top grantees to apply to have the RSN conduct an evaluation, but this webinar is also a valuable resource for practitioners who want to understand how the results of quick turnaround experiments might inform the continued implementation of learning tools and/or future purchasing decisions. 2015 Link to Resource Webinar 27 slides; 55 minutes P True True False False False False False False False True False True False True True True False False False False False False True False False False False Leadership: Evaluation of Program Outcome (Revisions); Program Evaluation, random assignment Cycle Evaluation Improve achievement Rapid Cycle Cognitive Tutor comparison group can be subset of or schools rsn cognitive comparison students 3 True False False False False False True False False
56 Request for Proposals Evaluation Guide Reform Support Network Reform Support Network The Reform Support Network developed this guide to help state and local education agencies define the evaluation process for a Request for Proposal (RFP). The guide addresses the major components of an RFP, including involving stakeholders in its evaluation, scoring proposals, considering costs, planning vendors’ demonstrations, checking references, and establishing timelines. Tools valuable to education agencies include sample scoring matrices for individual components and for the overall proposal, a checklist for the agency to evaluate vendors’ written proposals and one for vendors to identify the extent to which they meet key functional requirements, a committee signoff sheet, and a sample evaluation timeline. 2012 Link to Resource Guide 12 pages P False False False False False False False False False False False False False False False False False False False False False False False False False False False Scoring Matrix rather than Technical local education Rank Order Final Rank Functional Requirements vendor evaluation rfp cost agency team score phase 2 False False True False False False False False False
57 RTT-D Guidance: Implementing Performance Metrics for Continuous Improvement that Support the Foundational Conditions for Personalized Learning Johnson, J.; Kendziora, K.; & Osher, D. American Institutes for Research The American Institutes for Research developed this guide to help districts implement a continuous improvement process to personalize education for all students in their schools, focusing on classrooms and the relationship between educators and students. The guide begins by recalling the three foundational conditions for personalized learning – building the capacity of teachers to create highly effective classroom learning environments, developing a rigorous capacity for student support, and establishing the organizational efficacy of leadership and management to implement personalized learning environments. It then describes an approach to continuous improvement that builds on the conditions for learning, personalization, and achievement for all students. The guide proposes a key set of performance metrics and summarizes them in a table on pages 21-23. The appendix presents a sample on-track measure. This guide is a valuable resource for district and school staff who want to gauge how well they provide teachers with the information, tools, and supports that enable them to meet the needs of each student and substantially accelerate and deepen each student’s learning. Through this guide, district and school staff can gauge whether they have the policies, systems, infrastructure, capacity, and culture to enable teachers, teacher teams, and school leaders to continuously focus on improving individual student achievement and closing achievement gaps; whether they set ambitious yet achievable performance measures that provide rigorous, timely, and formative information; and the extent to which implementation is progressing. 2012 Link to Resource Guide 40 pages P False True False False True False False False False False False False False False False True True False False False False False True True True True False Civil Rights Disease Control Healthy Kids Longer Term Term Sustainability juvenile justice Action Brief BMI school students student track learning received classes above high 3 True False True False False False False False False
58 Smarter, Better, Faster: The Potential for Predictive Analytics and Rapid-Cycle Evaluation to Improve Program Development and Outcomes Cody, S.; & Asher, A. Mathematica Policy Research This Mathematica Policy Research report proposes using predictive modeling and rapid-cycle evaluation to improve program services while efficiently allocating limited resources. The authors believe that these methods – both individually and together – hold significant promise to improve programs in an increasingly fast-paced political environment. They propose two actions: first, agency departments with planning and oversight responsibilities should encourage the staff of individual programs to conduct a thorough needs assessment, and second, federal agencies should take broad steps to promote predictive analytics and rapid-cycle evaluation. 2014 Link to Resource Brief or Summary 12 pages R False False False False False False False False False False False False False False False False False False False False False False False False False False False Predictive Analytics, Eligibility Assessment Resources Administration Andrew Asher frontline workers Address Poverty Hamilton Project Brookings U.S. Department Scott Cody decision making program predictive analytics programs data rapid cycle evaluation 2 False True False False False False False False False
59 Graphic Design for Researchers Institute of Education Sciences (ED); Decision Information Resources, Inc.; Mathematica Policy Research, Inc. Regional Education Laboratory Program, Institute of Education Sciences The Institute of Education Sciences developed this guide, which offers a basic overview of how researchers can effectively use design to create engaging and visually appealing Regional Educational Laboratory products. This guide also touches on how researchers can use data visualization to make complex concepts accessible. 2014 Link to Resource Guide 14 pages R False False False False False False False False False False False False False False False False False False False False False True False False False False False What Works Clearinghouse approaching benchmarks benchmarks category transition courses visually appealing design elements example http data visual clip art 2 False False True False False False False False False
60 Reporting What Readers Need to Know about Education Research Measures: a Guide Boller, K.; & Kisker, E.E. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) program developed this guide to help researchers make sure that their research reports include enough information about study measures to allow readers to assess the quality of the study’s methods and results. The guide provides five checklists to assist authors in reporting on a study’s quantitative measures, a sample write-up to illustrate the application of the checklists, and a list of resources to further inform reporting of quantitative measures and publication of studies. The checklists address the following measurement topics: (1) measure domains and descriptions; (2) data collection training and quality; (3) reference population, study sample, and measurement timing; (4) reliability and construct validity evidence; and (5) missing data and descriptive statistics. 2014 Link to Resource Guide Tool 26 pages R True False False False False False False False False False False False False False False False False False False False True False False False False False False Child Trends Early Childhood overly aligned New York Certification criteria Standards Handbook measures reliability study evidence training 3 False False True True False False False False False
61 Gathering Feedback for Teaching: Combining High-Quality Observations with Student Surveys and Achievement Gains (Study) Kane,T.J.; & Staiger, D.O. Bill & Melinda Gates Foundation The Bill & Melinda Gates Foundation developed this guide to help policymakers and practitioners effectively implement observations of teacher practice. The guide reviews minimum requirements for high-quality classroom observations and investigates two properties of five instruments: reliability (the extent to which results reflect consistent aspects of a teacher’s practice and not the idiosyncrasies of a particular observer, group of students, or lesson) and validity (the extent to which observation results are related to student outcomes). This guide is a valuable resource for policymakers and practitioners at every level who are intensely focused on improving teaching and learning through better evaluation, feedback, and professional development. It is one outcome of the Gates’ Measures of Effective Teaching project, which investigates a number of alternative approaches to identifying effective teaching: systematic classroom observations, surveys collecting confidential student feedback, a new assessment of teachers’ pedagogical content knowledge, and different measures of student achievement. Those wanting to explore all the technical aspects of the study and analysis are referred to a companion research report. 2012 Link to Resource Guide 36 pages P False False False True False False False False False False False False False False False False False False False False False False True False False False False Teacher Effectiveness, Achievement Gains, Evaluation Methods, Teaching Methods, Observation, Feedback (Response), Video Technology, Research Papers (Students), Student Surveys, Teacher Competencies, Student Attitudes, Academic Achievement, Faculty Development, Teacher Improvement, Standards, Scoring, Language Arts, Mathematics Instruction, Test Validity, Test Reliability, student teachers scores teaching project met teacher observation observations instruments met project value added combined measure effective teaching student achievement observation scores competencies classroom observations achievement gains 3 True False True False False False False False False
62 Using Administrative Data for Research: A Companion Guide to “A Descriptive Analysis of the Principal Workforce in Florida Schools.” REL 2015-049 Folsom, J.S.; Osborne-Lampkin, L.; Herrington, C. D. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southeast developed this guide to help professionals extract and describe information from a State Department of Education database. The document describes the process of data cleaning, merging, and analysis, including: identifying and requesting data, creating functional datasets, merging datasets, and analyzing data. Although focused on the demographic composition, certifications, and career paths of Florida school leaders, the processes described in this guide can be modified to explore other education datasets. The document also provides an example of a data request and data dictionary. This guide will be useful to anyone using administrative datasets to create summary statistics and charts or to conduct other analyses from raw administrative data. 2014 Link to Resource Guide 38 pages P False False False True True False False False False False False False False False True True False True True True False True False False False True False Education Evaluation, Department of Education, Regional Assistance, Institute of Education Sciences, Reading Research, results of research, research-based technical assistance, National Center, Regional Educational Laboratory Southeast, publication, unbiased large-scale evaluations of education programs, Florida schools, commercial products, Osborne-Lampkin, REL report, Florida State University, policies of IES, ED-IES, principal workforce, Using administrative data, mention of trade names, descriptive analysis, organizations, policymakers, practices, educators, widespread dissemination, content, permission, companion guide, endorsement, Herrington, Folsom, synthesis, public domain, DC, NCEE, funds, United States, Contract, Washington, views, December, Government, edlabs
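For readers who want a feel for the dataset-merging step the guide describes, the sketch below is a hypothetical example: the records, identifiers, and column names are invented, and pandas is assumed to be available. It joins a staff extract to a certification extract on a shared identifier and checks how many records found a match, a routine check before analyzing merged administrative data.

```python
# Illustrative sketch (hypothetical extracts): merging two administrative files
# on a common staff identifier and checking the match rate.
import pandas as pd

staff = pd.DataFrame({
    "staff_id": [101, 102, 103, 104],
    "school_id": ["0012", "0012", "0047", "0047"],
    "role": ["principal", "assistant principal", "principal", "principal"],
})
certifications = pd.DataFrame({
    "staff_id": [101, 103, 105],
    "certification": ["Educational Leadership", "Educational Leadership", "School Counseling"],
})

merged = staff.merge(certifications, on="staff_id", how="left", indicator=True)
print(merged[["staff_id", "school_id", "role", "certification"]])
print(merged["_merge"].value_counts())   # how many staff records found a certification match
```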
63 Properties of the Multiple Measures in Arizona’s Teacher Evaluation Model. REL 2015-050 Lazarev, V.; Newman, D.; Sharp, A. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) West developed this report to explore the relationship among assessments made through teacher observations, student academic progress, and stakeholder surveys to investigate how well the model differentiated between high- and low-performing schools. The study found only a few correlations between student academic progress and teacher observation scores, suggesting that a single aggregated observation score might not adequately measure independent aspects of teacher performance. This report can be useful to evaluators and education stakeholders who are considering using teacher observation scores as an evaluation measure or wish to gauge the quality of their agency’s educator evaluation system. In addition, appendix E in the report provides instructional information on how to detect nonlinear relationships between observation item scores and student academic progress metrics. 2014 Link to Resource Methods Report 42 pages R False False False True True False False False False False False False False False False True False True False True False False False False True False False Teacher Evaluation, Observation, State Departments of Education, Correlation, Teacher Competencies, Academic Achievement, Teacher Influence, Scores, Surveys, Mathematics Tests, Reading Tests, Mathematics Achievement, Reading Achievement, Teacher Effectiveness, Teacher Characteristics, Evaluation Methods, Public Schools, School Districts, Charter Schools, Planning, Classroom Environment, Teaching Methods, Teacher Responsibility, Standardized Tests, Elementary Schools, Secondary Schools, Grade 3, Grade 4, Grade 5, Grade 6, Grade 7, Grade 8, Grade 9, Grade 10, Grade 11, Grade 12, Student Attitudes, Teacher Attitudes, Parent Attitudes 3 True False False False True False False False False
64 The Utility of Teacher and Student Surveys in Principal Evaluations: An Empirical Investigation Liu, K.; Springer, J.; Stuit, D.; Lindsay, J.; Wan, Y. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Midwest developed this report to describe the findings from a study of a district’s principal evaluation model and the relationships between component measures and student growth. The relevant sections of the report are: the analyses appendix (C) and the Tripod survey tool (appendix E). Appendix C reports on supplemental analyses: the reliability and validity of the principal evaluation instrument, additional correlations, and power analysis. Appendix E lists the publicly-released Tripod Student Perception Survey items. These sections are a valuable resource for program directors who want to better gauge the quality of their agency’s educator evaluation system. In particular, they provide a way to assess how well evaluation components predict student growth. 2014 Link to Resource Methods Report Tool 50 pages R False False False True True False False False False False True False False False False True True False True True False False True True True True False Evaluation, Principals, Students, Student Evaluation, Teachers 3 True False False True True False False False False
65 Teacher evaluation and professional learning: Lessons from early implementation in a large urban district Shakman, K.; Zweig, J.; Bailey, J.; Bocala, C.; Lacireno-Pacquet, N. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast and Islands developed this report to describe the findings from a study of the alignment of educator evaluations and professional development. The relevant sections of the report are the survey and interview tools (Appendices D, F, and G). Appendix D describes the coding dictionary – a list of professional development activities with descriptions – that allowed the study team to summarize and quantify the professional activities in the narratives and to match the activities prescribed to the activities reported in the survey to study the alignment between what evaluators prescribed and what teachers reported. Appendix F lists the survey items, which are about the types of professional development that teachers engaged in during the study period; the standards and indicators that the professional development addressed; the type and duration of the professional development activities; and school climate. Appendix G provides the interview protocol. Teachers were asked about the prescriptions they received, how they were selected, what they did, and their experience with the resulting professional development. Evaluators were asked similar questions about the prescriptions they wrote, their role in the process, what teachers did and how successfully, and their evaluation of the resulting professional development. These sections are a valuable resource for program directors who want to better gauge the quality of the feedback administrators give teachers regarding how to improve. They also help track whether teachers take the recommended steps and how these may affect subsequent evaluation results. 2016 Link to Resource Tool 8 pages P False False False True True False False False False False False True False False False True True False True False False False False False True False False Teachers, Education/Preparation, Performance Evaluations, Professional Development, Urban Schools 2 True False False True False False False False False
66 Survey methods for educators: Collaborative survey development (part 1 of 3) Irwin, C. W., & Stafford, E. T. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast and Islands developed a series of three guides to help educators administer surveys to collect unavailable data they need. This guide, the first in the series, focuses on collaborative survey development. The document describes a five-step process: step 1 – identify topics of interest, step 2 – identify relevant, existing survey items, step 3 – draft new survey items and adapt existing survey items, step 4 – review draft survey items with stakeholders and content experts, and step 5 – refine the draft survey using cognitive interviewing. Appendix A (page A-1) provides additional resources for sampling and survey administration, including references to textbooks and professional organizations and university departments with expertise in these topics. Appendix B (pages B-1 and B-2) provides a survey blueprint, or table of specifications, i.e., a document that outlines the topics and subtopics to be included on a survey and serves as an outline for developing survey items. Appendix C (page C-1) describes how to develop an analysis plan, to be used prior to calculating summary statistics to ensure that the resulting information will help address the survey development team’s topics of interest, and provides a sample. Appendix D (pages D-1 and D-2) provides a sample feedback form, which allowed a survey development team to obtain input on item wording and importance across all stakeholders despite limited time to meet with the stakeholder group. Appendix E (pages E-1 to E-4) provides a sample cognitive interview protocol, which includes step-by-step instructions for gathering feedback during cognitive interviews. Appendix F (pages F-1 to F-3) provides a sample codebook, i.e., a record of the codes used to categorize feedback gathered during cognitive interviewing. 2016 Link to Resource Guide Tool 33 Pages P False False False False True False False False False False False False False False False False True False False False False False False False False False False Achievement, States 3 False False True True False False False False False
67 Survey methods for educators: Selecting samples and administering surveys (part 2 of 3) Pazzaglia, A. M., Stafford, E. T., & Rodriguez, S. M. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast and Islands developed a series of three guides to help educators administer surveys to collect unavailable data they need. This guide, the second in the series, focuses on selecting a sample and administering a survey. The document describes a five-step sample selection and survey administration process: step 1 – define the population, step 2 – specify the sampling procedure, step 3 – determine the sample size, step 4 – select the sample, and step 5 – administer the survey. Appendices provide steps on how to use Microsoft Excel to obtain a random sample (appendix B on pages B-1 to B-5), and sample survey invitations and reminders (appendix D on pages D-1 to D-11). This guide is a valuable resource for educators who want to understand a larger population by surveying a subgroup of its members. The first guide in the series covers survey development, and the third guide in the series covers data analysis and reporting. 2016 Link to Resource Guide Tool 29 pages P False False False False True False False False False False True True False False False False True False False False False False False False False False False Achievement, Academic Achievement,Achievement (student), Data Interpretation, States 3 False False True True False False False False False
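The appendix B procedure in this guide uses Microsoft Excel; for readers who prefer to script the draw, the sketch below shows the same idea of selecting a reproducible simple random sample. The roster, seed, and sample size are invented for illustration and are not taken from the guide.

```python
# Minimal sketch (hypothetical roster): drawing a simple random sample of
# teachers to survey, analogous in spirit to the Excel procedure in appendix B.
import random

population = [f"teacher_{i:04d}" for i in range(1, 501)]   # 500 teachers in the population

rng = random.Random(2016)            # fixed seed so the draw can be reproduced and documented
sample = rng.sample(population, k=100)

print(f"Sampled {len(sample)} of {len(population)} teachers")
print(sample[:5])
```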
68 Survey methods for educators: Analysis and reporting of survey data (part 3 of 3) Pazzaglia, A. M., Stafford, E. T., & Rodriguez, S. M. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast and Islands developed this guide to help educators analyze survey data and report the results in a way that is accessible and useful to interested stakeholders. This guide will help readers use survey data to inform decisions about policy or practice in schools and local or state education agencies. The document describes a five-step survey analysis and reporting process: step 1 – review the analysis plan, step 2 – prepare and check data files, step 3 – calculate response rates, step 4 – calculate summary statistics, and step 5 – present the results in tables or figures. This guide is the third in a three-part series of survey method guides for educators. The first guide in the series covers survey development, and the second guide in the series covers sample selection and survey administration. 2016 Link to Resource Guide Tool 37 pages P False False False False True False False False False False False False False False False False True True True True True True True False False False False Achievement, Achievement (student), Data Interpretation, States 3 True False True True False False False False False
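Steps 3 and 4 of the process described in this guide, calculating response rates and summary statistics, amount to simple arithmetic. The sketch below uses invented responses to a single survey item and is only meant to show the calculations, not the guide’s own worked examples.

```python
# Minimal sketch (hypothetical data): a response rate and summary statistics
# for one survey item on a 1-5 agreement scale.
invited = 100
responses = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4, 5, 4]

response_rate = len(responses) / invited
item_mean = sum(responses) / len(responses)
distribution = {value: responses.count(value) for value in sorted(set(responses))}

print(f"Response rate: {response_rate:.0%}")
print(f"Item mean: {item_mean:.2f}")
print("Response distribution:", distribution)
```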
69 Redesigning Teacher Evaluation: Lessons from a Pilot Implementation Riordan, J.; Lacireno-Paquet, N.; Shakman, K.; Bocala, C.; Chang, Q. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast & Islands developed useful surveys as part of this report, which describes a study of the implementation of new teacher evaluation systems in New Hampshire’s School Improvement Grant (SIG) schools. They find that, while the basic system features are similar across district plans, the specifics of these features vary considerably by district. Key findings are listed on the second half of page i. The relevant section is in appendix D (pages D-1 to D-11), which provides survey protocols. The goal of the surveys is to understand what is working well and what is challenging during the pilot implementation of a new evaluation system. There is a teacher professional climate and implementation survey and an evaluator survey. Both ask for background information and about implementation and stakeholder support. The teacher survey also includes questions on the professional school climate (i.e., inclusive leadership, teacher influence, teacher–principal trust, coordination and quality of professional development, reflective dialogue, and focus on student learning). 2015 Link to Resource Tool 11 Pages P False False False False True False False False False False False False False False False False True False False False False False False False True False False Evaluation, Teachers 2 True False False True False False False False False
70 Measurement instruments for assessing the performance of professional learning communities Blitz, C. L., & Schulman, R. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Mid-Atlantic developed this tool to help researchers, practitioners, and education professionals plan, implement, and evaluate teacher professional learning communities (PLCs). The PLC-related measurement instruments identified in this project include 31 quantitative and 18 qualitative instruments that assess a range of teacher/principal-, team-, and student-level variables. Each is described in detail in appendix D (pages D-1 to D-52). Appendices B (page B-1) and C (pages C-1 to C-2) are most relevant for users interested in identifying and applying an appropriate measurement instrument that can answer their questions about teacher PLCs. The document suggests proceeding in three steps: step 1) consult appendix B (the PLC logic model) to determine what information you want and for what purpose (planning, implementation, or evaluation); step 2) depending on your goal (planning, implementation, or evaluation), use the table in appendix C to identify the key tasks you are interested in (column 1) and select the key indicators you are most interested in measuring (column 2) — the class or classes of measurement instruments most relevant to measuring the indicators of interest are in column 3; step 3) click on the hyperlink of a specific instrument in appendix D’s table of contents to retrieve the information about the instrument, determine whether it meets your needs, and find out how to obtain a copy. Notably the body of the document is relevant for readers interested in better understanding logic models. 2016 Link to Resource Tool 72 pages P False False False False True False False False False False False False False False False False True False False False False False False False True True False Evaluation, Professional Learning, Teachers 3 True False False True False False False False False
71 Evaluation of the Effectiveness of the Alabama Math, Science, and Technology Initiative (AMSTI) Newman, D.; Finney, P. B.; Bell, S.; Turner, H.; Jaciw, A.; Zacamy, J.; Feagans, G. L. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southeast developed this report to describe the findings from a study of a two-year intervention intended to improve student achievement in math, science, and technology. An overview of findings starts in the last paragraph of page xxiv and ends at the bottom of page xxvii. The relevant sections of the report are selection and random assignment of schools (appendix C on pages C-1 to C-3), a teacher survey of math and science classroom practices and professional development and availability of technology (appendix G on pages G-1 to G-15), data cleaning and data file construction (appendix H on page H-1), and sections on attrition (appendix I, pages I-1 to I-8, on the steps through which the analytic sample was selected for each confirmatory outcome; appendix N, pages N-1 to N-8, on attrition associated with the SAT reading outcomes, teacher content knowledge in mathematics and in science, and student engagement in mathematics and science through study stages for samples used in the Year 1 exploratory analysis; and appendix R, pages R-1 to R-9, on attrition through study stages for samples contributing to estimation of two-year effects). 2012 Link to Resource Methods Report 44 Pages R False False False False True True False False False True False False True True True False False True True True False False True False True False False Student Achievement, Mathematics, Science 3 True False False False True False False False False
72 The English Language Learner Program Survey for Principals. REL 2014-027. Grady, M. W.; O’dwyer, L. M. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast & Islands developed this survey tool – The English Language Learner Program Survey for Principals – to help state education departments collect consistent data on the education of English language learner students. Designed for school principals, the survey gathers information on school-level policies and practices for educating English language learner students, the types of professional development related to educating these students that principals have received and would like to receive, principals’ familiarity with state guidelines and standards for educating these students, and principals’ beliefs about educating these students. 2014 Link to Resource Tool 28 Pages P False False False False True False False False False False False False False False False False True False False False False False False False False True False Education Evaluation, Department of Education, Education Development Center, Institute of Education Sciences, t i o, National Center, r t m e n t o f E d u c, Regional Assistance, unbiased large-scale evaluations of education programs, D e p, research-based technical assistance, results of research, commercial products, Grady, O’Dwyer, English Language Learner Alliance, policies of IES, REL report, Boston College, mention of trade names, ED-IES, Learner Program, Survey, Regional Educational Laboratory Northeast, organizations, policymakers, educators, public domain, widespread dissemination, practices, content, endorsement, Laura, Matthew, publication, Islands, synthesis, NCEE, DC, funds, United States, Principals, Contract, Washington, views, permission, June, collaboration, Government 3 True False False True False False False False False
73 The Effects of Connected Mathematics 2 on Math Achievement in Grade 6 in the Mid-Atlantic Region Martin, T.; Brasiel, S.; Turner, H.; Wise, J. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Mid-Atlantic developed this report to describe the results of a study of the effect of a mathematics intervention on the mathematics achievement and engagement of grade six students. It found that students who received the intervention did not have greater mathematics achievement or engagement than comparison students who experienced other curricula. The relevant sections of the report are appendices on the statistical power analysis conducted to determine the number of schools, class sections, and students needed to achieve the target minimum detectable effect size for an outcome (appendix B on pages B-1 and B-2), an illustration of how random assignment procedures were implemented using Microsoft Excel (appendix C on pages C-1 to C-3), the student math interest inventory used to understand what students think about the work they do for mathematics class and the associated confirmatory factor analysis (appendix D on pages D-1 to D-4), and teacher surveys including an initial survey for background information and then monthly surveys and a final survey about how intervention elements were used and perceived (appendix E on pages E-1 to E-4). 2012 Link to Resource Methods Report Tool 13 Pages R False False False False True True False False False True True False False False False False True True True True False False True False False False False Student Achievement, Mathematics, Professional Development 2 True False False True True False False False False
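Appendix B’s power analysis accounts for the clustering of students within class sections and schools; the simplified sketch below ignores clustering and uses the common approximation for an individually randomized two-group design, so it only conveys how sample size drives the minimum detectable effect size. The function name, sample sizes, and the multiplier of roughly 2.8 (corresponding to 80 percent power with a two-tailed 5 percent test) are assumptions for illustration, not values from the report.

```python
# Simplified sketch (not the study's power analysis): approximate minimum
# detectable effect size (MDES), in standard deviation units, for a simple
# individually randomized two-group design with no covariates or clustering.
def mdes_individual(n_total, proportion_treated=0.5, multiplier=2.8):
    """Approximate MDES for a two-group design with n_total individuals."""
    p = proportion_treated
    return multiplier * (1.0 / (p * (1.0 - p) * n_total)) ** 0.5

for n in (200, 800, 3200):
    print(f"n = {n:>4}: approximate MDES = {mdes_individual(n):.2f} SD")
```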
74 Self-study Guide for Implementing Early Literacy Interventions Dombek, J.; Foorman, B.; Garcia, M.; Smith, K. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southeast developed this guide to help district and school-based practitioners conduct self-studies for planning and implementing early literacy interventions for kindergarten, grade 1, and grade 2 students. Although the guide is not designed for rigorous evaluations, it can be used for ongoing continuous improvement. The guide is designed to promote reflection about current strengths and challenges in planning for implementation of early literacy interventions, spark conversations among staff, and identify areas for improvement. Schools and districts interested in implementing an early literacy program can use the guide to help assess needs in areas such as student selection, content and instruction, professional development, communication, and other areas of importance. 2016 Link to Resource Guide Tool 27 Pages P False True True False True False False False False False False False False False False False False False False False False False False False False False False Early Interventions, Education Interventions, Literacy 3 False False True True False False False False False
75 We are Better Together: Researchers & Educators Partner to Improve Students’ Math Skills REL Midwest Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Midwest developed this podcast to help practitioners use quick turnaround research to get feedback on their improvement efforts in a timely manner. This podcast describes a continuous improvement process implemented through a networked improvement community – a group of individuals who are committed to working on the same problem of practice. It describes an example from a network of Michigan schools. This video will be helpful to practitioners and researchers interested in implementing and collaborating on a continuous improvement process and wanting an overview and a real-life example. 2016 Link to Resource Video Guide 18 min P False True False False True False False False False False False False False False False False False False False False False False False False False False False 2 False False True False False True False False True
75 Benchmarking Education Management Information Systems Across the Federated States of Micronesia Cicchinelli, L. F.; Kendall, J.; Dandapani, N. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Pacific developed this report to provide information on the current quality of the education management information system (EMIS) in Yap, Federated States of Micronesia, so that data specialists, administrators, and policy makers might identify areas for improvement. The relevant sections of the report are in the appendices: data sources and methodology (appendix B) and data collection and instruments (appendix C). These sections are a valuable resource for program directors who want to assess the quality of their education management information system (EMIS) or who want to see an example of a data collection effort that attempts to quantify answers from interviews. In particular, this helps ensure that education policy, planning, and strategy decisions are grounded in accurate information. 2016 Link to Resource Guide Tool 34 Pages P False False False False True False False False False False False False False False False False True False False False False False False False False False False education management information system, aspects of system quality, Accessibility of education data, overall system, system overall, reliability of education statistics, Department of Education, Integrity of education statistics, aspects of quality, Institute of Education Sciences, data reporting, scores, place, Data specialists, state of Yap, John, Prerequisites of quality, King, benchmark levels, relevance, timeliness, Deputy Director, Policy, Acting Secretary, example of best practice, McREL International, Key findings, Louis, Cicchinelli, Nitara Dandapani, Delegated Duties, Ruth Neild, Serviceability, consistency, Federated States of Micronesia, indicators, rubric, Research, institutional frameworks, process of implementation, Accuracy, supporting resources, stakeholders, meeting standards, Kendall, follows, February 3 False False True True False False False False False
76 Understanding Program Monitoring: The Relationships among Outcomes, Indicators, Measures, and Targets. REL 2014-011 Malone, N.; Mark, L.; Narayan, K. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Pacific developed this guide to help educators, program managers, administrators, and researchers monitor program outcomes more effectively. The guide provides concise definitions of program monitoring components, explains how these components relate to one another, and illustrates a logic model framework for assessing the progress of a program. The guide demonstrates through examples the relationships between outcomes, indicators, measures, benchmarks, baselines, and targets. This guide is a useful resource for educators who want to identify the achievement of program goals. This guide is one of a four-part series on program planning and monitoring. 2014 Link to Resource Guide 5 Pages P False False False False True False False False False False False False False False False True False False False False False False False False False False False program components, program monitoring framework, program managers, monitoring program outcomes, Understanding program monitoring, program evaluation, definitions of program monitoring components, assessing program progress, Mid-continent Research, measures, educators, indicators, better data-informed decisions, Education, effectiveness, Krishna Narayan, Island Innovation, LLC, better monitor, researchers, resource, baselines, building capacity, targets, including benchmarks, Learning, administrators, crucial step, guide, relationships, Nolan Malone, Lauren Mark, means, practitioners, ongoing plan, programs, Examples, following, Policymakers 1 False False True False False False False False False
77 How to Facilitate the Logic Models Workshop for Program Design and Evaluation Rodriguez, S. & Shakman, K. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast & Islands developed this webinar as a guide for educational leaders who want to introduce logic models for program design, implementation, and evaluation using a workshop toolkit available from REL Northeast & Islands (the toolkit is also in this database – see “Logic models for program design, implementation, and evaluation: Workshop toolkit”). During the webinar, the authors of the toolkit focus on the facilitation, organization, and resources necessary to conduct the workshop virtually or in person. 2015 Link to Resource Webinar Slide presentation 1 hour 13 minutes P False False False True True False False False False False False False False False False False False False False False False False False False False False False 3 False False False False False False True True False
78 Logic Models: A Tool for Designing and Monitoring Program Evaluations Lawton, B., Brandon, P. R., Cicchinelli, L., & Kekahio, W. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Pacific developed this tool to help educators understand how to use logic models as they plan and monitor program evaluations. This tool provides an introduction to how program components are connected using a logic model and provides an example to demonstrate how an education program’s resources, activities, outputs, and outcomes are illustrated through a logic model. 2014 Link to Resource Tool 5 pages P False False False True True False False False False False False False False False False False False False False False False False False False False False False Evaluation, Outcome 1 False False False True False False False False False
79 The Effectiveness of a Program to Accelerate Vocabulary Development in Kindergarten Goodson, B.; Wolf, A.; Bell, S.; Turner, H.; Finney, P. B. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southeast developed this report to describe the findings from the study of the 24 week long K-PAVE program and its impact on students’ vocabulary development and academic knowledge and on the vocabulary and comprehension support that teachers provided during book read-aloud and other instructional time. The relevant sections of the report are the random assignment and the survey appendices. Appendix C describes the matching of schools within blocks for random assignment and Appendix G includes the teacher survey. These resources are valuable for grantees who wish to conduct similar research. Other appendices focus on quantitative research methods that grantees might find useful. 2010 Link to Resource Methods Report 9 Pages R False False False False True True False False False True False False False False False False True False False False False False True False True False False Education Interventions, Kindergarten, Reading 2 True False False False True False False False False
80 Logic models for program design, implementation, and evaluation: Workshop toolkit Shakman, K., & Rodriguez, S. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast & Islands developed this toolkit to help practitioners use logic models as a tool for building a relevant evaluation design or working more effectively with evaluators whom they engage to conduct evaluations on their behalf. The toolkit describes two workshops, one (two hours long) to learn about logic models, and the other (one and a half hours long) to move from logic models to program evaluation. Appendix A provides a very simple graphical representation of a logic model (page A-1). Appendix B (page B-1) is a template. Appendix C (page C-1) is a sample. Appendix D (page D-1) is an example of making the connection with a theory of action as part of using a logic model to evaluate a program. This toolkit is a useful resource for practitioners who want to understand how to use a logic model as an effective tool for program or policy design, implementation, and evaluation. It provides an opportunity to practice creating a logic model and using it to develop evaluation questions and indicators of success. Finally, it provides guidance on how to determine the appropriate evaluation for a specific program or policy. 2015 Link to Resource Tool 118 Pages P False False False True True False False False False False False False False False False False False False False False False False False False False False False Education Evaluation, Department of Education, Education Development Center, Institute of Education Sciences, Regional Assistance, program evaluation, research-based technical assistance, unbiased large-scale evaluations of education programs, Logic models, National Center, participant workbook, Associate Commissioner, results of research, facilitator workbook, Regional Educational Laboratory Northeast, Workshop toolkit, Sue Betka, Acting Director, Chris Boccanfuso, Officer, Amy Johnson, Editor, Arne Duncan, Secretary, Karen Shakman, Rodriguez, Sheila, overall purpose, Joy Lesnick, Ruth Curran Neild, appropriate steps, policymakers, educators, practices, widespread dissemination, help practitioners, different elements, implementation, slide deck, ED-IES, Islands, synthesis, NCEE, funds, United States, Contract, design, report, Tools, Overview, REL 3 False False False True False False False False False
81 Understanding Program Monitoring: The Relationships among Outcomes, Indicators, Measures, and Targets. REL 2014-011 Malone, N.; Mark, L.; Narayan, K. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Pacific developed this guide to help educators, program managers, administrators, and researchers better monitor program outcomes. The document provides concise definitions of program monitoring components and a framework for assessing program progress. Examples demonstrate the relationships among program components: outcomes, indicators, measures (including benchmarks and baselines), and targets. This guide will be useful to policymakers and practitioners who want to better monitor and evaluate programs and thus to make better data-informed decisions. 2014 Link to Resource Guide 5 pages P False False False False True False False False False False False False False False False False False False False False False False False False False False False Capacity Building, Program Evaluation, Program Effectiveness, Outcomes of Education, Educational Indicators, Benchmarking, Family Involvement, Family School Relationship, College Readiness, Career Development, Performance Factors, Measurement, College Entrance Examinations, Scores 1 False False True False False False False False True
82 Ways to Evaluate the Success of Your Teacher Incentive Fund Project in Meeting TIF Goals Milanowski, A.; Finster, M. U.S. Department of Education, Teacher Incentive Fund The U.S. Department of Education developed this guide to help Teacher Incentive Fund (TIF) grantees collect evidence about how well they are accomplishing TIF goals (improving student achievement by increasing teacher and principal effectiveness; reforming teacher and principal compensation systems so that teachers and principals are rewarded for increases in student achievement; increasing the number of effective teachers teaching poor, minority, and disadvantaged students in hard-to-staff subjects, and creating sustainable performance-based compensation systems). The document provides examples of charts that users can create to visualize trends and comparisons. It also suggests survey questions that can be added to existing surveys to get at stakeholder perceptions of the program. It provides links to additional resources on program evaluation available on the TIF Community of Practice website. This guide will be useful to program staff who want to track progress toward program goals. However, the methods described in the guide will not provide definitive evidence to measure the program’s impact on outcomes. 2016 Link to Resource Tool Guide 12 pages P False False True True True False False False False False False False False False False True True False False False True True True False True False False Program Evaluation, Incentives, Grants, Compensation (Remuneration), Program Effectiveness, Academic Achievement, Sustainability, Teacher Effectiveness, Equal Education, Management Systems, Human Capital, Teachers, Principals, Evaluation Methods 2 True False True True False False False False False
85 Graphical Models for Quasi-Experimental Designs Kim, Y.; Steiner, P.M.; Hall, C.E.; Su, D. Society for Research on Educational Effectiveness This conference abstract was written to introduce researchers to graph-theoretical models as a way to formalize causal inference. The authors describe the mechanics and assumptions of causal graphs for experimental and quasi-experimental designs — randomized controlled trials, and regression discontinuity, instrumental variable, and matching and propensity score designs. For each design, they show how the assumptions required for identifying a causal effect are encoded in the graphical representation. They argue that causal graphs make the assumptions more transparent and easier to understand for applied researchers. This abstract will help users gain a better understanding of the designs’ assumptions and limitations and conduct better causal studies. It will be useful for researchers who want to represent the features of the study designs and data collection process that affect the identification of causal effects. 2016 Link to Resource Brief or Summary 11 pages R False False False False False True True True False False False False False False False False False False True False True True False False False False False causal estimands, threats, University of Wisconsin-Madison, Quasi-Experimental Designs, Rubin Causal Model, causal inference, Abstract Title Page, Abstract Body Limit, Conference Abstract Template, Campbell, convenient way of defining causal estimands, conceptualization of internal validity, RCM, deriving identification assumptions, Mexico Public Education Department, SREE Spring, fields, central role, estimating cause-effect relationships, history, maturation, Hall, page count, Authors, Affiliations, Background, Context, stable-unit-treatment-value assumption, instrumentation effects, strong ignorability, potential outcomes frameworks, Shadish, behavioral sciences, pages single-spaced, practical point of view, selection, tradition of ruling, researchers, psychology, propensity score, SUTVA, Cook, Steiner, Holland, formal representation, Courtney, Yongnam Kim, Dan Su, Graphical Models, Peter 2 False True False False False False False False False
86 An SEA Guide for Identifying Evidence-Based Interventions for School Improvement Lee, L.; Hughes, J.; Smith, K;. Foorman, B. Florida Center for Reading Research, Florida State University The Florida Center for Reading Research developed this guide to help state education agencies consider the evidence supporting intervention options. Of particular use for evaluation purposes, the guide describes levels of evidence as defined by ESSA and informed by WWC on pages 11-18. The guide’s main focus is to describe the steps of a collaborative self-study process to identify interventions. Self-study is a process that facilitates thoughtful investigation and discussion of an issue or topic so that decisions can be made collaboratively. The guide also provides a wealth of tools, including role definitions, an intervention scoring template, a scoring guide, a consensus rating form, a planning form for providing guidance to districts, and a sample logic model. Individual scoring guides are provided in a range of areas — systemic change, leadership, instruction, staff development and retention, and school climate. 2016 Link to Resource Guide Tool 89 pages P True False False True True False False False False False False False False False False False False False False False False False False False False False False FLORIDA CENTER, Florida Department of Education, Grant Foundation, Casey Foundation, Self-Study Guide, FLORIDA STATE UNIVERSITY, Overdeck Family Foundation, SEA Guide, Kevin Smith, READING RESEARCH, School Improvement, Self-Study Process, Tennette Smith, Barbara Foorman, John Hughes, Laurie Lee, Identifying Evidence-Based Interventions, UPDATED NOVEMBER, Funding Foundations, Sandra Dilger, Holly Edenfield, Jason Graham, Shannon Houston, Eileen McDaniel, Sonya Morris, Melissa Ramsey, Michael Stowell, Jennifer Morrison, Kim Benton, Robin Lemonis, Roy Stehle, Nathan Oakley, Sonja Robertson, Table of Contents, Mississippi Department of Education, Responsibility, Roles, South Carolina Department of Education, Annie, Flexibility, Acknowledgments, iii, Introduction 3 False False True True False False False False True
87 A Guide for Monitoring District Implementation of Educator Evaluation Systems Cherasaro, T. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Central developed this video to help professionals determine whether the associated report, The Examining Evaluator Feedback Survey, would be useful to them. The video provides an overview of the iterative process used to develop the survey, a three-step process that includes developing state guidelines that align district systems to state expectations, developing data collection methods for both policy and practice data primarily through surveys, and developing adherence criteria and reviewing data against those criteria. The video includes a description of the contents of the survey and how states and districts can use the survey. Finally, there is a short discussion about resources that are available to help those interested in using the survey. This video will be useful to program staff interested in developing their own monitoring systems or adapting the tools provided. 2016 Link to Resource Video Brief or Summary 4 min P False False False True False False False False False False False False False False False False False False False False False False False False False False False 1 False True False False False True False False False
88 Continuous Improvement in Education Excerpt 4: Practical Measures and the Driver Diagram REL Northeast & Islands Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a webinar series to help professionals put in place a process to implement and continually monitor programs. This fourth excerpt provides guidance on selecting process and outcome measures with a focus on measures that are useful to practitioners and embedded in existing practice. It introduces a tool that can be used to identify measures – the driver diagram – and illustrates its use with examples. This video will be helpful to program staff interested in implementing a continuous improvement process and anyone looking to identify a sequence of testable items that have an effect on outcomes. 2016 Link to Resource Video Tool 5 min P False True False False False False False False False False False False False False False True False False False False False False False False False False False 1 False False False True False True False False True
88 A Practical Approach to Continuous Improvement in Education Rodriguez, S.; Shakman, K. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast and Islands developed this webinar to help professionals considering implementing a continuous improvement process potentially as part of a network. Presenters describe continuous improvement as a way of thinking about systems change and quality improvement and its underlying six principles. Listeners for whom this approach is appropriate will then receive a set of implementation tools they can use in their own settings, such as how to define a problem, select key questions, set goals, and leverage tools such as Plan-Do-Study-Act cycles, aim statements, and fishbone and driver diagrams. The webinar provides a hands-on opportunity to practice as well as examples from school districts. This webinar will be useful for those interested in leveraging tools to help in the planning of a continuous monitoring process. 2016 Link to Resource Webinar Guide 1h21min P False True False False True False False False False False False False False False False False False False False False False False False False False False False 3 False False True False False False True False True
89 Continuous Improvement in Education Excerpt 2: The Plan Do Study Act (PDSA) Cycle REL Northeast & Islands Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a webinar series to help professionals put in place a process to implement and continually monitor programs. This second excerpt provides a brief overview of the Plan-Do-Study-Act process, a four-step approach to systematically implement, monitor, and improve programs. It mentions tools that help in its implementation and provides an example in an education setting. This video will be helpful to program staff interested in implementing a continuous improvement process and wanting a very basic introduction. 2016 Link to Resource Video Tool 3 min P False True False False True False False False False False False False False False False False False False False False False False False False False False False 1 False False False True False True False False True
90 Continuous Improvement in Education Excerpt 3: Fishbone Diagram REL Northeast & Islands Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a webinar series to help professionals put in place a process to continually monitor programs. This third excerpt introduces the participants to the fishbone diagram, a tool that helps identify the causes of a problem. It illustrates its use with an example. This video will be helpful to program staff interested in implementing a continuous improvement process and anyone looking to explore several layers of causes of an issue. 2016 Link to Resource Video Tool 3 min P False True False False False False False False False False False False False False False False False False False False False False False False False False False 1 False False False True False True False False False
89 Addressing Challenges and Concerns about Opportunistic Experiments Hartog, J. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) developed a video series that explains how schools, districts, states, and their research partners can use a cost-effective approach, known as “opportunistic experiments,” to test the effectiveness of programs. Opportunistic experiments take advantage of a planned intervention or policy change. This video describes four common concerns that come up before starting an opportunistic experiment (fairness, access, expense, and disruption) and three key challenges that are common once the experiment is underway (study-exempt participants, building support, and observing long-term outcomes). This video will be useful to professionals considering opportunistic experiments and using research to make decisions and improve programs. 2016 Link to Resource Video 11 min P False True True False True True False False False True False False False False False False False False False False False False False False False False False 2 False False False False False True False False False
90 Recognizing Opportunities for Rigorous Evaluation Tuttle, C. Regional Education Laboratory Program, Institute of Education Sciences and Mathematica Policy Research The Regional Educational Laboratory (REL) Program developed a video series to help schools, districts, states, and their research partners use a cost-effective approach, known as “opportunistic experiments,” to test the effectiveness of programs. This video describes key characteristics of opportunistic experiments and introduces situations that provide good opportunities for embedding experiments in existing programs with minimal disruption. The video will be useful to professionals considering rigorous evaluation that will generate evidence for informing their education decisions in a cost-effective manner. 2016 Link to Resource Video 8 min P False False True False False True False False False False False False False False False False False False False False False False False False False False False 2 False False False False False True False False True
92 Applying the WWC Standards to Postsecondary Research What Works Clearinghouse Institute of Education Sciences The What Works Clearinghouse (WWC) developed this webinar to help researchers conduct effectiveness evaluations in postsecondary settings. The facilitators provide an overview of the WWC and discuss how the WWC determines study ratings, with special considerations for postsecondary settings. They review eligible designs (randomized controlled trials and some quasi-experimental designs) and their potential to meet WWC standards. They also explain how certain aspects of a study, such as attrition, group comparability, and outcome measurements, can impact the WWC’s rating of a study. This webinar provides guidance on common, but frequently avoidable, challenges in postsecondary research that prevent studies from meeting standards. This webinar will be helpful for researchers who want to know what to look for in strong studies in postsecondary research. 2016 Link to Resource Video Guide 37 min R True False False False False True True True False True False False True True False True False False False False True False False False False False False 3 False False True False False True False False False
93 How (and What) to Train Teachers and Administrators on Education Data Privacy Vance, A. National Association of State Boards of Education The National Association of State Boards of Education developed this presentation to train teachers and administrators on the protection of student data. This video reviews state trends, common reasons for data breaches, legislation, what administrators need to know when using, saving, and sharing data and working with vendors, and ways to prevent and handle student data disclosure. This video provides several resources, most of them available online; lessons learned; and a communications toolkit for best practices while communicating with various stakeholders. Key points and resources can be easily accessed by scrolling along the timing line on screen and pausing on the slides the presenter shows. This video will be helpful to those working with sensitive data. 2016 Link to Resource Video Guide 44 min P False False False False True False False False False False False False False False True False False False False False False False False False False False False 3 False False True False False True False False False
94 Continuous Improvement in Education Excerpt 1: The Model for Improvement REL program Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a webinar series to help professionals put in place a process to implement and continually monitor programs. This first excerpt introduces the three essential questions that guide the model for improvement: what problem are we trying to solve, what changes may we introduce and why, and how will we know that the change is actual improvement. It provides an example in an education setting. This video will be helpful to program staff interested in implementing a continuous improvement process and wanting a very basic introduction. 2016 Link to Resource Video Webinar 2 min P False True False False True False False False False False False False False False False False False False False False False False False False False False False 1 False False False False False True True False False
95 Continuous Improvement in Education Excerpt 5: Practitioner Testimonial REL Northeast & Islands Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a webinar series to help professionals put in place a process to implement and continually monitor programs. This fifth excerpt features a principal who discusses her school’s use of continuous improvement to improve math discourse. She notes the types of data teachers collected, the challenges they encountered, and how they addressed them through multiple Plan-Do-Study-Act cycles. This video will be helpful to program staff considering implementing a continuous improvement process and who could benefit from a real-life example and how the experience was valuable. 2016 Link to Resource Video Webinar 4 min P False True False False True False False False False False False False False False False False False False False False False False False False False False False 1 False False False False False True True False False
100 Developing Surveys Using the Collaborative Survey Development Process Excerpt 1: Survey Research Process and Collaborative Survey Development Irwin, C.; Stafford, E. Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a video series to help professionals collaboratively develop and administer surveys. The series very briefly summarizes information from a related guide. This video mentions the three stages of the survey research process: survey development, sample selection and survey administration, and data analysis and reporting. It then asks participants why they think the development process should be collaborative. Next, it defines collaborative survey development as a process that includes educators, researchers, and content experts, which allows the leveraging of expertise, reduces the burden on any one individual, and promotes survey quality. This video will be helpful to those considering the utility of developing surveys as part of a collaboration. 2016 Link to Resource Video 5 minutes P False False False False True False False False False False False False False False False False True False False False False False False False False False False 1 False False False False False True False False False
101 Developing Surveys Using the Collaborative Survey Development Process Excerpt 2: Rewriting Items Irwin, C.; Stafford, E. Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a video series to help professionals collaboratively develop and administer surveys. The series briefly summarizes information from a related guide. This video proposes three ways to phrase a survey question and asks participants to list their weaknesses. This video will be helpful to those wanting to build their awareness of the complexity of item writing. 2016 Link to Resource Video Brief 4 minutes P False False False False True False False False False False False False False False False False True False False False False False False False False False False 1 False True False False False True False False False
102 Developing Surveys Using the Collaborative Survey Development Process Excerpt 3: Introductory Description of Cognitive Interviewing Irwin, C.; Stafford, E. Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a video series to help professionals collaboratively develop and administer surveys. The series briefly summarizes information from a related guide. This video defines cognitive interviewing as a process to identify and correct problems with surveys by watching respondents take one. This can reveal issues such as confusing items, selection bias, and respondents who open the survey and close it almost immediately. The authors have used this process as a way to pilot/pre-test a survey and recommend conducting it prior to a full pilot. This video will be helpful to those with no prior knowledge of cognitive interviewing. 2016 Link to Resource Video 4 min P False False False False True False False False False False False False False False False False True False False False False False False False False False False 1 False False False False False True False False False
103 Developing Surveys Using the Collaborative Survey Development Process Irwin, C.; Stafford, E. Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a video series to help professionals develop, administer, and analyze surveys in practical education contexts. The series builds on a set of guides available online. The presentation (from about 0:08:55 to 1:15:00) addresses collaborative survey development, defining it as a process that includes educators, researchers, and content experts, which allows the leveraging of expertise, reduces the burden on any one individual, and promotes survey quality. The video describes the establishment of a development team and the five steps in the process: identifying topics of interest, identifying survey items, drafting new items and adapting existing ones, reviewing draft items with stakeholders and content experts, and refining the draft survey with pretesting using cognitive interviewing. It provides practical examples, tools, and activities for each step. This video will be helpful to those interested in developing and administering surveys as a means of collecting data to inform education policy and practice. 2016 Link to Resource Video Guide 1 hour 32 min P False False False False True False False False False False False False False False False False True False False False False False False False False False False 3 False False True False False True False False False
104 Using Student Surveys to Monitor Teacher Effectiveness Regional Education Laboratory Program Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Mid-Atlantic developed this webinar to give participants access to a student survey to monitor teacher effectiveness. The featured presentation (from about 00:8:00 to 1:35:00) focuses on the Tripod survey, a student survey about teaching practices, student engagement, and school climate. At its core are the 7 Cs (caring, captivating, conferring, clarifying, consolidating, challenging, and controlling), a set of teaching practices that peer-reviewed research has linked to student engagement (effort and behavior) and achievement (gains on standardized tests). The video explores each measure using survey examples, results from the field for various student groups, and graphical representations. It discusses an example of use of the survey in the Measures of Effective Teaching project, which examined how evaluation methods could best be used to identify and develop great teaching. This video will be helpful to schools and districts interested in using student surveys to improve teaching and learning. 2016 Link to Resource Webinar 1 hour 55 min P False False False True True False False False False False False False False False False False True False False False True True False True True False False 3 True False False False False False True False False
105 Webinar: Examining Evaluator Feedback in Teacher Evaluation Systems Cherasaro, T.; Martin, T. Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Central developed this webinar to give professionals access to a tool to assess the quality of evaluator feedback in a teacher evaluation context. Presenters describe the survey, guiding research questions, the underlying theoretical model, and definitions. They provide an overview of the iterative steps in the survey development process, including selecting an advisory panel, testing the survey with teacher cognitive interviews, examining the reliability and validity of the survey, piloting it, and getting to a final version. They go over sample research questions the survey can help address and how to analyze data. They also provide links to pdf, Word, and Google versions of the survey. Finally, they provide a sample data display and how to interpret it. This video will be useful more generally to those interested in creating their own evaluator feedback survey or even surveys in other areas. 2016 Link to Resource Webinar 28 min P False False False True False False False False False False False False False False False False True False False False True True False False True False False 3 True False False False False False True False True
106 Supporting a Culture of Evidence Use in Education Institute of Education Sciences Institute of Education Sciences This video describes the different research resources that individuals can access through the Institute of Education Sciences (IES). The video describes the products offered through the What Works Clearinghouse (WWC) and the online search tool for thousands of research studies through the Education Resource Information Center (ERIC). It also provides information about the 10 Regional Education Laboratories (RELS) and Statewide Longitudinal Data Systems (SLDSs). This video will be useful to participants who want to make better use of evidence for education improvement and to leverage IES resources in that effort. 2016 Link to Resource Video 5 min P True False False False True False False False False False False False False False False False False False False False True False False False False False False 1 False False False False False True False False False
107 Teacher Effectiveness Data Use Regional Education Laboratory Program Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) West developed this video to help district and school leaders build capacity to better understand, analyze, and apply teacher effectiveness data to make strategic decisions regarding teacher assignment, leadership, and professional development, and to facilitate local planning around the use of these data. The video defines different types of effectiveness data, and discusses ways to use data and interpret research. It introduces a five-step process to analyze teacher effectiveness data, which includes: identifying the question, collecting data, organizing data, making data-based decisions, and communicating the data findings and decisions. The video can be used as a working session by using handouts from the Great Teachers and Leaders Supporting Principals Using Teacher Effectiveness Data Module (note that handouts are numbered differently in the video). This video will be helpful to stakeholders interested in making data-based decisions to help increase teacher effectiveness. 2016 Link to Resource Video Guide 25 min P False False False True False False False False False False False False False False False False False False False False True False False False True False False 3 True False True False False True False False False
109 Using Classroom Observations to Measure Teacher Effectiveness Regional Education Laboratory Program Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Mid-Atlantic developed this webinar to help participants use classroom observations as one measure of teacher effectiveness in a comprehensive educator support system. The featured presentation (about 0:04:56 to 1:15:00) begins with an overview of the context and general principles for educator evaluations (until about 21:00) and then goes into the effective practice of observations and professional conversations. This video will be helpful to stakeholders interested in putting in place, or improving upon, classroom observations for evaluation purposes and increased professional learning. 2016 Link to Resource Video 1 hour 42 min P False False False True False False False False False False False False False False False False True False False False True False False False False False False 3 False False False False False True False False False
110 Introduction to Group Designs: Module 1, Chapter 1, WWC Training What Works Clearinghouse Institute of Education Sciences This video provides an overview of the research designs reviewed under WWC Group Design Standards: randomized controlled trials (RCTs) and quasi-experimental designs (QEDs). 2016 Link to Resource Video 3 min P True False False False True True True False False True False False False False False False False False False False False False False False False False False 1 False False False False False True False False False
111 Randomized Control Trials: Module 1, Chapter 2, WWC Training What Works Clearinghouse Institute of Education Sciences This video goes into detail on what the WWC considers 'well executed randomization' and provides examples of randomization that are well executed and those that are not. For example, choosing students by name, birth date, or class schedule is not considered randomization. The video also discusses how randomization can be compromised. For example, if a student originally assigned to the control group is later moved into the intervention group, the study must still treat this student as a 'control' student. 2016 Link to Resource Video 16 min P True False False False True True False False False True False False False False False False False False False False False False False False False False False 2 False False False False False True False False False
112 Quasi-experimental Designs: Module 1, Chapter 3, WWC Training What Works Clearinghouse Institute of Education Sciences This video provides the WWC definition of quasi-experimental design (QED), and describes designs that are potentially acceptable to WWC. All eligible QEDs must compare the intervention group with distinct (non-overlapping) groups that are determined by a non-random process. Comparison groups can be determined through a convenience sample, national rates, or by using statistical methods. The highest WWC rating that a QED can receive is 'Meets Standards With Reservations.' 2016 Link to Resource Video 3 min P True False False False True False True False False True False False False False False False False False False False False False False False False False False 1 False False False False False True False False False
113 Group Design Knowledge Checks: Module 1, Chapter 4, WWC Training What Works Clearinghouse Institute of Education Sciences This video includes a knowledge check for viewers to assess whether they have a clear understanding of what types of designs are eligible for WWC review. Randomized Controlled trials (RCTs) and Quasi-experimental designs (QED) are addressed. 2016 Link to Resource Video 12 min P True False False False True True True False False False False False False False False False False False False False False False False False False False False 2 False False False False False True False False False
115 Identifying and Measuring Attrition: Module 2, Chapter 2, WWC Training What Works Clearinghouse Institute of Education Sciences This video defines attrition in RCTs (WWC does not consider attrition in QEDs) and discusses the risk of bias in a study due to attrition. The video provides examples of different types of attrition, defines overall versus differential attrition, and discusses how the WWC calculates attrition rates. The WWC considers attrition high when the combination of overall and differential attrition could produce bias of 0.05 standard deviations or more in the estimated effect. 2016 Link to Resource Video 15 min P True False False False True True False False False False False False True False False False False False False False False False False False False False False 2 False False False False False True False False False
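For readers who want to see the arithmetic behind the rates this video discusses, here is an illustrative sketch of overall and differential attrition; the sample sizes are made up, and this is not the WWC's official attrition calculator.

    # Illustrative overall and differential attrition rates (hypothetical counts).
    def attrition_rates(randomized_tx, analyzed_tx, randomized_c, analyzed_c):
        attrition_tx = 1 - analyzed_tx / randomized_tx   # intervention-group attrition
        attrition_c = 1 - analyzed_c / randomized_c      # comparison-group attrition
        overall = 1 - (analyzed_tx + analyzed_c) / (randomized_tx + randomized_c)
        differential = abs(attrition_tx - attrition_c)
        return overall, differential

    overall, differential = attrition_rates(
        randomized_tx=200, analyzed_tx=170,   # hypothetical intervention group
        randomized_c=200, analyzed_c=185,     # hypothetical comparison group
    )
    print(f"overall attrition: {overall:.1%}, differential attrition: {differential:.1%}")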
116 Attrition Thresholds: Module 2, Chapter 3, WWC Training What Works Clearinghouse Institute of Education Sciences The video discusses WWC’s attrition thresholds, distinguishing between liberal threshold and conservative threshold. The video also points the viewer to additional WWC resources to calculate and understand the attrition thresholds. 2016 Link to Resource Video 8 min P True False False False True True False False False False False False True False False False False False False False False False False False False False False 2 False False False False False True False False False
117 Review Issues Related to Attrition: Module 2, Chapter 4, WWC Training What Works Clearinghouse Institute of Education Sciences This video discusses how ways in which researchers exclude sample members may or may not cause an attrition bias. This video also discusses in what conditions ‘nonparticipation’ is or is not considered attrition. If researchers continue to track individuals who did not participate, and keep them in their originally assigned condition, then it is not considered attrition. 2016 Link to Resource Video 10 min P True False False False True True False False False False False False True False False False False False False False False False False False False False False 2 False False False False False True False False False
118 Attrition Knowledge Checks: Module 2, Chapter 5, WWC Training What Works Clearinghouse Institute of Education Sciences This video provides examples of study design decisions related to IES standards on attrition. Viewers are provided time to choose the best answer before the correct answer is provided. Reasons for answers are well described. 2016 Link to Resource Video 6 min P True False False False True True False False False False False False True False False False False False False False False False False False False False False 2 False False False False False True False False False
119 Why Use Experiments to Evaluate Programs? Resch, A. Regional Education Laboratory Program, Institute of Education Sciences and Mathematica Policy Research The Regional Educational Laboratory (REL) Program developed a video series to help schools, districts, states, and their research partners use a cost-effective approach, known as “opportunistic experiments,” to test the effectiveness of programs. This video describes why schools, districts, states, and their research partners might want to use experiments to evaluate programs and policies, and what they can learn from them. The video will be useful to professionals considering rigorous evaluation that will generate evidence for informing their education decisions in a cost-effective manner. 2016 Link to Resource Video 6 min P False False True False False True False False False False False False False False False False False False False False False False False False False False False 2 False False False False False True False False True
120 Defining and Measuring Baseline Equivalence: Module 3, Chapter 2, WWC Training What Works Clearinghouse Institute of Education Sciences This video defines and describes the importance of baseline equivalence between intervention and comparison groups in the analytic sample. It presents the WWC standard for equivalence (if the baseline difference between groups is greater than 0.25 standard deviations in effect size units, the study does not satisfy baseline equivalence). The video also describes the use of Hedges’ g and Cox’s Index to calculate effect sizes. 2016 Link to Resource Video 10 min R True False False False True True True False False True False False True True False False False False False False False False False False False False False 2 False False False False False True False False False
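To make the effect sizes mentioned here concrete, the following is a minimal sketch of Hedges' g for a continuous baseline measure, using the common small-sample correction 1 - 3/(4N - 9); the group statistics are hypothetical, and readers should rely on the WWC's own formulas for an actual review.

    # Minimal Hedges' g sketch for a continuous baseline measure (hypothetical data).
    import math

    def hedges_g(mean_tx, sd_tx, n_tx, mean_c, sd_c, n_c):
        pooled_var = ((n_tx - 1) * sd_tx**2 + (n_c - 1) * sd_c**2) / (n_tx + n_c - 2)
        d = (mean_tx - mean_c) / math.sqrt(pooled_var)  # standardized mean difference
        omega = 1 - 3 / (4 * (n_tx + n_c) - 9)          # small-sample correction
        return omega * d

    g = hedges_g(mean_tx=51.0, sd_tx=10.0, n_tx=120, mean_c=49.5, sd_c=9.5, n_c=118)
    print(f"baseline difference (Hedges' g): {g:.3f}")
    # Under the standard described in the video, a difference above 0.25 would not
    # satisfy baseline equivalence; differences between 0.05 and 0.25 would require
    # a statistical adjustment.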
121 The Role of Review Protocols and Other Review Issues: Module 3, Chapter 3, WWC Training What Works Clearinghouse Institute of Education Sciences This video discusses in detail how to determine which variables must show baseline equivalence on the analytic sample. Most studies must demonstrate baseline equivalence on outcome variables. However, when the outcome variable is impossible to collect at baseline, WWC provides content area ‘review protocols’ that provide acceptable proxies. The video also discusses why the use of propensity score matching cannot be used to determine baseline equivalence. It also addresses when imputed baseline data may or may not be acceptable. 2016 Link to Resource Video 13 min R True False False False True True True False False True False False False True False True False False False False False False False False False False False 2 False False False False False True False False False
122 Statistical Adjustments: Module 3, Chapter 4, WWC Training What Works Clearinghouse Institute of Education Sciences This video discusses acceptable methods to adjust for baseline pretest differences when baseline differences are between 0.05 and 0.25 standard deviations. Acceptable methods include regression covariate adjustments (including covariates in HLM) and analysis of covariance (ANCOVA). 2016 Link to Resource Video 3 min R True False False False True True True False False True False False False True False False False False False True False False False False False False False 1 False False False False False True False False False
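As an illustration of the regression covariate adjustment the video mentions, here is a minimal ordinary least squares sketch that controls for the baseline pretest when estimating an intervention effect; the data are simulated, and this is only a sketch, not the WWC's prescribed analysis.

    # Illustrative regression covariate adjustment on simulated data.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 400
    treat = rng.integers(0, 2, size=n)            # 0 = comparison, 1 = intervention
    pretest = rng.normal(50, 10, size=n)          # baseline covariate
    posttest = 5 + 0.8 * pretest + 2.0 * treat + rng.normal(0, 5, size=n)

    # Design matrix: intercept, treatment indicator, pretest covariate.
    X = np.column_stack([np.ones(n), treat, pretest])
    coef, *_ = np.linalg.lstsq(X, posttest, rcond=None)
    print(f"covariate-adjusted treatment effect estimate: {coef[1]:.2f}")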
123 Baseline Equivalence Knowledge Checks: Module 3, Chapter 5, WWC Training What Works Clearinghouse Institute of Education Sciences This video includes a knowledge check on how to define baseline equivalence, when it’s appropriate to use statistical adjustments, and in what baseline conditions can studies receive the ‘meets standards with reservations’ rating. 2016 Link to Resource Video 10 min R True False False False True True True False False True False False False True False False False False False False False False False False False False False 2 False False False False False True False False False
125 Defining Confounding Factors: Module 4, Chapter 2, WWC Training What Works Clearinghouse Institute of Education Sciences Confounding factors are components of a study that make it difficult or impossible to isolate the effect of the intervention. To be identified as a confounding factor, the study component must be observed, must align completely with only one of the study conditions, and must not be part of the intervention that the study is testing. The video provides examples of these conditions. 2016 Link to Resource Video 8 min P True False False False True False False False False False False False False True False False False False False False False False False False False False False 2 False False False False False True False False False
126 Common Circumstances in WWC Reviews: Module 4, Chapter 3, WWC Training What Works Clearinghouse Institute of Education Sciences This video explains that for a study to be judged as having a confounding factor, all three conditions of confounding factors must hold: (1) the study component is observed, (2) the component aligns completely with only one of the study conditions, and (3) the component is not part of the intervention that the study is testing. If any of these conditions does not hold, then the WWC defines it as a 'non-confounding factor.' 2016 Link to Resource Video 6 min P True False False False True False False False False False False False False True False False False False False False False False False False False False False 2 False False False False False True False False False
127 Confounding Factors Knowledge Check: Module 4, Chapter 4, WWC Training What Works Clearinghouse Institute of Education Sciences The video provides scenarios of different studies, and the viewer is asked to determine if the scenario illustrates a study with confounding factors. The video carefully explains each correct answer. 2016 Link to Resource Video 6 min P True False False False True False False False False False False False False True False False False False False False False False False False False False False 2 False False False False False True False False False
129 The WWC Outcome Standard: Module 5, Chapter 2, WWC Training What Works Clearinghouse Institute of Education Sciences This video begins by providing WWC’s definition of an outcome, outcome measure, and outcome domain, and then provides examples of how these three terms are used differently. The video then briefly reviews the four criteria the WWC uses to rate an outcome measure: (1) demonstrate face validity, (2) demonstrate reliability, (3) avoid over alignment, and (4) use consistent data collection procedures. 2016 Link to Resource Video 3 min P True False False False True False False False False False False False False False False True False False False False False False False False False False False 1 False False False False False True False False False
130 Face Validity and Reliability: Module 5, Chapter 3, WWC Training What Works Clearinghouse Institute of Education Sciences This video defines and provides examples of face validity and reliability. Face validity means that the chosen measure actually captures data on the outcome of interest. Reliability means that the measure yields similar scores across repeated administrations, with minimal risk of measurement error. 2016 Link to Resource Video 6 min P True False False False True False False False False False False False False False False True False False False False False False False False False False False 2 False False False False False True False False False
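One statistic often used to document the reliability criterion described here is internal consistency. Below is a minimal Cronbach's alpha sketch on simulated item scores; it is illustrative only, and review protocols may also accept other reliability evidence (for example, test-retest or inter-rater reliability).

    # Minimal Cronbach's alpha sketch on simulated item-level scores.
    import numpy as np

    def cronbach_alpha(items):
        # items: one row per respondent, one column per item
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(1)
    latent = rng.normal(size=200)                                     # simulated trait
    items = latent[:, None] + rng.normal(scale=0.8, size=(200, 6))    # six related items
    print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")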
131 Standardized Measures: Module 5, Chapter 4, WWC Training What Works Clearinghouse Institute of Education Sciences Standardized tests are considered to have face validity and reliability if administered in the way intended. This video describes how reviewers can identify standardized tests and provides examples of how researchers may deviate from the intended administration of the standardized tests. In these cases, the measure would not be considered reliable. 2016 Link to Resource Video 5 min P True False False False True False False False False False False False False False False True False False False False False False False False False False False 1 False False False False False True False False False
132 Over alignment and Data Collection Differences: Module 5, Chapter 5, WWC Training What Works Clearinghouse Institute of Education Sciences The video discusses the problems associated with (1) over alignment of a measure with the intervention and (2) collecting a measure differently for the intervention and comparison groups. A measure is considered to be over aligned to the intervention when it has been specifically tailored to the intervention. In these cases, the measure can provide an unfair advantage to the intervention group. Measures collected differently between intervention and comparison groups, such as differences in mode, timing, administering personnel, or construction of scores, will result in a 'does not meet WWC standards' rating. 2016 Link to Resource Video 4 min P True False False False True False False False False False False False False False False True False False False False False False False False False False False 1 False False False False False True False False False
133 Other WWC Review Issues: Module 5, Chapter 6, WWC Training What Works Clearinghouse Institute of Education Sciences This video describes how the outcome measure standards apply to baseline measures. For example, pre-intervention measures must satisfy the same reliability criteria as outcome measures. 2016 Link to Resource Video 2 min R True False False False True False False False False False False False False False False True False False False False False False False False False False False 1 False False False False False True False False False
135 Introduction to WWC Training: Intro–Chapter 1, WWC Training What Works Clearinghouse Institute of Education Sciences An introduction to the WWC Group Design Training Modules, this video provides a brief summary of the purpose and products of the WWC. 2016 Link to Resource Video 6 min P True False False False True False False False False False False False False False False False False False False False True False False False False False False 2 False False False False False True False False False
136 WWC Training Logistics: Intro–Chapter 2, WWC Training What Works Clearinghouse Institute of Education Sciences This video covers logistics for the WWC Group Design Training Modules and introduces the helpful resources available on the WWC website. 2016 Link to Resource Video 3 min P True False False False True False False False False False False False False False False False False False False False False False False False False False False 1 False False False False False True False False False
137 WWC Group Design Standards: Intro–Chapter 3, WWC Training What Works Clearinghouse Institute of Education Sciences This video provides a brief overview of the rationale for WWC standards and the three ratings that a study might receive after being reviewed by the WWC: (1) Meets standards without reservations, (2) Meets standards with reservations, and (3) Does not meet standards. 2016 Link to Resource Video 2 min P True False False False True False False False False False False False False False False False False False False False False False False False False False False 1 False False False False False True False False False
206 Analyzing Teacher Retention by Performance Level and School Need: Examples From Maricopa County Nicotera, A.; Pepper, M.; Springer, J.; Milanowski, A. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to provide district staff with simple analyses to help identify teacher retention challenges. The body of the guide goes through various measures and presents results for one school district in user-friendly graphics. The appendices describe the data, measures, and analyses. This guide is a valuable resource for district research and human resources staff who want to understand and address teacher retention patterns. 2017 Link to Resource Guide 13 pages P False False False True True False False False False False False False False False True True False False False False True False False False True True False None 2 True False True False False False False False False
207 Practical Resource Guide for Evaluating STEM Master Teacher Programs Frechtling, J. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help grantees develop evaluations. It is written in the context of programs to improve science, technology, engineering, and mathematics education but is relevant to a broad range of topics. The guide focuses on key aspects of the evaluation process, including identifying evaluation questions, selecting data collection approaches, developing the evaluation design and analysis plan, and disseminating findings. This guide is a valuable resource for program managers, project implementation teams, and their evaluators who want to understand how to design a high quality evaluation. The guide is also useful for seasoned evaluators who want to introduce practitioners to evaluation. 2014 Link to Resource Guide 22 pages P False False False True True True True False False False False False False False False False False False True False True False False False False False False None 3 False False True False False False False False True
208 Designing Teacher Evaluation Systems: New Guidance from the Measures of Effective Teaching Project Kane, T.; Kerr, K.; Pianta, R. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation published this book to share the lessons learned from its multi-year Measures of Effective Teaching project. The book is a compilation of research papers that use a large database the Foundation created in the course of the project. It focuses on three aspects of the study: using data for feedback and evaluation, connecting evaluation measures with student learning, and the properties of evaluation systems (quality, frameworks, and design decisions). This guide is a valuable resource for professionals designing and evaluating educator evaluation systems. The guide is also useful for program designers and evaluators who want to anticipate issues that are likely to arise due to their design decisions. 2015 Link to Resource Methods Report 617 pages R False False False True True False True False False False False True False False True True True False True True True True True False True False False None 3 True False False False True False False False False
209 Dimensions of Dosage: Evaluation Brief for TIF Grantees Bailey, J.; Shakman, K.; Ansel, D.; Greller, S. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to introduce evaluators to dosage as a tool that can be used when it is not possible to identify a good comparison group for evaluating the effects of a program. Dosage refers to the amount of the intervention that is delivered—for example, the number of hours teachers spend with a teacher leader. The guide describes dosage, its dimensions, associated data requirements, and considerations to keep in mind when thinking about measuring dosage. It provides an example from a school district and a list of questions to guide discussions around dosage. This guide is a valuable resource for program directors planning a high quality evaluation. 2016 Link to Resource Guide 8 pages P False False False True True False False False False True False False False False False False False False False False False False False False False False False None 2 False False True False False False False False False
210 A Composite Estimator of Effective Teaching Mihaly, K.; McCaffrey, D.; Staiger, D.; and Lockwood, J. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this report to help educator evaluation program designers select weights to place on multiple measures of effectiveness in an effort to most accurately assess teaching quality. The report focuses on a statistical method to select optimal weights and then compares those weights to others, such as ones used in state educator evaluation systems. It also examines the extent to which optimal weights vary depending on how much data are available. This guide is a valuable resource for program designers who want to understand and replicate the statistical analyses that underlie the non-technical “Ensuring Fair and Reliable Measures of Effective Teaching” (Resource # 205). 2013 Link to Resource Methods Report 51 pages R False False False True True False True False False False False False False False True True False False True True True False True False True False False None 3 True False False False True False False False False
211 Approaches to TIF Program Evaluation: Cost-Effectiveness Analysis Finster, M. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help program directors and evaluators understand cost effectiveness analysis. The one-page guide focuses on a simple, concrete example; then lists the question this type of analysis answers, the data required, how it works, and how to use results. This guide is a valuable resource for practitioners who must choose among different programs. It is also useful for evaluators in the early stages of planning an evaluation. 2016 Link to Resource Guide 1 page P False False False True True False False False False False False False False False False False False False True False False False False False False False False None 1 False False True False False False False False True
212 Approaches to TIF Program Evaluation: Interrupted Time-Series (ITS) Designs Finster, M. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help program directors and evaluators understand interrupted time series designs. The one-page guide focuses on a simple, concrete example; then lists the question this type of analysis answers, the data required, how it works, and how to use results. This guide is a valuable resource for program directors who want to measure the impact of programs by comparing multiple observations on an outcome before and after an intervention (an illustrative sketch of this logic appears after the resource list below). It is also useful for evaluators in the early stages of planning an evaluation. 2016 Link to Resource Guide 1 page P False False False True True False True False False False False False False False False False False False True False False False False False False False False None 1 False False True False False False False False True
213 Approaches to TIF Program Evaluation: Regression Discontinuity Design Finster, M. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help program directors and evaluators understand regression discontinuity designs. The one-page guide focuses on a simple, concrete example; then lists the question this type of analysis answers, the data required, how it works, and how to use results. This guide is a valuable resource for program directors who want to measure the impact of programs by comparing outcomes of similar participants on either side of a cut-off point that determines participation (an illustrative sketch of this logic appears after the resource list below). It is also useful for evaluators in the early stages of planning an evaluation. 2016 Link to Resource Guide 1 page P False False False True True False False True False True False False False False False False False False True False False False False False False False False None 1 False False True False False False False False True
214 Approaches to TIF Program Evaluation: Difference in Differences Design Finster, M. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help program directors and evaluators understand difference-in-differences designs. The one-page guide focuses on a simple, concrete example; then lists the question this type of analysis answers, the data required, how it works, and how to use results. This guide is a valuable resource for program directors who want to measure the impact of programs by comparing the difference in outcomes between the intervention and comparison groups before and after the intervention group experienced the intervention (an illustrative calculation appears after the resource list below). It is also useful for evaluators in the early stages of planning an evaluation. 2016 Link to Resource Guide 1 page P False False False True True False True False False False False False False False False False False False True False False False False False False False False None 1 False False True False False False False False True
215 Better Feedback for Better Teaching: A Practical Guide to Improving Classroom Observations Archer, J.; Cantrell, S.; Holtzman, S.; Joe, J.; Tocci, C.; Wood, J. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide focuses on the components of high-quality evaluator training, including priorities, delivery methods, the creation of training videos, the effective use of an observation rubric, and data use for continuous improvement. It also provides a series of tools from states and districts to support each step of the process. This guide is a valuable resource for program directors who want to understand how to design and implement a high quality evaluation. It is also useful for evaluators of educator evaluation systems. 2016 Link to Resource Guide Tool 364 pages P False False False True True False False False False False False False False False False False True False False False True False False False True False False None 3 True False True True False False False False True
216 Building Trust in Observations: A Blueprint for Improving Systems to Support Great Teaching [Policy and Practice Brief] Wood, J.; Tocci, C.; Joe, K.; Holtzman, S.; Cantrell, S.; Archer, J. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide focuses on the components of observation systems that help build trust – an observation rubric, evaluator training, observer assessment, and monitoring. Each component comes with a specific set of action steps in a user-friendly table format. This guide is a valuable resource for program directors who want to design and implement a high quality observation system with an eye towards building trust and buy-in. 2014 Link to Resource Guide Tool 28 pages P False False False True True False False False False False False False False False False False True False False False True False False False True False False None 3 True False True True False False False False True
217 Evaluating Programs for Strengthening Teaching and Leadership Milanowski, A.; Finster, M. Teacher and Leadership Programs The Teacher Incentive Fund developed this guide to help grantees evaluate their programs. The guide focuses on key aspects of the evaluation process, including identifying the logic of how the program will lead to the desired outcomes, developing evaluation questions that examine this logic, exploring methods for measuring these evaluation questions, choosing an appropriate evaluation design for assessing impacts, disseminating evaluation findings, and choosing the right evaluator. This guide is a valuable resource for program directors who want to understand how to design and implement a high-quality evaluation. The guide is also useful for seasoned evaluators who want to review important evaluation issues. 2016 Link to Resource Guide 67 pages P False False False True True True True False True True False False False True False True False False True True True False True True True True True None 3 True False True False False False False False True
218 Peer Evaluation of Teachers in Maricopa County’s Teacher Incentive Fund Program Milanowski, A.; Heneman, H.; Finster, M. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help program designers and evaluators implement a peer observation system. The guide describes a peer observation system in Arizona, including advantages, challenges, and mitigation strategies; peer evaluator recruitment, selection, training, evaluation, and compensation; initial perceived effects; and issues of costs and sustainability. This guide is a valuable resource for program directors who want to include peers among their evaluators to increase the total number of evaluators or reduce the time burden on current evaluators. The guide is also useful for evaluators who want to leverage teachers to collect data on educator evaluations. 2015 Link to Resource Guide 16 pages P False False False True True False False False True False False True False False False False True False False False True False False False True False False None 2 True False True False False False False False False
219 Quick and Easy Ways to Evaluate the Success of Your Teacher Incentive Fund Project in Meeting TIF Goals Milanowski, A.; Finster, M. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help grantees assess progress towards meeting program goals – increased student achievement through higher educator effectiveness; more equitable distribution of teachers; and improved, performance-based, sustainable educator compensation systems. The guide suggests simple analyses and graphical representations that describe increases in student achievement, the magnitude of awards educators receive depending on their level of effectiveness, and the distribution of teacher effectiveness among schools serving different student populations and among subjects. It suggests surveys and interviews to get at sustainability issues including stakeholder support and the development of human capital management systems. This guide is a valuable resource for program directors who want to supplement their local evaluations with simple analyses to gauge progress towards meeting program goals. The guide is also useful for program directors or evaluators who want to communicate basic information on their program’s effectiveness in a user-friendly way to a range of audiences. 2016 Link to Resource Guide 12 pages P False False False True True False False False False False False False False False False True False False False False True True True False True False False None 2 True False True False False False False False False
220 Seeing It Clearly: Improving Observer Training for Better Feedback and Better Teaching Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide focuses on training observers of educators being evaluated. The guide describes how to recruit and prepare observers; what skills to develop; and how to use data to monitor implementation. Each section provides guiding questions on how to lay the foundation and how to improve. The guide also includes a checklist, tips, tools from districts and states, and a planning worksheet. This guide is a valuable resource for program directors who want to design and implement a high quality observation system. 2015 Link to Resource Guide Tool 128 pages P False False False True True False False False False False False True False False False False True False False False False False False False False False False None 3 False False True True False False False False False
221 The Reliability of Classroom Observations by School Personnel Ho, A.; Kane, T. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this report to help evaluators of educator observation programs assess the accuracy and reliability of school personnel in performing classroom observations. A particularly useful section of the report is the description of the study design, which includes random assignment (pp. 5-8). The authors examine different combinations of observers and lessons observed, including variation in the length of observations, and the option for teachers to choose the videos that observers see. The bulk of the report (pp. 9-31) presents study results including clear tables and graphs. This guide is a valuable resource for program evaluators who want to understand the breadth of statistical analyses that underlie the non-technical “Ensuring Fair and Reliable Measures of Effective Teaching” (Resource # 205). 2013 Link to Resource Methods report 34 pages R False False False True True True False False False True False True False False False True True False True True True True False False True False False None 3 True False False False True False False False False
222 Tools For Making it Real: Pre-Scoring Video to Clarify Expectations for Effective Teaching Archer, J.; Cantrell, S.; Holtzman, S.; Joe, J.; Tocci, C.; Wood, J. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide focuses on creating and using pre-scored videos of teaching practice to build a common understanding of what teaching looks like at particular levels of effectiveness. The guide describes how to identify and acquire appropriate videos; how to recruit and prepare coders; and how to monitor for continuous improvement. Each section provides guiding questions on how to lay the foundation and how to improve. The guide also includes a planning worksheet. This guide is a valuable resource for program directors who want to design and implement a high quality observation system. 2014 Link to Resource Guide Tool 41 pages P False False False True True False False False False False False True False False False False False False False False False False False False False False False None 3 False False True True False False False False False
223 Tools for Making it Real: Pre-Scoring Video to Clarify Expectations for Effective Teaching – DCPS Video Quality Checklist Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this tool to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. Videos of teaching practice are a tool used to train observers. Master observers pre-score videos with a rubric and observers in training compare their own attempts to the master observers’. This tool is a checklist that helps determine whether a video includes sufficient evidence to rate practice and should therefore be pre-scored and used in the training of observers. This tool is a valuable resource for program directors who want to design and implement a high quality observation system. 2014 Link to Resource Tool 2 pages P False False False True True False False False False False False False False False False False False False False False False False False False False False False None 1 False False False True False False False False False
224 Tools for Making it Real: Pre-Scoring Video to Clarify Expectations for Effective Teaching – DCPS Guidelines for Score Rationale Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. Videos of teaching practice are a tool used to train observers. Master observers pre-score videos with a rubric. Clear rationales for ratings help observers in training understand what a rubric’s indicators of performance look like in practice, and it lets them compare their own attempts to identify relevant evidence with the correct evidence for rating a segment. This tool is a set of guidelines and specific examples to draft quality rationales when pre-scoring videos of teaching with a rubric. This tool is a valuable resource for program directors who want to design and implement a high quality observation system. 2014 Link to Resource Guide 3 pages P False False False True True False False False False False False False False False False False False False False False False False False False False False False None 1 False False True False False False False False False
225 Tools For Making it Real: Pre-Scoring Video to Clarify Expectations for Effective Teaching – RIFT Master Coding Worksheet Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this tool to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. Videos of teaching practice are a tool used to train observers. Master observers pre-score videos with a rubric. This tool is a form for coders to organize relevant evidence for rating while pre-scoring videos of teaching. This tool is a valuable resource for program directors who want to design and implement a high quality observation system. 2014 Link to Resource Tool 2 pages P False False False True True False False False False False False False False False False False True False False False False False False False False False False None 1 False False False True False False False False False
226 Tools For Making it Real: Pre-Scoring Video to Clarify Expectations for Effective Teaching – DCPS Advice for Facilitating Reconciliation Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this tool to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. Videos of teaching practice are a tool used to train observers. Master observers pre-score videos with a rubric. This tool is a script for reaching agreement on correct ratings when pre-scoring videos of teaching practice. This tool is a valuable resource for program directors who want to design and implement a high quality observation system. 2014 Link to Resource Tool 3 pages P False False False True True False False False False False False False False False False False False False False False False False False False False False False None 1 False False False True False False False False False
227 What It Looks Like: Master Coding Videos for Observer Training and Assessment — Summary McClellan, C. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. Observers must know what it looks like when a particular aspect of teaching is demonstrated at a particular level. Master-coded videos—videos of teachers engaged in classroom instruction that have been assigned correct scores by professionals with expertise in both the rubric and teaching practice—help ensure this competency. The guide describes the goals of master coding and the recruitment and training of coders. This guide is a valuable resource for program directors who want to design and implement a high quality observation system. 2013 Link to Resource Guide 4 pages P False False False True True False False False False False False True False False False False True False False False False False False False False False False None 1 False False True False False False False False False
228 Content Knowledge for Teaching and the MET Project – Report Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement educator evaluation systems that use multiple measures of effectiveness. The guide focuses on pedagogical content knowledge, a subject-specific professional knowledge that bridges content knowledge and knowledge about the practice of teaching. The guide introduces assessments of content knowledge for teaching mathematics and English language arts. These assessments can be administered to teachers in order to establish the links among content knowledge for teaching, instructional practice, and student achievement. This guide is a valuable resource for program directors who want to design and implement a high quality educator evaluation system. 2010 Link to Resource Guide 4 pages P False False False True True False False False False False False False False False False False True False False False False False False False True False False None 1 True False True False False False False False False
229 Danielson’s Framework for Teaching for Classroom Observations Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide presents the elements that constitute the Danielson framework, a research-based protocol to evaluate math and English language arts lessons in all grades. This guide is a valuable resource for program directors who want to select a teacher observation tool. 2010 Link to Resource Guide 4 pages P False False False True True False False False False False False False False False False False True False False False False False False False True False False None 1 True False True False False False False False False
230 The CLASS Protocol for Classroom Observations Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide presents the elements that constitute the CLASS framework, a research-based protocol to evaluate math and English language arts lessons in all grades. This guide is a valuable resource for program directors who want to select a teacher observation tool. 2010 Link to Resource Guide 4 pages P False False False True True False False False False False False False False False False False True False False False False False False False True False False None 1 True False True False False False False False False
231 The MQI Protocol for Classroom Observations Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide presents the elements that constitute the MQI framework, a research-based protocol to evaluate math lessons. This guide is a valuable resource for program directors who want to select a teacher observation tool. 2010 Link to Resource Guide 4 pages P False False False True True False False False False False False False False False False False True False False False False False False False True False False None 1 True False True False False False False False False
232 The PLATO Protocol for Classroom Observations Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide presents the elements that constitute the PLATO framework, a research-based protocol to evaluate English language arts lessons in grades four through nine. This guide is a valuable resource for program directors who want to select a teacher observation tool. 2010 Link to Resource Guide 5 pages P False False False True True False False False False False False False False False False False True False False False False False False False True False False None 1 True False True False False False False False False
236 Data Quality Essentials – Guide to Implementation: Resources for Applied Practice Watson, J.; Kraemer, S.; Thorn, C. Center for Educator Compensation Reform The Center for Educator Compensation Reform published this article to help school systems identify, address, and plan for data quality problems before performance decisions come under the scrutiny of system stakeholders. The article’s first goal is to identify the dimensions of data quality from a compensation reform perspective, provide Teacher Incentive Fund (TIF) project leaders with a data quality focus, and describe common data quality problems and solutions. Its second goal is to help TIF leaders focus their attention on the quality of student-teacher linkage data. The article concludes with a brief discussion of continuous improvement methods through follow-up and feedback cycles for teachers. 2009 Link to Resource Guide 18 pages P False False False False True False False False False False False False False False True False False True False True False False True False True False False None 2 True False True False False False False False False
237 Evaluating Principals Work: Design Considerations and Examples for an Evolving Field Kimball, S.; Clifford, M.; Fetters, J.; Arrigoni, J.; Bobola, K. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help program directors design principal evaluation systems. The first section addresses leadership effectiveness literature and the documented need for principal evaluation. The second section includes the key components of principal evaluation systems supplemented with examples from innovative school districts and states. This section addresses challenges reformers often confront in the design process and solutions for some of those challenges. The third section summarizes the critical areas of evaluator training and pilot testing. The fourth section highlights stakeholder engagement and communication. The fifth section summarizes training and pilot testing. This guide is also a valuable resource for researchers who want to assess the quality of principal evaluation systems. 2012 Link to Resource Methods Report 23 pages P False False False True True False False False False False False False False False False True True False False False False False True False True True False None 3 True False False False True False False False False
238 Asking Students About Teaching: Student Perception Surveys and Their Implementation – Policy and Practice Brief Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help program directors use student perception surveys as part of educator evaluations. The guide describes how to measure what matters — what teachers do, the learning environment they create, the theory of instruction that defines expectations for teachers in a system; how to ensure accuracy in student responses; how to ensure reliability with adequate sampling and an adequate number of items; how to support improvement in teaching; and how to engage stakeholders. The guide provides real-life examples, tools, and graphics. This guide is a valuable resource for program directors and evaluators who want to leverage student perceptions as part of program evaluations. 2012 Link to Resource Guide 4 pages P False False False True True False False False False False False False False False False False True False False False False True False False False False False None 1 False False True False False False False False False
239 National Board Certification as Professional Development: What Are Teachers Learning? Lustick, D.; Sykes, G. Education Policy Analysis Archives This Education Policy Analysis Archives article evaluates the contribution of the professional development aspect of the National Board Certification to teacher learning, providing an example evaluation approach in the process. The report focuses on handling the biases introduced by the fact that teachers self-select into this certification process. It leverages the Recurrent Institutional Cycle Design (RICD), a quasi-experimental design that accounts for the voluntary, self-selected nature of the subjects’ participation while maintaining the pre-post collection of data. RICD helps control for non-random threats to internal validity while providing a means of establishing some degree of causality between the treatment and observed results. The report also discusses how to use interview data to measure evidence of learning. Appendix A lists limitations of the research and ways they could have been mitigated with appropriate resources. This report is a valuable resource for program evaluators who are concerned with self-selection bias and those planning to collect data through interviews. 2006 Link to Resource Methods report 46 pages T False False False True True False True False False True False True False True False False True False True True True False False False True False False Intervention, National Standards, Young Adults, Program Effectiveness, Teacher Certification, Faculty Development, Teacher Competencies, Interviews, Science Instruction, Knowledge Base for Teaching, Teacher Effectiveness, Teaching Skills, Science Teachers, Outcomes of Education, Teacher Improvement, Interrater Reliability, Inquiry, Portfolio Assessment 3 True False False False True False False False False
240 Opportunities for Teacher Professional Development in Oklahoma Rural and Nonrural Schools Peltola, P.; Haynes, E.; Clymer, L.; McMillan, A.; Williams, H. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southwest developed this report to describe the findings from a principal survey on opportunities for teacher professional development in Oklahoma rural and non-rural schools. The relevant sections of the report are the appendices. Appendix A describes survey development and administration, sample selection, data file processing, and an analysis of nonresponse bias. Appendix B provides the survey. 2009 Link to Resource Tool 50 pages P False False False True True False False False False False False True False False True False True False False True False False False False True False False Rural Education, Rural Schools, Schools: Characteristics of, Teachers, Characteristics of, Professional Development 3 True False False True False False False False False
241 Teaching and Learning International Survey (TALIS) 2013: U.S. Technical Report Strizek, G. A.; Tourkin, S.; Erberber, E.; Gonzales, P. National Center for Education Statistics, U.S. Department of Education The National Center for Education Statistics developed this report to provide researchers with an overview of the design and implementation of a large-scale teacher survey that covers professional development, evaluation, school leadership and climate, instructional beliefs, and pedagogical and professional practices. The report focuses on a comprehensive description of survey development and administration, and data processing. It addresses common, important survey considerations such as sampling and non-response. Appendices provide the survey tool and recruitment documents. This series is a valuable resource for program directors and evaluators who want an introduction to or a review of all the steps involved in collecting survey data. The series is also useful for evaluators who want to use the data from this particular survey (the Teaching and Learning International Survey) to answer some of their research questions. 2014 Link to Resource Guide Tool 257 pages T False False False True True False False False False False True True True False True False True False False True False False False False True True False International Comparisons, Labor Force Experiences, Principals, Professional Development, Public Schools, Schools, Secondary Education, Elementary and Secondary Schools Staff, Teachers:Characteristics of, College Major or Minor, Education/preparation, Job Satisfaction, Perceptions and Attitudes 3 True False True True False False False False False
242 The Texas Teacher Evaluation and Support System Rubric: Properties and Association With School Characteristics Lazarev, V.; Newman, D.; Nguyen, T.; Lin, L.; Zacamy, J. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southwest developed this report to describe the properties of the Texas Teacher Evaluation and Support System rubric and its association with school characteristics. Two sections of the report are relevant. The first one is study limitations on pages 11 and 12, which discusses collecting data during early stages of implementation, small sample size, single teacher observations, and school- vs. classroom-level analyses. The other is data and methodology in Appendix C where authors describe how they combined datasets and the approach they used for each research question, including descriptive statistics, correlations, factor analysis, and statistical tests. These sections are a valuable resource for program directors and evaluators who want to create or evaluate teacher evaluation rubrics and especially determine whether they truly differentiate among teachers. Appendix C will be useful to evaluators working with multiple datasets. 2015 Link to Resource Methods report Tool 6 pages T False False False True True False False False False False False False False False True False False True True False False False False False True True False Schools: Characteristics of, Teachers: Characteristics of, Performance Evaluations, Quality of Instruction 2 True False False True True False False False False
243 2008/09 Baccalaureate and Beyond Longitudinal Study (B&B:08/09) Field Test Methodology Report – Working Paper Series Wine, J.; Cominole, M.; Janson, N.; Socha, T. National Center for Education Statistics, U.S. Department of Education The National Center for Education Statistics developed this report to describe the methodology and findings of the Baccalaureate and Beyond Longitudinal Study. The relevant section for this database is Appendix C, which provides examples of materials related to recruiting and retaining participants, including parent letters, a data collection announcement letter and flyer, and thank you letters. 2015 Link to Resource Guide Tool 284 pages P False False False False True False False False False False False True False False False False True False False False False False True False False False False 3 True False True True False False False False False
244 An Evaluation of the Data From the Teacher Compensation Survey: School Year 2006–07 Cornman, S. Q.; Johnson, F.; Zhou, L.; Honegger, S.; Noel, A. M. National Center for Education Statistics, U.S. Department of Education The National Center for Education Statistics developed this report to provide researchers with compensation, teacher status, and demographic data about all public school teachers from multiple states. Three sections are of use. Section 2 describes the data elements the survey collects. Section 4 describes data quality, and Section 6 describes survey limitations. The link to access the data is at the bottom of page 55. This survey is a valuable resource for researchers who want to analyze teacher compensation data and are concerned with the sampling error and self-reporting bias present in sample surveys, as this dataset contains universe data at the teacher level for multiple states. Sections 4 and 6 are also useful for program directors and evaluators who want to assess the quality and completeness of data, as they provide examples of data quality checks and red flags. 2009 Link to Resource Guide Tool 10 pages P False False False False True False False False False False False False False False False False True False False True False False False False True False False 2 True False True True False False False False False
245 Classroom Assessment for Student Learning: Impact on Elementary School Mathematics in the Central Region Randel, B.; Beesley, A. D.; Apthorp, H.; Clark, T.F.; Wang, X.; Cicchinelli, L. F.; Williams, J. M. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Central developed this report to provide evaluators with a detailed description of an implementation and impact evaluation of a professional development program in classroom and formative assessment. The report describes the intervention, study design and implementation, implementation fidelity, and impact on student and teacher outcomes. In the process, it raises a number of challenges and limitations, such as attrition, non-response, missing data, inability to detect very small effects, and generalizability. Appendices provide a rich level of detail as well as data collection instruments. This report is a valuable resource for program evaluators who seek an example of a program evaluation that addresses both implementation and impact and that examines a range of threats to the reliability and validity of results. 2010 Link to Resource Methods report Tool 153 pages T False False False True True True False False False True True True True True False False True False True True True False True False True False False 3 True False False True True False False False False
246 College Enrollment Patterns for Rural Indiana High School Graduates Burke, M. R.; Davis, E.; Stephan, J. L. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Midwest developed this report to describe the findings from a descriptive study of rural and non-rural differences in college enrollment patterns among Indiana’s 2010 public high school graduates enrolling in Indiana public colleges. The relevant sections of the report are the limitations section on pages 18-19 and Appendix B on data and methodology. Limitations include incomplete data, restricted ability to generalize, and inability to make causal inferences. Appendix B describes data sources, analytic sample description, missing data handling, variable creation, descriptive statistics, statistical tests, mapping analyses, a rubric on college selectivity, and regression models. These sections are a valuable resource for program directors and evaluators who want to learn or review a range of analyses that can be conducted with administrative data. 2016 Link to Resource Methods report 16 pages T False False False False True False True False False False False False False False True True False False True True True False True False False True False 2 True False False False True False False False False
247 Data Collection and Use in Early Childhood Education Programs: Evidence from the Northeast Region Zweig, J.; Irwin, C. W.; Kook, J. F.; Cox, J. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast and Islands developed this report to describe preschools’ collection and use of data on early learning outcomes, the amount of time children spend in early childhood education, and classroom quality. The relevant sections of the report are the limitations section on page 18 and the appendices. Limitations include selection bias, small sample size, and missing data. Appendix A describes the sample, recruitment strategy, interview protocol, and procedures for analyzing the interviews. Appendix B describes the sample, variables, and methodology used to analyze the data (descriptive statistics and statistical tests), including how missing data were handled. Figure C1 in Appendix C illustrates the process used to combine data from multiple sources. Appendices D and E provide interview protocols. These sections are a valuable resource for program directors and evaluators who want to learn as much as possible from a limited sample. 2015 Link to Resource Methods report Tool 18 pages T False False False False True False False False False False True True False False True True True True False True False False True False True False False 2 True False False True True False False False False
248 Impacts of Comprehensive Teacher Induction: Final Results from a Randomized Controlled Study Glazerman, S.; Isenberg, E.; Dolfin, S.; Bleeker, M.; Johnson, A.; Grider, M.; Jacobus, M. National Center for Education Statistics, U.S. Department of Education The National Center for Education Statistics developed this report to provide researchers with a description of an evaluation of the impact of a comprehensive teacher induction initiative. The report focuses on the design and implementation of a randomized experiment using data collection from multiple stakeholders. It addresses issues of comparison group selection and bias. Appendix A provides detailed information on the model and how it addresses potential bias. Figures provided throughout offer examples of various ways of displaying results. 2010 Link to Resource Methods report 272 pages T False False False True True True False False False True True True True True True False True False True True True True True False True True False 3 True False False False True False False False False
249 Incomplete Reporting: Addressing the Prevalence of Outcome-Reporting Bias in Educational Research Trainor, B.; Polanin, J.; Williams, R.; Pigott, T. SREE Spring 2015 Conference Abstract Study authors developed this brief (SREE abstract) to raise researchers’ awareness of biases due to incomplete outcome reporting — the practice of omitting from primary study reports outcome variables that were actually collected. Primary studies that do not report on all outcomes may lead to an incomplete understanding of a phenomenon, a problem that may be compounded when the study is included in a systematic review of research. The brief focuses on how the authors plan to conduct analyses of a set of dissertations to identify factors that might be related to outcome-reporting bias, such as whether the outcome was considered a primary or a secondary focus of an intervention, whether the outcome was a negative or potentially harmful effect of the intervention, and whether the measurement strategy for the outcome was reliable and valid. 2015 Link to Resource Methods report Brief 5 pages T False False False False True False False False False False False False False False False False False False False True True False False False False False False 1 False True False False True False False False False
250 Indicators of Successful Teacher Recruitment and Retention in Oklahoma Rural School Districts Lazarev, V.; Toby, M.; Zacamy, J.; Lin, L.; Newman, D. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southwest developed this report to describe teacher, district, and community characteristics in rural Oklahoma that predict which teachers are most likely to be successfully recruited and retained longer term. Three sections of the report are relevant. The first one is study limitations on page 16, which discusses unavailability of data on key variables that could predict recruitment and retention; a ten-year data time span, which does not allow for long-term analyses; and the non-experimental design, which prevents making causal inferences. Appendix B describes how variables were identified. In Appendix C on data and methodology, the authors describe the sample, datasets, outcome variable creation, treatment of missing data, accounting for the censoring of duration data, and the approach they used for each research question, including descriptive statistics, probabilities, statistical tests, and regression analyses (logistic regression and discrete-time survival analysis). It also describes how results across models were compared to understand the predictive capacity of specific groups of variables, robustness checks, and the calculation of regression-adjusted marginal probabilities to account for the fact that variables are likely to be significant simply because of the size of the database. These sections are a valuable resource for program evaluators and researchers who want to identify patterns in large datasets, especially those that involve duration variables. 2017 Link to Resource Methods report 17 pages T False False False True True False False False False False True False False False True True False False True True False False False False True True True Retention (Of Employees), States, Teachers Attrition/Mobility 2 True False False False True False False False False
251 Theoretical and Empirical Underpinnings of the What Works Clearinghouse Attrition Standard for Randomized Controlled Trials Deke, J.; Chiang, H. SREE Spring 2014 Conference Abstract Study authors developed this brief (SREE abstract) to explain the WWC attrition model, the process for selecting key parameter values for that model, how the model informed the development of the WWC attrition standard, and how the model can be used to develop attrition standards tailored to other substantive areas. The brief focuses on the model and how it can be used to estimate bias and determine whether differential attrition rates exceed a maximum tolerable level. 2014 Link to Resource Brief Methods report 9 pages T True False False False False True False False False False False False True True False False False False False True False False False False False False False Attrition (Research Studies), Student Attrition, Randomized Controlled Trials, Standards, Evidence, Models, Statistical Bias, Error of Measurement, Intervention, Educational Research 2 False True False False True False False False False
252 The What Works Clearinghouse: New Strategies to Support Non-Researchers in Using Rigorous Research in Education Decision-Making Seftor, N.; Monahan, S.; McCutcheon, A. SREE Spring 2016 Conference Abstract Study authors developed this brief (SREE abstract) to raise researchers’ awareness of What Works Clearinghouse (WWC) activities to communicate findings of rigorous research to a wide audience in clear and engaging ways. The brief focuses on three tools the WWC publishes: briefs that explain the rules the WWC uses to assess the quality of studies, a database of intervention reports, and practice guides and accompanying materials that describe low-cost, actionable recommendations to address common challenges, such as writing or algebra instruction. This brief is a valuable resource for researchers and evaluators who want to leverage rigorous research to guide and support their work and want to access these resources easily. 2016 Link to Resource Guide 7 pages P False False False False True False False False False False False False False False False False False False False False False False False False False False False Clearinghouses, Federal Programs, Educational Research, Decision Making, Evidence Based Practice, Information Dissemination, Research and Development, Theory Practice Relationship 2 False False True False False False False False True
253 Workshop on Survey Methods in Education Research: Facilitator’s guide and resources Walston, J.; Redford, J.; Bhatt, M. P. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Midwest developed this report for practitioners who want to conduct training on developing and administering surveys. The report provides materials related to various phases of the survey development process, including planning a survey, borrowing from existing surveys, writing survey items, pretesting surveys, sampling, survey administration, maximizing response rates, and measuring nonresponse bias. It also contains a section on focus groups (as part of the survey development process or as a supplementary or alternative data collection method). The materials include a sample workshop agenda, presentation handouts, activities, additional resources, and suggestions for adapting these materials to different contexts. This report is a valuable resource for program directors and evaluators who want to design surveys and focus groups to collect new data. 2017 Link to Resource Guide Tool 146 pages P False False False False True False False False False False True True False False False False True False False False False False False False False False False Educational Research 3 False False True True False False False False False
254 Compliance-Effect Correlation Bias in Instrumental Variables Estimators Reardon, S. F. SREE Spring 2010 Conference Abstract The study author developed this brief (SREE abstract) to help researchers appropriately use instrumental variables estimators to identify the effects of mediators in multi-site randomized trials under conditions of heterogeneous compliance. The brief describes how and when instrumental variables can be used under these conditions, provides a stylized example of a randomized trial investigating the impact of teacher professional development on student achievement, presents the structural model and findings, and concludes with the potentially substantial bias that misuse of these estimators can yield. 2010 Link to Resource Methods report Brief 8 pages T False False False False False True False False False False False False False True False False False False True True False False False False False False False Social Science Research, Least Squares Statistics, Computation, Correlation, Educational Research, Statistical Bias, Sampling, Research Design, Evaluation Methods, Intervention, Research Methodology, Research Problems, Measurement 2 False True False False True False False False False
255 An Evaluation of Bias in the 2007 National Household Education Surveys Program: Results From a Special Data Collection Effort Van de Kerckhove, W.; Montaquila, J.M.; Carver, P. R.; Brick, J.M. National Center for Education Statistics, U.S. Department of Education The National Center for Education Statistics developed this report to examine bias in estimates from the 2007 National Household Education Surveys Program due to nonresponse from both refusals and noncontact cases, as well as bias due to non-coverage of households that only had cell phones and households without any telephones. The most relevant sections for this database are Appendices A through F, which provide advance, refusal, and community letters, as well as the main tools for conducting in-person follow-ups: a household folder, an interviewer observation form, a “sorry I missed you” card, an appointment card, and a non-interview report form. 2008 Link to Resource Tool 53 pages P False False False False True False False False False False False True False False False False True False False False False False False False False False False Bias Analysis; Interviews by Telephone; Random Digit Dialing; Response Rates 3 False False False True False False False False False
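Illustrative note for Resource # 212 (interrupted time series). The figures below are hypothetical and are not taken from the guide; they are a minimal sketch of the before-and-after comparison the design relies on.
Quarterly teacher retention rate before the program: 80, 81, 80, 81 (a flat trend near 80-81)
Quarterly teacher retention rate after the program: 85, 85, 86, 86 (a level shift of about 5 points)
A simple interrupted time series analysis fits the pre-intervention trend, projects it forward, and treats the gap between the projected and observed post-intervention values (here, roughly 5 points) as the estimated program effect, assuming no other change occurred at the same time.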
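Illustrative note for Resource # 213 (regression discontinuity design). The cut-off and numbers below are hypothetical and are not taken from the guide; they are a minimal sketch of the comparison the design makes at the cut-off. Suppose teachers with a prior evaluation score below 50 receive a coaching program and teachers at 50 or above do not.
Fitted outcome at the cut-off for teachers just below 50 (participants): 78
Fitted outcome at the cut-off for teachers just above 50 (non-participants): 74
Estimated effect at the cut-off: 78 - 74 = 4 points
A basic regression discontinuity analysis fits separate regression lines on each side of the cut-off and interprets the gap between them at the cut-off as the program effect for participants near the threshold.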
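Illustrative note for Resource # 214 (difference-in-differences). The figures below are hypothetical and are not taken from the guide; they are a minimal sketch of the arithmetic behind the design.
Intervention group: mean score 70 before the program, 78 after (change = +8)
Comparison group: mean score 69 before the program, 74 after (change = +5)
Difference-in-differences estimate: 8 - 5 = +3 points
The +3 points can be read as the program effect only under the assumption that the two groups would have changed by the same amount in the absence of the program (the parallel trends assumption).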

How to use these filters. As you check individual field checkboxes, only documents that satisfy all of the checked boxes will appear. The more boxes you check, the fewer documents will be shown.

Resource Characteristics

Evaluation Lifecycle

Additional Focus