Building Evaluation Capacity: How the Coach Empowers School and District Leaders to Meet Educational Goals
What if school districts had ready access to locally generated evidence about which strategies and tools are most effective for their own students? Imagine if any district had the capacity to run rigorous trials of new initiatives, in a matter of months or even weeks, to inform its decision making.
Let’s take educational technology (ed tech) as an example. New ed tech products are released every day and adopted in schools across the nation, and schools and districts spend billions of dollars each year on ed tech. These investments include IT infrastructure, devices used by students and teachers, and instructional products from large-scale curricula to apps used in a single classroom. Districts and schools have access to a lot of data and information — marketing claims, data from software developers, data they collect — but little usable knowledge about which ed tech products are most effective for their students. Most districts don’t have the time or the capacity to conduct rigorous pilots of new technologies or evaluations of existing ones.
As school districts invest in new products and programs, they need timely, reliable evidence on which ones to choose, whether they are effective, and how best to implement them in their schools.
The Ed Tech Rapid Cycle Evaluation Coach
In partnership with the U.S. Department of Education’s Office of Educational Technology (OET), Mathematica Policy Research and SRI International have spent the past two years developing an online toolkit to help users conduct their own quick-turnaround, low-cost evaluations. The Ed Tech Rapid Cycle Evaluation Coach (Coach) is a web-based suite of tools that empowers school and district leaders to use their own data and rigorous methods to answer the question, Is this initiative or product achieving our goal?
To help users determine whether their ed tech is having the desired impact, the Coach guides them through a five-step process, from recommending an evaluation design and developing a specific research question to analyzing data and producing a brief that summarizes their findings. The Coach supports two rigorous evaluation designs: randomized pilots and matched comparison group studies. Both work for a wide range of circumstances.
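To make the core analysis concrete, here is a minimal, illustrative sketch of the kind of calculation behind a randomized pilot: estimate the difference in mean outcomes between the treatment and comparison groups, with an approximate confidence interval. This is a simplified assumption about the method, not the Coach's actual open-source code, and the function name and data are hypothetical.

```python
import math
import statistics

def randomized_pilot_impact(treatment_scores, comparison_scores, z=1.96):
    """Estimate a program's impact in a simple randomized pilot:
    the difference in mean outcomes, with an approximate 95% confidence
    interval. Illustrative only; a real analysis would typically also
    adjust for baseline covariates such as prior test scores."""
    diff = statistics.mean(treatment_scores) - statistics.mean(comparison_scores)
    # Standard error of the difference between two independent sample means
    se = math.sqrt(
        statistics.variance(treatment_scores) / len(treatment_scores)
        + statistics.variance(comparison_scores) / len(comparison_scores)
    )
    return {"impact": diff, "ci_low": diff - z * se, "ci_high": diff + z * se}

# Hypothetical end-of-unit test scores from a small pilot
treated = [78, 85, 92, 88, 74, 81, 90, 86]
control = [72, 80, 77, 83, 70, 79, 84, 75]
result = randomized_pilot_impact(treated, control)
```

If the confidence interval excludes zero, the district has some evidence the product moved outcomes; if it straddles zero, the pilot was inconclusive, which is itself useful information before a large purchase.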
Over those two years, we have worked with school districts from across the country to test and refine the Coach. In the process we have learned a lot about the needs of districts and how the Coach fits into their existing work. We have also found several features that make the Coach particularly innovative and useful.
Access. The Coach is free and online, and its interface is user-friendly. As a result, the toolkit has been accessed by users from all 50 states and every continent. The Coach's evaluation findings are also presented in easy-to-interpret ways that can feed directly into district and school decision making. Although the backend statistical analyses are complex, the results (How confident can I be that this worked?) are intuitive. Finally, the code used to develop the Coach and run the statistical analyses is open source, so anyone who can code may use and adapt it.
Training. The Coach does not presume prior knowledge of research, evaluation, or statistics. Instead, each tool within the Coach walks users step-by-step through a series of questions. Guidance and training are embedded in each tool, and more advanced users can refer to supplemental tools and technical explanations of the research methods and statistics. Over time, the Coach builds users' capacity to understand and independently conduct both randomized pilots and matched comparison design evaluations. We are helping to train a new generation of school and district staff to use evidence when making everyday decisions. We've also seen promising examples of universities using the Coach in teacher training programs.
Flexibility. The Coach has the flexibility to meet the needs of users. For instance, it can answer many types of research questions. Which product is more effective at improving student math achievement? Is this literacy practice improving student reading outcomes? Which is the more effective way to implement this program? Is this free resource as effective as the paid version we’re currently using? Although the Coach was originally designed to evaluate ed tech in schools, users are finding that the Coach helps them evaluate many different kinds of programs, products, and practices in many different settings.
The blog series
We are launching a blog series over the next seven weeks to share what we have learned and highlight complementary work happening at other organizations. The topics include the need for a culture of evidence in districts, how the Coach can inform procurement and renewal processes, the importance of rapid cycle evaluation champions within districts, the challenge of measuring non-academic outcomes, and emerging technologies. Like the Coach itself, this blog series will build evaluation capacity and help school and district staff become smart consumers of emerging learning technologies.
Bernadette Adams is a Senior Policy Advisor in the Office of Educational Technology at the U.S. Department of Education where she works to provide authentic STEM learning experiences for students, broaden student engagement and participation in STEM, and promote evidence-based adoption of emerging technologies and innovations for learning.
Tim Kautz is a Researcher at Mathematica Policy Research.
Kate Place is a Researcher at Mathematica Policy Research.
Alexandra Resch is Director of State and Local Education Partnerships at Mathematica Policy Research.
Rebecca Griffiths is a Principal Education Researcher in SRI International’s Center for Technology in Learning.