What We’ve Learned About Teacher Evaluation
Nine years ago, three metropolitan school districts and four charter-school management organizations set out to test an intriguing idea: if teachers received more meaningful feedback and support to improve their craft, if teachers who earned tenure received increased pay and greater roles and responsibilities, and if the most effective teachers were concentrated where they were needed most, would the overall number of effective teachers increase and help improve student outcomes?
The answer came last month in the form of “Improving Teaching Effectiveness,” a 526-page study by the RAND Corporation and the American Institutes for Research, based on seven school years of research. Among the conclusions: Teachers in all of the targeted schools generally believed that the professional development activities in which they participated were useful for improving student learning. A majority of teachers considered the evaluation measures a valid indicator of their effectiveness as teachers, particularly a component that included direct classroom observation. And most teachers thought the evaluation system helped them improve their teaching.
Other findings of the study were less encouraging. Students whose teachers were provided with these additional evaluation and professional development activities did not post higher graduation rates compared with those whose teachers didn’t receive the support. On average, low-income and minority students at targeted schools did not have higher achievement than those at similar schools not participating in the initiative. The researchers also found that this initiative did not result in low-income and minority students getting increased access to effective teachers.
That leaves us with an important question: Was this a waste of resources? Our answer is a resounding no. While it’s important to acknowledge that the student outcomes are not what we hoped for, we learned a tremendous amount about what it takes to provide high-quality feedback to teachers based on evidence of classroom practice, and about the tradeoffs involved in trying to do that consistently and fairly, district-wide. Additionally, certain schools recorded noticeable gains in student outcomes, as was the case with reading scores of Pittsburgh high school students. Each of the districts modified its recruitment and hiring policies during the initiative, and most continue to use improved teacher evaluation as part of their regular practice for recruitment and hiring.
When we make an investment like this one, be it in education, a new vaccine or a new farming approach, we believe in sharing the lessons we learn with the field, no matter the result. As a learning organization, we believe we have a responsibility to make these lessons known. So we will continue to gather data on the impact of these systems and encourage the use of all of those tools that helped teachers improve their practice. Our commitment to research and evaluation in our own work remains.
We also see great value in what the research taught us about listening to those closest to the classrooms, and we take that to heart in our work. We believe that locally driven solutions are most effective in helping schools better serve their students and teachers, and that evidence and data are powerful tools for improving student outcomes. We support schools in developing the interventions that best fit their own needs.
Our role as a foundation is to serve as an incubator and catalyst for good ideas. We work every day to help educators, administrators and policymakers find and test the most promising paths to improve teaching and student learning. We continue to be driven by the same guiding principle: all students, and especially low-income students and students of color, must have equal access to a great public education that prepares them for success in adulthood.