Last week, we posed a big question we’ve been wrestling with at TNTP for two decades: How can we better prepare new teachers for success in the classroom?
For years, we worked hard to prepare our Teaching Fellows — recent college graduates and career-changers across the country — for the classroom. We wanted Fellows to enter the classroom performing at a higher level than their peers, but we found that wasn’t happening consistently. Instead, our results were often about as good as any other teacher prep provider’s: Some of our teachers did really well, some did not so well, and most fell in the middle.
The problem wasn’t just identifying and recruiting stronger candidates to our training programs. The bigger problem was that even when a candidate looked very strong according to our rigorous selection criteria, that didn’t mean they would do any better in the classroom than a candidate who just barely met them. Certainly, there’s a floor on selection (below which candidates just aren’t suited for the classroom), but even after tinkering with our selection model for more than fifteen years, we found it was really hard to predict who would excel in the classroom without actually seeing them teach.
So we built the Assessment of Classroom Effectiveness (ACE), which uses a variety of metrics to help us get a clear picture of how our teachers are performing in the classroom throughout their first year. Instead of certifying all the teachers who went through our training programs and met the requirements along the way, we started certifying only those who met our bar for first-year teacher performance. With ACE, we started prioritizing what really matters — how well a teacher actually teaches — rather than proxies like seat time and paperwork.
Here’s the bottom line: There is no way to truly understand how well we are preparing teachers to do the tough work of teaching unless we collect concrete data on how they’re actually performing in the classroom, with students. That’s the point of ACE — to paint a real, actionable picture of how our Teaching Fellows are doing, so we can do a better job supporting their growth and make sure we only certify teachers who are ready for the classroom.
So what does ACE tell us?
ACE gives us a steady stream of data on what skills our teachers have mastered and where they still need to improve. It tells us what it’s like to be a student in a Fellow’s classroom, thanks to student survey data. It captures what principals and coaches see in action, and it takes into account student achievement data, too. In other words, it offers a really robust picture of how our teachers are progressing throughout their first year. And it provides our first-year teachers with a clear set of expectations and frequent feedback to help them meet those expectations.
As we collected that data over time, it showed us something really important: a teacher’s first year matters. It’s not just a “survival” year; it’s a useful predictor of how they’re going to perform with students over time. When we compared ACE data to how Fellows performed in the classroom beyond the first year, we found a strong link. Even though teachers do a ton of growing in their first several years in the classroom, the first year is a pretty good indicator of their long-term success.
“It’s like the playoffs — you cannot simulate the intensity, but you still need that practice to do well. I always knew that if I implemented what I learned through the program, I’d be all right.”
-Michael Russom, 2010 Nashville Teaching Fellow
Using a 2010 i3 Validation grant, we were able to take ACE to scale across seven cities where we were training new teachers. In Baltimore, Charlotte, Chicago, D.C., Fort Worth, Nashville, and New Orleans, we were able to make sure the new teachers we recruited and prepared through our Teaching Fellows programs were only certified to teach if they were consistently helping students learn. In the four years since we rolled out ACE at scale, we’ve seen about an 80 percent initial pass rate; we’ve offered extension plans to about another 10 percent, some of whom then pass in their second year. As of fall 2014, 170,000 students have been taught by teachers who passed the ACE screen in those seven cities.
We’ve learned so much about new teacher development from ACE that our whole approach to training new teachers has shifted as a result. Now, we focus our pre-service, summer training on a discrete set of core skills we’ve identified as most essential for new teachers when they first set foot in the classroom. Those foundational skills lay the groundwork for developing more complex skills throughout the first year. We’ve also transformed our training program to give new teachers more time for practicing skills, instead of just learning about them in the abstract, and we’ve defined a coaching model that gives teachers more hands-on guidance while they’re actually teaching. Thanks to the data we’re gathering from ACE and the changes we’ve made to our training as a result, we’re producing better new teachers than when we began.
“My coach gives me strategies that I can apply the very next day. I initially had trouble grouping my students for classwork. My coach helped me cluster students so I could target key areas for improvement. On the next assessment, all of my students met or exceeded their goals.”
-Kelsey Rieck, 2012 Chicago Teaching Fellow
That’s just the beginning of our teacher development story, though. As we learned more about new teachers’ development arcs, we started to wonder what happens beyond the first year. How do we ensure that the teachers we train are not only strong practitioners when they earn their certification, but also that we’ve laid the groundwork for their continued growth? How do teachers really improve over time, and what does it take to move the needle on teacher performance at scale?
Tell us what you think. We’ll share some of our latest thinking in our next post.