Teacher Preparation Regulations

by Michael Marder, Executive Director, UTeach Austin
[Originally posted on the UTeach Blog on February 9, 2017.]

On October 31, 2016, the U.S. Department of Education published a lengthy document describing new regulations on teacher preparation. The regulations are at best unnecessary and difficult to implement, and at worst could exacerbate teacher shortages in key areas. They should be repealed.

Dr. Marder is not alone in calling for this repeal. See the AACTE Ed Prep Matters blog.

STEM teacher preparation in the U.S. does have major problems. The biggest problem of all is that too few people want to become or remain teachers in key shortage areas such as STEM and special education, while universities have little incentive to put resources into preparing them. The discussion and regulations do not acknowledge this problem. Their concern is rating preparation programs acceptable or unacceptable and shaming or punishing programs that fall below the line. For STEM teachers, at least, this will not address the primary problem of teacher supply and could make the problem even worse.

Federal regulation of teacher preparation is not completely new. Since 2001, every state has been required to “provide the Secretary with an annual list of low-performing teacher preparation programs and an identification of those programs at risk of being placed on such list, as applicable.” It was very unlikely for a teacher preparation program to be placed on this list; in 2015, only 45 programs were listed, out of nearly 28,000. The new rules set up a detailed process that will greatly increase effort and awareness on the part of both the states and the teacher preparation programs that must follow the new regulations.

Accountability for teacher preparation is not at all a bad thing. Any decent teacher preparation program should welcome objective ratings. High-quality ratings would let students know which programs are good when they enroll and let principals know which programs’ graduates are good when they hire. The problem with the new regulations is that they spring from a series of flawed assumptions that make the requirements difficult or impossible to implement, and they seem to presume that there is no accountability now, when there are already multiple levels of accountability.

One justification for the new regulations is that “. . . while the current title II reporting system produces detailed and voluminous data about teacher preparation programs, the data do not convey a clear picture of program quality as measured by how program graduates will perform in a classroom. This lack of meaningful data prevents school districts, principals, and prospective teacher candidates from making informed choices, creating a market failure due to imperfect information.”

Thus, every state has to convene a committee that will establish criteria by which to rate every teacher preparation program in the state. The committee has to include specific stakeholders, and the ratings have to include learning outcomes of students taught by program graduates, graduate job placement and retention rates, and employer surveys.

This may sound reasonable on the surface, but it will prove to be difficult, or in some cases impossible, to implement. The task of measuring student learning outcomes attributable to a particular teacher is technically challenging, and only possible with robust and complete longitudinal data. Credible methods require measuring individual student test scores in two consecutive years and associating the change to a teacher. Many states do not currently have data systems that collect data in a way that makes this possible at all.
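The gain-score logic the regulations presuppose can be sketched in a few lines. This is an illustrative simplification with made-up student records; real value-added models use regression with student covariates and statistical shrinkage, not raw mean gains, and the whole exercise collapses when the pre-test column does not exist.

```python
from collections import defaultdict

# Hypothetical records: (student_id, teacher_id, prior_year_score, current_year_score).
# In practice, linking these four fields is exactly what many state data systems cannot do.
records = [
    ("s1", "t1", 310, 335),
    ("s2", "t1", 290, 300),
    ("s3", "t2", 400, 398),
    ("s4", "t2", 350, 362),
]

def mean_gain_by_teacher(records):
    """Average score change for each teacher's students (naive gain-score measure)."""
    gains = defaultdict(list)
    for _, teacher, pre, post in records:
        gains[teacher].append(post - pre)
    return {t: sum(g) / len(g) for t, g in gains.items()}

print(mean_gain_by_teacher(records))  # → {'t1': 17.5, 't2': 5.0}
```

Even this toy version depends on every student having a comparable test score in two consecutive years; for a physics or economics teacher whose subject has no standardized pre-test, there is nothing to put in the `prior_year_score` column.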

Even in states where the data systems exist, many teachers teach classes for which consecutive tests simply do not exist. This is obviously true for subjects such as music or drama, but it is also true for many high school subjects such as physics or economics; most states do not have standardized tests in these subjects, and even if the standardized tests exist, there is no obvious or acceptable choice for a pre-test that can be used to measure the growth due to the teacher. How can a program be judged based on the performance of its graduates’ students through standardized tests that do not exist or on locally generated tests that are not standardized?

The regulations include other ways to judge programs that are more straightforward: rates of job placement and job retention of graduates. But here the rules make a curious choice, not persuasively explained, which is to allow states to exempt alternative certification programs. Why are for-profit certification providers being exempted from the only measures that can easily be applied to all programs?

It is easier to order something to be measured than to actually do it. An excellent illustration is the Title II reporting system that the regulations aim to expand. This system has been collecting and reporting data on teacher preparation programs for more than a decade. One would think that by now something simple, such as counting the number of teachers, would be under control, but that is not the case. Institutions appear in and disappear from the dataset unpredictably from year to year. Whole-state numbers fluctuate wildly up and down. For example, Texas teacher production is said to have jumped by 40% from 2009 to 2010 and back down again in 2011, although state records do not confirm this. Illinois teacher production is said to have dropped by half from 2010 to 2011. This is not plausible and is not corroborated by other data sources.
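The kind of sanity check these reported counts fail is simple to state: flag any consecutive-year change too large to be believable. The threshold and the state totals below are hypothetical, chosen only to mimic the up-and-back-down pattern described above.

```python
def implausible_swings(counts_by_year, threshold=0.25):
    """Flag year-over-year changes whose magnitude exceeds `threshold` (as a fraction)."""
    years = sorted(counts_by_year)
    flags = []
    for prev, cur in zip(years, years[1:]):
        before, after = counts_by_year[prev], counts_by_year[cur]
        if before > 0 and abs(after - before) / before > threshold:
            flags.append((prev, cur, round((after - before) / before, 2)))
    return flags

# Hypothetical totals resembling the reported Texas jump and reversal
texas_like = {2009: 25000, 2010: 36000, 2011: 25000}
print(implausible_swings(texas_like))  # → [(2009, 2010, 0.44), (2010, 2011, -0.31)]
```

A reporting system that has run for more than a decade without routinely applying a check of this sort is a poor foundation on which to build a far more demanding rating regime.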

Information and accountability for teacher preparation programs are not as deficient as often assumed. The people who most need information about teacher preparation programs and the teachers they produce are the principals who hire them. The principals already know more about the record of program graduates than any new rating system will ever tell them because they see their teachers in action and already have access to all available student test scores and much more.

As for accountability, university preparation programs are already inspected periodically by regional accreditation organizations as part of university-wide reviews; they are separately audited and licensed by state education agencies; many are also reviewed periodically by the national organization CAEP; and graduates must pass state or national exams to be licensed. Complying with the existing auditing and reporting already generates a constant stream of work that increasingly diverts time and resources from the task of preparing students to become teachers.

In one way, the recent regulations serve an important purpose. They dispel the illusion, if anyone recently had it, that teacher preparation programs are presumed to be doing a good job. Although many people point to a great teacher as an important influence in their life, the programs that prepare these teachers are not well-regarded or trusted. This loss of trust cannot be dismissed or ignored. Whether the new regulations are retained or repealed, asking seriously how the trust was lost and what teacher preparation programs must do to get it back is an urgent and immediate task, almost as urgent as continuing to prepare teachers for the next generation.