Published in Inspired Ideas

Myths Debunked: Unpacking ESSA’s Evidence-Based Research Tiers

Educators and curriculum decision makers are constantly striving to make the best, most informed decisions for their students. Since being enacted in 2015, the Every Student Succeeds Act (ESSA) has set forth guidelines and required states to develop individual plans that set their own parameters for funding, accountability, and goals for their students. The aim is to improve outcomes for students in all school districts, whether high-needs or high-performing.

The complexity of this law, specifically around its evidence-based tier system (the method for rating research on an educational program), can pose a challenge for curriculum decision makers. We're aware of several myths surrounding ESSA's evidence-based tier system that we feel are critical to unpack. This blog seeks to provide some clarity around this critical element of ESSA.

Figure: Diagram of the Every Student Succeeds Act (ESSA) evidence-based tiers, as published by the Software & Information Industry Association (SIIA).

The diagram shows ESSA's four evidence-based tiers. ESSA and the U.S. Department of Education outline four levels of evidence-based research, distinguished by the methodological design of the study:

  • Strong (Tier I)
  • Moderate (Tier II)
  • Promising (Tier III)
  • Demonstrates a Rationale (Tier IV)

Before we jump into myths surrounding this tier system, it is crucial to note:

Each evidence level corresponds to the relative strength of the research design that went into the study, rather than the magnitude of the effect of the program on student outcomes.

Myth 1: “ESSA tiers are ratings of the program.”

A fundamental misconception of the ESSA tiers is that a program's quality and effectiveness are directly indicated by the level of evidence achieved by research on that program. This is problematic because ESSA evidence tiers are, by definition,

categorizations of the research on an educational program, not a categorization of the program itself.

Although this may seem like a subtle distinction, it is critical to the understanding that “high-quality” programs can be the subject of less rigorous research, especially if the study is on a program that is new to the market and has not had time to be studied.

An unintended consequence of this myth is that some independent third-party reviewers of research and educational programs, such as Evidence for ESSA, can reinforce this misconception by listing a program's name next to the tier of evidence on their websites, implying that programs classified at a higher ESSA tier are "stronger" and higher quality. This portrayal can be misleading, and we encourage curriculum decision makers to be proactive in their analysis of all programs.

Myth 2: “We need to use ESSA’s evidence tiers as the primary screening tool when considering purchasing curriculum.”

The appeal of "evidence-based" programs, combined with the challenge of sifting through the vast number of available materials, leads some states and districts to modify their curriculum purchasing processes to prioritize a sort of "ESSA rating" screening phase. During this screening phase, programs without the backing of certain levels of ESSA evidence are disqualified from consideration before the material review stage. We discourage this practice: such processes tend to oversimplify the complexities of the ESSA evidence tier criteria, and they run counter to the five-step approach advocated by the U.S. Department of Education (1) and the best practices identified by several state departments of education (2).

Myth 3: “Evidence tiers 1–3 are better than evidence tier 4.”

This is a myth we hear often, and it is not true in all cases. Some states and districts have extended ESSA's requirement that only programs meeting the top three tiers qualify for Title I, Section 1003 funds to cover all purchases of curriculum materials. This approach rests on the false premise that programs with evidence meeting the criteria for tiers 1-3 are irrefutably better than programs with evidence meeting the criteria for tier 4, or than programs whose research evidence does not meet the criteria for any of the evidence tiers established by ESSA.

The research methodologies identified at each ESSA tier are not without their limitations. States and districts seem to have questions about the trade-offs that are implicitly required when selecting from the different evidence tiers.

EXAMPLE: The methodologies required for evidence to meet tiers 1 & 2 (RCT and quasi-experimental design) emphasize control and consistency in the research settings (i.e., schools or classrooms) at the expense of more naturalistic environments. Thus, it is often difficult to generalize tier 1 and 2 studies to typical classroom environments and to different settings due to the emphasis placed on isolating the effects of the program being studied.

Myth 4: “We need ESSA evidence-based tier information for all the instructional materials we are purchasing.”

While it is true that ESSA encourages the use of evidence-based programs and establishes a four-tiered system for understanding the quality of evidence, ESSA does not require that states or districts restrict ALL of their purchases to programs with evidence meeting one of those four tiers. There are exceptions. One such exception is for funds set aside under Title I, Section 1003 for school improvement, which can only be used on programs with evidence that meets tier 1, 2, or 3 under ESSA.

An unintended consequence of this myth is that states and districts are confused about which programs they can purchase with their own funds, as compared to federal funds. An added layer of confusion arises when state ESSA plans require programs to meet more stringent evidence requirements than the federal policy does.

Myth 5: “Randomized controlled trials (i.e., true experiments) are the best and only way for educational programs to demonstrate their effectiveness.”

ESSA identifies evidence from “well-designed and well-implemented” randomized controlled trials (RCTs) as the highest tier of evidence under the law, but it also recognizes the value of quasi-experimental studies (Tier 2) and correlational studies (Tier 3). The notion of RCTs as the gold standard for research on educational programs is based on the prominence and utility of RCTs in clinical research, where their strength lies in establishing causality. Despite their utility in other arenas, RCTs are not the only, or most well-suited, type of design for research on all educational curriculum programs, for the following reasons:

1. In an educational setting, there is no such thing as a placebo curriculum to compare a treatment curriculum to. As a result, the outcome of RCT research is dependent on the comparison/control curriculum being used and not just on the treatment curriculum being used.

2. RCTs are traditionally very costly and time-consuming, making them difficult to undertake on new and innovative programs.

3. There are ethical questions surrounding the assignment of students to conditions when the belief is that a program is beneficial. Some students will be placed at a presumed advantage or disadvantage, with no way for parents or students to influence placement.

4. Highly controlled RCT studies are often difficult to generalize to typical classroom environments and to different geographic or socioeconomic settings due to the emphasis placed on isolating the effects of the program being studied.

Myth 6: “We need to use ESSA’s evidence tiers to evaluate all educational programs.”

According to the U.S. Department of Education, the tier-based system for categorizing evidence-based research established by ESSA applies to “activities, strategies, and programs” (3). Such a broad definition means that the same tiered structure applies to all forms of programs whether they be designed for implementation at the district, school, classroom, teacher, or individual student-level, without regard for scale or scope of the program.

Therefore, when programs are compared to one another based solely on their evidence tier and effect size, without regard for their scope and scalability, it increases the likelihood that educators and administrators may underestimate the effectiveness of some programs while overestimating the effectiveness of others.

Myth 7: “As long as one study meets the criteria for an ESSA tier that study can outweigh the body of evidence for a program’s effectiveness.”

According to the U.S. Department of Education ESSA evidence guidance document, states, districts, and schools should consider “the entire body of relevant evidence” when selecting a new program (4). In contrast to this statement, however, the evidence tier-related provisions of ESSA make clear that only a single study is necessary for a program to demonstrate evidence at the tier 1, 2, or 3 level. As a result of this provision, the incentive for publishers and curriculum providers is to fund or conduct a single, highly controlled tier 1 or 2 study with extensive implementation support and little concern for the generalizability of the findings, and then, given statistically significant positive results, never to fund or conduct another study on the program, because additional studies could only weaken the program's evidence base according to ESSA.

The implication of this myth is that the one-study approach established by ESSA oversimplifies the complexities of research on educational programs and has already turned ESSA’s evidence-based recommendations into a mere check-the-box approach in some states and districts.

Myth 8: “We will look on the ESSA website to find programs that meet the ESSA tier system ratings.”

Presently, there is no way for a research study to be “reviewed by ESSA.” The Institute of Education Sciences established the What Works Clearinghouse (WWC) to provide information about educational research on specific curricula and programs, but the WWC does not rate the design of research studies using the evidence tiers under ESSA. The Center for Research and Reform in Education at Johns Hopkins University created a clearinghouse of research entitled Evidence for ESSA. Because it has “ESSA” in the name, some states and districts believe it has been federally sanctioned, but it has no official status. It does provide ratings of evidence, although it presents them in a potentially misleading way (see Myth 1). Additionally, the criteria used for its ratings differ from the criteria in the guidance document provided by the federal government.

Myth 9: “Self-supporting research from publishers and curriculum providers lacks the credibility of independent evaluation, necessary to qualify as evidence under ESSA’s evidence tier guidelines.”

It is understandable that educators and administrators concerned with the undue influence of bias tend to value research and evaluation studies conducted by independent organizations more than internal or publisher-funded studies. However, because they are the primary entities with a vested interest in funding and conducting research on their programs, publishers/curriculum providers face a “catch-22” when it comes to research and evaluation.

On the one hand, publishers are and should be doing the bulk of the research on their programs; on the other, states and districts have (at least on occasion) undervalued findings from publisher-funded and/or internally produced research. Despite some states’ ESSA plans outlining a commitment to engage in research to build additional evidence on the effectiveness of programs (5), it is not clear how or when states intend to take the lead on funding or conducting research on educational interventions, such as curriculum programs.

We hope this article has helped to clear up any confusion curriculum decision makers may have had on their journey to select the best programs for their schools.


  1. Empowered by Evidence: Using Proven Strategies to Improve Student Outcomes. (n.d.). Retrieved from
  2. Research Evaluation and Advanced Analytics: 5-Steps to Being Empowered by Evidence (2018, July 16). Retrieved from
  3. Non-Regulatory Guidance: Using Evidence to Strengthen Education Investments. (2016, September 16). Retrieved from
  4. questions.
  5. Ohio Department of Education, Ohio’s consolidated Every Student Succeeds Act (ESSA) plan. Retrieved from



McGraw Hill

Helping educators and students find their path to what’s possible. No matter where the starting point may be.