Prescribing isn’t treating

Dr. Steve Cantrell
Published in Talking Education · Dec 16, 2019 · 7 min read

[Image: Bridge Liberia pupils.]

The Liberian Education Advancement Program (LEAP) is a public-private partnership with eight non-state actors designed to improve learning outcomes for students across the public school system. In parallel, the Ministry of Education commissioned a three-year Randomised Controlled Trial (RCT) designed to study and measure whether LEAP did indeed improve outcomes. The Ministry’s ultimate vision is to transform learning for all 2,619 public primary schools across the country.

The results are in:

Imagine you are a doctor. You want to know if a pill you prescribed for your patients made them better. You hire a fancy US academic team to conduct a study. They tell you that the best way to know if the pill worked is to study those who received the prescriptions. Confused, you ask if it wouldn’t be better to study those who actually took a pill.

“No,” the researchers say. “We’re more interested in the prescription.”

“But,” you counter, “I really need to know if the pill I’m prescribing works.”

“Never mind that,” the researchers insist. “We know what’s best.”

Liberia’s Minister of Education finds himself in a similar predicament to this poor doctor. The Minister, on behalf of the government, wants to know how well the Liberian Education Advancement Program (LEAP) is serving students. The researchers have their own agenda.

The issue is partly methodological and partly ideological. Research methods matter, despite how impenetrable descriptions of these methods can be. At issue here is the authors’ choice to privilege Intention-To-Treat (ITT) estimates over Treatment-on-the-Treated (TOT) estimates. While this seems arcane, and indeed most of the study is written in language better suited to PhDs than to ministers of education, it can be explained quite easily: ITT focuses on the prescription, meaning those students initially assigned to the school. TOT focuses on the pill, meaning those students who actually attended the school.

ITT changes the research question from “what was the effect of the treatment?” (the question TOT aims to answer) to the quite different question “what was the effect of being assigned to the group that was supposed to receive the treatment?”

ITT waters down the treatment effects, and understanding the effect it seeks to estimate helps explain why. The treatment effects are diluted because ITT intentionally mixes the results from treated and non-treated students. This is not as unreasonable as it sounds: at the outset of a program, it is good to know that students are not being systematically excluded from it.
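The arithmetic behind this dilution can be written down directly. The sketch below is the standard textbook relation (the Bloom adjustment for no-shows), not the study’s own specification, and it assumes the simplest case: non-compliance is one-sided, meaning comparison students never receive the treatment.

```latex
% ITT compares groups by *assignment* (Z), regardless of attendance:
\text{ITT} = \mathbb{E}[Y \mid Z = 1] - \mathbb{E}[Y \mid Z = 0]

% Under one-sided non-compliance, TOT recovers the effect of the
% "pill" by rescaling ITT by the compliance rate c, the share of
% assigned students who actually attended:
\text{TOT} = \frac{\text{ITT}}{c}
```

Here Y is the learning outcome, Z is random assignment, and c is the compliance rate. The smaller c becomes, whether because students left, graduated, or never enrolled, the more the same underlying TOT effect is watered down in the ITT figure.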

Initially, there was a good reason to test for systematic exclusion. The Liberian Ministry of Education made two decisions that removed students from the most overcrowded schools. First, it lengthened the school day. This meant that schools which operated two shifts, one in the morning and one in the afternoon, could no longer do so. Second, it limited class size to 65 students. This reduced the total number of students in overcrowded schools. The researchers reported that the displaced students were similar to the remaining students on measures of academic ability. Thus, there was no systematic exclusion from LEAP schools.

ITT should have ended there, or at least relinquished its top billing to TOT. Instead, three years later, when only one-third of the original students remain at the LEAP schools, the researchers insist that studying the prescription is more important than studying the effect of the pill. The main consequence is, naturally, greatly underestimated treatment effects. The researchers chose to minimise the program’s gains by reporting a treatment effect estimate that includes students who never attended a LEAP school, students who were there only a short time, students who graduated, and entire schools that did not even participate in LEAP. The researchers, presumably at significant cost, tracked 96% of these students, tested them, then attributed their test results to schools they no longer attend (or never attended).

Predictably, the differences in the estimated treatment effects are startling. The researchers trivialise the overall ITT treatment effect in reading. They report a program-average ITT treatment effect of 0.16 standard deviations between students assigned to treatment and comparison schools, then explain the difference as equivalent to correctly reading four more words (11 vs. 15 for students enrolled in Grade 1 in 2015/2016). This is low because it includes every student who received a prescription, whether or not they took the pill. The individual providers’ TOT treatment effects, by contrast, are reported to be as much as five times higher. Indeed, 5 of 8 providers’ TOT reading estimates are at least twice the reported average ITT treatment effect for English. In math it is roughly the same story: the overall ITT treatment effect (those who received a prescription) is modest, but half or more of the individual providers produced quite impressive gains among students who actually took the pill.
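A back-of-envelope calculation shows the scale of the dilution at work. The figures below are illustrative only: they apply the Bloom adjustment sketched above to the reported 0.16 SD ITT effect under hypothetical compliance rates, not the study’s actual attendance data.

```python
# Illustrative only: how a fixed ITT estimate implies very different
# treatment-on-the-treated (TOT) effects as compliance falls.
# The compliance rates below are hypothetical, not the study's figures.

ITT_READING = 0.16  # reported program-average ITT effect (SD units)

for compliance in (1.00, 0.66, 0.33):
    tot = ITT_READING / compliance  # Bloom adjustment
    print(f"compliance {compliance:4.0%}: implied TOT = {tot:.2f} SD")
```

At one-third compliance, roughly the share of original students the report says remained at LEAP schools by year three, the implied TOT effect is about three times the headline ITT figure, which is in line with the provider-level TOT estimates running several times higher.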

More importantly, imagine how difficult it would be for a Minister of Education to even begin to interpret ITT treatment effects when these results combine learning outcomes for students under vastly different treatment conditions: some were educated in LEAP schools for three years, some for two, some for one, and others included in these results received not a single day of instruction in a LEAP school.

The authors insist upon using ITT for all treatment effects, not just for learning. This means that all estimated impacts, from teacher attendance to whether or not a student experiences school as fun, are calculated from the combination of teachers’ and students’ experiences both within and outside of the program schools. To the extent that program schools raise teacher and student attendance (they do) and increase students’ enjoyment (they did in the first year), these impacts are diluted by folding in results from as many as two-thirds of students who never attended, or no longer attend, LEAP schools.

Clearly, the choice of ITT treatment effects privileges academic concerns over the Liberian Minister of Education’s practical concerns. It appears, however, that the choices made within this study reflect more than merely academic considerations; they suggest an active attempt to diminish the program’s true impact and undermine its continuation. From both the first-year report and this newly released third-year report, it is clear that the authors have low regard for the Liberian Ministry of Education: its authority and its ability to make policy, select capable operators, enforce its contracts, and pursue the interests of its students.

While the authors’ disregard for the Liberian Ministry of Education begins with unilaterally changing the key research question from treatment effects to assignment effects (as explained above), it is also evident in the language of the report itself. The report is written in an academic, highly specialised register that is difficult to follow even for readers with that training. Other than a brief executive summary, nothing has been offered to the Ministry or the school providers to help them understand the methods used, the meaning of key terms, or the practical implications of the findings.

Along with using ITT estimates to dilute LEAP’s treatment effects, and perhaps more insidiously, the authors divert attention away from the program’s significant learning gains and dedicate three-quarters of the paper to access, sustainability and child safety. Though these are important issues worthy of consideration, the authors strain to cast them in the least favourable light.

To take just one of these issues, access to schooling: the authors plainly ignore the government’s explicit policy instructions to reduce overcrowding and re-label the responses to this policy initiative as negative externalities. The authors’ phrase “expulsion of students by private operators,” despite the sense of injustice it evokes, is sleight of hand that creates a negative externality where none exists. The inflammatory term “mass expulsion” vilifies the government’s concerns about overcrowding and reinterprets efforts to reduce overcrowding as operator malfeasance. Moreover, whereas expulsion typically requires action on the part of the school provider, the authors stretch their definition to include students who successfully graduated from primary school and did not go on to secondary school. Finally, the authors never identified any negative academic consequences for the students who were crowded out of their schools. Expulsion, despite the emotive language, is not quite what it seems.

The authors conclude their third-year report by stating unequivocally that the “state capacity to monitor and enforce contracts is weak,” questioning the government’s choice of contractors when it expanded the program, and suggesting that school providers are not aligned with public interests. The authors need to ask themselves whether their efforts have enabled the government to see more clearly, to assess operators against its highest-priority outcomes, and to act with greater confidence. Sadly, the answer to each of these questions is no. The authors have clearly served their own interests but have left the Liberian Ministry of Education no better off for their efforts.

Read the next blog in this series: Presumed Intent.

Dr. Steve Cantrell is Vice President for Measurement and Evaluation at Bridge International Academies. He is a former head of evaluation and research at the Bill & Melinda Gates Foundation where he co-directed the Measures of Effective Teaching (MET) project and led the foundation’s internal measurement, learning, and analytics. Cantrell is a former Executive Director of the US Department of Education’s Regional Educational Laboratory — Midwest. He is a former Chief Research Scientist for the Los Angeles Unified School District. He is the co-author of Better Feedback for Better Teaching (Jossey-Bass, 2016).
