Presumed intent

Dr. Steve Cantrell · Published in Talking Education · Dec 20, 2019
Schoolchildren in Liberia. Photo by Adrianna Van Groningen on Unsplash

In an earlier brief, focused on a randomized controlled trial of a public-private partnership in Liberia, I helped a non-technical audience understand the difference between two methods of estimating treatment effects, Intent to Treat (ITT) and Treatment on the Treated (TOT). I explained how using ITT changes the research question and, whenever compliance is imperfect, dilutes the estimated treatment effect.
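To make the distinction concrete, here is a minimal sketch in Python, using invented numbers rather than the LEAP data, of the textbook relationship under one-sided noncompliance: the TOT estimate is the ITT estimate divided by the share of assigned students who were actually treated (the Bloom adjustment).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical trial: random assignment, but only 60% of the assigned comply
assigned = rng.integers(0, 2, n)            # 1 = assigned to treatment
complied = rng.random(n) < 0.60             # invented compliance rate
treated = assigned * complied               # one-sided: controls are never treated

true_effect = 0.25                          # invented effect on the treated (SD units)
outcome = rng.normal(0.0, 1.0, n) + true_effect * treated

# ITT compares groups as assigned, regardless of who was actually treated
itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()

# TOT (Bloom adjustment): rescale ITT by the compliance rate
compliance = treated[assigned == 1].mean()
tot = itt / compliance

print(f"ITT ≈ {itt:.3f}   (diluted toward zero by noncompliance)")
print(f"TOT ≈ {tot:.3f}   (recovers roughly the 0.25 effect on the treated)")
```

With 60 percent compliance, ITT comes out near 0.15, about 60 percent of the effect on those actually treated. Change the research question, change the number.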

If, as researchers, we want our work to influence how practitioners think and act, we owe them the courtesy of including them in the conversation. To me, this means taking the time to help them understand our methods, the analytic choices we make, and the tradeoffs that follow from those choices. When we fail to do so, we are simply asking practitioners to accept our findings on faith rather than reason. We are called upon to take the extra, and often difficult, step of rendering our work in plain language if our research is to be a persuasive act rather than a manipulative one.

Believe it or not, given the Liberia conversation, I'm a fan of estimating treatment effects using Intent to Treat. I don't question its utility for sorting out the possible unintended consequences of assignment to treatment. The first word of Intent to Treat, however, is Intent. In most cases, ascribing intention is more or less straightforward. If a doctor writes a prescription, they intend it to be followed. The intent is clear.

The authors, however, wrongly assigned intent when they estimated treatment effects for the Liberian Education Advancement Program (LEAP). Moreover, because most people involved in LEAP are not academics and are not familiar with ITT vs. TOT, they likely didn't understand that assigning intent is at the researchers' discretion, and that the choice can fundamentally change the study's results. In the many cases where intent is clear, the decision is trivial. That is not the case in LEAP.

The authors presumed an intention to treat beyond where it existed. To measure the effect of LEAP, they included all students who were on the 2015–16 school roster, the year before LEAP began. This definition is too inclusive: it sweeps in three groups of students, only one of which the intervention was ever intended for. The three groups are (1) students who were not planning to attend the school the following year, (2) students who wanted to attend the school but, due to capacity constraints, were unable to do so, and (3) students who successfully enrolled in the school. The first group simply migrated out (a frequent occurrence in Liberia). The second group wanted to be treated, but there was not enough space because of government policy. Students in the third group are the only ones LEAP providers intended to treat. So, from the outset, school providers intended to treat only one of these groups, while the researchers included all three in their estimate of the intended treatment effect.
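A back-of-the-envelope sketch, with invented group sizes and outcomes rather than the study's figures, shows how folding the first two groups into the "intended" sample mechanically shrinks the estimate. Outcomes are in standard-deviation units, with the comparison group's mean normalized to zero:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented group sizes from a prior-year roster (not the LEAP figures)
n_migrated, n_capped, n_enrolled = 200, 300, 500
effect = 0.25  # invented effect, in SD units, for students actually enrolled

y_migrated = rng.normal(0.0, 1.0, n_migrated)      # (1) left before LEAP began
y_capped = rng.normal(0.0, 1.0, n_capped)          # (2) displaced by capacity limits
y_enrolled = rng.normal(effect, 1.0, n_enrolled)   # (3) the only group providers meant to treat

# Estimate based on the group the providers actually intended to treat
est_intended = y_enrolled.mean()

# The broader roster definition pools all three groups
pooled = np.concatenate([y_migrated, y_capped, y_enrolled])
est_pooled = pooled.mean()

print(f"effect among the intended group ≈ {est_intended:.3f}")
print(f"effect across all roster students ≈ {est_pooled:.3f}  (diluted)")
```

Half the roster never received, and was never meant to receive, the intervention, so the pooled estimate lands at roughly half the effect.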

Intent to be treated isn't the same as intent to treat; otherwise, we would have to evaluate Harvard's treatment effect using the outcomes of all applicants, irrespective of where they actually went to college. Closer to this particular academic conversation, some of the best studies of US charter school effects are based upon lottery outcomes, where those who did not attend the school clearly intended to do so but couldn't due to a bad lottery draw. These charter studies don't consider using lottery nonwinners in ITT treatment effect estimates. Instead, they use lottery nonwinners as the control group. To combine the results of lottery winners and losers would be meaningless.
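A minimal sketch of that lottery logic, again with hypothetical numbers: the legitimate estimate is the winner-minus-loser contrast, and folding the losers into the "treated" sample roughly halves it when the groups are equal in size:

```python
import numpy as np

rng = np.random.default_rng(2)

n_winners, n_losers = 400, 400
effect = 0.25  # invented effect of attending, in SD units

# In a lottery design, winners attend and losers serve as the control group
y_winners = rng.normal(effect, 1.0, n_winners)
y_losers = rng.normal(0.0, 1.0, n_losers)

# The standard lottery-based estimate: winners minus losers
lottery_estimate = y_winners.mean() - y_losers.mean()

# Combining winners and losers into one "treated" sample erases the contrast
pooled = np.concatenate([y_winners, y_losers])
muddled_estimate = pooled.mean() - y_losers.mean()  # losers now sit on both sides

print(f"winners vs. losers ≈ {lottery_estimate:.3f}")
print(f"pooled sample vs. losers ≈ {muddled_estimate:.3f}  (roughly halved)")
```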

At the December 17 LEAP research release event, both the past Liberian Minister of Education (George Werner) and the representative of the current Liberian Minister of Education (Gbovadeh Gbilia) insisted that PSL/LEAP was designed to disrupt the status quo and change the conditions of schooling that thwart learning. This included removing illiterate teachers and reducing overcrowding in classrooms. The authors, despite all the details documented within the actual Memorandums of Understanding, stubbornly defined the Ministry's efforts to disrupt these conditions as school provider decisions. They labeled efforts to remove illiterate teachers and reduce overcrowding as negative externalities.

The authors built these wrong assumptions into their analytic model. Apart from including students in group one (above), who never planned to return to the school they attended the previous year, they also contradicted a government policy decision by including students the government did not intend for LEAP schools to treat. The Ministry clearly sought to reduce overcrowding and never intended for LEAP schools to educate these displaced students. Yet the authors insisted upon including displaced students in their intent-to-treat estimates. They combined the future learning outcomes of the lottery losers with those of the lottery winners and, by so doing, muddled the ITT treatment effect estimates. If LEAP supporters or funders accept the ITT outcomes as truth, without interrogating the assignment decisions, these diluted treatment effect estimates could shut down a program that is working for tens of thousands of children.

If the authors had properly assigned intent, their Intent to Treat estimates would have included only those students enrolled in school at the start of LEAP. From there, it would have been important to follow all students, including those who left the school for whatever reason. For example, to rightly understand Harvard's effect on student outcomes, it is important to include students who drop out. If the LEAP researchers had properly assigned intent, the ITT treatment effect estimates would have been legitimately attributable to the actions of school providers, and the providers, donors, and the Government would have had no reason to cry foul. I only wish it were so.

Dr. Steve Cantrell is Vice President for Measurement and Evaluation at Bridge International Academies. He is a former head of evaluation and research at the Bill & Melinda Gates Foundation where he co-directed the Measures of Effective Teaching (MET) project and led the foundation’s internal measurement, learning, and analytics. Cantrell is a former Executive Director of the US Department of Education’s Regional Educational Laboratory — Midwest. He is a former Chief Research Scientist for the Los Angeles Unified School District. He is the co-author of Better Feedback for Better Teaching (Jossey-Bass, 2016).
