Re-evaluating Animal Charity Evaluators

A response to Jon Bockman

Harrison Nathan
Dec 22, 2016

Earlier this month, I released an extensive critique of the current Effective Altruist work on animal welfare, which in particular accused Animal Charity Evaluators (ACE) of using pseudoscience, fabricating figures, ignoring scientific literature, using unrealistic metrics which promote co-optation, and suspending its own formal criteria in its evaluation of the Good Food Institute (GFI). ACE remained mostly silent until late last night, when Jon Bockman responded to “common critiques” in a new blog post. Unfortunately, the post omits mention of my paper and only obliquely references a few of my criticisms.

The first few sections of Bockman’s post, however, do touch on some of the same topics. Regarding leafleting, a subject I discussed in detail, Bockman writes the following:

Our current leafleting report was written in 2014 and does not accurately reflect our current views or more recent developments in research about animal advocacy. The change is substantial because ACE was a very young organization in 2014, and our team and views have evolved significantly since then, as has thinking about effective animal advocacy more broadly. Our confidence in the findings of some of the studies has declined, and we now think the report is too positive about leafleting in comparison to other animal advocacy interventions in general. The language we used in the report is also generally less cautious than the language we would use now.

On cost-effectiveness estimates, a subject I discussed with a detailed example, Bockman has this to say:

We have certain reservations about our estimates, such as concerns about the strength of the evidence about animal advocacy in general and a concern that people may take certain estimates as being more definitive than we intend. However, we do also find these estimates to be a useful component of our attempts to compare the wide variety of programs that animal advocacy organizations engage in. …

Another example is with estimates that are based on data which is not highly reliable, such as those for leafleting or online ads. This is because there is no available data which would be highly reliable. While of course we would prefer to have more reliable estimates, given the available information, the best we can do is to make clear what information we’re using so that others can adjust it if they feel it’s necessary. Furthermore, even these estimates provide a check of some kind on our immediate intuitions about the effectiveness of a particular program. Despite the inherent challenges of publishing cost-effectiveness calculations, not doing so would certainly limit the usefulness of our reviews, because of the additional clarity that quantitative communication can provide. (Emphasis added.)

As is usual for ACE, these carefully worded statements appear at first blush to acknowledge a problem and commit to reform, while actually evading substantive discussion. Consider the wording “highly reliable.” Superficially, this sounds like a concession to critics, but it also implies an assertion, made without any argument or reference to any specific facts, that the available data has some degree of reliability. (I disputed this.) Also consider the phrase “estimates that are based on data which is not highly reliable.” This suggests that some of the cost-effectiveness estimates are based on highly reliable data. Which ones are those? And consider the statement “people may take certain estimates as being more definitive than we intend.” I showed that at least one of ACE’s cost-effectiveness estimates is wholly invented. It is disappointing that Bockman made no attempt to explain this.

What to make of Bockman’s statement, regarding leafleting, that “our team and views have evolved significantly since [2014]”? That’s a long time in which to correct the problem, particularly when the cost-effectiveness estimate for leafleting has since been used as the basis of other estimates, and when the ACE website still contains assertions like “The existing evidence on the impact of leafleting is among the strongest bodies of evidence bearing on animal advocacy methods.” This statement goes beyond incautious language: it is deceptive, as there is no evidence supporting the effectiveness of leafleting.

It would be difficult for ACE even to justify how it made the leafleting estimate at the time. ACE had two studies available: one that found no effect, and another (whose numerous, grossly amateurish flaws I discussed in great detail) that was interpreted as showing a very large effect. ACE’s estimate ignored the former and was based entirely on the latter. Moreover, the sole basis of the estimate is ACE’s strange interpretation wherein five vegetarians could be “patched together” (ACE’s own words) from the results of the study, which actually found that just one person became vegetarian. This post hoc construct obviously lacks validity, and it resulted in an absurdly high estimate, suggesting that it costs less than five dollars to create the equivalent of a new vegetarian.
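To see how much the headline figure depends on that construct, consider a minimal sketch of the arithmetic; the per-leaflet cost and distribution numbers below are hypothetical, chosen only to illustrate the structure of the estimate, not taken from ACE’s report:

\[
\text{cost per vegetarian-equivalent} \;=\; \frac{N_{\text{leaflets}} \times c_{\text{leaflet}}}{V_{\text{claimed}}}
\]

If, say, 100 leaflets are distributed at $0.20 each, the numerator is $20. Dividing by the five “patched together” vegetarian-equivalents gives $4 per vegetarian; dividing by the one vegetarian the study actually found gives $20. The sub-five-dollar figure is driven entirely by the choice of denominator, which is exactly the post hoc construct at issue.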

Bockman’s post does not mention retracting anything, and offers this explanation for why the bogus claims remain on the ACE website even now:

While we would love to have all our published content continually updated to reflect the latest state of our views and of research on animal advocacy in general, we have limited resources and often need to choose between updating old content and researching areas we haven’t previously written about. In 2016 we chose not to update our leafleting report because of these trade-offs.

When genuinely accountable organizations commit errors of this magnitude, they issue retractions, and they do so with the understanding that people will regard their subsequent work more skeptically rather than applaud them for “improving.” They also do this promptly when the errors are pointed out. They definitely do not continue to publicize claims they have known to be false for years, “updating” them only at their own convenience.

Genuinely accountable organizations also have explicit standards of quality. If one talks to ACE about the pervasive, extreme methodological flaws in the studies it relies upon, one hears that it cannot afford to adhere to basic scientific standards but is “building up” to better research. One never hears, however, what research standards it does adhere to. In public, ACE proudly claims to use “science to analyze the impact of interventions,” yet when it is questioned in detail about this “science,” there is only endless talk of “weak evidence.” There seems to be no limit to how weak the evidence is allowed to get, and many members of the EAA community, being unduly trusting of ACE researchers, will accept even wild guesses, pure intuition, and arbitrary constructs created without rhyme or reason. The result is that ACE’s claims about the effectiveness of interventions are not falsifiable.

ACE will not, and probably cannot, honestly respond to the criticism of its methodology, because doing so would mean acknowledging that it lacks actionable (not “highly reliable”) evidence supporting any of the interventions it claims are effective. Of the five main interventions it lists on its website, it analyzes three using unscientific studies that have failed to show any effect on behavior, and the other two involve interactions with corporations. While the effects of the latter may seem more amenable to quantitative measurement, since they produce explicit commitments, ACE’s estimates here are unrealistic because it has made no analysis of how these corporations act as intelligent entities. Its choice to award all the “animals spared” to the first non-profit to make a deal encourages non-profits to make fast deals, not good ones, and fails to account for what the corporation might have done in the absence of the interaction. (Bockman also did not mention my criticism of ACE’s evaluation of corporate outreach.)
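For concreteness, here is a minimal sketch of what a counterfactually adjusted estimate might look like; the symbols below are my own hypothetical construction, not anything ACE publishes:

\[
\text{animals spared} \;=\; A \times P_{\text{impl}} \times P_{\text{counterfactual}}
\]

Here \(A\) is the number of animals covered by the corporate commitment each year, \(P_{\text{impl}}\) is the probability the commitment is actually implemented, and \(P_{\text{counterfactual}}\) is the probability the corporation would not have acted without the campaign. Crediting the full \(A\) to the first non-profit to reach a deal amounts to setting both probabilities to one.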

I have also called into question other aspects of ACE’s charity evaluation process, with particular reference to its recommendation of the Good Food Institute as a top charity. Many others have expressed the same concern: since GFI has no accomplishments, there is no basis on which to give it a top recommendation, given that ACE claims to recommend charities based on objective evidence of their effectiveness and record. In particular, I argue that GFI totally fails on criteria 2, 4, and 5 of ACE’s evaluation process. Allison Smith wrote a separate post attempting to justify the choice of GFI, but beyond a disingenuous comparison of GFI’s non-existent track record to the actual track record of New Harvest (“Our reviews mention their short track records as weaknesses for both New Harvest and GFI”), it discusses only GFI’s “strong leadership,” “closer alignment with animal advocacy,” and future plans. ACE’s view that GFI’s plans are better than those of New Harvest (for reasons such as that GFI plans to add more paid staff, which ACE approves of, and that it will spend less on developing technology and more on regulatory and marketing issues) is based on the subjective judgments of ACE researchers, who lack relevant qualifications and should be expected to produce objective justifications for their recommendations. The question should not be whether GFI has potential, but whether, as ACE claims, the evaluation process was rigorous. It obviously wasn’t.

Jon Bockman’s discussion of conflict of interest focuses heavily on Nick Cooney. As Bockman accurately notes, “he founded THL, he works at MFA, and he is a board member at GFI.” However, Bockman inaccurately asserts that “Nick hasn’t been involved with THL in a long time.” In fact, Cooney remained the director of Humane League Labs until earlier this year and is the author of every single one of THL’s research reports. He has also been involved in research produced by MFA. In other words, Cooney has been the main person responsible for producing the pseudoscientific research that ACE relies upon to justify its belief in the effectiveness of interventions, which is the allegedly objective basis for its unfailingly consistent recommendation of Cooney’s charities. Moreover, when Cooney became involved in a new charity lacking any track record, ACE suspended its normal criteria in order to recommend it. At a minimum, Cooney’s thinking has had a great degree of influence on ACE’s thinking.

For several years now, ACE has claimed that the charities it recommends (other than GFI) engage in interventions whose effectiveness is supported by evidence, including studies. These studies have fallen far short of basic scientific standards and have been systematically misinterpreted as supporting the effectiveness of interventions. While ACE has a policy of transparency, it does not have explicit standards against which its work can be judged, and for years it has failed to correct false claims that it continues to make publicly, for example in its impact calculator. It is not sufficient to be transparent without being accountable.
