How journals responded to PizzaGate

This post is long and depressing; I wouldn’t recommend reading it unless you have some kind of strange fascination with this story or want to feel terrible about the current state of science. It exists simply for posterity’s sake.

Journals are the gatekeepers of the scientific literature and ensure that we don’t read untrustworthy, non-peer-reviewed articles, going so far as to insist we only download articles from publisher websites.

Tracking the correspondences

Unfortunately, we have not contacted every journal with a problematic paper on our list, and in fact that list doesn’t even include every paper we’ve seen with issues (which is basically any Wansink paper we’ve read). For the most part, I contacted the journals of papers I criticized, and Nick Brown contacted the journals of papers he criticized.

The Pizza Papers

Perhaps what readers will be most interested in are the original pizza papers. We published our preprint detailing problems with these papers back in January, but to make sure the journals were aware of the problems, I also contacted them directly in March.

The Soy Papers

This lab has a tendency to publish papers that are highly related to each other, often (apparently) based on the same data set. There were the “pizza papers” above (where it wasn’t disclosed until later that they shared a data set); there are the University of Illinois veteran survey papers, in which it was discovered that female teenagers fought in WWII; and there is a series of papers based on a survey that is described differently each time yet always has 770 responses. One of the papers in the latter series deals with soy, and so do two (apparently unrelated) others. I’ll refer to these as the “soy papers”.

Oops, I did it again

Wansink has a tendency to “re-emphasize” some of his previous work.

Other papers

The Review of Economics and Statistics: “The Flat-Rate Pricing Paradox: Conflicting Effects of ‘All-You-Can-Eat’ Buffet Pricing”

Judging the corrections

If a correction is going to take several months, you might expect it to sufficiently correct the record, or at a minimum be accurate. Unfortunately, we found neither to be the case.

Case Study 1: Evolutionary Psychological Science

The methods reported in this pizza paper contradicted the methods in the other three pizza papers; in other words, they were falsified. Whether the methods were falsified intentionally or inadvertently is unknown.

Case Study 2: Journal of Product & Brand Management and Journal of Sensory Studies

These two “pizza paper” journals issued lengthy corrections for the papers in their respective journals. However, the corrections did not correct all of the errors, and in fact were not even consistent with the Stata code released by the lab, as I detail here. More importantly, the corrections are just a band-aid on a gunshot wound: the data on which the papers are based are basically gibberish, as detailed here.

Case Study 3: BMC Nutrition

This journal posted an expression of concern immediately after our preprint appeared, and it eventually retracted the pizza paper. However, the retraction notice does not explain why the paper was retracted, and in fact invites the authors to submit a new paper.

Case Study 4: Obesity

This correction is actually fine; the values got corrected. I would prefer it if the data were released, but whatever.

Case Study 5: JAMA Pediatrics

This paper was retracted and replaced with a “corrected” version. In a comment to Retraction Watch, the lab claimed the corrected article allowed them:

Case Study 6: Food Quality and Preference

This article contained some impossible statistics, but the “correct values were impossible to establish” since the authors couldn’t find the data, as covered by Retraction Watch. Given that the paper is from 2005, it is understandable that they no longer have the data. If this were any other lab, I probably wouldn’t see this as that big a deal. However, given that all three of the data sets this lab has released contradict the methods described in the corresponding papers, and this article already contained an impossible description of its methods, it stands to reason that the data set would likely contradict the original paper.
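For readers unfamiliar with how a reported statistic can be “impossible”: when the underlying responses are integers (e.g., Likert-scale items), only certain means are arithmetically attainable for a given sample size. That is the idea behind the GRIM test (Brown & Heathers, 2016), the kind of check used in this saga. Below is a minimal sketch in Python; the function name and example numbers are my own illustration, not values from this paper.

```python
# A minimal sketch of a GRIM-style consistency check (Brown & Heathers, 2016).
# All names and example values here are illustrative, not from the paper above.

def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Check whether a mean reported to `decimals` places could arise from
    averaging n integer-valued responses (e.g., Likert-scale items)."""
    target = round(reported_mean, decimals)
    base = int(n * reported_mean)
    # With integer data, the true sum must be a whole number near n * mean;
    # test the nearby integer totals for one that rounds back to the mean.
    for total in (base - 1, base, base + 1):
        if total >= 0 and round(total / n, decimals) == target:
            return True
    return False

# With n = 10, any true mean must end in a multiple of 0.1,
# so a reported mean of 3.45 is arithmetically impossible:
print(grim_consistent(3.45, 10))  # False -> an "impossible" statistic
print(grim_consistent(3.50, 10))  # True  -> consistent
```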

Case Study 7: Handbook of Behavior, Food and Nutrition

Nick Brown called the editor of the book about a chapter that had been recycled from earlier work, an extremely serious copyright violation. The editor was not pleased to discover this issue, but has been too busy to do anything about it.

Case Study 8: Journal of Nutrition Education and Behavior

I notified the journal of some issues with one of the tables in this paper.

Case Study 9: Psychology and Marketing

According to Nick Brown’s blog post, five articles had some portions “re-emphasized” in this article, including:

  • Wansink, B. (2013). Convenient, attractive, and normative: The CAN approach to making children slim by design. Childhood Obesity, 9, 277–278.
  • Wansink, B. (2015). Slim by design: Moving from Can’t to CAN. In C. Roberto (Ed.), Behavioral economics and public health (pp. 237–264). New York, NY: Oxford University Press.
  • Wansink, B. (2010). From mindless eating to mindlessly eating better. Physiology & Behavior, 100, 454–463.
  • Wansink, B., Just, D. R., Payne, C. R., & Klinger, M. Z. (2012). Attractive names sustain increased vegetable intake in schools. Preventive Medicine, 55, 330–332.

In other words, the Wansink scandal may not be a case of “the emperor has no clothes” as much as “the emperor went out in shorts and a tank top on a frigid day.”

That is how Tove Danovich framed the problems with Wansink’s work. But the Wansink story is actually far worse than the emperor’s. Not only does Wansink have no clothes, but everyone was convinced he was well-dressed, including the journals that are supposed to act as gatekeepers.

Creator of PrePubMed and OncoLnc http://www.omnesres.com/