Mindless Takes

The most recent BuzzFeed article on PizzaGate got a lot of attention:

Naturally, all this brouhaha caused people to weigh in on the scandal.

The value of a person’s opinion is a function of their expertise in the subject and their ability to draw insights from that knowledge. The latter is difficult to judge without knowing the former, but the former is pretty easy to judge, and most people haven’t been following PizzaGate closely, so I don’t worry about most people’s opinions on the topic.

However, one especially bad take (accompanied by some horrendous bite-size takes) on PizzaGate seems to be shared by a lot of people and is gaining momentum, and I just couldn’t stand idly by any longer.

To start things off, we had a piece with the spiciest of titles: “In Defense of Brian Wansink”. This was written by a Cornell grad student in the Daily Sun, which up until this point had pristine takes on PizzaGate. This grad student piece about Wansink receiving too much criticism shouldn’t be confused with another grad student piece arguing that Wansink has received relatively little criticism—due to him being male and critics being sexist pigs that don’t criticize other males.

The overall point of the Daily Sun article is hardly worth discussing given its embarrassing quality, so I’ll save that for later and just point out some of the article’s atrocities.

In the very first sentence the author confuses p-hacking with salami slicing (p-hacking is analyzing data every which way until something comes out significant; salami slicing is carving a single study into as many separate publications as possible).

Then the author writes:

Studies that have resulted in Nobel Prizes are called into question.

Although I currently believe the scientific enterprise is completely broken, I’m not aware of which studies the author is referring to here. (If there were a Nobel Prize for psychology this statement would likely be true, but thankfully there isn’t.)

He writes:

They wrote letters to…Cornell urging the administration to launch an investigation.

I’m not sure who “they” are, but I’m not aware of anyone writing a letter to Cornell urging them to investigate Wansink. We did write to Cornell’s ORIA and IRB requesting help to get a data set, and made the email public, but that’s it. Maybe he’s referring to Gelman’s open letter to Cornell University’s Media Relations Office in which he responded to the results of Cornell’s investigation. I don’t know, but I’m going to go out on a limb and say he’s talking out of his ass.

He writes:

This is in contrast to all of Wansink’s colleagues whose work remained untouched by any scrutiny during this period. A great illustration here is the case of Prof. Daryl Bem, psychology, who has published research on psychic powers of premonition (I kid you not) and yet wasn’t subjected to even remotely similar level of critique.

Daryl Bem’s work is often cited as the spark that started the current revolution in psychology; he has received backlash from the media and has been a frequent punching bag of notable Wansink critic Andrew Gelman. There is even a campaign to get Bem’s paper retracted, despite the absence of calculation errors. I’m not aware of a paper ever being retracted simply because of suspected p-hacking. There was a case where a university found a researcher guilty of misconduct for what was described as “cherry picking” data; however, the retraction notice mentioned “falsification”. I’m perfectly happy to equate p-hacking and falsification, but it is a little difficult to develop guidelines on how much p-hacking constitutes falsification.

Perhaps the author is referring to the fact that no one went through Bem’s entire catalogue of research and tried to get all the papers retracted (which hasn’t even happened for Wansink, by the way). There are a couple of reasons why this might be the case. First of all, Bem is retired, so there isn’t really much motivation to discredit his (other) former work. It’s not like he is continuing to pollute the literature with new studies, giving TED talks, speaking in front of Congress, or writing best-selling books based on flawed work.

As mentioned above, it’s basically impossible to get papers retracted for p-hacking. Given that Bem openly advocated for p-hacking it might be reasonable to assume that his previous papers were also p-hacked, but then again he also advocated for publishing negative results, has been open about his research procedures, and has shared his data. I don’t know that Daryl Bem raises any more red flags than any other Ivy League psychologist, and investigating a person’s life’s work takes a lot of time.

He writes:

One of the things that separated Wansink from many of his colleagues was having a blog in which he openly discussed his research process, and his subsequent willingness to cooperate with the inquiry and learn from it. Thus the message from this public bashing, especially as it becomes progressively more severe, might not be that bad science gets punished but rather that being open about your research does.

This statement could not be more wrong. Yes, Wansink did openly discuss his research process, but he did so accidentally (and then deleted the blog post after he realized the mistake). He has not shown any cooperation with our inquiry, or made any attempt to do better work. He has been the exact opposite of “open”. He’s been a slippery weasel who has bobbed and weaved around criticism of his research for years, and was only finally taken down by “weaponized autism”.

When we emailed the authors of the pizza papers we received zero responses. It wasn’t until we contacted the lab that we got a response, and once we pointed out that we had found some problems we never heard from them again. He publicly provided bogus excuse after bogus excuse for why the data could not be shared. He repeated this runaround with Chris Chambers for another data set. Requests for other data sets have also not been granted.

Wansink continues to defend his work, despite the overwhelming opinion that his method of performing research is guaranteed to produce false positives, the shocking number of errors in his publications, and the embarrassing quality of the data sets that have been released. When journalists contact the lab, they often get no response, or are simply referred to a statement from a PR firm. When he does make public statements, they often contain falsehoods. Saying that Wansink has been open about his work is like saying Theranos was open about their blood testing technology.

In fact, I would argue that Wansink should be criticized for his response just as much as he should be criticized for his work. Perhaps he really didn’t know any better than to p-hack, collect useless data, and make hundreds of errors (although his emails suggest otherwise), and truly believed in his findings. But he must know that blatantly lying to dodge criticism is wrong.

While this grad student’s article could be excused as naive and self-serving, since a Cornell student doesn’t want the university’s reputation to take any more hits, a similar piece that appeared the very next day by an apparently actual journalist can’t.

The main difference between the two pieces is that the grad student piece focuses on the criticism by scientists, while the journalist piece focuses on the media. The journalist piece also had fewer gross mischaracterizations and outright false statements. However, the journalist quickly shows his lack of knowledge of the area:

A long, sympathetic profile of Cuddy in The New York Times Magazine last year examined the bloodthirsty bullying and public shaming that foundered her academic research career.

This “bloodthirsty bullying” consists of a replication attempt of her work, a blog post suggesting that the work supporting the power posing hypothesis was p-hacked, and the first author of the original power posing paper declaring the study was flawed. Naturally, the media picked up on these developments. But the media is a fickle beast. If we call it bullying when the media is critical, what do we call it when the media is positive?

Luckily someone called him out on Twitter about that, but he doubled down on the narrative the New York Times thought would get the most clicks (“careful researchers identify problems with bogus work” is not nearly as sexy as “mean bullies attack a poor woman just doing what everyone else does”):

Okay, so the journalist is happy to believe a biased characterization in the New York Times over actual facts, but that doesn’t guarantee his take will be wrong.

His main argument seems to be that the media is “piling on” Wansink while not adding anything new, and that it is doing so just as mindlessly as when it positively covered Wansink. I think there is a fundamental flaw in this argument. You don’t know if a story is going to add anything to the narrative until you, umm, do the story. He uses the example of BuzzFeed not adding anything to the narrative of Wansink’s p-hacking, which was already established with Wansink’s famous blog post and reported on by multiple journalists.

But what if the BuzzFeed emails had actually shown that Wansink was a careful researcher, accurate to the third decimal point, as he claims? There’s no way to know what the emails were going to say until BuzzFeed actually acquired them. And if they didn’t say something that contradicted the current narrative, was BuzzFeed supposed to not report on them? Can we only report something if it isn’t consistent with prior reports?

Sure, if you had asked me I would have said there would be clear evidence of p-hacking, or worse. Actually, the fact that the emails weren’t worse was news to me, so I found them valuable. And I think a lot of people found them valuable. A lot of people thought Wansink meant well and was just incompetent, but the emails showed that Wansink understood he was selling journals shit sandwiches, and doing so all in the name of publicity and prestige.

And besides, it’s not like every single person in the world reads a story the moment it’s published. Just because one news outlet did an excellent job covering a story doesn’t mean another organization can’t cover it as well if it thinks its readers will enjoy the story. And I wouldn’t consider this “piling on”; I would consider it “amplifying” the story.

Before I move on I should mention that the author makes some decent points.

Further, if we scrutinized all research scientists (or science journalists for that matter) the same way we are scrutinizing Wansink, who knows what we would find.

There be dragons. I could probably find a typo or error in any scientific paper ever published, maybe even a serious error. If every company were subjected to the scrutiny of Theranos, or more recently Cambridge Analytica, problems would likely be found. But problems in someone’s work are not necessarily something to write home about (unless they are gross fabrications/falsifications); it’s how the person responds to the problems that is newsworthy. When I find an error in a paper I don’t suspect misconduct until the person provides a questionable response to my inquiry.

People don’t criticize Amy Cuddy because of her terrible power pose paper(s). They criticize her because she won’t acknowledge the problems and continues to encourage people to power pose and to study power poses. Her coauthor (actually the main author of the original paper), Dana Carney, received nothing but praise for acknowledging the problems in her work. The reason we continued to look at more of Wansink’s work was that he kept downplaying the problems we had found, so we found bigger problems, and bigger problems, until eventually he couldn’t muster up excuses anymore.

Any time work is popular, it’s going to be scrutinized. A lot of people use my site OncoLnc, so naturally I get people telling me there’s a problem, and sometimes there actually is. And sometimes a person will keep complaining publicly that there are problems. If your work is scrutinized, the scrutineers may or may not have a point, but if your response to potential problems in your work is to completely ignore them, that says a lot more about the likely quality of your work than the criticisms do.

I wonder if we’d all be a little less scandalized by Wansink’s story if we always approached science as something other than sacrosanct, if we subjected science to scrutiny at all times, not simply when prevailing opinion makes it fashionable.

I completely agree with the journalist here, although this quote is a little incongruous with the previous quote. While the grad student thought we were doing a disservice by focusing so much effort on one person, the journalist wants that same amount of effort on all research. Currently in academia everyone is tripping over themselves to publish something novel, while no one is checking how any of the previous sausage was made.


Given these articles my colleague Tim took to Twitter to discuss the topic.

Naturally, the bad takes flowed.

Apparently the solution is to anonymize the investigations.

Never mind the fact that people have been focusing on the incentives and larger issues for decades to no effect.

Luckily Tim hits the nail on the head here.

If I critically assess some research, and someone else decides to critically assess some research, it eventually adds up. If everything I do is pointless because I’m just one person I guess I should just call it a day and wait to die. And besides, a single paper can have a large impact. Our Wansink criticism is so famous it’s taught in courses now. So clearly what we did had an impact.

At this point even casual observers couldn’t stomach this guy’s takes.

No one said getting papers retracted is the only solution; I even talk about how retractions are useless (although they are currently useful in that they get the media excited). He’s just putting words in people’s mouths.

Then, when the guy eventually gets around to telling us his master plan of what we should have done, it’s basically exactly what we did (and involves contacting journals to get papers retracted!):

This deeply confused person isn’t alone; I’ve talked with people who honestly believe scientific misconduct could be handled in such a way that it encourages other frauds to feel comfortable turning themselves in. Perhaps demonizing p-hacking (which has never been a part of any of our critiques of Wansink’s work; we focus on mathematical impossibilities) will discourage people from admitting some of their previous work was p-hacked, but it’s not like I’ve ever seen a line of people eager to throw their previous work under the bus. I think it’s more likely that putting the fear of God into people will make it less likely they p-hack in the future (but again, p-hacking was never the focus of our investigation).
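Since p-hacking keeps coming up, it’s worth being concrete about why the practice gets demonized at all. Here is a toy simulation of my own (purely illustrative, with made-up numbers; it is not taken from the Wansink papers or from our analyses): measure enough outcomes on pure noise, report only whichever comparison crosses p < .05, and you will “discover” an effect most of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_per_group = 30    # hypothetical diners per condition
n_outcomes = 20     # hypothetical outcomes measured (slices eaten, satisfaction, ...)
n_studies = 10_000  # simulated studies in which NO real effect exists

false_positive_studies = 0
for _ in range(n_studies):
    # Both groups are drawn from the same distribution, so any "effect" is noise
    group_a = rng.normal(size=(n_outcomes, n_per_group))
    group_b = rng.normal(size=(n_outcomes, n_per_group))
    p_values = stats.ttest_ind(group_a, group_b, axis=1).pvalue
    # The p-hacker reports whichever outcome happened to cross p < .05
    if p_values.min() < 0.05:
        false_positive_studies += 1

print(f"At least one 'significant' result in {false_positive_studies / n_studies:.0%} of null studies")
# With 20 independent outcomes this lands around 1 - 0.95**20, roughly 64%, not the nominal 5%
```

With a single pre-specified outcome the false positive rate stays at the nominal 5%; shopping across outcomes (or subgroups, or covariates) is what blows it up.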

The p-hacking aspect of this story is really what’s getting everyone confused. In fact, Gelman has not one, but two posts ranting about this.

Most people p-hack, so it’s easy to say that Wansink is just doing what most people do. But none of Wansink’s papers got corrected or retracted because of p-hacking, so Wansink is not like most people. It’s like saying that because a murderer also jaywalks, he’s just doing what most people do, and we shouldn’t focus on prosecuting murderers because most people jaywalk too. The Wansink investigation was not about p-hacking. His blog post about p-hacking alerted us to publications which contained more discrepancies than any papers we had ever come across. Nick Brown explains.
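For readers wondering what a “discrepancy” or “mathematical impossibility” actually looks like, here is a toy sketch of a GRIM-style granularity check (GRIM is Nick Brown and James Heathers’ test; the simplified function and the numbers below are my own illustration, not a reproduction of the actual analyses): when an outcome is recorded in whole numbers, a mean reported for a given sample size can only take certain values, so some mean/N pairs are simply impossible.

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Could a mean reported to `decimals` places arise from `n` integer-valued
    responses? A simplified GRIM-style granularity check (ignores edge cases
    around rounding conventions)."""
    # Any true mean must equal (an integer total) / n. Take the integer total
    # closest to the reported mean and see whether it rounds back to it.
    closest_total = round(reported_mean * n)
    closest_mean = closest_total / n
    return round(closest_mean, decimals) == round(reported_mean, decimals)

# Hypothetical numbers: with n = 25 integer responses, possible means move in
# steps of 1/25 = 0.04 (3.40, 3.44, 3.48, ...), so a reported mean of 3.45 is
# impossible, while the same mean is fine for n = 20 (69 / 20 = 3.45).
print(grim_consistent(3.45, 25))  # False -> inconsistent with the reported N
print(grim_consistent(3.45, 20))  # True
```

Checks like this are what turn “this looks p-hacked” into “these numbers cannot all be true”, which is a very different kind of claim.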

Even if the Wansink story were all about p-hacking, and his papers had been retracted for p-hacking, I still wouldn’t buy the “everybody does it” defense. By employing the “everybody does it” defense you’re implying that the violation is not a big deal, i.e., the rule being violated isn’t that important. Next time you get pulled over by the police, go ahead and mention that everybody speeds so you shouldn’t be given a ticket. See how that works out. And maybe you’re right, maybe speeding laws are silly. I wouldn’t mind hitting 100 MPH on side streets. But if I had kids I’m not sure I’d want cars flying by my house. Rules sometimes exist for a reason.

And what happens when the violation is quite serious? Let’s say in a certain society murder is common. Do we stop prosecuting murderers because of the “everybody does it” defense? You might say that sounds like a terrible place to live, but that’s exactly what’s happening, and has been happening, in academia. Academia is full of serial killers, who subject data set after data set to unspeakable torture. And instead of viewing this as a problem, people want to say it’s fine because it’s become the status quo and the most effective way to get tenure.