Why We Find And Expose Bad Science

(It isn’t because we’re mean.)

Curious day. Just found out I was in a Buzzfeed article (who, in the absolute opposite of many news organisations, appear to be in fully-fledged flight towards rather than away from legitimacy).

It’s here. It’s also excellent, by the way.

I’m peripheral to most of this narrative — the midnight oil was burned by Nick, Jordan and Tim. However, I’m close enough to the process to confirm the above story is an accurate and complete account of events.

If you don’t have time to read it, 50 word summary:

  • a lab published a blog post outlining terrible research practices
  • subsequently, we wondered if there were errors in their work
  • there were lots (45), so we pointed them out (it took ages)
  • so far, 3 papers retracted, 8 corrected, more soon
  • the situation has gone on and on and on

(Note: not 45 errors, 45 papers with errors.)


There are a few parts of this story which really, really raise my blood pressure.

Scientific criticism (also occasionally just called ‘science’) is cyber-bullying, apparently. Nice to know that two PhD students and an independent scientist are bullying a tenured Ivy League professor.

Well, yes. Certainly I have a strong, trenchant and committed antipathy to children receiving vegetables.

(Sarcasm to one side for a second: we didn’t know what Smarter Lunchrooms were until a journalist asked about it.)

Criticism is counter-productive and unwarranted, and personal.

‘Tear PEOPLE down’, it says. Not tear down their work.

Here comes the blood pressure.


Do you know who I’m thinking of when we make criticisms that could affect someone’s career, someone’s professional life?

I’m thinking of the hordes of grad students who never had a chance because they wouldn’t fudge results for a dicey PI.

I’m thinking of the PhD students who just gradually faded away from a project that made them passionate, because being a good scientist “wasn’t enough”.

I’m thinking of Douglas Prasher, who cloned the first green fluorescent protein gene, and ended up driving a courtesy bus in a car dealership when his grants ran out.

(Image caption: green fluorescent protein — pretty AND worth the Nobel Prize in 2008.)

I’m thinking of the hordes of people on any variety of contract — junior faculty, postdoc, research scientist — who spent ten years or more on the college-to-PhD pipeline for a job that evaporated right in front of them.

I’m thinking of everyone who ran out of money, or patience, or time, or was pushed out, stamped on, marginalised or defeated. The insane amount of human scientific capital that’s fallen off the wagon. I’m thinking of everyone who started off curious and intelligent, and ended up mired in an unsustainable system where they couldn’t pursue good ideas because they heard ‘we don’t have the money’ everywhere they turned.

(What money? Government and philanthropic money, used to find out stuff and then give the ideas away. There’s only so much of this money, and there’s less than there used to be.)

So. Bad research doesn’t just affect the people in the area around it, the people who might spend years trying to take a dodgy result and extend it.

It affects everyone else who needs the money.

It’s easier to do bad research. It’s easier not to be careful. Slop the numbers around until something works, fudge a few figures, conveniently misplace a few measurements, and then you’ll be able to say you’ve made discoveries. Dress it all up in pompous gibberish. Call a spade a ‘neo-classical earth-inversion mechanism’. Parlay your amazing ‘discoveries’ into requests for more money.

Oh, and you’ll get to be on Ellen or Oprah or whichever low-fidelity grinning windbag gives you cachet these days.

Before you know it, you’ve built an extremely expensive house of extremely cheap cards.

In this context, when you’re pulled up for doing something wrong (or, really, hundreds of things wrong) the thought “why are they picking on me?” isn’t just short-sighted, it’s deeply selfish.

Of course, bad research is selfish in general — you’re not just prioritising your own needs over even the vaguest concept of the public good, you’re going to screw everyone associated with you. If you do bad research, and this gets pointed out at great volume, then everyone who so much as knows where your office is gets it in the neck:

Your students end up going down with the ship, saddled with degrees they can’t use, and projects they can’t finish. They end up cynical and utterly uncertain of the quality of their ideas.

Your collaborators end up with big holes in their academic records for trusting you. After the dust clears, other researchers who know them think: “were they involved? how much did they know? did they never see how bad this was?”

Your colleagues feel terrible. Academic relationships are built on trust, and part of that trust is knowing that you’re not just acting in good faith, but with good practice. That’s a trust you’ve violated.

Your university has to spend a lot of time and money on convening investigations, and — if your mistakes go public — hiring corporate lawyers to do damage control (NOT. CHEAP.) They also have to wear a big brand hit, and become guilty by association. Did they enable your bad behaviour? Did they care more about your fame and grant money than holding you to the barest standards of accountability? They well might have. That happens.

The public get a raw deal, as usual — you are spending their money producing information they can’t use. Of course, the public do not have access to scientific papers anyway because academic publishing is a sullen trash fire, but that’s beside the point.

Other scientists in your field, of course, do not get off easy. As outlined above, they might have previously tried their own experiments based on the ideas you magicked into being. They wrack their brains reconciling your work. They might get lucky — perhaps your ideas pan out, and your bad papers actually have results that stand up. The point is, of course, that you’ve made them take a chance on that, rather than working from a base of information they can trust.

And science in general, of course, is polluted. More things to read, more cheap ideas thrown around like crumpled chip packets. Another ego demanding a place in public life.

For literally everyone else involved, I have more sympathy. You let them all down. No amount of messenger-shooting will change that.

And yet, we still hear: “You of the New Bad People, why are you picking on individual researchers? These problems sound like they’re endemic. Why are you analysing papers instead of criticising scientific culture as a whole?”


We’ve criticized scientific publication habits, and closed data culture, and bad behaviour in the abstract from arsehole to chapstick. Everyone’s lips are cyanosed from talking until their blood oxygen bottoms out, and their fingers are worn down to little stumpy single-joint Trump-digits from typing out objections. We’ve been criticizing a lot. We’ve all been talking at great volume about systematic problems in science since forever.

The problem with criticizing something in the abstract is that no-one ever thinks the criticism is about them. We can ALL stand up and pound our chests about how things should be, and then go home and participate in the same rubbish research practices at the center of what we decry. So many scientists are like student anarchists who yell SMASH THE STATE and then go home to their parents’ basement, to yell through the floor at their mum who hasn’t done their washing.

In case you hadn’t noticed, this makes change pretty slow.

But still, we see the frame outlined above. This is personal. This is destructive.

The same language, again.

Shameless little bullies.


And, thus, arises this framework where serious criticism — if it can’t be rejected out of hand or ignored — is portrayed as some kind of motivated, trivial series of annoyances. Gratuitous, somehow. We see this again and again in the above article, in the constant minimisation of errors:

In March, he told the Chronicle of Higher Education that field studies should be taken with a grain of salt, as opposed to research done in a controlled setting like a laboratory. “Science is messy in a lot of ways,” he said…
… seemed to dismiss the errors, explaining that most stemmed from “missing data, rounding errors, and [some numbers] being off by 1 or 2.”
All the numbers seem to be within one baby carrot of each other. Still, if we can track this data down, we’ll be able to see if this was due to recording, rounding, or measurement…
He’d also discovered that portions of some of his older papers had been republished elsewhere, he said, and had informed six journals of these duplications…
Numbers had been missing and statistical calculations off, Wansink wrote, but most importantly…
…he said he’d realized that some of the data entries for other papers were duplicates or “mismatched.” He suggested that they contact the journals to tell them they were aware of the problems and were going to reanalyze everything.
Wansink and his coauthors admitted to having incorrectly described the experiment’s design and number of students involved, used an “inadequate” data analysis method, and mislabeled the graph.

(Here’s a question: what if every single one of the criticisms made here is exactly right in every detail? Will ‘happily, I guess my entire body of research is only A BIT wrong’ still be the defense?)

And, finally, here’s why we find those errors: because they’re errors.

We find bad science because scientists are supposed to.

That’s why, after a whole day at work, when it’s half past 11, I fall asleep on the couch with a laptop on my chest. With my mouth open (I’m very elegant). Looking at the guts of a paper about birth order effects, or neonatal care, or violence in the media — none of which I have the slightest research interest in and no deep insight into. Because, after science as a modern commercial enterprise confused producing knowledge with producing research, the ability to be deeply critical of what we read suffered hugely, and I think that’s a monumental problem.

You know what was the most terrifying part of this whole process to me?

Recall the story about the carrots — if you aren’t familiar, the original blog posts are HERE and HERE — and the extremely problematic point about how, if you have a total number of something (total carrots), it should be equal to the sum of (a) the number eaten and (b) the number NOT eaten?

Utterly trivial mathematics a 5-year-old could understand.

However, this paper was downloaded from ResearchGate ELEVEN. HUNDRED. TIMES. And no-one noticed. No-one noticed that if you have 12 carrots, and you eat 5, the remainder can’t be 5.5. That (X+Y) should probably be the sum of X and Y.

Or, if they noticed, they didn’t say anything — which is even scarier.
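The carrot arithmetic above takes two lines of code to check. Here’s a minimal sketch of that sanity check — the figures (12 total, 5 eaten, 5.5 reportedly uneaten) are the illustrative numbers from the paragraph above, not the actual values from the paper:

```python
def totals_consistent(total, eaten, uneaten, tol=1e-9):
    """Return True if eaten + uneaten adds up to the reported total.

    A small tolerance allows for benign floating-point rounding,
    but not for numbers that simply don't add up.
    """
    return abs((eaten + uneaten) - total) <= tol

# 5 eaten + 7 uneaten = 12 total: fine.
print(totals_consistent(12, 5, 7))    # True

# 5 eaten + 5.5 uneaten = 12 total: arithmetic says no.
print(totals_consistent(12, 5, 5.5))  # False
```

That’s the entire level of sophistication required — and still no reader of the paper applied it.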

We are all in a state of stunning and continuous inattentional blindness about the details of the science we read, partly out of necessity, partly in an attempt to stay sane. There is too much to see, too much to know. Science is furiously competitive and desperately loud. Everyone is swimming upstream as fast as possible, just trying to stay funded, or alive, or relevant. Just trying to pay their rent and do something useful before they die.

And we are all too invested in whether or not published ideas are useful to our own publications to seriously invest the time in checking anyone’s adding up.

It’s unsustainable. So, call it whatever the hell you like, bullying or congenital meanness or personal attacks or terrorism. Fill your boots.

We’ll still be pushing back.

The black flag has been hoisted. It isn’t coming down.
