Peer Review is just “Security Theater”

Jordan Anaya
6 min read · Jan 29, 2017

Imagine standing in line for months to get on a plane.

There are other options for transportation, but you put up with it because you’re told there are a lot of bad passengers out there, and this is the only safe way to travel.

You finally get on the plane and start to relax.

As you look around you start to notice people with items they shouldn’t have.

Is that an entire tube of toothpaste?

How did that water bottle get past security?

Wait, is that a pocket knife!?!

OMG, THERE ARE MOTHERFUCKING SNAKES ON THIS MOTHERFUCKING PLANE!!!!!!!!

What would you prefer?

A long line to get on a plane that is guaranteed to be safe, but that blows up as you are relaxing in your seat.

Or a short line to get on a plane that comes with a warning that it might blow up, so you should look out for any suspicious activity.

If it isn’t yet clear, traditional peer review and publishing are the former, while post-publication peer review and preprints are the latter.

I’ve written before about how useless pre-publication peer review is, but given my recent publication and concerns about including non-peer-reviewed work in grant applications, it seems like a good time to revisit the topic.

Perhaps Prachee Avasthi said it best:

Seriously, has anyone ever been part of a journal club where they didn’t talk shit about the paper?

Peer review is often performed with minimal effort by researchers with limited expertise. Most manuscripts barely change between submission and acceptance, and even if a manuscript gets rejected, it will just pass peer review somewhere else.

Despite this, there are opinions like this:

So a manuscript is pseudoscience until a few random people briefly skim it?

I’m not saying preprints are any better or worse than peer-reviewed work (although right now they probably are better on average); I’m saying we should be skeptical of all work until we have a reason to trust it. And no, peer review is not a reason to trust it.

Are you a nonbeliever? Then welcome to the sermon.

My squad and I recently took a look at four peer-reviewed papers, and found over 150!!! inconsistencies. And that number is likely a huge underestimate of the problems in the papers, because we were very conservative about which statistics we checked, and we couldn’t check all the statistics without access to the data set. And we didn’t even touch on the countless methodological problems.

And some errors were BLATANT. For example, in “Eating Heavily: Men Eat More in the Company of Women” they list the degrees of freedom for a between-subjects test as “[1,115]”. There are only 105 participants in this study. How can your degrees of freedom be larger than your total sample size?
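
This isn’t a subtle catch; it’s one line of arithmetic. For a between-subjects F-test the degrees of freedom are [k − 1, N − k] for k groups and N participants, so reported df of [1,115] imply at least 117 participants. Here’s a minimal sketch of the check in Python (the function name is my own, not from our actual code):

```python
def implied_min_n(df_between, df_within):
    """For a between-subjects F-test, df = (k - 1, N - k) for k groups
    and N participants, so the smallest N consistent with the reported
    degrees of freedom is df_between + df_within + 1."""
    return df_between + df_within + 1

reported_n = 105               # participants reported in the paper
min_n = implied_min_n(1, 115)  # the reported df: F[1,115] implies N >= 117
if min_n > reported_n:
    print(f"Impossible: df imply N >= {min_n}, but the paper reports N = {reported_n}")
```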

In “Low prices and high regret: how pricing influences regret at all-you-can-eat buffets”, Tables 2 and 3 are supposed to contain the same data, but the numbers change between the tables, and in Table 2 the sample sizes add up to 89 instead of the stated total of 95.

In addition to these obvious problems, there are a myriad of mathematical impossibilities scattered throughout the papers.
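
Most of these impossibilities come down to granularity. The mean of N integer responses can only take values of the form (whole number)/N, so many reported means simply cannot exist for the reported sample sizes. Here’s a rough sketch of that check, in the spirit of the GRIM test (real implementations handle rounding conventions much more carefully, and the example numbers below are made up for illustration):

```python
def grim_consistent(reported_mean, n, decimals=2):
    """GRIM-style check: the mean of n integer responses must equal
    some whole-number total divided by n. Try the totals closest to
    the reported mean and see if any reproduces it after rounding."""
    target = round(reported_mean, decimals)
    nearest_total = round(reported_mean * n)
    return any(
        round(total / n, decimals) == target
        for total in (nearest_total - 1, nearest_total, nearest_total + 1)
    )

print(grim_consistent(2.70, 10))  # True: 27/10 = 2.70
print(grim_consistent(2.74, 10))  # False: nothing between 2.70 and 2.80 is possible
```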

There were so many problems I thought there was a bug in MY software.

So I checked the numbers manually. Yep, all the numbers really were wrong.

It’s really quite amazing; it was like watching a train wreck that wouldn’t end. My collaborator exclaimed that the papers were “lighting up like a Christmas tree” with errors, and I learned a few new acronyms (JMFC, FFS).

How did this get past peer review?

Well, BMC Nutrition has open peer review, so let’s see what the reviews look like.

Delicious, absolutely delicious. Brief and topped with some great bits.

Wyn Morgan was directly asked “Are the data sound?”, and responded “Yes”.

Mitesh Patel wrote: “Statistical review: No, the manuscript does not need to be seen by a statistician.”

I disagree.

It’s easy to blame the reviewers, but if they had rejected the paper it would have just been shoved down some other journal’s throat, and another’s, and another’s, until it finally wasn’t vomited back up.

But if we can’t blame the reviewers can we blame the journals?

An argument could be made that journals should do more careful checks of manuscripts, but many problems with manuscripts are not easily detectable.

The scientific method has somehow morphed into:

1. Have a general idea of what you want to investigate.
2. Generate data.
3. See if your data matches your idea.
4. If not, don’t give up; that’s what losers do. Generate more data, analyze the data in different ways, alter your hypotheses. Do not go gentle into that good night.

Even the most carefully performed peer review cannot deal with these problems. We need researchers to be more open with their data and methods, more honest when mistakes are made. I would trust a preprint that provides data and code way more than a peer-reviewed paper that provides neither.

Technically my recent preprint hasn’t undergone peer review. But in reality it has undergone more stringent peer review than any peer-reviewed paper I know. In fact, I would bet my life it was more carefully checked than the papers it critiques.

My collaborators (my peers) scrutinized every method, thinking of all the possible ways we might be misinterpreting the data.

We wrote code in TWO different languages and compared the results.

We consulted third-party applications and generated test data sets to check our methods.
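
To give a toy illustration of the idea (this is not our actual test suite): compute the same statistics two independent ways and refuse to proceed unless the answers agree, the same way we demanded that our implementations in two languages agree.

```python
import statistics

# Made-up integer responses standing in for a generated test data set.
data = [4, 5, 5, 6, 3, 7, 2, 5, 4, 6]

# Mean computed by hand versus by the standard library.
mean_manual = sum(data) / len(data)
mean_library = statistics.mean(data)
assert abs(mean_manual - mean_library) < 1e-12

# Sample variance (n - 1 denominator) computed by hand versus the library.
var_manual = sum((x - mean_manual) ** 2 for x in data) / (len(data) - 1)
var_library = statistics.variance(data)
assert abs(var_manual - var_library) < 1e-12
```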

A famous statistician, Andrew Gelman, blogged about our preprint, and confirmed some of the results.

But until a couple of people briefly look at it, I guess it’s just alternative facts.

Peer review is exactly what Bruce Schneier calls “Security Theater”: a security measure that provides a feeling of safety but doesn’t actually do anything. It’s like how Trump’s wall will give people the peace of mind that something was finally done to curb illegal immigration, but in reality won’t do much.

Peer review can also be viewed as an example of Goodhart’s law:

“When a measure becomes a target, it ceases to be a good measure.”

My former PI once told me to ignore a journal’s abstract length limit to fit in more information that would interest the editor and get the paper sent out for review. He also told me to exaggerate some of the statements in the paper to get the editor interested, saying we would tone them down in a later version.

So if you are relying on pre-publication peer review to decide what to believe, congratulations, you played yourself.

P.S. Thank you to Nicholas Brown for pointing out that my airplane analogy was basically “Security Theater”.
