Why journals don’t retract papers
The other day a colleague sent me their letter to an editor regarding problems with a paper. The letter was so long that I didn’t read the whole thing. Worse, I hadn’t even read the paper the letter criticized, so if I wanted to provide useful comments I would also have to read the paper. At this point I realized why editors are reluctant to respond to errors in their journal and take action. It’s a lot of work.
When journals want to accept a paper they simply rely on the reviews freely provided by scientists, but when they have to investigate problems with a paper they don’t have a similar system. And while journals have a financial incentive to accept papers, since the author forks over a hefty APC, they have no incentive to retract one. In fact, retracting a paper will likely hurt their bottom line: it may damage the journal’s reputation, and the author whose paper was retracted is probably a “lost customer” since their own reputation suffers as well.
Not only are retractions a lot of work for the journal and bad for business, but the stakes are also high. If a journal mistakenly accepts a paper, probably no one will notice, but a journal’s decision to retract a paper will be scrutinized, and the journal doesn’t have the crutch of being able to say it was one of the “few” cases of faulty peer review. Really, it’s surprising that journals ever retract papers at all.
With this in mind it might be tempting to view journals which retract a lot of papers favorably, but the number of papers retracted by a journal is a complicated function, with the main component likely being:
All Papers * Percent Criticized * Percent Investigated
This equation assumes that every criticized paper needs to be retracted, and that every investigation leads to a retraction. But there are papers which are never criticized yet need to be retracted, papers which are criticized but don’t need to be retracted, and papers which need to be retracted and are investigated but aren’t retracted. Presumably, there aren’t any papers retracted that should not have been retracted.
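To make those caveats explicit, a fuller version of the expression might look something like the following (a rough sketch with my own labels, not measured quantities):

All Papers * Percent Flawed * Percent of Flawed Papers Criticized * Percent of Criticisms Investigated * Percent of Investigations Ending in Retraction

Each of those percentages is essentially unknown, which is the point: the raw retraction count tells you as much about how closely a journal’s papers are read and how willing the journal is to investigate as it does about how flawed its papers are.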
Prestigious journals are known to have high retraction rates, and this fact is used by many (myself included) to suggest that these journals publish sensational results which are likely too good to be true. But these journals also have a higher readership, and the additional eyes likely lead to additional papers being criticized, and as a result more papers being retracted. Or perhaps the prestigious journals simply have more resources to investigate problems.
Basically, we have no idea whether journals are retracting the appropriate number of papers. We have no idea how many problematic papers are published, how many readers write to the journals, or how many investigations are launched. But anecdotal evidence suggests journals often don’t respond to reader emails, don’t launch an investigation, or don’t retract a paper despite finding numerous errors.
It is interesting to think about how many authors feel comfortable with the idea that a single editor at a journal can decide their paper will be retracted. And it is interesting to contrast this with how authors blindly accept the current system of peer review, which isn’t much different. I guess the main difference is that if your paper gets rejected from a journal, no one knows, and you just submit it elsewhere, no big deal. But if your paper gets retracted you likely just made the front page of Retraction Watch. Or maybe it’s like being turned down at the bar vs a breakup: in one case you had a lot more invested and more to lose.
But if we don’t trust journals to retract our papers, why do we trust them to accept papers? The dirty secret is we don’t. No one gets a rejection letter from a journal and thinks the journal made the right decision. We complain that the editor didn’t understand the significance of our findings, or that the reviewers’ criticisms weren’t valid. When we send our paper to a journal we are convinced it is important enough to be published, no matter what the journal or reviewers say.
This belief in our own work, combined with it becoming easier and easier for scientists to self-publish, is likely a leading reason for the explosion of preprints. But if we self-publish do we self-retract?
The retraction mechanism at preprint servers illustrates how flawed the entire concept is. After you post a preprint you are not allowed to take it down, but the server can (and does) remove it at will. Servers such as bioRxiv scrub away all evidence that the preprint ever existed. So if I happened to download a preprint that was subsequently removed, I could potentially cite it in a paper, and then no one would be able to read the preprint I cited.
While this is an extreme example, whether or not retracted papers can be cited is a bit of a grey area. Retracted papers do accumulate citations even after being retracted, sometimes getting even more citations after the retraction. This is likely due to a combination of drive-by citations, authors not realizing the paper was retracted, or citing the paper despite (or because of) its retraction status.
Complicating matters, it isn’t always clear why a paper was retracted. In the extreme preprint example above, what should I do if I notice that a preprint I downloaded no longer exists? There’s no information about why it was taken down; for all I know the work is accurate and worth citing.
As you can see, retractions are a bit of a clusterfuck. They take a lot of effort on the part of concerned scientists; the journal has to perform a thorough investigation and is incentivized to look the other way; it’s not clear when papers get retracted or why; and after all of this people continue to cite them anyway. If you want to see how ineffectual retractions are, just look at the anti-vaxx movement.
Retractions do serve one useful purpose however. When a researcher accumulates multiple retractions this usually (correctly) indicates a pattern of misconduct. Even more so when the same paper gets retracted more than once.
However, not all retractions are due to misconduct, and some scientists actually retract their own work because they realized it wasn’t reproducible.
What we really need is better linking of comments to an article. Currently it’s possible for a paper to be discussed on PubPeer, PubMed Commons, Twitter, Facebook, and various blogs, yet if you were to go to the publisher’s website you wouldn’t see any of this and would think everything is peachy. A lot of comments on an article is not necessarily a bad thing however, so we would still need sites like Retraction Watch to highlight when an article appears deeply flawed and a researcher has a history of highly criticized articles. As more and more scientists transition to self-publishing, maybe journals will transition to organizing and moderating scientific discussions.