F1000Research gives open science a bad name

Jordan Anaya
9 min read · Jul 23, 2016


This is a difficult post for me to write since I believe in post-publication peer review and F1000Research is seen as a pioneer in using this model. However, when I envisioned post-publication peer review I never imagined getting rid of editors, allowing the authors to invite their friends to review their articles, relinquishing the responsibilities of accepting or retracting articles, and then passing the articles off as if they underwent a thorough peer-review process. The Scholarly Kitchen has repeatedly expressed concerns about the publishing model of F1000Research, but since the authors there seem to be publishers who want to maintain the status quo (for example, they hate Sci-Hub while I think it’s the best thing to ever happen to science), I don’t pay them much mind. But this one time, they seem to have been correct.

What made me agree with publishers whom you can basically hear screaming “GET OFF MY LAWN” in every other sentence of their blog posts? I actually carefully read a couple articles at F1000Research and watched the peer-review process play out, and what I saw makes me feel like I’m taking crazy pills.

I already discussed this paper, which was excellently covered by Retraction Watch and Neuroskeptic, but like Donald Trump it’s so ridiculous that it needs to be talked about some more. Appearances can be deceiving, but in this case they’re not, and this article has amateur hour written all over it. It’s extremely short, has no figures, and has one supplemental file which contains only 75 words copied from a clinical trial registry. And it only has 12 references, one of which is the article itself.

So how was the article received by the “reviewers” who were invited by the authors? Luis Pinho-Costa describes the title as “elucidating and enticing”, the methods as “ingenious”, and claims the technique “could revolutionize scientific publishing”. Amy Price notes that the abstract “provides considerable detail in an elegant way” and the idea is “an original innovation for data security”. Is this a peer review or is this Paula Abdul gushing over her favorite American Idol contestant?

I’d like to remind the readers that this article was plagiarized from Benjamin Carlisle’s blog, so I guess the reviewers are effectively complimenting the blogger. And the fact that the article was plagiarized makes words like “original” lose just a little bit of weight. In fact, this method is already used in other fields, so the fact that the “reviewers” find this technique mind-blowing shows they don’t have any expertise in this area. I couldn’t help but snicker at one comment by Amy Price. She mentioned she is “sure” that the next step of the project will be to create an algorithm that simplifies the method for users. Umm, I’m pretty SURE these authors know nothing about programming or web development and there’s no chance in hell they could produce anything useful. But I guess if you’re “sure” I’ll take your word for it. When their application becomes available, let me know.

As bizarre as these glowing reviews were for such a simple paper, the reviewers’ responses to questions from Retraction Watch made me want to ask these people what they’re smoking, because I desperately want some. Charilaos Lygidakis seems to think it’s fine to take ideas from blog posts without any attribution and isn’t concerned by the article because he holds the authors “in very high regard”. Amy Price does not think the blog “would meet considerations for authorship” because it only “suggests a use for a concept” and “does not operationalize it”. She claims that the disputed technique “seems to be a common and undeveloped idea for clinical trial registrations”. Let me remind you that in her review of the plagiarized paper she referred to this idea as “an original innovation for data security”. So which is it, a common idea that was begging for someone to expertly put into practice or a mind-blowing idea that will revolutionize science? I guess it depends on whether the idea is posted on a blog or presented in a scientific publication.

Amy Price says that since “the blog contains no supplementary materials and yet the research paper does” it “puts to rest the length and similarity argument”. As I mentioned above, the one supplemental file was just 75 words copied from a clinical trial registry. And the file wasn’t even in the correct format. She says an “idea or a tool is not research”. I agree with that to an extent, because you can have the idea that someone should cure cancer, but the person who actually does it should get the credit. But I would argue that the authors of this article didn’t add anything to the blog post. Oh wait, I forgot, they had a supplemental file, my bad. And just as an idea isn’t research, I would argue that ctrl-c, ctrl-v is not research.

You can say what you want about these authors, but you’ve got to give them credit, they got some ride-or-die bitches to “review” their paper. And at F1000Research the inmates run the asylum. If two reviewers accept a paper it gets indexed by PubMed. An editor doesn’t need to read the comments to see if they made any sense. In fact, if you approve a manuscript you don’t even have to justify it. And just as the journal doesn’t decide which papers get accepted, it also doesn’t decide which papers get retracted. It has left it up to COPE to decide whether this paper will be retracted.

___________________________________________________________________

Unfortunately this wasn’t my only recent experience with an article at F1000Research. I also followed the peer review of the search.bioPreprint article since it mentioned PrePubMed in the article (I made PrePubMed). I will understand if readers view my criticism of this article as biased since I have a potential conflict of interest. However, I’d like to point out that I was not paid to make PrePubMed, I am not paid to maintain it, and in fact the PrePubMed servers cost me money. I also did not submit PrePubMed for publication, instead opting to write a blog post. I’ve made all the code for PrePubMed publicly available, and I would be very happy if a competent institution would take the responsibility of indexing preprints off my hands. Regrettably, search.bioPreprint does not replace the capability of PrePubMed and the librarians at the University of Pittsburgh don’t appear capable of this responsibility.

I’ll provide a quick overview of search.bioPreprint. It takes your search query and enters it into the search box of four different preprint servers for you. It then clusters the combined results using proprietary software. However, the software is only capable of returning up to 200 results, and the results are not sorted by date, making it impossible to find the most recent articles matching your query. Another issue is that each preprint server handles your query differently, so it is difficult to get consistent results from each server. And because you have to wait for the search engines of four different servers, queries are slow.
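For readers who want a concrete picture of what a meta-search like this involves, here is a minimal sketch of the fan-out pattern in Python. The endpoints, parameter names, and JSON response shape are hypothetical placeholders I made up for illustration, not the actual search.bioPreprint code (which relies on proprietary clustering software), but the structure shows why a combined query is only as fast as the slowest server and why the per-server results still have to be reconciled afterwards.

    # Minimal sketch of a fan-out meta-search over several preprint servers.
    # The URLs, the "q" parameter, and the JSON response shape are hypothetical;
    # they only stand in for the four real servers, each of which exposes a
    # different interface with different query semantics.
    import concurrent.futures
    import requests

    SERVERS = {
        "server_a": "https://example.org/a/search",
        "server_b": "https://example.org/b/search",
        "server_c": "https://example.org/c/search",
        "server_d": "https://example.org/d/search",
    }

    def search_one(name, url, query):
        # Same query string, but each server's search engine interprets it
        # in its own way, which is why the combined results are inconsistent.
        resp = requests.get(url, params={"q": query}, timeout=30)
        resp.raise_for_status()
        return name, resp.json()

    def meta_search(query):
        # Fan out in parallel; total latency is still bounded by the slowest
        # server, which is why meta-search queries feel slow.
        results = {}
        with concurrent.futures.ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
            futures = [pool.submit(search_one, n, u, query) for n, u in SERVERS.items()]
            for fut in concurrent.futures.as_completed(futures):
                name, hits = fut.result()
                results[name] = hits
        return results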

Google Scholar is a well-known method for searching for preprints, and in fact it can be limited to your preferred preprint servers with the advanced search option, as described by Jessica Polka in this post. Google Scholar is much faster than search.bioPreprint, allows full-text searching, and provides the option to sort by date. As a result, Google Scholar is a much more effective method for finding preprints than search.bioPreprint, and it has existed since 2004. Why someone would use search.bioPreprint instead of Google Scholar is beyond me.
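For anyone who hasn’t tried it, the trick described in that post boils down to filling in the “Return articles published in” box of Google Scholar’s advanced search with the preprint server you care about, which (as far as I can tell) is equivalent to using the source: operator in the query box, along these lines; the exact source name Scholar expects is my assumption, so it’s safest to let the advanced search form fill it in for you:

    CRISPR gene drive source:bioRxiv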

Despite these facts, search.bioPreprint was written up as a preprint and sent to bioRxiv. Their rationale for developing the tool was that preprints “are not typically indexed and are only discoverable by directly searching the preprint server websites”. However, as I just mentioned, they can be found with Google Scholar, and they are indexed daily by PrePubMed, so there was no reason to make search.bioPreprint. And they claim they submitted the tool as a preprint to bioRxiv to “support the preprint movement”.

I guess their submission to bioRxiv didn’t “support the preprint movement” enough, because they subsequently sent their paper to F1000Research. The bioRxiv and F1000Research submissions are basically identical, except for one paragraph. The authors added an entire paragraph in the F1000Research submission discussing F1000Research! And in said paragraph they refer to F1000Research as a “unique publishing platform” with a “transparent peer review process”. Jeez guys, if you’re going to suck some dick, keep it in the back alleys, will you?

Luckily the reviewers for this article weren’t smoking whatever the plagiarized article’s “reviewers” were smoking, and actually made intelligible points. Despite this, the peer-review process was still eyebrow-raising. Cynthia Wolberger gave the article an unconditional “Approval” in part because search.bioPreprint allows for “full-text searching of all current preprint archives”. The problem? You can’t perform full-text searching with the PeerJ Preprints and arXiv search engines. And it’s not her fault either, because the authors emphasize in the paper that search.bioPreprint can perform full-text searching. But one thing about her review that I do blame her for is that she says she hopes the authors will add functionality to the site in the future, in particular sorting by date. In peer review, when you have things you want the authors to change, you don’t accept their paper and hope they get to it eventually. You tell them you will accept their paper if they make the changes, and then if they make the changes you accept the paper. At F1000Research it seems to be the case that the reviewers hope that the paper gets improved at some point, but they have no problem giving it a stamp of approval in the meantime. After all, they know the authors and trust they’ll do the right thing.

Prachee Avasthi wrote a more critical review and only gave an “Approved with Reservations”. She noted that the authors should mention Google Scholar in the paper, given that most people use Google Scholar to find preprints. And she noted that if they are going to refer to search.bioPreprint as a “one-stop shop” they should describe what makes it better than Google Scholar or PrePubMed. And, hilariously, she noted that search.bioPreprint can’t find its own article at F1000Research. That’s right, a “one-stop shop” that can’t find itself.

And did the authors satisfactorily respond to these concerns? Not exactly. Instead of delineating the differences between Google Scholar, PrePubMed, and search.bioPreprint, the authors said they’ll “leave it to others to determine the pros and cons of using search.bioPreprint”. Instead of explaining in the text how to properly use Google Scholar’s advanced search, they describe how adding “preprint” to your Google Scholar query doesn’t limit your search to preprints. Really? Instead of acknowledging how much better Google Scholar is, you tell people how to incorrectly search for preprints with Google Scholar? Way to “support the preprint movement”.

And did they add the ability to sort by date? No, but they said they were working on it. And apparently this was enough for Prachee Avasthi to give the paper an “Approved” and get this article indexed by PubMed. So just to recap, both reviewers emphasized how important it is for results to be sortable by date, and yet we have no idea whether that will ever be implemented. Cynthia Wolberger was under the impression that search.bioPreprint performs a full-text search, likely didn’t know how powerful Google Scholar is, and yet can’t go back and retract her approval. But hey, the process was transparent so it has to be good, right?

The cherry on top of it all is how the author started all of his responses: “We sincerely thank the reviewer for considering the manuscript as ‘Approved’…” Do you know when you thank people? When they give you something or do something for you. If you score well on a test you don’t thank the person who graded it…well, that is unless they graded it leniently and gave you a score you felt you didn’t deserve. And it’s clear that is what’s happening at F1000Research.
