On Number of Publications

more publications ≠ better performance

Sanghyun Baek
Pluto Labs
7 min read · Jan 21, 2019


[Pluto Series] #0 — Academia, Structurally Fxxked Up
[Pluto Series] #1 — Research, the Knowledge Creating Industry
[Pluto Series] #2 — Academia, Publishing, and Scholarly Communication
[Pluto Series] #3 — Publish, but really Perish?
[Pluto Series] #4 — Publish or Perish, and Lost in Vain
[Pluto Series] #5 — On Where they Publish
[Pluto Series] #6 — On Number of Publications
[Pluto Series] #7 — On Citation Fundamentals
[Pluto Series] #8 — On Citing Practices
[Pluto Series] #9 — On Tracking Citations
[Pluto Series] #10 — On Peer Reviews
[Pluto Series] #11 — Ending the Series

Last week’s post described how academics and the wider ecosystem are hugely (and negatively) affected by evaluation criteria based on “where” their findings are published. Evaluating researchers by where they publish not only discourages the creation of genuine, robust knowledge, but also fosters a bad atmosphere among the various players. It is hard to pin down what venue-based metrics are supposed to represent, and research findings are valuable in themselves, regardless of where they appear. Worst of all, many players who do not actually create knowledge gain exorbitant power over the system, and some players, even academics, are tempted into wrongdoing to increase their influence.

This post will discuss, in a similar way, why researchers shouldn’t be evaluated on how many papers they publish (i.e. why it’s a bad incentive). The number of publications does not represent the values the academic world actually cares about. Because the criterion is purely quantitative, players can always “game” the metric, and the practice leads to further unwanted consequences, the worst of which is making academics secretive about their findings.

piles of books, source: Darwin Vegher, Unsplash

More Publications, More Obscura

Just as the phrase “Publish or Perish” literally implies, researchers are under increasing pressure to generate more publications, since their productivity is evaluated by them. Pressure to perform better seems quite normal and sound for anyone in society. But the question here is, “pressure for what?” It is highly questionable what kind of performance this measure actually traces. As this imperfect measure has become the de facto measure of productivity, many researchers, trying not to perish, obsess over publishing more papers rather than pursuing genuine knowledge.

First of all, the smallest unit of this measure, a publication, may not represent a unit of knowledge. This is well captured by the expression “Least Publishable Unit”. As the term itself suggests, there is no agreed size or standard for what a unit publication should be. Even if there were a consensus on the Least Publishable Unit, it would still be unclear what this unit is capturing. Setting aside the challenge of defining a unit of knowledge, the value of knowledge is not simple enough to be captured by a single metric. For a straightforward example, any mathematician would question whether Perelman’s three-paper solution to the Poincaré Conjecture was hundreds of times less productive than the thousand-plus publications of a dubious academic.

What’s worse, publication rates differ across disciplines. Even within a single discipline, different subjects may show different publication rates. While it could be a challenge for a researcher in high energy physics to publish a single paper in a year, it could be relatively easy for a computer scientist to publish multiple conference proceedings papers in the same period.

Setting all these limitations aside, what should count as a publication is also highly controversial. Just as with “where they publish”, the number of publications depends heavily on the index used for the analysis. Web of Science and Scopus are again the usual choices, so the same questions raised in “On Where they Publish” apply here as well. Beyond that, the controversy over what should count remains regardless of the index. Should we only count original research reports? What about clinical trials, replication studies, review articles, and so forth? Should they all count as the same unit of contribution? And what about authorship? Should we only count first-author papers? Do we generously credit second and corresponding authors? Or simply split the publication count by the number of authors (a sketch of these counting schemes follows below)? It is quite distressing to raise so many questions without clear answers, but using a simplistic measure without asking them is even worse.
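
To make the authorship question concrete, here is a minimal sketch (with hypothetical author names and a made-up publication record, not data from any index) contrasting three common counting schemes: whole counting, first-author-only counting, and fractional counting. The same record yields a different “productivity” under each scheme, which is exactly the ambiguity described above.

```python
# Hypothetical publication records: each entry is a byline in author order.
papers = [
    ["Kim", "Lee", "Park"],   # three-author paper
    ["Lee", "Kim"],           # two-author paper
    ["Park"],                 # single-author paper
]

def whole_count(author, papers):
    """Every authorship counts as one full publication."""
    return sum(1 for byline in papers if author in byline)

def first_author_count(author, papers):
    """Only papers where the researcher is first author count."""
    return sum(1 for byline in papers if byline[0] == author)

def fractional_count(author, papers):
    """Each paper is split evenly among its co-authors."""
    return sum(1 / len(byline) for byline in papers if author in byline)

for author in ("Kim", "Lee", "Park"):
    print(f"{author}: whole={whole_count(author, papers)}, "
          f"first-author={first_author_count(author, papers)}, "
          f"fractional={fractional_count(author, papers):.2f}")
```

On this toy record, the ranking of the three researchers changes depending on the scheme, even though the underlying publication list is identical.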

More Publications, More Obsolescence

Evaluation based on the number of publications has led to undesired consequences as well. As one might naturally suspect, the total number of publications has exploded. Most of the literature reports publication growth based on analyses of the Web of Science database, but if these analyses were extended beyond the Web of Science, the numbers would be expected to grow even faster than those trends suggest.

More publications might not seem so bad if we assume that more publications mean more knowledge. The problem is that low-quality, duplicate, and even fraudulent findings are increasing. This is in some sense in line with the predatory journal discourse, where the Open Access and APC business models are often blamed for the rise of predatory journals. But as we all know, it takes two to tango. Low-quality and fraudulent publications increased not just because they make a great business, but also because, on the other side, researchers needed more publications, be they groundbreaking, incremental, or fake, since that is what they are evaluated on.

Evaluating researchers by the number of publications has also led to research misconduct. Failing to perform due diligence when selecting journals for submission (i.e. submitting to predatory journals) is one such example, intended or not. A better, more comprehensive, and contextual evaluation method would encourage authors to choose a journal where their findings are most likely to undergo meticulous scrutiny by genuine peers who understand their field.

More serious misconduct is committed to increase one’s publication count. Salami slicing is to some extent related to the Least Publishable Unit discussed above: some academics split their research findings into smaller fractions to secure multiple publications from a single research project. They may even be tempted to distort their findings to ensure publication, through practices such as p-hacking and HARKing. Other serious misconduct reported in the pursuit of publications includes impersonating reviewers (i.e. author-recommended reviewers with fake email addresses) and misrepresentation of authorship.
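
For readers unfamiliar with p-hacking, the sketch below (a simplified simulation, not taken from any cited study) shows why it works: under a true null hypothesis a p-value is uniformly distributed, so a researcher who tries many analyses and reports only the best one will find “significant” results far more often than the nominal 5% rate.

```python
import random

ALPHA = 0.05          # conventional significance threshold
PROJECTS = 100_000    # simulated studies with no real effect at all

for k in (1, 5, 20):  # number of analyses tried per study
    hits = 0
    for _ in range(PROJECTS):
        # Under the null hypothesis, each analysis yields a Uniform(0, 1)
        # p-value; the "p-hacker" keeps only the smallest one.
        best_p = min(random.random() for _ in range(k))
        if best_p < ALPHA:
            hits += 1
    print(f"{k:>2} analyses tried -> {hits / PROJECTS:.1%} look significant "
          f"(theory: {1 - (1 - ALPHA) ** k:.1%})")
```

With 20 analyses per study, roughly 64% of effect-free studies produce a “publishable” result, which is exactly the incentive publication counting rewards.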

Above all, the most significant, and worst, consequence of evaluating academics by publication counts is that they become secretive about their research findings. This again follows from the logic of “publish or perish”: any action that might jeopardize a potential publication is a no-go for academics. Under the umbrella term “Open Science”, there have been many suggestions on how research findings could be communicated. Open Data is about making datasets available just as publications are, and Open Notebook Science is about sharing lab notes. Even without specific coined terms, there are many more valuable components of research that would be better communicated than kept in private cabinets*: research ideas, experimental setups, protocols and tips, limitations and the follow-up studies needed to overcome them, and so on. Yet very little of this is actually shared. (*or hard drives, or cloud storage)

Because researchers are evaluated by the number of their publications, they are incentivized to report only the “least publishable information” in each one. Sometimes essential information is even omitted from the publication. Considering the collaborative, circulating nature of knowledge creation, we should all agree it is best to incentivize researchers to publish the maximal information possible. Evaluation based on a simple count of publications works against this.

The points above are bad enough in themselves: an explosion of publications (especially low-quality and fraudulent ones), fabricated research findings, and misconduct in the publishing process. They are also problematic because knowledge creation is a revolving system: bad inputs may lead to even worse consequences, and fixing them is as important as creating new knowledge.

No JIF, No Pub. Count, now what?

Now that we have discussed the two most used evaluation criteria in academia, namely journal impact and publication counts, the next post will be about citations. Unlike the last two topics, citations are more complex to deal with, in that they are a clear form of attribution: “publications explicitly name another publication from the past as a source of information.” That is probably why most debates in recent decades have formed around the assumption that citation is the ultimate evaluation.

Stay tuned to hear what Pluto has to say about this, and CLAP & SHARE the story with your peers, friends, and families to invite more discussion.

[Pluto Series] #0 — Academia, Structurally Fxxked Up
[Pluto Series] #1 — Research, the Knowledge Creating Industry
[Pluto Series] #2 — Academia, Publishing, and Scholarly Communication
[Pluto Series] #3 — Publish, but really Perish?
[Pluto Series] #4 — Publish or Perish, and Lost in Vain
[Pluto Series] #5 — On Where they Publish
[Pluto Series] #6 — On Number of Publications
[Pluto Series] #7 — On Citation Fundamentals
[Pluto Series] #8 — On Citing Practices
[Pluto Series] #9 — On Tracking Citations
[Pluto Series] #10 — On Peer Reviews
[Pluto Series] #11 — Ending the Series

Pluto Network
Homepage / Github / Facebook / Twitter / Telegram / Medium
Scinapse: Academic search engine
Email: team@pluto.network
