On Where they Publish

Why you shouldn’t judge researchers by Journal Impact Factors

Sanghyun Baek
Pluto Labs
10 min read · Jan 15, 2019


[Pluto Series] #0 — Academia, Structurally Fxxked Up
[Pluto Series] #1 — Research, the Knowledge Creating Industry
[Pluto Series] #2 — Academia, Publishing, and Scholarly Communication
[Pluto Series] #3 — Publish, but really Perish?
[Pluto Series] #4 — Publish or Perish, and Lost in Vain
[Pluto Series] #5 — On Where they Publish
[Pluto Series] #6 — On Number of Publications
[Pluto Series] #7 — On Citation Fundamentals
[Pluto Series] #8 — On Citing Practices
[Pluto Series] #9 — On Tracking Citations
[Pluto Series] #10 — On Peer Reviews
[Pluto Series] #11 — Ending the Series

In the previous post, we discussed what constitutes a bad incentive structure in academia. More, original, and robust knowledge is the general objective of academia as a whole. If an incentive induces undesired consequences beyond these positive values, or fails to capture those values in appropriate ways, it can be called a bad incentive.

Journal, source: Plush Design Studio, Unsplash

This post covers the first such instance: evaluating researchers by where they publish. It is questionable what this practice is actually trying to assess, and whatever it assesses, the complex values of knowledge cannot be measured by such simplistic means. The structure that sustains this practice has also handed excessive power to a few stakeholders, leading to a lack of transparency, unnecessary policies, and even unethical misconduct.

Why where?

When describing the incentive structure of academia in a previous post, we noted that researchers’ productivity is often evaluated by their bibliographic records, i.e. the list of papers they have published in academic journals, in combination with “where they publish”, using Journal Impact Factors as a proxy for this “where”. This “where” is, of course, shorthand for “which peer-reviewed journal”, and more often than not it specifically means journals included in major indexes such as the Web of Science Core Collection (WoSCC) by Clarivate Analytics or SCOPUS by Elsevier.

On the surface, choosing where to publish may seem incredibly important. Depending on where you publish your research findings, the extent to which your publication gets disseminated can differ dramatically: choose a proper journal and your work may reach hundreds or thousands of your peers, while a poor choice may mean no one in your field ever reads it. That’s what you’ll find when you google “where to publish” (along with plenty of those “library guides” telling you to check whether the journals are indexed in WoS or SCOPUS).

Another significant aspect to consider when choosing “where to publish” is the journal’s editorial board. The most distinctive function of academic journals is peer review, the scientific process of scrutiny by field experts. This process is managed by editorial boards and teams: they pre-screen submitted manuscripts, choose the peer reviewers, and make the final decision to accept, reject, or request revision of a manuscript. Since different journals have different editors, and thus different review practices, it is probably very important to choose a decent journal edited by authoritative experts in your field who are skilled enough to properly scrutinize your work.

Why NOT Where?

While it clearly seems important to choose the right place to publish, that still does NOT mean research should be evaluated by where it is published. First of all, it is unclear what kind of value “where” captures. It would arguably be great if it captured the extent of dissemination described above. But measuring that is highly challenging, both contextually and technically. Should we look at the absolute number of unique people a journal reaches? Unique sessions and accesses from universities and institutes? Does such a journal-level number even represent the extent of dissemination of an individual publication? Are journals and information services capable of actually tracking these statistics? The answer is that, absurdly, whether they are or not, the system never actually cares about this aspect.
(Altmetrics, usage statistics from online information services, pertain to this aspect. Still, they are indicators arising directly from individual publications, not from where those publications appear.)

Who Decides Where?

The value captured by “where” can perhaps be inferred from the inclusion criteria and evaluation processes of the indexes (WoSCC criteria and process, SCOPUS process and board). Both share requirements such as ISSN registration and English (or Roman-script) bibliographic metadata (abstracts, references, etc.), which matter a great deal for keeping the whole database clean and manageable. Another necessary condition is that the contents of the journals are peer-reviewed. This, of course, is positive. The problem is how it is demonstrated. Since this is a NECESSARY condition, any journal included in these indexes must be peer-reviewed. But how do we know, when they never prove that they are? Some journals publish peer review reports alongside the original publications. For the rest of the journals in the indexes, do we simply trust the index providers to have honestly checked the peer review practices and the actual evidence themselves?

The same goes for the rest of the inclusion process. Beyond the necessary conditions, there are many more criteria the indexes say they look at. To name a few: “Academic contribution to the field”, “Quality of and conformity to the stated aims and scope of the journal”, “Editor standing”, “novel point of view”, and “Target audience”. These, too, are largely common to both indexes, though worded somewhat differently. The criteria all look good and fancy, and again, the question is how they are actually applied. The most I can find in their explanations is that journals are evaluated by the indexes’ editorial teams or boards (not to be confused with journal editors). I end up finding NO information about an editorial board for WoSCC, and SCOPUS’s board page lists 17 seemingly arbitrary academics with their affiliations, each representing one meta-discipline (17 people in charge of “defining” the whole academic taxonomy, wow). In short, we are CLUELESS about who decides journal inclusion, why they should be entrusted with that work, or what they actually decided and on what evidence.

We are talking about an index of indexes. Papers are published in (and thus indexed as part of) journals edited by field experts. At this level we can relax somewhat, since many journal webpages provide information about their editorial boards.* These journals are then meta-indexed in bibliographic databases like WoSCC or SCOPUS, curated by people we know nothing about, where we are completely lost as to who they are and what they did. Considering that these indexes serve as evaluation criteria in so many decisions across academia, we are letting commercial businesses with obscure operations decide where academics should submit their papers in order to be regarded as producing “scientific knowledge”. I would strongly question whether this is what science is supposed to be.
(*See, for example, that of The Lancet. I personally consider this much information about editors to be a minimal requirement.)

Capturing Impact

Maybe the answer is best represented by the number most commonly used as a proxy for this “where”: the Journal Impact Factor (JIF). As its name suggests, the JIF appears to assess the impact of journals. Setting aside the question of what this impact means, journal impact seems to be closely tied to citations, since JIFs are calculated from the citation graph. (A journal’s JIF is the average number of citations that its publications from the two preceding years received this year from all publications in the index. See Wikipedia for a fuller description.)
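To make that definition concrete, here is a minimal sketch of the two-year calculation in Python. The function name and the toy numbers are invented for illustration; only the formula itself follows the standard definition.

def journal_impact_factor(citations_this_year, items_published, year):
    """Two-year JIF: citations received this year by the items a journal
    published in the two preceding years, divided by the number of citable
    items it published in those two years."""
    window = (year - 1, year - 2)
    cites = sum(citations_this_year[y] for y in window)
    items = sum(items_published[y] for y in window)
    return cites / items

# Toy example: 220 items published in 2017-2018 drew 900 citations in 2019,
# giving a 2019 JIF of roughly 4.1.
print(journal_impact_factor({2017: 400, 2018: 500}, {2017: 100, 2018: 120}, 2019))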

Can’t kill multiple birds with a plain stone

It is indeed confusing what this impact means. One thing we can tell clearly: it means average citations. Whether we call it impact, value, advancement, contribution, or anything else, a simplistic single metric cannot (and should not) represent the complex and diverse aspects of knowledge. As the indexes’ own inclusion processes illustrate, there are contribution to the field, quality of and conformity to the stated aims and scope, standing in the field, originality of ideas, target audience, ethics, transparency… to name just a few. No one will ever find a way to capture all these values with a single metric.

Using a single, simplistic metric causes serious harm to academia: it leads to a power law*. Named the “Matthew effect” by Robert Merton, this power law is well captured by the expression “the rich get richer and the poor get poorer”. Such a power law is undesirable in academia for two major reasons: i) emerging researchers early in their careers are given fewer opportunities, and ii) authority accrues to a few outliers rather than to the community, which runs against the “organized skepticism” of modern science. Science advances through scrutiny by peers, not through the authority of a few.
(*Evidence of this power law includes Price’s, Lotka’s, Zipf’s, and Bradford’s laws.)
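The mechanism is easy to see in a toy preferential-attachment simulation (every number below is arbitrary, chosen only to illustrate the dynamic): each new citation goes to a paper with probability proportional to the citations it already has, so a small early lead compounds into a large share of the total.

import random

random.seed(0)
papers = [0] * 100                     # 100 papers, all starting with zero citations
for _ in range(5000):                  # hand out 5,000 citations one by one
    weights = [c + 1 for c in papers]  # probability proportional to current count (+1)
    winner = random.choices(range(len(papers)), weights=weights)[0]
    papers[winner] += 1

papers.sort(reverse=True)
top_decile_share = sum(papers[:10]) / sum(papers)
print(f"top 10% of papers hold {top_decile_share:.0%} of all citations")

Even though every paper starts out identical, the feedback loop alone concentrates citations in a handful of them.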

“Where” Never Supersedes “What”

Beyond the concerns about using a single metric, there are criticisms of the JIF itself. Above all, because it is calculated from citations, it inherits most of the criticisms of citations: citations can be gamed, actual counts may vary significantly depending on the citation index used, and, most importantly, citation counts do not represent the value of a publication. Neither do JIFs. More on this point in a later post discussing citations.

It has also been noted that JIFs should not be used to evaluate individual publications. An average is commonly read as a predictive measure, but it has been reported that a journal’s JIF cannot predict the citations of a publication in that journal. This is mainly due to the skewed distribution of citations: JIFs are heavily affected by outliers. Take a journal with a JIF of 40, pick publications from it at random to check their citation counts, and you will most probably see numbers far smaller than 40. Describing this caveat, Nature Materials suggested in an editorial that journals should disclose their citation distributions.
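A quick simulation makes the caveat tangible. The log-normal parameters below are made up; the point is only that when a distribution is heavy-tailed, the journal-level average says very little about a typical paper.

import random

random.seed(1)
# made-up, heavy-tailed "citation counts": a few outliers dominate the mean
citations = [int(random.lognormvariate(2.5, 1.5)) for _ in range(2000)]

mean = sum(citations) / len(citations)
median = sorted(citations)[len(citations) // 2]
below_mean = sum(c < mean for c in citations) / len(citations)

print(f"journal-level average (JIF-like): {mean:.1f}")
print(f"median paper:                     {median}")
print(f"share of papers cited less than the average: {below_mean:.0%}")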

Most importantly, the value of a publication is Universal (the U in CUDOS). It shouldn’t change depending on where it is placed. You ran experiment A, collected dataset B, analyzed it with method C, and got result D. Whether you put it on your Facebook feed, tweet it, blog it, post it to arXiv, or publish it in a journal with an exorbitantly high JIF, its value shouldn’t change.

Power Overwhelming

Because researchers are evaluated by where they publish, some stakeholders connected to this “where” come to wield great influence over academia. As described above, virtually the working definition of science (or “where we need to publish to be deemed science”) is controlled by a few index providers.

Publishers and journals gain authoritative power as well. Since most journals make binary decisions on submitted manuscripts (i.e. accept or reject), researchers are subject to those decisions. Many publishers and journals do not “formally support archiving”** (i.e. preprints and repositories). While there are alternative venues that do support archiving, researchers still tend to avoid preprints if a high-impact journal in their field doesn’t support them. That is the only thing preventing researchers from archiving before publication, and, seen from the other direction, that is exactly why publishers and journals can afford not to support it.
(**According to SHERPA/RoMEO, 487 out of 2,566 publishers (19%) do not “formally support archiving”. Extending this statistic to the journal level would surely increase the share.)

Publication Overflow

Publishers with authoritative power may drive more publications rather than better, more robust knowledge. Since the business model of most publishers is harvesting profits through subscriptions and APCs (article processing charges paid by authors to publish Open Access), it is obvious that they favor more publications. At the level of an individual journal this may not hold, since more publications can decrease the JIF (the publication count sits in the denominator). But at the publisher level this can be coordinated: increase the number of publications while generating more citations across the publisher’s own journals. Indeed, the total number of publications has increased drastically for decades, but it is questionable whether this has also led to more and better knowledge.
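A back-of-the-envelope sketch of that arithmetic (all numbers invented): doubling a journal’s output dilutes its JIF unless the extra papers also bring in extra citations, for example by pointing their reference lists at the publisher’s own journals.

# baseline: 220 citable items in the two-year window, 900 citations to them this year
print(900 / 220)   # JIF ≈ 4.1

# output doubles but citations stay flat: the average is diluted
print(900 / 440)   # JIF ≈ 2.0

# if the extra papers also generate citations across the publisher's portfolio,
# the average can be propped up even as total volume grows
print(1800 / 440)  # JIF back to ≈ 4.1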

Moral Hazards

Another caveat is that publishers and journals may steer the overall direction of research communities and trends. As discussed in a previous post, they are prone to setting policies that increase the JIF, such as accepting eye-catching, trendy, or even wrong claims while rejecting incremental, negative, or replication studies.

Even worse is misconduct aimed at increasing the metric. Journals, or rather their editors, may unethically pressure submitting authors to cite papers from that journal. Editors and authors across multiple journals may form citation cartels, citing each other’s papers to deliberately inflate each other’s impact. These malpractices are not just bad because they are unethical; they also further erode the reliability of the JIF as a metric. In doing so, they contaminate the literature, generating less relevant, and sometimes even meaningless, citations across publications. At the end of the day, they deteriorate trust in academia.

To Improve, not to Deter

There will always be debatable aspects of where research findings should be shared. As a core part of scholarly communication, disseminating scholarly content will always involve transferring valuable research information from one player to another. Furthermore, evaluation by JIF (or by where researchers publish) has one advantage over citation counts: citations to an individual publication take a seriously long time to accumulate, while a journal-level proxy can be used immediately. Nevertheless, the current practice of evaluating researchers and their work based on where they publish should be improved.

Since a simplistic metric raises questions about what it captures, more comprehensive means of capturing the diverse aspects of knowledge should be investigated. It should always be kept in mind that a piece of knowledge, and its value, does not change based on where it is published, even as we remind ourselves that it remains important to ask to whom it will be communicated.

Malpractice can arise when a few players hold too much authoritative power and have interests far removed from generating genuine knowledge. Transparency, in particular, is imperative so that the opaque operations of a small number of profit-oriented players do not decide what will be regarded as research.

Pluto Network
Homepage / Github / Facebook / Twitter / Telegram / Medium
Scinapse: Academic search engine
Email: team@pluto.network
