Publish, but really Perish?

the incentives, evaluations, and their necessities in academia

Sanghyun Baek
Pluto Labs
9 min read · Dec 28, 2018


[Pluto Series] #0 — Academia, Structurally Fxxked Up
[Pluto Series] #1 — Research, the Knowledge Creating Industry
[Pluto Series] #2 — Academia, Publishing, and Scholarly Communication
[Pluto Series] #3 — Publish, but really Perish?
[Pluto Series] #4 — Publish or Perish, and Lost in Vain
[Pluto Series] #5 — On Where they Publish
[Pluto Series] #6 — On Number of Publications
[Pluto Series] #7 — On Citation Fundamentals
[Pluto Series] #8 — On Citing Practices
[Pluto Series] #9 — On Tracking Citations
[Pluto Series] #10 — On Peer Reviews
[Pluto Series] #11 — Ending the Series

Peer review process, source: Nick Kim

In the previous posts, we have explored Research, the knowledge-creating activity, along with its modern changes and importance; Academia, the dynamic set of social stakeholders in knowledge creation; and Scholarly Communication, the system for communicating research-related information, together with its most prevalent form, academic publishing.

This post deals with the last “background” part of the series: the incentives, evaluations, and their necessities in the research world.

So what’s Not-Perishing?

In academia, incentives are provided mainly in the form of career opportunities. Specifically, for academics, they come as job positions, promotions, job security such as tenure, or research grants (funding). The common thread running through these diverse forms of incentives is a sustainable academic career. That is why the stories are often told around the typical “tenure track”, as tenure is believed to be directly associated with academic freedom.

Nobel Prize award ceremony, source: Nobel Media, Alexander Mahmoud

Another significant factor that drives academics is the intangible asset of academic reputation. Since academia highly values “originality, advancing the field, and being first,” many researchers are also motivated by reputational incentives such as recognition, acknowledgments, and honorary awards like the Nobel Prize and the Fields Medal; sometimes the intellectual ecstasy of solving a notorious challenge seems to mean more to brilliant minds than the medal itself. This driving force of reputational incentives is well captured by Blackmore’s use of the term “prestige economy” to describe it.

So how to Not-Perish?

Evaluation methods come into play when these incentives are handed out to academics. Of course, the actual methods vary depending on where the incentives are given, but some are in common use across the board. These evaluation methods often do not compensate researchers directly, but they effectively incentivize certain behaviors, since researchers are eventually compensated based on how they are evaluated.

The most common evaluation methods fall into two categories: quantitative and qualitative. Quantitative evaluations are those that yield numerical information. Qualitative ones, on the other hand, capture non-numerical values, such as an answer to “how valuable is this research paper in a specific context?”. Qualitative methods often, if not always, take the form of peer evaluation, or draw on the results of such evaluations.

Numbers tell

The most commonly used quantitative evaluations are i) bibliometric analysis, ii) citation analysis, iii) combinations and tweaks of the two, and iv) others. Bibliometric analysis evaluates an academic based on their list of publications; the most basic metric is the number of publications. Citation analysis, as the name suggests, evaluates an academic based on the number of citations received (i.e. citation counts). The most basic metric is the citation count of each individual paper the academic has authored, and the total citation count is commonly used as well. Sometimes the distribution and pattern of citation counts are examined too.
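
To make these basic metrics concrete, here is a minimal Python sketch (on made-up records) of how a publication count and citation counts might be tallied for a single researcher; the data and field names are purely illustrative.

```python
# Toy publication records for one researcher (illustrative data only).
publications = [
    {"title": "Paper A", "year": 2015, "citations": 42},
    {"title": "Paper B", "year": 2016, "citations": 7},
    {"title": "Paper C", "year": 2017, "citations": 0},
]

# Bibliometric analysis, most basic form: how many publications?
num_publications = len(publications)

# Citation analysis, most basic forms: per-paper and total citation counts.
per_paper_citations = [p["citations"] for p in publications]
total_citations = sum(per_paper_citations)

print(num_publications)     # 3
print(per_paper_citations)  # [42, 7, 0]
print(total_citations)      # 49
```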

Numbers on a monitor, source: Unsplash, Stephen Dawson

For decades there have been numerous proposals for metrics that combine and tweak these two. Among them, the best known are the Impact Factor (IF) and the Hirsch index (h-index). The Impact Factor of a journal, or the Journal Impact Factor (JIF), to introduce it briefly, is the average number of citations recently received by the articles published in that journal. It was devised by Eugene Garfield of the Institute for Scientific Information, and nowadays the most commonly used version is the one calculated and provided by Clarivate Analytics (Clarivate hereinafter). The one by Clarivate, as their essay describes, is “calculated by dividing the number of current year citations to the source items published in that journal during the previous two years.” Thus, for instance, a JIF of 20 in 2018 means that the articles published in that journal during 2016 and 2017 received, on average, 20 citations from articles published in 2018 in all journals, proceedings, and books indexed by Clarivate’s Web of Science Core Collection.
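
To make that definition concrete, here is a minimal sketch in Python of a two-year JIF calculation; the numbers are invented for illustration and this is not Clarivate’s actual data or pipeline.

```python
# Toy two-year Journal Impact Factor (JIF) calculation for "year" 2018.
# All figures below are invented for illustration.

# Citations received in 2018 (across the indexed corpus) by articles
# the journal published in 2016 and 2017.
citations_in_2018_to_2016_2017_items = 4000

# Number of citable ("source") items the journal published in 2016 and 2017.
items_published_2016_2017 = 200

jif_2018 = citations_in_2018_to_2016_2017_items / items_published_2016_2017
print(jif_2018)  # 20.0 -> a JIF of 20 for 2018
```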

JIF may seem irrelevant to the evaluation of individual academics, since it is a metric for journals. But when academics are evaluated on their publications, they are often also evaluated on where they publish, and in that case JIF is used as a proxy for how good a journal is.

The Hirsch index is another aggregate metric commonly used to capture both the number of publications and the citation pattern of an academic. It is defined as the maximum number h such that the academic has published h papers, each having received at least h citations. For instance, a researcher with an h-index of 10 has published 10 articles with at least 10 citations each. Although JIFs and h-indices are described as metrics for journals and authors respectively, they can be calculated for any entity for which we can define a set of related papers (e.g. research groups, institutes, universities, countries, etc.).
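
As a quick illustration of that definition, here is a minimal Python sketch that computes an h-index from a list of per-paper citation counts (toy data):

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Example: 10 papers with at least 10 citations each, the rest below 10.
toy_counts = [50, 40, 33, 25, 20, 18, 15, 12, 11, 10, 6, 3, 1, 0]
print(h_index(toy_counts))  # 10
```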

Besides these commonly used metrics, numerous other methods have been proposed. The Eigenfactor and the SCImago Journal Rank try to find a steady state of impact among journals by iteratively flowing impact along citation links, much like how Google’s PageRank algorithm ranks web pages. Another metric based on citation counts is SNIP, which tries to normalize for the differences between disciplines that other metrics usually fail to take into account. Several modifications of the h-index have been proposed, notable examples being the i10-index and the g-index. Altmetrics have been proposed to capture online usage statistics such as web view counts, downloads, or the number of mentions on other web sources (e.g. Wikipedia or social media).
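
To give a flavor of this “flowing impact” idea, here is a toy PageRank-style power iteration over an invented journal-to-journal citation matrix. It is only a sketch of the general principle; the actual Eigenfactor and SJR methodologies add further normalizations (for example, the treatment of journal self-citations and article counts) that are omitted here.

```python
# Toy PageRank-style iteration over a journal citation network.
# cites[i][j] = citations from journal i to journal j (invented data).
cites = [
    [0, 3, 1],
    [2, 0, 4],
    [5, 1, 0],
]

n = len(cites)
damping = 0.85
scores = [1.0 / n] * n  # start with equal influence for every journal

for _ in range(100):  # power iteration toward an (approximate) steady state
    new_scores = []
    for j in range(n):
        inflow = 0.0
        for i in range(n):
            out_total = sum(cites[i])
            if out_total:
                # journal i spreads its influence over its outgoing citations
                inflow += scores[i] * cites[i][j] / out_total
        new_scores.append((1 - damping) / n + damping * inflow)
    scores = new_scores

print([round(s, 3) for s in scores])  # steady-state influence per journal
```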

Another measure that is quantitative to some extent is the amount or record of grants obtained by an academic. It is especially common when professors are evaluated for promotion in university departments. In general, it captures how much funding a researcher has secured, but depending on the context it may also look at details such as the types or sources of those grants.

But Numbers can’t Tell Everything

Qualitative evaluations mostly take the form of peer evaluation, which is more or less similar to the peer review in journal publishing that we discussed in the last post. In a very similar way, experts from the same or similar fields (e.g. departmental peers in the case of a university) look into several aspects of the academics being evaluated, including their bibliographic records (i.e. their publications).

Peer evaluations may be required for several reasons, but most often they are used for making decisions. These may include, but are not limited to: comparing two or more candidates for a limited position or for awards such as the Nobel Prize or the Fields Medal; recommending a faculty member for tenure or promotion; assessing a grant proposal for a final funding decision; determining priorities between several research projects; and so forth.

One piece of literature reports that peer evaluation is preferred for several advantages. To quote it directly, “it is relatively inexpensive and can be done expeditiously; it does not make excessive demands on peers and specific norms can be set and maintained; it is the best mechanism to divide research funds, to monitor research and to evaluate research results.” I am somewhat skeptical, though, as these advantages may depend on how peer evaluation is actually practiced. Rather, this is the biggest advantage of peer evaluation: it can embrace the contextual aspects of a decision, which numbers cannot. Most importantly, at the end of the day, it is the criticism and scrutiny of peers that make science and knowledge so robust.

Sometimes the results of external peer evaluations and the decisions based on them are used retrospectively. For instance, the fact that an academic has been awarded an honorary prize or elected to a prestigious position may be used as a criterion in a qualitative evaluation. Such results also allow quantitative use, such as comparing two institutes by the number of Nobel laureates. Other examples include experience on editorial boards of journals or committees of academic programs, educational background such as where an academic completed their Master’s and Ph.D., teaching and training experience at universities, or past activities at other academic institutions.

What else than Not-Perishing?

There are genuine needs for evaluating research performance and productivity. For this part, I will only briefly list some points, as they are mostly highly controversial. This post is the last part of the series dealing with background information, so I want to keep it to facts as much as possible.

There are limited resources

Academics outnumber the available resources. Society as a whole needs to distinguish who will perform better with those scarce resources. This scarcity is not limited to money; it also covers positions, opportunities, facilities and equipment, and perhaps the scarcest resource of all: the intellectual man-hours of promising researchers.

It is relevant to point out here that Eugene Garfield, when he devised the Impact Factor, hoped it would help libraries choose the best subscription options amid the growing number of journals. This remains a significant issue in higher education, as shown by university libraries canceling subscriptions, and it will continue to be a challenge for society unless we fundamentally change the structure of academic publishing.

Scholarly communication evolves based on past knowledge

As discussed in the previous post, academics build on past pieces of knowledge to create further knowledge. In this process, they constantly run into occasions where they need to discriminate between tens or even hundreds of pieces of literature. To do this efficiently, it is critical to have at least a glimpse of the differentiating factors among the ever-increasing number of research outcomes.

It is the nature of modern science

As already mentioned in the section on qualitative methods, it is the criticism and scrutiny of peers in academia that make science and knowledge more robust.

Scientific scrutiny, source: Understanding Science, UC Berkeley

So, Publish or Perish?

From the titles and headers down to this concluding remark, I have consistently used the “Publish or Perish” theme without explaining what it actually means. That was intended to keep the story informative, but as you can surely tell, yes, I am going to argue that academics perish unless they publish, and that this is bad. The next post will discuss why the current academic incentive structure is problematic, and how the complex and dynamic relationships between social stakeholders play into it.

As always, please clap and share the story with your friends, family, and peers to promote more discussion and help this story improve.

[Pluto Series] #0 — Academia, Structurally Fxxked Up
[Pluto Series] #1 — Research, the Knowledge Creating Industry
[Pluto Series] #2 — Academia, Publishing, and Scholarly Communication
[Pluto Series] #3 — Publish, but really Perish?
[Pluto Series] #4 — Publish or Perish, and Lost in Vain
[Pluto Series] #5 — On Where they Publish
[Pluto Series] #6 — On Number of Publications
[Pluto Series] #7 — On Citation Fundamentals
[Pluto Series] #8 — On Citing Practices
[Pluto Series] #9 — On Tracking Citations
[Pluto Series] #10 — On Peer Reviews
[Pluto Series] #11 — Ending the Series

Pluto Network
Homepage / Github / Facebook / Twitter / Telegram / Medium
Scinapse: Academic search engine
Email: team@pluto.network
