Publish or Perish, and Lost in Vain

Obsession with publication is one thing, yet much more is being missed out on

Sanghyun Baek
Pluto Labs
10 min read · Jan 8, 2019


[Pluto Series] #0 — Academia, Structurally Fxxked Up
[Pluto Series] #1 — Research, the Knowledge Creating Industry
[Pluto Series] #2 — Academia, Publishing, and Scholarly Communication
[Pluto Series] #3 — Publish, but really Perish?
[Pluto Series] #4 — Publish or Perish, and Lost in Vain
[Pluto Series] #5 — On Where they Publish
[Pluto Series] #6 — On Number of Publications
[Pluto Series] #7 — On Citation Fundamentals
[Pluto Series] #8 — On Citing Practices
[Pluto Series] #9 — On Tracking Citations
[Pluto Series] #10 — On Peer Reviews
[Pluto Series] #11 — Ending the Series

Thank you for your patience in reading this series up to this point. I’m now done with the background of the series: Research, Academia, Scholarly Communication, and Incentives. This post will be the first in the series to deliver its main theme, our claims, beyond just being informative.

To put it in a single line, the current incentive structure of academia is perverse, and it discourages collaboration and genuine knowledge creation.

Photo by J. Kelly Brito on Unsplash

Good, Bad, Ugly

What does it mean when we say an incentive is “perverse”? According to BusinessDictionary, it is “an incentive that produces an adverse consequence due to the actions undertaken to receive the incentive.” Extending this to the evaluation methods used in academia, I would say the incentive structure is perverse because it fails to encourage the “good” things and it produces “bad” things in academia. But what are the goods and bads of academia?

Admittedly, speaking of good and bad is controversial and depends heavily on the sociocultural context, and sometimes things even get “ugly”, since we cannot cleanly divide every aspect of knowledge creation and its communication into good or bad. But the discourse begins when we start talking about the good and bad on which we can generally agree, at least in the current global context we live in.

Good

Robert Merton, the founder of the sociology of science, introduced four necessary traits of modern science; with the later addition of Originality, they form the five letters of CUDOS: Communalism, Universalism, Disinterestedness, Originality, and Skepticism. From Originality, together with how we described research as knowledge creation, one of the goods of academia is, without doubt, “creating more original (new) knowledge”. Setting aside the more controversial question of what counts as original, any aspect of the incentive structure that can be contextually interpreted as having led to more original knowledge may be seen as a good incentive.

From the points C, U, D, and S (eww), another good can be drawn: robustness. We don’t want to merely keep seeking new findings about nature; we want those findings to be solid and reliable as well. The way we make knowledge more solid and reliable is by sharing it with the community (Communal and Universal, be it peers in the field or the public), consistently posing questions to it (Skeptical), and making it hold regardless of who is asking (Universal and Disinterested). Failing to make knowledge robust may lead the public to believe in “bad science”*, but more importantly it can clutter the whole body of our knowledge with questionable findings.
(*Despite being directly called out as “bad science”, the power-pose study is still hotly debated, and that ongoing debate is a good sign of the knowledge being made “robust”.)

power posing, source: Mohamed_hassan, Pixabay

As I noted in the first introductory post of this series, academia has got to be more collaborative. In terms of scholarly communication, being collaborative means sharing more information. Merton, in describing the C of CUDOS, noted that the intellectual property of academics should be limited to recognition (attribution such as authorship) and esteem (the reputation that follows). He also noted that scientific findings are products of social collaboration and are thus assigned to the community. In Pluto’s view, this is open to further debate over how much right an academic may claim over a finding given how much contribution was made. (A lot of research leads to patents, and it’s highly questionable whether Merton was denying patents as well.) In any case, collaboration is deemed a good thing considering the circulating nature of knowledge creation (i.e. scholarly communication), but discourse is required around who contributes exactly how much by sharing which scientific information, and how they are credited accordingly.

Bad

So what are the things undesirable in academia? First of all, without question, anything that explicitly opposes the good things mentioned. That is, anything that hinders the creation of original, robust knowledge and its proactive communication is bad. In particular, any aspect of the incentive structure that prevents researchers from collaborating with each other (i.e. making them secretive about research findings) needs to be fixed.

Being obsessed with things that have nothing to do with genuine knowledge is bad. Any activity, at either the individual or the group level, is bad if it disturbs the creation of original and robust knowledge, and is thus deemed a misdeed. At the individual level, cheating and fraud such as p-hacking, HARKing, or cherry-picking are bad. At the institutional or group level, misdeeds include citation cartels (i.e. groups of researchers purposely citing each other’s papers), setting journal policies solely to inflate the impact factor, and political rejections or scooping by editorial boards or reviewers.* Each of these examples might seem negligible on its own, but they cannot be ignored (or rather, need to be regarded as significant misconduct) because they seriously damage the trust and reliability of academia. In a yet bigger picture, anything slowing down the creation of knowledge is bad. And these misdeeds, directly or indirectly, do slow it down.
(*Things get uglier with more controversial issues such as simultaneous releases, or rejections on ethical grounds, though it is open to question whether those are bad.)
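To see why even a “small” misdeed like p-hacking matters, here is a minimal, self-contained sketch (purely illustrative, using a crude normal approximation rather than a proper t-test): measure enough unrelated outcomes against pure noise, report only the best-looking comparison, and “significant” findings appear far more often than the nominal 5% error rate promises.

```python
import math
import random
import statistics

def p_value_two_sample(a, b):
    """Crude two-sample p-value via a normal approximation (illustrative only)."""
    mean_diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(mean_diff) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

random.seed(0)
trials, false_positives = 1000, 0
for _ in range(trials):
    control = [random.gauss(0, 1) for _ in range(30)]
    # "Measure" 20 unrelated outcomes against the same control group and keep
    # only the best-looking comparison -- this selective reporting is the p-hack.
    best_p = min(
        p_value_two_sample([random.gauss(0, 1) for _ in range(30)], control)
        for _ in range(20)
    )
    if best_p < 0.05:
        false_positives += 1

# An honest single test would be "significant" about 5% of the time under the
# null; picking the best of 20 comparisons inflates that far above 5%.
print(f"False positive rate after p-hacking: {false_positives / trials:.0%}")
```

Each individual test in that loop looks legitimate on its own; the misconduct lives entirely in the selective reporting, which is exactly why it erodes trust while staying nearly invisible.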

the Ugly Incentive

When we say the current incentive structure in academia is perverse, we’re thus saying that it either fails to encourage the creation of new, robust knowledge and its communication, or that it actively induces misconduct and malpractice. Some of the points below concern the evaluation methods currently in use, and some concern what the current system is missing out on.
(for a more detailed explanation of the incentive structure, see our previous post)

Numbers get ugly

Evaluation methods, specifically quantitative metrics such as the JIF or the number of publications, are sometimes criticized for their inability to fully capture the “good things” discussed above (i.e. more papers do not mean more knowledge). More often, they’re blamed for encouraging the bad things. For example, evaluating researchers by the number of papers they publish in prestigious journals is criticized because academics may become obsessed with getting their papers accepted (thus published) in high-impact journals rather than pursuing genuine knowledge (the infamous publish-or-perish). Academics would, therefore, hold back as much information as possible from each submission if doing so offers any chance of squeezing out more publications.

As researchers are also assessed by “where they publish”, often via the JIF, journals and their editorial boards sometimes hold unusual authority, precisely because they determine which papers get accepted. Some journals are even suspected of organized manipulation of their impact metrics, and some are criticized for selectively accepting papers that are more eye-catching, trendy, or even wrong.
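For reference, the two-year JIF that drives much of this assessment is nothing more than a simple ratio. Here is a minimal sketch with made-up numbers (not any real journal’s figures):

```python
# Minimal sketch of the standard two-year Journal Impact Factor (made-up numbers).
# JIF(Y) = citations received in year Y by items published in Y-1 and Y-2,
#          divided by the number of citable items published in Y-1 and Y-2.

def journal_impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 120 citable items over the two previous years,
# cited 480 times this year -> JIF of 4.0.
print(journal_impact_factor(480, 120))
```

The point is not the arithmetic but what it omits: nothing in that ratio says anything about how original or robust the cited work actually is, which is also why it is so tempting to game.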

It’s not FOMO!

We’re actually missing out! The points above concern pitfalls and caveats of the methods already in use. Beyond those, the current incentive structure is missing out on a lot of valuable things in academia.

It’s always Papers

As already discussed with the quantitative metrics, current evaluations are heavily focused on one form of output: the “original research paper”. A whole range of other research outputs arise in the course of a study, and they never see the light of being credited as contributions to academia. Open Data, DataCite, MakeDataCount, and Open Notebook Science are some notable efforts to better incentivize data in research (so as to get them shared). Beyond data there is far more: proposals, ideas, experimental setups, protocols, analytic methods, code, … I won’t really be able to name them all.

Source: Free-Photos, Pixabay

Publication Bias

Even within the narrow form of published papers, valuable pieces of information are still being missed. Looking at what kinds of studies journals accept, especially the high-impact ones, there clearly is a preferred type of study in academia today: positive results. In other words, null (or negative) results are hardly published. Null results are those that failed to confirm the hypothesis of a study, which is nowhere near the same as being “meaningless”; failing at something often gives precious information, because it shows how not to fail. Another important type is the replication study: repeating a prior study following its documented specifications, typically in a different setting. The fact that these are not actively shared, and are instead left in the file drawer forever, means we need a better incentive structure to “make them count”.
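To make the cost of the file drawer concrete, here is a minimal simulation sketch (made-up parameters, not real data): many small studies of one small true effect, of which only the “significant” positive ones get published.

```python
# Minimal sketch of the "file drawer" effect: if only statistically significant
# results get published, the published literature overstates the true effect,
# and the unpublished null results carry exactly the missing information.
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.1        # small real effect, in standard-deviation units
N_PER_STUDY = 30
N_STUDIES = 2000

all_estimates, published = [], []
for _ in range(N_STUDIES):
    sample = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_STUDY)]
    estimate = statistics.mean(sample)
    se = statistics.stdev(sample) / (N_PER_STUDY ** 0.5)
    all_estimates.append(estimate)
    if estimate / se > 1.96:   # "positive and significant" -> gets published
        published.append(estimate)

print(f"True effect:               {TRUE_EFFECT}")
print(f"Mean of all studies:       {statistics.mean(all_estimates):.2f}")
print(f"Mean of published studies: {statistics.mean(published):.2f}")  # much larger
```

No single researcher fabricates anything in this picture; the distortion comes purely from which results are allowed to be seen, which is why sharing null results matters so much.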

Retrospect, Revisit, and Robust

Above all, we’re missing out on the most fundamental work of scholarly communication, the work that sets the basis of modern science. These retrospective activities matter because they are the most distinctive mechanism for building robust knowledge, and yet they remain undervalued. We’re missing out on the value in peer reviews: they are neither actively shared nor properly credited. That is problematic in itself, since they’re too valuable to be lost, but more importantly, not sharing and not crediting them can let those activities be conducted in irresponsible ways. Considering how the discourse around modern science and its problems consistently converges on the “verifiable, testable, scrutinized, validated” state of knowledge (all of which make knowledge more robust), it is absurd to see peer reviews so undervalued.

To sum up, the motivations that drive academics in knowledge creation require a serious fix, specifically in how academics are judged to be performing well and in which activities are credited as academic contributions and thus shared. The quantitative metrics are not fully, and sometimes not at all, indicative of the good things in academia they’re supposed to capture, and they often tempt academics into wrongdoing. What’s more, the overall system fails to leverage the numerous types of activities and information arising in the course of studies, which are too valuable to be missed and which even form the foundation of modern science (or would, if properly incentivized). Specific points will be dealt with in upcoming posts, along with why they’re problematic and how various stakeholders are situated. In the remainder of this post, I will briefly lay out the stakeholders and their positions.

the Academics

  • They take the most central role in the whole enterprise of knowledge creation: they create the knowledge
  • They are also the biggest “consumers” of knowledge: they create new knowledge based on past knowledge (often the most recent)
  • Their activities (and thus careers) are heavily governed by other players: institutions (universities), funding agencies, publishers, journals, etc.
  • They’re “sparsely split” into numerous groups (communities, societies, etc.)
  • Thus it is a challenge for academia to have a decent “interface” to speak out, to gather a unified voice, or to reach “consensus”

the Commercials

  • A lot of publishers, metrics providers, and information-science businesses (e.g. many of those acquired by Clarivate Analytics, Elsevier, Informa, EBSCO, etc.)
  • Don’t be confused: non-profits can still be commercial (i.e. charge fees). See how many non-profit publishers charge for paper submissions or subscriptions
  • These commercial players, doing “information-oriented” business specifically in academia, face a paradox
  • Because they sell “information”, they often provide only the end results of their analyses; yet since academia and its scrutiny are about the information behind those results, it is often necessary to look at these businesses’ raw data (i.e. transparency)

the Public and the Agents

  • The public provides a lot of input to academia, mainly in the form of taxes
  • There are a lot of “agents” between the public and academia: funding agencies, universities and libraries, policy makers, etc.
  • As in the classic “principal-agent problem”, these players make decisions on behalf of the public (or sometimes of another group)
  • Often these agents are not experts in the relevant academic disciplines, or, put better, their decisions often affect more than one field of expertise
  • When making decisions, these agents often depend on information provided by the commercials, and in the process the inputs (especially capital) from the public flow into the commercials
  • For qualitative evaluations and decision-making, these agents in turn consult another group of agents, usually experts in specific disciplines, which brings us back to the academics!

The complexity of the stakeholders and the relations between them spans even further than this. See, for example, how the American Association for the Advancement of Science (AAAS, which publishes the famous journal Science and its sister journals) has “America” in its name, yet every statement in its mission aims at “the world”, “the benefit of all people”, or “the public”.

a “Note” searched on Google, as of 20190107 11:30 KST

Do academics “need to expand U.S. competitiveness” when they submit their findings to AAAS journals? Or the competitiveness of their own country? Or that of the world (yes, according to AAAS’s mission)? What is it competing against when it’s for the world? How about when their studies are funded by a private company? These questions are not meant to draw answers, but to highlight the complex, global span of scholarly communication.

Now that we’ve briefly explored what needs to be incentivized in academia and what doesn’t, upcoming posts will discuss the specific points addressed here, detailing why they are problematic and how various players are involved. As always, please clap and share the story for more discussion.

[Pluto Series] #0 — Academia, Structurally Fxxked Up
[Pluto Series] #1 — Research, the Knowledge Creating Industry
[Pluto Series] #2 — Academia, Publishing, and Scholarly Communication
[Pluto Series] #3 — Publish, but really Perish?
[Pluto Series] #4 — Publish or Perish, and Lost in Vain
[Pluto Series] #5 — On Where they Publish
[Pluto Series] #6 — On Number of Publications
[Pluto Series] #7 — On Citation Fundamentals
[Pluto Series] #8 — On Citing Practices
[Pluto Series] #9 — On Tracking Citations
[Pluto Series] #10 — On Peer Reviews
[Pluto Series] #11 — Ending the Series

Pluto Network
Homepage / Github / Facebook / Twitter / Telegram / Medium
Scinapse: Academic search engine
Email: team@pluto.network
