The ridiculous number that can make or break academic careers

Vincent Tunru
Published in Flockademic
Sep 11, 2017


Let’s say someone is applying for a PhD, a postdoc, or a tenured position. How would you determine whether the candidate would be a good fit? Presumably, the process would involve a skilled researcher familiar with the field looking at the candidate’s past research to judge its quality and relevance. And in an ideal world, that is probably how it would work.

In the actual world, however, whether a researcher will secure that coveted position is strongly influenced by an inherently flawed number.

The Impact Factor

In 1972, Eugene Garfield published a method to represent the relevance of scientific journals in a single number: the (Journal) Impact Factor. It is a relatively simple concept: take the number of times articles a journal published in the previous two years were cited this year, and divide it by the number of articles it published in those two years.
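
To make that arithmetic concrete, here is a minimal sketch in Python. The figures and variable names are made up purely for illustration; real Impact Factors are computed from a citation database, but the calculation itself is this simple:

```python
# A minimal sketch of the Impact Factor calculation, using made-up numbers
# for a hypothetical journal. The 2017 Impact Factor counts citations made
# in 2017 to articles the journal published in 2015 and 2016.

citations_in_2017_to_2015_2016_articles = 700  # hypothetical citation count
articles_published_in_2015_and_2016 = 250      # hypothetical article count

impact_factor_2017 = (
    citations_in_2017_to_2015_2016_articles / articles_published_in_2015_and_2016
)
print(impact_factor_2017)  # 2.8
```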

The Impact Factor clearly struck a chord, as it grew to be very influential. And it makes sense: unless you’re already intimately familiar with a field, it can be hard to find out which journals are most relevant. The Impact Factor thus provided publishers with a selling point they could use to convince librarians to purchase a subscription.

However, as Jeff Atwood recently remarked:

Whenever you put a number next to someone’s name, you are now playing with dynamite. People will do whatever they can to make that number go up, even if it makes no sense at all, or has long since stopped being reasonable.

Thus, journals have been known to publish more review articles, which tend to attract more citations as a proxy for the papers they review; to delay articles slated for late in the year until the start of the next, so they have more time to gather citations while they still count towards the Impact Factor; and to pressure their authors into citing other articles from the same journal.

The number has other flaws: it is impossible to reproduce, it varies wildly between disciplines, it’s a self-fulfilling prophecy, and it is not representative of the average article in a journal.

The Impact Factor in less flattering terms

Now, if the Impact Factor were used only to decide which journals to subscribe to, this would be a minor problem. Alas, it is not.

Publish or perish

Librarians aren’t the only ones who struggle to recognise relevant research:

I am only expert in a very small area. I am not capable of critically analysing most of the research I come across. (…) When it comes to grant review or making academic appointments I am often out of my field.

So what happens? When reviewing perhaps a hundred grants or job applications, it’s not the research the applicants have done that matters, but the Impact Factor of the journals they’ve published in. If it is a flawed measure for evaluating journals, it is even more flawed for evaluating researchers.

The Impact Factor is not representative of the average article in a journal: a single article that is cited disproportionately often can greatly boost that journal’s score. In other words, researchers can publish completely shoddy research, as long as it’s accepted by a journal that happens to have a high Impact Factor. Likewise, if perfectly good research doesn’t get published in high-Impact journals — for example, because it takes a novel approach that doesn’t align with the reviewers’ views — it doesn’t help the authors’ careers.
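
To see how lopsided this can get, here is a toy calculation with entirely made-up citation counts for ten hypothetical articles. Real citation distributions are known to be heavily skewed in a similar way:

```python
# A toy illustration (with entirely made-up numbers) of how one outlier
# article can inflate a journal's Impact Factor, even when the typical
# article in it is barely cited at all.
from statistics import median

# Citation counts for ten hypothetical articles; one of them went viral.
citations_per_article = [0, 1, 0, 2, 1, 0, 3, 0, 1, 492]

mean_citations = sum(citations_per_article) / len(citations_per_article)
typical_citations = median(citations_per_article)

print(mean_citations)     # 50.0 -- the headline figure the journal can advertise
print(typical_citations)  # 1.0  -- what a typical article actually receives
```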

As such, the Impact Factor harms researchers’ careers, which is undesirable in itself. But its repercussions reach even further.

Why metrics are not the solution

The entire point of Flockademic is to make scientific articles freely available — so what does the Impact Factor have to do with that?

As I mentioned in The vicious cycle of academic publishing, the Impact Factor is a major reason the transition to Open Access is taking so long. Since traditional journals are more likely to have a high Impact Factor, researchers are heavily pressured to publish there — even though that often means hiding their research behind a paywall.

One idea to increase access to research could therefore be to provide an alternative metric to assist with evaluating researchers. Although the idea of killing two birds with one stone is very attractive, there are a few reasons I passed on it.

The first reason is that such initiatives already exist. Unfortunately, not much is currently known about how well they predict academic excellence (whatever that is), and they don’t yet appear to be widely used in evaluating researchers.

More important, however, is that using metrics to evaluate researchers might not be a good idea at all. This is feedback I’ve often received, and it is quite plausible that a single number simply cannot represent the breadth of an academic’s career.

However, over-reliance on metrics is mostly a cultural problem that needs to be changed by academics themselves — through initiatives like the San Francisco Declaration on Research Assessment or Bullied Into Bad Science. As a software developer, I have little to contribute there.

And thus, my investigation of how software can help open up access to science continues. As usual, you are kindly invited to follow along, and to send any ideas, remarks or suggestions to Vincent@Flockademic.com.

Update

I’m now part of Plaudit, a project that allows academics to highlight robust research by endorsing it: a first step towards circumventing the stranglehold of the Impact Factor. Give it a try, and let me know what you think!

I’m finding out how best to open up access to scientific articles. Sign up for the mailing list or follow Flockademic on Twitter to join me on the journey.
