In the current funding climate, where submitting a grant is basically playing the lottery, there is intense interest in metrics that can measure the impact of a researcher. You can look at the number of publications, the number of “high profile” publications, total citation count, the h-index (the largest h such that h papers each have at least h citations), or whatever metric is the flavor of the week. With this panoply of ways to combine publications and citation counts, I’m surprised bibliometricians haven’t yet drawn any inspiration from sports, which hold entire conferences on how best to describe a player with a single number. Baseball in particular has been at the forefront of sports analytics and has a popular metric known as value over replacement player (VORP).
I was once in the audience when an MSTP director said that the MSTP students at prestigious universities do not publish as well as you would expect given the resources available to them. I don’t know if this is true, but I do agree that where you train can influence your publication record. If you train in a lab that publishes in Nature, Science, or Cell every year, your chance of publishing in one of those journals is higher than if you train in a lab that publishes one paper every several years, and does so in a bottom-tier journal. In general I think it is common to evaluate students or trainees in the context of their situations; however, I have never heard anyone argue that a principal investigator’s accomplishments should be viewed in the context of how many resources they have.
Imagine you are reviewing a grant application and see that a researcher has two Cell publications in the past year directly related to the grant. Impressive, right? What if you found out that the investigator has 20 postdocs? Still impressive? What if they have 30?
It is precisely because awarding a grant to one investigator means withholding a grant from another that a metric for efficiency is particularly appropriate. As a funder you can think of it as how much bang you are getting for your buck. Extending the concept to faculty positions: for every position held by an investigator, there are scientists who did not get a position and had to leave academia. There are, and will be, investigators occupying these scarce positions that would be better held by someone else, or in extreme cases, by anybody else. I therefore propose a new principal-investigator-specific metric: researcher over a replacement (ROAR).
The goal of this metric is to determine how valuable an investigator is compared to an average investigator. Give an average investigator the lab space, grant money, and personnel of the investigator in question: how productive would they be? Would they publish more or less? Would they have more citations or fewer? Would they be a better mentor to their students and postdocs? Would they develop more open source tools? Would they be more of a team player and more open with their research?
Obviously some of these are difficult to quantify, and some can take years to assess, such as using the percentage of trainees who end up as faculty as a measure of the quality of training. Even something as simple as citation count is unfair to new faculty. I’m also not much of a fan of citations: very few papers cited in a publication are directly related to the work, and most are cited simply to give the reader background. And when selecting which papers to cite, people often just grab papers from prestigious journals without even reading them, which maintains the impact of those journals, which leads to others citing papers in those journals…
How best to implement this metric is up for debate, but one simple option is to divide the number of senior-author papers (preprints count) an investigator published in the past year by their funding in hundreds of thousands of dollars. By this metric, an investigator with a single R01 who published four papers in the past year would be more impressive than an investigator with five R01s who published twice as many. You might argue that this will result in investigators simply submitting a bunch of useless papers. For one, there are a lot of negative results that go unpublished and probably shouldn’t, so this isn’t necessarily a bad thing. More importantly, a single number should never be used in isolation when evaluating an investigator; if someone dilutes their publication record, other metrics will presumably suffer. For example, when evaluating an NBA player I don’t just look at PER but also minutes, usage, true shooting, real plus-minus, etc.
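To make the worked comparison concrete, here is a toy sketch of the base metric. The $250k-per-year figure for an R01 is my own placeholder assumption, not part of the proposal; any funding figure would do for the comparison.

```python
def roar(senior_author_papers, funding_dollars):
    """Senior-author papers published in the past year per $100k of funding.

    A toy sketch of the proposed ROAR metric; what counts as a paper and
    how funding is totaled are left open in the proposal.
    """
    return senior_author_papers / (funding_dollars / 100_000)

# Hypothetical figures: assume each R01 provides roughly $250k per year.
one_r01 = roar(4, 250_000)        # four papers on one R01  -> 1.6
five_r01s = roar(8, 5 * 250_000)  # eight papers on five R01s -> 0.64
```

Even though the five-R01 investigator published twice as many papers, the single-R01 investigator comes out well ahead on a per-dollar basis.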
Institutions might be interested in a slight twist on the ROAR metric: divide it by the investigator’s salary (in hundreds of thousands). This makes it easy for institutions to see who is grossly overpaid relative to research output per grant dollar, although some investigators have additional responsibilities, such as teaching, which the institution may or may not value.
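The salary-adjusted twist is just one more division. A self-contained sketch, with all dollar figures hypothetical:

```python
def roar_per_salary(senior_author_papers, funding_dollars, salary_dollars):
    """Salary-adjusted ROAR: papers per $100k of funding, further divided
    by salary in hundreds of thousands. A sketch of the institution-facing
    variant; the normalization constants are assumptions."""
    base = senior_author_papers / (funding_dollars / 100_000)
    return base / (salary_dollars / 100_000)

# Two investigators with identical output and funding: the one paid
# $150k scores twice as high as the one paid $300k.
cheaper = roar_per_salary(4, 250_000, 150_000)
pricier = roar_per_salary(4, 250_000, 300_000)
```

Holding output and funding fixed, the metric scales inversely with salary, which is exactly the “bang for the buck” framing an institution would care about.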