Evolution of Scientific Assessment — Retrospect, Introspect and Prospect
Scientific assessment traces its roots back to the 15th century. This period witnessed the rise of the polymaths — people whose expertise spanned many different subject areas and who were skilled in several fields of the arts and sciences.
In the 18th century, scientists were considered the ‘all-knowers’ of their fields. With the passage of time, in the 20th century, specializations evolved; it was the era of new specializations in the study of science. In the 21st century, specializations narrowed down further into ever more specific fields.
With this narrowing of specializations, the ‘information explosion’ came into the picture. There are 25,400 journals in science, technology and medicine, a figure growing at a rate of 3.5% per year, and global scientific output is estimated to double every nine years. This information explosion, or overload, brings the challenge of filtering out the best work and finding well-suited source journals. To address this challenge, Eugene Garfield introduced the impact factor (IF), derived from the Science Citation Index. It is used as a proxy for the relative importance of journals within a field.
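The conventional two-year impact factor is a simple ratio: citations received in a given year to items the journal published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch, using hypothetical numbers (the function name and figures below are illustrative, not from any real journal):

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year journal impact factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    if citable_items_prev_two_years == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 420 citations in 2023 to articles from 2021-22,
# and 150 citable items published across those two years.
print(impact_factor(420, 150))  # 2.8
```

Note how the formula says nothing about any individual article: a handful of highly cited papers can lift the ratio for every paper the journal prints, which is why the IF was designed to compare journals, not researchers.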
With changing scenarios in the scientific community, the use of the impact factor expanded beyond its original purpose. The IF is now widely adopted as a way to demonstrate the quality of research, and it weighs on promotion and tenure decisions for researchers as well as on funding proceedings.
As the IF is based on citation counts, there is a significant lag between the date a paper is published and when it actually begins to contribute to the IF of its parent journal. In the current “publish or perish” culture, academics are expected to publish impactful articles at an unprecedented speed, yet authors must wait months to years for citations to accrue. The entire process is slow and troublesome for early-career researchers who are trying to establish themselves.
Publication in high-impact journals invites cut-throat competition. The process is cyclical and time-consuming, involving multiple rounds of review and revision, and if the submission is rejected, one has to start from scratch all over again. This hinders the growth of scientific research and wastes resources. The chase for high-impact-factor journals also distracts researchers from primary responsibilities such as teaching and mentoring. This should change for the betterment of the scientific community.
To address these deficiencies, alternative metrics were derived, such as the g-index, SNIP, the immediacy index, the Eigenfactor and the h-index.
The h-index, created by Jorge E. Hirsch, is one of the most popular alternatives to the impact factor in the scientific community. However, all of these indices still rest on citation metrics and are biased towards researchers whose papers have been in circulation for a long time, leaving no measure to assess early-career researchers.
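Hirsch’s definition is compact: an author has index h if h of their papers have each been cited at least h times. A minimal sketch over a hypothetical citation record (the numbers below are made up for illustration):

```python
def h_index(citation_counts):
    """h-index: the largest h such that the author has h papers,
    each cited at least h times (Hirsch, 2005)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

# Hypothetical record of five papers and their citation counts:
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The example also makes the bias in the text concrete: a newcomer with two strong but recent papers cited twice each can score at most h = 2, no matter how promising the work is, because the index can only grow as citations accumulate over years.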
A noticeable shift has taken place from journals to individual articles, and PLOS has been the pioneer in this area: PLOS journals have provided a rich array of metrics for every article published since 2009. With this approach, research assessment can be extended to a broader array of research outputs. Services such as Dryad, Figshare and Slideshare support the deposition of outputs other than articles, while newer services such as Altmetric, Impactstory and Plum Analytics aggregate media coverage, citation numbers and social-web metrics, giving authors a more complete picture of the impact of their research.
With the passage of time, reducing the role of impact factors in research assessment should substantially improve research communication. One will be able to understand the benefits, significance and influence of the collective investment in research, and ultimately an effective system of research communication will surface.
Profeza is a new social journal built on the framework of scientific publication, introducing Research Assessment 2.0. Built for scientists, it brings accountability and transparency into the system and recognises every aspect of doing science that remains concealed in a scientific article.
To recapitulate the evolution of scientific assessment: in the 20th century, it didn’t matter what one did, only what one knew. In the 21st century, it doesn’t matter what you do, only where you publish. In the coming century, it will only matter what you did, how you did it, and why you did it — Profeza is all about this.