The Scientific Method — A novel vehicle for communicating science
A modular network of micro-publications anyone can observe, question, and hypothesize their way through
Last April, my entire feed burst into flames with retweets of an article published in The Atlantic entitled The scientific paper is obsolete.
In the article, the author details the history of the research community’s preferred format for sharing findings: the scientific paper. He asserts that modern research has become so complex that the paper can no longer fulfill its function of communicating results.
“…the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.” — James Somers
Despite the inflammatory title, the article makes a compelling point. It’s remarkable that as a community we are still using the same vehicle for communicating science that The Royal Society developed in 1665. Somers goes on to describe computational notebooks such as Wolfram’s Mathematica and the Jupyter Notebook (which grew out of IPython) that have successfully improved upon simple “text and pictures” by facilitating data visualizations and the complete sharing of code and methodologies.
While these improvements are certainly commendable, at the end of the day computational notebooks simply modernize the same structure for communicating knowledge that the scientific community has used for centuries. This conclusion left me unsatisfied with the promise of Somers’s incendiary tagline, so I began to consider the question myself.
The Scientific Paper is a Substandard Tool for Modern Scientific Communication
The rigid format of the scientific paper, embedded in a bibliometrics-driven scientific economy, encourages narrative bias, hinders the timely and complete reporting of results, inherently lacks modularity, and creates an air of exclusivity that holds back the scientific community at large.
Humans are predisposed to storytelling. Narratives are inherently interesting and help us make sense of and retain the complex information presented within academic research. Even the most ethical researcher is unable to avoid the bias inherent to human cognition when presented with the challenge of recognizing patterns within large amounts of data. The structure of the scientific paper facilitates a researcher’s ability to tell a narrative within their work.
The typical scientific paper employs the following structure:
- Abstract: A short summary of the entire paper
- Introduction: The relevant background required to interpret the paper’s findings
- Methods: A description of the methodology employed to obtain the results
- Results: The experimental data discovered during the study represented in figures
- Discussion: The author’s interpretation of the significance of the results
The introduction sets the scene by giving context to the research and hooking the reader with an explanation of why the work is significant. The results section is the meat of the story, where the researcher provides the evidence to support their claim. Finally, the discussion is the resolution, where the reader is debriefed and the significance of the work is asserted.
Because of our human tendency to enjoy a good story, we are much more likely to cite studies that assume this narrative structure. High-impact-factor journals select manuscripts based on their citation potential, which creates a systemic incentive for researchers to produce work that will be perceived as “compelling”.
The only section that does not fit this traditional literary format is the methods section, which likely helps explain why so many papers lack the methodological detail required for independent replication.
Incomplete and Delayed Publication of Results/Methods
The narrative bias facilitated by the structure of the scientific paper often manifests as publication bias: research is more likely to be shared if it supports the hypothesis the scientist held at the outset of the study. For example, an estimated 58% of clinical trials go unpublished because they did not achieve the intended result. There are even prevalent opinions within academic science that publishing negative results can hurt a researcher’s career prospects.
Taking an even more granular view — the formatting requirements of most academic journals limit the sheer volume of information that can be presented within a scientific paper. Despite the fact that the majority of articles are accessed online, journals typically charge authors a per-page/per-figure publication fee, making it financially impractical for authors to include complete methodologies and results. In fact, researchers are incentivized to hand-select, from all of their experimental observations, only the data points that support their conclusions well enough to clear an arbitrary threshold of statistical significance. If you have the time, google “p-hacking”. It’s significant.
As any researcher can tell you, the publication of a scientific paper is an inordinately large undertaking. The lifecycle is roughly the following:
- Receive a grant to conduct research
- Gather experimental data
- Prepare a manuscript
- Submit the manuscript to academic journals for peer-review
- Publish the research paper
The entire process of finding funding, collecting data, and publishing it within the context of a compelling scientific narrative is a daunting task. The data gathered in step two remains unavailable to the community until after the conclusion of step five. In addition, it is not uncommon for a year to pass between the submission of a manuscript and its publication in an academic journal.
Among chemistry publications, there is an average nine-month delay between manuscript submission and publication. This delay slows the entire scientific enterprise and the rate of innovation within it.
Lack of Modularity
A quality scientific paper is the careful assembly of a variety of research outputs: a literature review, the research question, experimental designs, raw data, statistical analyses of the data, and top-down pattern recognition. When these outputs are combined into a scientific paper and published as a single unit, they become individually inaccessible.
For instance, despite technological advances, it is still relatively rare for the raw data on which a paper is based to be openly published alongside the manuscript. A scientist who would like to access that data must reach out to the paper’s authors. This can be problematic, as authors are under no obligation to share their data.
Finally, scientific papers are written and formatted specifically for consumption by academics within a given field of study. They are laden with technical jargon and an air of prestige. This cultural norm makes it nearly impossible for the general public to consume the published knowledge and forces the scientific community to rely on journalists in search of click-worthy headlines to do the translating.
A Case-Study Illustrating the Flaws of the Scientific Paper
The linked tweet-thread from Dr. Brian Skinner (@gravity_levity) describes a tale of scientific intrigue in which Dr. Skinner makes an observation about a physical chemistry paper describing the observation of superconductivity at room temperature.
Upon noticing a statistical anomaly in one of the figures, he published a 15-sentence article calling attention to his observation on the pre-print server arXiv.org. The community broadly recognized how unlikely it was for this anomaly to be genuine, which cast significant doubt upon the findings of the original paper.
This type of interaction would never have been possible through the traditional academic journal infrastructure. There is no telling how much time and how many resources would have been squandered while Dr. Skinner waited for a journal to publish his criticism. Luckily, between Twitter and arXiv he was able to publicize his observation quickly.
The Scientific Method — an Improved Format for Communicating Science
At Knowledgr we believe it is possible to use a new format for scientific communication that: 1.) eliminates narrative bias, 2.) encourages complete and timely publication of results/methods, 3.) is inherently modular, and 4.) emphasizes inclusivity.
Ever heard of the scientific method?
The scientific method is the only known empirical algorithm for knowledge acquisition. It serves as the central dogma of modern research: a formal definition of the scientific thought process that allows humanity to overcome our cognitive biases.
The steps of the scientific method are:
- Observe: The sky is blue
- Question: Why is the sky blue?
- Hypothesize: The sky is blue because the air is blue
- Experiment (Experimental Methodology + Experimental Observation): Methodology — Isolate air in a translucent container and observe. Observation — The air in the translucent container is clear
- Conclude: The hypothesis ‘the sky is blue because air is blue’ is not supported
- Criticize: I imagine a lot of air would be needed before its color became visible; a small container may not be enough
Notice that no step of the scientific method stands alone:
- Observations lead to questions which inspire hypotheses.
- Hypotheses require testing with experiments.
- Each step along the way can be criticized.
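These relationships form a directed graph of linked micro-publications. As a minimal sketch of that idea, the Python below models each micro-publication as a typed node pointing at the entries it builds upon; the class and field names are illustrative assumptions, not part of any existing Knowledgr design:

```python
# A hypothetical model of a micro-publication network. Names like
# MicroPublication, kind, and supports are invented for this sketch.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MicroPublication:
    kind: str   # "observation", "question", "hypothesis", "experiment", ...
    body: str   # the stand-alone content of this micro-publication
    # Upstream micro-publications this entry builds upon.
    supports: List["MicroPublication"] = field(default_factory=list)

def trace(node: MicroPublication) -> List[str]:
    """Walk the upstream links, returning the chain of kinds back to the root."""
    chain = [node.kind]
    for parent in node.supports:
        chain.extend(trace(parent))
    return chain

# The sky-is-blue example from the steps above, as linked stand-alone nodes.
obs = MicroPublication("observation", "The sky is blue")
q = MicroPublication("question", "Why is the sky blue?", supports=[obs])
hyp = MicroPublication("hypothesis",
                       "The sky is blue because the air is blue",
                       supports=[q])

print(trace(hyp))  # ['hypothesis', 'question', 'observation']
```

Because every node is addressable on its own, a criticism can attach to any single step — an observation, an experiment, or a whole hypothesis — without touching the rest of the network.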
If the scientific method were adopted by the academic community as a format for scientific communication, the result would be a fascinating network of “micro-publications” that:
1.) eliminates narrative bias — Publishing stand-alone observations would create a “blinding effect”: a healthy separation between the researcher who produced the experimental data and the researcher who uses that data to support a hypothesis.
2.) encourages timely, complete publication of results/methods — Each experimental observation would consist of a raw dataset published alongside a detailed methodology describing how the observation was collected. It would be simple to record how often any given observation has been replicated, and because these observations stand alone, there would be no incentive to delay publishing them.
3.) is inherently modular — Each observation, question, hypothesis, experiment, conclusion, or criticism could be published on its own. Anyone could access these micro-publications individually, or combine them to synthesize something larger.
4.) emphasizes inclusivity — Hypotheses could be as small as Dr. Skinner’s 15-sentence arXiv publication or as large as a doctoral thesis. While it may be difficult for a layperson to produce a meaningful hypothesis, it is easy to imagine how almost anyone could contribute a simple observation or question to the network of micro-publications.
If you want to have good ideas you must have many ideas. Most of them will be wrong, and what you have to learn is which ones to throw away. — Dr. Linus Pauling