Trusting Science in the Time of Coronavirus


August 17, 2020

by Leslie McIntosh and Sarah Nathan

Dr. Leslie McIntosh is the founder and CEO of Ripeta, LLC, a company that seeks to improve scientific reproducibility through machine learning. Sarah Nathan is a writer and editor who will begin a degree at Yale Law School this fall.

In June, news broke that two influential COVID-19 studies had been retracted. The scientific publishing community was likely unsurprised.

Back in April, when major news sources began reporting on how the pandemic was changing the way we do and report science, the editors of major scientific journals all aired a similar message: Our quality checks are still in place, we are encouraging preprints, and we have the best information out there. The goal was clearly to inspire trust in science at a time when evidence-based decision making is crucial.

But behind each of those statements was a note of reservation, a hint of the pressure felt by these editors — and of their doubt in their own processes. Richard Horton, editor of The Lancet, was quoted in The New York Times as saying, “…if we make a mistake in judgment as to what we publish, that could have a dangerous impact on the course of the pandemic.”

A handful of weeks later, The Lancet made exactly the mistake Horton worried about: one of the recently retracted articles, which relied on a faulty data set, had appeared in its pages.

As James Heathers reports in The Guardian, the problems that led to this retraction are not new. Heathers explains that the peer review process was never meant to catch analysis or data errors, and publishing at a rapid pace only exacerbates the failings of the peer review process. Yet the scientific community has continued to promote its own reliability. Perhaps the purveyors of scientific literature believe that, if they admit that the research vetting process has been flawed for years, they will do permanent damage to public trust in science. And, to be clear, public trust in science is crucial to solving many of society’s most persistent ills. But so is scientific accuracy.

This dual knowledge — of the need for reform and the need for public trust — has resulted in an awkward middle ground in which editors attempt to quietly improve their processes while insisting that the old vetting methods are perfectly reliable. This mixed messaging has left researchers confused about the importance of new guidelines: journal instructions require numerous transparency measures, like data availability statements and code sharing, yet no one checks for compliance.

As the scientific community grapples with implementing better quality controls, it is important that the consumers of science learn to separate the signal from the noise. In other words, we need to recognize the ways that the scientific community sells the concept of trustworthiness, why those methods fail to ensure quality, and how we can recognize legitimate trustworthiness.

The Noise

The scientific community uses three platforms to paint an image of reliability and integrity. The first is the journal itself. The higher the journal’s impact factor (a measure of how widely read and cited its articles are), the more respect it commands. Like a police officer flashing a badge, these journals can display their status to symbolize that their articles are of the highest quality. But, like a police department fixated on solving a serious crime, journals can overstretch the authority of that badge in pursuit of an important goal. For instance, a journal’s staff may decide to swiftly publish an article on COVID-19 to help spread crucial health information. Though the journal is well known, it may publish a substandard article that fits its preconceived ideas about the virus, just as a police department might arrest the wrong suspect if officers rush to solve a crime, building the “facts” around a preconceived notion.

Next, consider the role of peer review, a process in which two or three researchers check the quality of an unpublished manuscript. Like a board of trustees overseeing a non-profit, the peer review process establishes oversight and legitimizes the publication process. Neither boards nor peer review always functions as intended, however. For instance, board members may not have enough expertise to make informed decisions, may have personal biases that influence their decision making, or may place too much trust in the chairperson, especially when the board must make quick decisions. Similarly, peer reviewers must be sufficiently expert to assess the quality of a manuscript, which is not necessarily possible when dealing with novel issues like a new virus strain. The need for expertise also means that reviewers are often prior collaborators of the author and may have researched closely related topics. Under time constraints, these reviewers may be more likely to lean on their existing biases, just like board members who are overly willing to agree with the chairperson.

Finally, scientific trust is established by the expertise, degrees, and training of researchers. Researchers undergo years of rigorous training, and most have PhDs, which inspires trust in their work. Yet a researcher’s expertise is limited to a specific field and, often, a particular subset of that field; a doctorate in quantum mechanics provides no expertise in epidemiology. Especially during a crisis like the COVID-19 pandemic, “experts” may be tempted to wander outside their specialties, writing on topics in which they have no training. Their articles walk and talk like expertly researched work, making them extremely tempting to trust.

The Signal

The research community has long held that reproducibility — not prestige — is the true marker of scientific quality. For an article to be reproducible, an unaffiliated researcher must be able to find the information necessary to redo the study and obtain similar results. In essence, then, reproducibility is synonymous with vulnerability. It asks that researchers lay bare their methods and data, opening themselves up to scrutiny. Though no one could have attempted to reproduce the recently retracted articles so soon after publication, that vulnerability alone could push researchers to improve the quality of their methods and data sets.

Attempts to improve reproducibility have often been undermined by concerns about protecting confidential data. But reproducibility does not necessarily require authors to make their data publicly available. Certain types of data should remain restricted, but better processes could give researchers wishing to replicate a study access to raw data through a third-party repository. Instead, researchers typically have to reach out directly to the author, who has no obligation to hand over the data. Mistakes and malfeasance are thus easily concealed.

When evaluating science, consumers of research — whether they be journalists, governments, nonprofits, other researchers, or laypeople — should look to reproducibility, not journal brand, the peer review process, or the degrees of the authors. Thankfully, responsible consumption does not require us to check the veracity of an author’s data ourselves. Rather, it requires us to check whether and how the author has made their data, code, methods, and analysis process available. Of course, given the urgency to act on research relevant to the COVID-19 pandemic, decision-makers may feel compelled to rely on such research even in the face of transparency problems. In the meantime, consumers of science should push journals to enforce strict transparency guidelines for authors writing about COVID-19. These demands are achievable and necessary, and they could finally open the door for journals to accept reproducibility as the new standard of scientific quality.
