Unexpected effects on scientific publishing after CRISPR-Cas9 editing in vivo

Stephen Floor
5 min read · Jun 6, 2017


There’s a new scientific paper that’s spreading across the Internet like a virus in hosts with no defenses.

Schaefer et al. injected mouse embryos with CRISPR-Cas9 programmed with a guide RNA to attempt to repair a genetic variant responsible for blindness. It worked! The scientists were able to restore function in these mice through gene therapy. Amazing, right?

It’s not quite that simple. CRISPR-Cas9 locates its cut site through complementarity between the guide RNA sequence and genomic DNA. It’s not perfect, though: sites that closely but imperfectly match the guide can also be cut, producing off-target edits. The scientists later sequenced the genomes of two edited mice and found a large number of differences between the edited mice and a control mouse. What followed has at least as much to do with scientific hype and publishing as with science.
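To make the off-target idea concrete, here’s a toy sketch in Python (emphatically not the analysis pipeline Schaefer et al. used, and with made-up sequences): scan a DNA sequence for 20-nt windows sitting next to an NGG PAM and count mismatches against the guide. A perfect match is the intended site; close-but-imperfect matches are candidate off-target cut sites.

```python
# Toy model of Cas9 target search: the guide pairs with genomic DNA
# next to an NGG PAM, and near-matches can still be cut (off-targets).
# All sequences below are invented for illustration.

def find_candidate_sites(genome, guide, max_mismatches=3):
    """Return (position, mismatch count, sequence) for NGG-adjacent windows."""
    hits = []
    n = len(guide)
    for i in range(len(genome) - n - 2):
        window = genome[i:i + n]
        pam = genome[i + n:i + n + 3]
        if pam[1:] != "GG":  # Cas9 requires an NGG PAM just 3' of the target
            continue
        mismatches = sum(a != b for a, b in zip(guide, window))
        if mismatches <= max_mismatches:
            hits.append((i, mismatches, window))
    return hits

guide = "TTGACGTACGTTACGGATCC"
# One exact copy of the target, plus a single-mismatch variant of it:
genome = "TTGACGTACGTTACGGATCCAGGTTT" + "TTGACGTACGTAACGGATCCAGGTTT"
for pos, mm, seq in find_candidate_sites(genome, guide):
    print(pos, mm, seq)  # (0, 0, ...) is on-target; (26, 1, ...) is off-target
```

Real genomes are billions of bases long, and real off-target prediction also weighs where in the guide the mismatches fall, which is part of why unexpected edits are hard to rule out in advance.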

*The paper has been critiqued elsewhere, and I won’t rehash that here. Instead, see this post, PubMed Commons, and Twitter threads.

Unexpected effects on scientific publishing

There were strong reactions to this paper in the media and on Twitter. Megan Molteni wrote an article in Wired about this paper that posed a fascinating question: what if the reception of this work is in part due to different publishing models in science and medicine?

There’s a push to change publishing in biology. The number of figures in papers increased roughly threefold from 1984 to 2014. There are calls for scientists to publish smaller papers. Publishing ASAP through preprints on bioRxiv is growing rapidly.

So, with the goal of accelerating publishing in mind, consider this quote from co-author Vinit Mahajan, via Wired:

“The culture and pressures of science right now push people to not share results that aren’t a splashy cure. But in medicine you can’t do that. If you make an observation that’s important enough to share with your community, you’re obligated to do that right away.”

Publishing data directly to the Internet is an alternative to the monolithic papers scientists currently write. Judging from the quote above, this appears to have been Mahajan’s intention in submitting this letter. It’s also the intention of concept journals like Science Matters. Others have proposed using Figshare, Dryad, or Binder to publish data directly.

What might publishing single observations look like? How would they be interpreted and evaluated?

Seven guidelines for publishing and interpreting observations (or papers)

First, the observation must be well-controlled. What counts as a “control” depends on the observation, but at a minimum, proper controls are critical for interpreting the data.

Second, it should be transparent whether observations have been peer reviewed or not. This is not to say peer review is required; but because peer review is sometimes used as a proxy for “validated”, transparency is crucial. This raises its own set of concerns, but those are for another time.

Third, if conclusions are drawn from an observation, then the scope of the conclusions must match the scope of the data. The scope of conclusions drawn from a single observation is likely smaller than that of a monolithic paper, since fewer dimensions of the research question have been explored.

Fourth, reactions to observations in the scientific community, media, and financial markets should reflect their (limited) scope. Observations inherently have fewer orthogonal types of validation and controls. Molteni: “No one should presume a standalone study can predict the future of an entire technique.” Here, some conclusions of Schaefer et al. (and associated media reactions) seem outsized — but I don’t fault the authors specifically for this. Authors are encouraged on many levels to push their conclusions beyond the scope of their work. Schaefer et al. did explicitly address some limitations of their work, which should be commended.

Fifth, a forum for discussing observations would enhance their use. If authors are unaware of the discussions occurring around their observations, that reduces the utility of both the observation and the reactions to it. Imagine if the diffuse reactions to Schaefer et al. were collected in a forum connected to the paper, with the objective of determining why this study found so many variants. The Google Group discussing NgAgo is a great example of this.

Sixth, observations should be periodically collated into traditional papers. There is value in researchers taking the time to synthesize their data as a classical publication with an introduction, discussion, limitations, and future directions. These publications could use data from multiple scientists, who would all then be credited in the paper.

Lastly, some observations are wrong. People make mistakes, and scientists are people. It’s important to accept mistakes in science, as they are more likely to happen when pushing the boundaries of knowledge. I am not saying Schaefer et al. is wrong or a mistake. However, if an observation is later found to be an artifact, then science needs better mechanisms to incorporate and disseminate that information. Authors of observations, scientific papers, or media articles referencing an observation could be automatically notified, for example. Retracted papers continue to receive citations, so this area needs some work.
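As a sketch of what such a notification mechanism might look like (purely hypothetical; the work IDs and data structure below are invented for illustration), one option is a breadth-first walk of the citation graph: flag the observation, then notify the authors of everything that cites it, directly or indirectly.

```python
# Hypothetical notification propagation: when an observation is flagged
# as an artifact, walk the citation graph and collect every downstream
# work whose authors should be notified. All IDs here are invented.
from collections import deque

cited_by = {
    "obs-123": ["paper-A", "news-B"],  # works citing the flagged observation
    "paper-A": ["paper-C"],            # a paper that built on paper-A
    "news-B": [],
    "paper-C": [],
}

def notify_downstream(flagged_id, cited_by):
    """Breadth-first search over citations, starting from the flagged work."""
    to_notify, queue = set(), deque([flagged_id])
    while queue:
        work = queue.popleft()
        for citing in cited_by.get(work, []):
            if citing not in to_notify:
                to_notify.add(citing)
                queue.append(citing)
    return to_notify

print(notify_downstream("obs-123", cited_by))
# The three downstream works; each would trigger an author alert.
```

The hard part isn’t the graph walk; it’s the social infrastructure: persistent identifiers for observations, and a registry linking papers and media coverage back to them.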

Establishing truth in science

Scientific conclusions are based on the aggregate results of related studies in a field, not single observations or even single papers. Scientists know climate change is occurring and is caused by humans because 97% of active climate scientists support this conclusion based on data. Scientists know GMOs are safe because this conclusion is supported by data. Scientists know vaccines are safe and effective because that’s what the data show. Anyone can demonstrate the effect of gravity by dropping a pen on Earth, but pens don’t fall in space. Does that invalidate gravity?

An individual observation or paper that contradicts preexisting data does not necessarily overturn preexisting conclusions. It depends on the relative scope and content of the paper versus the preexisting data. So, no, Schaefer et al. didn’t “show that [CRISPR is predictable] is false.” Instead, this result must first be validated to show it’s reproducible and to identify its scope. Research to understand why this experiment led to this result will inform gene editing protocols and associated analytical methods. In other words, this paper is more of a beginning than an end.

If the goal is to reform scientific publishing by reducing the scope of papers, then the scope of hype and conclusions should also be reduced. Not every paper is going to cure cancer (as if that’s even a thing). Not every contradictory result undermines a field. Perhaps we can program CRISPR-Cas9 to knock out the 21st century hype machine.

Postscript: This is my first post. I hope to use this to share thoughts on science and its relationship with society. I welcome feedback and comments on this and any future post. Thanks for reading!

