When Science Needs Self-Correcting

Julia Strand
Mar 24

Admitting scientific errors is hard. It’s also important.

In 2018, I published a paper that reported the most interesting finding of my career. A year later, while trying to figure out why I couldn’t replicate the effect, I discovered a massive error in the original experiment. The central finding was the result of a software glitch and was completely untrue. I had published a paper with invalid data and false conclusions.

This research was about the cognitive effort people use while listening to speech — think of that feeling of “squinting your ears” while trying to understand someone in a noisy bar. The 2018 paper showed a clever way to dramatically reduce cognitive effort: present the speech with a modulating circle that got bigger when the speech got louder. Participants responded faster when they could see the circle than in the control condition, when they couldn’t.

The data were gorgeous — every single one of the 96 participants showed the effect. When publishing the study, my co-authors and I employed many open science practices: the analyses were pre-registered, and we publicly shared our materials, data, and code on the Open Science Framework. The paper got glowing reviews and was published in Psychonomic Bulletin & Review. We replicated the effect at another university and felt very pleased with ourselves.

We planned follow-up studies and started designing an app to generate the modulating circle for use in clinical settings, and I wrote and was awarded a National Institutes of Health grant (my first!) to fund the work.


Several months later, we ran a follow-up study to replicate and extend the effect and were quite surprised that, under very similar conditions, the finding did not replicate. The circle slowed people down. I considered everything that might be different between the studies: code, stimulus quality, computer operating system, stimulus presentation software version, etc. The difference was massive enough that I was confident it wasn’t just a fluke: you don’t go from 100% of participants showing an effect to 0% without something being systematically different.

Finally, I found the issue. In the original experiment, I had unintentionally programmed the timing clock to start before the stimuli were presented in the control condition — akin to starting a stopwatch before a runner gets to the line. This meant that the modulating circle didn’t make people faster, but rather that the timing mistake made the control condition look slower. The effect that we thought we had discovered was just a programming bug.
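To make the bug concrete, here is a minimal, hypothetical Python sketch of the failure mode. This is not the actual experiment code: `slow_setup`, `run_trial`, and all of the timings are invented for illustration. Starting the clock before stimulus onset silently adds the setup time to every measured response:

```python
import time

def slow_setup():
    """Stand-in for stimulus loading/preparation (hypothetical)."""
    time.sleep(0.25)  # pretend trial setup takes 250 ms

def run_trial(buggy: bool) -> float:
    """Return the measured 'reaction time' for a response that truly takes 100 ms."""
    if buggy:
        start = time.monotonic()  # BUG: stopwatch starts before the stimulus appears
        slow_setup()              # ...so setup time is silently counted as RT
    else:
        slow_setup()
        start = time.monotonic()  # correct: stopwatch starts at stimulus onset
    time.sleep(0.10)              # stand-in for the participant's 100 ms response
    return time.monotonic() - start

print(f"buggy 'control' RT: {run_trial(True):.3f} s")   # ~0.350 s (looks slower)
print(f"correct RT:         {run_trial(False):.3f} s")  # ~0.100 s
```

In a setup like this, the inflation is systematic rather than random, which is why every single participant in the mistimed condition looks slower.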


When I identified the error, I was shocked. I felt physically ill. I had published something that was objectively, unquestionably wrong. I had celebrated this finding, presented it at conferences, published it, and gotten federal funding to keep studying it. And it was completely untrue. I was deeply embarrassed to have made such a stupid mistake, disappointed that my finding was junk, guilty for wasting everyone’s time and polluting the literature, and worried that admitting the error and retracting the paper would jeopardize my job, my grant funding, and my professional reputation.

This had been my mistake, but it would also have consequences for my co-authors — a former student of mine and my post-doc mentor. The replication at another institution (which used the same flawed program) was the basis for my former student’s master’s project, and her defense was scheduled in two weeks. A student at another university had just proposed a thesis extending the work. My grant funding was based in part on these results. And I was currently under review for tenure.


I found the mistake when I was alone at my laptop, working at home late in the evening. While I sat in the dark (crying), I briefly considered what would happen if I never told anyone. The bug was hard for me to identify; maybe no one else would ever find it. I could just go on with other research and nobody would ever know.

Obviously, I decided not to go that route.

The list of what I had to do was pretty devastating: call my co-authors, tell my former student to cancel her master’s defense, write to the journal editor to initiate retraction, contact the National Institutes of Health program officer, alert my department chair and the dean overseeing my tenure review, and tell my research students. I stayed up all night writing email drafts and, after a new flare-up of panic, checking every other program I’d ever run to see if I’d made the same mistake elsewhere (I hadn’t).

The next day was the worst of my professional career. I spent all day emailing and calling to share the story of how I had screwed up. After doing that, part of me wanted to tell as few other people as possible. So why share this with an even wider audience?

One reason is that I’d never heard of a comparable situation. Part of the gut punch of finding this mistake was that I had no idea what would happen to me as a result, particularly as someone freshly grant-funded and pre-tenure. I’ve heard of people finding mistakes early in the research process and having to re-run experiments. I knew about scientists who have stepped up to flag findings of their own that they have lost confidence in. I’ve heard of people who have had problems in their research exposed by others. But I’d never heard of anyone who found an error in their own published paper that invalidated the conclusions. It’s been reassuring that there have been several prominent self-initiated retractions recently, but when I found and reported the issue in October of 2019, those had not yet become public. I had no model to follow.

The biggest reason I wanted to share this story is that the fallout wasn’t as bad as I expected. Everyone I talked to — literally everyone — said something along the lines of, “yeah, it stinks, but it’s best that you found it yourself and you’re doing the right thing.” I didn’t lose my grant. I got tenure. The editor and publisher were understanding and ultimately opted not to retract the paper but to instead publish a revised version of the article, linked to from the original paper, with the results section updated to reflect the true (opposite) results. After spending months coming to terms with the fact that the paper would be retracted, it wasn’t.

Finally, I wanted to write about my experience because even though this mistake didn’t ruin my career, the fear that it could highlights some serious issues in scientific publishing.


Regardless of the nature of the error, the most common fate for papers that are wrong is “RETRACTED.” This can happen when authors self-correct honest mistakes or when researchers are found guilty of scientific misconduct like deliberately faking data. Given that the majority of retractions happen for pretty damning reasons, it’s hard to ask people to self-nominate for that category. I expected that revealing my error would lead to a retraction, and that was one of the things that made it difficult to disclose.

Mistakes happen. We should embrace systems designed to reduce mistakes, but some will sneak through. When they do, it is in the best interests of scientific progress that they come to light. However, for individual researchers, there are many, many incentives not to reveal errors.

What are the alternatives to outright retraction? Some journals have experimented with “retraction with replacement,” which replaces the original version of an article with an updated one. Psychonomic Bulletin & Review’s approach of publishing a “related article,” with notices in both versions that link to one another, is similar and, I think, a great step toward encouraging authors to disclose their own errors (though I’ve encouraged the publisher to make the notice more prominent, as it’s currently very easy to miss). Another option is a distinct category like “withdrawn at the author’s request” or “self-retraction” for situations in which an author initiates or cooperates with an inquiry, distinguishing those cases from instances of misconduct.

I’m sharing this story to help normalize admitting errors. Although this process has been difficult, the consequences were much less dire than I’d feared. Changing culture is hard, but one step toward building better science is publicly revealing our own errors and showing how we fix them.


References

Strand, J. F., Brown, V. A., & Barbour, D. L. (2018). Talking points: A modulating circle reduces listening effort without improving speech recognition. Psychonomic Bulletin & Review. https://doi.org/10.3758/s13423-018-1489-7

Strand, J. F., Brown, V. A., & Barbour, D. L. (2020). Talking points: A modulating circle increases listening effort without improving speech recognition in younger adults. Psychonomic Bulletin & Review. https://doi.org/10.3758/s13423-020-01713-y

Written by Julia Strand

Julia is an Assistant Professor of Psychology at Carleton College in Northfield, MN. She studies speech perception and spoken word recognition. @juliafstrand
