
5 tips for dealing with non-significant results

What can researchers do to avoid unpublishable results?

--

Originally published at Nature Index, 16 September 2019

When researchers fail to find a statistically significant result, it’s often treated as exactly that — a failure. Non-significant results are difficult to publish in scientific journals and, as a result, researchers often choose not to submit them for publication.

This means that the evidence published in scientific journals is biased towards studies that find effects.

A Stanford University team, reporting in Science, examined 221 survey-based experiments funded by the National Science Foundation and found that nearly two-thirds of the social science experiments that produced null results were filed away, never to be published.

By comparison, 96% of the studies with statistically strong results were written up.

“These biases imperil the robustness of scientific evidence,” says David Mehler, a psychologist at the University of Münster in Germany. “But they also harm early career researchers in particular who depend on building up a track record.”

Mehler is the co-author of a recent article published in the Journal of European Psychology Students about appreciating the significance of non-significant findings.

So, what can researchers do to avoid unpublishable results?

Continue reading at Nature Index

--


Dr Jon Brock

Cognitive scientist, science writer, and co-founder of Frankl Open Science. Thoughts my own, subject to change.