When was the last time you presented non-significant results at a conference?

For most people, though, non-significant results generate little interest. Would you like to listen to a 20-minute presentation showing that men and women like the same things? That the red box generated as much purchase intent as the blue box? That recommendations were the same for all six flavors of a new product? What fun is there in reporting on… well… nothing?

So it was with great interest that I read about a team of researchers who attempted to replicate 100 published scientific studies, only to find that a large portion of them did not reproduce the original results.

Of course, they didn’t. Why would you expect them to? We have an innate bias toward things that are new, different, and surprising, not things that are the same as before and ordinary. We love to publish reports that are significant. We love to present significant results at conferences. Really, who expects their paper to be accepted for presentation or publication if the results are non-significant? How many papers have you submitted to The Journal of Negative & Null Results?

To me, this looks like a simple (but expensive and time-consuming) demonstration of the file drawer problem (so named by Robert Rosenthal in 1979). Everyone shoves aside and instantly forgets any result that isn’t significant. Must. Achieve. Significance! So let’s say we run 20 studies and 19 of them go nowhere. And then, lo and behold, the 20th study is significant. Woo hoo! Time to present and publish!

Hey wait… doesn’t 19 out of 20 sound familiar somehow?
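The arithmetic behind that punchline can be checked with a quick simulation: at the conventional p < 0.05 threshold, a study of a true null effect has a 1-in-20 chance of looking significant by luck alone, so a drawer of 19 dead ends plus one shiny "discovery" is roughly what chance predicts. Here is a minimal sketch (the batch count and seed are arbitrary choices for illustration, not anything from the replication project):

```python
import random

random.seed(1)

ALPHA = 0.05            # the conventional significance threshold
STUDIES_PER_BATCH = 20  # our hypothetical researcher's 20 studies
BATCHES = 10_000        # repeat the whole scenario many times

# Under the null hypothesis, a p-value is uniformly distributed on [0, 1],
# so each null study has an ALPHA chance of a (false) significant result.
batches_with_a_hit = 0
total_hits = 0
for _ in range(BATCHES):
    hits = sum(random.random() < ALPHA for _ in range(STUDIES_PER_BATCH))
    total_hits += hits
    if hits > 0:
        batches_with_a_hit += 1

print(f"average significant studies per batch of 20: {total_hits / BATCHES:.2f}")
print(f"share of batches with at least one 'discovery': {batches_with_a_hit / BATCHES:.0%}")
```

The averages come out close to the pencil-and-paper values: 20 × 0.05 = 1 false positive per batch on average, and a 1 − 0.95²⁰ ≈ 64% chance that at least one of the 20 null studies looks publishable.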

