What’s Your New Paper For?
Is drinking any amount of alcohol bad for you? Even one glass of wine a day?
Yes, said a meta-analysis published last year in The Lancet. And the findings were alarming: Even having one glass of alcohol a day will increase your risk of all sorts of bad things happening to you — including stroke, coronary disease, heart failure, and aortic aneurysm.
A headline in The Guardian said “Drinking is as harmful as smoking, and more than five drinks a week lowers life expectancy, say researchers.”
Sounds like a research communications success, doesn’t it?
But within days, the Lancet meta-analysis was attacked. On Twitter. And by other analysts.
The study made basic scientific errors, some critics said. Others charged the study’s authors with bias against alcohol consumption. Some critics went so far as to say the Lancet has a history of publishing and hyping flawed studies.
The result? Public confusion about whether the findings were legit or not.
Sounds like a research communications disaster, doesn’t it?
But wait. The study was published in a leading peer-reviewed journal. It was a meta-analysis. And it was covered widely by savvy science journalists.
Aren’t those all markers of quality research and successful public engagement with research?
Thank God for Twitter peer-review, at least this time.
But what if this isn’t an isolated incident?
What if a system that was designed to give us objective information has become infected with clickbait?
Last month, Aaron Carroll devoted one of his New York Times columns to the growing problem of bias in science.
We shouldn’t just be worried about blatant financial conflicts of interest in the medical and scientific community, Carroll argued.
Because the scientific system itself, he continued, is now a source of entrenched bias. The way science works today requires researchers to build reputations and protect them to win grant funding and advance careers. And that means pursuing results that will be noticed.
“Journals and grant funders like to see eye-catching work,” he wrote. “It would be silly not to think that this might also subtly influence thinking and actions. In my own work, I do my best to remain conscious of these subtle forces and how they may operate, but it’s a continuing battle.”
Now, I am far from an old-school, Strunk-and-White-wielding, write-a-dull-headline, “get-your-’public engagement’-off-my-lawn-I’m-a-science-communicator” kind of person.
But I’d say research communications is increasingly making research bias worse.
For example: A couple of months ago, the Aspen Five Best Ideas of the Day feature ran an item headlined: “Breathing Through the Nose Aids Memory Storage.”
If you’re like me, you read that and thought: As I’m listening to my spouse, I need to breathe through my nose.
Click through the link, though, and you’d discover that’s not what the study showed at all.
Quoting from the first graf of the press release from the Karolinska Institutet on the study (which was published in The Journal of Neuroscience):
“If we breathe through the nose rather than the mouth after trying to learn a set of smells, we remember them better.”
Of course, no one would have clicked on a headline that read: “Breathe Through Your Nose If You Want to Remember a Smell, Study Finds.” Except to read the underlying article in The Onion.
But the actual headline is misleading, because the findings are confined to memory of smells, not other memories.
And the Aspen Institute compounded the error by labeling the findings an “idea,” when they are clearly not. They are just findings, and not very implementable ones at that.
But, of course, these practices are common in research communications today: writing headlines for new-study press releases at or beyond the very limits of defensibility, and then having services like Aspen's Five Best Ideas of the Day or Futurity serve them up breathlessly, because they need opens, clicks, and shares as well.
So: Are we asking that piece of new research to do too much?
Are we asking it to shift public awareness and thinking? On its own? Can we do an ideas campaign to help it?
Are our promotion efforts feeding into the unconscious dynamic Aaron Carroll has highlighted around the hyper-competitive practices of science, science publication, and science communications — practices that are creating a race to the bottom?
Are those efforts also feeding public cynicism about the validity of research findings?
And how can we tell?