Is the media now giving scientists lessons in research integrity?

Journalists’ standards of statistical reporting have been notoriously low. But they are improving. And it seems they might now have become better at it than the scientists who produce the numbers…

Witness yet another round of heart attack and painkiller stories this week, which offers intriguing hints about who is more to blame when things go awry at that critical juncture between scientific research, scientific publishing and the mainstream media.

It began when the BMJ, a well-respected academic journal, published a study reporting that non-steroidal anti-inflammatory drugs (NSAIDs) taken in high doses raised the risk of heart attack, in the worst case by 100%.

The main question with stories like this (as we pointed out in March) is almost always the same and is extremely simple: ‘how big is this increased risk in real terms?’ We guess that’s the public’s biggest question: should I really worry about this? So, if you have any pretensions to informing them, you should try to report this kind of story properly. Properly means that you must include a figure for what’s called the ‘absolute risk’.

Otherwise, who is to know how big 100% is — does it double something already big, or something small? Doubling a 0.1% risk to 0.2% is quite different from, say, doubling a 10% risk to 20%.
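
To make that arithmetic concrete, here is a minimal sketch in Python. The baseline risks are the purely illustrative figures from the example above, not numbers from the paper, and the function is our own:

```python
# Purely illustrative: these baseline risks are the made-up examples from the
# paragraph above, not figures from the BMJ paper.

def absolute_change(baseline_risk, relative_increase):
    """Return the new absolute risk and the absolute increase implied by a
    given relative increase (1.0 means a '100% increase')."""
    new_risk = baseline_risk * (1 + relative_increase)
    return new_risk, new_risk - baseline_risk

for baseline in (0.001, 0.10):  # baseline risks of 0.1% and 10%
    new_risk, increase = absolute_change(baseline, 1.0)
    print(f"baseline {baseline:.1%} -> {new_risk:.1%} "
          f"(an absolute increase of {increase:.1%})")

# baseline 0.1% -> 0.2% (an absolute increase of 0.1%)
# baseline 10.0% -> 20.0% (an absolute increase of 10.0%)
```

The relative increase is identical in both cases; only the absolute figures tell a reader how much there is actually to worry about.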

This basic information is sensibly required by the BMJ, which says in its guidelines for scientists wanting to publish their results in the journal:

• Results — main results with (for quantitative studies) 95% confidence intervals and, where appropriate, the exact level of statistical significance and the number needed to treat/harm. Whenever possible, state absolute rather than relative risks.

But this study on painkillers, published in the BMJ, simply did not do this. It appears to contain no absolute risk numbers, citing only the scary-sounding ‘relative risks’, such as the ‘100% increase’ in the chance of having a heart attack. Well conducted as it may have been as an observational study, that is a failure of communication and responsibility.

However, the BMJ published this research without the absolute risks, even though this fails its own reporting standards. A second count against the scientific community.

But what followed was truly odd. The BMJ issued a press release that did include some absolute risk information. After stating the relative risks (cited from the paper), the press release said:

“To put this in perspective, as a result of this increase, the risk of heart attack due to NSAIDs is on average about 1% annually”.

Someone, then, seems to have done the necessary work to arrive at an absolute risk. But who, and, most importantly, how? Where does this figure come from? It is not easily calculated from any numbers described in the paper. Did whoever wrote the press release worry that the paper didn’t meet the journal’s own standards and try to fix it in the press release? Clearly, they were aware that such a number was important. But if someone, perhaps a press officer, perhaps an editor (though let’s not presume), is going to add extra information to a scientific study, then this needs to be made clear and strong justification given, along with the necessary data and calculations so that the figure can be checked.

[Update: The BMJ have now confirmed that, in the absence of absolute risks in the paper, the authors added a sentence for the press release, which unfortunately was not spotted by the BMJ press office. The authors have since added details of how this figure was calculated to the BMJ’s ‘rapid response’ section online: http://www.bmj.com/content/357/bmj.j1909/rapid-responses]

That brings us to a further problem with this added statement. What does it actually mean? Are they seriously saying that every year 1 in every 100 people who take NSAIDs will have a heart attack because of them? Or that 1% of people who take them for a year will?

That implies an extraordinarily high base rate and a truly stunning danger from NSAIDs. Since the heart attack risk seems to be strong in the first days of taking the medication and not to rise significantly thereafter, this 1 in 100 risk of a heart attack would be almost all concentrated in those first days. That would be astonishing.
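
To spell out that concern with a rough, purely illustrative calculation: the 1% figure comes from the press release, but the one-week window standing in for ‘those first days’ is our own assumption for the sake of argument, not anything stated in the paper or the press release.

```python
# Purely illustrative arithmetic. The 1% figure is the press release's;
# the one-week window is our own assumption for the sake of argument.
annual_excess_risk = 0.01    # 'about 1% annually', as stated in the press release
assumed_window_days = 7      # hypothetical window in which the excess is concentrated

daily_excess_risk = annual_excess_risk / assumed_window_days
print(f"Implied excess risk on each of those days: {daily_excess_risk:.2%}")
# Implied excess risk on each of those days: 0.14%
# i.e. roughly 1 in 700 new users suffering an NSAID-attributable heart attack
# on each day of that first week, which is the astonishing rate referred to above.
```

Whether that is really what the sentence means is exactly the question; as it stands, readers cannot tell.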

None of these questions is easily answered, and that raises another: is this statement fit to send out to the world’s media as something to pass on to the public?

The BMJ is one of the leading scientific publishers of health research, and should be a leader in best practice. This is not its finest hour.

Luckily for the BMJ, and for the public, it seems the media was in many places a model of clarity and responsibility. What often happens is that the ‘biggest’ number available goes into the headline in a desperate grab for reader interest.

In this case, they did a lot better. Sure, the Guardian started with the 100% headline — old habits die hard. But the BBC kept its nerve.

And from there on, things were pretty good. For instance, the Guardian was clear that the size of the effect varied between different NSAIDs, and the BBC made it clear how much you had to take, namely, quite a lot:

“for example more than 1200mg of ibuprofen a day”

Most important of all, both also said that “the lack of absolute risks of heart attack for people using NSAIDs and those who are not, in the paper, and the fact that the researchers were unable to exclude other possible influencing factors, led some independent commentators to conclude that it was difficult to assess its significance.” (Guardian).

Partly with the help of the Science Media Centre, they were not only aware of the lack of absolute risks, they also found some good expert commentary (redemption here for scientists) in both the Guardian and the BBC.

There were clear and nicely organised sections with sub-headings: ‘What should patients do’ and ‘How big are the risks?’

In all, it was good to see: a nice example of thoughtful reporting, given the limitations of the raw material. And, interestingly, the 1% figure from the BMJ press release didn’t appear. Was that because the media had concerns about it? If so, well played.

Back, though, to the problems — and it seems that in this case at least they are all on the side of the scientific community.

Maybe there were good reasons why the research paper did not include a figure for absolute risk: it’s not straightforward to obtain or calculate one, after all. But it is such an essential piece of information that it deserves the effort and, if the effort is unsuccessful, an explanation.

And what we absolutely don’t need is a press release that starts adding ‘facts’ to a scientific study without any clue as to where those facts came from or who was responsible for obtaining or calculating them, and hence without any way of checking them. That is, indeed, what a scientific publisher like the BMJ has peer review for.

All of us associated with the Winton Centre have gone on… and on… about relative and absolute risk in the past. What will it take to get scientists to go to the trouble of calculating, and clearly stating, the absolute risks? Perhaps being embarrassed by the superior professionalism of the media? Perhaps journals taking their responsibility for enforcing publishing guidelines more seriously?

And what we absolutely do not need is to create a new problem: press releases becoming sources of unverifiable extra information that the media cannot tell whether to trust. Scientific publishers need to take much greater care and responsibility, particularly in matters that affect public health.

Maybe the BMJ’s strange behaviour in this instance will, now that it has been pointed out to them, prompt them to lead the charge on this important issue.

The bottom line is that we are all in this together — communicating this kind of information is a shared responsibility. We rely on each other to help people — public and professionals — towards an understanding of the research, and failure anywhere along the line can wreck the whole endeavour.

Co-authored by Alex Freeman & Michael Blastland