Here we go again
News reports of risk fail on the fundamentals. But it’s not just the reports.
I’m a journalist. I have the power to make people feel more afraid, or more reassured.
I don’t need physical threats, just data — the same data, which I can present or frame in different ways. It’s not even an especially accomplished trick: anyone can do it and the methods are well known.
For example, let’s say that a medication has the side-effect of increasing the risk of a potentially fatal illness by 50 per cent. It’s quite likely, I think, that I would be able to scare some people with this number, especially those taking the medication.
If, on the other hand, I tell them that this fatal illness occurs in only one in a million people, and that a 50 per cent increase means one extra person might die in every two million who take the medication, they might wonder what all the fuss is about.
One frame is a relative risk (up 50 per cent!); the other is an absolute risk expressed as what’s known as an expected, or ‘natural’, frequency (one extra person in 2 million). The essential point is that the seriousness of a 50 per cent increase in relative risk can vary hugely depending on what absolute risk you start with.
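The arithmetic behind that hypothetical is simple enough to write out. Here it is as a few lines of Python, using the one-in-a-million baseline from the example above (a made-up figure, remember, not a real one):

```python
# Hypothetical baseline from the example above: the illness occurs
# in 1 person per million per year.
baseline = 1 / 1_000_000

# The same side-effect, framed two ways.
relative_increase = 0.50                           # "risk up 50 per cent!"
absolute_increase = baseline * relative_increase   # 0.5 extra cases per million

# As a natural frequency: one extra case in how many people?
one_extra_in = 1 / absolute_increase

print(f"Relative frame:  risk rises by {relative_increase:.0%}")
print(f"Absolute frame:  one extra case per {one_extra_in:,.0f} people")
```

Same data, two very different feelings. Swap in a baseline of 1 in 100 instead of 1 in a million and the same 50 per cent suddenly matters a great deal, which is the whole point.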
All this is old news to most people who talk about risk. It’s one of the basics: as a rule, if you want risks to look bigger, use a relative frame; if you want them to look smaller, use an absolute one. The obvious response, unless you set out to make up people’s minds for them, is: why not use both? That is what journalists are encouraged to do.
Well, I say these methods are old news, and then you read the news, and realize that the argument you thought had been made and won a hundred times needs to be made all over again. Because there it is, the scary framing, all on its own, yet again — in a story this week about painkillers and cardiac arrests.
No absolute risks were reported anywhere in this story. In consequence, you cannot tell how big this risk is, only by how much it rises. But if 100,000 people take ibuprofen for a week, or two, or a year, in any given quantity, how many extra cardiac arrests will there be? From this report, you can’t say. People taking these drugs and wondering what to do are offered the equivalent of a scream.
It’s easy to blame journalists and their editors — and we should. They are in the business of communicating and they haven’t communicated the numbers in a form their readers can make sense of. Is this ignorance, or worse? The media, eh?
Except that it turns out it’s not just the journalists. In this case, the press release did the same thing.
There are no absolute risk numbers in here, either. No mention of how common a cardiac arrest is in those not taking the painkillers, compared with those who do. Suddenly you feel an unnatural sympathy for journalists. How can they do their job when this is their raw material?
So you, and the journalists perhaps, turn to the research paper itself. And find to your dismay that it’s little better: all relative risks and no clear statements of absolute effects.
At this point, I feel what little patience I have left evaporate altogether. This is an interesting piece of research, and the paper’s methodology is good, I’m told, and it does include a total for out-of-hospital cardiac arrests in Denmark over 10 years (about 29,000). But it does not state the absolute risk, and it does not give us the population data from which we could calculate one. What’s more, this is not a count of all cardiac arrests: it seems to be those for which resuscitation was attempted, and there will be quite a few others for which it wasn’t. There are also some arrests that are not caused by the heart itself but by violence or accident, for example. Quite how all this shakes down into the number relevant to this risk is a question that someone who knows about cardiac arrests could usefully have addressed before I started lumbering through it.
Is all this an oversight, a lack of awareness of the effects of framing, or is it just too hard? The study design, a form of case-control study, gives a relative risk as an outcome. But what reason can there be for not reporting, prominently and clearly, the absolute risk, such that journalists (who need to tell their readers, listeners and viewers), doctors (who might want to communicate these results to their patients) and others can find it and understand it?
So, off you go to try to find some absolute numbers from somewhere else. How common is a cardiac arrest in the general population, so that you can then assess how important a 30 per cent increase truly is? This is what I had to do.
Well, I tried. Cardiac arrest incidence data turns out not to be that easy to find. For want of a better number, or a clear understanding of what precisely to count, I began with that 29,000 total for cardiac arrests over 10 years reported in the research paper, which is about 2,900 a year. Take the Danish population (about 5.6 million) and assume, very roughly, that the 2.2 million older people (aged 50 or over) have 95 per cent of the cardiac arrests, and we get about 2,750 cardiac arrests a year in that group. That’s about 1 in 800 older Danes having a cardiac arrest each year.
The painkiller paper says the median treatment period is around 30 days. So the risk of having a cardiac arrest in a year, 1 in 800, becomes a risk of about 1 in 10,000 of having a cardiac arrest over the 30 days being studied.
Which suggests that 3 cardiac arrests in 30,000 people would go up to nearly 4 if they all took ibuprofen. Slightly more accurately, at least on this data, there’d be one extra cardiac arrest in about every 32,000 people prescribed it, assuming that the effect is causal, which isn’t certain, and that this is based on a vaguely appropriate baseline.
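The whole back-of-envelope chain above can be written out explicitly. This sketch just repeats the rough figures already given; the 95 per cent share for older Danes and the 30 per cent rise are the assumptions named in the text, not the paper’s own numbers:

```python
# Rough figures from the text and the research paper.
arrests_10yr = 29_000      # out-of-hospital cardiac arrests in Denmark, 10 years
older_danes = 2_200_000    # Danes aged 50 or over (approximate)
older_share = 0.95         # assumed share of arrests in that age group

arrests_per_year = arrests_10yr / 10 * older_share   # about 2,750 a year
annual_risk = arrests_per_year / older_danes         # about 1 in 800
monthly_risk = annual_risk / 12                      # about 1 in 10,000 per 30 days

relative_rise = 0.30                                 # the reported increase
extra_risk = monthly_risk * relative_rise
number_needed_to_expose = 1 / extra_risk             # about 32,000

print(f"Annual risk:   1 in {1 / annual_risk:,.0f}")
print(f"30-day risk:   1 in {1 / monthly_risk:,.0f}")
print(f"One extra cardiac arrest per {number_needed_to_expose:,.0f} people exposed")
```

Change any of the guessed inputs and the answer moves, of course, which is exactly why the paper itself should have supplied them.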
If you take ibuprofen for a whole year, your risk would be substantially higher, as it would be if you are substantially older. If you take it for only a few days, say, and you are younger, it would probably be much lower.
How many painkillers per day would these 32,000 people need to take to cause this increase in risk? The paper can’t be sure and doesn’t tell us. Note, though, that it looked at cases on prescription, not over the counter. The maximum recommended dose appears to be 3,200mg a day (or 4 x 200mg tablets 4 times a day) for four days. The standard dose is about 200–400mg every 4–6 hours.
Now maybe a raised cardiac arrest risk of 1 in 32,000 from taking ibuprofen for a month or thereabouts sounds just as scary as a rise of 30 per cent. Or maybe not. Maybe it feels a lot less scary. Maybe, if I’m in pain and ibuprofen works for me, I might accept that risk. Maybe if I’m simply told that it goes up 30 per cent, I wouldn’t. Either way, I would quite like both absolute and relative numbers so that I can make my own choice.
It seems to me that the research paper, press release and news report could have emphasised a less scary-looking number. Clearly, they could also have used both methods of conveying the information. And by the way, why am I doing this work — and guesswork — to find out what should have been a basic part of the communication in the first place? I’m not a statistician. I’m not medically trained. I’m a hack with an English degree. It is ridiculous.
Let’s widen the complaint. Why do journals accept papers that use only a number that is known to magnify alarm? Is this consistent with a sense of public responsibility? What’s the role of peer review here?
I’d go further. I’d suggest that all journals reporting a risk should require a simple table that puts these different framings side by side. Anything less than this kind of balanced reporting, using varied framings of the key numbers to ensure that none of them is made to look particularly ‘big’ or ‘small’, is a bias. If there was ever a time when this was considered acceptable, it isn’t now.
My suggested table would include: absolute baseline risk as a natural frequency; absolute risk for those exposed to the risk as a natural frequency; relative risk. And I’d add one more to give patients and others a sense of the scale: what’s called the ‘number needed to treat’ (NNT), or in this case perhaps better described as ‘number needed to expose’. This is (as calculated above) the number of people who would need to be exposed to the supposed risk in order for one person to suffer the consequences. For this example of cardiac arrest and ibuprofen, it could, I think, look something like this (applying the data to the Danish population and keeping the numbers approximate to make them round).
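A sketch of what such a table might contain, filled in with my rough estimates from earlier in the piece (my guesses, to be clear, not the paper’s figures):

```python
# A side-by-side framing table for the ibuprofen example, using the
# approximate numbers estimated earlier (not the paper's own).
rows = [
    ("Absolute baseline risk (30 days)",   "about 3 in 30,000"),
    ("Absolute risk if exposed (30 days)", "about 4 in 30,000"),
    ("Relative risk",                      "up about 30 per cent"),
    ("Number needed to expose",            "about 1 extra arrest per 32,000"),
]

for framing, value in rows:
    print(f"{framing:<38}{value}")
```

The virtue of laying the framings side by side is that no single one of them gets to set the emotional temperature on its own.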
I’ve no idea if this is the best way to do it or if these are the best numbers. I repeat my lack of credentials, partly as an appeal for someone who knows their way around this kind of data to do a better job. For that, at least, I have no doubt that there is an urgent need.
If I were careless, or worse, in my work in the media such that I made the numbers as scary as possible, some might suspect me of being a bully who positively wanted people to be afraid. They’d have a point.
If I write a scientific paper, why should the judgement be different? Perhaps, in my defence, I think that making people take notice is the right way to talk about this risk: “I’m not scaring people, I’m saving them.” Perhaps I think they don’t need to make up their own minds as I’ve already used my superior judgement to decide that the risks are too great. Perhaps I think that everyone knows the incidence of cardiac arrests and so they know what the relative risks imply. Perhaps I think you can extract the relevant information from the tables in my paper, do your own calculation and come up with the absolute risks, so what’s the problem? (Well, sometimes you can, if you know how — though not in this case, since not all the crucial figures were in there.)
But why should you have to do all this work? And should you trust me if I use the kind of reporting an unscrupulous journalist might use when trying to produce the biggest, scariest number possible?
I’d like to be helpful, not merely critical, so let me put it like this: credibility is enhanced by varied framing.
People’s intentions in writing a paper or press release about a risk may be perfectly honourable. I’ve no doubt that often they are. But if they want to reassure their wider readership that they are not playing a risk up or down — if they truly want others to be able to assess the magnitude of a risk and make up their own mind about what to do — they should include some simple, clear, prominent, absolute numbers. Otherwise, I simply cannot tell if this is really a big problem. In short, anything press released about risk should include absolute numbers. Full stop.