In the tumultuous chaos that is science journalism, there are a few issues that stand out. Things that scientists like me complain about every time a new study is reported, with our endlessly tedious love of facts and accuracy. The most famous of these, thanks to the witty James Heathers, is that rodent research is often reported as if it is in humans instead — just adding the words IN MICE to many science headlines drastically improves their accuracy.
There are other problems, many of which I’ve written about myself. Observational research being reported on as if it is definitively causal. Basic, lab-bench research — cells in petri dishes — being touted as if it is fully tested in human beings.
All the issues I blog about every week, basically.
But there’s one thing that is almost always misunderstood, misrepresented, and makes a huge difference in how people view a piece of research. It’s a fairly simple change that can make a study seem either immensely meaningful or entirely meaningless.
I’m talking about how we represent risk.
And so, in the theme of creating silly Twitter accounts that only tweet one thing, I jumped on the bandwagon and made justsaysrisks, to help people better understand science. Also because it’s fun.
There are two main ways that you can represent a difference in risk between two events: the relative and absolute risk difference*. And while the two options may seem a bit complex, in reality they are incredibly simple to understand.
Human beings are, by and large, terrible at understanding risk. But that’s mostly because no one has ever explained it properly.
Relative risk is the ratio between one risk and another. Basically this means that you take the likelihood of one event happening and divide it by another. Absolute risk is the absolute difference between the two risks, which just means that instead of dividing one risk by another you subtract.
Let’s look at an example. There will be maths, but stay with me.
One risk that’s often in the headlines is the risk from skipping breakfast. A recent study found that people who skipped breakfast had about double the risk of dying from heart disease as people who ate breakfast every day.
The thing is, as risks go, dying from heart disease is relatively low overall. Many of us will, eventually, succumb to heart attacks, but if you look at the general population — including people aged 20 and above — the rate of heart disease deaths is fairly low.
In the study I mentioned above, the rate of heart disease deaths per year in people who always ate breakfast was 0.64%. The rate of deaths for people who never ate breakfast was 0.73%.
Let’s look at the crude relative and absolute risks here:
RELATIVE RISK = ratio of one risk to another = 0.73/0.64 = 1.14 = 14% increased risk
ABSOLUTE RISK = one risk subtracted from the other = 0.73% − 0.64% = 0.09% increased risk
So we could either say that there was a 14% increased risk or a 0.09% increased risk of heart death associated with skipping breakfast.
Another way to put this is that about 6 in every 1,000 people who always eat breakfast die from heart disease each year, whereas about 7 in every 1,000 people who never eat breakfast do.
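The arithmetic above is simple enough to sketch in a few lines of code. This is just an illustration using the two crude rates quoted from the study; the variable names are my own.

```python
# Crude yearly heart-disease death rates from the breakfast study quoted above.
always_breakfast = 0.0064  # 0.64% for people who always ate breakfast
never_breakfast = 0.0073   # 0.73% for people who never ate breakfast

# Relative risk: divide one risk by the other.
relative_risk = never_breakfast / always_breakfast

# Absolute risk difference: subtract one risk from the other.
absolute_diff = never_breakfast - always_breakfast

print(f"Relative risk: {relative_risk:.2f} (a {relative_risk - 1:.0%} increase)")
print(f"Absolute difference: {absolute_diff:.2%}")
print(f"Per 1,000 people: {always_breakfast * 1000:.1f} vs {never_breakfast * 1000:.1f}")
```

Running this reproduces the numbers in the text: a relative risk of 1.14 (a 14% increase) against an absolute difference of just 0.09%.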
It sounds much less scary than the headlines when I put it like that.
Now, part of the reason that the media reported that the risk was twice as high was that, after adjusting for confounding factors in a complex statistical model, the relative risk increase was actually 87% rather than 14%. But even then, the relative risk increase was much bigger than the absolute one.
The thing is that relative risks are incredibly important for research. Absolute risks depend entirely on the denominator — they are based on who you are looking at. If you look at a group of people who have a very high risk of heart disease death — say, people over 85 who’ve already had a heart attack — the absolute risk difference will be big. If you look at 20-year-olds who are fit and healthy, it’ll drop significantly.
On the other hand, relative risks remain remarkably steady across populations. So if there’s an 87% increase in heart disease death, the absolute risk will change depending on how many people are at risk in any group, but the 87% ratio will stay the same. This makes relative risk much better than absolute risk if you are going to compare different populations — say, over-85s and 20-year-olds — because it’s usually transferable.
The problem with relative risk is that it gives you a very misleading estimate of the actual risk to you as an individual, especially when the risks are quite small. If an event is rare — like heart disease deaths in 20-year-olds — then the relative risk sounds huge even though the actual impact is tiny. Raising your risk from 0.00001% to 0.00002% is doubling your relative risk, but represents a tiny increase in risk that’s probably meaningless to most people.
There’s even scientific evidence on this — if you are communicating to lay people, presenting the absolute risk as a proportion (i.e. a 1-in-1,000 increase) is the best way to help people understand.
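That “N in 1,000” framing is easy to generate mechanically. Here’s a minimal sketch — the function name and rounding choice are my own, not from any standard library for risk communication.

```python
def as_natural_frequency(probability, denominator=1000):
    """Express a probability as 'about N in <denominator>' for lay readers."""
    count = round(probability * denominator)
    return f"about {count} in {denominator:,}"

# The two breakfast-study rates from earlier, rounded to whole people:
print(as_natural_frequency(0.0064))  # about 6 in 1,000
print(as_natural_frequency(0.0073))  # about 7 in 1,000
```

Rounding to whole people loses a little precision (the true rates are 6.4 and 7.3 per 1,000), but whole-number frequencies are exactly the format the communication research favours.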
What does this all mean? Firstly, if you see a headline that reports a big increase in risk, take a second look at the numbers. It’s very likely that they’re reporting a relative risk increase (or decrease), which might not have much relevance to your life at all.
Secondly, if you’re a journalist reporting on scientific studies, always report absolute risk. You can report both together — sometimes it’s very useful to know both the absolute and relative risk increases — but if you aren’t reporting the absolute risk there’s a good chance you’re misleading people.
It won’t always be the case, but it often is.
Science reporting is hard, because reporters are trying to explain complex concepts that they themselves don’t fully understand to a lay audience, but there are some basic things that we can all look for that improve it immeasurably.
Using absolute as well as relative risk differences makes a huge difference in how we all understand science.
It may not fix all the problems, but it’s a great first step.
You can now listen to Gid on the Sensationalist Science podcast.
*Note: In the US, these are sometimes referred to as a “percent” and “percentage point” increase respectively