Why Doctors Are Bad At Stats — And How That Could Affect Your Health

María del Carmen Climént
Published in Winton Centre · Sep 19, 2019

Listen to the podcast here.

On a recent visit to a hospital I found myself looking at the hundreds of patients around me and wondering what sorts of questions come into their minds:

‘Is chemotherapy my best option for cancer treatment?’

‘If I opt for a lung transplant, what are my chances of surviving?’

‘My prenatal testing has come back positive — does my baby really have a genetic abnormality?’

Often, the answer to these questions isn’t as simple as ‘yes’ or ‘no’. The answers depend on probabilities: for a specific patient, a certain chemotherapy might triple the risk of getting infections; one person might have an 85% chance of surviving the first year after a transplant, but only a 50% chance of making it to 5 years; or there might be a 90% chance that a test result is a true positive, but a 10% chance that it’s a false positive. But what do all these numbers mean?

Much of the evidence that we need to make health decisions comes in the form of numbers. Like most of us, I had assumed that doctors are experts not only in diagnosis and treatment but also in interpreting all types of numbers related to health.

But starting work at the Winton Centre for Risk and Evidence Communication at the University of Cambridge has given me a new insight: doctors struggle with numbers more than we imagine.

This issue was pretty new for me but certainly not for professionals such as Professor Gerd Gigerenzer, who has spent decades studying numeric illiteracy in health and its impact on public health. Gerd is the director of the Winton Centre’s sister institute in Berlin, Germany, called the Harding Centre for Risk Literacy.

I met Gerd in Berlin when our two centres came together to share their current projects and discuss future ones. There, he delivered a talk on the severe statistical illiteracy he has found after testing thousands of medical professionals. For the first time, I realised the huge scale of the problem and wanted to understand it in more depth.

Gerd and his team have explored whether medical professionals understand the statistical measures actually needed to prove that a cancer screening programme saves lives.

This is a classic problem in health statistics. What clinicians need to compare is mortality rates, not 5-year survival rates. The mortality rate tells us how many people die in a given period of time. In contrast, the 5-year survival rate only tells us how many people are still alive 5 years after the day they were diagnosed with cancer.

Some screening programmes can diagnose people earlier — which can increase those ‘5-year survival rates’ — without making them live any longer.
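To see how this can happen, here is a minimal sketch in Python with entirely made-up numbers (the patients, ages and cohort sizes are invented purely for illustration). It shows the effect sometimes called lead-time bias: diagnosing the same patients a few years earlier inflates the 5-year survival rate even though nobody lives a day longer.

```python
# Toy illustration of how earlier diagnosis inflates 5-year survival
# without changing when anyone dies. All numbers are invented.

# Each patient is a pair: (age at diagnosis, age at death).
without_screening = [(67, 70), (68, 71), (69, 72)]  # diagnosed late, via symptoms
with_screening    = [(62, 70), (63, 71), (64, 72)]  # same deaths, found 5 years earlier

def five_year_survival(cohort):
    """Fraction of patients still alive 5 years after diagnosis."""
    survivors = sum(1 for diagnosed, died in cohort if died - diagnosed >= 5)
    return survivors / len(cohort)

print(five_year_survival(without_screening))  # 0.0 -> looks dismal
print(five_year_survival(with_screening))     # 1.0 -> looks like a triumph

# Yet every patient dies at exactly the same age in both scenarios,
# so the mortality rate is unchanged: no life has been extended.
```

In this toy example, 5-year survival jumps from 0% to 100% while the number of deaths, and the age at which they happen, stays exactly the same.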

A recent, widely reported study published in the Lancet, for example, used 5-year survival rates for cancers in different countries as its headline figures, but it also compared mortality rates across those countries. This revealed several cases where 5-year survival rates had increased without mortality actually improving… which could be due to earlier detection of cancers through greater uptake of screening programmes, without any improvement in subsequent treatment.

Figure from the Lancet article showing that in Canada, for instance, 5-year survival rates for rectal cancer show an improvement (the green shapes are above 0 on the y-axis, indicating increased 5-year survival), but mortality has not decreased (the green shapes are almost all to the right of 0 on the x-axis, indicating increased mortality). Colorectal cancer screening in Canada has been increasing, which could increase the early detection of rectal cancer, an artefact that could partly explain why 5-year survival has improved without any corresponding improvement in mortality.

It’s vital that doctors understand the difference between these two measures, and that only mortality rates can tell us whether lives are really being saved. But Gerd has found that, more often than not, doctors don’t understand this difference.

“And almost 50% misleadingly believe that if screening early detects cancer, that would prove that lives are being saved.”

I talked to Gerd via Skype, and while talking to him I was as intrigued as on the day I listened to his talk in Berlin. He gave me one demonstration after another of the problem. One was particularly dramatic: most doctors were not able to interpret the probability of actually having cancer, given a positive result in a breast cancer screening programme.

“Most of them think that a positive test in a breast cancer screening programme is very certain: they think that 80 to 90% of women with a positive result actually have cancer, while it is only about 1 in 10, so 10%.”

That is worrying, to say the least.
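That ‘about 1 in 10’ is not a quirk of bad tests; it falls straight out of the arithmetic once you account for how rare the disease is among the women being screened. Here is a minimal sketch in Python, using round numbers of the kind Gerd typically quotes for mammography (roughly 1% prevalence, 90% sensitivity and a 9% false-positive rate; these inputs are illustrative assumptions and vary by age group and programme).

```python
# Natural-frequency sketch of the chance of cancer given a positive screening result.
# The inputs below are illustrative assumptions, not figures from any one programme.

women = 1000                 # imagine 1,000 women going through screening
prevalence = 0.01            # about 10 of them actually have breast cancer
sensitivity = 0.90           # ~9 of those 10 will get a (true) positive result
false_positive_rate = 0.09   # ~89 of the 990 healthy women also test positive

true_positives = women * prevalence * sensitivity                  # ~9 women
false_positives = women * (1 - prevalence) * false_positive_rate   # ~89 women

ppv = true_positives / (true_positives + false_positives)
print(f"Chance of cancer given a positive result: {ppv:.0%}")      # ~9%, about 1 in 10
```

Out of the roughly 98 women who receive a positive result in this toy example, only about 9 actually have cancer, which is where the figure of roughly 1 in 10 comes from.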


I wanted to understand this problem from the doctors’ perspective, so I also talked to Michelle Da Silva, a doctor specialising in renal medicine who currently works as an Academic Clinical Fellow at the North West Deanery Hospital in the UK. First, I asked her how often health professionals actually use statistics in their regular practice.

“We use statistics in everything that we do in our day to day work: when making a diagnosis, when we interpret research publications, to make a decision on treatment and management options, when we explain the levels of risk to patients, and also when we interpret the results of tests.”

She didn’t sound like the doctors that Gerd had described to me; I got the impression that she was very confident with statistics. But she explained that her confidence came only after she did a one-year course on statistics as part of her PhD. She is convinced that a better grasp of numbers is vital for all doctors.

“If you do not understand correctly you are not able to communicate with the patient and explain the risks, you are not able to make a decision based on evidence, you are not able to propose a management for their pathology…”

Gerd’s studies, and many others, tend to focus on developed countries such as the UK, Germany and the United States. As a Mexican myself, I was curious to know whether there was any evidence of this problem in Latin American countries. I found some studies suggesting that we share the same issue. Here is one example:

Researchers from Chile, Peru, Spain and Germany conducted a study in Lima, Peru, testing whether physicians would recommend cancer screening to patients in specific scenarios. They also assessed participants’ comprehension of the evidence, as well as their a priori screening beliefs, numeracy, science literacy and statistical education.

More than 400 physicians (sixth years, seventh years and residents) participated in the study. They were asked to imagine that a 55-year-old patient had come to ask them about screening for a certain type of cancer. Participants were given statistics on incidence and mortality in people with and without screening and, based on those, had to decide whether they would recommend screening.

The results showed that stronger positive a priori beliefs about screening, lower numeracy and less knowledge of screening statistics were associated with worse comprehension of the evidence, and hence with doctors recommending screening even when it was ineffective.

Misinterpretation of statistics in health can have profound consequences for patients. A woman who shared her story with the BBC in February 2019 nearly decided to have an abortion when a routine genetic screening test during pregnancy came back positive for Turner syndrome, a rare genetic condition that affects girls. None of the healthcare professionals involved explained, or possibly even realised, that the result was more likely than not a false positive.

This doesn’t mean that medical tests are particularly inaccurate, but rather that every screening test requires a basic understanding of statistics to interpret its results, and patients rely on their doctors to do that interpretation.

Numeric illiteracy among health professionals is a global problem that affects both developing and developed countries, and its consequences play out at every level, from individual patients to society as a whole. The big question is: why does it happen?

Gerd thinks that the problem originates from education in medical schools.

“In medical school, students learn almost everything except statistical thinking.”

Michelle agreed on this point: she told me that she had no training in statistics during her undergraduate course in Brazil, her home country.

Other countries, including the UK, do often include statistics in medical school, but the way it is taught remains a problem. I asked Gerd what the first action point to change this situation would be, and he answered without any hesitation.

“The first one is totally clear: change medical education.”

This may sound ambitious, but people are already working on it, in particular here at the University of Cambridge, where Matt Castle and his team are leading the way. Matt is a mathematician and statistician who is currently redesigning the statistics curriculum for the University’s Medical School.

I talked with Matt, and he told me that in many universities’ curricula, he spots the same problem:

“What we have in most curricula is this theoretical, mathematical version of statistics that is taught, and that is so far from what is really needed, which is a practical application-based, problem-based approach.”


Matt and his team are trying to do things differently.

“We design courses that try and reflect how people are going to be using statistics in practice: we give problems, we give real-life situations, and we talk through the techniques that need to be implemented to analyse those data.”

Numeric illiteracy is a complex problem that goes beyond the health community. Michelle thinks that managers and politicians are part of the problem too, and Gerd goes even further: he believes that some of the problems in healthcare worldwide come down to whether doctors feel able to stick to the evidence.

“In many countries doctors have conflicts of interest, between what they think is best for the patient and other interests… and another factor that goes against evidence-based medicine is defensive medicine.”

Defensive medicine means that doctors aren’t always motivated purely by what is best for the patient, but also by fear of having legal action taken against them in the future: they order more tests or procedures than are strictly necessary, simply to protect themselves.

“In countries like the US it is particularly strong,” says Gerd. “In study after study, over 90% of American doctors say that they practise defensive medicine and they have no choice. So there is an entire social system in place which makes it difficult for doctors to go after evidence.”

Despite the complexity of the problem, Gerd Gigerenzer, Matt Castle, Michelle Da Silva and many others agree on one thing: education is the most important starting point.

Changes to the curricula in medical schools that lead to a better understanding and communication of numbers could potentially improve clinicians’ decision making with huge benefits for patients.

Michelle is very positive about the potential impact of these changes.

“It would be extraordinary. It would improve outcomes, because all the decisions would be made according to the best available evidence.”

“Patients would be able to make a decision knowing all the risks and benefits, knowing the numbers in favour of and against a procedure or management plan.”

Next time I visit a hospital, I will see people and their dilemmas in a different way.

For some, the decision of whether or not to undergo major medical procedures will rely on doctors’ interpretation of statistics. For others, having clear numbers could allow a better understanding of a difficult situation and can bring peace of mind.

And I will wonder whether their healthcare professionals will actually be providing this, or whether they will be left adrift in a sea of misunderstandings: failures and tragedies that could easily have been avoided if only those professionals were given the knowledge they need.

Changes in curricula, like Matt Castle’s at the University of Cambridge, are a good start. But training can’t be restricted to new students and never reinforced again. Such a big problem demands action on a broader scale: healthcare professionals, universities and professional organisations all need to play a role.

There’s too much at stake to let numeric illiteracy stand in the way of good healthcare decisions.

At the Winton Centre for Risk and Evidence Communication we have produced free online courses in risk communication for health professionals that you can visit.
