Machine Learning in Medical Imaging: Needs, Opportunities and Promises

Ahmed Khalil
4 min read · Jan 8, 2018


Whenever the use of big data or machine learning for patient management comes up at scientific conferences, people’s eyes shift uncomfortably towards the radiologists. Hunched over in dark rooms, the flickering light from their fancy computer monitors ceaselessly reflecting off their glasses, they have always been, to some extent, outcasts in medicine. For as long as anyone can remember, neurologists, surgeons, pediatricians and the like have all thought they could do a radiologist’s job just as well. Apparently, so do computers.

Radiology Needs More Automation

There is no doubt about it: radiology could use some help. Imaging data are more complex than ever — a decade ago, looking at a plain head CT scan was enough to decide how to treat a stroke patient. Today, specialized centers run four different scans on a similar patient to make better-informed decisions. Each needs to be interpreted and integrated with other imaging and clinical information, which is no easy task. And as more equipment becomes available, the total number of scans being performed is growing at an outrageous rate.

To make matters worse, too few radiologists are being trained in many countries, especially developing ones. Many hospitals have resorted to “nighthawk” teleradiology to cover the gap, with radiologists in Australia covering shifts in Europe, for example. Thankfully, most imaging data are now stored digitally — the days of awkwardly holding up photographic film against a window in lieu of a lightbox are long gone — meaning algorithms have easy access to a vast and growing amount of data.

A Match Made in Heaven?

A radiologist from Iowa, Gwilym S. Lodwick, suggested over half a century ago that computers could be used for diagnosing lung nodules and bone tumors on x-rays. Before medical imaging went digital, this was done by filling out a checklist of features by hand (e.g. “how sharp are the borders of the lesion?”), then using the results to make predictions. This seems unremarkable, but it set radiology on the road to automation, and Lodwick was nominated for the Nobel Prize in Medicine in 1975.
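To get a feel for how that early checklist-and-predict approach worked, here is a toy Python sketch. Every feature name and weight below is invented for illustration — these are not Lodwick’s actual criteria:

```python
# Hypothetical checklist for a bone lesion, filled in by hand
# (1 = feature present, 0 = absent). Entirely made up.
checklist = {
    "sharp_borders": 1,
    "cortical_destruction": 0,
    "periosteal_reaction": 0,
    "soft_tissue_mass": 0,
}

# Each feature nudges the score toward "benign" (negative) or
# "malignant" (positive). Weights are invented for illustration.
weights = {
    "sharp_borders": -2,
    "cortical_destruction": 3,
    "periosteal_reaction": 2,
    "soft_tissue_mass": 2,
}

score = sum(weights[f] * present for f, present in checklist.items())
prediction = "malignant" if score > 0 else "benign"
print(score, prediction)
```

Crude as it looks, this is the same shape as the pre-digital workflow: a human extracts the features, and a fixed rule turns them into a prediction.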

There is an irony to all this. Two central principles of medicine are particularly revered in radiology and might be the reason the specialty has a huge target on its back. Firstly, diagnostic radiology is all about pattern recognition — a skill that humans excel at (to their evolutionary advantage). Secondly, radiologists are almost obsessively systematic. As a medical student, one of the hardest things I had to unlearn was the urge to blurt out the glaring abnormality on, say, a chest x-ray when asked to describe it. I would start with “massively dilated heart shadow” and my radiology professor would frown like I had offended his ancestors because I hadn’t mentioned I was “looking at a well-penetrated chest radiograph under full inspiration in PA view dated September 15th, 2007”.

What else is systematic and good at identifying patterns? Computers, of course. Radiology never stood a chance (or so it seems).

Falling Short of a Radiologist’s Duties

Nowadays, machine learning is being used in radiology either to classify different types of tissue, known as computer-aided detection (“does this part of the image show grey matter?”), or to diagnose or stage diseases, known as computer-aided diagnosis (“is this a lymphoma?”). Of course, that is far from everything a radiologist does. If you look at a radiology report (a good one, I mean), you will see that it’s a rich, detailed description. The diagnosis comes only briefly at the end, almost as an afterthought. Why? Because clinical decisions increasingly rely on subtle imaging features rather than crude diagnoses.
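To make the distinction concrete, here is a deliberately crude Python sketch. The intensity range, feature names, and decision rules are all invented for illustration and bear no relation to any real system:

```python
# Computer-aided detection ("does this voxel show grey matter?"):
# classify a single voxel by intensity. The range is made up.
def computer_aided_detection(voxel_intensity):
    return 40 <= voxel_intensity < 60  # pretend grey matter lands here

# Computer-aided diagnosis ("is this a lymphoma?"):
# map imaging features of a lesion to a disease label. Rule is made up.
def computer_aided_diagnosis(lesion):
    return lesion["homogeneous"] and lesion["restricted_diffusion"]

print(computer_aided_detection(50))
print(computer_aided_diagnosis({"homogeneous": True,
                                "restricted_diffusion": True}))
```

Both are classification problems — one at the level of image regions, the other at the level of whole findings — which is exactly why they were the first tasks to be automated.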

In medicine, neatly categorizing everything, which is what most algorithms currently do, is convenient (and necessary), but oversimplified. In practice, diagnostic radiology is probabilistic: it involves carefully disentangling relevant findings from incidental ones and placing them in a broader context — things are rarely (pardon the pun) black and white. Some companies, like the mysterious DeepRadiology, are working on algorithms that are more flexible and dynamic — generating radiology reports that don’t just classify, but describe what the algorithms “see” in the images.
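The gap between a crude label and something closer to a real report can be sketched in a few lines of Python. Every number, finding, and probability below is invented:

```python
def hard_label(findings):
    """What most current algorithms emit: one category, full stop.
    (The input is ignored in this toy.)"""
    return "lymphoma"

def graded_report(findings):
    """Closer to clinical practice: graded likelihoods, with
    incidental findings noted separately rather than forced
    into a category. All values are invented."""
    return {
        "likelihoods": {"lymphoma": 0.6, "glioma": 0.3, "other": 0.1},
        "incidental": ["small arachnoid cyst"],  # mentioned, not acted on
    }

findings = ["homogeneous mass", "restricted diffusion"]
print(hard_label(findings))
print(graded_report(findings)["likelihoods"])
```

The second style is what a “report-generating” algorithm would have to produce: not a verdict, but a weighted, context-aware description.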

So far, exuberant promises have been made, but it’s not clear yet whether these new approaches will deliver (or what will happen to radiologists if they do). Medical imaging doesn’t exactly have a great reputation for living up to hype.
