KINTalk: Assuring safe, effective and ethical AI in medical imaging

Lorna Downie
KIN Research
Dec 17, 2020
This article was originally published on our KIN blog.

Friday 11th December 2020 marked the return of KINTalks, where we welcomed Charles Kahn, MD, to our virtual stage. Charles, Editor of Radiology: Artificial Intelligence and Professor and Vice Chair of Radiology at the University of Pennsylvania, is at the forefront of emerging machine learning applications in medical imaging, as well as of the ethical, legal and social issues around AI. Taking to the virtual floor, Charles talked about the promise and perils of AI in medical imaging and the prerequisites for its safe, effective and ethical use. Our key KINTalk takeaways are below:

The promise of AI in medical imaging

Much hype and speculation continue to surround AI, machine learning and deep learning as they surf the peak of our inflated expectations. In the world of work, tasks are being created, destroyed, altered and redistributed as AI automates and augments work that previously required us to think. Medical imaging is no stranger to this shift, as it leads the way in putting the promise of AI into practice.

Charles opened his talk by discussing how changes in hardware, software and data have put radiologists at the forefront of emerging machine learning applications, thanks to: (1) increasing (and inexpensive) computing power; (2) advances in machine learning techniques; and (3) the volume of data now available. Together, these changes have shifted medical imaging from traditional machine learning techniques to deep learning. Deep learning, a subset of machine learning and a type of representation learning, has been outperforming humans on various tasks for a number of years now (delve into the history of the ImageNet Challenge if you are interested in exploring this further). As a result, clinical applications of deep learning have emerged, notably for classification tasks (e.g. determining the presence or absence of a disease), segmentation tasks (e.g. distinguishing an organ from a tumor) and detection tasks (e.g. predicting the location of potential lesions). Use cases continue to expand, for instance opportunistic screening, whereby deep learning is used to check for certain diseases whenever a routine scan of a body part, such as the chest, is taken.
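To make the three task types concrete, here is a deliberately oversimplified, hand-rolled sketch: a fixed brightness threshold stands in for a trained network, and the 5×5 "scan" and 0.5 cutoff are invented purely for illustration.

```python
# Toy 5x5 grayscale "scan": bright pixels (> 0.5) stand in for lesion tissue.
image = [
    [0.1, 0.1, 0.1, 0.1, 0.1],
    [0.1, 0.9, 0.8, 0.1, 0.1],
    [0.1, 0.7, 0.9, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.1, 0.1],
]

# Segmentation: a label for every pixel (lesion vs. background).
mask = [[1 if px > 0.5 else 0 for px in row] for row in image]

# Classification: a single label for the whole image (disease present or not).
disease_present = any(any(row) for row in mask)

# Detection: localise the finding, here as a bounding box around lesion pixels.
coords = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
rows = [r for r, _ in coords]
cols = [c for _, c in coords]
bounding_box = (min(rows), min(cols), max(rows), max(cols))

print(disease_present)  # True
print(bounding_box)     # (1, 1, 2, 2)
```

The same input image feeds all three tasks; what differs is the shape of the output (one label, a per-pixel mask, or a localisation), which is why each task calls for differently annotated training data.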

However, along with the hype have come hurdles. Now for the tales of turbulence and unintended consequences, with many a lesson for professionals preparing for a future with AI…

The perils of AI in medical imaging

Charles began by transporting us to a world of forests and army tanks, where, once upon a time, the US Army wanted to use deep learning to identify camouflaged enemy tanks in a forest. The artificial neural network was trained on 100 photos (50 of camouflaged tanks amongst trees and 50 of trees alone), with 100 further photos used to test the model. Although the test photos were classified correctly, the network was soon found to be unfit for purpose. Why? The photos of camouflaged tanks had been taken on cloudy days and the photos of trees on sunny days. The network had simply learnt to distinguish the weather!
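The failure mode in the anecdote can be reproduced in a few lines. In this hypothetical recreation, each "photo" is reduced to a single brightness value, and the "model" is a trivial brightness threshold, a stand-in for a network that latches onto the weather instead of the tanks; all numbers and names here are invented for illustration.

```python
import random

random.seed(0)

def make_confounded_data(n):
    """Tank photos taken on cloudy days (dark), tree photos on sunny days (bright)."""
    data = []
    for _ in range(n // 2):
        data.append((random.uniform(0.0, 0.4), "tank"))   # cloudy -> dark
        data.append((random.uniform(0.6, 1.0), "trees"))  # sunny -> bright
    return data

def train_threshold(data):
    # "Training" just finds a brightness cutoff separating the two classes.
    tanks = [b for b, label in data if label == "tank"]
    trees = [b for b, label in data if label == "trees"]
    return (max(tanks) + min(trees)) / 2

def predict(threshold, brightness):
    return "tank" if brightness < threshold else "trees"

def accuracy(threshold, data):
    return sum(predict(threshold, b) == label for b, label in data) / len(data)

train = make_confounded_data(100)
threshold = train_threshold(train)

# On a test set with the same confound, the model looks perfect.
test_confounded = make_confounded_data(100)
print(accuracy(threshold, test_confounded))  # 1.0

# Photograph tanks on sunny days and trees on cloudy days, and it collapses:
# the model only ever learnt the weather.
test_deconfounded = [(random.uniform(0.6, 1.0), "tank") for _ in range(50)] + \
                    [(random.uniform(0.0, 0.4), "trees") for _ in range(50)]
print(accuracy(threshold, test_deconfounded))  # 0.0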

Medical imaging has not been immune from such technical pitfalls, and Charles provided a number of illustrative examples, including an artificial neural network that classified patients ‘correctly’ by looking at metadata: in this case, a small ‘L’ in the corner of training images from one particular hospital, whose patient population was significantly distinct. Take a look at the paper by Zech et al. (2018) if you are interested in finding out more.

And there are plenty of other hurdles, including: (1) the black-box nature of AI systems; (2) adversarial attacks on machine learning, where the likes of adversarial text substitution, coding and rotation can change the output of a model; (3) the possibility of deriving realistic facial images from medical images, leading to the potential identification of an individual; (4) the hidden technical debt in vast and complex AI systems, in which the code itself is only a very small cog; and (5) an overreliance on technology, leading to a reduction in the skills of radiologists.

Can these hurdles be overcome? Charles next shared his thoughts on what is needed to achieve the promise of AI in a safe, effective and ethical manner…

The prerequisites for safe, effective and ethical AI in medical imaging

First and foremost, awareness of the potential perils is a crucial first step, as Charles succinctly explained:

“If you don’t look for it, you may not find it.” Charles Kahn

Alongside awareness, there is a need for: (1) large, well-annotated datasets; (2) interpretable AI; (3) standards, policies and guidance around data exchange and ethical practices; (4) a strong connection between science and practice, with high-quality scientific research on clinically important use cases; and finally (5) education for both radiologists and data scientists. That is not to say radiologists will need to become data scientists, merely that they should undertake the training needed to collaborate effectively. Likewise, rather than becoming radiologists, data scientists should adopt a role analogous to that of medical physicists, ensuring that AI systems are both accurate and reliable in this context.

Ending on a powerful message, and quoting none other than Professor Albus Dumbledore, Charles reminded professionals and researchers alike that we hold the power to assure the safe, effective and ethical use of AI…

“It is not our abilities that show what we truly are. It is our choices.” J.K. Rowling

Our thanks once again go to Charles for a truly insightful KINTalk with many takeaways for the medical world and far beyond!

Author: Lorna Downie

Stay tuned!

Interested to know what will be discussed at our next KINTalk? Sign up to our newsletter and follow us on Eventbrite to hear about cutting-edge digital technologies and innovation stories at the intersection of science and practice. We hope to see you (maybe in person) at our next KINTalk!

Lorna Downie is a PhD Candidate at the KIN Center for Digital Innovation, Vrije Universiteit Amsterdam.