Using AI in Diagnostic Imaging

Madison Page · Published in Visionary Hub · 9 min read · Oct 31, 2021



Artificial Intelligence. AI. If you hear those terms without context, where does your mind go first? Do you think of robots dominating humanity in the next 10, 100, or 1000 years? Do you think of Siri, Alexa, or Google? Or do you think of detecting illnesses and conditions that can be observed through medical imaging, like cancer? Although not always highlighted by mainstream media, AI is extremely versatile in its applications and has significant potential in areas that could impact human and veterinary medicine, such as diagnostic imaging.

So, if there’s more to AI than search engines and human-like technological systems, what is it?

Overview of Concepts

Artificial Intelligence, or AI, can be described as the ability of software systems (often written in languages such as Python or C++) to carry out work that, before the system was created, was thought to be within the capabilities of human intelligence alone. AI is known to “learn”, or improve based on past experience, and this is what distinguishes AI systems from conventional software.


Machine Learning

To execute tasks, AI systems must first learn from training and testing data. This process is called Machine Learning, or ML. When a new system is being trained, the available data is randomly split into a training set and a testing set of set proportions. The training data is then given to the AI, which will either group the data points and assign labels itself or learn from labels that are already attached. The AI is then given the testing data, which it will attempt to label accurately.
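To make the split concrete, here is a minimal sketch in Python. The article names no particular toolkit, so scikit-learn and the toy file names below are assumptions for illustration:

```python
# A minimal sketch of a random, proportional train/test split.
# scikit-learn is an assumption; the article does not name a library.
from sklearn.model_selection import train_test_split

# Toy data: each entry is one example with a class label.
images = [f"image_{i}.png" for i in range(100)]  # placeholder file names
labels = [i % 2 for i in range(100)]             # 0 = no bike, 1 = bike

# Hold out 20% of the data for testing. Shuffling is random, and
# stratify keeps the class proportions the same in both sets.
train_x, test_x, train_y, test_y = train_test_split(
    images, labels, test_size=0.2, stratify=labels, random_state=42
)

print(len(train_x), "training examples,", len(test_x), "testing examples")
```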

Let’s clarify this with an example. If an AI system is being trained to recognize whether or not there are bikes in an image, the data set would be composed of images with and without bikes. To make sure the AI doesn’t associate incidental factors (like the weather or the color of the bike) with the presence of the bike itself, the data needs to be diverse. Diverse data could mean photos taken in different cities, photos with bikes placed in different positions, photos at different times of day, and, for those containing a bike, photos with and without people on the bike. With this more varied data set, the AI is more likely to learn the features of the bike itself rather than factors that merely co-occur with it; a quick way to sanity-check that diversity is sketched below. After the AI has received the training data, it would be given the remaining data (the testing data), which it would be expected to correctly label as containing or not containing a bike. In practice, this kind of image task falls under Deep Learning.
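As a rough illustration of that sanity check, the snippet below tallies metadata separately for bike and non-bike photos; the metadata fields and file names are invented for the example:

```python
# A sketch of checking a data set for the diversity described above.
from collections import Counter

dataset = [
    {"file": "img_001.png", "has_bike": True,  "city": "Montreal", "time": "day"},
    {"file": "img_002.png", "has_bike": False, "city": "Toronto",  "time": "night"},
    {"file": "img_003.png", "has_bike": True,  "city": "Toronto",  "time": "day"},
    # ...many more examples in a real data set
]

# If every bike photo came from one city or one time of day, the AI
# could learn that shortcut instead of learning what a bike looks like.
for field in ("city", "time"):
    counts = {
        has_bike: Counter(ex[field] for ex in dataset if ex["has_bike"] == has_bike)
        for has_bike in (True, False)
    }
    print(field, counts)
```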


Deep Learning

Deep Learning (DL) is a subset of ML in which the AI still learns through a labeling process with data sets, but does so through a series of layers of neural networks. This allows the AI to identify far more detail in the data and ultimately to process more complex data.

Neural networks are the circuits an AI uses to go from an input, like an image of a city street, to an output, like a label stating that the image does or does not contain a bike. Just as we do not know exactly how a young child learns to associate concepts with objects, like recognizing a bike and knowing what it is used for, we cannot easily trace how a trained network arrives at its labels. We observe the start, when the AI is given the data, and the end, when the AI assigns a label. The parts of the neural network between the input and output are called the hidden layers of the circuit. Each layer takes the result of the previous layer and applies mathematical functions to it, eventually producing the output. The AI adjusts these functions based on feedback it receives on its predictions: from the results, the network updates its weights, numbers reflecting how strongly each characteristic of the data influences the correct output.
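For readers who want to see those layers of mathematical functions spelled out, here is a toy feedforward network in plain NumPy with one hidden layer, whose weights are adjusted from prediction error. The architecture, data, and learning rate are illustrative assumptions, not anything specified in the article:

```python
# A toy neural network: input -> hidden layer -> output.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: label an input 1 when its two values sum above 1.0.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)

# Weights and biases for the input->hidden and hidden->output layers.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass: each layer applies a function to the previous one.
    hidden = sigmoid(X @ W1 + b1)       # hidden-layer activations
    output = sigmoid(hidden @ W2 + b2)  # predicted labels, between 0 and 1

    # Backward pass: prediction error is the feedback that adjusts weights.
    err_out = output - y                                  # output-layer error
    err_hid = (err_out @ W2.T) * hidden * (1.0 - hidden)  # hidden-layer error
    W2 -= lr * hidden.T @ err_out / len(X)
    b2 -= lr * err_out.mean(axis=0, keepdims=True)
    W1 -= lr * X.T @ err_hid / len(X)
    b1 -= lr * err_hid.mean(axis=0, keepdims=True)

print("training accuracy:", ((output > 0.5) == y).mean())
```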


Image Recognition

The example of bike detection in images utilizes AI’s abilities for image recognition, as would the use of AI in diagnostic imaging analysis. To be able to assign labels to image data, the AI must first be able to analyze images.

AI analyzes image data through the individual pixels that make it up, much as the human eye does. You therefore need to understand your own data before presenting it to your AI. This understanding is also crucial in preparing the images for training, as image data must be thoroughly annotated so that the different components of each image can be presented to the AI under an assigned label. An equivalent in human learning would be, in kindergarten, being shown a picture of a classroom and learning the vocabulary for different stationery and furniture from the notes on the image. Once an image is adequately annotated, it can be used as training data in the DL process.
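As a small sketch of what pixel data and an annotation can look like in code, here is an example using Pillow and NumPy; the libraries, file name, and bounding box are all assumptions for illustration:

```python
# Viewing an image as the grid of pixels the AI actually receives.
import numpy as np
from PIL import Image

img = Image.open("street_scene.png").convert("RGB")  # hypothetical file
pixels = np.asarray(img)           # shape: (height, width, 3 color channels)
print(pixels.shape, pixels.dtype)  # e.g. (480, 640, 3) uint8

# An annotation pairs the pixel data with a label, and often marks
# where in the image the labeled object sits (a bounding box).
annotation = {
    "label": "bike",
    "box": (120, 200, 260, 340),  # (left, top, right, bottom) in pixels
}
training_example = (pixels, annotation)
```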

Let’s move on from stationery vocabulary and bikes. Just as AI can detect objects in a landscape or background with DL, it has immense potential to detect illness or conditions from medical imaging.


Why do we need AI to analyze diagnostic imaging, when we already have doctors who can do this?

Benefits

Because AI learns through neural networks, it does not actually know or recognize what it labels the way a person does. Though this may seem counterintuitive, this is part of what allows AI to reach such high accuracy.

Think about it. A doctor who is well trained in diagnostics is accustomed to picking out abnormalities in diagnostic images by searching for and identifying a set of known traits. If they don’t find the specific traits they studied, they will not formulate a diagnosis or flag an abnormal condition. AI, however, learns to look for whatever traits it observed in its training data. It may, for example, detect a tumor on a pathology image from a difference in cell density so subtle that the human eye would never have picked it up.

Another advantage of diagnostic AI is the sheer volume of exposure it can attain simply by enlarging the training data set. AI training is effectively unlimited, so a system can surpass the experience of a human, whose skill at labeling diagnostic images grows only with the number of cases they review and the feedback they receive on them. And on the point of accurate feedback: while the data given to an AI system arrives correctly labeled, experts in human and veterinary medicine may have little opportunity to regularly update their diagnostic knowledge and experience.


The final benefit of AI for analyzing diagnostic images mentioned here is that it reduces human error. Although AI makes mistakes, as humans do, it makes them much less frequently. One reason is the absence of the external factors that strongly influence human performance across a multitude of tasks: stress, fatigue, physical health issues, and an overall lack of mental clarity. Many of these factors were investigated for their impact on diagnostic imaging analysis in a paper by Dr. Adrian Brady of the Radiology Department at Mercy University Hospital. One of them, visual fatigue, was explored in a study by Krupinski and collaborators showing that the time of day at which diagnostic images were reviewed directly affected the accuracy of the doctors conducting the analyses.

“Krupinski and co-authors measured radiologists’ visual accommodation capability after reporting 60 bone examinations at the beginning and at the end of a day of clinical reporting. At the end of a day’s reporting, they found reduced ability to focus, increased symptoms of fatigue and oculomotor strain, and reduced ability to detect fractures.” (Brady, 2016)

Other factors included decision or mental fatigue, inattentional blindness (analyzers are distracted by other events and responsibilities), and the dual-process theory of reasoning (analyzers grow used to drawing conclusions from general patterns and drift toward a less deliberate, less reasoning-based approach). Because these factors do not apply to AI, AI-based systems can be expected to be far more consistent in their accuracy.

Current Challenges

There are several limitations holding back wider use of AI in diagnostic imaging, but one major restriction is the possible reluctance of medical professionals, radiologists in particular, to accept the technology and incorporate it into their practices.


It is becoming more apparent that radiologists will not be replaced by AI used solely to interpret diagnostic images. Inherently human skills, such as communication and displays of empathy, remain important to parts of a radiologist’s job: radiologists must deliver diagnoses to their patients or clients in a sensitive manner and provide education and their own professional judgment in a comprehensible, often empathetic way. Having radiologists oversee the AI’s work is also likely to improve the trust that patients and clients place in the technology. Using AI in radiology practices may even help radiologists carry out other aspects of their job by freeing up time that would otherwise be spent on image analysis. This does not change the fact that demand for large numbers of radiologists may fall, however, and radiologists who have adapted to the status quo may have difficulty adjusting to changes in their professional roles.

Another issue that may complicate AI’s adoption in radiology is the need for updated legal frameworks. If an AI makes an inaccurate or false judgment about a diagnostic image, who is accountable, if anyone? Is it the radiologist, who may assume the technology is accurate and needs no supervision? Is it the creator of the AI, who may have introduced an error in its code or training? These questions must be answered before AI becomes more heavily incorporated into medical and veterinary practice.


To conclude, AI has overwhelming potential to disrupt diagnostic imaging and radiology. Its benefits in human and veterinary medicine are significant: it reduces reliance on humans, and with it human error, and through neural networks and high exposure to data it may detect subtler details in medical images than people can. Questions remain, however, before AI can be implemented for diagnostic imaging on a larger scale, from the legal implications of its use to what it would mean for radiologists and the other medical professionals who work with radiographs. These topics, and many more, must be part of the ongoing discussion around AI’s overall role in human and veterinary medicine.

This article is the first in a 4-part series by Madison Page on creating in the field of Artificial Intelligence. The next will be published in the coming weeks.


Madison Page · Visionary Hub
Working on Lynx to reduce veterinary diagnostic costs