AI and Medical Imagery — A Much Needed Marriage

Allwyn Joseph
AZmed
Published Jun 22, 2020

Introduction

Over the past fifty years, healthcare has come a long way, improving both longevity and quality of life. Technological advancements, easier access to care and a better understanding of the human genome are the main factors responsible for carrying the healthcare system this far [1]. Today, however, poor budgeting, growing demand and a shortage of specialists commensurate with that demand have put the healthcare system as we know it in peril. To add fuel to the fire, the COVID-19 pandemic has only highlighted these flaws. The system is beckoning for change, and A.I could, in part, be that change.

Figure 1 — Healthcare data is projected to grow from 153 exabytes in 2013 to 2,143 exabytes by the fall of 2020 [3].

In the last decade, Big Data has grown more than tenfold [2], matched by advances in computing hardware, software and cloud storage. This data and technological boom, in turn, opened up the possibility of applying A.I within industries across all walks of life — finance, education, travel, healthcare, etc.

Within the healthcare sector, high-resolution medical images, physiological data from biosensors, genome sequences and digital records have in large part driven the medical data boom from 2013 to 2020 (see figure 1). Medical professionals, whose capacity is already strained by the ever-growing demand for healthcare, lack the resources and training to analyze and extract value from this enormous influx of data. In view of this disproportionate growth between medical experts and patient medical data, deployment of A.I systems has already begun within the fields of medical imagery, genome sequencing and physiological biosensor data.

A.I in Medical Imagery

With more than 2 billion medical scans performed worldwide annually [4], the necessity for the convergence of human and artificial intelligence in medical imagery is only accentuated. It is, however, understandable that, like any established industry, healthcare has its reservations about change, especially change that comes fast. Perhaps a brief history of A.I and its role in imagery can help thaw this ice.

A.I, simply put, was conceived to imitate human intelligence (to some degree, see figure 2), and since this intelligence is man-made, it was called “artificial”. A subcategory of A.I is Machine Learning (ML), which in essence is the study of computer algorithms that build a mathematical model from sample data, with the end goal of making predictions when similar, previously unseen data is encountered.

Figure 2 — Comparison between a biological (top) and artificial (bottom) neuron. The feature inputs of the artificial neuron act as dendrites and the output as the axon terminal.

One such ML algorithm is the Neural Network (NN), the first use of which dates back to the mid-20th century. Inspired by the neurons in the human brain, a NN is built by combining multiple perceptrons (artificial neurons) and layering them one after the other, creating a single- or multi-layered network. A NN with multiple layers is also called a Deep Neural Network (DNN), and the act of training a DNN to perform predictive tasks is called Deep Learning. With the advent of Big Data and substantial computing power, DNNs have set the state of the art in a plethora of predictive tasks, becoming a default solution in numerous industries that seek to exploit data.
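To make the perceptron-and-layers idea concrete, here's a toy forward pass through a two-layer network in plain Python. The weights and biases are arbitrary illustrative values, not trained ones:

```python
import math

def perceptron(inputs, weights, bias):
    # weighted sum of inputs (the "dendrites"), squashed by a sigmoid activation
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, layers):
    # each layer is a list of (weights, bias) pairs; the outputs of one layer
    # (its "axon terminals") become the inputs of the next
    for layer in layers:
        inputs = [perceptron(inputs, w, b) for w, b in layer]
    return inputs

layers = [
    [([0.5, -0.6], 0.1), ([0.8, 0.2], -0.3)],  # hidden layer: 2 neurons
    [([1.0, -1.0], 0.0)],                      # output layer: 1 neuron
]
print(forward([1.0, 0.0], layers))  # a single value between 0 and 1
```

Training a network means adjusting those weights and biases so that the output matches the labels in the data; deep learning is this same idea with many more layers and neurons.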

Amongst the different NNs that came into use, Convolutional Neural Networks (CNNs) stood out in particular. The unique capability of CNNs to understand and extract information from images was demonstrated at ILSVRC 2012 — an annual competition to determine the best image classification/localization algorithm — where a CNN called AlexNet won by a large margin [5]. With this began a new era of A.I in imagery, and it was not long before medical imagery followed. As of today, concrete applications of A.I in medical imagery are already being realized in radiology, pathology, dermatology, ophthalmology and cardiology, to name a few.
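To give a feel for what "extracting information from images" means, here is a minimal 2-D convolution in plain Python, the building block that gives CNNs their name. The hand-picked kernel below responds to vertical edges; in a real CNN, many such kernels are learned from the data rather than written by hand:

```python
def conv2d(image, kernel):
    # slide the kernel over the image and record the weighted sum at each spot
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# a vertical-edge kernel applied to a 4x4 image whose right half is bright
image = [[0, 0, 9, 9]] * 4
kernel = [[-1, 1], [-1, 1]]
print(conv2d(image, kernel))  # strong responses only where the edge sits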

Take the case of standard trauma X-rays, a market segment in which we at AZmed are positioned. The number of examinations has doubled in Europe in recent years, while the number of specialists capable of analyzing this flux of information hasn’t grown in proportion [6]. To add insult to injury, 92% of standard trauma X-rays turn out negative for bone lesions. This means specialists expend precious time and energy diagnosing banal cases when they could be attending to graver, life-threatening ones. To remedy this, we at AZmed are on a mission to lend radiologists a helping hand with the detection of bone lesions in X-rays. Our computer-aided diagnosis tool harnesses the power of A.I to optimize radiologists’ workflow, allowing faster and more precise detections and, as a consequence, helping them conserve time and energy for intricate cases.

A.I and COVID-19

The world is slowly but cautiously coming out of the Coronavirus Disease 2019 (COVID-19) pandemic. Having infected 4.8+ million people and caused 300,000+ deaths across 188 countries [7], COVID-19 has attacked every link of the chain that constitutes society. While the pandemic has caused much hurt and suffering across all dimensions of wellness, it has also exposed some gaping holes in the functioning of our society. The medical sector contains many of these critical holes, including shortages of personnel, equipment and budget, to name a few. In this section, I’ll elaborate on a potential solution to a piece of this puzzle, in the context of medical imagery and COVID-19.

Unfortunately, there’s a general consensus in the medical community that COVID-19 cannot be accurately diagnosed using images from a CT scan [8, 9, 10, 11], which is further corroborated by these studies [12, 13]. This does not mean that CT scans and X-rays have no role to play in the fight. For example, one of the signs of COVID-19 is pulmonary lesions, and chest X-rays are frequently used to diagnose them. However, the cause of those pulmonary lesions could be bacterial, fungal or even related to other viruses, and distinguishing between these causes using chest X-rays is a hard task. This is precisely where CNNs can come into play. The algorithm’s superior ability to discern patterns among the pixels of an image and extract relevant information could render it pivotal in pinpointing the cause of the disease and, needless to say, in ensuring the infected patient receives the appropriate treatment and care.

Figure 3 — Pneumonia and its causes.

Enough talk; let’s get our neurons a little dirty with a proof of concept for an A.I solution that can detect the cause of pneumonia from chest X-rays (it’ll get a little technical from here on, so bear with me). Let’s assume we have a dataset containing:

  • images of chest X-rays of patients suffering from pneumonia
  • labels indicating the cause for pneumonia: Bacterial, Fungal, Viral-Other, Viral-COVID-19 (as seen in figure 3)

Predicting the cause of pneumonia from a dataset of chest X-rays falls under the category of image classification, and CNNs are the gold standard for classification and detection tasks in images. One way of carrying out image classification with a CNN is through a straightforward training procedure (as shown in figure 4a): the dataset is fed to the CNN, which trains on it, extracting salient features that help tell images of one class apart from another. While this training methodology works fine on traditional datasets, it is a sub-optimal approach on datasets where an implicit hierarchical relationship exists between labels.

In our imaginary dataset, the causes of pneumonia can be split into three principal, or parent, categories: Bacterial, Fungal and Viral. The Viral category can be further subdivided into child categories, namely COVID-19 and Others. The classical training procedure ignores these hierarchical relationships and proceeds to classify the data into the four categories indicated earlier. Employing a conditional training approach instead helps better exploit the hierarchical relationships between diseases in a given dataset. In particular, we’ll focus on the conditional training procedure proposed by Chen et al. [14]. The main idea is to effectuate the training in two steps:
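To make the label hierarchy concrete, here is a sketch in plain Python of the parent/child structure and of how the subset for step one might be selected (the file names and records are hypothetical placeholders, not real data):

```python
# parent categories map to their child categories; Bacterial and Fungal
# have no children, so only Viral is a "positive parent"
HIERARCHY = {
    "Bacterial": [],
    "Fungal": [],
    "Viral": ["Viral-Other", "Viral-COVID-19"],
}

dataset = [
    {"xray": "img_001.png", "label": "Bacterial"},
    {"xray": "img_002.png", "label": "Viral-COVID-19"},
    {"xray": "img_003.png", "label": "Viral-Other"},
    {"xray": "img_004.png", "label": "Fungal"},
]

def step_one_subset(dataset, hierarchy):
    # keep only samples whose label is a child of a positive parent
    children = {c for kids in hierarchy.values() for c in kids}
    return [s for s in dataset if s["label"] in children]

print(step_one_subset(dataset, HIERARCHY))  # only the two Viral-* samples
```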

Figure 4 — Classical vs conditional training using CNNs.
  • Conditional training: step one of the procedure aims at dissecting the dependent relationships between parent and child labels. A CNN is trained on a partial training set (shown as (1) in figure 4b) containing all positive parent categories — categories with at least one child category — to classify the child labels: Viral-Other and Viral-COVID-19 in our scenario.
  • Transfer Learning: the second step exploits the transfer learning technique, which involves carrying knowledge acquired during one training over to another, to aid with the latter’s predictive tasks. Let’s take an example to illustrate. Say Novak wants to learn to play tennis; he’s likely to pick up the sport quicker if he’s already versed in table tennis, because he can transfer certain notions acquired playing table tennis towards learning tennis. Applied here, the weights from step one are used as a starting point to kick-start a second round of training — freezing all except the last layer — on the complete dataset, to classify the parent classes: Bacterial, Viral and Fungal.
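The two steps above can be sketched as follows, with a toy stand-in for the model and the training loop (the layer names and the "training" update are purely illustrative; a real implementation would use a deep-learning framework):

```python
def make_model():
    # a toy "CNN": an ordered list of layers, each with weights and a freeze flag
    return [
        {"name": "conv1", "weights": [0.0] * 4, "frozen": False},
        {"name": "conv2", "weights": [0.0] * 4, "frozen": False},
        {"name": "head",  "weights": [0.0] * 2, "frozen": False},
    ]

def train(model, dataset, lr=0.1):
    # stand-in for gradient descent: nudge every unfrozen weight once per sample
    for _ in dataset:
        for layer in model:
            if not layer["frozen"]:
                layer["weights"] = [w + lr for w in layer["weights"]]
    return model

# Step 1 — conditional training: only samples whose parent label is Viral,
# classifying the child labels (Viral-Other vs Viral-COVID-19)
viral_subset = [("xray_1", "Viral-COVID-19"), ("xray_2", "Viral-Other")]
model = train(make_model(), viral_subset)

# Step 2 — transfer learning: reuse the step-1 weights, freeze everything
# except the last layer, then retrain on the full dataset for the parent labels
for layer in model[:-1]:
    layer["frozen"] = True
full_dataset = viral_subset + [("xray_3", "Bacterial"), ("xray_4", "Fungal")]
model = train(model, full_dataset)
```

Note how, after step 2, only the head has moved: the frozen layers retain what they learned about the child labels, which is exactly the knowledge being transferred.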

Inference after conditional training involves a specific procedure, which I’ll let you inquisitive souls uncover for yourselves from this study [15]. There you have it: an A.I solution for pneumonia cause classification. Medical workers equipped with such an A.I solution would be able to make rapid and reliable diagnoses — two r’s that are essential to compensate for the shortage of medical workers.

Conclusions

As a data scientist in the medical sector, I often get asked the question, “Will A.I ever replace medical experts?”. Someday, maybe, and come that day, data scientists will most probably find themselves in the same boat as medical experts and individuals from an array of other fields. But that day is not today, so the question one should be asking is, “Will A.I help augment the workflow of medical experts?”. The short and the long answer is yes.

As it stands today, A.I is nothing but a tool, one that can be shaped towards proficiency at a specific task, be it fraud detection, disease classification, language translation, etc. And, as with any tool, it must not be mistaken for an alternative to human intervention, which will always be present to some degree. To conclude, I shall borrow the words of the renowned cardiologist, scientist and author Eric J. Topol:

Machines will not replace physicians, but physicians using A.I will soon replace those who aren’t.

About the writer

Having lived across the eastern hemisphere, I became aware of societal problems, of which health and the environment drew my attention in particular. To complement my resolve to be part of the solution, I first obtained a master’s in management, energy and environment, followed by another master’s in machine learning and data mining. Armed with the technical know-how and resolute about making a positive impact, I decided to join AZmed. Today, I am involved in the R&D of state-of-the-art A.I solutions aimed at augmenting the workflow of medical professionals.

References

  1. Medical technology advances of the past 50 years, David Geffen School of Medicine.
  2. Volume of data/information created worldwide from 2010 to 2025, Statista.
  3. Stanford Medicine 2017 Health Trends Report: Harnessing the Power of Data in Health, Stanford Medicine.
  4. High-performance medicine: the convergence of human and artificial intelligence, Eric J. Topol.
  5. Artificial Intelligence in Medical Imaging: Opportunities, Applications and Risks, Eric R. Ranschaert et al.
  6. Market Analysis for AI in Medical Imaging Industry, Wispro.
  7. Coronavirus Resource Center, Johns Hopkins University and Medicine.
  8. ACR Recommendations for the use of Chest Radiography and Computed Tomography (CT) for Suspected COVID-19 Infection, American College of Radiology.
  9. The role of CT in patients suspected with COVID-19 infection, The Royal College of Radiologists.
  10. Guidelines for CT Chest and Chest Radiograph Reporting in Patients with Suspected COVID-19 Infection, The Royal Australian and New Zealand College of Radiologists.
  11. Canadian Society of Thoracic Radiology and the Canadian Association of Radiologists’ Statement on COVID-19, Canadian Association of Radiologists.
  12. Clinical Characteristics of Coronavirus Disease 2019 in China, Wei-jie Guan et al.
  13. Chest CT Findings in Cases from the Cruise Ship “Diamond Princess” with Coronavirus Disease 2019 (COVID-19), Shohei Inui et al.
  14. Deep Hierarchical Multi-label Classification of Chest X-ray Images, Haomin Chen et al.
  15. Interpreting chest X-rays via CNNs that exploit disease dependencies and uncertainty labels, Hieu H. Pham et al.
