Brief Intro to Medical Image Analysis and Deep Learning

Saurabh Yadav
7 min read · Oct 16, 2018


I recently started working on a project related to medical image analysis. While looking for resources about image analysis and its medical applications, I felt that in general we don't have many proper articles on this topic. In this article we will go through a brief introduction to how medical images were analyzed in the past and what has changed since the introduction of deep learning. For image analysis we generally use CNNs (Convolutional Neural Networks); explaining them here would make this whole article cumbersome, so I will provide some links that explain CNNs properly.

History

As soon as it became possible to scan and load medical images into a computer, researchers attempted to build systems to automate the analysis of such images. Initially, from the 1970s to the 1990s, medical image analysis was done by the sequential application of low-level pixel processing (edge and line detector filters) and mathematical modeling to construct rule-based systems that could each solve only a particular task. In the same period there were also agents based on if-else rules, popular in the field of Artificial Intelligence and commonly known as GOFAI (Good Old-Fashioned Artificial Intelligence) agents.

Towards the end of the 1990s, supervised techniques, in which training data is used to fit a model, were becoming increasingly popular in medical image analysis. Examples include active shape models and atlas-based methods. This pattern-recognition and machine-learning approach is still popular, though now combined with newer ideas. Thus we can see a shift from systems that were designed by humans to systems that are trained by computers from example data. Algorithms are now capable of deciding which edges and features are important for analyzing the image and producing the best result.

Deep Learning in Image Analysis

The most successful type of model for image analysis to date is the Convolutional Neural Network (CNN). A single CNN contains many different layers: shallower layers recognize edges and simple features, while deeper layers recognize more abstract features. An image is convolved with filters (some refer to them as kernels) and then pooling is applied; this process may repeat over several layers until recognizable features are obtained. Work on CNNs started in the 1980s, and they were already applied to medical image analysis in 1995. The first real-world application of a CNN was seen in LeNet (1998) for handwritten digit recognition. Despite these little early successes, CNNs did not gain momentum until improved training algorithms for deep learning were introduced. The introduction of GPUs has favored research in this field, and since the introduction of the ImageNet challenge a rapid growth in the development of such models can be seen.

Illustration of a CNN (Convolutional Neural Network)
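The convolve-then-pool process described above can be sketched in a few lines of plain numpy. This is an illustrative toy, not a full CNN: one hand-picked vertical-edge filter (Sobel-like), "valid" convolution, and non-overlapping 2x2 max pooling; the image and filter values are made up for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution of a single-channel image with one filter."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling, halving each spatial dimension."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A toy 6x6 "image" and a vertical-edge filter
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1., 0., -1.],
                   [2., 0., -2.],
                   [1., 0., -1.]])

feature_map = conv2d(image, kernel)   # shape (4, 4)
pooled = max_pool(feature_map)        # shape (2, 2)
```

In a real CNN the filter values are learned during training rather than hand-picked, and many filters are applied per layer, each producing its own feature map.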

In computer vision, deep CNNs have become the go-to choice, and the medical image analysis community has taken notice of these pivotal developments. However, the transition from systems that used handcrafted features to systems that learn features from the data itself has been gradual. Applications of deep learning in medical image analysis first started to appear in workshops and conferences, and then in journals. The number of papers grew rapidly in 2015 and 2016, as shown in the graph.

Deep learning uses in medical imaging

  • Classification: This was one of the first areas of medical image analysis in which deep learning was used. Diagnostic image classification covers the classification of whole diagnostic exams; in this setting every exam is one sample, and dataset sizes are small compared to those in general computer vision. Object or lesion classification instead focuses on classifying a small part of a medical image into two or more classes. For many of these tasks, both local information about lesion appearance and global information about lesion location are required for accurate classification.
  • Detection: Localization of anatomical objects such as organs or lesions is an important pre-processing step for segmentation. Localizing an object in an image requires parsing of 3D space, and several algorithms have been proposed that treat the 3D volume as a composition of 2D orthogonal planes. There is a long research tradition of detecting lesions in medical images using computer-aided techniques, improving detection accuracy or decreasing reading time for humans. Interestingly, the first such system was developed in 1995, using a CNN with 4 layers to detect nodules in X-ray images.
  • Segmentation: The segmentation of organs and other substructures in medical images allows quantitative analysis of shape, size and volume. The task is typically defined as identifying the set of pixels that make up the contour or the interior of the object of interest. Lesion segmentation combines the challenges of object detection and of organ and substructure segmentation. One problem that lesion segmentation shares with object detection is class imbalance, as most pixels in an image belong to the non-diseased class.
  • Registration: Registration, sometimes referred to as spatial alignment, is a common image analysis task in which a coordinate transform is calculated from one image to another. Often this is performed in an iterative framework where a specific type of transformation is assumed and a predefined similarity metric is optimized. Although lesion detection and object segmentation are seen as the main uses of deep learning, researchers have found that deep networks can also help achieve the best possible registration performance.
  • Other tasks in medical imaging: There are several other uses of deep learning in medical imaging. Content-based image retrieval (CBIR) is a technique for knowledge discovery in large databases; it offers retrieval of similar case histories and helps in understanding rare disorders. Image generation and enhancement uses deep learning to improve image quality, normalize images, complete missing data and discover patterns. Combining image data with text reports is yet another task that seems to have very large-scale real-world applications. This has led to two different lines of research: (1) leveraging reports to improve image classification accuracy, and (2) generating text reports from images.
Number of papers in different application areas of Deep Learning in medical imaging
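The class-imbalance problem mentioned under segmentation is commonly addressed by weighting the per-pixel loss inversely to class frequency, so that the few lesion pixels are not drowned out by the background. A minimal numpy sketch of this idea (the mask, probabilities and weighting scheme below are illustrative, not from any specific paper):

```python
import numpy as np

def weighted_pixel_loss(probs, labels):
    """Per-pixel binary cross-entropy with inverse-frequency class weights.

    probs  : (H, W) predicted probability of the 'lesion' class
    labels : (H, W) ground truth, 1 = lesion pixel, 0 = background
    """
    eps = 1e-12
    freq_pos = labels.mean()                 # fraction of lesion pixels
    freq_neg = 1.0 - freq_pos
    w_pos = 1.0 / (freq_pos + eps)           # rare class gets a big weight
    w_neg = 1.0 / (freq_neg + eps)
    loss = -(w_pos * labels * np.log(probs + eps)
             + w_neg * (1 - labels) * np.log(1 - probs + eps))
    return loss.mean()

# Toy 8x8 mask with only 3 lesion pixels out of 64 (~5%), mimicking
# the imbalance typical of lesion segmentation.
labels = np.zeros((8, 8))
labels[3, 3] = labels[3, 4] = labels[4, 3] = 1.0
probs = np.full((8, 8), 0.1)     # model barely predicts lesion anywhere
probs[3, 3] = 0.9                # except one confidently correct pixel

loss = weighted_pixel_loss(probs, labels)
```

With these weights, missing a lesion pixel costs roughly as much as misclassifying the entire background, which pushes training away from the trivial "predict background everywhere" solution.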

Unique challenges in medical image analysis

It is clear that there are many challenges in applying deep learning to medical image analysis. The unavailability of large datasets is often mentioned as one. However, this notion is only partially correct. The use of PACS systems in radiology has been routine in most Western hospitals, and these systems hold millions of images. Large public datasets are also being made available by various organisations. The main challenge is thus not the availability of image data itself, but the labeling of these images. Traditionally, PACS systems store free-text reports in which radiologists describe their findings. Turning these reports into accurate annotations or proper labels in an automated way is itself a research topic that requires sophisticated text-mining techniques.

In medical imaging, classification or segmentation is often presented as a binary task: normal versus abnormal, object versus background. However, this is often a gross simplification, as both classes can be highly heterogeneous. For example, the normal category often consists not only of completely normal tissue but also of several categories of benign findings, which can be rare. This leads to systems that are extremely good at excluding the most common normal subclasses but fail miserably on several rare ones. A straightforward solution would be to turn the system into a multi-class system by providing detailed annotations of all possible subclasses. However, expertly labeling all these classes is another issue and does not seem practical.

In medical image analysis, useful information is not contained in the images alone. Doctors often consider the patient's history, age and other attributes to arrive at a better decision. Some research has been conducted on including such features alongside images in deep learning, but results show that this has not been very effective so far. One of the challenges is to balance the number of imaging features in the deep learning network against the number of clinical features, to prevent the clinical features from being ignored.
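This balancing problem can be made concrete with a small numpy sketch. The dimensions and clinical attributes here are hypothetical: a CNN backbone producing 256 image features next to just 3 raw clinical numbers. One common remedy (a design choice, not the only one) is to project the clinical branch up to a comparable width before concatenating.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes: 256 image features from a CNN backbone,
# only 3 clinical features (e.g. age, sex, a history flag).
img_feats = rng.normal(size=(1, 256))       # stand-in for CNN output
clin_feats = np.array([[63.0, 1.0, 0.0]])   # raw clinical attributes

# Naive fusion: 3 clinical numbers next to 256 image numbers are
# easily ignored by whatever layer consumes this vector.
naive = np.concatenate([img_feats, clin_feats], axis=1)   # (1, 259)

# Remedy: lift the clinical branch to a comparable width first.
# W_clin would be learned during training; random values stand in here.
W_clin = rng.normal(size=(3, 64)) * 0.1
lifted = np.tanh(clin_feats @ W_clin)                     # (1, 64)
balanced = np.concatenate([img_feats, lifted], axis=1)    # (1, 320)
```

After the projection, the clinical branch contributes 64 of 320 inputs rather than 3 of 259, so its gradient signal is harder for the network to ignore.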

Although most of the challenges mentioned above have not been fully tackled yet, several high-profile successes of deep learning in medical imaging can be seen in recent papers by Esteva et al. (2017) and Gulshan et al. (2016) in the fields of dermatology and ophthalmology. Looking at the trend, one can infer that unsupervised learning is gaining popularity in this field, as it allows training on unlabeled data. Many consider that applying automated systems in medicine may raise legal questions, i.e. who is to blame if a machine makes a mistake? But these questions are not going to haunt us in the near future, and we can relax for now. Medical imaging is a largely unexplored area and a lot of research remains to be conducted; hopefully deep learning will have a great impact on medical imaging as a whole.

Some links for CNNs:

https://towardsdatascience.com/convolutional-neural-networks-from-the-ground-up-c67bb41454e1


Saurabh Yadav

Grad student, Department of Computer Science, University of Delhi