GAN for Medical Imaging: Generating Images and Annotations

Michael Avendi
Published in How to AI
Feb 24, 2018

In this post, we show a way of using generative adversarial networks (GANs) to simultaneously generate medical images and their corresponding annotations. We use cardiac MR images for the experiment. For model development, we use Keras with the Theano backend.

Introduction

Automatic organ detection and segmentation play a major role in medical imaging applications. For instance, in cardiac analysis, automatic segmentation of the heart chambers is used to calculate cardiac volumes and the ejection fraction. One main challenge in this field is the lack of data and annotations. Specifically, medical imaging annotations have to be performed by clinical experts, which is costly and time-consuming. In this work, we introduce a method for the simultaneous generation of data and annotations using GANs. Considering the scarcity of data and annotations in medical imaging applications, the data and annotations generated by our method can be used to develop data-hungry deep learning algorithms.

Data

We used the MICCAI 2012 right ventricle (RV) segmentation challenge dataset. The training set, which includes images and expert annotations for 16 patients, was used to develop the algorithm. We converted the annotations to binary masks of the same size as the images. The original image/mask dimensions are 216 by 256 pixels. To keep training tractable, we downsampled the images and masks to 32 by 32. A sample image and the corresponding annotation of the RV of the heart are shown below.

A 32 by 32 MR image and annotation mask.
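The preprocessing described above can be sketched as follows; this assumes the images and contours are already loaded as NumPy arrays, and the function name and interpolation choices are illustrative assumptions, not the original code.

```python
# Downsample a 216x256 image and its binary annotation mask to 32x32,
# as described in the Data section. Interpolation orders are assumptions:
# bilinear for the image, nearest-neighbor for the mask so it stays binary.
import numpy as np
from scipy.ndimage import zoom

def preprocess(image, mask, out_size=32):
    zy = out_size / image.shape[0]
    zx = out_size / image.shape[1]
    img_small = zoom(image.astype(np.float32), (zy, zx), order=1)
    mask_small = zoom(mask.astype(np.uint8), (zy, zx), order=0)
    # Normalize image intensities to [0, 1] for GAN training.
    rng = img_small.max() - img_small.min()
    img_small = (img_small - img_small.min()) / (rng + 1e-8)
    return img_small, mask_small
```

With this, each patient's 216-by-256 slice and mask become a 32-by-32 pair ready to be stacked as training input.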

Method

We use a classic GAN network with two blocks:

  • Generator: A convolutional neural network to generate images and corresponding masks.
  • Discriminator: A convolutional neural network to classify real images/masks from generated images/masks.

Here, mask refers to the binary mask corresponding to the annotation.
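The two blocks might be defined as below. The original post used Keras with a Theano backend; this sketch uses the tf.keras API instead, and the latent dimension, layer sizes, and activations are assumptions rather than the exact architecture. The key idea is that the generator emits a 32x32x2 tensor, with the image in one channel and the mask in the other, and the discriminator classifies such stacked pairs.

```python
import numpy as np
from tensorflow.keras import layers, Model

LATENT_DIM = 100  # size of the random noise vector (assumed)

def build_generator():
    # Map a noise vector to a 32x32x2 tensor:
    # channel 0 = image, channel 1 = annotation mask.
    z = layers.Input(shape=(LATENT_DIM,))
    x = layers.Dense(8 * 8 * 64, activation="relu")(z)
    x = layers.Reshape((8, 8, 64))(x)
    x = layers.Conv2DTranspose(32, 4, strides=2, padding="same",
                               activation="relu")(x)       # 16x16
    x = layers.Conv2DTranspose(2, 4, strides=2, padding="same",
                               activation="sigmoid")(x)    # 32x32x2
    return Model(z, x, name="generator")

def build_discriminator():
    # Classify a stacked image/mask pair as real (1) or generated (0).
    inp = layers.Input(shape=(32, 32, 2))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    return Model(inp, out, name="discriminator")
```

Generating the mask as a second output channel is what lets a single GAN produce the image and its annotation jointly.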

The block diagram of the network is shown below.

Block diagram of GAN for generating image and annotation.

Algorithm Training

To train the algorithm we follow these steps:

  1. Initialize Generator and Discriminator randomly.
  2. Generate some images/masks using Generator.
  3. Train Discriminator using the collected real images/masks (with y=1 as labels) and generated images/masks (with y=0 as labels).
  4. Freeze the weights in Discriminator and stack it on top of Generator (figure below).
  5. Train the stacked network on generated images with y=1 as forced labels.
  6. Return to step 2.
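The steps above can be sketched as a training loop. The tiny networks, optimizer settings, batch size, and placeholder data here are illustrative assumptions standing in for the original Keras/Theano code; the loop structure follows the six steps directly.

```python
import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.optimizers import Adam

LATENT_DIM = 100  # assumed noise-vector size

def build_generator():
    z = layers.Input(shape=(LATENT_DIM,))
    x = layers.Dense(8 * 8 * 16, activation="relu")(z)
    x = layers.Reshape((8, 8, 16))(x)
    # Upsample 8x8 -> 32x32, two channels (image + mask).
    x = layers.Conv2DTranspose(2, 4, strides=4, padding="same",
                               activation="sigmoid")(x)
    return Model(z, x)

def build_discriminator():
    inp = layers.Input(shape=(32, 32, 2))
    x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Flatten()(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    return Model(inp, out)

# Step 1: initialize Generator and Discriminator randomly.
gen = build_generator()
disc = build_discriminator()
disc.compile(optimizer=Adam(1e-4), loss="binary_crossentropy")

# Step 4: freeze the discriminator and stack it on top of the generator.
disc.trainable = False
z = layers.Input(shape=(LATENT_DIM,))
stacked = Model(z, disc(gen(z)))
stacked.compile(optimizer=Adam(1e-4), loss="binary_crossentropy")

batch = 8
# Placeholder for real image/mask pairs from the dataset.
real = np.random.rand(batch, 32, 32, 2).astype("float32")

for step in range(3):  # step 6: repeat
    # Step 2: generate some images/masks using the generator.
    noise = np.random.normal(size=(batch, LATENT_DIM)).astype("float32")
    fake = gen.predict(noise, verbose=0)
    # Step 3: train the discriminator; real pairs get y=1, generated y=0.
    disc.trainable = True
    disc.train_on_batch(real, np.ones((batch, 1)))
    disc.train_on_batch(fake, np.zeros((batch, 1)))
    # Step 5: train the stacked network with y=1 as forced labels,
    # so only the generator's weights are updated.
    disc.trainable = False
    g_loss = stacked.train_on_batch(noise, np.ones((batch, 1)))
```

Freezing the discriminator inside the stacked model is what forces the generator, not the discriminator, to absorb the y=1 signal in step 5.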

Note that, initially, the generated images and masks are practically garbage. As the models are trained, the outputs become more meaningful. Some sample generated images and masks are depicted below.

Sample generated images and masks.

This was mainly a proof-of-concept prototype. You can extend it to generate higher-resolution data.

The code is shared in this Jupyter notebook.
