Review: CUMedVision2 / DCAN—Winner of 2015 MICCAI Gland Segmentation Challenge Contest (Biomedical Image Segmentation)

This time, CUMedVision2, also known as DCAN (Deep Contour-Aware Network), by CUHK, is reviewed. In this paper, an intermediate contour label is used to assist convergence during training, which is the key idea of the paper. With this, CUMedVision2 won the 2015 MICCAI Gland Segmentation Challenge Contest. This is a 2016 CVPR paper with more than 100 citations at the time of writing. (Sik-Ho Tsang @ Medium)

The Ranking: https://warwick.ac.uk/fac/sci/dcs/research/tia/glascontest/results/

You may ask: “Isn’t biomedical image segmentation too narrow a topic? I am not working in this field; is it useful for me?”
However, we can learn the techniques here and apply them to other industries, for example quality control, automatic inspection, or robotics in construction, fabrication, and manufacturing processes. These activities involve quantitative diagnosis. If we can automate them, costs can be reduced with even higher accuracy.
Segmentation by Experts

What Are Covered

  1. Brief Review of CUMedVision1
  2. CUMedVision2 Network Architecture
  3. Some Other Details
  4. Results

1. Brief Review of CUMedVision1

CUMedVision1
  1. As in the figure above, first, we have an input image from the left.
  2. Then the input image goes through the downsampling path with convolutional and max-pooling layers. This path classifies the semantic meaning based on high-level abstract information.
  3. At certain layers before pooling, the feature maps go through an upsampling path with convolutional and deconvolutional layers. This path reconstructs fine details such as boundaries. Backward strided convolution is used for upsampling, and we obtain the score maps C1, C2 and C3.
  4. Next, these score maps are added together, and the fused map passes through a softmax to produce the probability map.
  5. Finally, post-processing is done on the segmentation result using contour information.
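
The multi-level fusion in steps 3 and 4 can be sketched as a toy NumPy example. Nearest-neighbour upsampling stands in for the learned backward strided convolution, and all shapes are hypothetical:

```python
import numpy as np

# Toy sketch of CUMedVision1's multi-level fusion: score maps from three
# depths are upsampled to the input resolution, summed, and passed through
# a per-pixel softmax. Nearest-neighbour upsampling stands in for the
# learned backward strided convolution (deconvolution).

def upsample_nearest(score, factor):
    return score.repeat(factor, axis=0).repeat(factor, axis=1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
H, W, n_classes = 8, 8, 2
c1 = rng.normal(size=(H, W, n_classes))            # finest score map
c2 = rng.normal(size=(H // 2, W // 2, n_classes))  # coarser score maps
c3 = rng.normal(size=(H // 4, W // 4, n_classes))

fused = c1 + upsample_nearest(c2, 2) + upsample_nearest(c3, 4)
prob = softmax(fused)  # per-pixel class probabilities
```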

2. CUMedVision2 Network Architecture

CUMedVision2
  1. In CUMedVision2, after the conv and pooling layers, the upper upsampling path is essentially the CUMedVision1 network, producing the segmentation probability map p_o.
  2. The lower path is a new upsampling path, similar to the upper one, but its probability map is the contour map p_c. The contour labels are annotated by experts. By adding these intermediate labels to the network, we drive the weights to focus more on the boundaries/separations among glands.
    This is important because there are many touching glands in the image. If they are merged into one gland, accuracy drops significantly.
  3. After obtaining p_o and p_c, we obtain the final result m(x) according to the following rule: m(x) = 1 if p_o(x) ≥ t_o and p_c(x) < t_c, and m(x) = 0 otherwise,

where t_o and t_c are thresholds, both set to 0.5 empirically.
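
Assuming the rule above (a pixel is gland when the object probability is high and the contour probability is low), the fusion can be sketched as:

```python
import numpy as np

# Sketch of the mask fusion: a pixel is labeled as gland when the object
# probability is at least t_o AND the contour probability is below t_c.

def fuse(p_o, p_c, t_o=0.5, t_c=0.5):
    return ((p_o >= t_o) & (p_c < t_c)).astype(np.uint8)

p_o = np.array([[0.9, 0.8], [0.7, 0.2]])
p_c = np.array([[0.1, 0.6], [0.2, 0.1]])
m = fuse(p_o, p_c)
# the pixel with p_o = 0.8 is suppressed by its high contour probability,
# which is how touching glands get separated
```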


3. Some Other Details

3.1 Loss Function

The first term is an L2 regularization term, which reduces overfitting.

The second and third terms are the log losses (cross-entropy) of the probability maps p_o and p_c.
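
A toy sketch of such a multi-task loss follows; `lam` is a hypothetical weight-decay coefficient, and the probability arrays stand in for the per-pixel probability assigned to the ground-truth class:

```python
import numpy as np

# Toy sketch of the multi-task loss: an L2 weight-decay term plus two
# pixel-wise log-loss (cross-entropy) terms, one for the object map and
# one for the contour map.

def log_loss(prob_true_class):
    # mean negative log-likelihood of the ground-truth class per pixel
    return -np.mean(np.log(prob_true_class + 1e-12))

def total_loss(weights, p_o_true, p_c_true, lam=5e-4):
    l2 = lam * sum(np.sum(w ** 2) for w in weights)
    return l2 + log_loss(p_o_true) + log_loss(p_c_true)

rng = np.random.default_rng(1)
weights = [rng.normal(size=(3, 3)) for _ in range(2)]
p_o_true = rng.uniform(0.5, 1.0, size=100)  # prob. given to the true label
p_c_true = rng.uniform(0.5, 1.0, size=100)
loss = total_loss(weights, p_o_true, p_c_true)
```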

3.2 Training

The downsampling path is initialized from DeepLab, which is pre-trained on the PASCAL VOC dataset. The other layers are initialized with Gaussian random weights. The whole network is then fine-tuned.

3.3 Testing

Overlap-tile strategy

The overlap-tile strategy is used for testing: when the image is too large to process at once, segmentation is performed tile by tile over the whole image, with overlapping borders so that tile-edge artifacts can be discarded.
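
A minimal sketch of the idea, where `predict` is a hypothetical stand-in for the trained network and the tile/margin sizes are made up:

```python
import numpy as np

# Sketch of the overlap-tile strategy: predict on overlapping tiles and
# keep only each tile's centre region, discarding the border.

def predict(tile):
    return tile * 0.5  # dummy elementwise "network" for demonstration

def overlap_tile(image, tile=64, margin=8):
    H, W = image.shape
    out = np.zeros_like(image, dtype=float)
    step = tile - 2 * margin          # stride of the kept centre region
    padded = np.pad(image, margin, mode="reflect")
    for y in range(0, H, step):
        for x in range(0, W, step):
            pred = predict(padded[y:y + tile, x:x + tile])
            core = pred[margin:margin + step, margin:margin + step]
            out[y:y + step, x:x + step] = core[:H - y, :W - x]
    return out

img = np.arange(10000.0).reshape(100, 100)
result = overlap_tile(img)  # identical to predicting the whole image at once
```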

Then post-processing steps, including smoothing with a disk filter (radius 3), filling holes, and removing small areas, are performed on the fused segmentation results. Finally, each connected component is labeled with a unique value representing one segmented gland.
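
The last two steps (removing small areas and labeling connected components) can be sketched in plain NumPy; the disk-filter smoothing and hole filling would normally be done with image-processing libraries and are omitted here:

```python
from collections import deque

import numpy as np

# Sketch: label each connected component of the fused binary mask with a
# unique id, and drop components smaller than a minimum area.

def label_glands(mask, min_area=2):
    H, W = mask.shape
    labels = np.zeros((H, W), dtype=int)
    next_id = 0
    for sy in range(H):
        for sx in range(W):
            if mask[sy, sx] and labels[sy, sx] == 0:
                next_id += 1
                comp, queue = [], deque([(sy, sx)])
                labels[sy, sx] = next_id
                while queue:  # 4-connected flood fill
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < H and 0 <= nx < W
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_id
                            queue.append((ny, nx))
                if len(comp) < min_area:  # remove small areas
                    for y, x in comp:
                        labels[y, x] = 0
                    next_id -= 1
    return labels

mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=bool)
labels = label_glands(mask)  # the isolated single pixel is removed
```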

4. Results

Three metrics are measured: F1 score, object-level Dice index, and Hausdorff distance.

4.1 F1 Score

The F1 score is the harmonic mean of precision P and recall R:

F1 = 2PR / (P + R), where P = N_tp / (N_tp + N_fp) and R = N_tp / (N_tp + N_fn),

and N_tp, N_fp and N_fn are the numbers of true positives, false positives and false negatives, respectively. A segmented object that overlaps a ground-truth object by more than 50% counts as a true positive, just like in an object detection problem.
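
The computation is straightforward once the object-level counts are in hand:

```python
# Object-level F1: a segmented gland counts as a true positive when it
# overlaps a ground-truth gland by more than 50%, as in object detection.

def f1_score(n_tp, n_fp, n_fn):
    precision = n_tp / (n_tp + n_fp)
    recall = n_tp / (n_tp + n_fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 8 glands detected correctly, 2 spurious, 2 missed:
# P = 8/10, R = 8/10, so F1 = 0.8
```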

4.2 Object-Level Dice index

where G is the set of pixels annotated as a ground-truth object and S is the set of pixels segmented as a gland object, so Dice(G, S) = 2|G ∩ S| / (|G| + |S|). And:

where S_i denotes the i-th segmented object, G_i denotes the ground-truth object that maximally overlaps S_i, G̃_j denotes the j-th ground-truth object, S̃_j denotes the segmented object that maximally overlaps G̃_j, and n_S and n_G are the total numbers of segmented objects and ground-truth objects, respectively.

Thus, this object-level Dice index is an important metric for segmentation, since it measures accuracy at the level of individual glands rather than individual pixels.
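
A toy sketch of the index, assuming area weighting as in the challenge definition and approximating the maximally overlapping object by the one with the highest Dice score:

```python
import numpy as np

# Sketch of the object-level Dice index: per-object Dice is averaged in
# both directions (segmented -> best-matching ground truth and vice versa),
# each object weighted by its area.

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def object_dice(seg_objects, gt_objects):
    def directed(objs, refs):
        areas = np.array([o.sum() for o in objs], dtype=float)
        w = areas / areas.sum()
        return sum(wi * max(dice(o, r) for r in refs)
                   for wi, o in zip(w, objs))
    return 0.5 * (directed(seg_objects, gt_objects)
                  + directed(gt_objects, seg_objects))

g = np.zeros((4, 4), dtype=bool); g[0:2, 0:2] = True  # 4-pixel ground truth
s = np.zeros((4, 4), dtype=bool); s[0:2, 0:3] = True  # 6-pixel segmentation
score = object_dice([s], [g])  # 2*4 / (6 + 4) = 0.8 in both directions
```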

4.3 Hausdorff distance

It is used for measuring the shape similarity:

Hausdorff distance conceptual diagram

Thus, the Hausdorff distance gives the maximum distance (defined above) between the two shapes G and S in the equation (or X and Y in the figure). The object-level Hausdorff distance is computed in the same way as the object-level Dice index above.
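
For two boundary point sets, the symmetric Hausdorff distance can be sketched directly from the definition:

```python
import numpy as np

# Symmetric Hausdorff distance between two point sets:
# H(G, S) = max( sup_{g in G} inf_{s in S} d(g, s),
#                sup_{s in S} inf_{g in G} d(g, s) )

def hausdorff(G, S):
    G, S = np.asarray(G, float), np.asarray(S, float)
    d = np.linalg.norm(G[:, None, :] - S[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# two 2-point "boundaries" shifted vertically by 1 pixel
h = hausdorff([[0, 0], [1, 0]], [[0, 1], [1, 1]])
```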

Below are the results for the above three metrics:

MICCAI 2015 Results

Part A contains benign (normal) glands while Part B contains malignant (abnormal) glands. CUMedVision2 ranked first in F1 score, Dice index, and Hausdorff distance (Part A).

Sum Score and Final Ranking

Based on all results, CUMedVision2 won the MICCAI 2015 gland segmentation challenge contest. Some visualized results:

Part A Results (Top: Input Image, Middle: No Contour Labels, Bottom: Have Contour Labels)
Part B Results (Top: Input Image, Middle: No Contour Labels, Bottom: Have Contour Labels)

If we have something that we want to segment but that is difficult to segment, intermediate labels that take part in backpropagation might help. The downside is that human effort is needed to annotate the intermediate contour labels. But if labeling effort is not a problem, this is one solution for improving segmentation accuracy.


References

  1. [2016 CVPR] [CUMedVision2 / DCAN]
    DCAN: Deep Contour-Aware Networks for Accurate Gland Segmentation

My Reviews

[CUMedVision1] [FCN]