Review: RU-Net & R2U-Net — Recurrent Residual Convolutional Neural Network (Medical Image Segmentation)
Improving U-Net with Recurrent Convolutions and Residual Connections
In this story, RU-Net & R2U-Net, by University of Dayton and Comcast Labs, are briefly reviewed.
- RU-Net is Recurrent Convolutional Neural Network (RCNN) based on U-Net.
- R2U-Net is Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net.
This is a 2018 arXiv tech report with more than 40 citations. (Sik-Ho Tsang @ Medium)
- As shown above, RU-Net follows the U-Net architecture, except that recurrent convolutions are applied before each downsampling, before each upsampling, and before producing the output segmentation map.
- (If interested, please visit my review on U-Net.)
- For the recurrent convolutional layer (RCL), the output O(t) at time step t (before ReLU) is O(t) = w_f * x_f(t) + w_r * x_r(t-1) + b, where w_f and x_f are the weights and input of the feed-forward path, and w_r and x_r are those of the recurrent path.
- F(x, w) is simply O(t) after ReLU.
- In R2U-Net, residual learning is used on top of the recurrent convolutions: the input of the block is added to the output of the stacked RCLs, instead of the plain forwarding used in RU-Net.
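The two ideas above can be sketched in a few lines. This is a minimal, hypothetical simplification: scalar weights `w_f` (feed-forward) and `w_r` (recurrent) stand in for the real convolution kernels, and `recurrent_conv_unit` / `rrcu_block` are illustrative names, not from the paper's code.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def recurrent_conv_unit(x, w_f, w_r, b, steps=2):
    """Unrolled recurrent convolutional layer (RCL) sketch.

    At each time step t: O(t) = w_f * x + w_r * F(t-1) + b, and
    F(t) = ReLU(O(t)).  The feed-forward input x stays fixed while
    the recurrent state F evolves across steps.
    """
    state = np.zeros_like(x)        # no recurrent input at t = 0
    for _ in range(steps + 1):      # t = 0 .. steps
        state = relu(w_f * x + w_r * state + b)
    return state

def rrcu_block(x, w_f, w_r, b, steps=2):
    """R2U-Net's recurrent residual unit: the block's input is added
    to the RCL output, so the unit learns a residual mapping."""
    return x + recurrent_conv_unit(x, w_f, w_r, b, steps)
```

With `steps=2` the unit applies the convolution three times (t = 0, 1, 2), mirroring the t = 2 setting used in the paper; RU-Net uses `recurrent_conv_unit` alone, while R2U-Net wraps it with the identity shortcut in `rrcu_block`.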
2.2. Comparison of Different Kinds of U-Net
3. Experimental Results
3.1. Blood Vessel Segmentation
- Three popular datasets for retinal blood vessel segmentation are used: DRIVE, STARE, and CHASE_DB1.
- RU-Net and R2U-Net obtain the best performance.
3.2. Skin Cancer Segmentation
- R2U-Net obtains the best segmentation performance.
3.3. Lung Segmentation
- R2U-Net with t=3 obtains the best segmentation performance.
3.4. Computational Time
[2018 arXiv] [RU-Net & R2U-Net]
Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation
My Previous Reviews
Image Classification [LeNet] [AlexNet] [Maxout] [NIN] [ZFNet] [VGGNet] [Highway] [SPPNet] [PReLU-Net] [STN] [DeepImage] [SqueezeNet] [GoogLeNet / Inception-v1] [BN-Inception / Inception-v2] [Inception-v3] [Inception-v4] [Xception] [MobileNetV1] [ResNet] [Pre-Activation ResNet] [RiR] [RoR] [Stochastic Depth] [WRN] [ResNet-38] [Shake-Shake] [FractalNet] [Trimps-Soushen] [PolyNet] [ResNeXt] [DenseNet] [PyramidNet] [DRN] [DPN] [Residual Attention Network] [DMRNet / DFN-MR] [IGCNet / IGCV1] [MSDNet] [ShuffleNet V1] [SENet] [NASNet] [MobileNetV2]
Object Detection [OverFeat] [R-CNN] [Fast R-CNN] [Faster R-CNN] [MR-CNN & S-CNN] [DeepID-Net] [CRAFT] [R-FCN] [ION] [MultiPathNet] [NoC] [Hikvision] [GBD-Net / GBD-v1 & GBD-v2] [G-RMI] [TDM] [SSD] [DSSD] [YOLOv1] [YOLOv2 / YOLO9000] [YOLOv3] [FPN] [RetinaNet] [DCN]
Semantic Segmentation [FCN] [DeconvNet] [DeepLabv1 & DeepLabv2] [CRF-RNN] [SegNet] [ParseNet] [DilatedNet] [DRN] [RefineNet] [GCN] [PSPNet] [DeepLabv3] [ResNet-38] [ResNet-DUC-HDC] [LC] [FC-DenseNet] [IDW-CNN] [DIS] [SDN]
Biomedical Image Segmentation [CUMedVision1] [CUMedVision2 / DCAN] [U-Net] [CFS-FCN] [U-Net+ResNet] [MultiChannel] [V-Net] [3D U-Net] [M²FCN] [SA] [QSA+QNT] [3D U-Net+ResNet] [Cascaded 3D U-Net] [Attention U-Net] [RU-Net & R2U-Net]
Generative Adversarial Network [GAN]