Using Automated Segmentation Extensions in 3DSlicer

A tutorial on using automated extensions for semantic segmentation of medical images in 3D Slicer

David Simms
15 min read · Apr 28, 2023
A segmented lung in 3D Slicer

This is the second in a series of blog posts on using the automated segmentation extensions available for the 3D Slicer biomedical image viewer. The first post can be found here, and the third, fourth and fifth parts of the series discuss using the MONAI framework in 3D Slicer.

As we discussed in the first article of this series, 3D Slicer is a great open-source software platform for segmentation and contouring of biomedical images. It contains an assortment of modules and extensions to perform various medical image related tasks, and has an active community developing new extensions that are easily installed.

There is active development of machine learning-assisted extensions for a number of tasks in 3D Slicer, including semi-automated and automated semantic image segmentation. Medical image segmentation consists of delineating the surface or volume of a specific anatomical structure in a medical image. Semantic segmentation in medical imaging is also referred to as pixel-level classification, because it labels each pixel (or voxel, in 3D) of an image with a class. Segmentation can be used, for example, to segment a tumor from normal liver tissue, or to segment the aorta in a CT scan of the entire abdomen.

There are a number of artificial intelligence-assisted extensions for automated segmentation and contouring implemented in 3D Slicer that can save the user time. In this article, we will look at using a few of these automated extensions. The segmentation principles described here should apply to other extensions and toolkits as well.

In this blog post we will download some sample data from 3D Slicer and segment structures from the images using automated segmentation extensions. In a follow-up blog post, we will import our own data into 3D Slicer and use another automated extension called MONAILabel to segment more structures. We will then use MONAILabel to train our own model on our own dataset.

Before we begin, we will briefly touch on the deep learning method that these medical image segmentation models have in common. We also need to install something called CUDA to speed up segmentation processing time, as well as a few extensions for 3D Slicer.

Convolutional neural networks (CNNs)

The models presented here have all been trained using a specific form of deep neural network called a convolutional neural network (CNN), and most of these extensions use a 3D variant of it that has proven to work well for semantic segmentation of biomedical imaging data. A CNN uses a combination of convolution steps, meaning that it applies a filter to each pixel in the image, and pooling steps (i.e. downsampling the image). These steps are performed in different layers of the neural network.

A diagrammatic depiction of a convolutional neural network (CNN). Image taken from Quantib.

After applying both steps several times, the network has filtered out the most important information in the image and is able to determine what the image contains¹. The U-Net is a CNN architecture, introduced in 2015, that downsamples and then upsamples the information again; 3D variants of it operate directly on volumes. More recently, the V-Net and nnU-Net, a self-configuring method, have been used in the most robust models. For the purposes of this series you do not need to understand how these networks work, but most medical image segmentation models today use them, including the models presented in this article.
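For readers who like to see the idea in code, here is a minimal, purely illustrative sketch in PyTorch of the convolution, pooling and upsampling building blocks that U-Net-style networks are assembled from. The layer sizes are arbitrary and this toy network is nowhere near a real segmentation model.

# A toy illustration of the conv -> pool -> upsample pattern behind
# U-Net-style segmentation networks. Not a usable model.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        # Convolution steps: slide small filters over the volume to extract features
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),  # pooling step: downsample by a factor of 2
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Upsampling path: restore the original resolution so every voxel gets a label
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2),
            nn.ReLU(),
            nn.Conv3d(16, num_classes, kernel_size=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

volume = torch.randn(1, 1, 32, 32, 32)   # one fake 32x32x32 volume
logits = TinySegNet()(volume)            # per-voxel class scores: shape (1, 2, 32, 32, 32)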

Manual segmentation of a single CT scan can take an individual hours to complete. These models can segment numerous structures from images in minutes, saving valuable clinician and workflow time.

CUDA

If you do not have an NVIDIA GPU, you can skip this section. You can check whether you have a supported GPU by opening the Run dialog (Windows key + R), typing the following to open Device Manager, and then looking up your GPU on this page:

control /name Microsoft.DeviceManager

If you do not have a supported GPU, you will simply run the segmentations on your CPU. A CPU is sufficient for basic visualization and segmentation, and I ran every model in this article on my CPU with no trouble. However, for more complex operations, a GPU is recommended. There are also parallel computing methods for AMD GPUs, but they are beyond the scope of this article.

CUDA® is a proprietary parallel computing platform and programming model from NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). Do not worry too much about how CUDA works; just know that we can use it to speed up processing time with various extensions in 3D Slicer.

Head to the download page and download the appropriate installer. In this tutorial we will download the Windows 10 installer and run the .exe. Follow the instructions in the installer and choose the "Network Installer" type for a smaller, quicker initial download.

To verify that CUDA is installed, open a Command Prompt window (Windows key + R, then type cmd) and run the following command:

nvcc -V
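Later, once PyTorch is available inside Slicer (the PyTorch extension we install below takes care of that), you can also confirm from Slicer's Python console that the GPU is actually visible to PyTorch. A minimal check:

# Run in 3D Slicer's Python console after PyTorch has been installed
import torch
print(torch.__version__)
print(torch.cuda.is_available())           # True means CUDA and a supported GPU were found
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # e.g. the name of your NVIDIA card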

We can now download 3D Slicer (if not already installed) and install the necessary extensions that we will be using.

Install 3D Slicer and download extensions

We covered how to install 3D Slicer in a previous article. 3D Slicer supports plug-ins that it calls extensions, which you will find in the built-in Extensions Manager. Development of many extensions is very active; you can follow their progress in the Slicer developer community.

In order to install the extensions, open 3D Slicer and in the top left menu, go to View->Extensions Manager (or, alternatively, hit ctrl+4 to open the Extensions Manager).

3D Slicer Extensions Manager

The Manage Extensions tab of the Extensions Manager shows the extensions that you have already installed. Head to the "Install Extensions" tab and install the following extensions:

AirwaySegmentation, Chest_Imaging_Platform, DensityLungSegmentation, HDBrainExtraction, LungCTAnalyzer, MONAILabel, NvidiaAIAssistedAnnotation, TotalSegmentator

We will also install the following extensions, which we will simply treat as dependencies for now (although some add tools to Slicer modules that you may end up using):

DCMQI, DiffusionQC, MarkupsToModel, PETDICOMExtension, PyTorch, QuantitativeReporting, SegmentEditorExtraEffects, SlicerDevelopmentToolbox, SlicerDMRI, SlicerHeart, SlicerIGSIO, SlicerIGT, SlicerMorph, SlicerVMTK, SurfaceWrapSolidify, TorchIO, UKFTractography
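Installing through the Extensions Manager GUI is the usual route. If you prefer to script it, recent Slicer versions also expose the Extensions Manager from the Python console; here is a rough sketch (the method names below follow the Slicer scripting documentation, but may differ between Slicer versions):

# Run in 3D Slicer's Python console; a restart is usually needed afterwards
emm = slicer.app.extensionsManagerModel()
for name in ["TotalSegmentator", "MONAILabel", "LungCTAnalyzer", "HDBrainExtraction"]:
    if not emm.isExtensionInstalled(name):
        emm.downloadAndInstallExtensionByName(name)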

Now that we have the necessary extensions installed, we are almost ready to begin segmenting structures in a medical image; we just need some data to work with.

To do so, head to the Sample Data module and download the CTA abdomen (Panoramix) dataset. Simply click the sample dataset and it will be loaded into 3D Slicer.

3D Slicer Sample Datasets
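The Sample Data module can also be driven from the Python console. In the sketch below, the sample name "CTAAbdomenPanoramix" is my assumption for how the Panoramix dataset is registered; check the Sample Data module source if the name does not match your Slicer version.

# Run in 3D Slicer's Python console
import SampleData
# Assumed registered name for the "CTA abdomen (Panoramix)" sample dataset
volumeNode = SampleData.downloadSample("CTAAbdomenPanoramix")
print(volumeNode.GetName())   # the loaded volume node, ready for segmentation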

TotalSegmentator

Now that we have some data loaded into 3D Slicer we can begin our segmentation efforts. The first extension that we will use is Total Segmentator.

SlicerTotalSegmentator is an extension that provides an automated segmentation module based on the TotalSegmentator model. TotalSegmentator is an automated semantic segmentation model based on the nnU-Net architecture. It was trained on 1204 CT examinations and can segment 104 anatomical structures³.

Total Segmentator can segment 104 structures (image credit here)

TotalSegmentator can robustly segment CTs from any source, at any resolution, of any body region. All of the segments are coded with standard DICOM (SNOMED CT) terminology. We will use it to segment the organs of the abdomen in just minutes, compared to the many expert hours this would take by manual segmentation.

To begin we will select View->Module Finder (alternatively ctrl+F), or click the Magnifying Glass, and search for TotalSegmentator and switch to the module.

The Input Volume will be our sample CT abdomen data in this case. Select a Segmentation task; we will select "Total". If you do not have a GPU, you will also have to choose between Fast mode and Full Resolution mode. Then click Apply.

Choose a Segmentation task

Click the Show 3D button. Center the model in the 3D viewer by clicking the target icon in the top left corner of the 3D window. You will now see the segmented volumes in the 3D viewer.

Fast mode Segmentation

As you can see in the highlighted area, this processing took 491.63 seconds on my computer. I do not have a GPU, so I ran the segmentation in Fast mode, which runs on the CPU. Fast mode completes in a few minutes but produces a significantly lower-resolution result than Full Resolution mode.

Fast mode is fine for a first look at the results, but if you plan on publishing, or doing anything further with the segmentation, you will want to use Full Resolution mode to get a more accurate segmentation. Full-resolution segmentation took 29 minutes on my CPU. As you can see, the detail is much better.

Full-resolution segmentation, t = 1773 s
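TotalSegmentator is also available outside Slicer as a Python package, which is convenient if you want to batch-process many scans or reproduce the Fast versus Full Resolution trade-off from a script. A hedged sketch of its Python API follows; check the TotalSegmentator README for the exact import path and arguments in the version you install, and note that the file paths below are placeholders.

# pip install totalsegmentator  (outside of Slicer)
from totalsegmentator.python_api import totalsegmentator

# Fast mode: lower-resolution model, good for a quick look
totalsegmentator("abdomen_ct.nii.gz", "segmentations_fast", fast=True)

# Full-resolution mode: slower, but the quality you want for anything further
totalsegmentator("abdomen_ct.nii.gz", "segmentations_full", fast=False)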

Navigate to the Segment Editor module. Click the eyeball next to the label name to toggle display on/off for each segmented volume.

The segmented model displayed in the Segment Editor module.

You can edit the segmentations if you are not satisfied, or remove them completely. To remove a segment, select the segment you would like to delete, then click Remove to delete it from the segmentation.

Now that the structures are segmented, you can use the Segment Editor and Segmentations modules to export individual segments as separate volumes and save them to separate files, as shown in this video. Segmentations can be saved in many file formats (NRRD, NIfTI, STL, OBJ, glTF, …)².

Exported segmentation files in file Explorer.
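Saving and exporting can also be scripted. The sketch below follows the pattern in the Slicer script repository; the output paths are placeholders and the exact signature of the export call may vary between Slicer versions.

# Run in 3D Slicer's Python console; output paths are placeholders
segmentationNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLSegmentationNode")

# Save the whole segmentation as a single Slicer file (.seg.nrrd)
slicer.util.saveNode(segmentationNode, "C:/temp/abdomen.seg.nrrd")

# Export every segment as a separate STL surface file
segmentationNode.CreateClosedSurfaceRepresentation()
slicer.vtkSlicerSegmentationsModuleLogic.ExportSegmentsClosedSurfaceRepresentationToFiles(
    "C:/temp/stl_export", segmentationNode, None, "STL")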

You can see how much time this would save if you are trying to segment your own data. To do so, simply import your own data and run the Total Segmentator model.

That is it! You can convert your segmentation into a 3D model if desired.

Note about processing times:

From the GitHub: “When this module is used the first time, it needs to download and install PyTorch and TotalSegmentator Python packages and weights for the AI models. This can take 5–10 minutes and several GB disk space.

With CUDA-capable GPU: 20–30 seconds in fast mode, 40–50 seconds in full-resolution mode.

Without GPU: 1 minute in fast mode, 40–50 minutes in full-resolution mode”

Find more information on TotalSegmentator here and here. Here is a video tutorial on using the extension.

That was extremely useful! Using TotalSegmentator, you could import your own data and use it to segment these 104 structures in mere minutes.

Next up we will use a tool for automated lung segmentation.

Slicer Chest Imaging Platform

See more information on CIP here and here.

Slicer Chest Imaging Platform is a library of modules created for exploring lung diseases. Its associated extensions, LungCTSegmenter and LungCTAnalyzer, are used for automated lung and lobe segmentation. In the next section we will use LungCTSegmenter and LungCTAnalyzer to segment the lungs and compute their volumes.

LungCTSegmenter / LungCTAnalyzer

See more information here and here.

Lung CT Segmenter is a 3D Slicer extension for lung, lobe and airway segmentation, as well as spatial reconstruction of infiltrated, emphysematous and collapsed lung. The interactive lobe segmentation quickly segments the lung lobes from a small number of points placed on the fissures. We will see how simple it is to segment each lobe of the lung, and even the airways, with this extension.

If you are continuing from the previous example, click File->Close Scene (ctrl+W) in the top left of 3D Slicer to clear all previous data. Then navigate to the Sample Data module again.

Load the sample CTChest data. Using the drop-down menu, the search (magnifying glass), or the module history, navigate to the Lung CT Segmenter module.

Press Start to begin and follow the instructions on the screen, placing 3 fiducial points in the right lung in the axial view, 3 in the left lung, repeating that in the coronal view, and finally, placing one point in the trachea in the coronal view. Click Apply.

Place fiducial points in the lung as instructed

Let the model run. If any areas are missed, such as high-density collapsed regions, you can place additional points. You can also use the smoothing tools, as shown in this video; they can fill in or remove small areas of missed segments. Modify the result with the general Segment Editor tools that we discussed in the first part of the series, and click Apply to finalize.

Just like before, you can save or convert the segmentation.

Lung CT Segmenter Extension.

Navigate to LungCTAnalyzer and hit Compute Results; in about 10 seconds it will create a volumetric analysis of the different tissue types. You can also generate a PDF report.

Volume analysis using LungCTAnalyzer

See this video for a quick 1 minute tutorial.
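LungCTAnalyzer reports these volumes for you, but it is worth knowing that the same kind of measurement is available for any segmentation through Slicer's Segment Statistics module. A sketch, following the Slicer script repository example (the statistic keys may differ between versions):

# Run in 3D Slicer's Python console after the lungs have been segmented
import SegmentStatistics

segmentationNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLSegmentationNode")

segStatLogic = SegmentStatistics.SegmentStatisticsLogic()
segStatLogic.getParameterNode().SetParameter("Segmentation", segmentationNode.GetID())
segStatLogic.computeStatistics()
stats = segStatLogic.getStatistics()

# Print the volume of each segment (e.g. left lung, right lung) in cm3
for segmentId in stats["SegmentIDs"]:
    name = segmentationNode.GetSegmentation().GetSegment(segmentId).GetName()
    volume_cm3 = stats[segmentId, "LabelmapSegmentStatisticsPlugin.volume_cm3"]
    print(f"{name}: {volume_cm3:.1f} cm3")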

That was a very quick segmentation of the right and left lungs along with a volume analysis of tissue types. We will now use the same extension to segment the lobes and smaller airways of the lung.

Lobe and Airway Segmentation with Lung CT Segmenter

Click here to see video instructions.

We will return to Lung CT Segmenter and use its airway segmentation to segment the lobes and airways of the lungs. Close the scene and load the sample CTChest data again.

Navigate to the Lung CT Segmenter module. We will make two key changes here: check Airway segmentation and Use AI (experimental). Expand "Toggle lung segmentation outputs" to choose which structures to segment and which type of model to use.

Select Start and place one fiducial point in the trachea. Click Apply. If you do not have a GPU, you will get a warning about the processing time; it takes about 10 minutes on the CPU.

Lung CT Segmenter

Navigate to the Segment Editor module and you will see all of the segments. You will notice now that the extension has segmented each lobe of the lung as well as the bronchioles and airways. Again, you can touch up any errors with the general segmentation and smoothing tools, and save, export, or convert your data as you wish.

Lobe and lung vessel segmentation

Click here for a quick video tutorial.

That is it for lung segmentation using the Chest Imaging Platform! Certainly a very useful package of extensions when it comes to lung segmentation.

Next up we will quickly strip the skull from a brain MRI.

HDBrainExtraction for AI-based skull stripping

This next tool is useful if you are working with brain MRI images. HDBrainExtraction is an AI-based skull stripping model that blanks out regions outside the brain in MRI images. You can find more details here.

Skull Stripping with HDBrainExtraction

We will download a different sample dataset for this extension. Download the MRBrainTumor1 sample dataset and navigate to the HD Brain Extraction Tool module.

As per the instructions on the GitHub:

  • Go to HD Brain Extraction Tool module
  • Select Input volume -> MRBrainTumor1
  • Select Skull-stripped volume -> Create new volume
  • Select Brain segmentation -> Create new segmentation
  • Click Apply
  • Again, computation is about 10 minutes on CPU and less than a minute on GPU.
Viewing the skull-stripped brain segmentation
  • To display the skull-stripped volume in 3D: go to the Data module and drag-and-drop the skull-stripped volume into the 3D view.
  • To display the brain mask in 3D: go to the Data module and drag-and-drop the segmentation into the 3D view.
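Conceptually, skull stripping just multiplies the image by the brain mask, and you can reproduce that yourself with Slicer's numpy helpers. The sketch below is only an illustration: HDBrainExtraction already produces the skull-stripped volume for you, the segment name "brain" is an assumption, and the helper function names may differ slightly between Slicer versions.

# Run in 3D Slicer's Python console after HD Brain Extraction has finished.
# Node and segment names are placeholders for whatever your scene contains.
volumeNode = slicer.util.getNode("MRBrainTumor1")
segmentationNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLSegmentationNode")
segmentId = segmentationNode.GetSegmentation().GetSegmentIdBySegmentName("brain")

image = slicer.util.arrayFromVolume(volumeNode)   # voxel intensities
mask = slicer.util.arrayFromSegmentBinaryLabelmap(segmentationNode, segmentId, volumeNode)  # 1 inside the brain

# Zero out everything outside the brain; this overwrites the loaded volume,
# so clone the node first if you want to keep the original.
slicer.util.updateVolumeFromArray(volumeNode, image * (mask > 0))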

If all you need is the brain, you have just saved a lot of storage and removed unnecessary noise. As always, save your data for later use.

NVIDIA AI Assisted Annotation (AIAA)

Find more information here, here and here.

NVIDIA AIAA is an extension for Slicer that appears inside the Segment Editor module. It offers three segmentation modes, with models trained on the Medical Segmentation Decathlon datasets. It has since been deprecated in favour of NVIDIA's new framework, MONAILabel, which we will cover in the third article in this series. However, given that it functions similarly to MONAILabel and, as of the time of writing, still works, we will cover AIAA quickly, as it can still be of use.

NVIDIA AIAA has 3 modes:

  • Fully automatic segmentation: no user input required. In the "Auto-segmentation" section, choose a model and click Start. The segmentation process may take several minutes (up to about 5–10 minutes for large datasets on computers with slow network upload speed).
  • Boundary-points based segmentation: requires the user to specify input points near the edge of the structure of interest, one on each side. Segmentation typically takes less than a minute.
  • DeepGrow segmentation: requires the user to specify a few input points (foreground/background) on the structure of interest. This is a 2D operation, and segmentation happens slice by slice for every point added. Each click operation typically takes about 1–2 seconds.

We will load the sample MRBrainTumor1 dataset and cover one quick example of the Boundary Points Segmentation. Head to the Segment Editor module and click the Nvidia AIAA button as shown below.

The NVidia logo is highlighted in red. Clicking it will display the NVidia AIAA options in the Segment Editor.
  • In the "Segment from boundary points" section, select "annotation_mri_brain_tumors_t1ce_tc" (a model trained to segment tumor on contrast-enhanced brain MRI)
  • Click the "Place markup point" button and click near the edge of the tumor on all 6 sides in the slice views, then click "Start" (if a popup is displayed about sending image data to a remote server, click OK to acknowledge it)
  • Let the model run, and that is all! Since you are in the Segment Editor, you will see the segmented structures. You can segment multiple structures and toggle their visibility with the eye icon.
Image credit GitHub/NVIDIA

The quickly segmented brain tumor can now be used in a variety of applications, such as training further machine learning models on 3D tumor volumes, or for other uses such as surgical planning.

NVIDIA AIAA is a powerful collection of segmentation models, currently surpassed only by TotalSegmentator and by AIAA's successor, MONAILabel. We will discuss MONAILabel in the next article of this series. Although AIAA is deprecated, you can see more examples of the other models in the package here.

RVesselX — AI Liver segmentation

The RVesselX Slicer plugin is an extension for 3D Slicer that aims to ease the segmentation of the liver, liver vessels and liver tumors from DICOM data for annotation purposes. This could be its own article, so I will just link the GitHub here if you would like to explore further.

Conclusion

That ends our introduction to automated extensions in 3D Slicer. In this article, you have seen how we can quickly segment a large variety of structures from medical imaging data using these extensions. Along the way, you:

  1. Installed 3D Slicer and various extensions
  2. Downloaded CUDA for faster segmentation processing
  3. Used TotalSegmentator to segment 104 anatomical structures from a sample dataset
  4. Used the Chest Imaging Platform and its associated extensions to segment the lungs and lobes from a sample dataset
  5. Stripped the skull from a sample brain MRI using the HD Brain Extraction tool
  6. Used NVIDIA AIAA to segment a brain tumor on a contrast-enhanced MRI from a sample dataset
  7. Were introduced to the RVesselX extension for automatic liver segmentation

Up next: The MONAILabel framework for image segmentation

In this article, we saw how automated segmentation extensions can save valuable clinician time when segmenting medical imaging datasets.

In part 3 of this series, we will continue this discussion and install the MONAI framework, a powerful framework with a large variety of models available for segmentation in 3D Slicer.

References

[1] Moeskops, P. (2022, October 25). Deep Learning Applications in Radiology: Image Segmentation. Artificial Intelligence in Healthcare & Radiology. Retrieved April 28, 2023, from https://www.quantib.com/blog/medical-image-segmentation-in-radiology-using-deep-learning

[2] “Automatic whole-body CT segmentation in 2 minutes using 3D Slicer and TotalSegmentator.” Youtube. Uploaded by PerkLab Research. 13 Dec 2022. https://www.youtube.com/watch?v=osvMB5SKcVQ&t=2s

[3] Wasserthal, J., Meyer, M., Breit, H.-C., Cyriac, J., Yang, S., & Segeroth, M. (2022, August 11). TotalSegmentator: Robust segmentation of 104 anatomical structures in CT images. arXiv.org. Retrieved April 28, 2023, from https://arxiv.org/abs/2208.05868
