MONAI v0.6 and MONAI Label v0.1 Are Now Available — MONAI Label Helps Quickly Create Annotated Datasets and AI Annotation Models

MONAI v0.6 and MONAI Label v0.1 are now available! MONAI Label is an intelligent image labeling and learning tool that enables users to create annotated datasets and build AI annotation models quickly.

We’re excited to announce our latest release of MONAI, version 0.6. We continue to expand our APIs, including a new network, UNETR, implemented in PyTorch. We’re also adding new functionality to use existing pre-trained PyTorch models created for NVIDIA Clara Train.

Alongside our core release, we’re releasing a new project that has officially hit version 0.1, called MONAI Label. MONAI Label is an intelligent open-source image labeling and learning tool that reduces the time and effort of annotating new datasets and enables the adaptation of AI to the task at hand by continuously learning from user interactions and data. We’re providing sample applications that use some of our existing PyTorch models to help you get started quickly.

MONAI v0.6

UNETR is a transformer-based model for volumetric (3D) medical image segmentation and is currently the state-of-the-art model on the BTCV dataset leaderboard for the task of multi-organ semantic segmentation. UNETR has a flexible implementation that supports various segmentation tasks.

You can find a tutorial for 3D multi-organ semantic segmentation using UNETR in our tutorials repo.

We’ve added the ability to decollate batches, which simplifies post-processing transforms and enables flexible operations on a batch of model outputs. Building on previous work on inverse spatial transforms, decollate is the “inverse” of the PyTorch collate function. It enables post-processing transforms to be applied to each item independently and allows randomized transforms to be applied per predicted item in a batch. It also supports inverse operations for data items with different original shapes, since the inverted items are returned as lists instead of stacked tensors.

A typical decollate batch workflow is illustrated below:

You can find a Jupyter notebook tutorial showing a typical decollate workflow here.
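The core idea can be sketched in plain Python. This is a simplified illustration only; MONAI’s actual `monai.data.decollate_batch` also handles tensors, metadata dicts, and nested structures:

```python
def decollate(batch):
    """Split a collated batch (a dict of per-key sequences) back into a
    list of per-item dicts -- a toy sketch of the decollate idea."""
    keys = list(batch)
    n = len(batch[keys[0]])
    return [{k: batch[k][i] for k in keys} for i in range(n)]

# A "batch" of two items, each with an image and a label:
batch = {"image": [[1, 2], [3, 4]], "label": [0, 1]}
items = decollate(batch)
# items == [{'image': [1, 2], 'label': 0}, {'image': [3, 4], 'label': 1}]
# Each item can now go through its own (possibly randomized)
# post-processing transform, and items may end up with different shapes.
```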

MONAI now includes Pythonic support for the Medical Model ARchive (MMAR) format provided by NVIDIA Clara Train. With this support, developers can use pre-trained models created for Clara Train directly in MONAI.

Find all of the Clara Train models on NGC here. We’ve also included a new tutorial showing you how to use one of the pre-trained MMAR models for transfer learning. The results are shown below:

Training from scratch (green), Inference of pre-trained MMAR weights without training (magenta), training from the MMAR model weights (blue)

The base API for metrics has been enhanced to support the logic for iteration and epoch-based metrics. By enabling support for both metric methods, MONAI metrics are now more extensible and are a great starting point for creating custom metrics. The APIs also support data-parallel computation; with the Cumulative base class, intermediate metric outcomes can be automatically buffered, cumulated, synced across distributed processes, and aggregated for the final results.

We’ve included a multi-processing computation example that shows how to compute metrics based on saved predictions and labels in a multi-processing environment.
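The buffering behavior can be sketched with a minimal single-process class. This is an illustration of the idea only, not MONAI’s actual `Cumulative` API; the real base class also syncs buffers across distributed processes before aggregating:

```python
class CumulativeMean:
    """Buffer per-iteration metric outcomes and aggregate at epoch end.

    A single-process sketch of the buffering idea behind a cumulative
    metric; distributed synchronization is omitted for brevity.
    """
    def __init__(self):
        self._buffer = []

    def __call__(self, value):
        # Called once per iteration with an intermediate metric outcome.
        self._buffer.append(float(value))

    def aggregate(self):
        # Called once per epoch to reduce the buffered outcomes.
        return sum(self._buffer) / len(self._buffer)

    def reset(self):
        self._buffer.clear()

metric = CumulativeMean()
for dice in (0.80, 0.90, 0.85):  # per-iteration Dice scores (made up)
    metric(dice)
epoch_dice = metric.aggregate()  # mean of the buffered values
metric.reset()                   # ready for the next epoch
```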

MONAI continues to accelerate domain-specific routines in common workflows by introducing C++/CUDA modules as extensions of the PyTorch native implementations. We now provide two ways to build a C++ extension from PyTorch:

  • Via `setuptools` for modules including `Resampler`, `Conditional random field (CRF)`, and `Fast bilateral filtering using the permutohedral lattice`.
  • Via just-in-time (JIT) compilation for the Gaussian mixture model module. JIT compilation allows for dynamic optimization according to user-specified parameters and the local system environment.

As we move closer to the MONAI 1.0 release, we’re focusing on creating the proper mechanisms to support fast and collaborative codebase development.

As a starting point, we’ve created some basic policies for backward compatibility. New utilities are introduced on top of the existing semantic versioning modules and the git branching model. We’re also working on a complete CI/CD solution that is efficient, scalable, and secure.
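To give a feel for what a versioning-aware deprecation utility can look like, here is a hypothetical sketch. The decorator name, the version tuples, and `CURRENT_VERSION` are illustrative only and are not MONAI’s actual API:

```python
import functools
import warnings

CURRENT_VERSION = (0, 6)  # hypothetical "installed" version for this sketch

def deprecated(since, removed):
    """Mark a function as deprecated from `since` and removed at `removed`.

    Illustrative only: warns while the current version is in the
    deprecation window, and raises once the removal version is reached.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if CURRENT_VERSION >= removed:
                raise RuntimeError(
                    f"{fn.__name__} was removed in version {removed}")
            if CURRENT_VERSION >= since:
                warnings.warn(
                    f"{fn.__name__} has been deprecated since {since} "
                    f"and will be removed in {removed}",
                    DeprecationWarning, stacklevel=2)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@deprecated(since=(0, 5), removed=(0, 8))
def old_resample(data):
    return data  # still works at (0, 6), but emits a DeprecationWarning
```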

MONAI Label v0.1

MONAI Label is a server-client system that facilitates interactive medical image annotation by using AI. As a part of Project MONAI, MONAI Label shares the same principles as MONAI and focuses on being Pythonic, modular, user friendly, and extensible.

Open-source and easy-to-install, MONAI Label can run locally on a single machine with one or multiple GPUs. The server and client run on the same machine and don’t currently support multiple users or communication with an external database.

To quickly install and run MONAI Label using DeepEdit, it’s as easy as following the steps below:

$ pip install monailabel
$ monailabel datasets --download --name Task02_Heart --output C:\Workspace\Datasets
$ monailabel apps --download --name deepedit_left_atrium --output C:\Workspace\Apps
$ monailabel start_server --app C:\Workspace\Apps\deepedit_left_atrium --studies C:\Workspace\Datasets\Task02_Heart\imagesTr

Once you start the MONAI Label server, it serves at a default local URL (printed to the console at startup). Opening that URL in a browser shows the list of available REST APIs. You can also use the 3D Slicer extension by filling in the MONAI Label Server field with the serving URL.

MONAI Label focuses on two types of users: researchers and clinicians.

For researchers, MONAI Label offers an easy way to define a pipeline that facilitates the image annotation process. They can use the provided 3D Slicer MONAI Label plugin or customize their own workflow for processing the inputs and outputs sent to the app.

For clinicians, MONAI Label provides access to a continuously learning AI that steadily improves its understanding of what the end user is trying to annotate.

MONAI Label comprises the following key components: MONAI Label Server, MONAI Label Sample Apps, MONAI Label Sample Datasets, and a 3D Slicer viewer extension.

The MONAI Label server is the main integration point. It creates the REST API that allows communication between the MONAI Label server and the client (e.g., Slicer plugin, OHIF plugin, etc.).

The included sample apps for MONAI Label are the following:

  • Left-atrium semantic segmentation in the heart using both DeepEdit and DeepGrow.
  • Spleen semantic segmentation using both DeepEdit and DeepGrow.
  • Multi-label segmentation (e.g., heart ventricle segmentation, and liver and tumor segmentation)

These sample applications showcase the speed-up MONAI Label provides when creating your annotation model. For example, using the spleen application, you can begin to utilize your annotation model after only a few images have been segmented.

The figure below compares the “Interactive” vs. “Standard” way of annotating and training.

In the “Interactive” way, annotation and model training complement each other. The user assists with clicks that guide the AI model to better annotate the object of interest. This method lets you start using your annotation model quickly, giving you a significant speedup in the overall annotation process.

In the “Standard” way, the user annotates with classical techniques such as a paintbrush or click-based contours. This method requires annotating all images before training, which means a longer wait before you can begin to utilize your model.

A comparison of the “Interactive” vs. “Standard” way of training an annotation model, assuming ~10 minutes per image for a skilled user to segment a spleen CT image manually.

MONAI Label uses the Medical Segmentation Decathlon datasets to showcase how easy it is to create MONAI Label Apps using three different paradigms: DeepGrow, DeepEdit, and automatic segmentation.

MONAI Label currently employs the following annotation algorithms:

  • DeepGrow is a click-based interactive segmentation model, where the user can guide the segmentation with positive and negative clicks. Positive clicks guide the segmentation towards the region of interest, while negative clicks guide the model away from the over-segmented areas.
  • DeepEdit is an algorithm that combines the power of two models in one single architecture. It allows the user to perform inference using a standard segmentation method and interactive segmentation using clicks.
  • Automatic segmentation is the non-interactive paradigm available in MONAI Label. It allows researchers to create a segmentation pipeline using any network available in MONAI (e.g., UNet, HighResNet, DynUNet, etc.) to segment images automatically.
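To give a feel for how clicks can steer an interactive model such as DeepGrow, here is a simplified, hypothetical sketch that turns positive and negative clicks into two extra input channels. Real implementations typically smooth the click points into Gaussian or distance maps rather than using raw binary masks:

```python
def clicks_to_guidance(shape, positive, negative):
    """Turn user clicks into two binary guidance maps (2D sketch).

    `positive` clicks mark the region of interest; `negative` clicks
    mark over-segmented areas the model should retreat from. These maps
    would be concatenated to the image as extra input channels.
    """
    def channel(points):
        grid = [[0.0] * shape[1] for _ in range(shape[0])]
        for row, col in points:
            grid[row][col] = 1.0  # a real system would blur this point
        return grid
    return channel(positive), channel(negative)

# One positive click inside the organ, one negative click outside it:
pos, neg = clicks_to_guidance((4, 4), positive=[(1, 1)], negative=[(3, 0)])
```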

The 3D Slicer extension handles calls and events created by user interaction and sends them to the MONAI Label server. The current version supports click-based interaction and allows the user to upload images and labels.

The MONAI Label server also supports other interaction styles such as closed curves and ROI. A researcher can modify this plugin to make it more dynamic or customized to their MONAI Label Apps.


We’re excited as we continue expanding on our portfolio of projects that are a part of Project MONAI. We also have two new working groups focused on Deployment and Digital Pathology, so keep an eye out for more releases later this year, including a prototype that focuses on the end-to-end Medical AI lifecycle.