Detectron2: augmentations, monitoring & logging train/validation metrics, and an inference script (Part 2)

Spyros Dimitriadis
Published in Innovation-res · May 24, 2022
source: https://github.com/facebookresearch/detectron2

In Part 1 of this series we saw how to set up the Detectron2 configuration file and how to use any optimizer and learning rate scheduler.
It is recommended to read Part 1 before jumping into Part 2.
In this second part we will show how to use augmentations, how to calculate and monitor the validation loss and validation metrics, and how to run inference.

All references and sources are noted as hyperlinks.

Contents:

  • Augmentations
  • Validation Metrics
  • Metrics monitor and log
  • Inference

Without further ado, let’s jump into the code.

Augmentations

Detectron2 has built-in augmentations that you can use from detectron2.data.transforms. To apply them, you need to override the build_train_loader method in the Trainer class so that it uses a DatasetMapper carrying your augmentation list.

Add augmentations in the Trainer class
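A minimal sketch of such a trainer, assuming the DefaultTrainer base class; the augmentation list and its parameters are illustrative choices, not the article's exact ones:

import detectron2.data.transforms as T
from detectron2.data import DatasetMapper, build_detection_train_loader
from detectron2.engine import DefaultTrainer

class MyTrainer(DefaultTrainer):
    @classmethod
    def build_train_loader(cls, cfg):
        augs = [
            # keep the default train-time resize
            T.ResizeShortestEdge(
                cfg.INPUT.MIN_SIZE_TRAIN,
                cfg.INPUT.MAX_SIZE_TRAIN,
                cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING,
            ),
            # example photometric and geometric augmentations
            T.RandomBrightness(0.8, 1.2),
            T.RandomContrast(0.8, 1.2),
            T.RandomFlip(prob=0.5, horizontal=True, vertical=False),
        ]
        # the mapper applies the augmentations to both images and annotations
        mapper = DatasetMapper(cfg, is_train=True, augmentations=augs)
        return build_detection_train_loader(cfg, mapper=mapper)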

Validation metrics

The code that we will use is borrowed from Marcelo Ortega and uploaded as a GitHub gist. You can check his great Medium post here!
To calculate the validation loss while training, we can use the hook LossEvalHook (first block of code in the gist, named LossEvalHook.py) by Marcelo Ortega.
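A condensed sketch of the hook's core logic (see the gist for the full version, which also logs progress while evaluating):

import numpy as np
import torch
import detectron2.utils.comm as comm
from detectron2.engine.hooks import HookBase

class LossEvalHook(HookBase):
    def __init__(self, eval_period, model, data_loader):
        self._period = eval_period
        self._model = model
        self._data_loader = data_loader

    def _get_loss(self, data):
        # the model is still in training mode, so it returns a dict of losses
        metrics_dict = self._model(data)
        metrics_dict = {
            k: v.detach().cpu().item() if isinstance(v, torch.Tensor) else float(v)
            for k, v in metrics_dict.items()
        }
        return sum(metrics_dict.values())

    def _do_loss_eval(self):
        with torch.no_grad():
            losses = [self._get_loss(inputs) for inputs in self._data_loader]
        # log the mean validation loss so it lands in metrics.json and TensorBoard
        self.trainer.storage.put_scalar('validation_loss', np.mean(losses))
        comm.synchronize()

    def after_step(self):
        next_iter = self.trainer.iter + 1
        is_final = next_iter == self.trainer.max_iter
        if is_final or (self._period > 0 and next_iter % self._period == 0):
            self._do_loss_eval()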

Then, we need to add two methods, build_evaluator and build_hooks, to the MyTrainer class (second block of code in the gist, named MyTrainer.py); a sketch follows the notes below.

Note: when using COCOEvaluator, by default it tracks both box AP and mask AP. To calculate only the mask AP, pass tasks=['segm'] to the COCOEvaluator class; to calculate only the box AP, pass tasks=['bbox'].

Note: if you use cfg.INPUT.MASK_FORMAT='bitmask', you can try the MAPIOUEvaluator from https://www.kaggle.com/code/slawekbiel/positive-score-with-detectron-2-3-training#Define-evaluator by Slawek Biel.
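A sketch of those two methods, following the gist (the coco_eval folder name is an illustrative choice):

import os
from detectron2.data import DatasetMapper, build_detection_test_loader
from detectron2.evaluation import COCOEvaluator

class MyTrainer(DefaultTrainer):
    # ... build_train_loader from the Augmentations section ...

    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        if output_folder is None:
            output_folder = os.path.join(cfg.OUTPUT_DIR, 'coco_eval')
            os.makedirs(output_folder, exist_ok=True)
        # pass tasks=['segm'] or tasks=['bbox'] here to restrict the APs computed
        return COCOEvaluator(dataset_name, output_dir=output_folder)

    def build_hooks(self):
        hooks = super().build_hooks()
        # insert before the final PeriodicWriter so validation_loss gets written out
        hooks.insert(-1, LossEvalHook(
            self.cfg.TEST.EVAL_PERIOD,
            self.model,
            build_detection_test_loader(
                self.cfg,
                self.cfg.DATASETS.TEST[0],
                DatasetMapper(self.cfg, True),
            ),
        ))
        return hooks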


Using the third code block (PlotTogether.py) from the gist above, you can plot the train and validation loss after training with Matplotlib.
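A minimal sketch in the same spirit, assuming Detectron2's default metrics.json in your output directory (the path here is illustrative) and the validation_loss scalar logged by the hook above:

import json
import matplotlib.pyplot as plt

# metrics.json holds one JSON object per line
with open('Detectron2/logs/20211206/metrics.json') as f:
    metrics = [json.loads(line) for line in f]

train = [(m['iteration'], m['total_loss']) for m in metrics if 'total_loss' in m]
val = [(m['iteration'], m['validation_loss']) for m in metrics if 'validation_loss' in m]

plt.plot(*zip(*train), label='total_loss')
plt.plot(*zip(*val), label='validation_loss')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.legend()
plt.show()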

Train and Validation loss

Metrics monitor and log

By default, Detectron2 logs all metrics to TensorBoard.

# Load the TensorBoard notebook extension
%load_ext tensorboard
%tensorboard --logdir logs

Additionally, we could use Weights & Biases (WandB) to keep all training metrics on the WandB platform and track every run's metrics together with the config file used. WandB is free for personal use; to use it you just need to sign up and get your API_KEY. It is a great platform for monitoring and logging your projects.
source: Detectron2 GitHub issue

You should run the following code after creating the config file and before training starts.

import wandb
import yaml

wandb_api = "YOUR_API_KEY"
wandb.login(key=wandb_api)

# convert the Detectron2 config to a plain dict so WandB can log it
cfg_wandb = yaml.safe_load(cfg.dump())
run = wandb.init(project="project_name", name="ResNet101-Lr3e-4",
                 config=cfg_wandb, sync_tensorboard=True)

Your experiments on WandB will be accessible from anywhere by logging in to the WandB website, and will look like this:

Inference

Let’s adapt Detectron2’s DefaultPredictor class into a simple end-to-end predictor that, given a config, runs on a single device and accepts one or more input images.

DefaultPredictor source here.

This class does the following:
1. Loads a checkpoint from cfg.MODEL.WEIGHTS.
2. Takes as input a list of image paths (possibly a list with only one path), reads the images, and converts them to the channel format defined by cfg.INPUT.FORMAT.
3. Applies resizing defined by cfg.INPUT.MIN_SIZE_TEST and cfg.INPUT.MAX_SIZE_TEST.
4. Returns the outputs: one Detectron2 output dictionary per input image.
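A sketch of such a class, adapted from DefaultPredictor (MyPredictor is our name for it):

import cv2
import torch
import detectron2.data.transforms as T
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.modeling import build_model

class MyPredictor:
    def __init__(self, cfg):
        self.cfg = cfg.clone()
        # build the model on cfg.MODEL.DEVICE and load the checkpoint
        self.model = build_model(self.cfg)
        self.model.eval()
        DetectionCheckpointer(self.model).load(cfg.MODEL.WEIGHTS)
        # test-time resize; MIN_SIZE_TEST = 0 disables it
        self.aug = T.ResizeShortestEdge(
            [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST],
            cfg.INPUT.MAX_SIZE_TEST,
        )
        self.input_format = cfg.INPUT.FORMAT
        assert self.input_format in ['RGB', 'BGR'], self.input_format

    def __call__(self, img_paths):
        inputs = []
        with torch.no_grad():
            for path in img_paths:
                original_image = cv2.imread(path)  # BGR, HWC, uint8
                if self.input_format == 'RGB':
                    original_image = original_image[:, :, ::-1]
                height, width = original_image.shape[:2]
                image = self.aug.get_transform(original_image).apply_image(original_image)
                image = torch.as_tensor(image.astype('float32').transpose(2, 0, 1))
                inputs.append({'image': image, 'height': height, 'width': width})
            # the model batches the whole list in one forward pass
            outputs = self.model(inputs)
        return outputs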

Use case example:

from detectron2.config import get_cfg

OUTPUT_DIR = 'Detectron2/logs/20211206'
cfg = get_cfg()
cfg.merge_from_file(OUTPUT_DIR + '/config.yaml')
cfg.MODEL.WEIGHTS = OUTPUT_DIR + '/model_final.pth'
cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.4  # set a custom NMS threshold for testing
cfg.MODEL.DEVICE = 'cuda'  # or 'cpu'
cfg.SOLVER.IMS_PER_BATCH = 8  # adjust to your device's capabilities
cfg.INPUT.MIN_SIZE_TEST = 0  # set to zero to disable resizing in testing

pred = MyPredictor(cfg)
img_paths = ['/content/test_images/image1.jpg', '/content/test_images/image2.jpg']
outputs = pred(img_paths)

Keep Learning 🚀

References:

[1] Detectron2 GitHub by Facebook Research Group

[2] Training on Detectron2 with a Validation set, and plot loss on it to avoid overfitting, Medium post by eidos.ai

[3] MAPIOUEvaluator, Kaggle kernel by Slawek Biel

[4] Weights and Biases
