Object Detection in Google Colab with Custom Dataset

RomRoc
HackerNoon.com
4 min read · Aug 1, 2018


This article proposes an easy and free way to train a TensorFlow object detection model in Google Colab on a custom dataset. To demonstrate how it works, I trained a model to detect my dog in pictures.

Object Detection with my dog

All the code and the dataset used in this article are available in my GitHub repo.

The distinguishing features of this approach are:

  • The only requirement is a dataset created with LabelImg
  • A single Google Colab notebook contains all the steps: it starts from the dataset, trains the model, and runs inference
  • It runs on Google Colab with GPU enabled and uses Google Drive for storage, so it relies exclusively on free cloud resources (see the snippet after this list for a quick way to check the GPU runtime)
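
To confirm that the notebook is actually running on the free GPU runtime, a quick check (a minimal sketch using the TensorFlow 1.x API of the time) is:

```python
import tensorflow as tf

# Prints something like '/device:GPU:0' when the Colab GPU runtime is active,
# and an empty string when it is not.
print(tf.test.gpu_device_name())
```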

Furthermore, important changes have recently been made to TensorFlow's Object Detection API, which have made other available tutorials obsolete.

Making the dataset

The only step not included in the Google Colab notebook is the creation of the dataset.

With an appropriate number of photos (my example has 50 photos of my dog), I created the annotations. The tool I used is LabelImg. For the sake of simplicity, I identified a single object class, my dog. It is possible to extend the approach to obtain models that perform object detection on multiple object classes.

I renamed the image files to the format objectclass_id.jpg (e.g. dog_001.jpg, dog_002.jpg). Then, in LabelImg, I drew the bounding box around the object and saved the annotations in Pascal VOC format.
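
For reference, LabelImg writes one Pascal VOC XML file per image. A minimal annotation for this example could look like the following (the image size and box coordinates are made-up illustration values):

```xml
<annotation>
  <folder>images</folder>
  <filename>dog_001.jpg</filename>
  <size>
    <width>800</width>
    <height>600</height>
    <depth>3</depth>
  </size>
  <object>
    <name>dog</name>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>120</xmin>
      <ymin>80</ymin>
      <xmax>520</xmax>
      <ymax>560</ymax>
    </bndbox>
  </object>
</annotation>
```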

Finally, I uploaded the annotation files and images to my Google Drive account as a single zip file with the structure described below.
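
Since the notebook builds the TFRecord with create_pet_tf_record.py, the zip presumably follows the Oxford Pet dataset layout. A plausible structure, assuming that convention (check the dataset in the repo for the authoritative layout), is:

```
dataset.zip
├── images/
│   ├── dog_001.jpg
│   ├── dog_002.jpg
│   └── ...
└── annotations/
    ├── trainval.txt        # image names, one per line, without extension
    └── xmls/
        ├── dog_001.xml
        ├── dog_002.xml
        └── ...
```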

Check my dataset file on GitHub to see an example.

Dataset example

Model training

All the next steps are included in the Google Colab notebook. I execute the cells in sequence to train the model and run inference:

Install required packages: installs the packages and repositories, and sets the environment variables, needed for object detection in TensorFlow, then runs a test.
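
The exact cell is in the notebook; a minimal sketch of what this setup typically looks like as a Colab cell (assuming the TensorFlow 1.x Object Detection API current in mid-2018) is:

```python
# Colab cell: fetch the Object Detection API, compile its protobuf definitions
# and make the package importable. Details may differ from the notebook's cell.
!git clone --quiet https://github.com/tensorflow/models.git
!apt-get install -qq protobuf-compiler
!pip install -q pillow lxml matplotlib

%cd /content/models/research
!protoc object_detection/protos/*.proto --python_out=.

import os
os.environ['PYTHONPATH'] += ':/content/models/research:/content/models/research/slim'

# Smoke test: this should finish with "OK" if everything is installed correctly.
!python object_detection/builders/model_builder_test.py
```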

Download and extract dataset: downloads the dataset zip into the Colab filesystem and extracts it. It is important that the zip file has the structure explained above.
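
A sketch of the extraction step, assuming the zip has already been fetched from Google Drive into the working directory (the file and folder names below are hypothetical):

```python
import zipfile

# Unpack the dataset zip downloaded from Google Drive.
with zipfile.ZipFile('dataset.zip', 'r') as archive:
    archive.extractall('/content/dataset')
```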

Empty png files: this cell only works around an error in create_pet_tf_record.py; it has no effect on the training process.
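
My reading of this step (an assumption, the article does not spell it out) is that create_pet_tf_record.py expects the trimap mask images of the original Oxford Pet dataset, so the cell creates empty placeholder .png files, roughly like:

```python
import glob
import os

# create_pet_tf_record.py looks for a trimap .png per image (a leftover of the
# Oxford Pet dataset); create empty placeholders so it does not fail.
os.makedirs('/content/dataset/annotations/trimaps', exist_ok=True)
for image_path in glob.glob('/content/dataset/images/*.jpg'):
    name = os.path.splitext(os.path.basename(image_path))[0]
    open(f'/content/dataset/annotations/trimaps/{name}.png', 'a').close()
```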

Create TFRecord: creates the TFRecord files from the dataset. In this simplified version, the model is trained for a single class only.
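
A sketch of what this cell typically does: write a label map for the single 'dog' class and invoke the conversion script (the paths and flag values are assumptions):

```python
# Minimal label map with the single class of this example.
label_map = """item {
  id: 1
  name: 'dog'
}
"""
with open('/content/dataset/label_map.pbtxt', 'w') as f:
    f.write(label_map)

# Convert the Pascal VOC annotations and images into TFRecord files.
!python object_detection/dataset_tools/create_pet_tf_record.py \
    --label_map_path=/content/dataset/label_map.pbtxt \
    --data_dir=/content/dataset \
    --output_dir=/content/dataset
```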

Download pretrained model: downloads a pretrained model from the Model Zoo to use as the initial checkpoint for transfer learning. In the example we download faster_rcnn_inception_v2_coco; to use another model from the Model Zoo, change the MODEL variable.
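
A sketch of the download, assuming the 2018_01_28 release of that model from the detection Model Zoo (the notebook's MODEL variable would point at the equivalent archive name):

```python
import tarfile
import urllib.request

MODEL = 'faster_rcnn_inception_v2_coco_2018_01_28'
url = f'http://download.tensorflow.org/models/object_detection/{MODEL}.tar.gz'

# Fetch and unpack the pretrained checkpoint to use as the starting point
# for transfer learning.
urllib.request.urlretrieve(url, f'{MODEL}.tar.gz')
with tarfile.open(f'{MODEL}.tar.gz') as archive:
    archive.extractall('/content')
```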

Edit model config file: sets the fields of the config file that are marked with PATH_TO_BE_CONFIGURED. If you choose a different initial checkpoint model, update the filename variable and the re.sub calls in the cell accordingly.
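
The article mentions re.sub, so the cell presumably rewrites a sample config shipped with the API. A sketch of that idea, with hypothetical paths and only a couple of the placeholders shown (the train/eval input paths and label map path need the same treatment):

```python
import re

# Sample config shipped with the Object Detection API (hypothetical path).
config_path = ('/content/models/research/object_detection/samples/configs/'
               'faster_rcnn_inception_v2_pets.config')

with open(config_path) as f:
    config = f.read()

# Train a single class and point the template at our pretrained checkpoint.
config = re.sub(r'num_classes: \d+', 'num_classes: 1', config)
config = re.sub(r'PATH_TO_BE_CONFIGURED/model\.ckpt',
                '/content/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt',
                config)

with open('/content/pipeline.config', 'w') as f:
    f.write(config)
```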

Train model: this is the main step; it trains the model with the data and the configuration created so far. It is possible to change the number of training and validation steps.
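
Training is launched through one of the Object Detection API entry points. A sketch, assuming the model_main.py script that was current in mid-2018 (flag names may differ in other versions, and the step counts below are arbitrary):

```python
# Launch training; num_train_steps and num_eval_steps can be tuned.
!python object_detection/model_main.py \
    --pipeline_config_path=/content/pipeline.config \
    --model_dir=/content/training \
    --num_train_steps=5000 \
    --num_eval_steps=100 \
    --alsologtostderr
```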

Below are the TensorBoard charts resulting from the training process:

TensorBoard charts

Inference

Export trained model: exports the model to run inference. The cell converts the last trained checkpoint into the format used for inference.
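
A sketch of the export, assuming the standard export_inference_graph.py script (the checkpoint number is hypothetical and should match the last checkpoint written during training):

```python
# Freeze the latest checkpoint into an inference graph.
!python object_detection/export_inference_graph.py \
    --input_type=image_tensor \
    --pipeline_config_path=/content/pipeline.config \
    --trained_checkpoint_prefix=/content/training/model.ckpt-5000 \
    --output_directory=/content/fine_tuned_model
```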

Upload image for inference: uploads a test image file from the browser, to run inference on it in the next step.
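
In Colab this is usually done with the files helper; a minimal sketch:

```python
from google.colab import files

# Opens a browser file picker; the chosen file is saved to the working directory.
uploaded = files.upload()
test_image_path = next(iter(uploaded))
```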

Run inference: finally, it runs inference on the uploaded image and shows the result below.
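
A compact sketch of running the exported frozen graph on the uploaded image with the TensorFlow 1.x session API (the paths, test image name, and tensor names follow the Object Detection API conventions, but treat the details as assumptions):

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the frozen inference graph produced by the export step.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('/content/fine_tuned_model/frozen_inference_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

# The model expects a batch of uint8 RGB images.
test_image_path = 'dog_test.jpg'  # hypothetical name of the uploaded file
image = np.expand_dims(np.array(Image.open(test_image_path).convert('RGB')), axis=0)

with tf.Session(graph=graph) as sess:
    outputs = {name: graph.get_tensor_by_name(name + ':0')
               for name in ('detection_boxes', 'detection_scores',
                            'detection_classes', 'num_detections')}
    result = sess.run(outputs, feed_dict={'image_tensor:0': image})

# Detections are sorted by score; boxes are [ymin, xmin, ymax, xmax] in [0, 1].
print(result['detection_scores'][0][:3])
print(result['detection_boxes'][0][:3])
```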

Next goals

Thanks a lot for reading my article. We easily trained an object detection model on a custom dataset in Google Colab, using the TensorFlow framework. If you liked it, leave some claps and I will be happy to write more about machine learning.

In the next articles we will extend the Google Colab notebook to:

  • Include multiple object classes in detection
  • View TensorBoard in a separate browser tab during model training
  • Perform instance segmentation for pixel-wise classification


RomRoc
Computer engineer focused on machine learning and computer vision.