TensorFlow’s Object Detection API Using Google Colab

Narendiran Krishnan
Published in The Startup
8 min read · Aug 8, 2020

Here I will walk you through the steps to create your own custom object detector with the help of Google’s TensorFlow Object Detection API using Python 3, this time not on your CPU. Instead we will use Google Colab, which gives you around 12 hours of free GPU time for training your model.

You can find the entire code in my GitHub repo.

Before we proceed, I recommend you read my previous blog on TensorFlow’s Object Detection API, where I trained the model on a CPU.

I recommend reading it because we are going to follow the same steps here, with one small difference: we are going to train the model in Google Colab, which is free for 12 hours per day.

Steps to be followed to implement this project:

  1. Setting up Object Detection Directory
  2. Configure path variables
  3. Training Data
  4. Updating Google Drive
  5. Setting up Google Colab
  6. Configuring Training
  7. Training your model
  8. Inference Graph
  9. Testing our model

I have referred to multiple tutorials and blogs while working on this; special thanks to the authors of pythonprogramming.net, Edje Electronics, Nathaniel O Solomon, and Dat Tran.

Here the first 3 steps are going to be the same as before; nothing changes there. In other words, this project is not done entirely in a cloud environment, it is more of a combination of local and cloud work.

Before setting up Google Drive and Colab, I should say that I have used a different dataset this time. Since it is still the Covid-19 situation, I have decided to use a face mask dataset.

In his Git repo, go to “/experiements/data/” and you will find the details of the mask dataset. He has also done some more complex data augmentation work in “mask_classifier/Data_Generator/”, which you can check out too.

I have got only 2 classes for this mask detection, and you need to update those details in “labelmap.pbtxt”, as well as in “faster_rcnn_inception_v2_pets.config” and “generate_tfrecord.py”.

You can also refer to my previous blog where I have given the details of how it has to be done.

However, I will also show you how to do it here.

Now go to the training folder, as shown below.

folder you need to access

In both of these files you have to update the class names, i.e. mask and no mask.

labelmap.pbtxt file

These are the changes to be made in “labelmap.pbtxt”.
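For reference, a two-class labelmap would look something like this (a minimal sketch; the exact class names are my assumption, and the ids must match what generate_tfrecord.py returns):

```
item {
  id: 1
  name: 'mask'
}
item {
  id: 2
  name: 'no_mask'
}
```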

In the “faster_rcnn_inception_v2_pets.config” file, we need to make several changes so that the paths point to Google Drive.

The first change is on line 9, where you need to set the number of classes to 2.

change the number of classes line 9

Then scroll down to line 126 and make the following changes.

start changing from line 126

You have to make sure that you give the paths exactly the same way I have given here. Also, take note of line 132.

num_examples → this is nothing but the total number of images you have used for testing.
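As a sketch, the path-related fields in that region of the config end up looking like this (the Drive paths assume the tensorflow1 folder layout used throughout this blog; replace num_examples with your own test-image count):

```
fine_tune_checkpoint: "/content/gdrive/My Drive/tensorflow1/models/research/object_detection/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"

train_input_reader: {
  tf_record_input_reader {
    input_path: "/content/gdrive/My Drive/tensorflow1/models/research/object_detection/train.record"
  }
  label_map_path: "/content/gdrive/My Drive/tensorflow1/models/research/object_detection/training/labelmap.pbtxt"
}

eval_config: {
  # set this to your total number of test images
  num_examples: 67
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "/content/gdrive/My Drive/tensorflow1/models/research/object_detection/test.record"
  }
  label_map_path: "/content/gdrive/My Drive/tensorflow1/models/research/object_detection/training/labelmap.pbtxt"
}
```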

Now you need to search for “ generate_tfrecord.py “ which will be in the Object Detection folder.

Here are a few extra lines of code that you’re supposed to change, since some of the calls in my previous blog are deprecated. That doesn’t mean the code will not run; it will just give you deprecation warnings.

The first thing is to update the classes, as shown from line 32.

with mask and no mask
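That change is the class mapping near line 32; a minimal sketch (the exact label strings are assumptions here and must match the names in your CSV files and in labelmap.pbtxt):

```python
# Sketch of the class mapping in generate_tfrecord.py.
# The integer ids must line up with labelmap.pbtxt.
def class_text_to_int(row_label):
    if row_label == 'mask':
        return 1
    elif row_label == 'no_mask':
        return 2
    else:
        return None
```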

These steps are to be followed only if you’re using my other Git repo; if not, you can skip this step.

tf.io.gfile.GFile

The new code is “tf.io.gfile.GFile” → it replaces the older “tf.gfile.GFile”, which is deprecated.

Next is line 88, where you can see a difference in the code. It is nothing big, but still, a difference is a difference.

line 88

Last but not least, the last line:

Last line of the program

Once you’re done with these changes, you’re good to go.

Updating Google Drive

Now save all the changes you made to the code, and create a new folder in your Google Drive named tensorflow1.

Then upload all the files and folders inside tensorflow1 on your local machine to your Google Drive. It will be around 350 MB once you’re done uploading.

Once you are done, you will have all the files and folders as shown below.

Google Drive

With that, we are done uploading the data required to Google Drive.

Now select New and follow steps 1, 2, and 3 as shown below.

Create Google Colaboratory Notebook

Now you will be redirected to your Google Colab notebook, which is almost identical to a Jupyter notebook; all the shortcuts are the same.

Now type the code as given below,

Mount Your Google Drive

from google.colab import drive

drive.mount('/content/gdrive')

If you run this cell, a URL will appear below it. Open it, allow access to your Google Drive, copy the authorization code it gives you, paste it back into the cell, and you’re done.

Now you can access any files from your Google Drive.

Now we need to select TensorFlow version 1.x, as this project hasn’t been updated to TensorFlow 2.x yet.

So get along and follow the steps as shown below

Check TensorFlow Version

The code is given below; you can use this technique to select any specific version you want.

%tensorflow_version 1.x

Setting up Google Colab

First, move to the tensorflow1 location in your gdrive.

Setting up Google Colab

Once you’re done with that, you need to install the protobuf compiler. Whatever shell command you run in Colab, you’re supposed to always prefix it with “!”.

Now we need to compile the protobuf model definitions. Change directory to the research folder and run the following code.

change directory

!protoc object_detection/protos/*.proto --python_out=.

Set the environment variables

import os

os.environ['PYTHONPATH'] += ':/content/gdrive/My Drive/tensorflow1/models/research/:/content/gdrive/My Drive/tensorflow1/models/research/slim'

Google Colab environment setup

For a session restart do the following

session restart

Always add a “!” before shell commands. Don’t forget this.

Now, in our Google Drive, we need to change to the object_detection directory, which contains your train and test images with a respective XML file for each image. We need to create TensorFlow record files from the label data we have.

Use the following code to change directory,

%cd /content/gdrive/My Drive/tensorflow1/models/research/object_detection

For creating a training record, we are using the following code,

!python generate_tfrecord.py --csv_input=images/train_labels.csv --image_dir=images/train --output_path=train.record

Now for creating a testing record, we are using the following code,

!python generate_tfrecord.py --csv_input=images/test_labels.csv --image_dir=images/test --output_path=test.record
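For intuition, generate_tfrecord.py essentially groups the CSV annotation rows by image filename and then serializes one record per image. A pure-Python sketch of that grouping step (the column names follow the usual train_labels.csv layout, which is an assumption here):

```python
import csv
import io
from collections import defaultdict

def group_by_filename(csv_text):
    """Group annotation rows by image filename, one group per future record."""
    groups = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        groups[row['filename']].append(row)
    return dict(groups)

# Two boxes on img1.jpg and one on img2.jpg should give two groups.
sample = (
    "filename,width,height,class,xmin,ymin,xmax,ymax\n"
    "img1.jpg,640,480,mask,10,20,110,120\n"
    "img1.jpg,640,480,no_mask,200,60,300,160\n"
    "img2.jpg,640,480,mask,30,40,130,140\n"
)
groups = group_by_filename(sample)
```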

With that, all the preparation work needed for training is done.

Now all that is left is training the model.

Training your model

Now make sure that you are in the object_detection directory and then you can execute the following code,

!python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config

I have trained the model up to 64,335 steps.

Training Output

Since the free GPU session lasts only about 12 hours, train for as long as the session allows and then proceed.

The next step is similar to my previous blog: step 8, creating the inference graph.

You can follow this step if you’re testing in Google Colab.

Testing our model

Once you have the frozen graph, you need to open the “Object_detection_image.py” file in the object detection folder.

Copy the entire code and paste it into a new cell in your Colab notebook. Once you’re done with that, run it with an image.
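One Colab-specific gotcha when pasting that script: cv2.imshow() does not work inside a notebook, so you have to swap the display call, either for google.colab.patches.cv2_imshow or for matplotlib. A minimal sketch of the matplotlib route (the helper name is mine):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; in Colab the default inline backend works
import matplotlib.pyplot as plt

def show_bgr(image_bgr):
    """Display an OpenCV-style BGR image with matplotlib, which expects RGB."""
    image_rgb = image_bgr[..., ::-1]  # reverse the channel order
    plt.imshow(image_rgb)
    plt.axis("off")
    plt.show()
    return image_rgb  # returned so the conversion can be inspected
```

In the pasted script, replace the final cv2.imshow(...) / cv2.waitKey(0) pair with a call to a helper like this one.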

You will get an output as shown below.

Final output.

If you want to work with TensorFlow 2.x on your local machine, you can follow the blog given below; it also covers a few things I haven’t covered here, like:

  • how to use LionBridge for data annotation
  • a script for converting XML/JSON (another open-source contribution)
  • how to update the batch size and training steps
  • how to update the loss function used for classification
  • how to train your model if you have a GPU

If you wish to stay connected,

you can just google “narenltk / narendiran krishnan” or drop a mail to narenltk@gmail.com → happy to help..!!!
