How to run TensorFlow Object Detection model on Jetson Nano

Chengwei Zhang
Apr 22, 2019 · 4 min read

Previously, you learned how to run a Keras image classification model on the Jetson Nano; this time you will learn how to run a TensorFlow object detection model on it. It could be a pre-trained model from the TensorFlow detection model zoo that detects everyday objects like people, cars, and dogs, or it could be a custom-trained model that detects your own objects.

For this tutorial, we will convert the SSD MobileNet V1 model trained on the COCO dataset for common object detection.

Here is a breakdown of how to make it happen, slightly different from the previous image classification tutorial.

  1. Download the pre-trained model checkpoint, build a TensorFlow detection graph, then create a TensorRT inference graph from it.
  2. Load the TensorRT inference graph on the Jetson Nano and make predictions.

Those two steps will be handled in two separate Jupyter Notebooks, the first one running on a development machine and the second one running on the Jetson Nano.

Before going any further, make sure you have set up your Jetson Nano and installed TensorFlow.

Step 1: Create the TensorRT model

Run this step on your development machine with a TensorFlow nightly build, which includes TF-TRT by default, or run it on this Colab notebook's free GPU.

In the notebook, you will start by installing the TensorFlow Object Detection API and setting up the relevant paths. Its official installation documentation might look daunting to beginners, but you can also do it by running just one notebook cell.
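As a rough illustration, that one-cell setup could look something like the cell below; the repository layout and the /content paths are assumptions for a Colab environment, not the exact cell from the notebook.

    # Illustrative Colab cell: clone the TensorFlow models repo, compile the
    # protobufs, and put the Object Detection API on the Python path.
    !git clone --depth 1 https://github.com/tensorflow/models.git
    !apt-get install -qq protobuf-compiler
    !cd models/research && protoc object_detection/protos/*.proto --python_out=.

    import sys
    sys.path.append('/content/models/research')
    sys.path.append('/content/models/research/slim')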

Next, you will download and build a detection graph from the pre-trained ssd_mobilenet_v1_coco checkpoint, or select another one from the list provided in the notebook.

The default TensorFlow object detection model takes a variable batch size; here it is fixed to 1, since the Jetson Nano is a resource-constrained device. The same call applies several other changes to the TensorFlow graph (see the sketch after this list):

  • The score threshold is set to 0.3, so the model will discard any prediction with a confidence score lower than the threshold.
  • The IoU (intersection over union) threshold is set to 0.5, so that overlapping detections of the same class are removed. You can read more about IoU and non-max suppression here.
  • Other modifications are applied to the frozen object detection graph for improved speed and reduced memory consumption.
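One way to download the checkpoint and apply those changes is with the helpers from NVIDIA's tf_trt_models repository; the sketch below assumes its download_detection_model and build_detection_graph functions and plugs in the values discussed above.

    # Sketch using the NVIDIA tf_trt_models helpers
    # (https://github.com/NVIDIA-AI-IOT/tf_trt_models).
    from tf_trt_models.detection import download_detection_model, build_detection_graph

    config_path, checkpoint_path = download_detection_model('ssd_mobilenet_v1_coco')

    frozen_graph, input_names, output_names = build_detection_graph(
        config=config_path,          # pipeline config of the pre-trained model
        checkpoint=checkpoint_path,  # downloaded checkpoint
        score_threshold=0.3,         # drop detections below this confidence
        batch_size=1                 # fixed batch size for the Jetson Nano
    )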

Next, we create a TensorRT inference graph, just as we did for the image classification model.
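With a TensorFlow 1.x build that includes TF-TRT, the conversion is a single call; the workspace size and precision mode below are reasonable choices rather than required values.

    # Convert the frozen detection graph into a TF-TRT inference graph.
    import tensorflow.contrib.tensorrt as trt

    trt_graph = trt.create_inference_graph(
        input_graph_def=frozen_graph,      # frozen graph built in the previous step
        outputs=output_names,              # detection output node names
        max_batch_size=1,
        max_workspace_size_bytes=1 << 25,  # ~32 MB TensorRT workspace
        precision_mode='FP16',             # FP16 runs well on the Nano's GPU
        minimum_segment_size=50            # only convert reasonably large subgraphs
    )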

Once you have the TensorRT inference graph, save it as a .pb file and download it from Colab, or copy it from your local machine, to your Jetson Nano as needed.
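Saving the serialized graph takes only a couple of lines; the file name here is just an example.

    # Serialize the TensorRT-optimized graph so it can be copied to the Jetson Nano.
    with open('./data/trt_graph.pb', 'wb') as f:
        f.write(trt_graph.SerializeToString())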

Step 2: Load the TensorRT graph and make predictions

On your Jetson Nano, start a Jupyter Notebook in the directory where you saved the downloaded graph file. The following code will load the TensorRT graph and make it ready for inferencing.
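A minimal sketch of that loading step, assuming the converted graph keeps the standard Object Detection API tensor names (image_tensor, detection_boxes, detection_scores, detection_classes, num_detections), looks like this.

    # Load the TensorRT-optimized frozen graph and look up the usual
    # Object Detection API input/output tensors.
    import tensorflow as tf

    def get_frozen_graph(graph_file):
        """Read a frozen TensorFlow graph definition from a .pb file."""
        with tf.gfile.GFile(graph_file, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
        return graph_def

    trt_graph = get_frozen_graph('./data/trt_graph.pb')

    # Let TensorFlow grow GPU memory usage gradually; the Nano's memory is
    # shared between CPU and GPU, so grabbing it all up front is a bad idea.
    tf_config = tf.ConfigProto()
    tf_config.gpu_options.allow_growth = True

    tf_sess = tf.Session(config=tf_config)
    tf.import_graph_def(trt_graph, name='')

    tf_input = tf_sess.graph.get_tensor_by_name('image_tensor:0')
    tf_scores = tf_sess.graph.get_tensor_by_name('detection_scores:0')
    tf_boxes = tf_sess.graph.get_tensor_by_name('detection_boxes:0')
    tf_classes = tf_sess.graph.get_tensor_by_name('detection_classes:0')
    tf_num_detections = tf_sess.graph.get_tensor_by_name('num_detections:0')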

Now we can make a prediction with an image and see if the model gets it right. Notice that we resized the image to 300 x 300; you can try other sizes, or keep the size unmodified, since the graph can handle variable-sized input. Keep in mind, though, that the Jetson Nano's memory is quite small compared to a desktop machine, so it can hardly handle large images.
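A single prediction could look like the following; the image file name is a placeholder.

    # Run one detection pass over a test image resized to 300 x 300.
    import numpy as np
    from PIL import Image

    image = Image.open('./data/example.jpg')         # placeholder test image
    image_resized = np.array(image.resize((300, 300)))

    scores, boxes, classes, num_detections = tf_sess.run(
        [tf_scores, tf_boxes, tf_classes, tf_num_detections],
        feed_dict={tf_input: image_resized[None, ...]})  # add the batch dimension

    boxes = boxes[0]       # (num_boxes, 4), normalized [ymin, xmin, ymax, xmax]
    scores = scores[0]
    classes = classes[0]
    num_detections = int(num_detections[0])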

If you have played around with the TensorFlow Object Detection API before, those outputs should look familiar.

The results might still contain overlapping predictions with different class labels; for example, the same object can be labeled with two classes in two overlapping bounding boxes.

We will use a custom non-max suppression function to remove the overlapping bounding boxes with lower prediction scores.
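A simple class-agnostic greedy NMS along these lines does the job; this is an illustrative implementation, not necessarily the exact function from the repo.

    import numpy as np

    def non_max_suppression(boxes, scores, iou_threshold=0.5):
        """Greedy class-agnostic NMS: keep the highest-scoring box, then drop any
        remaining box whose IoU with a kept box exceeds iou_threshold.
        Boxes are normalized [ymin, xmin, ymax, xmax]; returns kept indices."""
        order = np.argsort(scores)[::-1]
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
            # IoU of the current box against all remaining boxes.
            yy1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
            xx1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
            yy2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
            xx2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
            inter = np.maximum(0.0, yy2 - yy1) * np.maximum(0.0, xx2 - xx1)
            area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
            area_rest = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                         (boxes[order[1:], 3] - boxes[order[1:], 1]))
            iou = inter / (area_i + area_rest - inter + 1e-8)
            order = order[1:][iou <= iou_threshold]
        return keep

    keep = non_max_suppression(boxes[:num_detections], scores[:num_detections])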

Let's visualize the result by drawing bounding box and label overlays.

Here is the code to create the overlays and display them in the Jetson Nano's notebook.
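A matplotlib-based sketch, under the same assumptions as above (normalized boxes and the indices kept by the NMS step), could look like this.

    # Draw the surviving boxes and class labels on top of the resized image.
    import matplotlib.pyplot as plt
    import matplotlib.patches as patches

    fig, ax = plt.subplots(figsize=(8, 8))
    ax.imshow(image_resized)

    height, width = image_resized.shape[:2]
    for i in keep:
        ymin, xmin, ymax, xmax = boxes[i]
        # Boxes are normalized to [0, 1]; scale them back to pixel coordinates.
        rect = patches.Rectangle(
            (xmin * width, ymin * height),
            (xmax - xmin) * width, (ymax - ymin) * height,
            linewidth=2, edgecolor='lime', facecolor='none')
        ax.add_patch(rect)
        ax.text(xmin * width, ymin * height,
                'class {} ({:.2f})'.format(int(classes[i]), scores[i]),
                color='lime')
    plt.show()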

[Image: detection results with bounding box and label overlays]

In the COCO label map, class 18 is a dog and class 23 is a bear. The two dogs sitting there are incorrectly classified as bears. Maybe there are more sitting bears than standing dogs in the COCO dataset.

Running a speed benchmark similar to the one in the image classification tutorial, the Jetson Nano achieves 11.54 FPS with the SSD MobileNet V1 model and a 300 x 300 input image.
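The benchmark itself is essentially a warm-up run followed by a timed loop of sess.run calls; the iteration count below is an arbitrary choice.

    import time

    # Warm up once so graph initialization is not counted in the timing.
    tf_sess.run([tf_scores, tf_boxes, tf_classes, tf_num_detections],
                feed_dict={tf_input: image_resized[None, ...]})

    num_runs = 50
    start = time.time()
    for _ in range(num_runs):
        tf_sess.run([tf_scores, tf_boxes, tf_classes, tf_num_detections],
                    feed_dict={tf_input: image_resized[None, ...]})
    elapsed = time.time() - start
    print('Average FPS: {:.2f}'.format(num_runs / elapsed))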

Conclusion and further reading

In this tutorial, you learned how to convert a TensorFlow object detection model and run inference on the Jetson Nano.

Check out the updated GitHub repo for the source code.

If you are not satisfied with the results, there are other pre-trained models to take a look at. I recommend starting with SSD MobileNet V2 (ssd_mobilenet_v2_coco), or, if you are adventurous, try ssd_inception_v2_coco, which might push the limits of the Jetson Nano's memory.

You can find those models in the TensorFlow detection model zoo; the “Speed (ms)” metric gives you a rough guideline on the complexity of each model.

Thinking about training your own custom object detection model with a free data center GPU? Check out my previous tutorial: How to train an object detection model easy for free.

Originally published at https://www.dlology.com.
