How to easily Detect Objects with Deep Learning on Raspberry Pi

The real world poses challenges like limited data and tiny hardware like mobile phones and Raspberry Pis that can’t run complex Deep Learning models. This post demonstrates how you can do object detection using a Raspberry Pi: detecting cars on a road, oranges in a fridge, signatures in a document, or Teslas in space.

Sarthak Jain
Mar 20, 2018 · 10 min read

Disclaimer: I’m building NanoNets to help you build ML with less data and no specialized hardware

If you’re impatient, scroll to the bottom of the post for the GitHub repos

Detecting Vehicles on the Road of Mumbai

Why Object Detection? Why Raspberry Pi?

Now you will be able to detect a photobomber in your selfie, someone entering Harambe’s cage, where someone kept the Sriracha or an Amazon delivery guy entering your house.

What is Object Detection?

To mimic human-level performance, scientists broke down the visual perception task into four different categories:

  1. Classification, assigns a label to an entire image
  2. Localization, assigns a bounding box to a particular label
  3. Object Detection, draws multiple bounding boxes in an image
  4. Image segmentation, creates precise segments of where objects lie in an image

Object detection is good enough for a variety of applications. Even though image segmentation gives a much more precise result, it suffers from the complexity of creating training data: it reportedly takes a human annotator about 12x more time to segment an image than to draw bounding boxes (this figure is anecdotal and lacks a source). Also, after detecting an object, it is possible to separately segment it from within its bounding box.

Using Object Detection:

How do I use Object Detection to solve my own problem?

  1. Is an object present in my image or not? e.g. is there an intruder in my house?
  2. Where is an object in the image? e.g. when a car is trying to navigate its way through the world, it’s important to know where each object is.
  3. How many objects are there in an image? Object detection is one of the most efficient ways of counting objects. e.g. how many boxes are in a rack inside a warehouse?
  4. What are the different types of objects in the image? e.g. which animal is in which part of the zoo?
  5. What is the size of an object? Especially with a static camera, it is easy to figure out the size of an object. e.g. what is the size of the mango?
  6. How are different objects interacting with each other? e.g. how does the formation on a football field affect the result?
  7. Where is an object with respect to time (tracking an object)? e.g. tracking a moving object like a train and calculating its speed.

Object Detection in under 20 Lines of Code

YOLO Algorithm Visualized

There are a variety of models/architectures used for object detection, each with trade-offs between speed, size, and accuracy. We picked one of the most popular ones, YOLO (You Only Look Once), and have shown how it works below in under 20 lines of code (if you ignore the comments).

Note: This is pseudo code, not intended to be a working example. Its black box is the CNN part, which is fairly standard and shown in the image below.

You can read the full paper here:

Architecture of the Convolutional Neural Network used in YOLO
YOLO in <20 lines of code, explained
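Since the embedded gist does not render here, below is a minimal sketch of the YOLO decoding step in plain Python/NumPy. The CNN itself is the black box mentioned above and is faked with random output; the grid size, boxes per cell, class count, and confidence threshold follow the YOLOv1 paper but are illustrative assumptions, not the article’s exact code:

```python
import numpy as np

S, B, C = 7, 2, 20          # grid size, boxes per cell, classes (YOLOv1 defaults)
CONF_THRESH = 0.2

def cnn(image):
    # Black box: the convolutional network. Here we fake its output:
    # an S x S x (B*5 + C) tensor of box predictions and class scores.
    rng = np.random.default_rng(0)
    return rng.random((S, S, B * 5 + C))

def detect(image):
    """Decode the CNN output grid into (box, class, confidence) detections."""
    pred = cnn(image)
    detections = []
    for row in range(S):
        for col in range(S):
            cell = pred[row, col]
            class_probs = cell[B * 5:]
            for b in range(B):
                x, y, w, h, conf = cell[b * 5:b * 5 + 5]
                # Box centres are predicted relative to the cell;
                # convert them to image-relative coordinates.
                cx, cy = (col + x) / S, (row + y) / S
                scores = conf * class_probs       # class-specific confidence
                cls = int(np.argmax(scores))
                if scores[cls] > CONF_THRESH:
                    detections.append(((cx, cy, w, h), cls, float(scores[cls])))
    # A real implementation would now apply non-max suppression
    # to merge overlapping boxes for the same object.
    return detections
```

The key idea is that YOLO looks at the image once: every grid cell predicts its boxes and class scores in a single forward pass, and detection reduces to decoding that tensor.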

How do we build a Deep Learning model for Object Detection?

The workflow for Deep Learning has 6 primary steps, broken into 3 phases:

  1. Gathering Training Data
  2. Training the Model
  3. Predictions on New Images

Phase 1 — Gather Training Data

Step 1. Collect Images (at least 100 per Object):

Step 2. Annotate (draw boxes on those Images manually):
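Most annotation tools (LabelImg, for example) export one XML file per image in the Pascal VOC format. A typical file, with hypothetical names and coordinates, looks like this:

```xml
<annotation>
  <filename>car_001.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>car</name>
    <bndbox>
      <xmin>48</xmin><ymin>120</ymin>
      <xmax>260</xmax><ymax>310</ymax>
    </bndbox>
  </object>
</annotation>
```

Each `object` element records one drawn box; an image with several objects gets several `object` elements in the same file.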

Phase 2 — Training a Model on a GPU Machine

Step 3. Finding a Pretrained Model for Transfer Learning:

You can find a bunch of pretrained models here

Step 4. Training on a GPU (cloud service like AWS/GCP etc or your own GPU Machine):

Docker Image

To start training the model you can run:

sudo nvidia-docker run -p 8000:8000 -v `pwd`:/data -m train -a ssd_mobilenet_v1_coco -e ssd_mobilenet_v1_coco_0 -p '{"batch_size":8,"learning_rate":0.003}'

Please refer to this link for details on how to use [-m mode] [-a architecture] [-h help] [-e experiment_id] [-c checkpoint] [-p hyperparameters]:

-h display this help and exit
-m mode: should be either `train` or `export`
-a architecture: the model architecture to use, e.g. `ssd_mobilenet_v1_coco`
-p key-value pairs of hyperparameters as a JSON string
-e experiment id, used as the path inside the data folder for the current experiment
-c checkpoint to use for export; applicable only when mode is `export`

You can find more details at:

To train a model you need to select the right hyperparameters.

Finding the right parameters

The art of “Deep Learning” involves a little trial and error to figure out which parameters give the highest accuracy for your model. There is some level of black magic associated with this, along with a little bit of theory. This is a great resource for finding the right parameters.

Quantize Model (make it smaller to fit on a small device like the Raspberry Pi or Mobile)

Small devices like mobile phones and the Raspberry Pi have very little memory and computation power.

Training neural networks is done by applying many tiny nudges to the weights, and these small increments typically need floating point precision to work (though there are research efforts to use quantized representations here too).

Taking a pre-trained model and running inference is very different. One of the magical qualities of Deep Neural Networks is that they tend to cope very well with high levels of noise in their inputs.

Why Quantize?

Neural network models can take up a lot of space on disk, with the original AlexNet being over 200 MB in float format for example. Almost all of that size is taken up with the weights for the neural connections, since there are often many millions of these in a single model.

The nodes and weights of a neural network are originally stored as 32-bit floating-point numbers. The simplest motivation for quantization is to shrink file sizes by storing the min and max for each layer, and then compressing each float value to an eight-bit integer. This reduces the file size by 75%.
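The min/max scheme above can be sketched in a few lines of NumPy. This is a toy illustration of the idea, not the actual TensorFlow transform used below:

```python
import numpy as np

def quantize(weights):
    """Map float32 weights to uint8 using the layer's min/max (linear quantization)."""
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / 255.0
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize(q, lo, scale):
    """Recover approximate float weights from the 8-bit representation."""
    return q.astype(np.float32) * scale + lo

weights = np.random.randn(1000).astype(np.float32)
q, lo, scale = quantize(weights)

# 4 bytes per float -> 1 byte per weight: a 75% size reduction.
print(weights.nbytes, "->", q.nbytes)  # prints: 4000 -> 1000

# Reconstruction error is bounded by roughly half a quantization step.
err = np.abs(dequantize(q, lo, scale) - weights).max()
```

Inference tolerates this rounding well because, as noted above, networks cope with noisy inputs; the error per weight is at most about half of one 8-bit step.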

Code for Quantization:

curl -L "" |
tar -C tensorflow/examples/label_image/data -xz
bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=tensorflow/examples/label_image/data/inception_v3_2016_08_28_frozen.pb \
--out_graph=/tmp/quantized_graph.pb \
--inputs=input \
--outputs=InceptionV3/Predictions/Reshape_1 \
--transforms='add_default_attributes strip_unused_nodes(type=float, shape="1,299,299,3")
remove_nodes(op=Identity, op=CheckNumerics) fold_constants(ignore_errors=true)
fold_batch_norms fold_old_batch_norms quantize_weights quantize_nodes
strip_unused_nodes sort_by_execution_order'

Note: Our docker image has quantization built into it.

Phase 3: Predictions on New Images using the Raspberry Pi

Step 5. Capture a new Image via the camera

For instructions on how to install the camera, check out this link

Code to Capture a new Image
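The capture gist is missing here; below is a minimal sketch using the picamera library. The resolution, warm-up delay, and timestamp-based filename scheme are our own choices, not the article’s exact code:

```python
import time

def timestamped_filename(prefix="capture"):
    """Build a unique filename so successive captures don't overwrite each other."""
    return "%s_%d.jpg" % (prefix, int(time.time()))

def capture_image(path=None):
    """Capture a single still from the Pi camera and return the saved path."""
    import picamera  # only available on the Raspberry Pi itself
    path = path or timestamped_filename()
    with picamera.PiCamera() as camera:
        camera.resolution = (640, 480)
        camera.start_preview()
        time.sleep(2)  # give the sensor time to adjust exposure
        camera.capture(path)
    return path
```

On the Pi, calling `capture_image()` saves a JPEG like `capture_1521540000.jpg` in the current directory, ready to be fed to the model in the next step.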

Step 6. Predicting a new Image

Once you’re done training the model, you can download it onto your Pi. To export the model, run:

sudo nvidia-docker run -v `pwd`:/data -m export -a ssd_mobilenet_v1_coco -e ssd_mobilenet_v1_coco_0 -c /data/0/model.ckpt-8998

Then download the model onto the Raspberry Pi.

Install TensorFlow on the Raspberry Pi

Depending on your device, you might need to change the installation a little:

sudo apt-get install libblas-dev liblapack-dev python-dev libatlas-base-dev gfortran python-setuptools libjpeg-dev
sudo pip install Pillow
sudo pip install tensorflow
git clone https://github.com/tensorflow/models.git
sudo apt-get install -y protobuf-compiler
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
export PYTHONPATH=$PYTHONPATH:/home/pi/models/research:/home/pi/models/research/slim

Run model for predicting on the new Image

python --model data/0/quantized_graph.pb --labels data/label_map.pbtxt --images /data/image1.jpg /data/image2.jpg
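The prediction script itself is not shown above; its core, assuming a TensorFlow 1.x frozen graph with the standard object-detection output tensor names (`detection_boxes`, `detection_scores`, `detection_classes`, `image_tensor`), looks roughly like this. The score threshold is also an assumption:

```python
def filter_detections(boxes, scores, classes, threshold=0.5):
    """Keep only detections whose confidence exceeds the threshold."""
    keep = [i for i, s in enumerate(scores) if s >= threshold]
    return [(boxes[i], classes[i], scores[i]) for i in keep]

def predict(graph_path, image):
    """Run one image through a frozen detection graph and return filtered detections."""
    import numpy as np
    import tensorflow as tf  # TensorFlow 1.x API, as used in this article
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(graph_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")
        with tf.Session(graph=graph) as sess:
            boxes, scores, classes = sess.run(
                ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
                feed_dict={"image_tensor:0": np.expand_dims(image, 0)})
    return filter_detections(boxes[0], scores[0], classes[0])
```

The graph outputs a fixed-size list of candidate boxes sorted by score, so thresholding is what turns the raw tensor into a usable set of detections.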

Performance Benchmarks on Raspberry Pi

Benchmarks for different Object Detection Models running on Raspberry Pi

Workflow with NanoNets:

We at NanoNets have a goal of making working with Deep Learning super easy. Object Detection is a major focus area for us and we have made a workflow that solves a lot of the challenges of implementing Deep Learning models.

How NanoNets makes the Process Easier:

1. No Annotation Required

2. Automatic Best Model and Hyper Parameter Selection

3. No Need for expensive Hardware and GPUs

4. Great for Mobile devices like the Raspberry Pi

Here is a simple snippet to make a prediction on an image using the NanoNets API.

Code to make a prediction on a new Image using NanoNets
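Since the gist is missing, here is a sketch of the HTTP call using the `requests` library. The endpoint path and the basic-auth convention (API key as the username, empty password) are assumptions based on the NanoNets docs; substitute your own model ID and API key:

```python
def prediction_url(model_id):
    # Assumed endpoint layout; check the NanoNets API docs for your account.
    return ("https://app.nanonets.com/api/v2/ObjectDetection/"
            "Model/%s/LabelFile/" % model_id)

def predict_image(image_path, model_id, api_key):
    """POST an image file to the NanoNets model and return the JSON response."""
    import requests
    with open(image_path, "rb") as f:
        response = requests.post(
            prediction_url(model_id),
            auth=(api_key, ""),      # API key as HTTP basic-auth username
            files={"file": f})
    response.raise_for_status()
    return response.json()
```

Because the model runs in the cloud, this is the only code the Raspberry Pi needs: no TensorFlow install, no quantization, just an HTTP request.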

Build your Own NanoNet

You can build your own model in two ways:

1. Using a GUI (also auto annotate Images):

2. Using our API:

Step 1: Clone the Repo

git clone
cd object-detection-sample-python
sudo pip install requests

Step 2: Get your free API Key

Step 3: Set the API key as an Environment Variable
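Assuming the variable name used by the sample repo is `NANONETS_API_KEY` (replace the placeholder with the key from the previous step):

```shell
export NANONETS_API_KEY=YOUR_API_KEY_GOES_HERE
```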


Step 4: Create a New Model

python ./code/

Note: This generates a MODEL_ID that you need for the next step

Step 5: Add Model Id as Environment Variable


Step 6: Upload the Training Data

python ./code/

Step 7: Train Model

python ./code/

Step 8: Get Model State

watch -n 100 python ./code/

Step 9: Make Prediction

python ./code/ PATH_TO_YOUR_IMAGE.jpg


NanoNets: Machine Learning API