Autonomous Driving Object Detection on the Raspberry Pi 4

Ethan Dell
Jan 29

Deep learning on the edge is now possible with TensorFlow Lite running on lightweight hardware such as the Raspberry Pi 4!

Testing a TensorFlow Lite Model on the Raspberry Pi 4

For this project, object detection performance was analyzed to see how the Raspberry Pi 4 performed when mounted in a moving vehicle and processing a live video feed. The trained model ran at 2.73 fps and did an impressive job classifying and localizing objects on the road!

Background

Object detection is a challenging task in computer vision. It involves localizing objects in an image and assigning a confidence score to each prediction, all from raw pixels. Deep learning has greatly accelerated progress on this task and enables high-performance models in fields such as autonomous driving!

For those unfamiliar with machine learning, deep learning, or object detection, a great introduction can be found here:

https://www.youtube.com/watch?v=pIciURImE04&t=138s&ab_channel=bitsNblobsElectronics

Training Details for the Model

An SSD-MobileNet-V2 TensorFlow Lite model was trained to perform single-shot object detection. Transfer learning was used, starting from a model trained on the COCO dataset. The BDD100K autonomous driving dataset was then used for additional training. This dataset contains 10 autonomous driving classes: traffic sign, traffic light, car, rider, motor, person, bus, truck, bike, and train. Training until the loss converged took around 17 hours on a Windows 10 machine.
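As a reference for the class set, here is a minimal sketch of a label map for those 10 classes in the format the TensorFlow Object Detection API expects; the ID assignment and file name are illustrative, not taken from the original training setup:

```python
# The 10 BDD100K detection classes listed above. The ID order here is
# illustrative; the label map generated for training may differ.
BDD100K_CLASSES = [
    "traffic sign", "traffic light", "car", "rider", "motor",
    "person", "bus", "truck", "bike", "train",
]

# Write a label_map.pbtxt in the format the TF Object Detection API expects.
with open("label_map.pbtxt", "w") as f:
    for idx, name in enumerate(BDD100K_CLASSES, start=1):
        f.write(f"item {{\n  id: {idx}\n  name: '{name}'\n}}\n")
```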

Training Losses

As a sanity check, some test images were run through the model to make sure it accurately detected the object classes it was trained on.

Vehicle Detection Test
Pedestrian Detection Test

These tests show that the model detects vehicles well (even in low light) and pedestrians too!
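A sanity check like this can be scripted. Below is a minimal sketch of running a test image through a TF2-exported detection SavedModel; the model path, image path, and 0.5 score threshold are assumptions, and the output keys follow the TensorFlow Object Detection API convention:

```python
import cv2
import numpy as np
import tensorflow as tf

# The model path, test image path, and score threshold are assumptions.
detect_fn = tf.saved_model.load("exported_model/saved_model")

image = cv2.imread("test_images/street.jpg")              # BGR uint8
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)              # model expects RGB
detections = detect_fn(tf.convert_to_tensor(rgb[np.newaxis, ...]))

boxes = detections["detection_boxes"][0].numpy()   # normalized [ymin, xmin, ymax, xmax]
scores = detections["detection_scores"][0].numpy()

h, w, _ = image.shape
for box, score in zip(boxes, scores):
    if score < 0.5:
        continue
    ymin, xmin, ymax, xmax = box
    cv2.rectangle(image, (int(xmin * w), int(ymin * h)),
                  (int(xmax * w), int(ymax * h)), (0, 255, 0), 2)

cv2.imwrite("test_images/street_detections.jpg", image)
```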

The TensorFlow model was then quantized. Quantization converts the 32-bit float weights to 8-bit integers, which enables faster detection with only a small drop in accuracy. This also allows the model to run on lightweight hardware like the Raspberry Pi!
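As a rough sketch, post-training quantization with the TFLite converter looks something like the following; the SavedModel path, calibration image loading, 300x300 input size, and normalization are assumptions, not details from the post:

```python
import glob

import cv2
import numpy as np
import tensorflow as tf

def representative_dataset():
    # A few hundred preprocessed frames let the converter calibrate
    # activation ranges. The paths, 300x300 input size, and [-1, 1]
    # normalization here are illustrative.
    for path in sorted(glob.glob("calib_images/*.jpg"))[:200]:
        img = cv2.imread(path)
        img = cv2.resize(img, (300, 300)).astype(np.float32)
        img = (img - 127.5) / 127.5
        yield [img[np.newaxis, ...]]

# Path to a TFLite-exportable SavedModel is an assumption.
converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset

tflite_model = converter.convert()
with open("detect_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```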

Testing Setup Hardware

Once the model was trained and quantized, and all the software was configured on the Pi, a testing setup was assembled.

For this project, the following hardware components were used:

  • Raspberry Pi 4 (4 GB RAM — 1.5GHz CPU)
  • Camera for capturing the video feed
  • GPIO push button to start the capture
  • RGB LED to indicate that the Pi is recording
Hardware Configuration Before Mounting in Vehicle

After getting everything wired together, the camera was mounted to a tissue box and secured on the dashboard of my car. The Pi was then powered through the car's 115V AC outlet.

Hardware Configuration After Mounting in Vehicle

When the GPIO button was pressed, the RGB LED turned on and the Pi began processing video frames and saving them to disk while driving.
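A minimal sketch of that trigger-and-capture loop is below, using gpiozero for the button and LED, OpenCV for grabbing frames, and the TFLite runtime for inference. The pin numbers, camera index, file paths, and score threshold are all assumptions, and the output tensor ordering follows the common SSD TFLite convention, so it should be checked against the actual model:

```python
import cv2
import numpy as np
from gpiozero import Button, RGBLED
from tflite_runtime.interpreter import Interpreter

# Pin numbers, camera index, model path, and output paths are assumptions.
button = Button(17)
led = RGBLED(red=22, green=27, blue=24)

interpreter = Interpreter(model_path="detect_quantized.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, in_h, in_w, _ = input_details[0]["shape"]

button.wait_for_press()       # block until the dashboard button is pressed
led.color = (0, 1, 0)         # light the LED to signal that capture has started

cap = cv2.VideoCapture(0)
frame_idx = 0
try:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        inp = cv2.resize(frame, (in_w, in_h))[np.newaxis, ...]
        if input_details[0]["dtype"] == np.float32:
            inp = (inp.astype(np.float32) - 127.5) / 127.5   # illustrative preprocessing
        interpreter.set_tensor(input_details[0]["index"], inp)
        interpreter.invoke()

        # Output ordering assumes the common SSD TFLite convention
        # (boxes, classes, scores, count); verify against output_details.
        boxes = interpreter.get_tensor(output_details[0]["index"])[0]
        scores = interpreter.get_tensor(output_details[2]["index"])[0]

        h, w, _ = frame.shape
        for box, score in zip(boxes, scores):
            if score < 0.5:                                  # illustrative threshold
                continue
            ymin, xmin, ymax, xmax = box
            cv2.rectangle(frame, (int(xmin * w), int(ymin * h)),
                          (int(xmax * w), int(ymax * h)), (0, 255, 0), 2)

        cv2.imwrite(f"frames/frame_{frame_idx:05d}.jpg", frame)
        frame_idx += 1
finally:
    cap.release()
    led.off()
```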

The gif at the top of the post shows how the model performed! Here is a still frame from the gif for reference!

If you’d like to test out this project for yourself, I made a GitHub tutorial here:

Here’s a YouTube video I made if you’d like a video tutorial:

Let me know what you think!

Future Work:

  • With more time spent on this project, it would be interesting to see how the detections could be used within a Multi-Object Tracking (MOT) system! Link for more background on Multi-Object Tracking: https://en.wikipedia.org/wiki/Multiple_object_tracking
