Making a Banana Seeker Robot with Coral Edge TPU
TL;DR: In this article you will learn how to make a robot that moves closer to a banana, using an object detection model accelerated by the Edge TPU.
How does it work?
This robot is built on a Raspberry Pi 3 B+ connected to an L298N motor driver that drives two motors, and it detects objects with a camera, with inference accelerated by the Coral USB Accelerator.
Let’s walk through how it works.
Object Detection
Object detection is an algorithm or a program that, as the name suggests, detects particular objects within an image. Typically it marks the locations of objects with rectangles (see the attached image).
Today you can implement object detection easily, because most of these techniques are based on deep learning and pre-trained models are available from several sources.
Raspberry Pi + Coral USB Accelerator + MobileNet SSD v2
It is a bit tough to run an object detection model on a Raspberry Pi: it consumes a lot of CPU resources, so detection is slow (typically more than a second per image) even with a model designed for mobile use such as MobileNet SSD.
With the Coral USB Accelerator, you can make detection more than 10x faster.
You can find a quick and easy object detection walkthrough with the Edge TPU on the Coral website, but it doesn’t include a real-time demo. If you want to monitor what’s detected on the camera in real time, you can build it with picamera and its overlay function, as sketched below.
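Here is a minimal sketch of such a real-time preview. It is only an assumption-laden example: it uses the pycoral library (your Edge TPU API may differ), a placeholder model path, and picamera’s overlay API to draw bounding boxes over the preview.

import io

import picamera
from PIL import Image, ImageDraw
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

MODEL = 'ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite'  # placeholder path

interpreter = make_interpreter(MODEL)
interpreter.allocate_tensors()
in_w, in_h = common.input_size(interpreter)

with picamera.PiCamera(resolution=(640, 480), framerate=10) as camera:
    camera.start_preview()
    overlay = None
    stream = io.BytesIO()
    for _ in camera.capture_continuous(stream, format='rgb', use_video_port=True):
        stream.seek(0)
        frame = Image.frombytes('RGB', (640, 480), stream.read())
        stream.seek(0)
        stream.truncate()

        # Run inference on the Edge TPU.
        common.set_input(interpreter, frame.resize((in_w, in_h)))
        interpreter.invoke()
        objs = detect.get_objects(interpreter, score_threshold=0.5)

        # Draw bounding boxes on a transparent canvas, then swap overlays.
        canvas = Image.new('RGBA', (640, 480))
        draw = ImageDraw.Draw(canvas)
        for obj in objs:
            b = obj.bbox
            draw.rectangle([b.xmin * 640 // in_w, b.ymin * 480 // in_h,
                            b.xmax * 640 // in_w, b.ymax * 480 // in_h],
                           outline='red')
        new_overlay = camera.add_overlay(canvas.tobytes(), size=canvas.size,
                                         format='rgba', layer=3)
        if overlay:
            camera.remove_overlay(overlay)
        overlay = new_overlay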
Motor Driver
Meanwhile, we need to drive motors so that the Raspberry Pi works like a robot! Fortunately, the Raspberry Pi has GPIO pins that can send control signals to external devices. But do not drive motors directly from the GPIO; use an L298N H-bridge DC motor driver instead. The L298N can drive DC motors from the Raspberry Pi GPIO’s LVTTL signals.
You can control this motor driver with 3 GPIO pins per motor, feeding HIGH or PWM to the EN pin and HIGH/LOW to the IN1/IN2 pins. The code would be something like below.
import RPi.GPIO as GPIO

# Setup
GPIO.setmode(GPIO.BCM)
GPIO.setup(14, GPIO.OUT)  # connect to IN1
GPIO.setup(15, GPIO.OUT)  # connect to IN2
GPIO.setup(18, GPIO.OUT)  # connect to ENA

# Run a motor CW or CCW (depends on how you connect)
GPIO.output(14, GPIO.HIGH)
GPIO.output(15, GPIO.LOW)
GPIO.output(18, GPIO.HIGH)
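If you want to vary the motor speed rather than run at full power, you can feed PWM to the EN pin instead of a constant HIGH. A minimal sketch with RPi.GPIO’s software PWM, using the same pin numbers as above:

pwm = GPIO.PWM(18, 1000)  # 1 kHz PWM on ENA
pwm.start(60)             # run the motor at ~60% duty cycle

# Later, adjust the speed or stop:
pwm.ChangeDutyCycle(80)
pwm.stop()
GPIO.cleanup()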
If you are interested in the mechanism of this driver, refer to this tutorial.
Simple Logic to Seek a Banana
Now we have an object detection model running with the Raspberry Pi camera, and we can drive motors. Let’s combine these two functions and make them work like an intelligent robot. This time we want the robot to seek a banana (here meaning: follow a banana). How can we achieve this?
This could be made very complex, but let’s keep it simple for now: if the banana is on the left side of the camera frame, turn left; if it is on the right side, turn right. A sketch of this logic follows.
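Here is a minimal sketch of that steering logic, reusing objs and in_w from the detection sketch above. turn_left(), turn_right(), and go_straight() are hypothetical helpers that would set the GPIO pins as shown in the motor driver section, and the banana label id depends on your label file.

BANANA_ID = 51  # id of 'banana' in Coral's coco_labels.txt; check your label file

bananas = [o for o in objs if o.id == BANANA_ID]
if bananas:
    # Follow the highest-scoring banana.
    best = max(bananas, key=lambda o: o.score)
    center_x = (best.bbox.xmin + best.bbox.xmax) / 2
    if center_x < in_w / 3:
        turn_left()        # banana is in the left third of the frame
    elif center_x > in_w * 2 / 3:
        turn_right()       # banana is in the right third of the frame
    else:
        go_straight()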
Wrap up — why on the edge? Why an accelerator?
You may wonder why we run the object detection model on the Edge TPU instead of in the cloud or on the CPU. It depends on the application. In this application the robot detects objects and drives the motors for each frame, at about 10 frames per second. What if detection took more than a second per frame? The robot would have to wait a second to detect, drive the motors, wait another second, drive the motors again… You see, fast iteration is needed here. As written earlier, the CPU is way too slow for object detection, so what about the cloud? It can be even faster than the Edge TPU, because you can use a powerful GPU or even a Cloud TPU. However, detecting objects in the cloud means sending an image to the cloud for every frame, which requires bandwidth and doesn’t work offline.
Check out the whole source code on my GitHub repository.
There will be an article about integrating with the cloud… maybe!