Camera-based line following with the Revolution Robotics Challenge Kit using a Convolutional Neural Network

David Dudas
7 min read · Mar 4, 2020


In this tutorial you will learn how to train a line-following robot built from the Revolution Robotics Challenge Kit.

Line following with the Revolution Robotics Challenge kit using Tensorflow

While the concept of a camera-based line-following robot has been around for a while, here we’ll learn how to detect and follow lines with a CNN. CNN stands for Convolutional Neural Network, a class of deep neural networks most commonly applied in computer vision.

There are three main types of CNN based computer vision models: classification, detection, and segmentation.

Classification determines the category (or “class”) that an image, or an object in an image, belongs to; the task can range from a simple yes/no to hundreds or thousands of classes. For example, classification complexity could range from “Does this image contain a bird?” to “What species of bird is in this image?”

In this tutorial we’ll use a classification model to follow a line with 4 possible class results:

  1. Straight ahead,
  2. Left turning line,
  3. Right turning line, or
  4. No line located.

(In later tutorials I’ll show a “simple” line-following model that uses pure OpenCV functions without a neural network, as well as more advanced detection- and segmentation-based models.)

First, we are going to capture a few images using a USB camera plugged into the robot to create our training image set, then we’ll train and deploy our CNN to classify the direction of the line in each image.

In this blog post, we’re going to use ROS, OpenCV and Tensorflow running on the Raspberry Pi Zero W that is inside the Revolution Robotics robot brain. First, let’s cover a few assumptions that go along with this tutorial.

Let’s get started.

Assumptions

In this tutorial, I will assume that you already own a Revolution Robotics kit and you know how to install and setup the latest Raspbian OS based on Debian Buster. I will also assume that you know what ROS, OpenCV and Tensorflow are and that you know how to use SSH and GIT. You should also know how to install and build packages on Raspbian.

The Raspberry Pi Zero W uses the Broadcom BCM2835 SoC with a 1GHz ARM1176JZF-S processor using the ARMv6-architecture and a VideoCore IV graphics processing unit. Fun fact: the ARM1176JZF-S is the same CPU core used in the first iPhone in 2007.

Unfortunately, we cannot utilize the VideoCore IV GPU in Deep Learning applications because popular libraries for using neural networks like TensorFlow and PyTorch do not officially support OpenCL. This means we have to run our trained neural network on the CPU of the Raspberry Pi Zero W. Yes, you read that correctly, we’ll run it on the CPU of the Raspberry Pi Zero W!

Due to the limited CPU resources, building all the necessary packages takes a really, really long time on the Raspberry Pi Zero W so I suggest compiling and installing the packages on a more powerful Raspberry Pi if you have access to one. For example, I used a 4B version with 4GB RAM for this purpose and then just swapped the SD card into the Zero W when I was done. Be careful using Python’s pip package manager because OpenCV and Tensorflow have different packages for ARMv6 and ARMv7. We’ll need the packages for ARMv6 on the Zero W.
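A quick way to check which package family a given board needs is to inspect the machine architecture from Python. A minimal sketch; `platform.machine()` reports `armv6l` on the Zero W and `armv7l` on a 32-bit Raspbian Pi 3/4:

```python
import platform

def is_armv6():
    """True when running on an ARMv6 board such as the Pi Zero W.

    On a Pi Zero W platform.machine() returns "armv6l"; on a Pi 3/4
    running 32-bit Raspbian it returns "armv7l", which needs the
    other OpenCV/Tensorflow wheels.
    """
    return platform.machine().startswith("armv6")
```

Running this on the build machine before `pip install` saves you from discovering the mismatch at import time on the Zero W.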

For purposes of this tutorial, you should install and be familiar with the following software:

  • Python 3.7
  • OpenCV 3 (In this tutorial I’m using 3.4.6.27)
  • Tensorflow 1.14.0

ROS Melodic with the following packages:

And of course, I’m assuming that you have an assembled robot, something like this one:

Revolution Robotics Challenge Kit

Step #1: Capture the training set

Before we can start training our network, we need four sets of images, one for each of the classes described above:

  1. Straight ahead,
  2. Left turning line,
  3. Right turning line, or
  4. No line located.

Once we train our CNN, it will be able to classify where the line is in the camera image, and we can implement the control logic that decides what action is required to follow the line.

For this tutorial, I recommend a very simple training track (a straight line) to create the dataset for training.

Simple training track
Image through the wide angle lens

Place the robot at different positions on the track and capture a set of images. The save_training_pictures.py ROS node will capture a series of images with a 2Hz rate.

You should save about 50–60 pictures for each of the four conditions, and you should apply the same pre-processing that you’ll apply during the line-following scenario. In my case, I cropped a smaller 200x45 pixel rectangle from the 320x240 pixel camera image because I used a camera with a super wide-angle lens. Due to the limited computing power of the Pi Zero, I suggest using a low-resolution camera because it struggles to handle too many pixels. Besides, our final CNN model will use a 28x28 pixel image to classify the line.
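The crop step can be sketched with plain NumPy slicing. The 200x45 window size matches the text above, but the x/y offsets below are illustrative assumptions that depend on how your camera is mounted:

```python
import numpy as np

def crop_roi(frame, x=60, y=160, w=200, h=45):
    """Crop a 200x45 region of interest from a 320x240 camera frame.

    The (x, y) offsets are illustrative; tune them so the crop
    covers the patch of floor directly in front of the robot.
    NumPy slicing is row-major, so rows (y) come first.
    """
    return frame[y:y + h, x:x + w]
```

The same crop must be applied both when saving training pictures and later at inference time, otherwise the CNN sees a different distribution of images than it was trained on.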

Data flow during saving the training images

First, you have to launch your camera node and then start the save_training_pictures node with the following ROS command:

rosrun line_follower save_training_pictures.py
Training images. Line in the center (top), line on left (middle) and line on the right (bottom)

Once you capture all the images for the four conditions, place them in separate folders named forward, left, right, and nothing, respectively.
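With the folders in place, a training script can recover each image’s label directly from its folder name. A minimal sketch (the `list_labeled_files` helper is hypothetical, not part of the repo; it only enumerates paths and labels, leaving image loading to OpenCV later):

```python
import os

# Folder names double as class names; the index is the label.
CLASSES = ["forward", "left", "right", "nothing"]

def list_labeled_files(root):
    """Return (path, label) pairs from one sub-folder per class."""
    pairs = []
    for label, name in enumerate(CLASSES):
        folder = os.path.join(root, name)
        if not os.path.isdir(folder):
            continue
        for fname in sorted(os.listdir(folder)):
            pairs.append((os.path.join(folder, fname), label))
    return pairs
```

Keeping the label order fixed in one list means the training script and the line detector node agree on which output neuron means which direction.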

Step #2: Training the CNN

You can train your network on the Raspberry Pi Zero W, but I suggest doing it on a more powerful computer with a compatible version of Tensorflow installed to build the trained model.

I decided to use the LeNet-5 architecture, similar to this great tutorial; however, I made some changes to the network to keep it compatible with the ARMv6 architecture of the Raspberry Pi Zero W, because there is a Tensorflow bug when fusing the Conv2D and ReLU operations.

LeNet-5 architecture

We’ll create the LeNet-5 with 4 parameters: width, height, depth, and classes. Width and height are equal to the width and height of the image that we will use as input to the CNN. In our case, we’ll use 28x28 pixel images in the input layer. Depth defines the number of channels in the input image: 1 for monochrome and 3 for RGB. We’ll use monochrome pictures to shrink and speed up the model. Finally, classes defines the number of different types of images we want to classify. We have to distinguish images with straight, left-turning, and right-turning lines from images without any line.
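The four-parameter constructor described above can be sketched with tf.keras. This is a sketch following the classic LeNet-5 layout, not necessarily the exact layer sizes or activation arrangement of my actual model (which was tweaked around the ARMv6 Conv2D/ReLU fusing bug):

```python
from tensorflow.keras import layers, models

def build_lenet(width=28, height=28, depth=1, classes=4):
    """LeNet-5-style classifier: two conv/pool blocks, three dense layers."""
    return models.Sequential([
        layers.Input(shape=(height, width, depth)),
        layers.Conv2D(6, (5, 5), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(16, (5, 5), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(84, activation="relu"),
        layers.Dense(classes, activation="softmax"),  # one output per class
    ])
```

`build_lenet(28, 28, 1, 4)` yields a model with only a few hundred thousand parameters, small enough to run on the Zero W’s CPU.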

During training we can observe the accuracy of our model; the achieved 99.5% is a satisfying number. Once the model is trained, we can deploy it onto our robot brain.

The training process

Step #3: Line following with the trained CNN on the Robot Brain

We’ll use the line_detector_cnn_pi.py ROS node to run the CNN and determine the class of the line in front of the robot and the line_controller_cnn_pi.py node to control the robot. You can find my ROS launch file here to run every necessary node.

Data flow during line following

Let’s start a ROS core, our framework, and the diff_drive_controller node. Next we start the camera handler node; I set my camera’s resolution to 320x240 pixels (as we discussed above, we don’t want to deal with too many pixels) and the frame rate to 10Hz. Without the CNN-based line detector node, all the other nodes consume about 20% of the CPU, so we have a pretty tight budget to fit our neural network into. Let’s see how small and fast a CNN we can implement!

The line detector node first crops the image, resizes it and runs the inference on the processed image using the trained model:
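The per-frame pipeline can be sketched roughly like this. It is a simplified stand-in for `line_detector_cnn_pi.py`: the crop offsets are the illustrative ones from Step #1, and a nearest-neighbour resize replaces `cv2.resize` so the snippet only needs NumPy:

```python
import numpy as np

CLASSES = ["forward", "left", "right", "nothing"]

def classify_frame(model, frame, crop=(60, 160, 200, 45)):
    """Crop, resize to 28x28, normalise, and classify one camera frame.

    `model` is any object with a Keras-style predict(); the crop
    offsets are illustrative assumptions, not the exact values
    from the original node.
    """
    x, y, w, h = crop
    roi = frame[y:y + h, x:x + w].astype("float32") / 255.0
    # Nearest-neighbour downscale to 28x28 (cv2.resize in the real node).
    ys = np.linspace(0, h - 1, 28).astype(int)
    xs = np.linspace(0, w - 1, 28).astype(int)
    small = roi[np.ix_(ys, xs)]
    probs = model.predict(small.reshape(1, 28, 28, 1))
    return CLASSES[int(np.argmax(probs))]
```

The returned class name is what gets published for the controller node to act on.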

Based on the result of the inference we can control the robot to follow the track using the line controller node:
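The control logic can be as simple as mapping each class to a pair of wheel speeds. A minimal sketch; the speed values and the stop-on-“nothing” policy are illustrative, not the exact behaviour of `line_controller_cnn_pi.py`:

```python
def line_action(class_name, base_speed=0.2, turn_gain=0.5):
    """Map the CNN's class to (left_wheel, right_wheel) speed commands."""
    if class_name == "forward":
        return base_speed, base_speed                     # drive straight
    if class_name == "left":
        return base_speed * (1 - turn_gain), base_speed   # slow left wheel
    if class_name == "right":
        return base_speed, base_speed * (1 - turn_gain)   # slow right wheel
    return 0.0, 0.0                                       # no line: stop
```

Because classification runs at about 10Hz, this simple bang-bang style steering is updated often enough to keep the robot on the track without any smoothing.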

We can achieve about a 10Hz image processing rate (with 100% CPU load :-)) with our CNN, which is a pretty nice result from the Raspberry Pi Zero’s CPU!

CPU load (right) and image processing time in ms (left)

Today we built a line following robot with the Revolution Robotics Challenge kit and learnt how to train and run a CNN model on our Raspberry Pi Zero W with ROS.

If you liked this post, please give it a clap and let me know what you would like to see in my next tutorial.
