Automated Driving Robot with a Raspberry Pi, an Arduino, a Pi Camera and an Ultrasonic Sensor

Nouamane Tazi
Nov 17, 2019 · 7 min read


Following a course about autonomous cars at CentraleSupélec in France, we took part in a self-driving car challenge in which I, along with a small group of friends, programmed a Cherokey 4WD mobile robot built around a Raspberry Pi, an Arduino Uno, an ultrasonic sensor, and a Pi Camera. We used Python and OpenCV for the image processing. We hope that, through this article, we can help you build your own version of the robot!

You can find the scripts we used here.

Objective :

The first challenge was to follow the white line around the “8” course while detecting obstacles with the help of the ultrasonic sensor. The second was to detect intersections and drive to a given coordinate on the grid course.

Pictures of the grid and the “8” paths

Final result :

The Hardware :

Pictures of the used robot

The Software :

  • VNC Viewer : To remotely control the desktop interface of the Raspberry Pi
  • C++ on the Arduino
  • Python + Numpy + OpenCV on the Raspberry Pi
  • Serial protocol for Arduino <-> Raspberry Pi communication (robust-serial)
  • Python + ZMQ on the server (the server can be a laptop for example)

Architecture Overview :

The server establishes the connection between the controller(s) and the robot(s). It forwards the commands (for example, “turn around” or “go to coordinate (2, 3) on the grid”) to the Raspberry Pi, which executes them while continuously processing the images coming from the Pi Camera to make sure the robot is still following the line. That last part is only possible through communication with the Arduino, which drives the motors. The ultrasonic sensor is there for obstacle detection.

VNC Viewer :

First of all, we connected the laptop and the Raspberry Pi to the same Wi-Fi network (for example, the laptop’s Wi-Fi hotspot), then configured VNC Viewer on the Raspberry Pi and started controlling the Raspberry from the laptop.

Note: The Raspberry Pi only needs to be configured for the Wi-Fi hotspot the first time. Afterwards, it connects to the same network automatically.

User Interface :

  • The controller : In our case, it’s the terminal. It’s where the user enters the commands to be executed (or, more simply, just a keypress on the terminal).
  • The server : Receives a command from the controller, processes it, and forwards it to one or multiple robots. For example, to go to a certain coordinate, it calculates the shortest path (using the A* algorithm) and sends the list of commands to execute to the Raspberry Pi.

Note : We chose to calculate the path on the server side rather than on the Raspberry Pi, because the latter needs all the resources it can get for the image processing that keeps the robot following the line, as we will detail later.

An easy implementation example can be found here.
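To make this more concrete, here is a minimal sketch of what the server side could look like. The grid layout, port numbers and command strings below are hypothetical and only illustrate the idea of computing a path with A* and forwarding the resulting commands over ZMQ; this is not our exact script.

```python
import heapq
import zmq

# Hypothetical 4x4 grid; cells listed in WALLS are blocked
GRID_SIZE = (4, 4)
WALLS = set()

def a_star(start, goal):
    """Shortest path on the grid with A* and a Manhattan-distance heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), start, [start])]
    visited = set()
    while frontier:
        _, current, path = heapq.heappop(frontier)
        if current == goal:
            return path
        if current in visited:
            continue
        visited.add(current)
        x, y = current
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < GRID_SIZE[0] and 0 <= nxt[1] < GRID_SIZE[1]
                    and nxt not in WALLS and nxt not in visited):
                heapq.heappush(frontier, (len(path) + h(nxt), nxt, path + [nxt]))
    return None

context = zmq.Context()
controller = context.socket(zmq.REP)   # receives commands from the terminal
controller.bind("tcp://*:5555")
robot = context.socket(zmq.PUSH)       # forwards commands to the Raspberry Pi
robot.bind("tcp://*:5556")

while True:
    # e.g. "goto 2 3" typed by the user in the controller terminal
    command = controller.recv_string()
    _, x, y = command.split()
    path = a_star((0, 0), (int(x), int(y)))
    robot.send_json({"type": "follow_path", "path": path})
    controller.send_string("path sent: %s" % path)
```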

Arduino :

The Arduino has three jobs: communicating with the Raspberry Pi, measuring the distance to the closest object in front of the ultrasonic sensor, and driving the motors. For the communication, we used a classical serial protocol, as found in the Cherokey docs.
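As a rough sketch of what the Raspberry Pi side of such a link can look like with pyserial, here is a small example. The frame layout (header byte plus 16-bit values) and the port name are purely hypothetical; the real format depends on the protocol you implement on the Arduino.

```python
import struct
import serial

# Open the USB serial link to the Arduino (the port name may differ on your Pi)
arduino = serial.Serial("/dev/ttyACM0", baudrate=115200, timeout=0.1)

def send_motor_command(left_speed, right_speed):
    """Send a hypothetical 5-byte frame: header 'M' + two signed 16-bit speeds."""
    arduino.write(b"M" + struct.pack("<hh", left_speed, right_speed))

def read_distance():
    """Read a hypothetical 3-byte frame: header 'D' + unsigned 16-bit distance (cm)."""
    if arduino.read(1) == b"D":
        raw = arduino.read(2)
        if len(raw) == 2:
            return struct.unpack("<H", raw)[0]
    return None

send_motor_command(100, 100)   # drive forward
print(read_distance())         # distance reported by the ultrasonic sensor
```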

Image Capturing :

We initialize our camera object, which gives us access to the Raspberry Pi camera module. We set the resolution of the camera stream to 320 x 240 with a maximum frame rate of 90 FPS. Then, as we can see here, the Pi Camera has many capture modes. That’s why we wrote this small script to compare the performance of each capture method (we also used it for the image processing later), and we finally found that this is the most suitable capture method for us, since it avoids the expensive compression to JPEG, which we would then have to decode back into OpenCV’s format anyway.
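Concretely, a capture loop of that kind looks roughly like the sketch below, using the picamera PiRGBArray class so that each frame is handed to OpenCV as a raw BGR array instead of a JPEG:

```python
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (320, 240)
camera.framerate = 90

# PiRGBArray gives us raw BGR frames, so there is no JPEG encode/decode step
raw_capture = PiRGBArray(camera, size=(320, 240))

for frame in camera.capture_continuous(raw_capture, format="bgr", use_video_port=True):
    image = frame.array          # numpy array, directly usable by OpenCV
    # ... image processing goes here ...
    raw_capture.truncate(0)      # clear the buffer before the next frame
```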

Image Processing :

Detecting the centroid

In order for the robot to follow the white line, we try to naively find its centroid, but before that the images need some preprocessing. Since the HSV format is better suited for picking color ranges (white in our case), we start by converting the image from RGB to HSV (Fig. 2), then we pick a range for the desired white color, and add some blur and dilation for neater results (Fig. 3), as shown here. Afterwards, we need to find the centroid of the white contour, but to avoid corrupted results caused by other white areas in the original picture, we only consider the biggest white contour. Finally, we find the desired centroid (the blue dot in the last figure), which is the point that we must follow.

Image preprocessing, and the centroid found in blue.
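A minimal version of that pipeline could look like this. The HSV bounds and kernel sizes below are placeholders to tune for your own lighting, not our exact values:

```python
import cv2
import numpy as np

def find_line_centroid(image):
    """Return the (cx, cy) centroid of the biggest white blob, or None."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

    # Keep only "white" pixels: low saturation, high value (bounds to be tuned)
    mask = cv2.inRange(hsv, np.array([0, 0, 180]), np.array([180, 60, 255]))

    # Blur + dilate to smooth the contour
    mask = cv2.GaussianBlur(mask, (5, 5), 0)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)

    # Keep only the biggest white contour to ignore other white areas
    # (OpenCV 4 signature; OpenCV 3 returns three values)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    line = max(contours, key=cv2.contourArea)

    m = cv2.moments(line)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```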

Detecting the intersections

For the grid course, we also needed to detect intersections. So, in addition to the processing above, we added another step, which consists of approximating the white contour with a polygon with the help of this OpenCV function; by counting the number of sides of this polygon, we can tell whether or not we are at an intersection.

Results of image processing. Blue : Centroid // Red : Intersection // Green : Line
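As a hedged sketch, that extra step can be bolted onto the same contour as follows. The epsilon factor and the side-count threshold are values to tune, not the exact ones we used:

```python
import cv2

def is_intersection(line_contour):
    """Approximate the line contour with a polygon and count its sides."""
    perimeter = cv2.arcLength(line_contour, True)
    polygon = cv2.approxPolyDP(line_contour, 0.02 * perimeter, True)

    # A straight line segment approximates to roughly 4 corners; a cross-shaped
    # intersection needs noticeably more (threshold to be tuned).
    return len(polygon) > 6
```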

Here’s what the final script looks like :

Following the line :

Now that we have the centroid’s coordinates, we can consider its x coordinate as the error we must reduce. For example, the code above gives us an error between 0 and 320, so we subtract 160 to bring it between -160 and 160, and divide by 160 to normalise it. We end up with an error between -1 and 1.

To regulate this error, we had several controller options. One particularity of our setup is that the friction between the wheels and the carpet the car moves on is quite high, so a simple proportional controller was enough to keep the car stable. After trying many values for the proportional coefficient and for the maximum and minimum velocities, this script turned out to work very well for us:
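The control law itself boils down to a few lines. Here is a minimal sketch, assuming the hypothetical send_motor_command(left, right) helper from the serial example above, and with placeholder gains and speed limits rather than our actual values:

```python
# Placeholder gains and speed limits (tune them for your own robot)
KP = 0.5
BASE_SPEED = 120
MIN_SPEED, MAX_SPEED = 60, 200

def clamp(speed):
    return max(MIN_SPEED, min(MAX_SPEED, speed))

def follow_line(centroid_x, image_width=320):
    # Normalise the error to [-1, 1]: 0 means the line is perfectly centered
    error = (centroid_x - image_width / 2) / (image_width / 2)

    # Proportional controller: steer by slowing one wheel and speeding up the other
    correction = KP * error * BASE_SPEED
    left = clamp(BASE_SPEED + correction)
    right = clamp(BASE_SPEED - correction)
    send_motor_command(int(left), int(right))
```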

Tip : To keep track of what the robot is seeing, and to check that the image processing is working as you wish, you can use cv2.imshow("test", img) and cv2.waitKey(1) to show the result of the image processing in real time, as shown below.

Obstacle Detection :

Following the docs found here, the Arduino initially sent all the measured distances to the Raspberry Pi, which then had to process all of them and give the command to stop when an obstacle was near. This method worked fine at first, but upon integrating the image processing part, we noticed a huge latency in the values received from the Arduino. To make up for that, we had the genius idea (not that genius, but it still saved our lives) of processing the measured distances on the Arduino, and only sending a signal:

  • when an obstacle is encountered.
  • when the path is clear.

without ever sending the same signal twice in a row. This way, we were able to keep detecting obstacles in real time while still capturing and processing images from the Pi Camera.
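On the Raspberry Pi side, handling those two edge-triggered signals is then trivial. Here is a sketch that assumes the Arduino sends the hypothetical single bytes b"O" (obstacle) and b"C" (clear) on the same serial link as above; the actual byte values are up to you:

```python
obstacle_ahead = False

def poll_obstacle_signal(arduino):
    """Update the obstacle flag only when the Arduino reports a state change."""
    global obstacle_ahead
    signal = arduino.read(1)        # returns b"" quickly thanks to the serial timeout
    if signal == b"O":
        obstacle_ahead = True
        send_motor_command(0, 0)    # stop immediately
    elif signal == b"C":
        obstacle_ahead = False      # the main loop can resume line following
```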

Conclusion

We had so much fun working on this project, and we hope we managed to pass on a part of that joy through this article! =D

Any questions or remarks are very welcome in the comments section below!

I just want to conclude with this quote by Ken Goldberg :

“We’re fascinated with robots because they are reflections of ourselves.”
