My Lilliputian DIY Robocars, a stay-at-home side project

Benoit Trinite
Renault Digital
Jun 22, 2020 · 8 min read

I started learning about DIY Robocars at the end of 2017. At that time, one of my colleagues here at Renault Digital decided to start the first French DIY Robocars local meetup (which has since held more than 20 successful events). It was a perfect way for me to discover AI and autonomous driving. After racing for over two years on a 1/10 scale model provided by my company (a sponsor of the event), I found that the main thing slowing down my learning curve was the difficulty of finding a place to test such a large RC car.

My 1/8 robocar (a Donkeycar-based model powered by a NanoPi M4 and an Arduino Genuino 101)

No room in my flat is large enough, and clear enough, to host a training track for this car; for comparison, the usual track we use for the event is roughly 20 m × 6 m, as you can see below:

Typical racing track used for French DIY Robocars event

Last year, I set myself the challenge of building from scratch a scaled-down version of my robocar, using the 1/28 Mini-Q platform from Sinohobby you can see below. The goal was to be able to run it not only at home but also at the various events my company attends as an exhibitor. Unfortunately, I didn’t find enough spare time during that period. Later I bought a Google Coral USB Accelerator (EdgeTPU), thinking it would be a good option in my quest to miniaturize my robocar, but again, there was not enough spare time to work on it. The challenge was frozen…

© Sinohobby TR-Q5OP-BL Mini-Q model

…until recently. A few weeks ago, the COVID-19 pandemic hit many countries, and many companies, following stay-at-home orders, slowed down their operations and activities. The time I had been looking for to make progress on my challenge was finally there.

The Hardware platform

Regarding hardware, my basic requirements were to:

  • use an RC chassis equipped with a regular ESC and steering servo
  • use a single front USB camera for autonomous driving
  • be able to control the car with my 6-channel Futaba radio (for manual training, but also to engage/disengage autonomous driving)
  • use a single LiPo battery to power everything, with enough capacity to run either a training session or a race
  • use an SBUS-based radio receiver to reduce wiring
  • use two SBCs (Single Board Computers): one for real-time and I/O handling (an ESP32, interesting for its various available I/Os), and one to host the core processes powered by Linux/ROS (a Friendlyelec Nanopi Duo2 running Armbian)

© Friendlyelec Nanopi Duo2

Since my goal was first to demonstrate that it could work (driving autonomously on a track at low speed), I did not pay too much attention to the design of the car. Here is how my first prototype looked:

The Software platform

My original 1/8 robocar was based on the open-source Donkeycar project, which was a very good way to jump-start on topics like AI, TensorFlow CNNs and autonomous driving. After working with ROS on another project, I decided that my new car would be fully ROS powered. I used to develop software in C/C++, and I like the event-driven architecture of ROS, which provides an elegant way to implement a distributed, loosely coupled architecture. Meanwhile, TensorFlow Lite (TFLite) had been released, another good option to look at, since it is much better suited to such a project than the full TensorFlow package.

First step, Manual driving and data capture

Since I wasn’t sure whether my challenge was achievable, I decided to go step by step and iterate.

The first step was about putting everything in place to operate in training mode. Operating in training mode means driving the car manually from the radio controller, while the car itself continuously records images from the front camera along with the associated driving parameters (basically the steering and throttle values received from the radio).

To achieve this first step, I developed the following components:

  • esp32-companion, an Arduino embedded software for the ESP32 that 1/ relays the radio messages received from the radio controller to the core host, and 2/ generates the PWM signals that drive the steering servo and the motor throttle through the ESC (a minimal sketch of this pattern follows this list). I discovered rosserial, a ROS package that extends the ROS messaging bus between a regular ROS host and an Arduino, the perfect solution to connect my ESP32 to my core host. I just made a small fix to be able to enforce the serial port configuration on both the ESP32 and core host sides.
  • robocars_brain, a ROS node that implements a Finite State Machine (based on the TinyFSM library) to maintain the global car state (disarmed, manual driving, autonomous driving) based on the radio channel orders received (I used channels 5 and 6 to control this state)
  • robocars_steering_ctrl, a ROS node that builds the steering order according to the global car state and the steering radio signal received (the second sketch below shows the general shape of such a node)
  • robocars_throttling_ctrl, a ROS node that builds the throttle order according to the global car state and the throttle radio signal received
  • robocars_data_capture, a ROS node that captures images together with the associated throttle and steering values (normalized) and writes them to the micro SD card of the core host
  • robocars_ros_launch, a collection of ROS launch files and shell scripts to manage the ROS stack (configure and start all nodes, save the configuration, invoke the calibration logic, …)
  • robocars_msgs, a repo containing the definitions of the project-specific ROS messages I used
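To give a feel for how these pieces talk to each other, here is a minimal sketch of the rosserial pattern on the ESP32 side. It is only an illustration: the topic names, the std_msgs stand-ins for my robocars_msgs types, the servo pin and the SBUS handling are simplified assumptions.

```cpp
// Minimal rosserial sketch in the spirit of esp32-companion (illustrative only).
#include <ros.h>
#include <std_msgs/Int16.h>
#include <std_msgs/Float32.h>

ros::NodeHandle nh;

// Raw steering channel relayed from the radio receiver to the core host (pulse width in us)
std_msgs::Int16 radio_steering_msg;
ros::Publisher radio_steering_pub("radio/steering", &radio_steering_msg);

// Steering order from the core host, normalized in [-1, 1]; map it to a 1000-2000 us servo pulse
void steeringOrderCb(const std_msgs::Float32& order) {
  uint32_t pulse_us = 1500 + (int32_t)(order.data * 500.0f);
  // 50 Hz frame, 16-bit LEDC resolution: duty = pulse_us / 20000 * 65535
  ledcWrite(0, (uint32_t)((uint64_t)pulse_us * 65535 / 20000));
}
ros::Subscriber<std_msgs::Float32> steering_order_sub("steering_ctrl/output", &steeringOrderCb);

void setup() {
  nh.initNode();
  nh.advertise(radio_steering_pub);
  nh.subscribe(steering_order_sub);
  ledcSetup(0, 50, 16);   // LEDC channel 0, 50 Hz servo frame, 16-bit resolution
  ledcAttachPin(25, 0);   // assumed steering servo pin
}

void loop() {
  radio_steering_msg.data = 1500;   // placeholder for the latest SBUS steering value (decoding elided)
  radio_steering_pub.publish(&radio_steering_msg);
  nh.spinOnce();
  delay(10);
}
```

On the core host side, the control nodes follow a plain subscribe/compute/publish pattern. The second sketch below gives the flavour of a steering controller; again, the topic names, the std_msgs message types and the simple boolean standing in for the real state machine are assumptions for illustration, not the project’s actual interfaces.

```cpp
// Illustrative roscpp node in the spirit of robocars_steering_ctrl (simplified).
#include <ros/ros.h>
#include <std_msgs/Int16.h>
#include <std_msgs/Float32.h>

ros::Publisher steering_pub;
bool autonomous_mode = false;   // would be driven by robocars_brain in the real stack

// Manual mode: normalize the radio pulse width (assumed 1000-2000 us) into [-1, 1]
void radioSteeringCb(const std_msgs::Int16::ConstPtr& msg) {
  if (autonomous_mode) return;
  std_msgs::Float32 order;
  order.data = (msg->data - 1500) / 500.0f;
  steering_pub.publish(order);
}

// Autonomous mode: forward the autopilot prediction instead
void autopilotSteeringCb(const std_msgs::Float32::ConstPtr& msg) {
  if (!autonomous_mode) return;
  steering_pub.publish(*msg);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "steering_ctrl");
  ros::NodeHandle nh;
  steering_pub = nh.advertise<std_msgs::Float32>("steering_ctrl/output", 1);
  ros::Subscriber radio_sub = nh.subscribe("radio/steering", 1, radioSteeringCb);
  ros::Subscriber pilot_sub = nh.subscribe("autopilot/steering", 1, autopilotSteeringCb);
  ros::spin();
  return 0;
}
```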

For front video capture, I used the ROS usb_cam and image_proc packages. However, to improve performance, I recompiled them (the binaries distributed through apt are not optimized).

For remote debugging, I also deployed the ROS web_video_server to monitor the video stream, and rosbridge_suite to be able to use the ros-control-center console to inspect, for example, topic content.

After fighting for several hours to fit all those ROS packages onto my little Nanopi Duo2, I finally got manual driving and data recording working:

First image captured by my new tiny robocar (160×120 px), while driving manually

Second step, Autonomous driving running TFLite on the core host

The second step was about adding the autonomous driving capability. For this part, I decided to reuse the existing Keras model proposed in the original Donkeycar code.

Having battled in the past with TensorFlow on ARM architectures, TFLite was definitely good news. Compiling TensorFlow Lite directly on my core host was trivial: I just had to install the prerequisites and invoke ./tensorflow/lite/tools/make/build_lib.sh. The result is a static library, libtensorflow-lite.a, to link my new ROS robocars_autopilot node against.

The most difficult part was figuring out how to handle my model with the TFLite C++ API. Examples and tutorials are not widely available. It turns out that the best example is in the TensorFlow repo itself: the label_image example.
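For the record, the part of label_image that matters for this use case boils down to the pattern below. This is only a minimal sketch: the model path, the input size (a 160×120 RGB frame) and the two-float-output layout are assumptions matching my setup, not something prescribed by the API.

```cpp
// Minimal TFLite C++ inference sketch (adapted from the label_image example).
#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the converted Keras model
  auto model = tflite::FlatBufferModel::BuildFromFile("autopilot.tflite");
  if (!model) return 1;

  // Build an interpreter using the built-in operators
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) return 1;

  // Fill the input tensor with the camera frame (normalized RGB, 120x160x3 assumed)
  float* input = interpreter->typed_input_tensor<float>(0);
  for (int i = 0; i < 120 * 160 * 3; ++i) input[i] = 0.0f;  // frame pixels go here

  // Run one prediction
  if (interpreter->Invoke() != kTfLiteOk) return 1;

  // Read back the predictions (assumed here: two float outputs, steering and throttle)
  float steering = interpreter->typed_output_tensor<float>(0)[0];
  float throttle = interpreter->typed_output_tensor<float>(1)[0];
  std::printf("steering=%f throttle=%f\n", steering, throttle);
  return 0;
}
```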

Once the “smoke test” was done, I did an end-to-end test:

  • manually drive the car on a small track made from my son’s slackline, and record about 10k images and the associated data
  • train the model on Google Colab using TensorFlow 1.15
  • convert the model to TFLite
  • load the model into the car and engage the autonomous driving mode

Here is the result:

The first run of my tiny robocar, powered by TFLite

From a performance perspective, the car is able to run at 10–15 FPS, which is enough to drive slowly in my kitchen.

FPV view

Third step, Improving performance with the Google Coral EdgeTPU

The next step was to improve performance and demonstrate that the Google Coral EdgeTPU could support the use case. It wasn’t a trivial step; the major issues were:

  • using the Coral through the TFLite C++ API (and not the native EdgeTPU API),
  • quantizing my model,
  • adapting my model to match the operators available on the Coral

Regarding the first issue, after reading several discussions, I found that:

First, to work with the Google Coral, you have to build TFLite at a specific commit (https://github.com/google-coral/edgetpu/issues/44#issuecomment-589170013).

But that won’t compile everything without the patch pushed here (https://github.com/Ed-Swarthout-NXP/tensorflow/commit/7c25249abb0d09a8ec4053fd237576337125c919).

And finally, there is a good C++ example to get inspired by here: https://github.com/Namburger/edgetpu-minimal-example
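The gist of that minimal example is to register the EdgeTPU custom operator in the resolver and to bind the open device context to the interpreter. Here is a sketch of that pattern; the model path is illustrative, and the model itself is assumed to have been quantized and compiled with the edgetpu_compiler.

```cpp
// Building a TFLite interpreter that offloads to the Coral EdgeTPU
// (pattern taken from the edgetpu-minimal-example repo; paths are illustrative).
#include <memory>

#include "edgetpu.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

std::unique_ptr<tflite::Interpreter> BuildEdgeTpuInterpreter(
    const tflite::FlatBufferModel& model,
    edgetpu::EdgeTpuContext* edgetpu_context) {
  // Register the EdgeTPU custom operator on top of the built-in ones
  tflite::ops::builtin::BuiltinOpResolver resolver;
  resolver.AddCustom(edgetpu::kCustomOp, edgetpu::RegisterCustomOp());

  std::unique_ptr<tflite::Interpreter> interpreter;
  if (tflite::InterpreterBuilder(model, resolver)(&interpreter) != kTfLiteOk) {
    return nullptr;
  }
  // Bind the interpreter to the open EdgeTPU device
  interpreter->SetExternalContext(kTfLiteEdgeTpuContext, edgetpu_context);
  interpreter->SetNumThreads(1);
  if (interpreter->AllocateTensors() != kTfLiteOk) {
    return nullptr;
  }
  return interpreter;
}

int main() {
  // Model compiled for the EdgeTPU
  auto model = tflite::FlatBufferModel::BuildFromFile("autopilot_edgetpu.tflite");
  if (!model) return 1;

  // Open the USB-attached Coral device
  std::shared_ptr<edgetpu::EdgeTpuContext> context =
      edgetpu::EdgeTpuManager::GetSingleton()->OpenDevice();
  if (!context) return 1;

  auto interpreter = BuildEdgeTpuInterpreter(*model, context.get());
  if (!interpreter) return 1;

  // From here on, inference works exactly as with the CPU-only interpreter:
  // fill typed_input_tensor<uint8_t>(0), call Invoke(), read the output tensors.
  return 0;
}
```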

The next step was about quantizing the model. To achieve good performance, the Google Coral EdgeTPU relies on pure integer operations only. Generally speaking, a Keras model uses floating-point operations, giving priority to accuracy over performance; the Coral EdgeTPU somehow does the opposite.

There are two ways to adapt an existing model: perform quantization-aware training, or apply post-training quantization. It looks like the first way is not possible today with a Keras model. I gave the second way a try and found that the only operation not supported by the EdgeTPU compiler was a Crop operation, which I removed. That was the third step.

Another tricky bug to fix concerned the USB OTG port of my Nanopi Duo2: it was not working out of the box with the Armbian distro, and a patch to the device tree was required to solve that issue.

I also added a small OLED screen to display some essential information, like the IP address currently in use (I still need to fix my mDNS setup).

Robocar equipped with Google Coral EdgeTPU.

With the Google Coral, I was able to run at 30 FPS using only 30–40% of the core host CPU; running at 60 FPS was also possible, but at the cost of losing some frames. First tests showed a driving accuracy comparable to the model running on the core host alone.

htop report on the core host while driving at 30 FPS (performing 30 predictions per second)

Next step: everything under the hood, to look like a real car!

Having demonstrated that it works at this scale, my next baby step in this long-term challenge is to redesign the hardware so that the equipped car looks like a real car, with everything tucked under the hood.

Here is a first picture of the redesign.

The main change concerns the USB camera, replaced by a smaller OV2640 CSI/MIPI camera. Having trouble controlling it from the Nanopi Duo2 (device tree story again), I’m giving a chance to the ESP32-CAM (which is smaller than a regular ESP32 dev board) and a high-speed rosserial link.

All electronics almost packed
with car body mounted

Eager to learn more about DIY Robocars? Join the worldwide community. Living near Paris, France? Join the local chapter, DIY Robocars France, make your own robocar and take part in our challenges.
