DeepNNCar: A Testbed for Autonomous Algorithms

This is my first blog post on Medium! The goal of this post, and the theme of the blog in general, is to help make technology accessible and interesting. Computer science is everywhere in our lives, yet to many it still remains unapproachable. However, the true fun in computer science isn’t intentionally confusing people with terminology and details, but equipping them with the tools to imagine, explore, and ultimately create. Technology can’t answer every problem in the world, but you might as well give it a good shot.

Like many new things, the hardest part is often getting started. So for the first blog post, I am going back to a project that helped me get started: DeepNNCar. By the end of the post, you should have a basic understanding of how to design and drive your own autonomous vehicle which you can train at your house! To help with this, we have also made our code repository public.

This project has been supported by the DARPA Assured Autonomy grant and the National Science Foundation US Ignite grant's REU supplement. Also, I would like to give a special thanks to Dr. Abhishek Dubey, Shreyas Ramakrishna, Gabor Karsai, Ohad Beck, and the rest of my colleagues at the SCOPE-lab at Vanderbilt University's Institute for Software Integrated Systems who have used DeepNNCar in the following publications:

Section 1. What is DeepNNCar?

DeepNNCar is a portmanteau of "deep neural network" and "car." I designed DeepNNCar during my sophomore year at Vanderbilt's Institute for Software Integrated Systems (yes, ISIS) as part of a research program. In general, DeepNNCar is just a remote-controlled (RC) car that has been hacked to explore different autonomous algorithms.

Figure 1. DeepNNCar Components

Section 2. DeepNNCar: The “Hard” Stuff

If you have no interest in the hardware, feel free to skip this section. It provides a quick overview of the hardware and the mechanical aspects of DeepNNCar needed to understand how we provide steering and acceleration controls for autonomous driving.

Part 1. Hardware

DeepNNCar is built on the framework of the Traxxas Slash RC vehicle, which can actually go faster than 60 mph. So, as a quick disclaimer, if you try this at home, you probably don't want to go that fast. In reality, any RC car will likely work with this system, as they all have similar controls; voltage requirements may be your only restriction. So for the parts list below, the Traxxas Slash RC is preferred but not required. Furthermore, if you want to swap the computer (I use a Raspberry Pi 3), feel free.

  • RC Vehicle: Traxxas Slash 2WD 1/10 RC Car (ASIN B07GBR4B66) ($230)
  • Computer: Raspberry Pi 3 (Model# 4328498196) ($30)
  • Camera: Generic USB Webcam (30 FPS recommended) ($20)
  • Wires: Jumper wires (Part# B0040DEI9M) ($8–10 for a pack)
  • Computer Power Source: Portable Power Charger (20,000 mAh) ($8–10)
  • Storage: 16 GB Micro SD Card & USB Flash Drive ($30)
  • (Optional) Speed Sensor: IR Slot-Type Coupler (Part# 723585712433) ($1)
  • (Optional) LIDAR: Any suitable USB Lidar (ASIN B07L89TT6F) ($150 or greater)

Substituting a smaller RC vehicle for the Traxxas Slash can easily bring the cost of this project down to around $100.

Part 2. Mechanical Controls

DeepNNCar, like many RC cars, has two motors. The first motor (a servo motor) controls the steering and the second motor controls the acceleration. Both motors are controlled using pulse width modulation (PWM). To break down PWM, some basic understanding of electric motors is useful but not required. You can always skip to the code at the end.

Pulse Width Modulation (PWM) and Duty Cycles

In its simplest form, imagine a motor being controlled by a light switch. When we flip the switch on, the motor is supplied with full power, and when we flip the switch off, the motor is supplied no power. But what happens if we flip the switch quickly on and off? Well, over a specific time slice, we can control the average amount of power delivered to that motor by flipping the switch on and off. Voilà! We have the ability to control the output of an electric motor.

Pulse Width Modulation (PWM) is used to control the motors of DeepNNCar by adjusting the average power supplied to the motor.

Using the concept of PWM and Figure 1 above, let's explain how the acceleration motor is controlled. For each interval of 10 ms, the average power delivered to the motor dictates the motor's output, or in this example the speed of DeepNNCar. The fraction of the interval that the signal is on, expressed as a percentage, is known as the duty cycle. For safety, we cap the duty cycle at 20%, or 2 ms per 10 ms interval, so that we don't go 60 mph in a tiny lab room.

The lists below roughly summarize which duty cycles correspond to which controls for DeepNNCar and for most RC cars. If you are not using the Traxxas Slash and are in doubt, you can always break out an oscilloscope and measure exactly what pulses your RC car expects.

Steering Servos (Interval = 10ms)

  • 10% (1 ms): Left turn
  • 15% (1.5 ms): Straight
  • 20% (2 ms): Right turn

Acceleration Motor (Interval = 10ms)

  • 14.5% (1.45 ms): ~0.5 meters per second backward (reasonable)
  • 15% (1.5 ms): No movement
  • 15.7% (1.57 ms): ~1–2 meters per second forward
  • 20% (2 ms): Have fun! (very fast; autonomous algorithms will likely fail without faster computation)

Part 3. Raspberry Pi 3

Now that the basics of the mechanical controls have been presented, you may be wondering how to create these signals and actually send them to the steering and acceleration motors! For that, we use the Raspberry Pi 3 and a little wiring magic.

The Raspberry Pi 3 (RPI3) is a computer just like the one you are reading this article on, but with a few other interesting features and probably less computational ability (but it's only $30!). It can run an operating system (instead of Windows it uses Raspbian Stretch, which is based on Linux and free), connect to the internet wirelessly, and has general purpose input/output (GPIO) pins which can control peripheral devices like a motor. Trust me, these things aren't hard to use. If you really need the desktop experience to feel comfortable, you can connect a keyboard and a mouse to the RPI3, sign in to your WiFi, and browse the internet! For a detailed explanation of how to get started with a brand new RPI3, please see https://www.raspberrypi.org/documentation/

For now, it is assumed that you have an RPI3 and it is connected to the internet. Figure 3 below shows the GPIO pins of the RPI3 which are used to control DeepNNCar.

Figure 3. Detailed GPIO of the Raspberry Pi 3 [1]. DeepNNCar uses pins 12 (GPIO18) and 35 (GPIO19) to control the steering and acceleration motors respectively. Both motors are powered by 5V and share a ground (GND) with the Raspberry Pi 3.

DeepNNCar uses pin 12 (GPIO18) to control the steering and pin 35 (GPIO19) to control the acceleration by generating a PWM signal on each pin. Setting this up in software can easily be accomplished in Python. A code snippet below shows how this can be done.

Code Snippet 1. Python code to control the PWM of DeepNNCar using a Raspberry Pi 3. For the code controlling all of DeepNNCar's peripherals, go here: https://github.com/burrussmp/DeepNNCar-Research/blob/master/DeepNNCar/Peripherals.py.

In the code above, the init function initializes the PWM class. The instantiated object can then call changeDutyCycle(acc=XXX,steer=XXX) to adjust the acceleration and steering values where the inputs range from 10–20. For details regarding the Python library that I am using, please see this page.
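If you just want a feel for what that class looks like, here is a minimal sketch using the pigpio library (an assumption on my part, based on the sudo pigpiod command used later in this post; the class and method names in the repository may differ slightly).

# A minimal sketch of a PWM controller using the pigpio library; the repository's
# actual class may differ. Pins and frequency follow the wiring described above:
# GPIO18 (pin 12) = steering, GPIO19 (pin 35) = acceleration, 10 ms interval = 100 Hz.
import pigpio

STEERING_PIN = 18       # BCM numbering (physical pin 12)
ACCELERATION_PIN = 19   # BCM numbering (physical pin 35)
PWM_FREQUENCY_HZ = 100  # 100 Hz -> 10 ms interval

class PWMController:
    def __init__(self):
        # Connect to the pigpio daemon (start it first with: sudo pigpiod)
        self.pi = pigpio.pi()
        if not self.pi.connected:
            raise RuntimeError("pigpio daemon is not running")
        # Start both signals at the neutral 15% duty cycle
        self.changeDutyCycle(acc=15.0, steer=15.0)

    def changeDutyCycle(self, acc=15.0, steer=15.0):
        # Duty cycles are percentages in the range 10-20; pigpio's hardware_PWM
        # expects the duty cycle in millionths of the period (0-1,000,000).
        self.pi.hardware_PWM(ACCELERATION_PIN, PWM_FREQUENCY_HZ, int(acc * 10000))
        self.pi.hardware_PWM(STEERING_PIN, PWM_FREQUENCY_HZ, int(steer * 10000))

    def stop(self):
        # Return both signals to neutral and release the connection to the daemon
        self.changeDutyCycle(acc=15.0, steer=15.0)
        self.pi.stop()

With the duty-cycle tables above, changeDutyCycle(acc=15.7, steer=15.0) would drive DeepNNCar slowly forward in a straight line, and changeDutyCycle(acc=15.0, steer=15.0) would stop it.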

Part 4. Wiring the Motors with the RPI3

Back in my sophomore year, I destroyed a lot of Raspberry Pi 3's using bad technique, so hopefully you are more careful than I am. For the best and safest results, wiring the pins requires a soldering iron and electrical tape. First, double check that both the RPI3 and the RC car are turned off. The wiring is pretty simple and follows a color pattern (great!). In general, red means voltage, black means ground, and white means signal. To hack the two motors, wire the thin red lines of the two motors to the RPI3's 5V pin and the thin black lines to the GND pin.

Do not touch the thick red and black wires. Those carry high voltage, and touching them may short the motors, which are actually expensive!

Then, wire the white line of the steering servos to pin 12 and the white line of the acceleration motor to pin 35.

With all electrical work, double check and then check again to make sure the pins are in the right spot before turning everything on. If after turning on, nothing smokes, you probably didn’t burn anything up which is great! Hopefully, it all works :)

Part 5. Connecting the Camera & Speed Sensor

Now that we have connected the Raspberry Pi to the RC car, we only need to attach a few more peripherals, namely the camera and, optionally, the speed sensor and LIDAR.

The USB camera can be connected by inserting its USB plug into the RPI3. Using the OpenCV library, which provides useful computer vision tools, we can configure the camera and capture images. We use a USB-based LIDAR to keep access easy. However, because it is an optional feature, I will not go in depth on how to connect the LIDAR to the RPI3.
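As a rough sketch (not the exact code in the repository), grabbing a frame from the USB camera with OpenCV looks like this; the 320x240 resolution matches the default image size mentioned later in the post.

# A minimal sketch of capturing a single frame from the USB camera with OpenCV.
import cv2

cap = cv2.VideoCapture(0)                      # index 0 = first USB camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

ok, frame = cap.read()                         # frame is a 240x320x3 BGR numpy array
if ok:
    cv2.imwrite("frame.jpg", frame)            # save the image for a quick sanity check
cap.release()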

The slot-type IR sensor, which is used to measure the RPM of the wheel and thus its speed, can be connected by cross-referencing the GPIO pins in the code with the GPIO pins in Figure 3. Note: For this to work, you may need to tape, or better yet 3D print, a small plastic piece to the car so that the piece passes through the IR sensor slot once per rotation. You may also consider replacing the speed sensor with a Hall effect sensor and a magnet so that you don't have moving parts that might collide. However, this piece is optional and just provides some useful information. The slot-type IR sensor and the LIDAR are not required to get DeepNNCar driving autonomously.
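If you do add the IR sensor, the idea is simply to count how many times the plastic piece breaks the IR beam per second. Below is a hedged sketch using a pigpio edge callback; the GPIO pin and wheel circumference are illustrative values, so check the repository's pin mapping before wiring anything.

# A sketch of estimating speed from the slot-type IR sensor with a pigpio callback.
# IR_SENSOR_PIN and WHEEL_CIRCUMFERENCE_M are illustrative, not the repository's values.
import time
import pigpio

IR_SENSOR_PIN = 20            # hypothetical GPIO pin for the sensor's signal line
WHEEL_CIRCUMFERENCE_M = 0.35  # measure your own wheel

pi = pigpio.pi()
pi.set_mode(IR_SENSOR_PIN, pigpio.INPUT)

count = 0
def on_edge(gpio, level, tick):
    global count
    count += 1                # one falling edge per wheel rotation

cb = pi.callback(IR_SENSOR_PIN, pigpio.FALLING_EDGE, on_edge)

start = time.time()
time.sleep(1.0)               # sample for one second
rotations_per_second = count / (time.time() - start)
print("Speed: %.2f m/s" % (rotations_per_second * WHEEL_CIRCUMFERENCE_M))
cb.cancel()
pi.stop()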

Section 3. DeepNNCar: The “Soft” Stuff

Ok ok, if you’ve made it this far, great. If you skipped ahead also great. Time for the fun software stuff.

Section 1. Connecting to DeepNNCar

To connect to DeepNNCar, we use secure shell (SSH), which allows for command line access to the Raspberry Pi 3. To use SSH, a Windows user can use a tool like PuTTY or download Windows Subsystem for Linux (WSL) to be able to execute Linux commands from a Windows computer. The following tutorial will use Linux commands, so I highly recommend using WSL if you are on Windows. Mac commands are generally similar to Linux.

For the following section, I will assume the RPI3 has been correctly wired, is connected to the internet, and SSH has been enabled on the RPI3. To enable SSH on the RPI3, please see this.

To connect to the RPI3, you first need to know its IP address on your local network. To find it, you can plug the Raspberry Pi 3 into a monitor and, using a mouse, hover over the WiFi/Ethernet icon in the top right corner of the GUI (next to the Bluetooth icon), which is shown in Figure 4 below.

Figure 4. The GUI of the Raspberry Pi 3 using the Raspbian Stretch OS [3].

For the rest of the tutorial, let’s assume the IP address of the RPI3 is 10.112.52.129.

Part 1. SSH into the RPI3

To connect to the RPI3 using SSH, issue the following command from a Linux terminal on your local machine.

ssh pi@10.112.52.129

You will then be prompted for a password. By default, the password is raspberry. If this is still your password, you can change it by entering passwd in the RPI3 terminal and following the prompts. It is really important that you do this; otherwise someone can easily hack into the RPI3 remotely.

However, for the best security, you can generate a private-public key pair which will allow for passwordless SSH access to your RPI3. Luckily, Raspberry Pi has released official documentation to accomplish this!

Part 2. Successfully clone the code

So you've successfully SSH'd into the RPI3! Great! Now you want to get started driving! To do so, first clone this repository on both your local machine and on the RPI3.

git clone https://github.com/scope-lab-vu/deep-nn-car.git

On the RPI3, issue the following command to start the GPIO daemon that lets you control DeepNNCar, then navigate to the DeepNNCar directory and run the script.

sudo pigpiod
cd DeepNNCar
python3 DeepNNCar.py

If you have installed all of the necessary Python packages (OpenCV, TensorFlow, Keras, etc.), you should see the RPI3 server successfully start up. Otherwise, please take the time to download the necessary packages. Note: Python 3 is used in all of the code.

On your local machine, navigate to the Controller folder of the code repository and change the IP address in the first line of the main method to the IP address of your RPI3. For example,

# in ./Controller/Controller.py
if __name__ == "__main__":
    controller = DeepNNCarController(IP="10.112.52.129", Port="5001", maxForwardRange=1)

And then execute the code using python3 Controller.py in a terminal.

Part 3. Drive DeepNNCar!

Upon starting the client code on your local machine (Controller.py) you should see your local machine connect to the RPI3 server. You will then receive prompts on your local machine to configure the mode of operation. DeepNNCar supports four driving operations described below.

Mode 1: Normal

In normal mode, you can control DeepNNCar using your computer's mouse. Placing the mouse on the right side of the screen will result in a right turn and placing it on the left side will result in a left turn. You can also drive forward by placing the mouse closer to the top of the screen. In this mode, the camera will not collect any data, nor will any autonomous algorithm be executed. It is simply to check that everything has been wired correctly and that the car can be controlled remotely from your local machine.
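To give a concrete picture of the idea (this is an illustration, not the exact code in Controller.py), mapping the mouse position onto the duty-cycle ranges from Section 2 might look like the following.

# An illustrative mapping from mouse position to duty cycles, consistent with the
# ranges above: left edge -> 10% (left turn), right edge -> 20% (right turn),
# top of the screen -> forward, bottom -> stopped. Controller.py may differ.
def mouse_to_duty_cycles(x, y, screen_width, screen_height, max_forward=15.7):
    steer = 10.0 + 10.0 * (x / screen_width)                       # 10-20%
    acc = 15.0 + (max_forward - 15.0) * (1.0 - y / screen_height)  # 15% up to max_forward
    return acc, steer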

Note: Controlling DeepNNCar using the Xbox controller is no longer supported.

Mode 2: Livestream

In the livestream mode, you will control the car like you would in normal mode, but you will also receive a feed of the images in real-time on your local machine from the USB camera. This is useful in ensuring that the camera is properly set up and that you can collect good data from the car.

Mode 3: Data Collection

In data collection mode, you can specify the number of data points you want to collect. By default, a data point includes the image captured by the camera as well as the corresponding duty cycles for the steering and acceleration PWM controls.

In order to configure the data collection mode so that the data is uploaded to a Google Drive after collection, the following function in ./Controller/HelperFunctions needs a few changes. Namely, the pathToClientSecrets and the Google Drive folder_id need to be changed. A tutorial to set up the Google Drive API can be found here. This is necessary in order to have access to the client secrets, which let a program access your Google Drive. You can think of these secrets (which are just a .json file) as the credentials that verify that you are, in fact, you and that you can upload the file to your drive.

Note: The client secrets should be located on the local machine because you need to click through an OAuth redirection to approve the upload. In practice, after collecting data, DeepNNCar sends the data set to the local machine over WiFi. Afterwards, a browser window will pop up asking you to authorize the upload by signing in to the associated Google account.

Code Snippet 2. A helper function to upload a CSV file to a Google Drive folder.
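As a sketch of what such a helper can look like, the snippet below uses the PyDrive library (an assumption on my part; the repository's helper may use a different Google Drive client). The path_to_client_secrets and folder_id arguments correspond to the two values you need to change, as described above.

# A hedged sketch of uploading a CSV data set to a Google Drive folder with PyDrive.
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

def upload_csv_to_drive(csv_path, path_to_client_secrets, folder_id):
    gauth = GoogleAuth()
    gauth.settings['client_config_file'] = path_to_client_secrets
    gauth.LocalWebserverAuth()   # opens a browser window so you can authorize the upload
    drive = GoogleDrive(gauth)
    f = drive.CreateFile({'title': 'dataset.csv',
                          'parents': [{'id': folder_id}]})
    f.SetContentFile(csv_path)   # attach the local CSV file
    f.Upload()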

In general, 3,000 is a reasonable number of data points to collect without overloading RAM while still getting a reasonable upload speed and data set size. Obviously, this will change if you choose to use a higher resolution camera or capture larger images than the default 320x240x3.

Finally, a method to read the CSV data set into X and Y numpy arrays, which can be converted to tensors to train a neural network, can be found here.
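As a rough sketch (the repository's loader defines the actual column layout), reading such a CSV into X and Y arrays could look like the following, assuming each row holds the flattened image pixels followed by the steering duty cycle.

# A hedged sketch of loading the data set into numpy arrays; the assumed row layout
# (flattened 320x240x3 image followed by the steering duty cycle) is illustrative.
import numpy as np

def load_dataset(csv_path, width=320, height=240, channels=3):
    data = np.loadtxt(csv_path, delimiter=',')
    X = data[:, :width * height * channels].reshape(-1, height, width, channels)
    X = X.astype(np.float32) / 255.0     # scale pixel values to [0, 1]
    Y = data[:, -1].astype(np.float32)   # steering duty cycles as regression targets
    return X, Y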

Mode 4: Autonomous Mode

To test an autonomous algorithm, the “auto” function in ./DeepNNCar/DeepNNCar.py can be updated. Currently, DeepNNCar supports using a convolutional neural network (CNN) to process an image collected from the camera and produce steering controls. The weights of the neural network are stored and loaded from a USB drive into a pre-defined CNN architecture.
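The inference loop itself is conceptually simple. Here is a hedged sketch of what the autonomous mode does each frame; the model path, the 200x66 input size (borrowed from DAVE-II, discussed below), and the pre-processing are my assumptions, so see the "auto" function in DeepNNCar.py for the real implementation.

# A hedged sketch of the autonomous loop: load the trained network from the USB drive
# and map each camera frame to a steering duty cycle. Paths, input size, and
# pre-processing are assumptions; the repository's "auto" function is the reference.
import cv2
import numpy as np
from tensorflow import keras

model = keras.models.load_model('/media/usb/steering_model.h5')  # hypothetical path
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    img = cv2.resize(frame, (200, 66)).astype(np.float32) / 255.0
    steer = float(model.predict(img[np.newaxis, ...])[0][0])  # predicted steering duty cycle
    # pwm.changeDutyCycle(acc=15.6, steer=steer)              # send the control to the motors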

For simplicity, the data set can be turned into a classification task. An example of training a classifier can be found here, which includes a pre-discretized classification data set. This Google Colab notebook shows how to load the data set and train a basic classifier as well as a more complex deep radial basis function network (which is beyond the scope of this Medium post but can be used to simultaneously provide predictions and a rejection class). Furthermore, the notebook has a function to read in a CSV file containing a data set collected directly by DeepNNCar, which can be used to train a regression network. Going into the details of the training is beyond the scope of this blog post; however, in the future I may describe the training procedure as well as the theory surrounding deep neural networks and deep radial basis function networks. In general, our research lab uses a modified version of NVIDIA's DAVE-II CNN architecture [2], which is used for end-to-end autonomous driving. It is called end-to-end because the controls (steering outputs) come directly from the input using a convolutional neural network (CNN) with no processing or decision making in between. Obviously, it is a terrible idea to trust a machine with your life with literally zero decision making in between, but for simplicity, this shows how the autonomous mode can work.

Other Configuration Details

Upon selecting the mode, you will then be prompted for other configuration details. For safest performance, use “user-controls” to control the acceleration and for best performance, disable all feedback (like CPU temperature, CPU utilization, etc).

Upon finalizing the last configuration, you will be able to collect data and explore various autonomous driving algorithms. The code is also modularized to allow for (hopefully) easy extension to other algorithms like SLAM or LIDAR-based navigation. If there are any cool developments, we would love for people to reach out to us and let us know what they have done!

Section 4. What Can I Do With This?

If you actually made it through the whole tutorial, I would be surprised (and happy!), but like all things, what matters most is knowing how you can use this, what's important, and why you should care! In my research, I have used DeepNNCar to extend simulations performed in the self-driving CARLA simulator to real life, in order to explore the effects of real-time safety procedures. My master's thesis, which explored this in part, can be found here. However, there are certainly far more interesting things that you can do, and I highly encourage exploring new and exciting areas using the pre-collected data sets and my code as a starting point!

Idea 1. Weakly-Supervised Segmentation

Figure 5. Segmentation using SegNet (https://www.youtube.com/watch?v=CxanE_W46ts)

One of the more interesting topics in computer science is segmentation, which is an extension of classification. Classification tasks attempt to assign a discrete label to a particular input. For example, a binary classifier may try to classify images of cars and bikes, where an output of 0 is a car and 1 is a bike. Segmentation extends this concept by assigning each piece of the input a particular label, and it is most intuitive with image data.

As you may know, images are represented as pixels, where each pixel contains 3 values that correspond to the red, green, and blue (RGB) channels of the input. In a binary segmentation task, each pixel is assigned a value of 0 or 1; however, this can be extended to a multi-class task by assigning each pixel a 0, 1, …, or k when you have k classes you want to classify.

The issue with segmentation is that it requires a LOT of data to be labelled which isn’t always possible. For example, if our input is a 200x66x3 image, we have 200x66 pixels or 13200 labels to assign. That’s a lot of manual labeling!

One solution to this problem is weakly-supervised segmentation. Just check out all the literature. It is called “weak” because rather than labeling each pixel, we only provide a rough “estimate” or partial information regarding the ground truth segmentation or what we want to be able to predict.

One popular technique is to use a bounding-box to generate a rough estimate of the pixels that belong to a single class; however, even this technique is difficult because first we have to detect the various classes (like road, road signs, etc.) and figure out a way to place a bounding box around them. Already, there is work to support detection and segmentation using weak supervision from labels [4]. However, extending this to a realistic multi-class situation is difficult. But not impossible :)

Using some creativity, this can be solved and potentially implemented on DeepNNCar to show real-time, multi-class, weakly supervised segmentation using just class labels. One approach could be to use a heatmap collected by layer-wise relevance propagation, or some other way of looking at what a multi-class model found interesting during classification, to first perform a basic detection mechanism. Next, you can segment the heatmap, yet you will still be lacking the particular class labels. This is where things may get interesting! One method could be to look at the class scores and simply assign all strong heatmap regions the label of the predicted class; however, it is likely that a better approach can be found! Hopefully, that is where you can come in!

Idea 2: The Open Set Problem

The “open set recognition” problem is a general problem in classification tasks and is central to many computer vision tasks. For example, if we are determining whether an image is of a “road” or a “stop sign” we may also want to know if it is neither a road nor a stop sign. The ability to reliably classify this unknown class is the solution to the open set problem.

Figure 6. DeepNNCar using an anomaly detector is able to stop when faced with an anomaly. For the video, please see this link. The description of the anomaly detector which was designed using a deep radial basis function (RBF) network can be found here.

The simplest way to solve this problem is to use some kind of anomaly detector. For example, autoencoders have been proposed that use their latent space to find anomalies. Even simpler, one-class support vector machines have been proposed to decide whether or not an input looks like the data we have trained on. However, these techniques have the drawback of requiring a second model to run alongside the model making the primary predictions.
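To make the one-class SVM idea concrete, here is a minimal sketch with scikit-learn; the random features are placeholders, since the interesting part in practice is choosing features that capture the training distribution.

# A minimal sketch of the one-class SVM approach: fit on features from the training
# distribution, then flag new inputs that fall outside it. The random features below
# are placeholders, not the features used in the thesis.
import numpy as np
from sklearn.svm import OneClassSVM

X_train = np.random.rand(500, 64)                    # features from in-distribution data
detector = OneClassSVM(nu=0.05, kernel='rbf', gamma='scale').fit(X_train)

x_new = np.random.rand(1, 64)                        # features from a new camera frame
is_anomaly = detector.predict(x_new)[0] == -1        # -1 means "unlike the training data"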

In my research, I partly solved this problem using radial basis function (RBF) networks, which have a rejection class to catch such anomalies. Videos of DeepNNCar using RBF networks to catch anomalies and prevent a crash scenario can be found here, as well as a video of the thesis presentation describing deep radial basis functions for mitigating other security threats facing deep neural network classifiers.

Future work could look at how RBFs, or some new technique, lower their confidence or detect changes in lighting, possible occlusion in the image, rotation issues, etc. that may result in DeepNNCar or some other model making a dangerous prediction. Furthermore, one could run these defenses in real time and just throw a bunch of attacks at DeepNNCar to see if it is robust to them. Such a solution would be impactful not only for self-driving but also for many other tasks, like medical image analysis, where anomalies are not only common but very important in making decisions!

Idea 3: Your own work!

If you are interested in computer vision techniques like SLAM, other autonomous algorithms like reinforcement learning (RL), or hardware-specific algorithms like LIDAR-based navigation, you can use DeepNNCar to see how an autonomous car may benefit from such algorithms. I think this is a really exciting area because I can only imagine how creative people can get with this!

Section 5. In Conclusion…

DeepNNCar is something I worked on during my sophomore year of college and over a summer, and, I'm not going to lie, it was at times the bane of my existence (haha, but not kidding). Mainly because I ran into so many roadblocks that I never expected, but I also learned a lot, and looking back I appreciate that. Hopefully, people can use this tutorial to either use DeepNNCar directly or borrow some of the code to implement their own ideas for autonomous algorithms.

You couldn't pay me to get into a car controlled by the algorithm I made to run on DeepNNCar, and, to be honest, I don't think we'd leave the driveway. But that wasn't the point of the project! With so many interesting problems to solve in the world, this was just my gateway into understanding ways that computer science can be applied to real problems to find real solutions.

If I could give any advice to sophomore me regarding this project, it wouldn't be anything remotely technical or related to DeepNNCar in particular, but instead to get involved in the project (or at least something similar) earlier, because you have no idea what new things you will discover.

For direct feedback or questions, please message me privately to exchange email addresses or directly post them in the comments below. Cheers!

Sources

[1] “Raspberry Pi GPIO Programming in C: Big Mess o’ Wires,” BMOW. [Online]. Available: https://www.bigmessowires.com/2018/05/26/raspberry-pi-gpio-programming-in-c/. [Accessed: 30-Mar-2020].

[2] Bojarski, Mariusz, et al. “End to end learning for self-driving cars.” arXiv preprint arXiv:1604.07316 (2016).

[3] S. Long, "Introducing PIXEL," Raspberry Pi Blog. [Online]. Available: https://www.raspberrypi.org/blog/introducing-pixel/. [Accessed: 30-Mar-2020].

[4] Feng, Xinyang, et al. “Discriminative localization in CNNs for weakly-supervised segmentation of pulmonary nodules.” International conference on medical image computing and computer-assisted intervention. Springer, Cham, 2017.


Written by Matthew Prasad Burruss
https://matthewpburruss.com/ | https://github.com/burrussmp
