How We Built a Self-Driving Car for Web Summit 2017

Tomasz Korzeniowski
Published in codequest
Dec 21, 2017 · 6 min read

codequest recently had the pleasure of attending Web Summit 2017 in Lisbon. It was the first conference where we had a booth of our own, and we were eager to showcase our team’s forte: turning ideas into awesome software. So our COO, Adam Mańkowski, proposed that we use this opportunity to demonstrate our AI skills and our ability to deliver results quickly. We only had a few hours per week to prepare, so, naturally, we opted for a simple, straightforward project.

Just kidding. 😀

A couple of us here at codequest are very interested in AI and Machine Learning in the context of autonomous vehicles. So why not build a self-driving vehicle? 🚗 🤖

Building a self-driving car

Our primary goal: create a car that is fully autonomous and performs all computation on its own, without involving the cloud.

We decided to create a small car that would use a single sensor (a camera) and deep learning methods to drive autonomously and keep itself on a defined track. We also wanted the car to stop automatically the moment it encountered a human Lego minifigure.

Smooth communication between the car and an iPhone application enabled us to use our smartphone to switch the car from manual to self-driving mode.

Here’s how we did it

Our first step was getting the right hardware:

  • Car: We got a remote-controlled LaTrax Rally car along with its original controller, and made some modifications to the chassis to hold the camera firmly in place. The car’s own battery, connected to the motor and servo, handled movement, while an Anker power bank powered the remaining components: the Raspberry Pi and the Arduino.
  • Arduino Leonardo: This microcontroller receives PWM signals from the remote controller (in both remote and learning modes) and sends them to the Raspberry Pi and/or passes them through to the motor and servo. It also receives serial commands (state, steering, and speed changes) and power over USB from the Raspberry Pi.
  • Raspberry Pi 3: It hosts an HTTP server whose API switches the car between manual and autonomous modes and collects training data from the camera. In autonomous mode, the Raspberry Pi runs Keras with a TensorFlow backend to compute the steering angle from the camera view and sends it to the Arduino (see the sketch after this list).
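To give a feel for how these pieces talk to each other, here is a minimal sketch of the Pi-side server. Flask, the endpoint names, the serial port, and the “MODE”/“STEER” protocol are all our illustration here, not the exact code that ran on the car:

```python
# drive_server.py - illustrative sketch of the Pi-side control server
import serial  # pyserial
from flask import Flask, jsonify

app = Flask(__name__)

# USB serial link to the Arduino Leonardo; port and baud rate are assumptions
arduino = serial.Serial("/dev/ttyACM0", baudrate=115200, timeout=1)

@app.route("/mode/<name>", methods=["POST"])
def set_mode(name):
    """Switch between manual and autonomous driving (called by the iPhone app)."""
    if name not in ("manual", "auto"):
        return jsonify(error="unknown mode"), 400
    arduino.write(f"MODE {name}\n".encode())  # hypothetical serial protocol
    return jsonify(mode=name)

@app.route("/steer/<int:angle>", methods=["POST"])
def steer(angle):
    """Forward a steering angle (e.g. the network's prediction) to the Arduino."""
    arduino.write(f"STEER {angle}\n".encode())
    return jsonify(angle=angle)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```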

The learning process ran directly on the Raspberry Pi. We prepared scripts that built the neural network model and saved it as an H5 file.
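In Keras terms, such a script boils down to very little. Below is a minimal, runnable sketch with stand-in data and a deliberately tiny convnet; the real network and dataset are described in the next section:

```python
# train.py - minimal sketch of building a model and saving it as an H5 file
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in data: in reality X held 64x64 V-channel frames and y the steering angles
X = np.random.rand(1000, 64, 64, 1).astype("float32")
y = np.random.uniform(-1.0, 1.0, size=(1000, 1)).astype("float32")

# A deliberately tiny convnet; the real network was SqueezeNet-based (see below)
model = keras.Sequential([
    layers.Conv2D(4, 3, activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D(4),
    layers.Conv2D(4, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1),  # predicted steering angle
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.1)

# Keras serializes the architecture and weights into a single H5 file
model.save("model.h5")
```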

Our neural network

The network we used in our car is one of the solutions included in the Udacity Self-Driving Course project (network). It’s based on SqueezeNet. We chose it because its small number of parameters allows fast training and inference, which, as it turned out, is essential when you’re dealing with low-performance hardware like a small car.

We tested 3 variants of this network, with 52, 159, and 1,005 parameters. The largest one won because it brought the best results during tests on the real track.
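We won’t reproduce the exact Udacity-derived architecture here, but the building block that makes SqueezeNet so parameter-frugal is the “fire” module: a 1x1 “squeeze” convolution followed by parallel 1x1 and 3x3 “expand” convolutions. A minimal Keras version of that module might look like this:

```python
from tensorflow.keras import layers

def fire_module(x, squeeze_filters, expand_filters):
    """SqueezeNet 'fire' module: squeeze with a 1x1 conv, then expand in parallel."""
    squeezed = layers.Conv2D(squeeze_filters, 1, activation="relu")(x)
    expand_1x1 = layers.Conv2D(expand_filters, 1, activation="relu")(squeezed)
    expand_3x3 = layers.Conv2D(expand_filters, 3, padding="same",
                               activation="relu")(squeezed)
    return layers.Concatenate()([expand_1x1, expand_3x3])
```

Keeping the squeeze layer narrow is what pushes the parameter count down into the dozens or hundreds instead of millions.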

We generated our own dataset by driving the car around the track in both directions while recording video from the onboard camera. We then split the video into a series of individual frames, ending up with about 11k samples covering all turn angles. Each image was scaled to 64x64 pixels, and we kept only the V channel from the HSV colorspace.
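The preprocessing itself is a few lines of OpenCV. A sketch, assuming BGR frames as cv2 delivers them (the normalization step is our assumption):

```python
import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Turn a raw BGR camera frame into the 64x64 V-channel input the network sees."""
    frame = cv2.resize(frame, (64, 64))
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]  # value channel: brightness, largely independent of hue
    return v.astype("float32")[..., np.newaxis] / 255.0  # scale to [0, 1], add channel axis
```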

Mr Fix-It

One of the biggest challenges? Mounting the camera in a fixed position. The camera was set in a specific position that then shifted as the car moved, so the data we gathered for one camera position actually reflected a different one. That confused our car, and we needed a way to fix the camera firmly so it wouldn’t move during the ride. Thanks to the engineering skills and set of magical tools of Karwer, our CTO, and Maciek, our Software Engineer, plus a fortuitous piece of plate, we were able to set the camera in a stable position.

Day one at Web Summit — and a major challenge!

Everybody has a plan until they get punched in the face. — Mike Tyson

We worked down to the wire, but after extensive training, testing, and fine-tuning, our self-driving car was ready to show off at the big event. Once we arrived in Lisbon, we headed straight to the Web Summit venue to set up our booth. We set up the track and verified that our car followed it without any problems. Everything went swimmingly, so we headed off to explore Lisbon and meet up with friends.

Things didn’t go as smoothly when we turned up the next morning.

That’s when we noticed that many elements of our environment had changed. The organizers had laid down a blue floor and installed intense new lighting. We also noticed plenty of people in white shoes walking near our track, and our car kept mistaking their shoes for the strong white track edge it was trained to follow.

This new environment was completely different from our office that served as the training ground for the car. Needless to say, our car got a little confused and would sometimes leave the route we set up for it.

That’s when we decided to generate more training data and retrain the network to produce a new model that would work in this new setting. Adam drove the car for almost two hours in total, and after that round of training, our car worked again until the end of the conference (apart from moments when it ran into problems like unplugged wires or overly enthusiastic audience members picking it up).
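Mechanically, retraining on-site amounted to loading the saved model and continuing training on the freshly collected frames, roughly like this (file names and stand-in arrays are illustrative):

```python
import numpy as np
from tensorflow import keras

# Load the model trained back at the office
model = keras.models.load_model("model.h5")

# Stand-in arrays; in reality these were frames and steering angles
# recorded on the Web Summit floor, preprocessed exactly like the originals
X_new = np.random.rand(500, 64, 64, 1).astype("float32")
y_new = np.random.uniform(-1.0, 1.0, size=(500, 1)).astype("float32")

# Continue training from the existing weights instead of starting from scratch
model.fit(X_new, y_new, epochs=10, batch_size=64)
model.save("model_websummit.h5")
```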

During the following days, we frequently challenged our car: we changed its route dynamically, set up a new route, or paused it in the middle of a run. We even put objects in its path to check whether it could find its way back to the route. It recovered every time, surpassing our expectations. It worked!

Adam and Gosia challenge our car and change its route on the go.

Our car attracted a lot of attention during the Summit. Many attendees filmed it, and we used that boost of attention to run a contest for people who helped us collect training data.

What’s next?

One of the key lessons we learned concerns the role of mechanical engineering in self-driving cars, and in IoT projects in general.

Mounting the camera was without question our biggest challenge. After the conference, we were approached by an incredibly smart woman who turned out to be a robotics specialist. Marie suggested that we install the camera a little deeper and told us that other types of sensors might help us improve our car. We’re really grateful for her feedback!

All in all, our first experience with building a self-driving car was super inspiring.

This project taught us a lot and sparked plenty of new ideas for the future.

Our first adventure in the space of autonomous vehicles could never have happened without Maciek and Karwer’s dedication. They put their hearts and souls into this project, and we couldn’t be more grateful! 🙏 Guys, you rock! 🤘
