Autonomous RC Car Part 5

Mikkel Wilson
Feb 25, 2019


We’ve got a small camera. How do we calibrate it?

New here? You may want to start from Part 0.

Snow

I was really hoping to generate some test data with the car. Getting data early and often is important when developing new products and can prevent premature optimizations. However, this is the track I was planning on running the car on:

Can you see the painted lane markers? Me neither.

So, there are a few optimizations I’ve been considering. Premature or otherwise, we’re stuck indoors, so we’re going to explore one of them. Small camera lenses often add radial (barrel) distortion to the images. This could skew our steering angle estimates, and with a little math it can be reduced or eliminated.
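For reference, the “little math” is the radial distortion model OpenCV uses: a point’s position gets scaled by a polynomial in its distance from the image center, and calibration estimates that polynomial’s coefficients. A rough sketch of the forward model (the k values are among the distortion coefficients calibration recovers for us):

# Radial (barrel/pincushion) distortion model, in normalized camera coordinates.
# Points far from the center get pushed outward or pulled inward depending on
# the sign of the coefficients.
def distort_point(x, y, k1, k2, k3=0.0):
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * scale, y * scale

# Undistortion inverts this mapping so straight lines in the world
# come out straight in the image.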

Camera Calibration

OpenCV has a convenient tutorial on how to undistort images. The gist of it is to take a series of pictures of a known-flat item, find some known points on it, and reshape the image so the lines between those points come out straight. The OpenCV tutorial images look like this:

I initially printed an image of squares and taped it to a plate of glass (very flat), but this wooden chess board with larger squares was easier to detect. My results were similar to the tutorial’s.

Overall the differences are slight. I changed the hyperparameters for the number of interior corners to 6 x 6 rather than the tutorial’s 7 x 6. I found that the Pi camera was a bit blurry around the edges, so reducing the grid size yielded better results. I was moving the board around to capture different areas of the camera’s field of view, and that may have contributed to the blur around the edges. I used 25 images for calibration data, running the code below several times and selecting only the images where findChessboardCorners() found results.

With my selected images in an images directory, we can run the code below to calibrate and undistort the images. When the calibration phase is complete, we’ll save off the parameters necessary to undistort using Python’s pickle feature (line 40). Later, we’ll use these parameters to undistort any images coming from this camera.
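If you want the shape of that code at a glance, here’s a condensed sketch along the lines of the OpenCV tutorial, assuming the 6 x 6 inner-corner grid described above and an images/ directory of JPEG captures; the calibration.p filename here is just a placeholder.

import glob
import pickle

import cv2
import numpy as np

pattern_size = (6, 6)  # inner corners per row/column, per the note above

# Reference 3D points for the flat board: (0,0,0), (1,0,0), (2,0,0), ...
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

objpoints = []  # 3D points in board space
imgpoints = []  # matching 2D points in image space
image_size = None

for fname in glob.glob('images/*.jpg'):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]

    found, corners = cv2.findChessboardCorners(gray, pattern_size, None)
    if not found:
        continue  # skip blurry or partial captures
    objpoints.append(objp)
    imgpoints.append(corners)

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, image_size, None, None)

# Save the camera matrix and distortion coefficients for later use.
with open('calibration.p', 'wb') as f:
    pickle.dump({'mtx': mtx, 'dist': dist}, f)

# Quick sanity check: undistort one of the calibration images.
img = cv2.imread(glob.glob('images/*.jpg')[0])
cv2.imwrite('undistorted.jpg', cv2.undistort(img, mtx, dist, None, mtx))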

Putting it Together

In Part 2 we wrote some RPi image capturing code, in Part 3 we did lane detection, and now we’ve established a baseline calibration for our camera so we can undistort the images it produces. Let’s put these three things together. This will be a large code block, but we’ve seen most of it before. The only substantial changes are around handling bad or missing data. Line 79 has our pickle calibration loading, line 92 should be familiar from where we initiated the Pi Camera, line 98 does the undistortion from above, and line 118 is where we add our detected lines to the newly corrected image. This will just grab the first 10 images and write them to disk. When we’re building training data we’ll store the original images and sensor data instead.
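If you’d rather see the shape of it without the error handling, here’s a boiled-down sketch of the same flow. The detect_lane_lines() helper is a stand-in for the Part 3 pipeline (a simple Canny + Hough pass), not the actual code from that post.

import pickle

import cv2
import numpy as np
from picamera import PiCamera
from picamera.array import PiRGBArray

# Load the calibration parameters we pickled earlier.
with open('calibration.p', 'rb') as f:
    calibration = pickle.load(f)
mtx, dist = calibration['mtx'], calibration['dist']

def detect_lane_lines(img):
    # Stand-in for the Part 3 lane detection: edge detection plus a
    # probabilistic Hough transform, drawn back onto the frame.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=20)
    overlay = img.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(overlay, (x1, y1), (x2, y2), (0, 0, 255), 2)
    return overlay

camera = PiCamera(resolution=(640, 480), framerate=10)
raw = PiRGBArray(camera, size=(640, 480))

frame_count = 0
for frame in camera.capture_continuous(raw, format='bgr', use_video_port=True):
    # Correct the lens distortion before looking for lanes.
    undistorted = cv2.undistort(frame.array, mtx, dist, None, mtx)
    annotated = detect_lane_lines(undistorted)
    cv2.imwrite('frame_{:02d}.jpg'.format(frame_count), annotated)

    raw.truncate(0)        # reset the capture buffer for the next frame
    frame_count += 1
    if frame_count >= 10:  # just grab the first 10 frames for now
        break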

The output running in my basement is less than thrilling, but does demonstrate that it works. We’ll remove the frame limits around line 125 when we get this running on the AV.

Door frame lanes? Kinda.

Roads Not Taken

Let’s step back for just a moment and consider what we’re not doing. This robot is going to have its brain on-board. This limits us in compute power but increases our reaction speed and autonomy. If we relax those constraints, what could we have done instead?

FPV Camera with Integrated Transmitter

First Person View (FPV) cameras are the inexpensive wireless cameras used on racing drones, both fixed-wing and multi-rotor. The cameras are lightweight, use little power, and transmit in near real-time (no compression) in the 5.8 GHz band. They’re intended to be received by VR goggles worn by a drone pilot, but there are also USB receivers that can interface with a laptop or video capture card. We could cram a laptop with a bunch of GPUs and dramatically outstrip the performance of our little Raspberry Pi. That would let us use multiple cameras, support parallax depth approximation, run YOLOv3 at a high frame rate, etc. There are some voices in the AV community who think autonomous vehicles won’t be safe without external resources helping to drive the car. I can certainly understand the attraction.

To make the RC transmitter/receiver portion easier, we could use simple transmitter/receiver pairs like the 2.4 GHz nRF24 or the 433 MHz RFM96 LoRa chips. I have a bunch of these so I may prototype this just for fun.

We’re not taking this road because we want the constraints of autonomy to be with the robot and not external. The point of this project is to explore the limitations of full-sized autonomous vehicles with on-board computing and what sensor-fusion we can accomplish with limited resources.

What’s Next?

My LiDAR unit is in the mail, so we’ll get started playing with that. When the snow melts I’ll get the camera mounted to the car and do some driving. Once we have gathered some training data we can start building our regression model to predict steering angles from sensor input.

Update: Continue to Part 6.
