Autonomous RC Car Part 3

Mikkel Wilson
3 min read · Feb 8, 2019


Now we have OpenCV. Let’s write some code.

If you haven’t caught up, here are the links to parts zero, one, and two.

Lane Detection

There are so many lane detection implementation articles that I hesitate to join the fray. Does the world really need another article / video / repo describing the same algorithms?

I’m going to keep this light and focus on the specifics of running this on a Raspberry Pi with the Pi Camera module, within the resource constraints of that platform.

Get Image From Pi Camera

Let’s install the picamera library and grab some video frames. We need to specify the [array] ‘extra’ for picamera so we can more easily get numpy arrays from the camera frames.

  • pip install “picamera[array]”
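A minimal sketch of a videocap.py along these lines — it grabs ten 720p frames from the Pi Camera as numpy arrays and writes them to disk; the warm-up delay and output filenames here are illustrative:

# videocap.py — sketch: capture ten 720p frames as numpy arrays
import time
import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (1280, 720)
raw_capture = PiRGBArray(camera, size=(1280, 720))

# Give the sensor a moment to warm up before grabbing the first frame.
time.sleep(0.3)

for i, frame in enumerate(camera.capture_continuous(raw_capture, format="bgr",
                                                     use_video_port=True)):
    image = frame.array                               # numpy array, shape (720, 1280, 3)
    cv2.imwrite("frame_{:02d}.jpg".format(i), image)
    raw_capture.truncate(0)                           # reset the stream for the next frame
    if i >= 9:                                        # stop after 10 frames
        break

camera.close()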

Running the code above with python videocap.py will give you 10 images in 720p resolution. It’s worth noting that it takes about 300ms for the camera to warm up. We may have to account for that later. I’ve converted and compressed these into an animated gif. The originals are higher quality.

Yes, that’s a plastic traffic cone. More on that later.

Test Image

We don’t have a robot yet, so let’s use this test image as a proxy for our eventual camera feed. This is a 1/10th-scale vehicle, so putting it on a full-sized road and expecting a camera a few inches off the ground to see what a full-sized AV’s camera sees isn’t an accurate proxy. We need 1/10th-scale lanes to work with. I’m going with a high school running track.

Random image from the internet resampled to 1280x720

I mentioned that there are many lane detection samples on the internet. I won’t claim to have a better one, but there’s one notable difference on tracks like this: the horizontal ‘finish’ lines really tend to throw off the average line angles. Removing lines with a very low slope seems to improve line angle detection significantly (around line 47 in the code). Here’s my code with an output image below.
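Roughly, the pipeline is Canny edges, a region-of-interest mask, probabilistic Hough lines, and then the low-slope filter. Here’s a sketch of that shape; the thresholds are illustrative stand-ins rather than the exact values from the full code:

# lanes.py — sketch: edges, ROI mask, Hough lines, low-slope filter
import cv2
import numpy as np

MIN_SLOPE = 0.3  # illustrative cutoff: drop near-horizontal lines (finish lines)

def detect_lanes(path):
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # Keep only a triangular region near the bottom of the frame where the lanes are.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w, h), (w // 2, int(h * 0.6))]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    edges = cv2.bitwise_and(edges, mask)

    lines = cv2.HoughLinesP(edges, 2, np.pi / 180, 100,
                            minLineLength=40, maxLineGap=5)
    left, right = [], []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            if x1 == x2:
                continue
            slope = (y2 - y1) / (x2 - x1)
            if abs(slope) < MIN_SLOPE:  # the low-slope filter described above
                continue
            (left if slope < 0 else right).append((x1, y1, x2, y2))
    return left, right, image

if __name__ == "__main__":
    left, right, image = detect_lanes("track.jpg")  # illustrative filename
    for x1, y1, x2, y2 in left + right:
        cv2.line(image, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 3)
    cv2.imwrite("lanes_out.jpg", image)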

Lanes Detected

We could do some performance metrics here. We need to be able to sample images and update our steering angle as many times per second as possible. A low frame rate will likely yield jerky, ‘bang bang’-like steering. Our Pi Camera can do up to 60fps in 720p mode. How fast did our Python code run?

$ time python lanes.py
(left, right) [[349 720 491 432], [946 720 816 432]]
python lanes.py  0.36s user 0.09s system 95% cpu 0.467 total

0.36s per frame is 2.77fps. Nowhere near 60fps. There are a lot of factors like Python VM startup time, file read/write speed, and code optimizations we could make that will affect this speed later. We just learned one place where we may have to seek performance optimizations.
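One way to separate interpreter startup from the per-frame cost is to time the detection call inside a single process — a quick sketch, assuming a detect_lanes() function like the one sketched above:

import time
from lanes import detect_lanes  # illustrative: the function from the sketch above

FRAMES = 50
start = time.perf_counter()
for _ in range(FRAMES):
    detect_lanes("track.jpg")   # illustrative test image
elapsed = time.perf_counter() - start
print("avg per frame: {:.3f}s ({:.1f} fps)".format(elapsed / FRAMES, FRAMES / elapsed))

This still re-reads the image from disk on every pass, so it’s closer to a worst case; on the robot the frame will already be in memory from the camera.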

Onward

It would be trivial to combine the video test code and the line detection code, but that’s not super useful without the camera being mounted on a robot. In the next installment, let’s see about getting some telemetry.

Update: Continue to Part 4.
