Steering With Vision
When driving, finding where the road is and deciding where to steer is easy. For humans. For machines, not so much. Here’s my progress so far on writing an OpenCV lane-finding system. It works, and next up I’ll make it more robust to different lighting conditions, lane boundaries, and angles. Oh, and yes, see if you can guess what I’m using for the “road.”
First, we detect vertical edges:

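For the curious, here’s roughly what that step can look like in OpenCV. This is a minimal sketch, assuming a grayscale conversion followed by a Sobel gradient in the x direction (which responds strongly to vertical lines like lane markings); the kernel size and threshold are placeholder guesses, not tuned values.

```python
import cv2
import numpy as np

def vertical_edges(bgr_image, kernel_size=3, threshold=40):
    """Highlight vertical edges with a horizontal (x) Sobel gradient.

    kernel_size and threshold are illustrative placeholders, not tuned values.
    """
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # The x-gradient responds strongly to vertical lines such as lane markings
    sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=kernel_size)
    abs_sobel = np.absolute(sobel_x)
    scaled = np.uint8(255 * abs_sobel / max(np.max(abs_sobel), 1e-6))
    # Keep only the strong responses as a binary mask
    edges = np.zeros_like(scaled)
    edges[scaled >= threshold] = 255
    return edges

# Example usage:
# frame = cv2.imread("frame.png")
# cv2.imwrite("edges.png", vertical_edges(frame))
```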
Then, we accidentally run edge detection on the wrong image format and briefly experience a psychedelic freakout.

Then we go back to a sensibly tuned and optimised edge detection system:

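I won’t go through all the tuning here, but the general shape of a “sensibly tuned” version is to smooth the image before detecting edges so noise doesn’t survive as spurious lines. A minimal sketch, assuming a Gaussian blur followed by Canny (one common recipe; the actual detector and all three parameter values below are placeholders, not the final numbers):

```python
import cv2

def tuned_edges(bgr_image, blur_ksize=5, low=50, high=150):
    """Noise-suppressed edge detection: blur first, then Canny.

    blur_ksize, low, and high are placeholder guesses standing in for
    whatever the tuned values end up being.
    """
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
    return cv2.Canny(blurred, low, high)
```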
Next, we classify vertical edge pixels on the left half of the image as the left lane line (red), and those on the right half as the right lane line (blue). On each side, we run a small window up from the bottom of the image, re-centring it on the line as it goes.

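Here’s a sketch of that sliding-window idea, assuming the binary edge image from the previous step. The window count, window half-width (margin), and the minimum pixel count for re-centring are all illustrative guesses:

```python
import cv2
import numpy as np

def sliding_window_lanes(edges, n_windows=9, margin=50, min_pixels=30):
    """Classify edge pixels into left (red) and right (blue) lane lines.

    n_windows, margin, and min_pixels are illustrative guesses.
    `edges` is a binary (0/255) image from the edge-detection step.
    """
    h, w = edges.shape
    out = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
    nonzero_y, nonzero_x = edges.nonzero()

    # Start each window at the strongest column in its half of the
    # bottom quarter of the image.
    histogram = np.sum(edges[h * 3 // 4:, :], axis=0)
    centres = {
        "left": int(np.argmax(histogram[: w // 2])),
        "right": int(np.argmax(histogram[w // 2:])) + w // 2,
    }
    colours = {"left": (0, 0, 255), "right": (255, 0, 0)}  # BGR: red, blue

    window_height = h // n_windows
    for side, centre in centres.items():
        lane_idx = []
        for i in range(n_windows):
            y_hi = h - i * window_height
            y_lo = y_hi - window_height
            x_lo, x_hi = centre - margin, centre + margin
            cv2.rectangle(out, (x_lo, y_lo), (x_hi, y_hi), (0, 255, 0), 1)
            inside = ((nonzero_y >= y_lo) & (nonzero_y < y_hi) &
                      (nonzero_x >= x_lo) & (nonzero_x < x_hi)).nonzero()[0]
            lane_idx.append(inside)
            # Re-centre the next window on the pixels found in this one
            if len(inside) > min_pixels:
                centre = int(np.mean(nonzero_x[inside]))
        lane_idx = np.concatenate(lane_idx)
        out[nonzero_y[lane_idx], nonzero_x[lane_idx]] = colours[side]
    return out
```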
After a bit of tuning, we get it working pretty consistently for sensibly angled lane lines:

Then I try some perspective warping and overlaying the result back onto the original image, which gives the windows this cool receding-perspective look, but doesn’t detect the lanes as well:

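A sketch of that warped version: warp the edge image into a bird’s-eye view, run the same sliding-window search there, then warp the drawn windows back so they line up with (and recede into) the original image. It reuses sliding_window_lanes from the sketch above, and the source/destination points are made-up placeholders; the real ones depend entirely on where the camera sits relative to the “road”:

```python
import cv2
import numpy as np

def warped_lane_overlay(frame, edges):
    """Bird's-eye warp, sliding-window search, unwarp, and blend onto the frame.

    The src/dst points are placeholders chosen for illustration only.
    """
    h, w = frame.shape[:2]
    src = np.float32([[w * 0.1, h], [w * 0.45, h * 0.6],
                      [w * 0.55, h * 0.6], [w * 0.9, h]])
    dst = np.float32([[w * 0.25, h], [w * 0.25, 0],
                      [w * 0.75, 0], [w * 0.75, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    M_inv = cv2.getPerspectiveTransform(dst, src)

    # Bird's-eye view of the edge image
    birds_eye = cv2.warpPerspective(edges, M, (w, h))
    # Sliding-window classification in the warped space (see earlier sketch)
    lanes = sliding_window_lanes(birds_eye)
    # Warping back gives the windows their receding-perspective look
    unwarped = cv2.warpPerspective(lanes, M_inv, (w, h))
    return cv2.addWeighted(frame, 1.0, unwarped, 0.6, 0)
```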
And putting it all together, without the warping, and adding a centre line so we can calculate a steering angle, we get:

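Finally, a sketch of turning that centre line into a steering angle. This assumes each lane line gets a second-order polynomial fit (x as a function of y) from its classified pixels, the centre line is the average of the two fits, and the angle comes from a look-ahead point on it. The look-ahead fraction and the whole formulation are one plausible way to do it, not necessarily the final calculation:

```python
import numpy as np

def steering_angle(left_fit, right_fit, image_height, lookahead=0.5):
    """Steering angle (degrees) from the lane centre line.

    left_fit / right_fit are polynomial coefficients for x = f(y), e.g. from
    np.polyfit on each lane's pixels. lookahead is an illustrative fraction.
    """
    y_bottom = image_height - 1
    y_ahead = image_height * (1 - lookahead)

    def centre_x(y):
        # Centre line = midpoint between the left and right lane fits
        return (np.polyval(left_fit, y) + np.polyval(right_fit, y)) / 2.0

    # Vector from the bottom of the centre line to a point further up the road
    dx = centre_x(y_ahead) - centre_x(y_bottom)
    dy = y_bottom - y_ahead
    # Zero means straight ahead; positive means steer right
    return float(np.degrees(np.arctan2(dx, dy)))

# Example usage, with hypothetical per-lane pixel coordinates:
# left_fit = np.polyfit(left_y, left_x, 2)
# right_fit = np.polyfit(right_y, right_x, 2)
# angle = steering_angle(left_fit, right_fit, frame.shape[0])
```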

