Experiment Using Deep Learning to Find Road Lane Lines

Paul Heraty
3 min read · Jan 12, 2017


After creating an OpenCV pipeline to detect road lanes for self-driving cars (see https://medium.com/@heratypaul/udacity-sdcnd-advanced-lane-finding-45012da5ca7d#.yskumiuql), I was curious to see what results I could get using a CNN to determine lane lines. Having gone through the traditional CV method of finding lanes, I figured I could generate labeled data that I could then use to train a network.

So I modified a neural network that I had used in the SDCND Behavioral Cloning lab (5 CNN layers followed by 3 FCNN layers) and added 5 new outputs to it. So now the network looks like 5 CNN layers followed by 6x 3 FCNN layers, i.e. one 3-layer fully-connected stack per output. The outputs are the lane polynomial coefficients for both the left and right lanes, i.e. a*y² + b*y + c, where I'm predicting a, b and c for each lane.
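To make that topology concrete, here's a minimal Keras sketch of the shape described above: a shared stack of 5 convolutional layers feeding 6 parallel heads of 3 fully-connected layers, one head per coefficient. The layer sizes, filter counts and names are illustrative placeholders, not the exact values from the lab network.

```python
# Sketch only: shared 5-layer conv trunk + 6 parallel 3-layer FC heads,
# one scalar output per polynomial coefficient (left/right a, b, c).
# Filter counts and dense sizes are assumptions, not the author's exact values.
from tensorflow.keras import layers, Model

def build_lane_model(input_shape=(160, 320, 3)):
    inp = layers.Input(shape=input_shape)
    x = inp
    for filters in (24, 36, 48, 64, 64):               # 5 CNN layers
        x = layers.Conv2D(filters, 3, strides=2, activation='relu')(x)
    x = layers.Flatten()(x)

    outputs = []
    for name in ('left_a', 'left_b', 'left_c', 'right_a', 'right_b', 'right_c'):
        h = layers.Dense(100, activation='relu')(x)    # 3 FCNN layers per output
        h = layers.Dense(50, activation='relu')(h)
        outputs.append(layers.Dense(1, name=name)(h))

    model = Model(inp, outputs)
    model.compile(optimizer='adam', loss='mse')        # regression on coefficients
    return model
```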

As input, I provide the undistorted RGB image (so I still do camera calibration and distortion correction); the labels are the left/right a, b, c coefficients that my CV solution calculated in perspective space after fitting a polynomial to the lanes. I have about 900 images in my training set, and ~150 in my validation and test sets.
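The labels themselves fall out of the CV pipeline's polynomial fit. Roughly, the idea looks like the sketch below; the function and variable names are illustrative assumptions, not the actual pipeline code.

```python
# Rough sketch of how a training label can be built from the CV pipeline:
# fit x = a*y^2 + b*y + c to the detected lane pixels in perspective
# (bird's-eye) space, separately for each lane.
import numpy as np

def fit_lane_coefficients(lane_ys, lane_xs):
    """Return (a, b, c) for x = a*y**2 + b*y + c from detected lane pixels."""
    a, b, c = np.polyfit(lane_ys, lane_xs, 2)   # highest-order coefficient first
    return a, b, c

# One label per image: both lanes' coefficients concatenated, e.g.
# label = [left_a, left_b, left_c, right_a, right_b, right_c]
```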

After training, I tested on a single image to see what the predicted lanes looked like compared to the actual. Here’s a plot of what that looks like (blue = actual, red = predicted).

It was doing a reasonable job of predicting the c values, but the a and b values were way off. It occurred to me that normalizing the a, b and c values should help the network, so I ran it again with normalized values. Here's what the predicted lanes looked like now.

Not too bad for a first pass, considering I’ve not done any optimization of the model params and only have 900 input samples.
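For reference, the kind of normalization I mean is per-coefficient standardization with training-set statistics; a rough sketch follows (the exact scaling and names are illustrative, not necessarily the scheme used here):

```python
# Illustrative sketch: standardize each of the six coefficient targets using
# training-set statistics, and undo the scaling on the network's predictions.
import numpy as np

def normalize_labels(labels):
    """labels: (N, 6) array of [left_a, left_b, left_c, right_a, right_b, right_c]."""
    mean = labels.mean(axis=0)
    std = labels.std(axis=0) + 1e-8            # avoid division by zero
    return (labels - mean) / std, mean, std

def denormalize_predictions(preds, mean, std):
    """Map network outputs back to real coefficient values."""
    return preds * std + mean
```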

After some tuning of the CNN, the results are looking better:

So then I decided to use this model on the project video to see what the result looked like. Obviously I had to do an inverse perspective transform to get the lane equations back to video space. Here's what it looks like:

Note: no smoothing or tracking of anything between frames. No sanity checking of the lane equations. Every lane is calculated and drawn on every frame.
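The overlay step itself is just the standard inverse warp from the advanced lane finding pipeline: evaluate each predicted polynomial in bird's-eye space, fill the lane polygon, and warp it back with the inverse perspective matrix. A rough sketch (the matrix name, image size and blending weights below are assumptions):

```python
# Sketch of the unwarp/overlay step. Minv is assumed to be the inverse of the
# perspective-warp matrix used for the bird's-eye view.
import cv2
import numpy as np

def draw_lane(undistorted, left_coeffs, right_coeffs, Minv, img_size=(1280, 720)):
    """Overlay the predicted lane area, given coefficients in bird's-eye space."""
    ys = np.linspace(0, img_size[1] - 1, img_size[1])
    left_xs = np.polyval(left_coeffs, ys)      # x = a*y^2 + b*y + c
    right_xs = np.polyval(right_coeffs, ys)

    # Fill the polygon between the two lane polynomials in warped space
    warp = np.zeros((img_size[1], img_size[0], 3), dtype=np.uint8)
    pts_left = np.transpose(np.vstack([left_xs, ys]))
    pts_right = np.flipud(np.transpose(np.vstack([right_xs, ys])))
    polygon = np.vstack([pts_left, pts_right]).astype(np.int32)
    cv2.fillPoly(warp, [polygon], (0, 255, 0))

    # Warp the lane area back to the camera view and blend it onto the frame
    unwarped = cv2.warpPerspective(warp, Minv, img_size)
    return cv2.addWeighted(undistorted, 1.0, unwarped, 0.3, 0)
```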

I was pleasantly surprised, to be honest. It's managing to do a reasonable job of tracking the lanes. With more work/training, and more samples, it could probably do an even better job.

It runs about 5x faster than my CV implementation too!
