A pipeline for a self-driving car to detect lanes on the road

Raj Uppala
3 min read · Jun 4, 2017

While self-driving cars incorporate a wide array of technologies (deep learning, cloud computing, and robotics to name a few) powered by data from a number of sensors, one of the relatively simpler tasks is for the car to detect the lane lines on the road to remain within its lane while driving. This project focuses on how to implement a lane line detection algorithm on images and videos using Python and Computer Vision techniques.

Let’s start by looking at a real-world image that I feed to the algorithm:

This image is fed to a pipeline with five steps to detect the lane lines:

Image processing pipeline to detect lane lines

The five steps in the above pipeline are discussed in detail below.

a. Convert the input image to grayscale and apply a Gaussian blur to smooth the image and reduce noise:
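
Here is a minimal sketch of this step in OpenCV (the image path and the 5×5 kernel size are illustrative choices, not fixed requirements):

```python
import cv2

# Read the road image (path is illustrative) and convert it to grayscale.
image = cv2.imread("test_images/solidWhiteRight.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Smooth the grayscale image with a Gaussian blur; a 5x5 kernel is a
# common choice, but the exact kernel size is a tuning parameter.
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
```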

b. Apply the Canny function:

Apply the Canny function to the image from the previous step to create an image that shows all the edges. Canny finds edges by computing the intensity gradient at each pixel and keeping the pixels where the gradient changes sharply, using two thresholds to decide which edges are strong enough to keep:
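
A minimal sketch of this step, continuing from the blurred image above (the 50/150 hysteresis thresholds are common starting values that need tuning per image or video):

```python
import cv2

# Apply Canny edge detection to the blurred grayscale image. Pixels whose
# gradient exceeds the high threshold are strong edges; pixels between the
# two thresholds are kept only if connected to a strong edge.
edges = cv2.Canny(blurred, 50, 150)
```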

c. Apply the Hough transform function:

Since the lanes are in the bottom half of the image, I created a trapezoidal “region of interest” mask to ensure that lines outside the region of interest do not interfere with the algorithm. I then applied a Hough transform to the edges within the mask to extract the lane lines in the image. For the Hough transform to work properly, a few parameters need to be tuned: the minimum number of pixels needed to form a line, the maximum gap in pixels between connectable line segments, and the minimum number of votes (so that the algorithm only keeps lines it detects with enough confidence and ignores the rest). Here’s how the image looks after this step:
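
A sketch of the masking and Hough transform, continuing from the edge image above (the ROI vertices and the Hough parameter values shown are illustrative starting points):

```python
import cv2
import numpy as np

height, width = edges.shape

# Trapezoidal region of interest covering the bottom half of the image;
# the exact vertices depend on the camera position and are example values.
vertices = np.array([[
    (int(0.10 * width), height),
    (int(0.45 * width), int(0.60 * height)),
    (int(0.55 * width), int(0.60 * height)),
    (int(0.95 * width), height),
]], dtype=np.int32)

mask = np.zeros_like(edges)
cv2.fillPoly(mask, vertices, 255)
masked_edges = cv2.bitwise_and(edges, mask)

# Probabilistic Hough transform on the masked edges. The parameter values
# (votes threshold, min line length, max gap) are example settings only.
lines = cv2.HoughLinesP(
    masked_edges,
    rho=2,                 # distance resolution in pixels
    theta=np.pi / 180,     # angular resolution in radians
    threshold=15,          # minimum number of votes
    minLineLength=40,      # minimum number of pixels to form a line
    maxLineGap=20,         # maximum gap between connectable segments
)
```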

d. Extend the lines to draw a single long lane line for each of the left and right lanes:

I accomplished this by filtering the lines by slope, keeping only those whose slopes fall within a certain range of the actual lane-line slopes and ignoring the rest. Once the left and right lane lines were separated, I fit a best-fit line through the points of each lane, which gives a slope and intercept for both the left and right lanes. With those slopes and intercepts known, I extended the lines to the edges of the ROI mask. The result:
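
A sketch of the slope filtering and extrapolation, continuing from the Hough output above (the 0.5 slope cutoff and the fit_and_extend helper are illustrative, not fixed parts of the pipeline):

```python
import numpy as np

left_points, right_points = [], []

# Separate Hough segments by slope: negative slopes belong to the left
# lane, positive slopes to the right (image y grows downward). The 0.5
# magnitude cutoff rejects near-horizontal lines and is an example value.
for line in lines:
    for x1, y1, x2, y2 in line:
        if x2 == x1:
            continue  # skip vertical segments to avoid division by zero
        slope = (y2 - y1) / (x2 - x1)
        if slope < -0.5:
            left_points += [(x1, y1), (x2, y2)]
        elif slope > 0.5:
            right_points += [(x1, y1), (x2, y2)]

def fit_and_extend(points, y_bottom, y_top):
    """Fit a single line through the points and return its endpoints
    extended from the bottom of the image to the top of the ROI."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    slope, intercept = np.polyfit(xs, ys, 1)   # best-fit line y = m*x + b
    x_bottom = int((y_bottom - intercept) / slope)
    x_top = int((y_top - intercept) / slope)
    return (x_bottom, y_bottom), (x_top, y_top)

y_bottom = height                 # bottom edge of the image
y_top = int(0.60 * height)        # top of the ROI mask (example value)
left_lane = fit_and_extend(left_points, y_bottom, y_top)
right_lane = fit_and_extend(right_points, y_bottom, y_top)
```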

e. Overlay the lane lines back onto the original image

This is the final step of the pipeline, where I overlay the lane lines on top of the original image. The final result:
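
A sketch of the overlay, continuing from the previous steps (the line colour, thickness, and blend weights are example values):

```python
import cv2
import numpy as np

# Draw the two extrapolated lane lines on a blank image, then blend that
# image with the original frame so the lines appear semi-transparently.
line_image = np.zeros_like(image)
for (p1, p2) in (left_lane, right_lane):
    cv2.line(line_image, p1, p2, (0, 0, 255), thickness=10)

result = cv2.addWeighted(image, 0.8, line_image, 1.0, 0.0)
cv2.imwrite("lane_lines_result.jpg", result)
```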

The above solution was extended to work with videos as well, and the code is posted on my GitHub page.
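
For reference, here is a sketch of how the same pipeline can be run on a video frame by frame (the file paths and the process_frame wrapper are placeholders for the steps above):

```python
import cv2

def process_frame(frame):
    """Placeholder for the pipeline above: grayscale, blur, Canny,
    ROI mask, Hough transform, line extrapolation, and overlay."""
    ...  # steps a-e applied to a single frame
    return frame

# Read an input video frame by frame, run the lane-detection pipeline on
# each frame, and write the annotated frames to an output video.
cap = cv2.VideoCapture("test_videos/solidWhiteRight.mp4")   # illustrative path
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(process_frame(frame))

cap.release()
out.release()
```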
