Lane Detection using OpenCV

Digant Patel
5 min read · May 5, 2018

--

So… why lane detection?

Because detecting the lane is the first step toward keeping a car inside it. :D

Intro

This was my first project in Udacity’s Self-Driving Car Nanodegree program. Along the way I learned some core computer vision concepts, such as Canny edge detection and the Hough transform, as well as some basic image processing.

So let’s jump into the project. We as humans can easily detect a lane and drive accordingly, but how do you teach a car to detect lanes?

Let me ask you a question. What are the key features in this image that are required to detect a lane?

The answer is color, shape, orientation, and position in the image.

If you guessed those, well done!

We will now make use of these features to reach our goal. The lighter pixels in the image represent the white lane lines, so we can apply a color threshold to select only the lighter pixels. Also, since the camera is mounted at a fixed position on the car, the lane lines always appear in roughly the same part of the frame. We don’t need the rest of the image (e.g., the upper part containing the sky), so we can apply a region mask and run the color threshold only on that region of interest.

Example of region selection for masking.

Code for applying the color threshold:

import numpy as np
import matplotlib.image as mpimg

# Read in the image (filename is illustrative)
image = mpimg.imread('test_image.jpg')
color_select = np.copy(image)

# Pixels darker than these values in any channel will be blacked out
red_threshold = 200
green_threshold = 200
blue_threshold = 200
rgb_threshold = [red_threshold, green_threshold, blue_threshold]

# Identify pixels below the threshold in any of the three channels
thresholds = (image[:,:,0] < rgb_threshold[0]) \
           | (image[:,:,1] < rgb_threshold[1]) \
           | (image[:,:,2] < rgb_threshold[2])

# Black out everything that failed the threshold test
color_select[thresholds] = [0,0,0]
Output after applying the color threshold.
Superimposing the detected lanes on the original image.

It’s done!!! We have successfully detected lane lines in an image. But where was the computer vision in this? Well, it turns out this method works only under limited conditions.

Lane lines are not always the same color, and even same-colored lane lines are not detected under different lighting conditions by this simple color-selection formula.

So now computer vision comes into the picture to help us out.

We will use two techniques to complete the project: i) Canny edge detection and ii) the Hough transform.

i) Canny Edge Detection

The goal of Canny edge detection is to identify the boundaries of objects in an image.

I won’t be discussing how Canny edge detection and the Hough transform work internally, as that would make this post very lengthy; each deserves a separate post of its own.

Some useful resources to learn about Canny edge detection: Link 1, Link 2.

An edge in an image is where the pixel values change drastically. The Canny edge detection algorithm makes use of this principle to detect edges in an image.

edges = cv2.Canny(gray_image,low_threshold,high_threshold)

A grayscale image is passed to the Canny function along with the low and high thresholds. The algorithm will first detect strong edge (strong gradient) pixels above the high_threshold and reject pixels below the low_threshold. Next, pixels with values between the low_threshold and high_threshold will be included as long as they are connected to strong edges. The output edges is a binary image with white pixels tracing out the detected edges and black everywhere else.

A low_threshold to high_threshold ratio of 1:2 or 1:3 is recommended.

Output after applying Canny Algorithm.

ii) Hough Transform

The Hough transform, when applied to the output of the Canny algorithm, gives us an array of line segments (i.e., the x and y coordinates of the two endpoints of each segment).

I strongly recommend going through this link to understand the Hough transform.

In this way we get line segments for edges in all parts of the image. We can apply a region-of-interest mask to keep only the segments that belong to the lanes.

Output after applying Hough Transform and Region mask.

As you can see there are many individual line segments on both sides of the lane. How do you connect these line segments to form a single line on both sides?

Let’s recall some basic math. A line traveling upwards has a positive slope and a line traveling downwards has a negative slope. We can use this principle to separate the left and right line segments. But in our case the left lane has a negative slope and the right lane a positive one, because the image origin is at the top-left corner, with y increasing downward (remember that array indices start at 0).

With this technique we calculate the average slope and intercept for the left and right lanes. The y values of the final line are fixed: max_y is at the bottom of the image, and min_y can be found from the line segments returned by the Hough transform.

Since we have the slope, the intercept, and a y value, we can calculate the x value from the line equation y = mx + c (where m is the slope and c is the intercept), i.e., x = (y - c) / m.

Finally, you can draw the lines on the image with cv2.line(img, (x_min_left, y_min), (x_max_left, y_max), color, thickness).
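Putting the last few steps together, a minimal sketch of the slope filtering, averaging, and extrapolation might look like this. The function names and the exact averaging strategy are my own illustration, not the original project code:

```python
import numpy as np

def split_by_slope(segments):
    """Left-lane segments have negative slope, right-lane segments positive,
    because the image origin is at the top-left (y grows downward)."""
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:                      # skip vertical segments
            continue
        slope = (y2 - y1) / (x2 - x1)
        (left if slope < 0 else right).append((x1, y1, x2, y2))
    return left, right

def average_lane_line(segments, y_min, y_max):
    """Average the slope and intercept of the segments (assumed non-vertical,
    e.g. already filtered by split_by_slope), then extrapolate one line
    between the bottom of the image (y_max) and y_min."""
    slopes = [(y2 - y1) / (x2 - x1) for x1, y1, x2, y2 in segments]
    intercepts = [y1 - s * x1 for s, (x1, y1, _, _) in zip(slopes, segments)]
    m, c = np.mean(slopes), np.mean(intercepts)
    # Solve y = m*x + c for x at the two fixed y values
    return (int((y_max - c) / m), y_max), (int((y_min - c) / m), y_min)
```

Each returned pair of endpoints can then be passed to cv2.line as shown above.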

Final Result

And Yes!! It’s finally done.

We can apply this pipeline to a video, which simply runs the same series of steps on every frame. See the output of my pipeline on a test video.

See the code implementation here.

Possible improvements:

  • Information from the past N frames can be averaged for smoother lines.
  • The HSL color space can be used instead of grayscale to better detect lane lines under different lighting conditions.
  • More advanced techniques can be used to detect curved lanes.
