Advanced Lane Line Finding

Author: Guang Yang

The goals of this project are the following:

  • Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
  • Apply a distortion correction to raw images.
  • Use color transforms, gradients, etc., to create a thresholded binary image.
  • Apply a perspective transform to rectify the binary image (“bird's-eye view”).
  • Detect lane pixels and fit to find the lane boundary.
  • Determine the curvature of the lane and vehicle position with respect to center.
  • Warp the detected lane boundaries back onto the original image.
  • Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

Camera Calibration

Each lens has its own unique distortion determined by its physical parameters. The most common distortions are radial distortion and tangential distortion. Since we will be using a camera to track lane lines, as well as to determine the curvature of the road, it is critical for us to remove these optical distortions. Based on the documentation from OpenCV, radial distortion can be modeled with the following function:

x_distorted = x(1 + k_1·r² + k_2·r⁴ + k_3·r⁶)
y_distorted = y(1 + k_1·r² + k_2·r⁴ + k_3·r⁶)

and tangential distortion is modeled as the following:

x_distorted = x + [2·p_1·x·y + p_2·(r² + 2x²)]
y_distorted = y + [p_1·(r² + 2y²) + 2·p_2·x·y]

Therefore, we can define the distortion coefficients d as:

d = (k_1, k_2, p_1, p_2, k_3)

The camera matrix is defined as:

C = | f_x   0   c_x |
    |  0   f_y  c_y |
    |  0    0    1  |

where f_x, f_y are the focal lengths of the camera lens and c_x, c_y is the optical center. To obtain the camera matrix C and the distortion coefficients d, we can utilize the cv2.calibrateCamera() function from OpenCV. We can now transform the original chessboard image into an undistorted image.

The code is the following:
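
A minimal sketch of the calibration step, assuming the chessboard images live in a camera_cal/ folder and have 9 × 6 inner corners (the folder name and corner count are assumptions, not taken from the original code):

```python
import glob
import cv2
import numpy as np

# Prepare object points for a 9x6 chessboard: (0,0,0), (1,0,0), ..., (8,5,0)
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

objpoints = []  # 3D points in real-world space
imgpoints = []  # 2D points in the image plane

for fname in glob.glob('camera_cal/*.jpg'):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (9, 6), None)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# mtx is the camera matrix C, dist holds the distortion coefficients d
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
```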

Through this process, we obtain both the camera calibration matrix mtx and the distortion coefficients dist, which will be used later.

Image Distortion Correction

Once we obtain the camera matrix and distortion coefficients, we can use them to correct the distortion in the lane-line images. To demonstrate this process, I will describe how I apply the distortion correction to one of the test images like this one:
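
With mtx and dist in hand, undistorting a frame is a single OpenCV call (the test-image path below is an assumption):

```python
import cv2

img = cv2.imread('test_images/test1.jpg')
undistorted = cv2.undistort(img, mtx, dist, None, mtx)
```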

Sobel Operator for Gradient Measurements

The Sobel operator is used to perform a convolution on the original image to determine how the gradient changes. Intuitively, we want to measure the change of the gradient with respect to both the x axis and the y axis, as well as the direction of the gradient.

In this project, I use 3 × 3 Scharr filter kernels (selected by setting ksize = -1 in OpenCV), namely S_x and S_y, for the two axes:

S_x = | -3    0    3 |
      | -10   0   10 |
      | -3    0    3 |

S_y = | -3  -10   -3 |
      |  0    0    0 |
      |  3   10    3 |

The gradient along the x axis is G_x = S_x ∗ A, and the gradient along the y axis is G_y = S_y ∗ A, where A is the image and ∗ denotes convolution. The magnitude S and direction θ of the gradient can be easily obtained through trigonometry:

S = √(G_x² + G_y²)
θ = arctan(G_y / G_x)

The Sobel operator code is the following:
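
A condensed sketch of that step; the function name and threshold ranges are illustrative assumptions:

```python
import cv2
import numpy as np

def gradient_thresholds(img, mag_thresh=(30, 100), dir_thresh=(0.7, 1.3)):
    """Binary image of pixels whose gradient magnitude and direction
    both fall inside the given threshold ranges."""
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # ksize=-1 selects the 3x3 Scharr kernel instead of the default Sobel
    sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=-1)
    sobel_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=-1)

    magnitude = np.sqrt(sobel_x ** 2 + sobel_y ** 2)
    magnitude = np.uint8(255 * magnitude / np.max(magnitude))
    direction = np.arctan2(np.absolute(sobel_y), np.absolute(sobel_x))

    binary = np.zeros_like(gray)
    binary[(magnitude >= mag_thresh[0]) & (magnitude <= mag_thresh[1]) &
           (direction >= dir_thresh[0]) & (direction <= dir_thresh[1])] = 1
    return binary
```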

Here is a comparison of the original image and the image after applying Sobel thresholding:

Color Thresholding

Another thresholding technique works in color space. Intuitively, we want to extract the colors of interest that resemble the lane lines (in this case, yellow and white). The standard RGB color space is a three-dimensional vector space with red, green, and blue as its axes. In theory, we could perform color thresholding directly in RGB color space. However, the ambient light can change dramatically in real-life situations, which can lead to poor performance and various other issues. Alternatively, we can represent an image in the Hue, Saturation, and Value (HSV) color space or the Hue, Lightness, and Saturation (HLS) color space. Why do we want to perform color thresholding in those color spaces? Because hue and saturation are independent of brightness.

For this project, I decided to use the HLS color space for color thresholding. To convert an image from RGB to HLS, I use the OpenCV function cv2.cvtColor(im, cv2.COLOR_RGB2HLS). After some testing, the saturation channel (S channel) performed best at extracting lane lines, but I combine it with hue thresholding to get more degrees of freedom. The following code demonstrates how a binary image is generated through S- and H-channel thresholding.

The code is the following:
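
A minimal sketch of the S- and H-channel thresholding; the threshold ranges are illustrative assumptions:

```python
import cv2
import numpy as np

def color_threshold(img, s_thresh=(170, 255), h_thresh=(15, 100)):
    """Binary image from S- and H-channel thresholds in HLS space."""
    hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
    h_channel = hls[:, :, 0]
    s_channel = hls[:, :, 2]

    binary = np.zeros_like(s_channel)
    binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1]) &
           (h_channel >= h_thresh[0]) & (h_channel <= h_thresh[1])] = 1
    return binary
```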

Here is an example of applying color thresholding:

By combining the gradient and color thresholds, we gain more degrees of freedom to filter out the unwanted background:
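
Combining the two binary images is a logical OR; this snippet assumes the two functions sketched above:

```python
gradient_binary = gradient_thresholds(undistorted)
color_binary = color_threshold(undistorted)

# A pixel survives if either the gradient or the color threshold keeps it
combined = np.zeros_like(gradient_binary)
combined[(gradient_binary == 1) | (color_binary == 1)] = 1
```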

To further improve the result, I performed masking on the thresholded image using the following code:
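
A sketch of a region-of-interest mask; the trapezoid vertex fractions are illustrative assumptions (as the discussion below notes, the real vertices are fixed values):

```python
import cv2
import numpy as np

def region_of_interest(binary):
    """Keep only the pixels inside a trapezoid covering the road ahead."""
    h, w = binary.shape[:2]
    vertices = np.array([[(w * 0.10, h), (w * 0.45, h * 0.60),
                          (w * 0.55, h * 0.60), (w * 0.95, h)]],
                        dtype=np.int32)
    mask = np.zeros_like(binary)
    cv2.fillPoly(mask, vertices, 1)
    return cv2.bitwise_and(binary, mask)
```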

Here is an example of a processed image using the combined thresholding technique and image masking:

Perspective Transform

The code for my perspective transform includes a function called perspective_transform(image), as shown below:
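
A sketch of the function; the src and dst points below are expressed as fractions of the image size, and the specific fractions are assumptions rather than the values used in the original code:

```python
import cv2
import numpy as np

def perspective_transform(image):
    """Warp the road region to a top-down (bird's-eye) view."""
    h, w = image.shape[:2]
    src = np.float32([(w * 0.45, h * 0.64), (w * 0.55, h * 0.64),
                      (w * 0.88, h), (w * 0.12, h)])
    dst = np.float32([(w * 0.25, 0), (w * 0.75, 0),
                      (w * 0.75, h), (w * 0.25, h)])
    M = cv2.getPerspectiveTransform(src, dst)
    Minv = cv2.getPerspectiveTransform(dst, src)  # used later to warp back
    warped = cv2.warpPerspective(image, M, (w, h), flags=cv2.INTER_LINEAR)
    return warped, M, Minv
```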

To make the code generalize to cameras with different resolutions, here is the chart of source and destination points expressed with respect to the image size (in this project, width = 1280 and height = 720).

I verified that my perspective transform was working as expected by drawing the src and dst points onto a test image and its warped counterpart and checking that the lane lines appear parallel in the warped image.

Lane Line Detection

The next step is to fit each lane line with a second-order polynomial. Because the lane lines are close to vertical in the warped view, x is fitted as a function of y: x = ay² + by + c. The fitting process is nothing more than finding the correct coefficients of the polynomial. Before fitting the lane lines, we need to process the raw images with the functions defined above and output thresholded, masked, perspective-transformed binary images. We name these “warped images”. Here is an example of a warped image:

One approach to detecting lane lines is to count the nonzero pixels in each column of the warped binary image, which gives a histogram along the x axis:
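
A one-line sketch; summing only the lower half of the image (a common choice, and an assumption here) focuses on the part of the lane closest to the car:

```python
histogram = np.sum(warped[warped.shape[0] // 2:, :], axis=0)
```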

The location of the highest peak in each half of the histogram is approximately where the corresponding lane line is located. From here, we can further improve the result using a sliding-window technique. In this example, I created nine windows for each lane line, detected the line at the bottom of the image, and moved upward using the following code:
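
A condensed sketch of the sliding-window search; the margin and minpix parameters are illustrative assumptions:

```python
import numpy as np

def sliding_window_fit(warped, nwindows=9, margin=100, minpix=50):
    """Find lane-line pixels with sliding windows and fit x = a*y^2 + b*y + c."""
    histogram = np.sum(warped[warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    leftx_current = np.argmax(histogram[:midpoint])
    rightx_current = np.argmax(histogram[midpoint:]) + midpoint

    nonzeroy, nonzerox = warped.nonzero()
    window_height = warped.shape[0] // nwindows
    left_inds, right_inds = [], []

    for window in range(nwindows):
        y_low = warped.shape[0] - (window + 1) * window_height
        y_high = warped.shape[0] - window * window_height

        good_left = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                     (nonzerox >= leftx_current - margin) &
                     (nonzerox < leftx_current + margin)).nonzero()[0]
        good_right = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                      (nonzerox >= rightx_current - margin) &
                      (nonzerox < rightx_current + margin)).nonzero()[0]
        left_inds.append(good_left)
        right_inds.append(good_right)

        # Re-center the next window on the mean x of the pixels just found
        if len(good_left) > minpix:
            leftx_current = int(np.mean(nonzerox[good_left]))
        if len(good_right) > minpix:
            rightx_current = int(np.mean(nonzerox[good_right]))

    left_inds = np.concatenate(left_inds)
    right_inds = np.concatenate(right_inds)
    left_fit = np.polyfit(nonzeroy[left_inds], nonzerox[left_inds], 2)
    right_fit = np.polyfit(nonzeroy[right_inds], nonzerox[right_inds], 2)
    return left_fit, right_fit
```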

Here is the final result:

Note that for video, we can speed up line fitting by reusing the coefficients obtained from the previous frame. The idea is to search around the previously fitted lane line within a given margin, so we avoid searching the entire image. Here is the code I used: https://gist.github.com/ef17b8a276b23fbad737656cbdac63e8
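
The gist holds the author's version; a minimal sketch of the idea, with an assumed margin, looks like this:

```python
import numpy as np

def search_around_fit(warped, left_fit, right_fit, margin=100):
    """Keep only pixels within `margin` of each previously fitted curve,
    then refit both polynomials."""
    nonzeroy, nonzerox = warped.nonzero()
    left_x = np.polyval(left_fit, nonzeroy)
    right_x = np.polyval(right_fit, nonzeroy)

    left_inds = np.abs(nonzerox - left_x) < margin
    right_inds = np.abs(nonzerox - right_x) < margin

    new_left = np.polyfit(nonzeroy[left_inds], nonzerox[left_inds], 2)
    new_right = np.polyfit(nonzeroy[right_inds], nonzerox[right_inds], 2)
    return new_left, new_right
```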

Lane Lines Curvature and Car Position Estimation

Now we have the polynomial coefficients that fit both the left and the right lane lines. We want to extract more useful information, such as the road curvature and how far the car is from the center of the lane. To calculate the curvature of a fit x = ay² + by + c at a point y, I used the following equation:

R = (1 + (2ay + b)²)^(3/2) / |2a|

To calculate the relative position of the car from the center, I evaluated both lane-line fits at the bottom of the image (y = 720 in this case) and compared the midpoint between the two lines with the image center pixel (1280 / 2 = 640 in this case) to measure the offset.

The code is here:
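
A sketch of the curvature and offset computation. The meters-per-pixel conversion factors below are values commonly used for this dataset, assumed here rather than taken from the original code:

```python
import numpy as np

YM_PER_PIX = 30 / 720   # meters per pixel in the y dimension (assumed)
XM_PER_PIX = 3.7 / 700  # meters per pixel in the x dimension (assumed)

def curvature_and_offset(left_fit, right_fit, height=720, width=1280):
    """Return the radius of curvature (m) and the car's offset from
    the lane center (m, positive = right of center)."""
    # Refit in world space: scale the pixel-space fits into meters
    y = np.linspace(0, height - 1, height)
    left_x, right_x = np.polyval(left_fit, y), np.polyval(right_fit, y)
    left_m = np.polyfit(y * YM_PER_PIX, left_x * XM_PER_PIX, 2)
    right_m = np.polyfit(y * YM_PER_PIX, right_x * XM_PER_PIX, 2)

    # Evaluate R = (1 + (2ay + b)^2)^(3/2) / |2a| at the bottom of the image
    y_eval = (height - 1) * YM_PER_PIX
    curvatures = []
    for a, b, _ in (left_m, right_m):
        curvatures.append((1 + (2 * a * y_eval + b) ** 2) ** 1.5 / abs(2 * a))
    curvature = np.mean(curvatures)

    # Offset: compare the lane midpoint at the bottom with the image center
    lane_center = (np.polyval(left_fit, height - 1) +
                   np.polyval(right_fit, height - 1)) / 2
    offset = (width / 2 - lane_center) * XM_PER_PIX
    return curvature, offset
```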

Final Result

After finishing all the code, here is the final result for a test image:

Pipeline (video)

Here’s the final video:

Discussion

My approach to this project yields a really nice result on the project video. However, it fails to detect lane lines correctly in the challenge videos, where the road curves sharply. I believe my algorithm fails in those situations because of the fixed masking vertices; that is, I do not change the region of interest as the road changes. For future improvement, I think the mask vertices should change based on the calculated lane-line curvature. In addition, having a second camera could also help, since we could then use feature matching to detect lane lines more accurately.


Originally published at gist.github.com.