Udacity Self-Driving Car Engineer Nanodegree Project 1: Finding Lane Lines on the Road

Ashish Malhotra
4 min read · Oct 8, 2017


Udacity launched a Self-Driving Car Engineer Nanodegree program last year, and I was recently accepted into the October cohort. I believe self-driving cars have massive potential to improve our transportation system, and I am super excited to be part of this program. This is the first in a series of posts describing my journey through Udacity’s Self-Driving Car Engineer Nanodegree program.

The goal of this project is to identify lane lines on the road from a dashcam video. For example, an input video looks like this:

Since a video is just a sequence of images, I started by creating a pipeline that can detect lane lines in a single image:

Pipeline

The pipeline consisted of 7 steps:

1. Convert the image to grayscale.

2. Apply a Gaussian blur to the image with a (5, 5) kernel.

3. Run Canny edge detection on the blurred image.

4. Apply a region-of-interest mask to the detected edges to limit where we look for lanes. Since the camera is mounted in a fixed position on the car, we limit edge detection to a trapezoidal area with vertices (0, 540), (450, 325), (550, 325), (960, 540).

5. Apply a Hough line transform on the masked edges to detect line segments, using the following params: rho = 2, theta = pi/180, threshold = 50, min_line_len = 50, max_line_gap = 150.

6. Extrapolate the separate segments detected by the Hough transform so they span the full height of the mask. The segments are split by slope sign: positive slopes are averaged to produce the slope of the right lane line, and negative slopes are averaged for the left lane line. Intercepts for the left and right lane lines are averaged the same way. Outlier rejection removes any slope more than 1 standard deviation from the mean.

7. Overlay these lines on the original image by taking a weighted sum.

Repeating the above steps on each frame in a video I got the following output:

The above pipeline does a fairly good job of estimating the lane lines; however, the detected lanes are very jittery.
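The extrapolation in step 6 can be sketched like this. The helper name and exact structure are my assumptions; only the slope/intercept averaging and the one-standard-deviation outlier rule come from the description above:

```python
import numpy as np

def average_lane(segments):
    """Average Hough segments belonging to one lane into a single line.

    `segments` is a list of (x1, y1, x2, y2) tuples. Slopes more than one
    standard deviation from the mean are rejected as outliers, then the
    surviving line is extrapolated to the top and bottom of the ROI mask.
    """
    slopes, intercepts = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # skip vertical segments (undefined slope)
        m = (y2 - y1) / (x2 - x1)
        slopes.append(m)
        intercepts.append(y1 - m * x1)
    slopes = np.array(slopes)
    intercepts = np.array(intercepts)
    # Reject slopes more than 1 standard deviation from the mean
    keep = np.abs(slopes - slopes.mean()) <= slopes.std()
    m = slopes[keep].mean()
    b = intercepts[keep].mean()
    # Extrapolate to the bottom (y=540) and top (y=325) of the mask
    y_bottom, y_top = 540, 325
    return ((int((y_bottom - b) / m), y_bottom),
            (int((y_top - b) / m), y_top))

# Two collinear segments on y = -0.7x + 600, plus one shallow outlier
segments = [(100, 530, 200, 460), (300, 390, 400, 320), (100, 500, 200, 480)]
lane = average_lane(segments)
```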

Smoothing

To smooth the output of the pipeline and reduce jitter, I modified the hough_lines method by adding a friction term to the calculation of the average slopes and intercepts:

new_slope = prev_slope * 0.9 + current_slope * 0.1

The friction coefficient of 0.9 was chosen by testing out various values and choosing the one that performed best for the input.
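This friction update is an exponential moving average across frames. A minimal sketch (the class and method names are my own, not from the project code):

```python
class LaneSmoother:
    """Exponentially smooth a lane parameter (slope or intercept) over frames.

    A friction of 0.9 keeps 90% of the previous estimate and blends in 10%
    of the current frame's measurement, which damps frame-to-frame jitter.
    """

    def __init__(self, friction=0.9):
        self.friction = friction
        self.value = None  # no history before the first frame

    def update(self, current):
        if self.value is None:
            self.value = current  # first frame: take the measurement as-is
        else:
            self.value = self.value * self.friction + current * (1 - self.friction)
        return self.value

# Usage: one smoother per lane parameter
smoother = LaneSmoother(friction=0.9)
smoother.update(1.0)              # first frame: 1.0
smoothed = smoother.update(2.0)   # 1.0 * 0.9 + 2.0 * 0.1
```

A lower friction value tracks the measurements more responsively but passes through more jitter; 0.9 trades a little lag for stability.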

After smoothing, the output looks like this:

Shortcomings

  • The current pipeline can only detect straight lane lines. If the lanes curve, the hough_lines method cannot capture their curvature.
  • Another shortcoming is that the Hough transform detects noisy lines when the color of the road surface changes. This is evident when running the hough_lines detection on the provided challenge video.

Improvements

  • A possible improvement would be changing the hough_lines method to fit second-order polynomials (ax² + bx + c). This would allow us to model curving lanes as parabolas.
  • Another improvement would be pre-processing the grayscale images at the beginning of the pipeline to eliminate differences in road color. This could be done with a combination of erosion and dilation.
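The first improvement could be sketched with NumPy's polyfit. The data below is synthetic, and fitting x as a function of y (since lane lines are near-vertical in image coordinates) is an assumption on my part:

```python
import numpy as np

# Hypothetical edge-pixel coordinates along a curving lane. We treat y as
# the independent variable because lane lines are near-vertical in image
# space, so x = f(y) stays single-valued.
ys = np.array([540.0, 480.0, 420.0, 360.0, 325.0])
xs = 0.001 * (ys - 325.0) ** 2 + 450.0  # synthetic parabolic lane

# Fit x = a*y^2 + b*y + c (second-order polynomial instead of a line)
a, b, c = np.polyfit(ys, xs, 2)
fitted = np.polyval([a, b, c], ys)
```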

Code

Full code and implementation details can be found here.


Ashish Malhotra

Software Engineer. Enrolled in Udacity Self-Driving Car Engineer Nanodegree.