Lane Line Detection — Self Driving Car

Bantwale Diress
Jul 22, 2017 · 6 min read

I am really excited to have been accepted to the Self-Driving Car Engineer Nanodegree program offered by Udacity (https://www.udacity.com/drive). I was accepted for the May 25, 2017 cohort; however, due to inconvenience and lack of time, I deferred to the July 13, 2017 cohort. At first I was hesitant to join this program, thinking that I did not have enough computer science background or time to devote. Given that I have completed the Data Analyst Nanodegree and am halfway through the Machine Learning Engineer Nanodegree program, I believe that I have the necessary background and, above all, the fascination and motivation to join the Self-Driving Car Nanodegree.

I am a big fan of many of the Udacity programs offered as an alternative to traditional education, as they focus on an up-to-date, tech-skill-based education. They offer some of the best curricula in computer science, data science, AI, and self-driving cars, among others. Most of the programs are taught by highly skilled professionals, industry experts, and university professors; otherwise you could not get such quality education unless you were admitted to an Ivy League university, which is unfortunately not possible for many people like me due to a combination of factors. I firmly believe in Sebastian Thrun's (the founder of Udacity) and Udacity's motto: "Democratizing Education".

My point here is not to write about everything Udacity does, but about what I did in the first project of the Self-Driving Car Nanodegree program: lane line detection. Humans have many sense organs to perceive what is going on in their vicinity, and a human driver mainly uses his or her eyes to see the lane lines, traffic signs, and signals needed to make decisions on the road. Self-driving cars, however, don't have eyes to see lane lines; instead, they depend on lane line detection technologies.

The objective of this project is to write code that identifies lane lines on the road. First, lane detection is performed on individual images of roads; next, the code is applied to a video stream, which is a series of images, to detect lane lines.

The images below describe what is expected from the project: the first image shows the line segments detected in the image, and the second shows the single line segments connected/extrapolated/averaged from those segments. The method and other details on how to achieve this are explained in the following paragraphs.

Line segments detected from an Image
Extrapolated/connected/averaged line segment

The pipeline

1. Convert the image to grayscale

2. Perform Gaussian blurring (smoothing) on the grayscale image

3. Apply Canny edge detection to the blurred image

4. Define a region of interest and mask out the rest of the image

5. Apply the Hough transform

The open source computer vision library (OpenCV) was used to implement most of the above pipeline. The first step in the lane detection pipeline is to convert the color image, with dimensions W×H×3 (where W is the width, H the height, and 3 the color (RGB) depth of the image), into a grayscale image of dimensions W×H×1. This makes it possible to identify edges based on differences in brightness between neighboring pixels (i.e., by computing the gradient, (df/dx, df/dy)).

grayscale = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

The image below is the result of this conversion.

Gray scale image

We don't need to compute the gradient ourselves: the OpenCV function Canny() identifies the edges in the gray image for us. Before performing Canny edge detection, however, we smooth the gray image with OpenCV's GaussianBlur() function, which convolves the image with an n×n Gaussian kernel, replacing each pixel with a weighted average of its neighbors (see http://docs.opencv.org/3.1.0/d4/d13/tutorial_py_filtering.html). Smoothing suppresses noise, so in the subsequent edge detection only places with a strong gradient stand out, which makes the detection simpler.

gray_blurr = cv2.GaussianBlur(grayscale, (kernel_size, kernel_size), 0)

edges = cv2.Canny(gray_blurr, low_threshold, high_threshold)

The resulting image from the above process is shown below

edges detected by Canny edge detection

The low_threshold and high_threshold arguments to Canny() are thresholds on the gradient magnitude: pixels with a gradient above high_threshold are accepted as edges, pixels below low_threshold are rejected, and pixels in between are kept only if they are connected to a strong edge.

After Canny edge detection is performed, the undesired area is masked out by specifying vertices on the image. This helps us focus lane line detection on the relevant part of the image. The following three lines of Python code do this:

mask = np.zeros_like(image)

cv2.fillPoly(mask, vertices, color)

masked_image = cv2.bitwise_and(image, mask)

The first line creates an image of the same size as the original with all pixel values set to zero (i.e., a black image). The second line fills the region of the mask defined by the vertices with the color specified in the color parameter. The last line combines the mask with the original image, keeping only the pixels inside the region.

The last steps are the most important in detecting lanes. The Hough transform takes in the edge-detected image above and converts the edge points into many line segments.

lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)

By tuning the parameters of the Hough transform, we can identify the lines/lanes of interest (image below).

However, the Hough transform alone cannot distinguish the left and right lines of a lane. To do that, we need a helper function that takes all the lines detected by the Hough transform, identifies the left and right lane lines based on their slopes, and finally extrapolates each line (you can see the code here: https://github.com/kulu80/Finding_Lane_lines_SDCND). This is the most time-consuming and crucial step in the lane detection process. It involves trying to fit the left and right lane lines well using different line-fitting techniques (linear regression, etc.). The final output of this lane line detection process is shown in the following images.
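The linked repository has the full version; a simplified sketch of the idea, assuming the Hough output has been flattened to (x1, y1, x2, y2) tuples, might look like this (the slope cutoff of 0.3 and the fit with np.polyfit are illustrative choices, not necessarily the project's):

```python
import numpy as np

def average_lane_lines(lines, y_bottom, y_top):
    """Split Hough segments into left/right by slope sign, fit one line
    per side, and extrapolate it between y_top and y_bottom."""
    left, right = [], []
    for x1, y1, x2, y2 in lines:
        if x2 == x1:
            continue                          # skip vertical segments
        slope = (y2 - y1) / (x2 - x1)
        if slope < -0.3:                      # negative slope: left lane
            left += [(x1, y1), (x2, y2)]      # (image y grows downward)
        elif slope > 0.3:                     # positive slope: right lane
            right += [(x1, y1), (x2, y2)]
    result = {}
    for name, pts in (("left", left), ("right", right)):
        if pts:
            xs, ys = zip(*pts)
            # Fit x = m*y + b so the line can be evaluated at any y
            m, b = np.polyfit(ys, xs, 1)
            result[name] = ((int(round(m * y_bottom + b)), y_bottom),
                            (int(round(m * y_top + b)), y_top))
    return result
```

The near-horizontal segments filtered out by the slope cutoff are usually noise (shadows, road texture) rather than lane markings.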

This code is then applied to a video stream, which is a series of images, to detect lane lines. The following two videos show the lane line detection on video: the first on a white right lane and the second on a yellow left lane.

Summary

One of the most important components of a self-driving car is lane line detection, as the car uses it to drive forward, turn left or right, and change direction. Lane detection should therefore be made robust across different situations and conditions. All the lane line detection in this project was performed on road images mostly taken in daytime, with the lane colors clearly visible in most of the images. This is only the first project in the Nanodegree, and I have already acquired many new skills and ideas about self-driving car technology. I hope the coming projects will teach more advanced techniques that take into account the many other factors, such as road conditions and environmental conditions (weather), that affect self-driving.
