Chapter 1: Self-Driving Car, Finding Lane Lines on the Road_Udacity Project [Python Tutorial]

Mouhcine Snoussi
4 min read · Apr 2, 2020

--

Overview

When we drive, we use our eyes to decide where to go. The lines on the road that show us where the lanes are act as our constant reference for where to steer the vehicle. Naturally, one of the first things we would like to do in developing a self-driving car is to automatically detect lane lines using an algorithm.

The goal of this project is to make a pipeline that finds lane lines on the road using Python and OpenCV. See an example:

The pipeline will be tested on some images and videos provided by Udacity. The following assumptions are made:

  • The camera always has the same position with respect to the road
  • There is always a visible white or yellow line on the road
  • There are no vehicles in front of us
  • We consider a highway scenario with good weather conditions

You can find my project here: Finding Lane Line Project — GitHub

Reflection

1. Pipeline description

Udacity provided sample images of 960 x 540 pixels to train our pipeline against. Below are two of the provided images.

1.1 Build a Lane Finding Pipeline

  • We start with color selection: a color filter is applied to suppress non-yellow and non-white colors. Pixels above the thresholds are retained, while pixels below them are blacked out.
  • The original image is converted to grayscale, so we work with a single channel.
  • Before running the Canny detector, Gaussian smoothing is applied, which suppresses noise and spurious gradients by averaging. The Canny detector then finds the edges in the image.
  • A left and right trapezoidal Region Of Interest (ROI) is defined based on the image size. Since the front-facing camera is mounted in a fixed position, we assume the lane lines will always appear in the same region of the image.
  • The Hough transform is used to detect lines in the image.
  • Finally, we average/extrapolate the result of the Hough transform and draw the two lane lines onto the image. You can use the function below.
import numpy as np

def average_slope_intercept(image, lines):
    """Average the Hough segments into one left and one right lane line."""
    left_fit = []
    right_fit = []
    for line in lines:
        x1, y1, x2, y2 = line.reshape(4)
        # Fit a first-degree polynomial (slope, intercept) to the segment
        parameters = np.polyfit((x1, x2), (y1, y2), 1)
        slope, intercept = parameters
        # In image coordinates the y-axis points down, so the left lane
        # line has a negative slope and the right lane a positive one
        if slope < 0:
            left_fit.append((slope, intercept))
        else:
            right_fit.append((slope, intercept))

    left_fit_average = np.average(left_fit, axis=0)
    right_fit_average = np.average(right_fit, axis=0)
    left_line = make_coordinates(image, left_fit_average)
    right_line = make_coordinates(image, right_fit_average)
    return np.array([left_line, right_line])
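
The function above relies on a `make_coordinates` helper that is not shown. A minimal sketch of what it could look like, assuming each lane line is drawn from the bottom of the frame up to 3/5 of its height (a choice that should match the ROI):

```python
import numpy as np

def make_coordinates(image, line_parameters):
    """Convert a (slope, intercept) pair into pixel endpoints [x1, y1, x2, y2].

    The line is drawn from the bottom of the image up to 3/5 of its height;
    this fraction is an assumption and should be tuned to the ROI used.
    """
    slope, intercept = line_parameters
    y1 = image.shape[0]           # bottom of the image
    y2 = int(y1 * 3 / 5)          # draw up to 60% of the height
    # Invert y = slope * x + intercept to recover x for each y
    x1 = int((y1 - intercept) / slope)
    x2 = int((y2 - intercept) / slope)
    return np.array([x1, y1, x2, y2])
```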

Results:

import os
import cv2
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

frames = os.listdir("test_images/")
for i in frames:
    image = mpimg.imread("test_images/" + i)
    canny_image = canny(image)
    cropped_image = roi(canny_image)
    lines = cv2.HoughLinesP(cropped_image, 2, np.pi / 180, 50, np.array([]),
                            minLineLength=100, maxLineGap=160)
    averaged_lines = average_slope_intercept(image, lines)
    line_image = display_lines(image, averaged_lines)
    combo_image = cv2.addWeighted(image, 0.8, line_image, 1., 0.)

    plt.figure(figsize=(25, 15))
    plt.subplot(141), plt.imshow(canny_image)
    plt.subplot(142), plt.imshow(cropped_image)
    plt.subplot(143), plt.imshow(line_image)
    plt.subplot(144), plt.imshow(combo_image)
    plt.show()

Videos

Setup

Three videos were also provided to run our pipeline against:

  • a 10-second video with only white lane lines
  • a 27-second video with a continuous yellow lane line on the left and a dotted white lane line on the right
  • a challenge video where the road is slightly curved and the resolution of the frames is higher

Here are some results:

2. Potential shortcomings with the current pipeline

This approach might not work properly:

  • if the camera is placed at a different position
  • if other vehicles in front occlude the view
  • if one or more lane lines are missing
  • in different weather and light conditions (fog, rain, or night)

3. Possible improvements

Some possible improvements:

  • Perform the color selection in HSV space instead of in RGB
  • Update the ROI mask dynamically
  • Perform a segmentation of the road
  • Use a better filter to smooth the current estimate using the previous ones
  • If a line is not detected, estimate the current slope from the previous estimates and/or the other line's detection
  • Use a moving-edges tracker for the continuous lines
