Advanced Techniques for Lane Finding (Self Driving Cars)
Advanced concepts like gradient thresholds, color spaces, and thresholding for self-driving cars
Advanced Lane Detection project of the Self Driving Car Engineer Nanodegree, which uses advanced image processing to detect lanes irrespective of road texture, brightness, contrast, and curvature. Image warping and a sliding-window approach are used to find and plot the lane lines, and the real curvature of the lane and the vehicle's position with respect to the lane center are also determined.
Introduction to Gradient Threshold
We can use Canny edge detection to find pixels that are likely to be part of a line in an image. Canny is great at finding all possible lines, but for lane detection it also gives us many edges on scenery, cars, and other objects that we end up discarding, as shown in the image below:
Realistically, with lane finding, we know ahead of time that the lines we are looking for are close to vertical.
So, how can we take advantage of that fact?
Well, we can use gradients in a smarter way to detect steep edges that are more likely to be lanes in the first place. With Canny, we are actually taking a derivative with respect to x and y in the process of finding edges.
Introduction to Color Spaces
We convert our road images to grayscale before detecting edges, but in making this conversion we lose valuable color information. For example, in some images, whole regions almost disappear after the grayscale conversion. In this article, we'll investigate color spaces, which give us more information about an image than grayscale alone. And we'll see, for example, that by switching to another color space we can recover a section that disappeared in grayscale. Now, let's see how that works.
Color Spaces and Thresholding
For the road images, we know that they're all composed of red, green, and blue values, or RGB. In the previous medium article, we used a combination of masking and color thresholds on these RGB values to pick out bright white lane pixels.
This lane detection can work well alongside gradient detection, which relies on grayscale intensity measurements. However, RGB thresholding doesn't work that well in images with varying light conditions, or when lanes are a different color such as yellow. We can break any road image down into its separate RGB components, which are often called channels.
The brighter pixels indicate higher values of red, green, or blue, respectively. There are many other ways to represent the colors in an image besides red, green, and blue values. These different representations are often called color spaces. RGB is the red, green, blue color space. We can think of this as a 3D space where any color can be represented by a 3D coordinate of R, G, and B values, as shown in the image below:
There’s also HSV color space, for hue, saturation, and value. And there’s HLS, for hue, lightness, and saturation. These are some of the most commonly used color spaces in image analysis. For both of these, H has a range from 0 to 179 for degrees around the cylindrical color space.
The Steps Involved
- Computing the camera calibration matrix and distortion coefficients from a set of 9x6 chessboard images.
- Applying distortion correction to raw images.
- Using color transforms, gradients, etc., to create a thresholded binary image.
- Applying a perspective transform to rectify the binary image ("bird's-eye view") and obtain a warped image.
- Detecting lane pixels and fitting a polynomial to find the lane boundary.
- Determining the real curvature of the lane and vehicle position with respect to center.
- Warping the detected lane boundaries back onto the original image and outputting a visual display of the lane boundaries together with numerical estimates of lane curvature and vehicle position.
The first step in the pipeline is to calibrate the camera and undistort the images. Some images of a 9x6 chessboard are given, and they are distorted. Our task is to find the chessboard corners and plot them. For this, after loading the images, we calibrate the camera. OpenCV functions like findChessboardCorners(), drawChessboardCorners() and calibrateCamera() help us do this, as shown in the image below:
Thresholded Binary Image
Detecting edges around trees or cars is okay because these lines can be mostly filtered out by applying a mask to the image, essentially cropping out the area outside of the lane lines. It's most important that we reliably detect different colors of lane lines under varying degrees of daylight and shadow, so that our self-driving car does not become blind in bright daylight or under the shadow of a tree.
I performed the gradient threshold and the color threshold individually, and then created a binary combination of the two images to map out where either the color or the gradient threshold was met; this is called combined_binary in the code.
Perspective Transform
A perspective transform gives a bird's-eye view of the lane images. We want to look at the lanes from the top to get a clear picture of their curves. Implementing the perspective transform was the most interesting part for me. I also made a function warper(img, src, dst) that takes in the binary image and returns the warped image using cv2.getPerspectiveTransform(src, dst) and cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_NEAREST). The results are shown below:
Sliding Window — Fitting a Polynomial
Once I had the perspective transform of the binary warped images, I first used the sliding-window method to plot the lane lines and fitted a polynomial using the fit_polynomial(img) function.
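A condensed sketch of the sliding-window search follows; the window counts, margins, and structure follow the common layout of this approach and are not necessarily the exact project code:

```python
import numpy as np

def fit_polynomial(binary_warped, nwindows=9, margin=100, minpix=50):
    # Histogram of the bottom half: the two peaks give the lane base positions
    histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    leftx_current = np.argmax(histogram[:midpoint])
    rightx_current = np.argmax(histogram[midpoint:]) + midpoint

    nonzeroy, nonzerox = binary_warped.nonzero()
    window_height = binary_warped.shape[0] // nwindows
    left_lane_inds, right_lane_inds = [], []

    # Step a window up the image, re-centering on the mean x of the hot pixels
    for window in range(nwindows):
        win_y_low = binary_warped.shape[0] - (window + 1) * window_height
        win_y_high = binary_warped.shape[0] - window * window_height
        in_rows = (nonzeroy >= win_y_low) & (nonzeroy < win_y_high)
        good_left = (in_rows & (nonzerox >= leftx_current - margin) &
                     (nonzerox < leftx_current + margin)).nonzero()[0]
        good_right = (in_rows & (nonzerox >= rightx_current - margin) &
                      (nonzerox < rightx_current + margin)).nonzero()[0]
        left_lane_inds.append(good_left)
        right_lane_inds.append(good_right)
        if len(good_left) > minpix:
            leftx_current = int(np.mean(nonzerox[good_left]))
        if len(good_right) > minpix:
            rightx_current = int(np.mean(nonzerox[good_right]))

    left_lane_inds = np.concatenate(left_lane_inds)
    right_lane_inds = np.concatenate(right_lane_inds)
    # Fit x = A*y^2 + B*y + C through each lane's pixels (x as a function
    # of y, since lane lines are near-vertical in the warped view)
    left_fit = np.polyfit(nonzeroy[left_lane_inds], nonzerox[left_lane_inds], 2)
    right_fit = np.polyfit(nonzeroy[right_lane_inds], nonzerox[right_lane_inds], 2)
    return left_fit, right_fit
```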
Search from Prior Technique
Later on, I used the search-from-prior technique and fitted a more accurate polynomial through my perspective-transformed images using the search_around_poly(image) function. Comments in the code indicate each and every step.
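The idea is to skip the blind window search and look only within a margin of the previous frame's fit. A sketch, with an illustrative signature and margin:

```python
import numpy as np

def search_around_poly(binary_warped, left_fit, right_fit, margin=50):
    """Search within +/- margin of the previous frame's polynomials and refit.
    left_fit/right_fit are [A, B, C] coefficients of x = A*y^2 + B*y + C."""
    nonzeroy, nonzerox = binary_warped.nonzero()

    def refit(fit):
        # x-position of the previous fit at each hot pixel's row
        center = fit[0] * nonzeroy**2 + fit[1] * nonzeroy + fit[2]
        inds = (nonzerox > center - margin) & (nonzerox < center + margin)
        return np.polyfit(nonzeroy[inds], nonzerox[inds], 2)

    return refit(left_fit), refit(right_fit)
```

Because lane position changes little between video frames, this is both faster and less likely to latch onto stray pixels than a fresh sliding-window search.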
Radius of Curvature and Central Offset
For calculating the radius of curvature and the position of the vehicle with respect to the lane center, I made a function called radius_and_offset(warped_image), which returns curvature_string and offset. I used left_lane_inds and right_lane_inds for this task, along with the function fit_poly(image.shape, leftx, lefty, rightx, righty), which returns left_fitx, right_fitx, and ploty to calculate the real radius of curvature and the offset.
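The curvature follows from the fitted polynomial x = A*y^2 + B*y + C via R = (1 + (2Ay + B)^2)^(3/2) / |2A|, evaluated at the bottom of the image (closest to the car) after converting pixels to metres. A sketch, with assumed pixel-to-metre scales typical for a 720 x 1280 warped image:

```python
import numpy as np

# Assumed conversion factors (illustrative, depend on the warp geometry)
YM_PER_PIX = 30 / 720   # metres per pixel in the y direction
XM_PER_PIX = 3.7 / 700  # metres per pixel in the x direction

def measure_curvature(ploty, leftx, rightx):
    """Refit each lane in world space and evaluate
    R = (1 + (2Ay + B)^2)^(3/2) / |2A| at the bottom of the image."""
    y_eval = np.max(ploty) * YM_PER_PIX
    radii = []
    for x in (leftx, rightx):
        fit = np.polyfit(ploty * YM_PER_PIX, x * XM_PER_PIX, 2)
        radii.append((1 + (2 * fit[0] * y_eval + fit[1])**2)**1.5
                     / abs(2 * fit[0]))
    return radii  # [left_radius_m, right_radius_m]

def center_offset(image_width, left_x_bottom, right_x_bottom):
    # Offset of the image center (the camera) from the lane center, in metres
    lane_center = (left_x_bottom + right_x_bottom) / 2
    return (image_width / 2 - lane_center) * XM_PER_PIX
```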
After implementing all the steps, it's time to create the pipeline for a single image. I created a function process_image() as the main pipeline function, and put the radius of curvature and center offset on the final image using the cv2.putText() function. The result is shown below:
The code and project video are shown below for reference:
1. Computer Vision Fundamentals — Self Driving Cars (Finding Lane Lines)
This medium article focuses on computer vision techniques for self-driving cars.
2. Introduction to Neural Networks For Self Driving Cars (Foundational Concepts Part — 1)
Foundational concepts in the fields of Machine Learning and Deep Neural Networks
3. Introduction to Neural Networks For Self Driving Cars (Foundational Concepts Part — 2)
Foundational concepts in the fields of Machine Learning, Deep Neural Networks and Self Driving Cars
4. Introduction to Deep Learning for Self Driving Cars (Part — 1)
Foundational Concepts in the field of Deep Learning and Machine Learning
5. Introduction to Deep Learning for Self Driving Cars (Part — 2)
Foundational Concepts in the field of Deep Learning and Machine Learning
6. Introduction to Convolutional Neural Networks for Self Driving Cars
Introductory concepts in the field of Image Recognition using Convolutional Neural Networks
7. Introduction to Keras and Transfer Learning for Self Driving Cars
Introduction to Keras and the use of Transfer Learning in the development of Deep Learning architectures
8. Computer Vision and Camera Calibration for Self Driving Cars
With this, we have come to the end of this article. Thanks for reading and following along. Hope you loved it!