Road Lane Detection For Autonomous Driving

Pamoda Dilranga
Rootcode AI
Dec 22, 2020
Image courtesy pr.kia.com

Why Road Lane Detection?

Road lane detection is a fundamental capability of autonomous driving vehicles. It plays a significant role in path planning, braking control, and steering. Even vehicles without autonomous functions use lane detection to notify the driver to correct the car’s position if it starts to drift out of its lane, helping to prevent accidents.

How Did We Implement Lane Detection Using the Hough Line Method?

The Hough transform is a feature extraction technique used in image analysis and computer vision. The purpose of the technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. This voting procedure is carried out in a parameter space, from which object candidates are obtained as local maxima in a so-called accumulator space that is explicitly constructed by the algorithm for computing the Hough transform.

Hough-line-based lane detection is a purely mathematical way to implement lane detection. For this, we need a clear picture or video feed of the road.

Since fully coloured RGB images take a lot of computational power to process, we first convert the full-colour image into a grayscale (black-and-white) image, which is much cheaper to work with.
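A minimal sketch of this step with OpenCV (the file name road.jpg is just a placeholder):

```python
import cv2

# Load a road image (placeholder path) and convert it to a single channel.
image = cv2.imread("road.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
```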

Another notable problem in lane tracking is noise: spurious sharp changes in intensity can be picked up as false edges, which confuses edge detection algorithms. So we need to smooth the image by reducing its noise, which can be done using a Gaussian filter. A Gaussian filter uses a kernel to replace each pixel value with a weighted average of its neighbouring pixels, which makes the image smoother.
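Continuing the sketch above (the 5×5 kernel size is a common choice, not a requirement):

```python
import cv2

# Smooth the grayscale image from the previous snippet with a 5x5
# Gaussian kernel to suppress noise before edge detection.
blur = cv2.GaussianBlur(gray, (5, 5), 0)
```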

Road lines can be detected as edges inside the picture. To detect edges, we need to measure the change of brightness across adjacent pixels; this change is known as the gradient. For example, in the image given below, the white line is much brighter than its adjacent pixels.

So the gradient between the black region and the white region is high, which means it can be detected as an edge. We used the Canny edge detector to perform this step: it measures the change in intensity between a given pixel and its adjacent pixels. This way we can detect the edges in our grayscale picture.
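Continuing the sketch (the thresholds 50 and 150 are common starting values, not tuned for any particular camera):

```python
import cv2

# Detect edges in the blurred image from the previous snippet: pixels with
# a strong intensity gradient are kept as edges, and weaker edges survive
# only if they connect to strong ones.
edges = cv2.Canny(blur, 50, 150)
```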

Looking at the above picture, we can see there are a lot of edges all over the frame. So we need to select the region of pixels that we are interested in, mask out the other parts of the image, and process only the needed part. We use three points to create a triangle that isolates the part of the picture we want to focus on.
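A sketch of the masking step, continuing from the edge image above (the three vertex coordinates are illustrative and depend on the camera placement):

```python
import cv2
import numpy as np

# Build a triangular mask over the road area and keep only the edges
# inside it. `edges` comes from the previous snippet.
height, width = edges.shape
triangle = np.array([[(50, height), (width - 50, height),
                      (width // 2, int(height * 0.6))]], dtype=np.int32)

mask = np.zeros_like(edges)
cv2.fillPoly(mask, triangle, 255)
roi_edges = cv2.bitwise_and(edges, mask)
```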

Now we need to use Hough space to detect lines inside the selected area. Mathematically, every (non-vertical) line can be represented by the equation y = mx + b. In Cartesian space, we use x and y coordinates to describe points and lines. In Hough space, we instead use m (the slope) and b (the intercept) as the axes, so each individual point in Hough space denotes a whole line in 2-dimensional Cartesian space.

As an example, the graphs below depict the line y = 2x + 2 in Cartesian space and in Hough space, where it becomes the single point (m, b) = (2, 2).

Image courtesy towardsdatascience.com

So, as shown in the picture above, the green line in Cartesian space appears as a single point in Hough space, while the red and orange points in Cartesian space appear as lines in Hough space.

If two of these lines intersect at a point in Hough space, it means that the two Cartesian points they represent lie on a single Cartesian line, and that intersection point gives the line’s m and b.
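A tiny numerical sketch of this duality (the two example pixels and the slope range are assumed purely for illustration):

```python
import numpy as np

# Two example edge pixels that both lie on the Cartesian line y = 2x + 2.
p1 = (1, 4)   # 4 = 2*1 + 2
p2 = (3, 8)   # 8 = 2*3 + 2

# Each pixel (x, y) maps to the Hough-space line b = y - m*x.
m = np.linspace(-5, 5, 1001)
b1 = p1[1] - m * p1[0]
b2 = p2[1] - m * p2[0]

# The two Hough lines intersect where b1 == b2; that (m, b) is exactly
# the Cartesian line passing through both pixels.
i = np.argmin(np.abs(b1 - b2))
print(m[i], b1[i])   # ~2.0 2.0, i.e. y = 2x + 2
```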

So the edge-detected image we created earlier consists of a series of white points. We can build a Hough space from these points, which will look something like this:

In the above example, only five lines are shown, representing five pixel points in the image; a real Hough space will be much more complex. In this image we can see that there are four intersection points, and each one means that the corresponding pixels lie on a common line in the picture. So, to figure out which candidates are clearly visible lines and which are not lines at all, we can use a voting system. First, we split the Hough space into a grid, something like this:

If we look at the cell where the red point is located, it contains only one intersection point. But the cell where the purple, pink, and gray dots are located contains three intersection points, so that cell will get more votes than the other cells.

For example, the Hough space cells above would receive votes like this:

After that, the algorithm picks out the cells with the most votes and decides that each of them corresponds to a line. This gives us the lines in our selected area.
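A rough sketch of this voting idea in (m, b) space, using a handful of assumed edge pixels and an arbitrary grid resolution:

```python
import numpy as np

# A few assumed edge pixels; four of them lie on y = 2x + 2, one is noise.
points = [(1, 4), (3, 8), (5, 12), (7, 16), (2, 11)]

# Discretise Hough space into a grid of (m, b) cells and let every pixel
# vote for the cells its Hough line passes through.
m_values = np.linspace(-5, 5, 41)          # candidate slopes
b_edges = np.arange(-20.5, 21.0, 1.0)      # intercept bin edges
votes = np.zeros((len(m_values), len(b_edges) - 1), dtype=int)

for x, y in points:
    for i, m in enumerate(m_values):
        b = y - m * x
        j = np.digitize(b, b_edges) - 1
        if 0 <= j < votes.shape[1]:
            votes[i, j] += 1

# The most-voted cell approximates the slope and intercept of the
# dominant line formed by the collinear pixels.
i, j = np.unravel_index(np.argmax(votes), votes.shape)
print(m_values[i], (b_edges[j] + b_edges[j + 1]) / 2)   # ~2.0 2.0
```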

But there is a problem with using a Hough space that has m and b as its axes.

If we take a vertical line, every point on it has the same x value, so the change of x between any two points is zero. The slope is

m = (change of y) / (change of x)

Since the change of x is zero, m tends to infinity (m → ∞).

So a Hough space that uses m and b as axes cannot represent vertical lines. For this kind of scenario, we use a polar coordinate representation of a line instead, which uses this equation:

ρ = x cos θ + y sin θ

In this equation, ρ is the perpendicular distance from the origin to the line, and θ is the angle that this perpendicular makes with the x-axis. The x cos θ term is the contribution of the point’s x coordinate to that distance, and y sin θ is the contribution of its y coordinate.

So we draw a chart using ρ and θ as axes.

This graph still follows the same rules as before: each image point now becomes a curve in (ρ, θ) space, and wherever two or more of these curves intersect, the corresponding pixels lie on a common line. So we use the same voting system as before.
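A from-scratch sketch of this voting in (ρ, θ) space, assuming a binary edge image named roi_edges from the earlier steps (OpenCV’s cv2.HoughLines implements the same accumulator internally):

```python
import numpy as np

def hough_lines_rho_theta(edge_img, rho_res=1.0, theta_res=np.pi / 180, threshold=100):
    """Vote in (rho, theta) space and return the cells whose vote count
    exceeds the threshold, as (rho, theta) pairs."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))            # largest possible |rho|
    thetas = np.arange(0, np.pi, theta_res)
    rhos = np.arange(-diag, diag + 1, rho_res)
    accumulator = np.zeros((len(rhos), len(thetas)), dtype=int)

    ys, xs = np.nonzero(edge_img)                  # coordinates of edge pixels
    for x, y in zip(xs, ys):
        # Each edge pixel votes along its curve rho = x*cos(theta) + y*sin(theta).
        rho_vals = x * np.cos(thetas) + y * np.sin(thetas)
        rho_idx = np.round((rho_vals + diag) / rho_res).astype(int)
        accumulator[rho_idx, np.arange(len(thetas))] += 1

    peaks = np.argwhere(accumulator > threshold)
    return [(rhos[i], thetas[j]) for i, j in peaks]

# Example usage on the masked edge image from the earlier sketch:
# lines = hough_lines_rho_theta(roi_edges, threshold=150)
```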

In real-life scenarios there will be far more points and much finer grid cells. Taking the most-voted cells and drawing the corresponding lines onto the image gives us the line locations. Overlaying those lines on the original image gives the final outcome, something like this:
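Putting all of the steps above together with OpenCV’s built-in probabilistic Hough transform, cv2.HoughLinesP, which votes on edge points and returns finite line segments directly; the file name, thresholds, and gap parameters below are illustrative, not tuned values:

```python
import cv2
import numpy as np

# Illustrative end-to-end sketch of the pipeline described above.
image = cv2.imread("road.jpg")                       # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blur, 50, 150)

# Triangular region of interest (vertices depend on the camera view).
h, w = edges.shape
mask = np.zeros_like(edges)
roi_corners = np.array([[(50, h), (w - 50, h), (w // 2, int(h * 0.6))]], np.int32)
cv2.fillPoly(mask, roi_corners, 255)
roi = cv2.bitwise_and(edges, mask)

# Probabilistic Hough transform: returns detected line segments.
lines = cv2.HoughLinesP(roi, rho=2, theta=np.pi / 180, threshold=100,
                        minLineLength=40, maxLineGap=5)

# Draw the segments and overlay them on the original frame.
overlay = np.zeros_like(image)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(overlay, (x1, y1), (x2, y2), (0, 255, 0), 5)
result = cv2.addWeighted(image, 0.8, overlay, 1.0, 0)
cv2.imwrite("lanes.jpg", result)
```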

When Things Get Real.

In practice, Hough-line-based lane tracking is rarely used in modern autonomous vehicles because of its computational cost and lack of speed. Hough-line algorithms also struggle to detect curved lane markings, which are very common on real roads. Instead of the Hough space approach, modern autonomous vehicles use deep learning techniques such as instance segmentation and detection, running on multiple embedded cameras mounted on the vehicle, to perform lane tracking. Another reason is that the accuracy of Hough-line tracking depends heavily on the weather, the vehicle’s speed, and the condition of the lane markings, and if the camera is obstructed by dirt or snow, the lane detection system has to be deactivated. On the other hand, the Hough-line method represents a purely algorithmic, mathematical computer vision approach that was used for lane tracking in the past and serves as a foundation for other tracking-based computer vision algorithms.

For more information visit us on rootcode.ai
