VEHICLE WARNING INDICATOR SYSTEM

Ashish Gusain
5 min read · Jun 27, 2020


The present work focuses on developing a deep-learning and computer-vision based warning system for vehicles that processes video frames obtained from the dash camera. A warning or a suggestion is displayed for the driver, so that the driver can act quickly to avoid a hazard. The work done in developing such a system includes detecting lanes, predicting day or night, recognizing traffic signals, and estimating the possibility of collision. The complete work is present at:

https://github.com/AshishGusain17/Vehicle-Warning-Indicator-System

1.) Traffic Light Recognition

First, all the traffic lights are detected with an SSD detector and bounding-box thresholding is applied, where every region outside the traffic-light boxes is set to black pixel values. Figure 1 is the original frame from a video footage and Figure 2 shows the frame with everything thresholded to black pixels other than the traffic lights. Next, colour thresholding is applied to the entire frame, and all the contours or patches of red colour found inside the traffic-light bounding boxes are taken into consideration. Among these contours, the one with the maximum area is considered, and if that area is above a certain threshold value, a red signal is predicted and a warning to stop the vehicle is shown to the driver. Figure 3 shows a case where 2 traffic lights were found: both contours were analyzed and the one with the larger radius was considered. Further, when no contours are found, a suggestion to move is displayed. Figure 4 shows the moment the red signal is lost and the suggestion to move forward is shown.

For the implementation, visit: https://github.com/AshishGusain17/Vehicle-Warning-Indicator-System/blob/HEAD/utils/signalDetection_utils.py
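To make the pipeline concrete, here is a minimal OpenCV sketch of the red-signal check described above. It assumes the SSD detector has already returned the traffic-light boxes as (x, y, w, h) tuples; the function name and the HSV and area thresholds are illustrative, not the repository's exact values.

```python
import cv2
import numpy as np

def red_signal_detected(frame, light_boxes, min_area=40):
    """Return True if a red signal is visible inside any detected traffic-light box.

    frame       -- BGR frame from the dash camera
    light_boxes -- list of (x, y, w, h) boxes returned by the SSD detector (assumed format)
    min_area    -- contour-area threshold below which red blobs are ignored (illustrative)
    """
    # Black out everything except the traffic-light regions.
    masked = np.zeros_like(frame)
    for (x, y, w, h) in light_boxes:
        masked[y:y + h, x:x + w] = frame[y:y + h, x:x + w]

    # Red sits at both ends of the hue range in HSV, so two ranges are combined.
    hsv = cv2.cvtColor(masked, cv2.COLOR_BGR2HSV)
    lower = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255]))
    upper = cv2.inRange(hsv, np.array([160, 100, 100]), np.array([180, 255, 255]))
    red_mask = cv2.bitwise_or(lower, upper)

    # Keep only the largest red patch and compare it against the area threshold.
    contours, _ = cv2.findContours(red_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    largest = max(contours, key=cv2.contourArea)
    return cv2.contourArea(largest) > min_area
```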

2.) Lane Detection

Figure 1 is the original frame obtained from a video footage with distinct lanes visible. It is first converted to grayscale to increase the processing speed. Next, some noise is removed by applying a Gaussian filter, and then Canny edge detection is applied. Figure 2 shows the image after Canny edge detection, a binary image containing only edges. Since only a part of the frame is useful for lane detection, a mask is applied over the image. Figure 3 shows the mask, or bird's-eye region, that will be applied over the image to keep only the useful edges. The mask in Figure 3 is applied to the image in Figure 2, and the resultant image is Figure 4, with only the brighter edge pixels remaining. At last, the Hough-lines transform is used to get all the lines in the binary image. These lines are verified to be lanes by certain aspects like their length, line gaps and, most importantly, slope. Figure 5 depicts the lanes that are found in the original image.

For the implementation, visit: https://github.com/AshishGusain17/Vehicle-Warning-Indicator-System/blob/HEAD/utils/lane_detection_utils.py
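The same pipeline can be sketched in a few lines of OpenCV. The Canny limits, the trapezoidal region of interest, and the Hough and slope thresholds below are illustrative defaults rather than the values used in the repository.

```python
import cv2
import numpy as np

def detect_lanes(frame):
    """Return the Hough line segments that survive the lane checks (illustrative thresholds)."""
    h, w = frame.shape[:2]

    # Grayscale + Gaussian blur to cut noise, then Canny for a binary edge map.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Trapezoidal region-of-interest mask: keep only the road area ahead.
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (int(0.45 * w), int(0.6 * h)),
                     (int(0.55 * w), int(0.6 * h)), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    masked_edges = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform, then drop near-horizontal segments:
    # genuine lane markings should have a noticeable slope in image coordinates.
    lines = cv2.HoughLinesP(masked_edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=100)
    lanes = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            slope = (y2 - y1) / (x2 - x1 + 1e-6)
            if abs(slope) > 0.5:
                lanes.append((x1, y1, x2, y2))
    return lanes
```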

3.) Tail Light Detection

Before working on any frame, we should know whether it is day or night, since object detection alone would clearly fail in the dark. So the first 10 frames are taken and, using black-colour thresholding, the system detects whether it is day or night. At night, we instead detect the tail lights of the vehicles ahead, which glow whenever the brakes are applied, for the driver's convenience. Figure 1 shows the green-coloured contours of all the tail lights of the vehicles in front of us. The "night" label shown at the top-right corner is identified using the initial frames, and that is what triggers the tail-light detection. Many times these contours are hard to interpret, as they may extend unevenly in any direction, so convex hulls are drawn for a clearer representation. Figure 2 shows much clearer results, with 2 different convex hulls around the tail lights that are much easier to visualize.
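As a rough illustration of that initial day/night check, the sketch below votes across the first few frames using the fraction of near-black pixels; the function name and threshold values are assumptions, not taken from the repository.

```python
import cv2
import numpy as np

def is_night(initial_frames, dark_value=50, dark_fraction=0.6):
    """Classify day vs. night from the first few frames of the video.

    A frame counts as "dark" when the share of near-black pixels exceeds
    dark_fraction; the video is treated as night when most of the sampled
    frames are dark. All thresholds are illustrative.
    """
    dark_votes = 0
    for frame in initial_frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if np.mean(gray < dark_value) > dark_fraction:
            dark_votes += 1
    return dark_votes > len(initial_frames) // 2
```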

For the implementation, visit: https://github.com/AshishGusain17/Vehicle-Warning-Indicator-System/blob/HEAD/utils/break_light_utils.py
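And here is a hedged sketch of the tail-light step itself: red blobs are thresholded in HSV, small ones are discarded, and convex hulls are drawn around the rest, as in Figures 1 and 2. The colour ranges and the minimum blob area are illustrative values.

```python
import cv2
import numpy as np

def tail_light_hulls(frame, min_area=30):
    """Find bright red tail-light blobs in a night frame and return their convex hulls."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower = cv2.inRange(hsv, np.array([0, 70, 120]), np.array([10, 255, 255]))
    upper = cv2.inRange(hsv, np.array([160, 70, 120]), np.array([180, 255, 255]))
    red_mask = cv2.bitwise_or(lower, upper)

    # Discard tiny blobs, then wrap each remaining contour in a convex hull.
    contours, _ = cv2.findContours(red_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hulls = [cv2.convexHull(c) for c in contours if cv2.contourArea(c) > min_area]

    # Draw the hulls in green for visualization, as in the figures above.
    cv2.drawContours(frame, hulls, -1, (0, 255, 0), 2)
    return hulls
```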

4.) Collision Estimation

Several points are taken into account while estimating a collision. Firstly, object detection of the pedestrians and vehicles present ahead is a must. If the area of the bounding box of a vehicle is more than 30 percent of the frame area, then we can say it is close to us. To check whether it is in our lane or not, we need to check the x and y coordinates of the box. If the x coordinates lie in the range 0.2–0.8 of the frame width, we are heading towards a collision. Similarly, if a pedestrian is inside the bird's-eye view, i.e. the same view that we have seen in the mask while detecting lanes, a warning is issued to the driver to avoid any mishap.

For the implementation, visit: https://github.com/AshishGusain17/Vehicle-Warning-Indicator-System/blob/HEAD/utils/estimate_collide_utils.py
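The heuristic above boils down to two checks per detected box, sketched below. The 30 percent area ratio and the 0.2–0.8 band mirror the description here; the box format, function name and parameter names are assumptions made for illustration.

```python
def collision_warning(box, frame_w, frame_h, area_ratio=0.3, x_lo=0.2, x_hi=0.8):
    """Heuristic collision check for one detected vehicle or pedestrian box.

    box is assumed to be (x, y, w, h) in pixels. A warning is raised when the
    box covers more than area_ratio of the frame area and its horizontal
    centre falls inside our lane band (x_lo..x_hi of the frame width).
    """
    x, y, w, h = box
    close_enough = (w * h) > area_ratio * frame_w * frame_h
    centre_x = (x + w / 2) / frame_w
    in_our_lane = x_lo <= centre_x <= x_hi
    return close_enough and in_our_lane
```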

The entire code is present at: https://github.com/AshishGusain17/Vehicle-Warning-Indicator-System

You can reach me via mail, LinkedIn, or GitHub.
