Published in ml-concepts.com
Smart Traffic Management System

1. Introduction

1.1 Problem Statement

The relentless growth in the number of vehicles has increased traffic congestion in urban centres. Karachi, Pakistan's largest and most economically productive city, suffers this congestion daily: it wastes fuel, increases pollution and transportation costs, and costs people time, while poor management also contributes to a rising number of accidents.

As a beta-ranked global city, Karachi is Pakistan's premier industrial and financial centre.

Traffic jams directly affect the national economy. Roughly half of Karachi's commuters rely on public transport and the other half on private vehicles such as cars and motorbikes; because of daily congestion, working people reach their destinations late and cannot work efficiently in the time available.

PSL matches are a prominent example: people spend hours stuck in traffic, and emergency vehicles, including ambulances and fire brigades, often cannot reach their destinations in time, with serious consequences. Traffic is currently controlled manually by traffic police, and while they do their best, manual control falls short in both efficiency and safety.

Some of the main reasons for traffic congestion are:

  • Road violations
  • Non-enforcement of parking laws
  • Large vehicles on narrow roads
  • Obstacles on the road
  • Extra waiting time at signals even when there is no jam
  • Slow vehicles
  • Accidents

The annual cost of traffic congestion in Karachi is 688 million USD. As the population has grown, roads have been widened and additional bridges built, but traffic has only grown in proportion. With the factors driving congestion multiplying, an efficient management technique has become the need of the hour. The data on registered vehicles from 2005 to 2015 also shows exponential growth, which must be managed efficiently to avoid a far worse situation [1].

Figure 1: Pakistan’s Registered Motor Vehicles from 2005 to 2015

The number of public-transport vehicles has likewise grown exponentially from 1990 to 2017 [2].

Figure 2: Pakistan’s Registered Motor Cabs and Taxis from 1990 to 2017

This issue should be resolved soon to prevent a sudden breakdown of the traffic system. Signals should be regulated by machine intelligence rather than solely by the traffic police's judgement: lights should change after evaluating the immediate situation, keeping everything organized and orderly.

1.2 Literature Review

The world's population is currently rising, and many people prefer driving their own vehicles, such as cars and motorbikes, to taking public buses. This influx of vehicles easily leads to traffic congestion. With the proper hardware and implementation, algorithms can make the problem far easier to handle.

The manual method of handling traffic relies on humans and traffic lights. It is inefficient: human error always exists, and the human eye cannot count the vehicles on each road or judge how far the traffic stretches down a particular lane. Traffic lights are also often set on fixed timers, which is inefficient too, since they change at set intervals rather than adapting their timing to the number of vehicles.

This Smart Traffic Management project uses hardware such as a Raspberry Pi and cameras together with object detection algorithms to handle traffic congestion far more efficiently, while remaining cost-effective.

There have been other attempts to tackle traffic congestion, most with their own custom algorithms and software. For example, the model proposed by Anurag Kanungo, Ayush Sharma and Chetan Singla [1] uses MATLAB and C++; while it achieves good results when comparing dynamic traffic-light timing to hardcoded intervals, it is still limited by its toolchain, as newer languages and libraries have since made such systems far easier to build.

When it comes to algorithms, several can be used and implemented, some of them in combination. Machine learning is a popular choice for building systems that perform such calculations by themselves; it employs algorithms such as neural networks, decision trees, regression and Apriori [3]. With a proper neural network setup and algorithm, it becomes possible to identify the different types of vehicles commonly seen on the roads. The only limit after that is how far the virtual eye of a camera can see.

Our project uses the Python programming language together with the YOLO algorithm. It makes use of several methods to reach an accurate object-detection result, such as background subtraction to focus on the main objects and ignore background “noise”, and Convolutional Neural Networks that perform feature extraction to differentiate and specify certain types of objects [2].
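To illustrate the background-subtraction idea, here is a toy frame-differencing sketch on plain Python lists. The real pipeline would use OpenCV (e.g. `cv2.absdiff` or `cv2.createBackgroundSubtractorMOG2`); this only shows the logic of keeping pixels that changed relative to a background frame:

```python
# Toy frame-differencing sketch: pixels that differ from the background
# by more than a threshold are treated as foreground (moving vehicles).
# Frames are small grayscale images represented as nested lists.

def foreground_mask(background, frame, threshold=30):
    """Return a binary mask: 1 where the frame differs from the background."""
    return [
        [1 if abs(f - b) > threshold else 0 for f, b in zip(f_row, b_row)]
        for f_row, b_row in zip(frame, background)
    ]

background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 200, 10],   # a bright "vehicle" pixel appeared
              [10, 190, 10]]

mask = foreground_mask(background, frame)
print(mask)  # -> [[0, 1, 0], [0, 1, 0]]
```

Everything that survives the mask is then a candidate region for the detector, which is where the CNN-based feature extraction takes over.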

1.2.1 References

[1] A. Kanungo, A. Sharma and C. Singla, "Smart Traffic Lights Switching and Traffic Density Calculation using Video Processing".

[2] "OBJECT DETECTION AND IDENTIFICATION: A Project Report", https://www.researchgate.net/publication/337464355_OBJECT_DETECTION_AND_IDENTIFICATION_A_Project_Report (last accessed 30 January 2021).

[3] S. K. Chadalawada, "Real-Time Object Detection and Recognition Using Deep Learning Methods", https://www.diva-portal.org/smash/get/diva2:1414033/FULLTEXT02 (last accessed 2 February 2021).

[4] J. Redmon, S. Divvala, R. Girshick and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection", https://arxiv.org/pdf/1506.02640.pdf (last accessed 15 January 2021).

1.3 Aims and Objectives

The main purpose of this project is to provide better traffic conditions and a much smoother flow in place of constant congestion. It uses machine learning together with image-processing methods such as image and video framing, which assist in calculating how long each light should stay on given the current traffic load. A camera records video and generates image frames at intervals to sense the traffic; through image framing, the latest frame of the feed is taken as input and evaluated by the device. Using machine learning, specifically YOLOv4, vehicles are detected, classified and counted. By observing multiple lanes and taking multiple inputs, the system gathers enough data to calculate the time requirement for each light on its respective lane. Since it is an intelligent system, it repeats these instructions and produces a new delay for each new set of video and image input, so the timings keep updating and prevent extra congestion building up on other lanes.

The main aim of the Smart Traffic Management System is to reduce waiting time and accidents. The system is divided into several parts according to their functionality.

The first part is the camera, the essential component that monitors the lanes and records real-time video. The second is the controller, the brain of the system, which issues commands to the traffic lights based on its calculations. The calculation is based on the number of vehicles in each lane, from which the green-light time, and the corresponding red times, are generated.
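The controller's count-to-delay mapping can be sketched as a simple threshold rule. The numbers below are purely illustrative; the actual thresholds in the project come from its decision tree:

```python
def green_delay(vehicle_count):
    """Map a lane's vehicle count to a green-light delay in seconds.
    Thresholds are hypothetical, for illustration only."""
    if vehicle_count == 0:
        return 0         # skip an empty lane entirely
    elif vehicle_count <= 5:
        return 15        # light traffic: short green
    elif vehicle_count <= 15:
        return 30        # moderate traffic
    else:
        return 60        # heavy congestion gets the longest green

print(green_delay(3))    # -> 15
print(green_delay(20))   # -> 60
```

A stepped rule like this is easy to tune lane by lane, which is one reason a decision tree is a natural fit for the controller.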

2. Methodology

2.1 Hardware Details

2.1.1 Raspberry Pi and Camera

We used Python as the programming language for the project, with the Raspberry Pi as our core technology. The Pi controls the cameras and LED lights according to the commands it is given. It captures real-time video and processes the latest image to calculate the density of vehicles in the lane using machine learning. With that information, the traffic lights are controlled with the help of a decision tree.

2.1.2 LED Lights and Resistors

The LED lights are deployed on the four lanes as traffic signal lights. They display the system's decision about which lane is open for traffic and which are not; they are the visible output of the system.

A 220 Ω resistor is used in series with each LED [4]. The Pi's GPIO pins supply 3.3 V across the LED and resistor together; with a typical LED forward drop of about 2.2 V, roughly 1.1 V falls across the resistor.
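The resulting LED current follows from Ohm's law. A quick check, assuming the ~2.2 V forward drop mentioned above:

```python
# Ohm's law check for the LED current-limiting resistor.
# Assumes a 3.3 V GPIO pin and a ~2.2 V LED forward voltage drop.
supply_v      = 3.3
led_drop_v    = 2.2
resistor_ohms = 220

resistor_drop_v = supply_v - led_drop_v            # ~1.1 V across the resistor
current_ma = resistor_drop_v / resistor_ohms * 1000

print(f"{current_ma:.1f} mA")  # -> 5.0 mA
```

Around 5 mA is comfortably within what a single Raspberry Pi GPIO pin can source, while still lighting the LED visibly.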

2.2 Software Details

2.2.1 YOLOv4 and OpenCV

Beyond plain Python, our project also uses the language's ecosystem to run several programs vital to development. First and foremost, we use Python to run the object detection model YOLO (You Only Look Once). YOLO is very versatile: it is a Convolutional Neural Network capable of object detection, and it accomplishes this much faster than most other object detection algorithms.

YOLO also makes use of COCO (Common Objects in Context), a large-scale dataset for object detection, segmentation and captioning, which we will use for our model.

YOLO is well suited to object detection and tracking: on a small scale it can be used for facial recognition, and on a larger scale it can detect and identify many different types of objects. Much of its strength in this project, however, comes from OpenCV, a computer-vision library without which the project would not function properly. OpenCV's main applications include facial recognition and object tracking, and it also provides machine-learning functionality that opens it up to even more uses.

2.2.2 LabelImg

To build the database for training our object detection program, we use an external tool called LabelImg, which lets us annotate images to mark the objects in them. Once an object in an image is marked, a .txt file is created containing the information used to draw the bounding boxes that identify the objects and vehicles in the final output. Alternatively, an image dataset can be downloaded from Google's Open Images Dataset if we do not want to annotate images manually.
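A YOLO-format annotation is one line per object: a class index followed by the box centre and size, all normalized to the image dimensions. A small parser (a sketch; the example class index and image size are hypothetical) converts such a line back to pixel coordinates:

```python
def yolo_to_pixels(line, img_w, img_h):
    """Convert one YOLO-format annotation line to a pixel bounding box.
    Format: '<class> <x_center> <y_center> <width> <height>' (all normalized).
    Returns (class_index, (left, top, width, height)) in pixels."""
    cls, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w,  h  = float(w) * img_w,  float(h) * img_h
    left = int(xc - w / 2)
    top  = int(yc - h / 2)
    return int(cls), (left, top, int(w), int(h))

# e.g. class 0, a centred box on a hypothetical 640x480 frame
cls, box = yolo_to_pixels("0 0.5 0.5 0.25 0.25", 640, 480)
print(cls, box)  # -> 0 (240, 180, 160, 120)
```

The normalized format is what makes the same .txt file usable regardless of the resolution the images are later resized to for training.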

This data is exported as a CSV file into a spreadsheet, both to organize it and to keep a backup just in case. It is then loaded into a Google Colaboratory notebook and used to train our model to identify the correct objects. Colaboratory requires the proper libraries to be imported and the proper paths to be specified in order to create and print the data required for training our model.

3. Implementation

3.1 Details of Hardware Implementation

3.1.1 Raspberry Pi and Cameras

USB cameras are mounted facing each lane. They capture real-time video of their lanes; on receiving an instruction, the camera for the respective lane turns on and image framing starts. All cameras are connected to the Raspberry Pi, which receives the video as input, processes the image frames and calculates the number of vehicles in the lane. The count is passed to the decision tree, which decides the delay for that lane.

3.1.2 LEDs

The decided time is set as the delay for the green light of the respective lane. While one signal is green, all the others are red, and the next signal prepares itself by completing the whole detection process within the green time of the signal currently lit.
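The round-robin behaviour described here, one green at a time with all others red, can be sketched in plain Python. The lane counts and the per-vehicle timing rule below are illustrative; the real system drives GPIO pins instead of printing:

```python
def signal_cycle(lane_counts, delay_fn):
    """Yield (lane, states, green_delay) for one full rotation.
    Exactly one lane is green at a time; all others are red."""
    n = len(lane_counts)
    for lane in range(n):
        states = ["red"] * n
        states[lane] = "green"
        yield lane, states, delay_fn(lane_counts[lane])

# Hypothetical counts for 4 lanes and a toy rule of 2 s per vehicle
for lane, states, delay in signal_cycle([4, 0, 9, 2], lambda c: 2 * c):
    print(lane, states, delay)
```

On real hardware, each iteration would set the GPIO pins for `states` and then sleep for `delay` seconds while the next lane's frame is captured and counted.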

3.1.3 Structure

Figure 5: Simple layout of connections
Figure 6: Prototype

3.2 Details of Software Implementation

3.2.1 Python for Raspberry Pi

In our case, Python is used to process images and videos.

To do so, we work through Anaconda Navigator, which makes Python development more beginner-friendly. Strictly speaking it is a distribution manager rather than an interpreter, and it is very commonly used because it bundles several tools that aid Python development, such as Jupyter Notebook, Spyder and PyCharm.

3.2.2 LabelImg for Annotation

The annotated dataset in YOLO format is available on Kaggle.

3.2.3 Google Colab for Training

A Google Colab notebook is used to write the training code. An empty path list is created to record the locations of all the images we have taken and stored on our device. Additional files are created to store extra data, such as an image-data file containing the image paths, text files, and training files such as classes and weights. A separate file holds the information for multiple classes, in case the images we are using contain more than one type of object.

The data in the text files can be used to create a CSV file, both to organize everything properly and to serve as a backup in case of file corruption. The raw data cannot be used by YOLO directly, so the Colab notebook is also where it is coded and formatted into the form needed to train the model for image and object detection.
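Writing the collected paths and labels out as a CSV backup takes only a few lines of standard-library Python. The file names, column names and records below are assumptions for the sketch; an in-memory buffer stands in for the real file on disk:

```python
import csv
import io

# Hypothetical annotation records: (image path, class label)
records = [
    ("images/frame_001.jpg", "car"),
    ("images/frame_002.jpg", "bus"),
]

# In the project this would be a real file, e.g. open("dataset.csv", "w", newline="");
# an in-memory buffer keeps the sketch self-contained.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["image_path", "label"])   # header row
writer.writerows(records)

print(buf.getvalue())
```

Keeping the backup in plain CSV means the dataset can be reloaded into Colab, a spreadsheet, or pandas without any format conversion.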

3.2.4 Algorithm 1 — Overall system

1: Check the delay of the signal that is already green.

2: Set the time for all the upcoming processes to be completed in the green light time.

3: Capture real-time video and perform image framing.

4: Call YOLO's weights and .cfg files that were generated from training the dataset.

5: Load image and determine the spatial dimensions for the bounding boxes.

6: Initialize the information of classes, object confidence and bounding boxes.

7: Check the confidence of each prediction in the image and filter out the weaker ones.

8: if confidence is > 0.5 then

Set x and y coordinates.

9: else

discard the prediction and go to step 12

10: end if

11: Update the list with new information about classes, confidence and boxes.

12: Repeat for each object detected in the image.

13: Apply suppression to remove any inaccurate boxes.

14: Create bounding boxes for objects detected.

15: Count vehicles, check the density and decide the delay from the decision tree.

16: Set the delay and turn on the green light, and red for the others.

17: Go to Step 1
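The detection steps above (confidence filtering, suppression of overlapping boxes, counting) can be sketched in plain Python. In the project itself these are handled by OpenCV's `cv2.dnn` module, so treat this as an illustration of the logic rather than the production code; the detections and thresholds are made up:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (left, top, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def count_vehicles(detections, conf_threshold=0.5, iou_threshold=0.4):
    """Filter weak predictions (step 8), suppress overlapping boxes (step 13)
    and return the vehicle count (step 15). Each detection is (box, confidence)."""
    strong = [d for d in detections if d[1] > conf_threshold]
    strong.sort(key=lambda d: d[1], reverse=True)    # keep highest-confidence first
    kept = []
    for box, conf in strong:
        if all(iou(box, k) < iou_threshold for k in kept):
            kept.append(box)
    return len(kept)

# Two overlapping detections of the same car, plus one weak prediction
detections = [
    ((100, 100, 50, 40), 0.9),
    ((105, 102, 50, 40), 0.8),   # overlaps the first -> suppressed
    ((300, 100, 50, 40), 0.3),   # below 0.5 -> filtered out
]
print(count_vehicles(detections))  # -> 1
```

The returned count is what feeds the decision tree in steps 15 and 16 to set the green-light delay.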

3.2.5 Flow Chart

Figure 8: Greenlight delay flow chart

4. Project Characterization

4.1 Results

The main aim of this project is what is achieved in the end.
The system does not allot the lights fixed timings; instead, the delays are based on the actual traffic situation. The system outputs an optimized time for each light so that traffic flows efficiently. Flow efficiency is measured mainly in terms of time wasted, with accidents and roadblocks (caused by the public or by obstacles) also taken into account. The system should reduce accidents, though it cannot do anything about roadblocks. An added benefit is surveillance: since the system comprises cameras, roads and their conditions can be monitored easily, and if footage is backed up to the cloud, details of incidents can be reviewed as images from the Pi's memory.

Figure 9: (a) The count of classified vehicles; (b) results that will be used to operate the traffic lights

4.2 Analysis

Our analysis and research show that the most common cause of traffic congestion [3] is traffic lights. Our smart system runs an intelligent algorithm that removes the wasted time and ultimately reduces the number of vehicles in each lane, which results in less traffic.

Figure 10: The real-time factors causing the traffic congestion [3]

4.3 Conclusion

The smart traffic management system is designed to improve traffic conditions. It is an effective method for monitoring and managing traffic intelligently. The algorithm is built from modern technologies trained for today's conditions. The system is a contribution to society, helping those who suffer daily from traffic congestion: it reduces waiting time and also CO2 emissions.

4.4 Future Recommendations

Our project can reduce the bulk of the traffic problem through image processing and machine learning, and we have some recommendations for making it more advanced in future:

  • Since the system records real-time video, a backend video log of traffic could be kept and used to detect the number plates of all vehicles, or of vehicles involved in mishaps such as accidents or burglaries.
  • Priority-based traffic-jam clearance for emergency vehicles such as ambulances and fire brigades.
  • The cameras can double as surveillance, recording all road activity to help prevent accidents.
  • Since our system is a prototype, it could be deployed further, on real roads.

REFERENCES

[1] Pakistan Motor Vehicles Registered, https://www.ceicdata.com/en/indicator/pakistan/motor-vehicle-registered (last accessed 6 May 2021).

[2] Pakistan Motor Vehicles Registered, https://www.ceicdata.com/en/pakistan/motor-vehicle-registered (last accessed 6 May 2021).

[3] W. H. Syed, A. Yasar, D. Janssens and G. Wets, "Analyzing the Real-Time Factors: Which Causing the Traffic Congestions and Proposing the Solution for Pakistani City".

[4] P. K. Bhaskar and S.-P. Yong, "Image Processing Based Vehicle Detection and Tracking Method".

[5] W. Wang, "Reach on Sobel Operator for Vehicle Recognition", in Proc. IEEE International Joint Conference on Artificial Intelligence 2009, July 2009, California, USA, pp. 448-451.

[6] A. Ajmal and I. M. Hussain, "Vehicle Detection Using Morphological Image Processing Technique".

You can find the code to run this project on my GitHub.

For any suggestions or research ideas, hit me up on LinkedIn, Twitter, or email me at tubazamansiddiqui@gmail.com.

Thank you.
