ANoRC, Autonomous RC
ANoRC is a prototype of a self-driving car that we built for our Capstone Project. Our prototype is an RC car fitted with a stereo camera, built by me and my teammate Harry Karwasra.
INTRODUCTION
Autonomous cars are no longer restricted to the works of science fiction. In the last few years, driverless cars have gone from "maybe possible" to "definitely possible". There are vehicles on the road today with Advanced Driver Assistance Systems (ADAS) that help maintain speed, brake, and operate with limited or no human engagement. With these features come great benefits: increased personal safety, faster and easier commutes, and greater mobility.
WHAT IS A DRIVERLESS CAR? In simple words, a truly driverless vehicle is able to navigate to the desired destination, avoiding all obstacles, without any human involvement. To accomplish this, we use advanced Computer Vision and Artificial Intelligence techniques. These help the vehicle in:
- sensing the environment,
- processing visual data to work out how to avoid obstacles and collisions,
- assisting in parking the vehicle,
- operating the steering and brakes,
- using GPS to track the vehicle's current location and the final destination.
You may have seen self-driving cars on the news and read thousands of articles, but most of those cars still have a human driving. SAE (Society of Automotive Engineers) International has therefore divided autonomous cars into six levels of automation, from Level 0 to Level 5. Refer to the image below:

Now, let's talk about how a self-driving car actually works.
According to Udacity, a driverless car is built on five core components:
- Computer Vision
- Sensor Fusion
- Localization
- Path Planning
- Control

Let’s discuss each of them in detail.
Computer Vision
It is a scientific field which deals with how computers can gain a high-level understanding from digital images or videos, automating tasks that the human visual system can do.
Computer Vision shows us what the world around us looks like. It looks for colours and gradients to identify the lanes and the road. Then we train a deep neural network to draw bounding boxes around obstacles (other vehicles on the road).
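As a toy illustration of the colour cue, here is a minimal numpy sketch (not our actual pipeline) that marks bright pixels as candidate lane markings on a synthetic road image:

```python
import numpy as np

def lane_pixel_mask(rgb, thresh=180):
    """Mark pixels that are bright in all three channels -- a crude
    colour cue for white lane markings on dark asphalt."""
    return np.all(rgb >= thresh, axis=2).astype(np.uint8)

# Synthetic road: grey asphalt with one white lane stripe two pixels wide.
road = np.full((10, 20, 3), 60, dtype=np.uint8)
road[:, 9:11, :] = 250                 # the lane stripe
mask = lane_pixel_mask(road)           # 1 where the stripe is, 0 elsewhere
```

A real pipeline would combine such a colour mask with gradient cues before fitting lane lines.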

Sensor Fusion
It is the combining of data derived from multiple sensors so that the resulting information has less uncertainty and error than any single source. It is one of the most important topics in autonomous driving.
So once we know what the world looks like, the next step is to augment that understanding using other sensors, such as radar and laser (lidar), to get measurements that are difficult for a camera alone to capture. This gives the vehicle a good understanding of exactly where the obstacles are and how fast they are moving.
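The idea of fusing two noisy measurements of the same quantity can be sketched with the standard rule for combining Gaussian estimates; the camera/radar numbers below are made up for illustration:

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two Gaussian estimates of the same quantity (e.g. a camera
    range and a radar range). The fused estimate is a variance-weighted
    average, and its variance is lower than either input's."""
    mean = (var_b * mean_a + var_a * mean_b) / (var_a + var_b)
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return mean, var

# Camera says the lead vehicle is 10.0 m ahead (noisy, variance 4.0);
# radar says 10.6 m (more precise, variance 1.0).
mean, var = fuse(10.0, 4.0, 10.6, 1.0)
```

The fused range lands closer to the more trustworthy radar reading, which is exactly why combining sensors beats any single one.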
Refer to the diagram below, which explains how the car detects objects with the help of Computer Vision and Sensor Fusion.



Localization
Localization is figuring out precisely where we are in that world. With the help of localization, the car can tell its exact location: is it in the middle or the right lane? How far is it from the curb?
Self-driving cars use maps because maps tell them what the world is supposed to look like. Humans easily memorise routes they are familiar with because they know what to expect: where the speed limit changes, where they have to turn, where the intersection is. In the same way, a self-driving car uses maps that tell it where to look.
The car then identifies specific landmarks, such as poles and mailboxes, and measures its distance from each of them individually to estimate its own position.
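As a rough sketch of landmark-based localization, range measurements to landmarks at known map positions can be turned into a position estimate by linearising the circle equations and solving least squares. The landmark coordinates below are invented for illustration:

```python
import numpy as np

def locate(landmarks, distances):
    """Estimate (x, y) from known landmark positions and measured ranges.
    Subtracting the first circle equation from the others removes the
    quadratic terms, leaving a linear least-squares problem."""
    landmarks = np.asarray(landmarks, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = landmarks[0]
    A = 2 * (landmarks[1:] - landmarks[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(landmarks[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical map: a pole, a mailbox, and a sign at known coordinates.
landmarks = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
dists = [np.hypot(*(true_pos - l)) for l in np.asarray(landmarks)]
est = locate(landmarks, dists)     # recovers the car's position
```

Real localizers (e.g. particle filters) also handle noisy ranges and ambiguous landmark associations, but the geometric core is the same.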

Path Planning
So, once we have figured out where exactly we are in the world and what the world looks like, the next step is charting a path through that world to reach our desired destination. It is all about finding a safe and efficient path through the traffic.
The path planner predicts where the other vehicles are going to go and then figures out what path/movements our car should take in response. It then builds a line (a series of waypoints) that you can see in Figure 6. The red car, after analysing the movements of the cars around it, generates a few candidate paths/trajectories and then chooses the best one.
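The "generate candidates, score them, keep the cheapest" loop can be sketched in a few lines. The cost function and the two candidate waypoint series below are toy assumptions, not our planner:

```python
import math

def trajectory_cost(path, obstacles, min_clearance=1.0):
    """Cost of a candidate path: its length, plus a heavy penalty for
    every waypoint that passes too close to an obstacle."""
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    penalty = sum(1000.0 for p in path for obs in obstacles
                  if math.dist(p, obs) < min_clearance)
    return length + penalty

def pick_path(candidates, obstacles):
    """Evaluate every candidate waypoint series and keep the cheapest."""
    return min(candidates, key=lambda p: trajectory_cost(p, obstacles))

# Another vehicle sits 5 m ahead. Going straight hits it; a small lane
# shift is slightly longer but collision-free, so it should win.
obstacles = [(5.0, 0.0)]
straight = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
shifted = [(0.0, 0.0), (5.0, 2.0), (10.0, 0.0)]
best = pick_path([straight, shifted], obstacles)
```

Real planners score many trajectories against predicted obstacle motion, but the structure is this same candidates-plus-cost loop.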

Control
The final step in the pipeline is control. Control means actually turning the steering wheel and hitting the brake or accelerator in order to execute the desired trajectory that we built during path planning.
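One common way to turn trajectory error into steering commands is a PID controller. The sketch below uses toy kinematics and hand-picked gains purely for illustration; it is not our control code:

```python
class PID:
    """Minimal PID controller: converts cross-track error (the car's
    distance from the planned trajectory) into a steering command."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt=0.1):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Steer against the error (hence the minus sign).
        return -(self.kp * error + self.ki * self.integral
                 + self.kd * derivative)

# Toy simulation: the car starts 1 m off the planned line, and each tick
# the steering command nudges it back (position += steering * dt).
pid = PID(kp=1.0, ki=0.01, kd=0.5)
cte = 1.0
for _ in range(100):
    cte += pid.step(cte) * 0.1     # cross-track error shrinks over time
```

The derivative term damps oscillation and the integral term removes steady drift, which is why PID is a common first controller for lane keeping.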

Now, let's briefly define the modules that we will use to build this prototype. They are as follows:
1. Lane Detection :- A pipeline to detect the lanes on the road. Identifying lanes is a crucial step for any driverless vehicle: because there is no driver, the car has to ensure it stays within the lane constraints while driving. For lane detection we use OpenCV image analysis to identify lines (including Canny edge detection and Hough transforms). We also use techniques such as gradient thresholding, distortion correction, colour transforms, and image rectification.
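To give a feel for the Hough transform step, here is a small numpy sketch of the voting scheme that OpenCV's cv2.HoughLines implements under the hood (the real pipeline runs it on a Canny edge map, not a hand-made one):

```python
import numpy as np

def hough_accumulator(edges, n_theta=180):
    """Vote every edge pixel into a (rho, theta) grid. Peaks in the grid
    correspond to straight lines: rho = x*cos(theta) + y*sin(theta)."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))      # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, diag

# Synthetic edge map: a single vertical lane line at x = 5.
edges = np.zeros((20, 20), dtype=np.uint8)
edges[:, 5] = 1
acc, diag = hough_accumulator(edges)
# All 20 edge pixels agree on the bin (rho=5, theta=0): a vertical line.
```

In practice one calls cv2.HoughLinesP directly, but seeing the accumulator makes it clear why noisy edges still vote for the right lane line.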

2. Traffic Sign Classifier :- The main goal of this module is to classify traffic sign images by building a Convolutional Neural Network in TensorFlow. The dataset we will use is the German Traffic Sign Dataset.
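A minimal TensorFlow/Keras network of the kind this module needs might look as follows. The layer sizes here are illustrative assumptions; only the 43-class softmax output is fixed by the German Traffic Sign dataset:

```python
import numpy as np
import tensorflow as tf

# 43 sign classes in the German Traffic Sign dataset; 32x32 RGB inputs.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(43, activation="softmax"),  # one score per class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Untrained forward pass on a dummy batch, just to verify the shapes.
probs = model.predict(np.random.rand(2, 32, 32, 3).astype("float32"),
                      verbose=0)
```

Training would then be a call to model.fit on the dataset's images and labels.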

3. Behavioural Cloning :- A method through which a human's sub-cognitive skills are captured and mimicked by a computer through training. We train a deep network to clone human steering behaviour while the car is being driven. The network takes a frame from the frontal (stereo) camera as input and predicts the steering direction. This can be built using TensorFlow or Keras.
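A behavioural-cloning network differs from the sign classifier mainly in its head: it regresses a single steering value, so it ends in one linear unit trained with mean squared error. A sketch, with assumed layer sizes and input resolution:

```python
import numpy as np
import tensorflow as tf

# Camera frame in, one steering angle out (regression, hence MSE loss).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(66, 200, 3)),
    tf.keras.layers.Rescaling(scale=1 / 255.0, offset=-0.5),  # normalise
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(1),              # predicted steering angle
])
model.compile(optimizer="adam", loss="mse")

# Forward pass on one dummy front-camera frame.
frame = np.random.randint(0, 256, (1, 66, 200, 3)).astype("float32")
steering = model.predict(frame, verbose=0)
```

After training on recorded (frame, steering) pairs, the same predict call runs in the driving loop.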

4. Vehicle Detection :- This module focuses on detecting and tracking nearby / surrounding vehicles. This is done in order to avoid collisions and to control the autonomous car's speed according to how close and how fast the other vehicle is. It can be built using OpenCV, HOG (Histogram of Oriented Gradients) and an SVM (Support Vector Machine). Bounding boxes are drawn around any detected vehicle.
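The HOG part can be sketched in numpy as a single orientation histogram over a patch (real HOG uses a grid of cells plus block normalisation, e.g. skimage.feature.hog); the resulting feature vector is what an SVM such as scikit-learn's LinearSVC would be trained on:

```python
import numpy as np

def hog_features(gray, n_bins=9):
    """Tiny HOG sketch: one gradient-orientation histogram, weighted by
    gradient magnitude and L2-normalised, for a whole image patch."""
    gray = gray.astype(np.float32)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # central differences
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-6)

# A patch with a strong vertical edge: its gradients point horizontally,
# so nearly all the weight lands in the first (0-20 degree) bin.
patch = np.zeros((16, 16))
patch[:, 8:] = 255
feat = hog_features(patch)
```

Car vs. not-car patches produce distinct orientation signatures, which is what lets a linear SVM separate them.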

This project is inspired by the Udacity Nanodegree course "Autonomous Driving". The link to that course is right here:
I have taken a few of the images from Google Images, YouTube, and other sites; some of them are mentioned below. For the GitHub code of the entire Nanodegree program, refer to the second link, "ndrplz/self-driving-car".
