Complications of Cameras at Night for Self-Driving Cars

Giscle · 3 min read · Nov 16, 2017
Object Detection at Night

We’ve started testing our object detection algorithm! Hurray! Our first step towards the perfect self-driving car the world has ever seen. Or so we thought. Getting a self-driving car to work at night, especially somewhere streetlights are scarce, is extremely hard.

Using the basic TensorFlow Object Detection API as our framework, here is how our algorithm did…

It looks good on the first play, but as you keep watching the replays you start realising that a truck is being detected as a car, a bus is being detected as an umbrella, and you end up losing your faith in technology.
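
For context, the API returns detections as parallel arrays of boxes, scores and class IDs, and a standard post-processing step is to drop anything below a confidence threshold. Here is a minimal sketch of that step; the label map is a small subset of the COCO classes the API’s pretrained models use, and the example values are made up to mirror the truck/umbrella mix-ups above:

```python
# Subset of the COCO label map (the full map ships with the pretrained model).
COCO_SUBSET = {3: "car", 6: "bus", 8: "truck", 28: "umbrella"}

def filter_detections(boxes, scores, classes, threshold=0.5):
    """Keep only detections whose confidence score clears the threshold."""
    kept = []
    for box, score, cls in zip(boxes, scores, classes):
        if score >= threshold:
            kept.append((COCO_SUBSET.get(cls, "unknown"), score, box))
    return kept

# Illustrative night-time frame: the model is confidently wrong about one
# object (a truck scored as "car") and unsure about the other (bus vs umbrella).
boxes = [(0.1, 0.2, 0.4, 0.6), (0.5, 0.5, 0.9, 0.9)]
scores = [0.81, 0.34]
classes = [3, 28]
print(filter_detections(boxes, scores, classes))
# → [('car', 0.81, (0.1, 0.2, 0.4, 0.6))]
```

Raising the threshold hides the umbrella-shaped bus, but it cannot fix the confidently wrong truck, which is why better training data matters more than tuning this number.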

Complications

There may be many reasons why it did not produce the desired results. Here are a few:

  1. Lack of street lights — the lighting needs to be sufficient for the camera to capture objects the detector can recognise
  2. Native vehicles — the TensorFlow API’s pretrained models are trained largely on vehicles typical of Europe and North America, not on Indian roads
  3. Quality — the camera quality needs to be high for TensorFlow to be accurate
  4. Stability — the camera/phone needs to sit in a mount so the footage stays consistent, which keeps anomalies to a minimum
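
The first item can at least be measured in software: a frame that is too dark to trust can be flagged before it ever reaches the detector. A minimal sketch, using Rec. 601 luma on a nested-list RGB frame; the 40.0 threshold is purely an illustrative assumption and would need tuning against real night footage:

```python
def mean_luminance(frame):
    """Mean Rec. 601 luma of an RGB frame given as rows of (r, g, b) in 0-255."""
    total, count = 0.0, 0
    for row in frame:
        for r, g, b in row:
            total += 0.299 * r + 0.587 * g + 0.114 * b
            count += 1
    return total / count

def too_dark(frame, threshold=40.0):
    # Threshold is an assumption for illustration, not a measured value.
    return mean_luminance(frame) < threshold

night = [[(10, 10, 12)] * 4] * 3   # nearly black frame
day = [[(180, 190, 200)] * 4] * 3  # bright frame
print(too_dark(night), too_dark(day))
# → True False
```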

Tackling the Complications

So what are we doing to fix these complications? Not much can be done about the lack of street lights, but stability, quality and native vehicles can all be addressed: a solid mount, a well-built, high-quality framework, and simply adding to the model used by the TensorFlow Object Detection API.

We will add to the model by collecting images of Indian trucks, autos, cycles and other native vehicles, then running the training algorithm on the existing TensorFlow model. This way, the existing classes in the model, such as truck and car, are updated with our own Indian vehicles. On top of that, to improve our results we are keeping our eyes open for a better framework!
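
In practice, that retraining plan starts with extending the model’s label map with the new classes before fine-tuning from the existing checkpoint. A hedged sketch of that bookkeeping step; the class names here are hypothetical placeholders, not the actual labels we will ship:

```python
# Classes already present in the pretrained model's label map (COCO-style IDs).
BASE_LABEL_MAP = {3: "car", 6: "bus", 8: "truck"}

# Hypothetical local vehicle classes we would collect and annotate images for.
INDIAN_VEHICLE_CLASSES = ["auto_rickshaw", "cycle_rickshaw"]

def extend_label_map(base, new_names):
    """Append new classes after the highest existing ID, keeping old IDs stable."""
    label_map = dict(base)
    next_id = max(label_map) + 1
    for name in new_names:
        label_map[next_id] = name
        next_id += 1
    return label_map

print(extend_label_map(BASE_LABEL_MAP, INDIAN_VEHICLE_CLASSES))
```

The merged map would then be written out in the API’s `label_map.pbtxt` format, and the training pipeline pointed at the existing checkpoint so the new classes are learned without starting from scratch.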

To handle the external problems, assuming we continue with camera data alone (no Lidar involved), the best option is probably to put extra lighting on the car; this may affect the results for the better, but is unlikely to be a drastic improvement. Fixing stability and quality, as stated above, is within our reach, but since we are building autonomous kits these two attributes really depend on our users, so it is up to them to help solve this complication.

Future Steps

As a startup we still have a lot to do, and the first thing to focus on is improving our vision system: a better object detection algorithm, lane detection, putting the Lidar data to good use and, finally, bringing them all together to act in unity.

Nonetheless, we are very close to achieving our goals; you may even see our car driving down the road in a couple of years! To speed up the process and bring that timeline from years down to months, how about you help us and download the OfferCam app today?

Giscle is a Computer Vision platform offering three core vision services (Detection, Recognition and Analysis) in the form of easy-to-integrate APIs and SDKs.