Where is Artificial Intelligence hiding in Autonomous Cars?

An interesting question posed by Prof. Amnon Shashua at the MIT Center for Brains, Minds and Machines. It is a very interesting talk; you should listen to it and take notes (as I did!)

Two great speakers in this domain are Prof. Shashua and Nvidia’s Jensen Huang. Hear both of their talks whenever you get a chance …

Object Detection — A Trip Down Memory Lane

Prof. Shimon Ullman introduced Shashua and talked about his thesis work, saying he still doesn’t fully understand all the math! Shashua’s research and accomplishments, of course, are long and deep! The 1995 video of object detection was nostalgic! We have come a long way, as Shashua would say later in his talk …

Sensing is interesting but solved to an extent (more later), and mapping is a technology and logistics problem (low cost and scalable to the globe), but driving policy is still not solved, and that is where most of the AI is hiding!

Driving policy is negotiation; it is cultural and it is location dependent! And we negotiate by motion, not by talking. So an AI, in order to drive successfully, needs to understand intent.

Interestingly, yesterday at GTC17, Bryn Balcombe (CTO, Roborace) expressed the same sentiments (my blog on the #gtc17 session is here).

Mapping would be solved by crowdsourcing, according to Mobileye. ~2M cars are harvesting data (Mobileye has contracts with BMW & VW for the maps); landmarks (~20,000 different items, viz. pavement markings, reflectors, poles, …), drivable paths and so forth are the building blocks for HD maps and for localization. He showed how the combination gives an accuracy of < 10 cm (GPS is accurate only to meters) by overlaying the maps onto Google Earth.

The maps are generated automatically. A map-assist module with a front-facing camera generates the data (~10 KB/km), which is sent to the cloud and aggregated there to an accuracy of at most 10 cm, on average 5 cm!
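As a rough illustration of why aggregation in the cloud helps (all numbers below are my own hypothetical choices, not Mobileye’s): if each passing car reports a noisy position estimate of the same landmark, averaging many independent reports shrinks the error roughly as sigma/sqrt(n).

```python
import numpy as np

# Hypothetical sketch of crowd-sourced landmark aggregation.
# Each passing car reports a noisy 2D position for the same landmark;
# averaging many independent reports drives the error down (~sigma/sqrt(n)).
rng = np.random.default_rng(0)

true_pos = np.array([12.40, 3.75])   # "true" landmark position in metres (made up)
sigma = 0.50                         # assumed per-car measurement noise: 50 cm
n_cars = 200                         # independent drive-bys reporting this landmark

reports = true_pos + rng.normal(0.0, sigma, size=(n_cars, 2))
estimate = reports.mean(axis=0)      # cloud-side aggregation = simple average here

single_err = np.linalg.norm(reports[0] - true_pos)
agg_err = np.linalg.norm(estimate - true_pos)
print(f"one car's report error: {single_err * 100:.1f} cm")
print(f"aggregated error:       {agg_err * 100:.1f} cm")
```

With these made-up numbers the aggregated estimate lands in the few-centimetre range, consistent with the sub-10 cm figure quoted in the talk; real pipelines would of course use far more than a plain mean (outlier rejection, alignment, etc.).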

The double merge is an interesting challenge and a hard problem to solve. 2025AD has an interesting blog post about it.

Machine Learning vs. Policy

He had a good set of thoughts on ML vs. RL as a general separation.

The Q&A session was more illuminating; a variety of topics were discussed!

On Reinforcement Learning

Supervised learning with neural networks works very well. Even with more parameters than data, the network finds a good local minimum.
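A toy sketch of that claim (my own illustration, not from the talk): a tiny two-layer network with 193 parameters fits 8 training points via plain gradient descent, even though it is heavily overparameterized. Everything here (widths, learning rate, target function) is an arbitrary choice for demonstration.

```python
import numpy as np

# Overparameterized supervised fit: 193 parameters, 8 data points.
rng = np.random.default_rng(1)

X = rng.uniform(-1, 1, size=(8, 1))      # 8 training inputs
y = np.sin(3 * X)                        # arbitrary smooth target

H = 64                                   # hidden width -> 1*64 + 64 + 64*1 + 1 = 193 params
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)             # forward pass
    pred = h @ W2 + b2
    err = pred - y
    loss = (err ** 2).mean()             # mean squared error
    # manual backprop for the two layers
    g_pred = 2 * err / len(X)
    g_W2 = h.T @ g_pred; g_b2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h ** 2)   # tanh derivative
    g_W1 = X.T @ g_h; g_b1 = g_h.sum(0)
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

print(f"final training loss: {loss:.6f}")
```

Despite having ~24x more parameters than data points, vanilla gradient descent drives the training loss close to zero, which is the benign-optimization behaviour the talk alludes to.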

But in the case of RL, you will be greatly disappointed if you try to apply the recipes from RL papers to all sorts of problems (good insight!). RL is the bedrock of solving driving policy, but it requires lots of fine-tuning.

One interesting area is GANs: a generative model of human driving policy, including bad drivers, can help simulate drivers for the robotic policy to train against. Beware, though: driving around simple areas and avoiding interventions so as not to mess with the stats is not a solution!

Tesla accident

Invariably, it was only a question of time before this question was asked! The Tesla crash was outside the design parameters of the system: it was configured for rear-end collisions, but what happened was a T-bone. Systems can do T-bone detection, but the Tesla system didn’t have it (the white truck and the sun have no bearing; they are just stories). Of course, it was an ugly divorce, and at some point both companies said they won’t comment anymore, so no comments!

Note: Two more talks by Amnon Shashua worth watching are his keynote address at the Bosch ConnectedWorld Conference 2017 and his CVPR 2016 keynote.