A few thoughts about #AI/#ML/#DL applied to #autonomous #vehicles

Carlos Holguin
AutoKAB
Nov 16, 2017

The week of Nov. 6, 2017 was full of milestones in the field of “#autonomous” #vehicles. Within hours of each other, Google (Waymo) and Navya respectively announced a test with no driver in the front seat (but still with employees supervising from the back seat) and a new “robo-taxi”. Then, less than 24 hours later, reality brought everyone’s feet back to earth, when a Navya shuttle had a (minor) crash less than two hours after its launch in #LasVegas. Fortunately, no one was hurt, but the simplicity of the scenario raises questions about any wider deployment, and reveals either a lack of expertise or outright ignorance of the state of the art. It’s unfortunate PR for both Navya and Google, but also for people thinking of AVs as the ultimate solution for mobility. So I decided to put down some thoughts on the “mainstream” approach of using “#AI” on #autonomous #vehicles…

  1. AI is based on data manually annotated by humans, then scaled to “learn” from massive amounts of data. So if a human makes a mistake in the annotation, that mistake will scale massively and pop up at an unknown point in time.
  2. Everyone keeps repeating that “AI will learn from its mistakes”. But how can AI know a mistake is a mistake? Just like any computer program, it follows a logic, which for AI is internal (i.e. derived from the data), and acts on the basis of external inputs. Only when the mistake happens (which for the AI will NOT BE a mistake, just an output), at an unknown point in time, will a human have to try to understand where it came from.
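Both points above can be sketched in a few lines. Here is a minimal, hypothetical illustration (toy data, toy labels, a 1-nearest-neighbour “model” standing in for a learned system): a single human annotation error is faithfully reproduced at prediction time, and the model emits the wrong label with no internal signal that anything is wrong.

```python
# Toy sketch (hypothetical data): a 1-nearest-neighbour "model" simply
# reproduces whatever label a human attached to the closest training example.

def nearest_label(training_set, query):
    """Return the label of the training point whose feature is closest to `query`."""
    return min(training_set, key=lambda pt: abs(pt[0] - query))[1]

# Toy encoding: features near 0 are pedestrians, features near 1 are shadows.
clean = [(0.1, "pedestrian"), (0.2, "pedestrian"), (1.1, "shadow"), (1.2, "shadow")]

# Same data with ONE human annotation mistake: a pedestrian labelled as a shadow.
noisy = [(0.1, "pedestrian"), (0.2, "shadow"), (1.1, "shadow"), (1.2, "shadow")]

print(nearest_label(clean, 0.25))  # pedestrian -- correct
print(nearest_label(noisy, 0.25))  # shadow -- the annotation error resurfaces
```

Note that in the second call the model returns “shadow” exactly as confidently as it returns anything else: from the inside, it is just an output, not a mistake. Only an external human check against ground truth can reveal the error.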

My takeaways are:

  1. AI is OK for Siri® and for identifying images of cats in a web search. These are commercial applications of AI today. But these AI applications don’t put people’s lives at risk. How can we think that AI will be capable of driving cars safely?
  2. Aviation is the safest transport mode. #Aviation’s safety record (fatalities per 100,000 flight hours) improved from 11.90 in 1938 to 4.80 in 1952 to 1.33 in 2010. With no #AI involvement whatsoever. Only “human” intelligence.
  3. IMO, a loooooot of research will be needed just to establish a method to PROVE that #AI can drive a car safely. In the meantime, we are playing Russian roulette with people’s lives by letting AI-driven vehicles loose on public roads.
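As a quick sanity check on the aviation figures cited above, the improvement factor works out like this (the numbers are the rates quoted in the takeaway, not an independent dataset):

```python
# Back-of-the-envelope check on the cited aviation safety record
# (fatalities per 100,000 flight hours).
rates = {1938: 11.90, 1952: 4.80, 2010: 1.33}

improvement = rates[1938] / rates[2010]
reduction_pct = (1 - rates[2010] / rates[1938]) * 100

print(f"Flying was about {improvement:.1f}x safer in 2010 than in 1938")
print(f"i.e. roughly a {reduction_pct:.0f}% reduction in the fatality rate")
```

Roughly a nine-fold improvement over seven decades, achieved through engineering rigour, certification, and incident investigation rather than machine learning.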

After the events of the past week, everyone needs to step back and ask whether we really want to increase the safety level of road transport, or whether we just want to achieve the feat of using AI to drive cars (risking the lives of the people around them). For the former, we don’t need an unproven, research-stage technology: the #aviation and #railway industries have already paved the way, and solutions based on extensive research in those fields are already available on the market. This is where AutoKAB can help.



President, SuburVAN. Product designer, Urban and mobility planner