(Dangerous) intellectual shortcuts in autonomous vehicle technology development

Carlos Holguin
Published in AutoKAB
Nov 16, 2017 · 2 min read

As time passes and the market fills with more and more autonomous vehicle developers, I see a common tendency, both among potential customers and promoters of these technologies and among the technology developers themselves, to avoid seeing autonomous technology in the big picture (voluntarily or not). As journalist Junko Yoshida puts it, transport specialists and AV promoters are so enamoured of this technology that they exercise a sort of “voluntary blindness” to any hiccups, since “AI will (supposedly) learn from its mistakes”. I guess this is also due to the over-hype around autonomous vehicles.

The most glaring example was a recent article in Slate titled “A Dangerous Self-Driving Car Is Still Better Than a Human Driver”. This statement is irresponsible in itself and reflects a dangerous misunderstanding: that putting sensors and computers on a car makes it “automagically” safer. Reality demonstrated otherwise with the crash of an “autonomous” shuttle just hours after its launch in Las Vegas. Even though everyone rushed to blame the human driver of the truck that hit the shuttle, everything that could have been done to prevent this fender-bender, in which fortunately no one was hurt, was certainly not done. So no, autonomous vehicles are not safe merely by virtue of adding more sensors, processors, and so on. Safety is achieved through safety-assurance methodologies, which may seem a hassle when you want to sell millions of “100% autonomous” vehicles, but they can save you from throwing all your work in the trash in seconds, and spare you a lot of headaches.

Another technology that reflects this short-sightedness is artificial vision. Car makers love artificial vision: the sensors are as cheap as they can get (thanks, Apple and Samsung), so they add value to cars at the smallest possible cost. The counterpart, though, is that vision requires enormous amounts of processing, and millions of engineering hours, to achieve the simplest results (say, measuring the distance to an object). Vision fails in plenty of simple scenarios: when you enter or exit a tunnel, when you follow lane lines and they disappear, and so on. As a result, safety assurance of artificial vision for a fully automated driving system (SAE L4/L5) is almost impossible to achieve. Had the best people I know doing vision (@vislab) all of Google X’s budget, they still could not ensure the safety of vision software.
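To make that “simplest result” concrete, here is a minimal, purely illustrative Python sketch of monocular distance estimation with the pinhole camera model. Every name and number in it is an assumption for illustration, not anyone’s production code; the point is that the geometry is trivial, while the pixel measurement it consumes is exactly what degrades at a tunnel entrance.

```python
# Illustrative sketch (not production AV code): monocular distance
# estimation via the pinhole camera model. The formula is one line;
# the safety-critical part is the pixel measurement it depends on.

FOCAL_LENGTH_PX = 1000.0  # assumed focal length of the camera, in pixels
CAR_HEIGHT_M = 1.5        # assumed real-world height of a typical car

def distance_to_car(bbox_height_px: float) -> float:
    """Pinhole model: distance = focal_length * real_height / pixel_height."""
    if bbox_height_px <= 0:
        raise ValueError("detector returned no usable bounding box")
    return FOCAL_LENGTH_PX * CAR_HEIGHT_M / bbox_height_px

# In good light, the detector reports a bounding box 100 px tall:
print(distance_to_car(100.0))  # -> 15.0 m

# Entering a tunnel, an over-exposed frame makes the detector clip the
# box to 60 px. The same formula now reports the car much farther away:
print(distance_to_car(60.0))   # -> 25.0 m, a dangerous overestimate
```

The arithmetic is the easy part; proving that the upstream detection stays reliable across every lighting condition a car will ever meet is where safety assurance of vision becomes, as argued above, nearly intractable.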

This problem probably comes from the fact that “autonomous” vehicles are at the top of the hype curve, and people have not yet started to realise that the reality is far more complex. All we can hope is that this realisation doesn’t come at the price of (more) human lives.

Carlos Holguin
Editor for AutoKAB
President, SuburVAN. Product designer, urban and mobility planner.