IMAGE: Lightwise — 123RF

Man versus machine: the handover problem

The 2017 Consumer Electronics Show, which ended yesterday, saw an extraordinary number of announcements testifying to the rapid progress being made in autonomous driving: an alliance between BMW, Intel and Mobileye, which says it will have fleets of autonomous vehicles on the road this year; Hyundai, which is aiming to produce autonomous models costing around $30,000 to compete with the much-anticipated Tesla Model 3; and Honda, which is now ready to enter the fray. In a very short period of time, the idea of self-driving vehicles has gone from science fiction to a commercial and social reality. One thing seems certain: in 2017, more people than ever will ride in self-driving cars.

But one question still dogs further progress: the handover, the transfer of control of a vehicle from self-driving mode back to the driver. The issue led Google to focus solely on fully autonomous vehicles with no role for humans whatsoever, aiming directly for so-called Level 5 autonomy (for now, Google's vehicles still have a driver ready to take control, along with a wheel and pedals, but according to the company, that is purely for legal reasons).

The crux of the issue is the driver's ability to take over at very short notice when the autonomous system does not know how to react. Research at Stanford University shows that suddenly taking control of an autonomous vehicle places significant demands on drivers and can be problematic. Such situations might arise because of roadworks, accidents, roadblocks, police directions or maintenance work, or because the unexpected behavior of other drivers requires pulling into oncoming traffic or changing lanes suddenly.
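
To make the timing problem concrete, here is a deliberately simple sketch (a toy Python state machine; the class, the ten-second response budget and the fallback behavior are all invented for illustration, not any manufacturer's actual logic). It shows why a handover is not a simple switch: if the driver does not respond in time, the vehicle needs a fallback of its own.

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    HANDOVER_REQUESTED = auto()
    MANUAL = auto()
    MINIMAL_RISK = auto()  # e.g. slow down and pull over safely

TAKEOVER_BUDGET_S = 10.0  # assumed time the driver gets to respond

class HandoverSupervisor:
    """Toy supervisor for the autonomous-to-manual handover."""

    def __init__(self) -> None:
        self.mode = Mode.AUTONOMOUS
        self.requested_at: float | None = None

    def request_handover(self, now: float) -> None:
        # Called when the driving system meets a scene it cannot handle.
        if self.mode is Mode.AUTONOMOUS:
            self.mode = Mode.HANDOVER_REQUESTED
            self.requested_at = now

    def driver_took_control(self) -> None:
        self.mode = Mode.MANUAL

    def tick(self, now: float) -> None:
        # If the driver has not responded within the budget, the car cannot
        # simply drop control: it falls back to a minimal-risk manoeuvre.
        if (self.mode is Mode.HANDOVER_REQUESTED
                and now - self.requested_at > TAKEOVER_BUDGET_S):
            self.mode = Mode.MINIMAL_RISK

sup = HandoverSupervisor()
sup.request_handover(now=0.0)
sup.tick(now=12.0)   # the driver never responded
print(sup.mode)      # Mode.MINIMAL_RISK
```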

Nissan CEO Carlos Ghosn recently said that we are still a long way from autonomous systems able to react to such situations, and that his company will connect its vehicles to a call center staffed with operators on hand to assist whenever a situation exceeds the decision-making capabilities of the algorithms.
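
A rough sketch of that escalation flow might look like the following (Python, with every function name and the example situations invented here; Nissan has not published its implementation): the car plans for itself when it can, and only otherwise asks a remote human to trace a safe path, which it then drives autonomously.

```python
from typing import Optional

def onboard_planner(situation: str) -> Optional[str]:
    # Stand-in for the car's own planner: returns a maneuver, or None when
    # the scene (roadworks, an officer waving traffic through) defeats it.
    known = {"clear road": "continue in lane"}
    return known.get(situation)

def operator_guidance(situation: str) -> str:
    # Stand-in for the call-center step: a human studies the camera feeds
    # and draws a safe path, which the car then follows autonomously.
    return f"follow the operator-drawn path around: {situation}"

def plan(situation: str) -> str:
    maneuver = onboard_planner(situation)
    if maneuver is not None:
        return maneuver                   # the algorithms cope on their own
    return operator_guidance(situation)   # escalate to a remote human

print(plan("roadworks blocking the lane"))
```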

The idea contrasts with the argument that artificial intelligence, given sufficient machine learning, can perform such tasks better than humans: that progressive training can generate algorithms that don't just comply with the rules of the road, but also understand the situations in which it may be necessary to break them. Tesla's algorithms and sensors are already able to anticipate and prevent some accidents. Shared learning applied to entire fleets of vehicles takes on added meaning in this context.
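
The fleet-learning argument can be illustrated with another simple sketch (Python; the event schema and aggregator class are hypothetical, not Tesla's pipeline): every disengagement reported by one car becomes training signal for all of them.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Disengagement:
    """One event where a car had to hand back control (hypothetical schema)."""
    location: str
    cause: str  # e.g. "roadworks", "oncoming-lane detour"

class FleetLearner:
    """Toy aggregator: what one car fails at, the whole fleet learns about."""

    def __init__(self) -> None:
        self.events: list[Disengagement] = []

    def report(self, event: Disengagement) -> None:
        self.events.append(event)

    def retraining_priorities(self, top_n: int = 3):
        # The most frequent causes of disengagement across the fleet become
        # the highest-priority scenarios for the next training cycle.
        return Counter(e.cause for e in self.events).most_common(top_n)

fleet = FleetLearner()
fleet.report(Disengagement("A6 km 23", "roadworks"))
fleet.report(Disengagement("M30 exit 4", "roadworks"))
fleet.report(Disengagement("C31 km 8", "oncoming-lane detour"))
print(fleet.retraining_priorities())
# [('roadworks', 2), ('oncoming-lane detour', 1)]
```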

Who deals best with a crisis: a person given just a few seconds' notice, or an algorithm properly trained to handle it, with the experience of an entire fleet behind it? Nissan's approach may have its uses in some situations, but what to do when there really is no alternative but to hand the wheel back to the driver?

The answer to this question clearly shapes the strategies of Google, now Waymo, whose bet on total autonomy removes humans from both the equation and the responsibility, and of Tesla and others, whose systems hand control back to the driver in certain situations.

Commercially, the first approach is far more radical: it requires greater investment and is deeply disruptive, insofar as it pretty much means the end of private vehicle ownership. The second is more gradual, more acceptable to carmakers, and probably less jarring for many road users, who see autonomous driving as a possibility, not an obligation.

Google, however, already considers its technology mature enough to launch a spin-off. As on previous occasions, I am inclined to think this transition will happen much more quickly than many expect … including the traditional carmakers.


(In Spanish, here)