Don’t Blame the (Tesla) AutoPilot!

Hemant Bhargava
Aug 10, 2016 · 3 min read


Most of you will have seen the widely broadcast story about a fatal accident involving a Tesla in autopilot mode. Commentators were quick to blame, chastise, or at least question Tesla’s autopilot software. The government watchdog NHTSA is investigating Tesla, and there is a possibility of a knee-jerk reaction that forces Tesla to “recall” the software and Tesla drivers to lose this feature. Before we fall into that trap, I would like to offer a perspective.

1) Augment vs. Replace: Tesla AutoPilot, like many other “artificial intelligence” (AI) tools, aims to “augment” the human, a cornerstone of AI work for decades, not to replace him or her. For example, I find the autopilot feature immensely useful: it lets my eyes scan the environment around, in front of, and behind me, and stay alert for unforeseen things I would have missed if I were intensely focused on keeping the car in its lane. It doesn’t replace me. There are many things I can anticipate, predict, and respond to far better than it can, yet the autopilot definitely helps me while I’m in the car. The particular scenario that occurred, a big truck crossing perpendicular to your lane right in front of you, perfectly illustrates the value of augmentation. A driver with autopilot would be far more likely to notice the truck than one without.

2) Beta status and correct use: Tesla is very clear about the conditions under which autopilot should be used (“standard” freeways, no construction, and evidently no intersections with cross-traffic). I find myself pushing these boundaries, but I am always ready to take over in the blink of an eye. I’m sure many early adopters do the same, but we must realize there is a gamble every time we push past those boundaries. In this particular case, I cannot be certain, but it appears to me this event may have occurred outside the “correct use” conditions.

3) Tradeoffs and probabilities: Even if points 1 and 2 were not valid in this case (which they are), one should still be clear about what to expect from an AI tool: it may not be perfect, but if it is “better on average” (i.e., it reduces the probability of an accident), it is still worthwhile.

Most importantly, look at how items #1 and #3 combine. Not only does autopilot perform better on average, it generally performs well precisely where the human is weak (e.g., losing attention on a long boring drive, drifting across lanes, unsafe lane changes). And while it will certainly fail in some cases where the human would have done fine, the complementary strengths mean that, together, 1 + 1 equals 3 or more.
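To see why that combination compounds, here is a rough back-of-the-envelope sketch in Python. The probabilities are purely illustrative assumptions, not real accident data; the point is only that when the human and the autopilot tend to fail in different situations, the chance that both fail on the same event is far smaller than the chance of either failing alone.

```python
# Purely illustrative, made-up numbers -- not real accident statistics.
p_human_misses = 0.02      # hypothetical chance the human alone fails to handle a risky event
p_autopilot_misses = 0.05  # hypothetical chance the autopilot alone fails to handle it

# If the two tend to fail in different situations (human: boredom, drifting;
# autopilot: unusual road geometry), treat the failures as roughly independent:
# both must miss for the event to go unhandled.
p_both_miss = p_human_misses * p_autopilot_misses

print(f"Human alone misses:     {p_human_misses:.3f}")
print(f"Autopilot alone misses: {p_autopilot_misses:.3f}")
print(f"Both miss together:     {p_both_miss:.4f}")  # 0.0010, far lower than either alone
```

Under these made-up numbers the combined miss rate drops by more than an order of magnitude, which is the sense in which complementary strengths make 1 + 1 worth more than 2, even though the autopilot on its own is imperfect.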

At the end of the day, autopilot software is in a sense not so different from, say, a rear-view mirror. You use it to get a sense of the objects behind you, but sometimes you still turn your head and look for yourself. And you certainly don’t blame the mirror if you back into a wall.

I will concede that Tesla has erred in its marketing message around the software, and even the name “Auto” pilot creates a false sense of reliance on automation. If so, change the name, but don’t kill the product.

P.S.: This entry repurposes an earlier one published on the Blogger platform, 07/03/2016.


Hemant Bhargava

Jerome and Elsie Suran Professor in Technology Management at the UC Davis Graduate School of Management (http://gsm.ucdavis.edu/faculty/hemant-bhargava)