Why Are Autonomous Cars Frightening?

Mister Lichtenstein
Published in Extra Newsfeed · 7 min read · Aug 26, 2017

A lot has been written about how autonomous cars are going to replace human drivers over the next few decades. This, of course, has been the story for… the last few decades. We’re only… a few decades away from it. Always. For argument’s sake, let’s assume we’re finally right and that full autonomy will be available in some form in the next decade or two. Some people have reacted with utter joy, while others find it terrifying. Strangely, Elon Musk, a champion of self-driving cars, says that AI is a worse threat to humanity than North Korea. Nice job on giving Skynet a leg up there, Elon.

If human drivers are automated out of work, 3.5 million truck drivers in the US alone will lose their jobs. Without human drivers to serve, a lot of truck stops could close or automate, putting millions more out of work. Whole towns that exist solely because they sit astride trade routes would cease to exist. Whole states would suddenly lose a large part of their economic base while millions of people land on the dole at the same time. It’s possible the economics of trucking would mean fewer warehouses, or warehouses in different places, destroying still more jobs and automating the ones that replace them. For the most part, if you work in a technical job that an AI might be able to do (I’m even looking at you, accountants), and you’re under thirty, you will be put out of a job in your lifetime.

The thing that seems to freak people out right now, though, is being in a semi-autonomous car. If the current state of automation is any clue to where it’s going, it’s a little frightening. It’s one thing to have a human driver, to whom all communications are clear, but imagine being driven around by a machine with all the communication skills of a two-year-old. It’s bad enough when you’re in a cab and the language barrier makes “hey, you’re going down the wrong road” impossible to communicate. Just imagine when the computer doesn’t know what you’re talking about because all the engineers spent their time teaching it what dotted white lines mean and nothing about what “What the fuck are you doing, you stupid machine!?” means. Have you ever used voice commands in a car? If so, then you know what I mean. My wife, who is British, can’t even get American Siri or American Google Assistant to understand her. Just imagine if she were from Venezuela, or New Jersey.

If you’ve ever been in the back of a driver’s ed car, watching the student driver and the teacher, then you know the teacher has to sit there with his or her hands on the extra wheel, his or her feet on the extra brake, waiting for the driver to make a mistake. When you turn on Tesla’s Autopilot, that is effectively what you’re doing.

This means that when you engage a semi-autonomous system, you’re a guinea pig. And look, you might be all right with that. This brings us to where we need to have a talk about autonomy vs. automation.

On one hand you have the kind of autonomy like Tesla’s Autopilot, which aims to replace the driver wholesale (though not really, wink wink, because lawyers). When this approach reaches its zenith, in theory, you tell the car where to go and it goes there safely. It’s frightening because we don’t know how to teach it the difference between a pedestrian it should brake for, a pedestrian with a shotgun trying to carjack us, and a pedestrian who is a child with a water pistol not trying to carjack us. Also, a fully automated machine (and even semi-autonomous cars are effectively fully automated in the sense that the computer can be given control of everything) can be hacked, and who likes the idea of someone carjacking you remotely?

On the other hand you have the kind of automation best exemplified by things like ABS and the automatic gearbox. These are automatic features that take over a technical task best left to a machine (in most cases), giving the driver one less thing to worry about.

“But machines will always be safer than human drivers,” you say. “Machines don’t sleep, don’t get drunk, etc.”

While this is technically true, it brings us to an important finding about the effectiveness of AI versus the effectiveness of human intelligence. The truth, as it turns out, is unexpected.

In the realm of chess, an area AI researchers have worked in since the field was born, it was found that supercomputers like Deep Blue could beat human chess masters through sheer brute-force computing: combing through all the possible permutations of a game for victory. What was interesting was what happened when human chess masters (some of whom were just okay, as chess masters go) were paired with okay chess-playing computers against chess-master computers like Deep Blue. In theory, computers like Deep Blue are capable of beating either the lesser computer or the human individually. Together, though, the cyborg pair defeated the big bad AI handily, and repeatedly. This is termed augmented intelligence. It’s like how we all know a lot more now that we have encyclopedias in our pockets every minute of every day. When I needed to fill the tires on my wife’s car, I just looked up the model and the Toyota website told me exactly what PSI they needed.

What this suggests is that actually the safest way forward is not 100% automation or 100% human control, but a kind of augmenting of human abilities through automating certain tasks, and aiding humans with the work of powerful computerized sensors. It’s a cybernetic connection in the original sense of the term developed by mathematician Norbert Wiener.

Part of the reason these systems work is because humans understand they are still responsible. Calling a car autonomous when it isn’t is like calling a food healthy when it isn’t — it will lead to self-destructive behaviors.

As Adam Clark Estes observed in his piece about semi-autonomy in Jalopnik this week, part of the problem is that you can get overconfident about the autonomy aspect and drive like a dangerous idiot. In some respects, this is not that different from fully manual vehicles operated by the average idiot, unwrapping a chalupa while passing someone on the highway. The problem is that when you trust the tech to take care of you, you might start taking care of yourself less, sending texts or making a playlist on your car’s smartphone interface. It’s a dangerous attitude that makes semi-autonomy both less effective and less safe than full autonomy.

This inevitably brings me to ergonomics. When you sit in a machine that is meant to work a certain way, even if it isn’t the “best” way, it should work in a way that is familiar, or there will be problems. Even the QWERTY keyboard has been confronted with “better” replacements, but since it’s what we’re all used to, it sticks. When BMW decided to replace their turn signals with their current, idiotic system, they lost some customers to the learning curve. Deliberately making someone unfamiliar with a technology makes them more likely to have problems with it, so taking the actual task of piloting a vehicle (read: steering, accelerating, braking) away from a driver for, in the best case, years, and then throwing them back into it in the midst of an accident the computer can’t handle (as is currently the law) is utter insanity. If people have trouble with a turn signal for “a couple of weeks” (according to DeMuro), just imagine the problems they’ll have with all the other crap we’re dealing with.

At 5:21: my brother from another mother, Doug DeMuro (who is my height), complaining about the thing I HATE HATE HATE about BMWs (and every car they make, including the Mini)
Jump to 3:29 for Jeremy Clarkson’s take on the BMW indicator stalks

The smallest change to a vehicle can result in catastrophic human error, as evidenced in Malcolm Gladwell’s Revisionist History podcast, episode 8, in which he examines the fallout from the “random acceleration” cases brought against Toyota. To make a long story short, it turns out that all carmakers have this problem to roughly the same degree, and the cause 99% of the time is human error, as it likely was in Toyota’s case. This human error usually comes as a result of a small change to the driver’s environment, like an extra-plush floor lining, that changes muscle memory for where things like the brake and accelerator pedals should be. In a nutshell, if you change the ergonomics of a car, thwarting the expectations of drivers, it will result in a learning curve, and that learning curve will result in lots and lots of deaths. This will be true even when cars have full autonomy, because if I can’t tell that I’m still signalling left in a BMW, then how the hell do you expect me to know my autonomous driving is turned off at 75 on the freeway?

So if carmakers want to make us safer on the road, the path forward is clear: focus less on whiz-bang new features that look sexy on paper, and more on standardizing ergonomics so fewer people die hurtling down the road at 125 mph because they thought the accelerator was the brake. Focus less on selling us the idea that we’re starring in The Jetsons and more on tools we can use to keep drivers safer.

Please recommend and comment! Please check out my website! Please check me out on Twitter!
