Wake Me Up When I Can Sleep in My Self-Driving Car

Opinions expressed here are my own, and are not associated in any way with my current or previous employers.

I wrote a post last year called Tech Moving Too Quickly for Auto Companies. In that piece, I discussed traditional auto manufacturers’ lack of tech experience as they stumbled to incorporate connectivity into their products, and how this produces low-quality features that leave many vehicles open to (often unsophisticated) security breaches. This year, an even bigger technological hurdle is making headlines for these companies: the safety pitfalls of building AI-controlled driving ecosystems.

In the United States, engineers in the field and the National Highway Traffic Safety Administration (NHTSA) use the following automated vehicle classifications:

  • Level 0: The driver completely controls the vehicle at all times.
  • Level 1: Individual vehicle controls are automated, such as electronic stability control, automatic braking, and basic cruise control.
  • Level 2: At least two controls can be automated in unison, such as adaptive cruise control in combination with lane keeping.
  • Level 3: The driver can fully cede control of all safety-critical functions in certain conditions. The car senses when conditions require the driver to retake control and provides a “sufficiently comfortable transition time” for the driver to do so.
  • Level 4: The vehicle performs all safety-critical functions for the entire trip, with the driver not expected to control the vehicle at any time. As this vehicle would control all functions from start to stop, including all parking functions, it could include unoccupied cars.

In my opinion, given our propensity for distraction, the auto industry should not pursue level 2 and 3 systems. They should, for the most part, be “leap-frogged,” with focus sharply trained on level 4 systems.

Much has been written lately about a May 2016 auto accident involving a Tesla driver who was watching Harry Potter while the vehicle was engaged in a feature that falls somewhere between level 2 and level 3. The feature combines adaptive cruise control with lane keeping, and though it instructs drivers to be ready to regain control of the vehicle at any time, it is irresponsibly called “Autopilot.” It is also a beta feature, which is new territory for the auto industry: flashy new safety implementations that still contain bugs. I get that Tesla is taking a completely new approach and trying to shake up the game, but how could anyone not see that this would eventually produce tragic outcomes?

Put simply, with each passing day more utilities, pieces of content, and social applications compete for a finite amount of human spare time. We have gladly obliged, and as we willfully let our brains drift into all of these new escapes, we must also recognize our failures. A huge majority of us still fail to safely incorporate (or set aside) these technologies while driving. Look at texting and driving: everyone knows it is deadly, and yet it remains incredibly common.

Level 2 and 3 autonomous driving systems are attractive to engineers and marketers, but the reality is that they often give drivers a false sense of security. Just because the driver tapped “I agree” on the console (without reading anything else, as we all do when installing a new version of iOS) does not mean they are consciously aware that they must resist the urge to get sucked into another mental space. And even if they are, all it takes is one slip to tempt death.

I’m not a psychologist, or behavioral scientist, or professional in any field that can back up my thoughts here. But in my opinion, humans in today’s world of constant distraction and time demands are not really compatible with level 2 and level 3 driving systems. Our 21st century brains need level 4.

Personally, I welcome the day when I can step into my own vehicle and go to sleep, read the paper, or start a movie on my hour-long commute. But let’s be honest, many people have safety concerns and aren’t necessarily comfortable with this — much like we weren’t comfortable popping a $4000 check into an ATM when they first appeared.

If we want the masses of drivers to embrace fully autonomous level 4 driving systems, pushing out these lower-level features is just plain counterproductive. They become deadly when a driver, negligent or not, understandably cannot instantaneously regain control in an emergency. The more level 2 and 3 features are released and involved in tragic situations, the harder it will become to convince everyone to embrace level 4 systems (which, by the way, offer many more benefits, including massive traffic reduction).

The question becomes: do we need regulation to step in here, or can the industry handle this itself?