Self-Driving Cars Are on the Rise, but Can They Really Be Driverless?

A quick look at the evolution of autonomous vehicles and the variables to consider.

Ricky Huynh
RickNTech

--

Photo by Firdouss Ross on Unsplash

Autonomous vehicles are the future of mobility, offering countless potential advantages: notably increased safety, reduced traffic congestion, and greater human productivity. The growing phenomenon has large companies such as Waymo (formerly the Google self-driving car project), General Motors, and Tesla racing to develop a fully automated self-driving vehicle, with many startups joining in recent years.

Autonomy is defined as the freedom from external control or influence. What does that mean in the world of driving? Can we truly see vehicles on the road without human influence? Unfortunately, it’s not a simple yes or no answer; there are several variables to consider that may delay the path to driverless cars. Before we dive into those variables, let’s first understand the different stages of autonomy.

Levels of Autonomy

Autonomous technology is not binary — there are several stages in its evolution. The path to driverless vehicles is described by six levels of autonomy, defined in SAE International’s standard J3016.

Levels of Autonomy for Self-Driving Cars

Level 0 - No Automation

Requires full human-driver interaction to operate the vehicle (i.e., a conventional car).

Level 1 - Driver Assistance

In some cases, the vehicle may automatically provide steering or control speed, but typically not both at the same time. The driver remains responsible for operating the vehicle but may leverage features such as adaptive cruise control or automatic parallel parking.

Level 2 - Partial Automation

The beginning stage of autopilot. The vehicle can control steering, acceleration, and braking in normal driving conditions. However, the driver must be ready to take control at all times and remains responsible for changing lanes and scanning for unexpected conditions.

Level 3 - Conditional Automation

The jump from Level 2 to 3 automates most operating functions, including monitoring the environment, allowing the driver to disengage even further. The vehicle prompts the driver when it encounters challenging road conditions, which means the driver’s attention is still critical.

Level 4 - High Automation

The vehicle is capable of operating (under select conditions) without any human interaction or oversight, responding on its own to road conditions, lane changes, and turn signals. However, there are constraints, such as a prohibition on operating beyond a certain speed or a restriction to specific geographical areas (geofencing), that keep the vehicle from being fully autonomous.

Level 5 - Full Automation

The final stage makes the vehicle completely independent of human interaction. A fully driverless vehicle can operate in any condition or geographical area that a human driver could handle. The “driver” can just kick back and relax as a passenger.
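The taxonomy above maps naturally onto a small enum. The sketch below is purely illustrative (the class and helper names are my own, not part of the SAE standard or any library); it encodes one practical distinction the levels draw: through Level 3, a human must stay ready to intervene.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels (illustrative sketch)."""
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # steering OR speed assistance, not both
    PARTIAL_AUTOMATION = 2      # steering AND speed; driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives, but may hand back control
    HIGH_AUTOMATION = 4         # no driver needed within a limited domain
    FULL_AUTOMATION = 5         # no driver needed anywhere a human could drive

def driver_attention_required(level: SAELevel) -> bool:
    # Through Level 3 the human must stay ready to take over;
    # at Levels 4 and 5 the system handles the fallback itself.
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```

Using `IntEnum` keeps the levels comparable as plain integers, so `driver_attention_required(SAELevel.PARTIAL_AUTOMATION)` is `True` while the same check at `HIGH_AUTOMATION` is `False`.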

Dangers of Self-Driving Cars

Photo by chuttersnap on Unsplash

Kick back and relax? What could possibly go wrong? In an ideal world, all cars would be at Level 5: vehicle-to-vehicle communication would be widespread, traffic congestion would drop, and roads would be free of accidents. It’s difficult to imagine a world like this, and with the dangers associated with autonomous vehicles, it may forever remain a dream. Here are three dangers we should anticipate with the rise of self-driving cars.

Malfunctions

“Hey Siri, play Taylor Swift” (“Calling Mom…” — Siri). Anyone who has used software has likely experienced an issue that causes a device to act unpredictably or fail altogether. But a glitch in Siri on your iPhone and a wrong turn at 60 mph carry very different stakes. Rigorous testing (Waymo has driven over 5 million road miles and billions of computer-simulated miles) helps ensure autonomous software releases are safe and reliable, but it’s hard to believe the technology will ever work flawlessly. Malfunctions will happen, and their ramifications will be hard to overcome, especially for public acceptance.

Hackers

Hackers gonna hack. An autonomous vehicle is, at its core, a computing device running software (that will malfunction) that a hacker could gain access to. Imagine a hacker taking full control of your car, or taking over hundreds of vehicles and weaponizing them in a terrorist attack. The possibilities are endless, and hackers are capable of finding vulnerabilities in even the most secure systems.

Humans

Human error is believed to cause over 90% of all traffic collisions. As mentioned previously, autonomous technology is not binary, and getting to Level 5 will still require the human touch. As the technology matures at Level 4 (some companies claim it already has), drivers will likely develop a false sense of trust in the vehicle, negatively affecting their reaction speed to unforeseen events. According to a 2015 NHTSA study, the “driver” of an autonomous vehicle could take up to 17 seconds to respond, whereas drivers of a conventional car react in less than a second.

So, What’s Next?

Many experts believe we are about 98% of the way to Level 5. Unfortunately, the last 2% is proving extremely challenging, if not impossible, to achieve. Many challenges still need to be resolved; the most interesting one (in my opinion) is the ethical dilemma facing self-driving cars.

Although self-driving cars will likely have drastically fewer accidents, accidents are inevitable, sometimes with deadly consequences. Conventional cars leave the decision-making to the driver; autonomous vehicles leave it to the programmer. Let’s look at an example, as diagrammed in the image below.

Autonomous Vehicle Ethics Dilemma

A self-driving car with faulty brakes is heading down the road with four passengers. The vehicle is barreling toward a crowd of children, with no certainty about how many can disperse in time. The car can swerve right, almost certainly killing an elderly pedestrian, or swerve left into a tree, likely killing the passengers. What should it do? Should it prioritize the safety of its passengers or protect the pedestrians? How should considerations such as likelihood of death, the number of lives involved, or age be weighed?
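To make concrete how the programmer, not the driver, ends up encoding the ethics, here is a toy sketch of one possible policy: a purely utilitarian rule that minimizes expected fatalities. Every name and number below is hypothetical, and real systems are nothing like this simple; the point is that whatever rule ships in the software is a moral commitment someone had to write down.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str                 # e.g. "continue", "swerve_left", "swerve_right"
    expected_fatalities: float  # estimated lives lost if this action is taken

def choose_action(outcomes: list[Outcome]) -> Outcome:
    # A purely utilitarian policy: pick the action with the fewest expected
    # fatalities. Any alternative (protect passengers first, never swerve,
    # weight lives by age) is a different ethical choice, equally codable.
    return min(outcomes, key=lambda o: o.expected_fatalities)

# Hypothetical estimates for the dilemma described above.
dilemma = [
    Outcome("continue", 3.0),      # hit the crowd (uncertain dispersal)
    Outcome("swerve_right", 1.0),  # hit the elderly pedestrian
    Outcome("swerve_left", 4.0),   # hit the tree, likely killing passengers
]
```

Under this policy, `choose_action(dilemma)` swerves right, a result many people would find unacceptable: minimizing a body count is itself a contested ethical stance, not a neutral default.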

There’s no easy way to approach these ethical scenarios, and they may take longer to resolve than the technology takes to develop. Phantom Auto has taken a different approach to this dilemma by offering teleoperation-as-a-service, which “enables a remote human operator to operate an [autonomous vehicle] when it encounters a scenario which it cannot handle on its own.” Perhaps their technology could detect faulty brakes and hand a human operator the final decision. But that leads to another challenge: who will be held accountable?

What Does Ricky Think?

The world will shift toward autonomous vehicles sooner than many skeptics believe, although I think we will be stuck at Level 4 for a few decades. Will we have cars that are completely driverless? Sure we will, but driverless cars will be most successful, and publicly accepted, if deployed within specific constraints: for example, driverless taxis that stay below 35 mph, geofenced within downtown San Francisco.
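A constrained Level 4 deployment like the one I describe could be enforced with a check as simple as the sketch below. The coordinates, radius, speed cap, and function names are illustrative values I picked, not anyone’s actual operating domain.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical operating constraints for a Level 4 deployment:
# a speed cap plus a circular geofence (center point and radius).
GEOFENCE_CENTER = (37.7749, -122.4194)  # downtown San Francisco (approx.)
GEOFENCE_RADIUS_MILES = 2.0
SPEED_LIMIT_MPH = 35.0

def haversine_miles(a, b):
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3958.8 * 2 * asin(sqrt(h))  # Earth radius ~3958.8 miles

def within_operating_domain(position, speed_mph):
    """True only when the vehicle is inside the geofence AND under the cap."""
    return (haversine_miles(position, GEOFENCE_CENTER) <= GEOFENCE_RADIUS_MILES
            and speed_mph <= SPEED_LIMIT_MPH)
```

A vehicle cruising at 30 mph downtown passes the check; the same vehicle at 50 mph, or anywhere outside the two-mile radius, fails it and would have to hand off or pull over.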

Having autonomous vehicles operate under constraints will help companies overcome the dangers of self-driving cars and mitigate the potential risks. However, the ethical dilemma is a challenging hurdle that may not be resolved anytime soon. There are no right answers to how autonomous vehicles should be programmed, and the debate over who should be held accountable will be long-standing. Another factor that will delay the advance to Level 5 is the infrastructure required for driverless cars to communicate (network, security, connectivity, storage, computing, and so on). That’s a rather large topic that I’ll save for a future post.

I’m eager to see how self-driving technology and its ethical dilemmas evolve in the next few years. These are my thoughts; let me know what you think in the comments section below.

Did you find this useful? Please recommend or share, and feel free to hit the clap icon. 👏🏻 Follow me for future posts.
