Quantum Roads Ahead: The Future of Autonomous Vehicles

Seth Zucker
Published in b8125-fall2023
4 min read · Dec 8, 2023

For many around the world, learning to drive a car is a staple of adolescence. The process usually begins with learning the general rules of the road and develops until you eventually get behind the wheel and take things slow. After enough time, driving becomes just another part of day-to-day life: whether on the highway, in a city, or even in a different country, your ability to drive and rely on intuition carries across varying circumstances.

But what about teaching a car to drive itself? Oddly enough, it involves many of the same underlying principles: machine learning algorithms are trained on how to drive, programmed with the rules of the road, and fed a continuous stream of real-time information from sensors and cameras so that, based on that training, the vehicle can make decisions autonomously. There is, however, one key difference between a trained person driving a car and an autonomous vehicle (AV): humans are capable of "systematic thinking." As people, we can contextualize information, taking bits and pieces of what we've learned in one area and applying them to different situations. In the context of driving, if you know how to park in a driveway, you can likely infer the proper technique for parking in a garage without being retaught. While you can train and program an AV with as many scenarios as possible, current technology lacks the ability to generalize, intuit, or transfer tangential learning to new situations. This traces directly back to the architecture of current AI systems: they process information sequentially (one piece after another) according to specific rules or algorithms, which inhibits them from applying knowledge from one task to a related but untrained one.
This limitation is often the underlying cause of many reported AV accidents. Most recently, a pedestrian was struck by a human-driven car, thrown into the path of a Cruise AV, pinned to the vehicle, and then dragged 20 feet as the AV attempted to pull over to the side of the road. It's easy to see the problem: even though pulling over after an accident is the correct thing to do under the law, any person behind the wheel would have immediately stopped the car upon realizing that someone was pinned to it. With accidents like these making headlines and consumer trust in AVs declining as fear grows, an important question emerges: how can we build AVs to make the "best" and most "ethical" decisions on the road?

The first option is the Elon Musk / Tesla approach, in which it was reported that the self-driving function was aborted less than one second prior to impact. Unfortunately, the only thing this is effective for is limiting Tesla's legal liability, not actually addressing the issue at hand. Others believe the answer lies in utilitarian programming. Essentially, these AVs would be programmed to always take the path of least harm (i.e., avoid hitting 5 people by swerving to hit only 1 different person). The flaw with this approach is that we have now trained vehicles to actively take a life in that situation, which many argue directly violates an ethical duty of care not to put someone in harm's way.
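To make the objection concrete, a utilitarian rule reduces to picking the maneuver with the lowest expected harm. The sketch below is purely illustrative; the maneuver names and harm scores are hypothetical and do not come from any real AV system:

```python
# Minimal sketch of a utilitarian "path of least harm" rule.
# All names and numbers are hypothetical, for illustration only.

def choose_maneuver(maneuvers):
    """Select the maneuver with the lowest expected-harm score."""
    return min(maneuvers, key=lambda m: m["expected_harm"])

options = [
    {"name": "stay_course", "expected_harm": 5},  # e.g., 5 pedestrians at risk
    {"name": "swerve_left", "expected_harm": 1},  # 1 bystander at risk
    {"name": "brake_hard",  "expected_harm": 3},
]

best = choose_maneuver(options)
# The rule deterministically picks "swerve_left": it actively redirects
# harm toward the single bystander -- precisely the ethical objection
# raised above.
```

Note that the code has no concept of a duty of care; minimizing a single harm score is exactly what makes the vehicle an active participant in choosing who gets hurt.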

If we thus move away from a utilitarian approach, a recently published paper by professors at the University of Michigan Law School proposes programming autonomous vehicles to uphold traffic laws while also emphasizing human safety. The authors posit that by basing the instructions and guidelines on legal-system frameworks, autonomous vehicles would operate under the "authority of human-defined traffic law and ensure that the vehicle avoids decisions that introduce unreasonable risks". While this helps resolve the ethical dilemma, it still fails to address the root of the problem: AI's inherent inability to react and adapt to unforeseen circumstances.

As such, if adapting the training and programming of AVs can only get us so far, the next option becomes a fundamental overhaul of the architecture on which these vehicles are built. We thus turn to an architecture based on what once seemed like science fiction: quantum computing (QC). In brief terms, QC is a type of computing that uses quantum bits, or qubits, which can exist in multiple states simultaneously (both 1 and 0 at the same time), allowing for more efficient and more complex computations than classical computing, which can only operate on binary states (either 1 or 0). If we can integrate QC into the AV architecture, it could enable faster real-time decision making, a particularly important capability in constantly changing driving environments. Additional benefits include the ability to consider multiple possibilities simultaneously, leading to a more flexible and adaptive decision-making process (similar to human intuition), as well as an enhanced ability to utilize the wealth of data constantly being generated by all of the sensors and cameras in AVs.
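The qubit idea above can be made slightly more precise with a toy state-vector sketch. This is a simplified classical simulation, not real quantum hardware: a qubit's state is a pair of amplitudes (a, b) with |a|² + |b|² = 1, and measuring it yields 0 with probability |a|² and 1 with probability |b|²:

```python
import math

# Toy simulation of a single ideal qubit as a pair of amplitudes.
# A classical bit is exactly 0 or 1; a qubit in superposition carries
# weight on both outcomes until it is measured.

def measure_probabilities(state):
    """Return (P(measure 0), P(measure 1)) for a qubit state (a, b)."""
    a, b = state
    return abs(a) ** 2, abs(b) ** 2

# Equal superposition -- the loose sense of "both 1 and 0 at the same
# time" used above: a 50/50 chance of either outcome on measurement.
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))
p0, p1 = measure_probabilities(plus)
# p0 and p1 are each (approximately) 0.5, and they sum to 1.
```

A classical bit would force p0 or p1 to be exactly 1; the superposition is what lets quantum algorithms weigh many possibilities within one state, which is the intuition behind the decision-making benefits described above.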

It is important to note that quantum computing as a part of our everyday lives is still at least a decade away. But for AVs to truly operate in a driving environment that involves the unknown unknowns of human behavior, it will not be enough to change the way these vehicles are programmed. We will need to fundamentally change the architecture on which they are built.
