The Role of AI in “Self-Driving” Cars: Understanding the Risks and Evolving Developments

Liam Macdonald
QMIND Technology Review
8 min read · Apr 1, 2024
Tesla FSD V12

In a rapidly evolving world where there is seemingly a new, fascinating technological advancement every day, it is challenging to keep up with the latest developments. The possibilities that technologies like Machine Learning (ML) and Artificial Intelligence (AI) bring to our lives are extremely exciting; however, it is important to understand their functionality and potential implications in specific contexts before blindly embracing these advancements.

One sector that has been revolutionized by technological innovation in recent years is the automotive industry. Emerging advancements like AI and ML have paved the way for the possibility of autonomous vehicles (AVs) to automate transportation, potentially transforming society in the foreseeable future. While the concept of fully “self-driving” cars is undeniably compelling, significant questions remain regarding the reliability of such vehicles and the potential risks that they present.

Tesla: A True Leader in the AV Market

Tesla is an innovative, well-recognized leader in the AV market, having pushed the limits of recent developments in the field and, in doing so, redefined society’s perception of autonomous vehicles.

In late 2023, Tesla announced that they would be making monumental changes to the manner in which the self-driving function of their autonomous vehicles would be controlled: a dramatic switch from manual programming to the use of end-to-end neural networks. Prior to the release of the FSD V12 model, autonomous decisions “made by” Tesla’s vehicles emanated from over 300,000 lines of C++ code.

Understanding the Role of C++ in AV Decision Making

Consider the example above. The white shaded area above the steering wheel depicts an example of what this C++ code might look like when an autonomously operated vehicle is deciding whether or not to stop at a red light. In layman’s terms, the first two lines would essentially trigger the AV’s sensors to determine: (a) if a red light has been detected; and (b) how far the car is from the white line where it should stop (illustrated, respectively, by the blue dotted lines above).

The succeeding three lines of code would then compute the brake force needed to stop (based on the distance from the car to the white line), presuming that a red light has in fact been detected by the car. The last line of code would send a signal to the AV’s operating hardware to apply the appropriate computed brake force.
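To make this concrete, here is a rough C++ sketch of the kind of hand-written logic described above. The function names, stubbed sensor values, and the assumed 8 m/s² maximum deceleration are invented purely for illustration; they are not Tesla’s actual code or API.

```cpp
// Illustrative sketch only: hand-written "stop at a red light" logic.
// All names (detect_red_light, distance_to_stop_line, apply_brake_force)
// and values are hypothetical stand-ins for a real perception/actuation stack.
#include <algorithm>
#include <iostream>

bool detect_red_light()        { return true; }   // (a) has a red light been detected?
double distance_to_stop_line() { return 25.0; }   // (b) metres to the white line
double current_speed()         { return 14.0; }   // vehicle speed in m/s (~50 km/h)

void apply_brake_force(double f) {                // send the computed command to hardware
    std::cout << "brake force: " << f << "\n";
}

int main() {
    if (detect_red_light()) {
        double d = distance_to_stop_line();
        double v = current_speed();
        // Deceleration needed to stop exactly at the line: a = v^2 / (2d)
        double required_decel = (v * v) / (2.0 * d);
        // Normalize against an assumed maximum deceleration of 8 m/s^2
        double brake = std::min(required_decel / 8.0, 1.0);
        apply_brake_force(brake);
    }
    return 0;
}
```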

Notwithstanding the competence and experience of Tesla’s engineers and developers, how can we be certain that these individuals can account for complex or unforeseen driving conditions? Furthermore, how can we ensure that the sensors responsible for detecting the red light or white line will not malfunction, in the same way that the facial recognition features on our smartphones are less accurate after we have received a haircut?

Autonomous Vehicles: Understanding the Risks

The above image depicts a potential problem that AV programmers have grappled with for years, and one that motivated Tesla to develop FSD V12.

Let us now reconsider the previous red light example. Simply put, a programming error occurs when a coding issue prevents the program from producing the intended result. The challenge for AV programmers is accounting for the inherent risk in driving a motorized vehicle and identifying unforeseen circumstances that may impact the safety of a vehicle’s autonomous operation. If the program requires the detection of a white line to calculate the brake force needed, what happens if the white line is faded? Does the program assign a default brake value, and if so, does that value ensure the car stops in time, or alternatively, does it cause the car to stop too aggressively, potentially harming passengers?
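As a rough illustration of this fallback problem, consider the sketch below (again with invented names and values): when the stop-line measurement is missing, the hand-written rules have little choice but to fall back on an arbitrary default.

```cpp
// Illustrative sketch only: what happens when the white line cannot be seen?
#include <algorithm>
#include <iostream>
#include <optional>

constexpr double kDefaultBrakeForce = 0.5;  // arbitrary fallback: too weak? too harsh?

// Returns a normalized brake command in [0, 1]. Hypothetical logic.
double choose_brake_force(bool red_light,
                          std::optional<double> stop_line_distance_m,
                          double speed_mps) {
    if (!red_light) return 0.0;
    if (!stop_line_distance_m) {
        // The white line is faded or occluded: fall back to a hard-coded value
        // and hope it is neither too gentle nor too aggressive.
        return kDefaultBrakeForce;
    }
    double required_decel = (speed_mps * speed_mps) / (2.0 * *stop_line_distance_m);
    return std::min(required_decel / 8.0, 1.0);  // normalize against ~8 m/s^2 max
}

int main() {
    std::cout << choose_brake_force(true, 25.0, 14.0) << "\n";          // line visible
    std::cout << choose_brake_force(true, std::nullopt, 14.0) << "\n";  // line faded
    return 0;
}
```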

Though the example above may seem trivial, the statistics are not: it was recently reported that Tesla vehicles’ self-driving functionality has been involved in roughly 736 collisions since 2019, 17 of which were tragically fatal.

Tesla has promoted FSD V12 as mitigating these concerns through its use of end-to-end neural networks. End-to-end neural networks are a form of Deep Learning, a relatively novel subset of Machine Learning.

Understanding Machine and Deep Learning

Like ML models, Deep Learning models learn from data to extract patterns, make predictions, or execute classifications. An analogy that is commonly used is that of children learning about colours.

When children are first learning how to identify and classify colours, they are presented with items that are clearly labelled “blue” or “red”, for example. Over time, children become more familiar with the labelled items, thereby becoming more confident in their ability to correctly classify colours.

Understanding Learning: Children

Similarly, ML models become more accurate over time as they are exposed to more data. Just as the goal for children learning about colours is to accurately identify colours based on experience, the goal of a marketing-based ML model, for example, might be to accurately classify customers into segments based on certain characteristics.
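To make the analogy concrete, here is a toy sketch of such a segmentation model: a nearest-centroid classifier that “learns” the average profile of each segment from a handful of labelled customers and then assigns a new customer to the closest one. The data, segment names and features are entirely made up for illustration.

```cpp
// Toy nearest-centroid classifier for customer segmentation (illustrative only).
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Customer { double age; double monthly_spend; };

int main() {
    // Labelled training data: the more examples the model sees, the better the
    // centroids represent each segment (like the child seeing more labelled colours).
    std::vector<std::pair<Customer, std::string>> training = {
        {{22, 40}, "budget"}, {{25, 55}, "budget"},
        {{41, 320}, "premium"}, {{47, 280}, "premium"},
    };

    // "Learning": accumulate the features of each segment.
    std::map<std::string, Customer> sums;
    std::map<std::string, int> counts;
    for (const auto& [c, label] : training) {
        sums[label].age += c.age;
        sums[label].monthly_spend += c.monthly_spend;
        counts[label]++;
    }

    // Classify a new customer by the nearest centroid (squared distance).
    Customer query{30, 250};
    std::string best;
    double best_dist = 1e18;
    for (const auto& [label, s] : sums) {
        double n = counts[label];
        double da = query.age - s.age / n;
        double ds = query.monthly_spend - s.monthly_spend / n;
        double dist = da * da + ds * ds;
        if (dist < best_dist) { best_dist = dist; best = label; }
    }
    std::cout << "predicted segment: " << best << "\n";  // prints "premium" for this toy data
    return 0;
}
```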

Understanding Learning: Machine Learning Models

Deep Learning models and neural networks function in a similar way to traditional Machine Learning models, except there are often no explicit instructions involved. For example, think about a bird in a forest trying to find its way back to its nest. Traditional machine learning would associate certain landmarks (like trees or rivers) with directional values and provide explicit instructions, based on those values, for reaching the nest. Deep Learning would instead allow the bird to find its way back to the nest on its own, learning for itself which landmarks are of directional value and which are “dead ends”. Through this process, the bird would become very familiar with the forest, penalizing itself for taking a wrong turn and rewarding itself for taking a correct one. Eventually, the bird would know the forest so well that it could reach its nest as directly and quickly as possible.
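The bird’s trial-and-error process closely resembles reinforcement learning. The toy sketch below uses tabular Q-learning on a one-dimensional “forest” of six positions, with the nest at position 5; the states, rewards and learning parameters are illustrative only and bear no relation to how a production driving system is actually trained.

```cpp
// Toy tabular Q-learning: a "bird" learning which way to fly to reach its nest.
#include <algorithm>
#include <array>
#include <cstdlib>
#include <iostream>

int main() {
    constexpr int kStates = 6, kNest = 5;                 // positions 0..5, nest at 5
    constexpr double alpha = 0.5, gamma = 0.9, epsilon = 0.2;
    std::array<std::array<double, 2>, kStates> Q{};        // Q[state][action], 0 = left, 1 = right

    std::srand(42);
    for (int episode = 0; episode < 500; ++episode) {
        int s = 0;                                          // start far from the nest
        while (s != kNest) {
            // Explore occasionally, otherwise take the best-known action.
            int a = (std::rand() % 100 < epsilon * 100) ? std::rand() % 2
                                                        : (Q[s][1] >= Q[s][0] ? 1 : 0);
            int next = std::max(0, std::min(kNest, s + (a == 1 ? 1 : -1)));
            // Reward: small penalty per move ("wrong turns" cost time),
            // large reward on reaching the nest.
            double r = (next == kNest) ? 10.0 : -1.0;
            double best_next = std::max(Q[next][0], Q[next][1]);
            Q[s][a] += alpha * (r + gamma * best_next - Q[s][a]);
            s = next;
        }
    }
    // After training, the learned policy at every position is simply "fly right".
    for (int s = 0; s < kNest; ++s)
        std::cout << "position " << s << ": best action = "
                  << (Q[s][1] >= Q[s][0] ? "right" : "left") << "\n";
    return 0;
}
```

After a few hundred simulated flights, the learned policy at every position is simply “fly right”, mirroring how the bird eventually knows the most direct route back to its nest.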

In the context of self-driving cars, the operating system can be thought of as the bird and the passenger’s destination can be thought of as the nest. Over time, the neural-network-based operating system would learn the evolving nuances of roadways and develop action plans for addressing unforeseen circumstances (like the disappearance of the white line, as discussed above).

Understanding Learning: Deep Learning and Neural Networks

Neural Networks in AVs: Understanding the Risks

The integration of neural networks into Tesla’s vehicles represents a dramatic development in the autonomous vehicle industry. This technology allows the vehicle’s operating system to simultaneously interpret millions of data points, from diverse driving conditions to unexpected roadblocks to subtle nuances in pedestrian behaviour. All of this purportedly enhances the vehicle’s “decision-making” capabilities by overcoming the challenges inherent in static, conventional C++ code in light of the dynamic nature of real-world driving.

However, how can AV passengers be certain that these neural networks will effectively “learn” about unforeseen driving circumstances or make the correct decision when presented with options?

One highly contested field of AI and ML is AI ethics, which examines the societal impacts and moral implications of AI technology. With reference to self-driving cars, the concern may relate to teaching the neural network about the “moral values” behind certain decisions and when these values should cause an AV to override its previous learnings.

In a driving scenario where an autonomous vehicle is approaching a green traffic light and a group of teenagers is illegally crossing the intersection in question, how should the autonomous vehicle proceed? Since the car would not have detected a red light (and therefore would not be signalled to stop), the operating system would be presented with the following options:

  • Option A: Abruptly stop the car to prevent a collision with the pedestrians. This option risks harm to the passenger due to rear-end impact with other cars that do not have sufficient time to stop.
  • Option B: Abruptly swerve into the right-hand turn lane and exit the intersection, avoiding a collision both with the pedestrians and with the cars behind.

Initially, it appears that Option B may be the optimal course of action for the vehicle to pursue; it mitigates the risk of harm to the passenger and to the pedestrians. However, what if an elderly individual is also crossing on the right-hand side of the intersection, parallel to the AV’s direction of travel? In this circumstance, the operating system would be presented with the following options:

  • Option A: Abruptly stop the car to prevent a collision with the teenage and elderly pedestrians. This option again risks harm to the passenger from rear-end collisions with other cars following the AV.
  • Option B: Abruptly swerve into the right-hand turn lane, avoiding a collision with the group of teenage pedestrians. This option risks harm to the elderly pedestrian.
  • Option C: Do not change the car’s current operating path, so as to avoid a collision with the elderly pedestrian. This option risks harm to the group of teenage pedestrians.

The inherent challenge in developing these neural networks is to balance the AV’s navigational capabilities with its moral understanding of various unforeseen circumstances. With reference to the foregoing scenario and related options, how should the neural network decide whom to protect: the passenger, the teenage pedestrians or the elderly pedestrian? Furthermore, would, and/or should, the neural network consider the illegal nature of the teenagers’ crossing in its “diagnosis” and chosen course of action?

Weighing the Risks Associated with Autonomous Vehicles

Although the prospect of fully “self-driving” cars is compelling in many ways, how can we as a society be sure that the potential benefits outweigh the associated risks? In particular, in the tragic circumstance that a neural network causes a serious AV collision, who should bear the ultimate moral and legal burdens involved: the passenger in the self-driving car, the developers responsible for producing vehicles like the Tesla FSD V12, or the neural network itself?

One thing is certain: the conversation about ethics, safety and responsibility relating to autonomous vehicles is far from over. It is my hope that this conversation will continue with thoughtful risk analysis and corresponding oversight, as AV developments continue to evolve.
