Nobody is to Blame for Accidents Involving Self-Driving Cars, … right?

Responsibility gaps and autonomous vehicles

Peter Edmund Wilinski
5 min read · Jun 11, 2023
Photo by Eric Weber on Unsplash

As you slowly recover, waking up from your unplanned nap, you’re welcomed by flashing lights on the dashboard and a high-pitched alarm coming from the speakers.

A red triangle with an exclamation mark is displayed on the big screen positioned in the middle of the dashboard.

Where there used to be a steering wheel in older models, now there’s another screen for watching Netflix on tedious commutes.

The car has come to an abrupt stop; you vaguely remember the sound of screeching wheels on the asphalt, intruding into your dream, right before you woke up.

You’re confused and your thinking is slow; you’re still half asleep. As you realise, however, that you (or rather your car) have just been involved in an accident, the adrenaline wakes you up.

How bad is it?

As you look back through the rear windscreen, your heart is pounding so hard, you can feel it in your throat.

At first, you see nothing, until your eyes finally rest on an indistinct shape lying just off to the right, leaning against the curb of the sidewalk.

From where you’re sitting, it looks like a bag of old clothes, carelessly cast aside. But then, you recognise legs, arms…

Self-Driving Cars and Responsibility Gaps

Self-driving cars could end up being safer than conventional, human-driven vehicles. So goes the promise and there seemingly is good evidence to believe that to be the case. Eventually…

Eventually, because as with all systems with machine learning capabilities, it might take a while before the algorithms catch on. Adaptation and optimisation require time. That isn’t the point, though.

Even if self-driving cars are significantly safer than conventional vehicles from the get-go, they won’t be perfect. No matter how safe they are and how well the technology works, at some point, there will be an accident.

Responsibility gaps arise when there seems to be no appropriate target of blame. In the case of an accident involving a self-driving car, we could look for a responsible party and find nobody who fulfils the criteria.

When a self-driving car causes or is at least involved in an accident, who or what is responsible for the damage and possibly injury incurred?

The first parties that come to mind are:

1. What about the driver?

Future self-driving cars will most likely not even allow their users to assume control in precarious situations.

Imagine the driver is woken from a nap, discombobulated and unaware of their surroundings. The system “yells” at them, warning of an impending accident, and shrugs off any responsibility by simply telling the user to immediately resume control and “deal with it”. That is not a viable solution: the user, suddenly turned driver, could arguably only make matters worse.

Would we blame the user of a self-driving car for the accident, knowing very well they had little to no chance to prevent it from happening in the first place? Well, other than not using the vehicle at all.

Would we feel guilty if our self-driving car caused an accident?

Telling people not to use self-driving cars is like telling them not to use conventional cars because those could, if they malfunctioned, cause an accident (even though most accidents do not happen because of mechanical failure). It is not a viable solution.

But if the user is not to blame, as they were passively sitting in the car with no control over the vehicle’s behaviour, who then should we blame?

2. What about the manufacturer?

Machine learning entails a gradual adaptation and optimisation of the machine’s code to the conditions in which that machine is deployed.

The manufacturer, the engineers and programmers, have little control over this process. Once the machine is out there, learning, the algorithms change without any input from the programmers.

Can a programmer be held responsible for the behaviour of a machine that now runs on code that has significantly changed from its original state?
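The drift the question points to can be made concrete with a toy sketch. This is purely hypothetical code, not any manufacturer’s actual system: a simple model is shipped with a fixed parameter, then keeps updating itself on data it encounters only after deployment, so the value the engineers signed off on no longer matches what is running in the field.

```python
# Toy illustration (hypothetical): a deployed model that keeps learning
# online drifts away from the parameter its engineers originally shipped.

shipped_weight = 1.0  # the parameter the manufacturer tested and released


def update(weight, x, y, lr=0.1):
    """One step of online gradient descent on squared error for y ~ weight * x."""
    error = weight * x - y
    return weight - lr * error * x


# Observations the system gathers only once it is out on the road.
field_data = [(1.0, 2.0), (1.5, 3.1), (0.8, 1.7), (1.2, 2.5)]

weight = shipped_weight
for x, y in field_data:
    weight = update(weight, x, y)

drift = abs(weight - shipped_weight)
print(f"shipped: {shipped_weight:.2f}, current: {weight:.2f}, drift: {drift:.2f}")
```

After only four field observations, the running parameter already differs measurably from the shipped one; scale this up to millions of parameters and months of driving, and the system the programmer wrote is no longer quite the system that is on the road.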

It could very well be argued that as long as the manufacturer, engineers, and programmers have complied with safety standards and kept both the hardware and software of the vehicle up to date, no blame attaches to them.

3. What about society?

We could blame society at large for allowing autonomous vehicles to roam our streets.

Self-driving cars, however, bear too much promise to be ignored or outright rejected.

Apart from that, it seems this technology is inevitable in the long run, anyway.

4. What about the car?

We could blame the car, but how would we punish it?

Even if we were to allow machines into the world of morally responsible agents, how would we punish a machine?

Would we ceremoniously shut the machine down or delete its software? Would we destroy the entire car or just wipe its hard drive clean?

And would those punishments even be commensurate with the “wrongdoing” the car committed?

Such ritualistic punishments would play on our psychological tendency to anthropomorphise inanimate objects. After all, we give our cars pet names, we praise a car when it runs well, and we are liable to kick it when a tire blows out.

But could we seriously “punish” cars, even self-driving ones? Would this constitute a legally and morally appropriate punishment? Would it even satisfy our intuitive notions of justice?

Conclusions

This piece was supposed to incite thinking about self-driving cars, rather than provide conclusions. The debate about the moral responsibility for autonomous machines continues. Self-driving cars, which promise to have the greatest impact on our society and are also seemingly quite imminent, are a good first candidate for such considerations.

It is vital to take control of the development of this technology and tailor it to the greatest possible benefit for society, rather than pointlessly fight against it.

If we can underpin the beneficial aspects of this technology (such as improved road safety and the lives it could save) with principles of justice as fairness and concern for the welfare of all people, we can look towards its development with hope rather than apprehension.

Anxiety about change is inevitable; and it seems that hope is the necessary impulse to overcome it, face the challenges, and avoid dwelling in the comfortable yet subpar status quo.

This article is but the first of a whole series of articles concerning self-driving cars (and possibly veering into other autonomous technologies).
