Self-Driving Cars & Responsibility Gaps

What happens when we do not know whom to blame…

Peter Edmund Wilinski
Jun 13, 2023

In my last article, I briefly touched upon some issues that can arise when we try to determine who is responsible for a traffic accident caused by a self-driving car.

There, I barely scratched the surface, though. In this article, I would like to delve deeper.

Photo by Rémi Jacquaint on Unsplash

“Responsibility Gaps”

Usually, when we are looking for the responsible person, we don’t do it because we want to thank them. Questions about responsibility more often than not have a negative connotation.

When someone gets hurt and we ask who is responsible, we are really looking for somebody to blame and possibly punish. The question of compensation may also arise.

Considering that we can usually retrace and analyse most events with sufficient certainty about who did what, it might seem odd that a gap concerning responsibility could ever emerge.

Ultimately, as long as we can ferret out all the agents involved and reconstruct the causal chain that led to a particular outcome, how could we possibly fail to find the agent or agents most causally responsible?

Setting the more complex and intricate aspects of responsibility and responsibility ascription aside, responsibility gaps can arise, among other things, when the “agent” that caused the accident is itself not an appropriate target of blame.

Autonomous Systems

When you drive your lawnmower over your neighbour’s foot, there really is no question about who is responsible for the injury (obviously, the neighbour…).

Or if we drive our car and cause an accident, be it due to negligence or recklessness, nobody will question who is responsible for the mess.

But when our self-driving car runs somebody over, and we are not even able to intervene at all, whose fault is it?

The difference between the lawnmower (or a conventional car, for that matter) on the one hand and the self-driving car on the other is this: we are in full control of the former; with the latter, we cede all control to the car’s software.

Control means being able to direct and foresee the behaviour of a thing. When we operate a lawnmower, a car, or a chainsaw, we can both direct its behaviour and foresee what is going to happen when we direct it a certain way.

With systems equipped with machine learning capabilities, systems that moreover act autonomously (i.e., decide for themselves what to do in any given scenario), the user’s ability to direct and/or foresee the machine’s behaviour is limited, if not completely non-existent.

Thus the gap: we cannot reasonably hold the user accountable, and it seems odd to blame a machine (or at least, it would feel odd to expect the machine to justify its behaviour to us and then, if we are not happy with the justification, punish it). Nobody appropriate is left to blame.

Why does it matter?

Well, for one, we usually want to find the culpable party whenever something bad happens. At the end of the day, there is good evidence that we all have more or less pronounced tendencies towards retributivist behaviour. (Stay tuned: I’m working on an article about exactly that, retribution gaps!)

If someone hurts us or somebody close to us, we want the wrongdoer to be punished. That desire can persist even if there are reasons to attenuate the wrongdoer’s responsibility. I believe that many of us can still be angry even if there is no real wrongdoing involved at all. The anger might be irrational, even inappropriate, but that does not make it evaporate.

Emotions tend to be rather rebellious. And although rational reflection can eventually help overcome the negative emotions, people cannot be expected, especially right after a tragedy, to simply shrug off the loss as an unfortunate incident.

On the other hand: punishing the user of a self-driving car for the accident (e.g., by incarceration) also does not feel right.

Although self-driving cars could well be significantly safer than human-driven vehicles (human error and all that), it is unlikely that they will be perfect… nothing ever is. There is always a chance that something will go wrong and someone will get hurt.

We could abstain from developing this technology, but what kind of solution would that be?! Saving lives is as strong a moral argument for continuing its development as there can be.

Conclusion

Responsibility gaps are a phenomenon showing us that our moral frameworks begin to waver now that our technological development is transcending what we have known so far. The development of autonomous systems equipped with machine learning capabilities is certainly one instance of exactly such transcendence.

Artificial Intelligence (A.I.) would be another…

Further Reading/Sources:

Danaher, J. Robots, law and the retribution gap. Ethics Inf Technol 18, 299–309 (2016). https://doi.org/10.1007/s10676-016-9403-3

Matthias, A. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6, 175–183 (2004). https://doi.org/10.1007/s10676-004-3422-1

Wilinski, P. E. Self-Driving Cars and Retribution Gaps: A “Welfarist” Approach to Assuage Retributive Sentiments. (2023). [Manuscript in preparation].*

*If you’re interested in reading the manuscript, please contact me at: the.essayist.nexus@gmail.com

This article is part of a series exploring the topic of self-driving cars and similar autonomous systems from a philosophical perspective.
