How safe is autonomous tech?

Martin Colebourne
5 min read · Mar 24, 2018

Last week, an autonomous vehicle under development by Uber hit and killed a pedestrian in Arizona. Surprisingly, police have released video footage of the incident, showing both an interior view of the vehicle and an external view.

It’s not pleasant to watch, but three things are clear from the footage:

  • The pedestrian is crossing a wide road slowly, wheeling a bicycle. Her path is slightly diagonal, moving away from the vehicle. She doesn’t suddenly change direction, or veer out into the road.
  • The vehicle makes no attempt to stop, slow down, or change direction.
  • The operator looks bored and distracted immediately prior to the accident (as you would be, if your job were to sit for hours in a vehicle whilst it drives around in fully autonomous mode). She is not focused on the road and does not intervene prior to the collision.

A full understanding of what went wrong will obviously have to wait until the investigation is complete, but I think there are some important points to note that should give us pause for thought about the state of autonomous tech and the way in which it is being developed and tested.

I’ve argued before that I believe full autonomy is doomed as a technology. In brief, I believe that we cannot hope to create vehicles that will be fully autonomous at all times, in all conditions, and on all roads, with current technology. Instead, I think we must end up with either part-time autonomy or a change in what we consider ‘cars’ to be, limiting their usability.

Whether I am right or wrong, the idea that full autonomy is possible (the dream being pushed by a number of companies) is a hypothesis. There is nothing certain about it. In which case, we need to view the testing of autonomous technology as experiments — experiments that are being conducted on public roads.

In some cases, these are experiments being conducted by the companies developing the technology. But increasingly, they will be experiments conducted by customers, sometimes without them fully understanding the limitations of the technology they are using.

In Arizona, one of these experiments has clearly failed, resulting in the death of a pedestrian. Autonomous technology promises to reduce road deaths, but the video demonstrates that the vehicle failed to spot the pedestrian and take avoiding action when it should have done so. This is not the first death and it will not be the last. The question is, are the right controls in place to ensure safety — and should we even be allowing these experiments to continue on public roads whilst the technology remains unproven?

The fallacy of a human supervisor

In the majority of cases, experimental vehicles have an operator on board, whose job it is to supervise the vehicle and step in if things go wrong. This incident also demonstrates how misleading that arrangement is. Human Factors professionals have known for decades that placing humans in a ‘supervisory’ role, whilst automated technology runs a system, is a flawed strategy. Humans placed in this role inevitably find it hard to remain focused — doing nothing but watching is intensely boring. They cannot be expected to respond proactively to a threat that the system fails to spot — for example, to hit the brakes if a vehicle fails to spot an obstruction.

There is a further challenge: autonomous systems tend to work perfectly until they encounter something unexpected, at which point they sound a warning to the operator and hand back control. However, at this point the operator is not up to speed with what is happening — they have been bored and distracted — meaning they are poorly prepared to take control. Worse, these are conditions which have flummoxed the technology, so there is likely to be something unusual and unpredictable happening. This is a recipe for disaster.

The footage from Arizona demonstrates exactly these conditions — the operator is not able to intervene to prevent the accident. But we should not expect her to be able to — years of research have taught us that.

Whilst this is scary enough in a test environment, what should make us more worried is that we are rapidly creating these conditions for customers too. As autonomous technology is gradually rolled out into more vehicles, and is designed to cover more situations, drivers are being asked to become supervisors. They are supposed to engage autonomous driving under the right conditions and then remain in a supervisory role. However, they may be poorly aware of the limits of the technology and we know that they will make for poor supervisors.

These factors may have contributed to the death of a Tesla driver in May 2016. He had engaged autonomous mode on a highway without a central divider, a type of road it was not designed to handle, and he took his hands off the wheel rather than properly supervising. The vehicle ran at full speed into a truck that was crossing the highway, killing him instantly.

So if we know that putting humans in a supervisory role is dangerous, why are we doing it? Well, manufacturers are caught in a bind: drivers could be asked to keep driving, with technology focused on improving safety by looking out for hazards that the driver has missed. However, developing technology optimised to look over our shoulder will not naturally lead to full autonomy — the focus is different.

Since full autonomy is the dream, companies need to focus on allowing the technology to take control. However, whilst the technology is incomplete, drivers must act as supervisors, a role we know to be unsafe.

The Race to Autonomy

The development of autonomous technology is an exceptionally difficult prospect, perhaps an impossible one. It is also dangerous, with lives at stake. We might hope that companies would take a slow and careful approach, developing and testing with the greatest care. However, there is an uncomfortable sense of a race going on within the industry.

Established manufacturers like BMW, Mercedes and Volvo are developing autonomous technology, but they seem slow and out of date. New manufacturers like Tesla, and technology companies like Waymo and Uber, seem to be racing ahead. It is easy to paint this as the magic of the tech nerds, compared to the plodding pace of the old guard.

However, perhaps we should question whether this is really the difference between the slow and careful work of automotive giants, with decades of history in safety and a true appreciation of the implications of their work, and the hubris of Silicon Valley, striding forward with the mantra of ‘move fast and break things’. In the case of autonomy, ‘breaking things’ can mean killing people.

