Self-Driving Cars And Sound Cannons: An Indecent Proposal

The harder it tries to be intimidating, the cuter it looks.

Deaf ears seem to pair well with loud mouths, as the media continues to fumble through the wrong arguments about ethics in autonomous vehicles: the increasingly infamous “trolley problem,” wherein a car must choose whom to kill in an accident it can’t avoid. The latest carbon copy of this clickbait comes from MIT, which really ought to know better.

I’ve already written about the real ethical problem at the forefront of self-driving technology. This time around, I’m proposing that we eschew ethics altogether in favor of good ol’ weaponry to solve one of the major remaining problems with autonomous cars: pedestrians. Yep, weapons against pedestrians! Just trying to put those two words as close to each other as possible for the detractors seeking a rebuttal quote to highlight. No need to thank me.

Let’s first establish a train of thought to ensure we’re on the same page:

  1. “Solving ethical dilemmas” is not something you simply jot down on your to-do list as an engineer. Our species has been given thousands of years and the best brains nature has to offer, yet we’ve solved very little in that time. It’s easy enough to pose the question (or maybe not, considering how often the media has posed the wrong one), but getting the answer is not something we’re able to put on a timeline, let alone agree is good for the future of humanity… and that, my friends, is currently the greatest obstacle to the practical application of autonomous vehicles.
  2. Solving ethical dilemmas is not an absolute requirement of self-driving cars. It’s a steeper hill to climb with much greater payoff, as opposed to directly solving what we think needs to be solved and then hoping nothing else comes up, or manipulating the environment so that nothing else comes up. The appeal of developing native ethics is that, just like raising a kid to be a responsible adult, you no longer need to supervise them as they confront the millions of edge cases you didn’t directly teach them to handle.
  3. Solving edge cases is an exercise in attrition; never forget that. At some point you have to shrug your shoulders and say, “well, if that happens… tough shit.” Acknowledging this is vital for the growth of self-driving cars, because the popular sentiment among government officials is “we want to make sure this technology is 100% safe before moving forward”, and the popular statement among detractors is “what about ‘X’ dangerous scenario?” We as a society need to start calling bullshit on those statements. Air travel still has unsolved problems which cause inconvenience, injury and death. Rail travel too. Hell, think of all the ways you can still get electrocuted despite a century of developing consumer-grade electricity. Should we have held off on implementing electricity because it wasn’t — and still isn’t — 100% safe? The goal of engineering is to do better, not to do perfect; perfect is not a feasible goal. Self-driving cars are already on the verge of doing better, and if we can solve the remaining issues that cripple the technology, we can start finding out first-hand whether these things can deliver the second industrial revolution they promise, which would be a helluva lot better than the millions of lives and trillions of dollars wasted on the still-unsolved problems of conventional cars.

Phew. Okay, let’s propose some shit.

Pedestrians (which, for our purposes here, are any living organism whose behavior cannot be predicted) pose serious problems for AVs because:

  1. They exhibit inconsistent behavior, sometimes in defiance of logic or social contract
  2. They are deemed to be of immense value, which makes interacting with them a very dangerous endeavor

In short: they do what they want, and vehicles just have to deal with it.

Enter: LRAD. Long Range Acoustic Devices. Sound Cannons. You may remember LRAD from such knee-slapping shenanigans as the Pittsburgh G20 Summit protests. Here, have a look and a listen:

To clarify, this was some bad press LRAD received due to its debatable use by the Pittsburgh police — no peaceful protester is supposed to be that close when you sound the siren. As a weapon, it was originally designed to stop the kind of small-boat, “pirate-style” attack that claimed the lives of 17 sailors aboard the USS Cole fifteen years ago. That is to say, its targeted beam of sound is designed to become increasingly powerful as you come closer to it, communicating to your body an instinctive motivation not to proceed into a particular space.

It’s that design which makes for a questionable tool in civilian protests (where the space authorities want cleared is already occupied) but an absolute masterpiece for managing roadways and intersections. In fact, the application is so relevant that I’d contend the LRAD Corporation missed its calling.

Lo and behold, the opportunity has come knocking yet again: with all my heart (and my newly-minted 100 shares of the company), I propose that autonomous cars should be outfitted with mini LRADs to gracefully and peacefully “sweep” roadways of wayward pedestrians.

The How

LRADs use an array of transducers to project sound in a tightly focused beam, which is where the “sound cannon” moniker comes from. The signal is directional enough that the decibel level is dramatically higher in front of the device than to the side or behind it. You could say that conventional speakers leak sound, whereas LRADs point sound. You’d be wrong, but it’s close enough.
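To make the “point sound” idea concrete, here’s a minimal sketch of the acoustics involved. The reference level, beam width, and off-axis penalty are made-up illustrative numbers, not LRAD Corporation specs — the point is just that loudness falls off with distance and drops sharply once you’re out of the beam:

```python
import math

# Illustrative sketch only: all three constants are hypothetical.
REF_DB = 150.0           # on-axis level at 1 m (dB SPL), made up
BEAM_HALF_ANGLE = 15.0   # half-width of the beam, degrees, made up
OFF_AXIS_PENALTY = 20.0  # extra attenuation outside the beam, dB, made up

def perceived_level(distance_m: float, off_axis_deg: float) -> float:
    """Rough dB SPL heard at a given distance and angle from the beam axis."""
    # Free-field spreading: level drops ~6 dB per doubling of distance.
    spreading_loss = 20.0 * math.log10(max(distance_m, 1.0))
    # Crude directivity: full level inside the beam, heavy attenuation outside.
    directivity_loss = 0.0 if off_axis_deg <= BEAM_HALF_ANGLE else OFF_AXIS_PENALTY
    return REF_DB - spreading_loss - directivity_loss

print(perceived_level(10, 0))   # on-axis at 10 m: 130.0 dB
print(perceived_level(10, 90))  # same distance, 90° off-axis: 110.0 dB
```

Stand beside the device and it’s merely loud; stand in front of it and it’s unbearable. That asymmetry is the entire trick.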

Plop some pint-size versions of these LRADs onto a self-driving car, put their operation under the control of the car’s AI software, and you have a solution that can sense, notify, and peacefully ward off any foreign party (person or animal) that is on a trajectory to breach its operating space. Remember also that autonomous cars are going to be increasingly electric or otherwise alternatively powered, which means they really do need some sort of supporting audio to declare their presence in lieu of the familiar rumble of an internal combustion engine.

Consider a typical scenario:

  1. You start crossing a road that you legally should not be crossing, such as a highway, or a street with a green traffic signal.
  2. As a self-driving car traveling down this road approaches, it measures the trajectory relative to you (which it already does) and activates the LRAD in the proper direction and decibel level to inform you to stay clear of its right-of-way space.
  3. The longer you ignore the warning, and the further you venture into the path of the vehicle, the louder the sound gets. The “collision course” vicinity several dozen meters in front of the vehicle (the area where it can’t avoid hitting you without significant evasive action) would carry a signal loud enough to rupture your eardrums. That’s precisely why it never comes to that: your instincts would have compelled you to move away from the sound long before, so a confrontation with the vehicle never materializes and the danger of a collision drops to zero.

At no point in this scenario did the AV have to slow down, change its trajectory, or try to guess your next move. This could be a vital breakthrough, as it marginalizes the need to solve any complex ethical dilemmas about how to confront a pedestrian who accidentally or intentionally impedes the car’s right-of-way. And, because it’s managed by software rather than, say, road-raging humans with itchy horn fingers, you will only hear it as your behavior warrants. People crossing roads legally, or crossing when no oncoming traffic exists, will hear nothing.

The Why

So, why do this at all? Why can’t AVs just yield to pedestrians?

Well, that goes back to the ethical dilemma and my revision of the trolley problem. Human drivers consistently threaten or ignore pedestrians, to the tune of thousands of deaths annually. Part of it is because we’re assholes, but the more important part of it is in line with our social contract: if pedestrians could stop traffic 100% of the time without risk of injury or penalty, they would do exactly that, rendering our roads and vehicles useless.

That’s why most of us agree to stop crossing the street when traffic approaches, or when the Don’t Walk sign lights up. But sometimes we pedestrians push the envelope, and if a car’s software were guided to “prioritize the safety of humans” — as most newcomers who wade into the waters of autonomous vehicles opinion-first claim it should be — then autonomous vehicles would do what they do today: stop and wait for the pedestrian to move. On a mass scale, that would create permanent gridlock and ruin road-going traffic. The cars simply cannot operate on that logic… and we should know, as a society that proves every day we don’t operate on that logic either.

Naturally, the problem is people… but “solving” seven billion people’s behavior is no easier than programming our ethics into AVs, and neither is a practical solution we can implement today. So, what is the problem we can solve today? The crosswalk environment.

A Don’t Walk sign is merely advice, nothing more. It can’t deter your behavior, and even if it could, it wouldn’t know whether it should. The consequences of disobeying the sign are unknown, making it a poor disciplinary model. This is sub-optimal.

A car barreling down the road towards you is an exercise in spatial relationships, which many of us suck at on paper, let alone when it’s three seconds away from turning us into roadkill. But maybe the driver will yield? Maybe if I keep walking he’ll go around me? Is he slowing down already? Should I walk faster? Am I going to die? This is seriously sub-optimal.

Put ’em together, and what you have is a “cone of unknowns”, the perception of which can vary from person to person and even from situation to situation for the same person:

A conventional disciplinary model: your behavior is only judged if and after the consequence is incurred, such as getting arrested for jaywalking or being hit by the car. If neither happens, it’s hard to know whether (or to what degree) you’ve violated the social contract.

LRAD-equipped autonomous vehicles solve these vital unknowns for us, so we don’t have to become masters of inductive reasoning or play telepathic games of chicken with one another. The instruction, consequences, and behavior are the same every single time. The louder the sound gets, the faster your body is going to haul itself out of the way.

An incremental disciplinary model, made viable by big data: as your behavior deviates from the social contract, the reaction to your deviation increases. The goal is discipline through education, not penalization.

The secret to any good system is consistency, and that’s what we’re building here as part of the greater goal of autonomous cars: removing the unknowns from the environment, so that people and vehicles can coexist at the pinnacle of mutual benefit, and the discipline of our social contracts can be strengthened through education rather than penalization.

Holler if ya hear me.
