Killer Robots and the Moral Dilemma of Automation (1 of 2)

Jacob Ward
Aug 24, 2017 · 9 min read
SeaRAM defense system, which intercepts incoming missiles without any human involvement. (Photo: Raytheon)

I once sat inside a flight simulator, slowly blacking out.

“Flight simulator” doesn’t do the machine justice, but those are exactly the right words. There are the software simulators, which show you the ride. And there are the physical simulators, which bump you around on a small crane. This wasn’t like that. This was the whole experience, and I was about to wash out.

I was in a windowless white capsule roughly large enough to represent the space inside a fighter jet. The capsule was attached on its right side to an enormous joint that could rotate it in any direction. And the whole thing sat suspended at one end of a roughly 100-foot-long centrifuge that filled a hangar-sized room.

The contraption can produce pretty much any sensation an airplane can — certainly as much as you or I would ever be conscious enough to experience. The trick is that while I was actually spinning in a circle, the image on the screen tricked my brain into thinking I was traveling a linear path through open sky. I had already made it through what the manufacturers would consider a modest test of acceleration. As the screen image told me I was rocketing upward toward an imaginary sun, the centrifuge spun faster and faster, and eventually I experienced six times the force of gravity, enough to crush my face and torso into the seat. It was scary and deeply uncomfortable. My Adam’s apple, which normally protrudes from the front of my throat, pressed against the back of my throat as the machine whined up to higher and higher speed. But it turns out that this sort of G-force — along the front-to-back axis of the body, the kind of force that produces gruesome photographs of normally handsome pilots contorted into goblins — is the easiest to handle. It’s when the simulator gives the experience of downward G’s, which compress the body toward the feet, that things go really wrong.

NASTAR

As the capsule did roughly half a barrel roll, so that my feet were now on the very outside of the centrifuge’s rotation, suggesting the sensation of hard banking, I tried to do what they’d told me: squeeze my butt, press down with my feet on the pedals, all in an effort to keep the blood up above my waist. But it was no use. At 3.5 G’s, my brain was losing circulation. I began to see colors, then a tunnel of black slowly formed at the edge of my vision, and pretty soon I was blind. “I can’t see,” I gasped, and the operator slowed the machine just before I lost consciousness.

It took me roughly an hour in a folding chair to recover. Beyond the passing out and all that, the conflict between what my inner ear was experiencing (rotation!) and what my eyes were being shown inside the capsule (level flight!) was the classic miscommunication that creates motion sickness.

My post-flight video for Popular Science, where I was editor-in-chief at the time

But eventually I got it together, and sat down with a flight surgeon to find out why I’d washed out. He explained that the blacking out is an interruption of the hydrostatic loop — the liquid connection between your heart and brain that keeps you conscious. When the downward pull of those G-forces breaks that connection by pulling your blood into your feet, you black out and that’s it.

He also told me that his company sells this simulator to allied air forces all over the world, and that every so often those air forces will link up their simulators in one enormous imaginary airspace, bring in their best pilots, and dogfight. It’s like a high-stakes video game tournament, with barfing and unconsciousness a part of the bargain.

“Well?” I asked. “Who wins?”

“The Bahrainis,” he said.

“Why? Why are they so good?”

“Well, it’s not really that they’re good,” he said. “It’s that they’re usually the smallest.” He looked embarrassed. “Their hydrostatic loop is shortest, so it’s hardest to break. They can turn tighter than the other teams.”

It turns out that the system that determines the outcome of battle isn’t the weapons, or the flight surfaces, or the armored exterior. It’s the human tendency to get all fainty when we move in the wrong direction. The difference between winning and losing at the controls of a $90 million combat aircraft amounts to who among the pilots has a heart and brain that sit closest together.

This is why Ray Mabus, the former Secretary of the Navy, famously said of the F-35, the newest strike fighter in the American arsenal, that it “should be, and almost certainly will be, the last manned strike fighter aircraft the Department of the Navy will ever buy or fly.” The pilot is the plane’s greatest weakness.

This is why militaries all over the world are automating everything from armed sentry duty to antiaircraft guns to cyberwarfare. Humans aren’t up to the job.

And this is why Human Rights Watch and now a group of robotics and AI leaders led by Elon Musk have urged the United Nations to create a sort of Geneva convention that outlaws weaponized robots. Human soldiers, with all their human frailties and human restraint, are on the verge of being replaced by machines that have neither.

We are using technology to make decisions that we humans can’t agree on even when we make them ourselves. And in the process, we’re changing the way we make those decisions. Military technology may be reprogramming our attitude towards war.

Consider the way that technology has already shaped our habits even when we’re not trying to kill one another. When I land in a new city, and must drive from the airport into town, Google Maps offers me a handful of options — here’s one highway, here’s another, here’s the small-road route. And I dutifully choose from among them. Then, as I’m driving along, I start to think about the fact that there are undoubtedly dozens of other ways to get there. But the navigation systems we use funnel our endless options into a multiple-choice question, and we wind up thinking there are only three ways into town. It feels as if we have more options, but in fact we wind up with fewer of them.

So it is in military systems. Boeing, along with most of its competitors, offers “mission planning” products that do to military decision making what Google has done to your commute. (“Mission planning” has been a product at Boeing since 1975.) These systems take all the complexity and urgency of military planning and do their best to reduce it all to a handful of choices. Whether it’s the need to airlift troops in or out of the field, plan a missile strike on an installation, or deploy soldiers to secure a town, the software funnels all the open-ended variables into a convenient menu. You might choose the scenic route from the list that Google offers you. A naval commander might choose the over-the-mountain flight path from the list that Boeing offers her.

In 2013, at Fort Benning, Georgia, the Army invited a handful of military robotics contractors to trot out their best attempts at a self-directed mobile weapons platform — basically, a robot with a gun. It was one of the first times the United States offered any public display of its intention to automate acts of war. Various small tanks rolled out onto an outdoor firing range and, with operators standing at a safe distance to give each robot the final go-ahead to fire, shot up a hillside full of targets.

That final go-ahead to fire is the number-one reassurance that military spokespeople offer when asked about the ethics of weaponizing a robot. The technical term is “human in the loop” — a human being will always, it is argued, be involved in deciding to use deadly force. But even if that’s true, the role of the human is still very different when the robot is holding the gun. Rather than pulling the trigger, or even telling a robot to pull the trigger, the human winds up with little more than veto power. The robot makes all the moral and logistical preparations, and the human’s choices are reduced to “go ahead” or “hold on.”

And the thing is, humans are already out of the loop, because just as the organs of a human can’t be trusted to keep her or him from fainting at the controls of a fighter jet, the senses and reaction times of a human aren’t fast enough to handle the kinds of situations that combat now demands. As Peter W. Singer, author of Wired for War and a strategist at the New America Foundation, told me in a recent episode of my podcast Complicated, multiple systems already exist that take humans out of the loop entirely.

The SeaRAM system, built by Raytheon, is a last-ditch shipboard defense system for knocking incoming missiles out of the sky. A human would be essentially useless in this role, because the seconds between detection and impact simply aren’t enough time to consult a human operator, much less ask her or him to track and fire on the incoming weapons. So the ship defends itself with an automated battery of its own missiles that detect, track, and fire on incoming projectiles without any human involved. The company’s tagline for the system is “evolved ship defense.”

Promotional video for Raytheon’s SeaRAM defense system

And in Israel, the Iron Dome system has been in operation for several years, intercepting incoming homemade rockets fired from Palestinian territories into civilian areas of Israel. This is again an instance where humans are useless. The rockets come in far too fast for a human to respond, and so this automated system takes care of it for us.

An Iron Dome launch in eastern Tel Aviv, 2014

Finally, cyberwarfare reduces that response time still further, from seconds to milliseconds, requiring a total delegation of authority to automated systems that detect and respond to attacks in cyberspace. Humans can’t even begin to be included in that loop until the attack and the response are long finished.

Right around the time I became acquainted with my predisposition toward puking and passing out in that flight simulator, I worked on a television show about top secret airplanes, and wound up standing in front of a solution to the fainting-pilots problem: the X-47B. Built by Northrop Grumman at a military facility known as Plant 42 in Palmdale, California, just over the mountains from Los Angeles, the plane was the first of a generation of pilotless combat aircraft being tested by various branches of the military. It’s a tough thing to be around. All my 20th-century instincts about aircraft lead me to expect someone to be sitting inside, so it’s disturbing to see a huge black intake port where the front of the cockpit should be, and big blank stretches of aluminum where you’d expect the pilot to wave at you. The thing literally has no face, no feelings. And yet it has successfully taken off from and landed on an aircraft carrier — one of the hardest things a combat pilot is ever asked to do — without a human involved.

X-47B flight test (Photo: Northrop Grumman)

Many thoughts went through my mind as I stood in the burning sun, looking at the future of war. For one thing, it occurred to me that no human inside meant no pilots would have to suffer bullets or burns or bailouts. Maybe that’s a good thing. But I also knew that this thing, expensive as it was to build, was a disposable machine that could turn harder than any human pilot ever could. It’s a terribly convenient system that will make war logistically and morally and politically easier than it has ever been. We have engineered human frailty out of this sort of automated system. But how are we going to engineer human morality into it?

In the next installment, a look at efforts to build morality into automated cars, household robots, and robotic weapons.

Jacob Ward is a technology correspondent for NBC News, a Berggruen Fellow at Stanford’s CASBS program, and former editor-in-chief of Popular Science. http://www.jacobward.com

About this Collection

Guidance Systems

We just barely understand the human mind and human behavior, and yet we’re building technologies and businesses that shape our lives in dramatic and fundamental ways. Military robots that have already taken the ethics of war out of human hands. Addiction specialists who are building the neuroscience of habit into apps. Children’s television producers who are trying to use their shows to build better values into their young audience. These are guidance systems, and this series reveals the powerful, occasionally beneficial, and often shortsighted ways in which they’re making our choices for us. Produced in partnership with shift.newco.co
