Photo: John Trainor, via Flickr | CC BY 2.0

Who Put the Robot in Charge?

This morning I tried to read an urgent email from my wife, but my computer insisted that I finish installing some updates first. Yesterday, my printer refused to print my boarding pass until I fed it a new ink cartridge, even though I was confident that the old one still had a few good pages left in it, and I had no replacement available. (Indeed, my impudent attempt to fool the printer by reinstalling the old cartridge was met with terse disdain.) Why in this modern world are we forced to endure such indignities at the hands of devices supposedly designed to make our lives smoother and easier? Why do they weigh the risk of a minor malfunction above my need to communicate quickly with a loved one or get to the airport on time? For one simple reason: they relentlessly pursue their designers’ visions of perfection, oblivious to the larger social context and human cost. An annoyance, to be sure, but hardly a matter of life and death. Or so it seems, until you realize that just such life-critical systems are also creeping into regular use.

Consider the antilock braking system (ABS) on your car. Today, your vehicle does what you want it to, right up until you slam on the brakes too hard. Then it decides exactly how much torque to allow at each wheel in order to ensure that the car goes straight. According to Wikipedia, “ABS generally offers improved vehicle control and decreases stopping distances on dry and slippery surfaces for many drivers; however, on loose surfaces like gravel or snow-covered pavement, ABS can significantly increase braking distance.” Now all of this sounds innocent enough until you realize that you are delegating to the machine your ability to make a potentially life-saving ethical decision. You might want to put the car into a skid in the snow, at your own personal risk, in order to avoid hitting a pedestrian. But once you turn the keys over to the ABS, the car’s programmed goal of maintaining traction trumps your intentions, at the potential cost of a human life.
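To see what turning the keys over to the software actually looks like, here is a toy sketch in Python of the kind of rule doing the deciding. Everything in it is invented for illustration (the function name, the 20 percent slip threshold, the crude halving of torque); a real ABS controller is far more sophisticated and pulses the brakes many times per second.

    # Toy illustration only; all names and numbers here are invented.
    def abs_torque(requested_torque: float, vehicle_speed: float,
                   wheel_speed: float, max_slip: float = 0.2) -> float:
        """Return the brake torque actually applied to one wheel.

        Slip measures how much slower the wheel spins than the car moves;
        a fully locked wheel has slip = 1.0. When slip exceeds max_slip,
        the controller releases pressure to recover traction, no matter
        how hard the driver is pressing the pedal.
        """
        if vehicle_speed <= 0:
            return requested_torque
        slip = 1.0 - wheel_speed / vehicle_speed
        if slip > max_slip:
            return requested_torque * 0.5  # the driver's intent is overridden here
        return requested_torque

    # Example: braking hard at 30 m/s while the wheel has slowed to 18 m/s,
    # so slip = 0.4 and the controller cuts the requested 500 N·m in half.
    print(abs_torque(500.0, vehicle_speed=30.0, wheel_speed=18.0))  # 250.0

Notice what this function has no concept of: the pedestrian. It protects traction because traction is what its designers told it to value.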

Soon we’ll be living in a world of artificially intelligent programs where — by any reasonable measure — the machines will be calling the shots. These “synthetic intellects” won’t have magically escaped humanity’s control. Rather, they will be dutifully executing the priorities and goals established for them, often in unanticipated ways that violate our sense of fairness and decency.

In his mini-novel “Manna”, Marshall Brain tells the parable of a relatively straightforward application of AI to the management of a fictional fast-food chain named Burger-G. Using computer vision software and RFID tags to track workers (among other clever technologies), the AI program directs the activities of employees at a frighteningly detailed level — instructing them to clear tables, take out the trash, and rotate the inventory. Breaks are strictly enforced, infractions tallied, and penalties imposed automatically. The program proves so effective that in short order, virtually all retail and food service companies are forced to use it for competitive reasons. Soon the program is given final say over all hiring and personnel decisions — who could cry discrimination when the computer is blind to race, and can offer objective justification for all of its decisions? But as adoption spreads, the program is able to use information about individuals’ performance at one company to decide whether or not to hire them somewhere else. A single bad day, and you’re unemployed forever.

Unfortunately, this scenario is not as far-fetched as one might hope. Already, Google is monitoring your search habits to decide whether to solicit your resume, but only after you succeed at a time-limited programming test, administered automatically through your browser. Before long, we may find ourselves singing our own praises to automated hiring systems deaf to our entreaties.

As machines become more capable, we will be increasingly tempted to delegate supervisory and managerial responsibilities to them, along with a host of other activities that would seem to require human judgment, such as driving cars, deciding which patient should get a scarce medication, and choosing whether to fire a missile at a terrorist wielding a gun. But there are certain decisions that, as a matter of principle, we are unlikely to be comfortable delegating to machines.

Computers can’t make thoughtful choices on our behalf, because sympathy and compassion are not part of their constitutional makeup. Should your fancy new self-driving car be permitted to decide whether to run over the elderly couple or the child on a bicycle? Should the medication go to the patient who is the sole care provider for her aging parents, or to the world-famous ballet dancer? And is that really a terrorist with an AK-47, or a farmer holding a shovel? Let the machine decide, and you’re absolved of responsibility.

Which is exactly why this trend seems so disquieting. Delegating such grim conundrums to an electronic agent unable to feel pain robs us of what makes us most human — our ability to empathize with others, understand how they feel, and accept not only legal but moral accountability for the consequences of our choices. Dying at the hands of a machine incapable of remorse robs our life of dignity.

So the first time a computer offers you a job, pause to consider how you will feel when it turns down your request to schedule vacation time for your honeymoon, or to take time off to accompany your child to the first day of school.


Jerry Kaplan teaches about the ethics and impact of artificial intelligence at Stanford University. His new book, “Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence,” was recently released by Yale University Press.