Artificial Intelligences Will Soon Shape Themselves, and Us
A.I.s will evolve to use techniques that no one — not even they — understand
A future where we’re all replaced by artificial intelligence may be further off than experts currently predict, but the readiness with which we accept the notion of our own obsolescence says a lot about how much we value ourselves. The long-term danger is not that we will lose our jobs to robots. We can contend with joblessness if it happens. The real threat is that we’ll lose our humanity to the value system we embed in our robots, and that they in turn impose on us.
Computer scientists once dreamed of enhancing the human mind through technology, a field of research known as intelligence augmentation. But this pursuit has been largely surrendered to the goal of creating artificial intelligence — machines that can think for themselves. All we’re really training them to do is manipulate our behavior and engineer our compliance. Figure has again become ground.
We shape our technologies at the moment of conception, but from that point forward they shape us. We humans designed the telephone, but from then on the telephone influenced how we communicated, conducted business, and conceived of the world. We also invented the automobile, but then rebuilt our cities around automotive travel and our geopolitics around fossil fuels. While this axiom may be true for technologies from the pencil to the birth control pill, artificial intelligences add another twist: After we launch them, they not only shape us but they also begin to shape themselves. We give them an initial goal, then give them all the data they need to figure out how to accomplish it. From that point forward, we humans no longer fully understand how an A.I. may be processing information or modifying its tactics. The A.I. isn’t conscious enough to tell us. It’s just trying everything, and hanging on to what works.
Researchers have found, for example, that the algorithms running social media platforms tend to show people pictures of their ex-lovers having fun. No, users don’t want to see such images. But, through trial and error, the algorithms have discovered that showing us pictures of our exes having fun increases our engagement. We are drawn to click on those pictures and see what our exes are up to, and we’re more likely to do it if we’re jealous that they’ve found a new partner. The algorithms don’t know why this works, and they don’t care. They’re only trying to maximize whichever metric we’ve instructed them to pursue. That’s why the original commands we give them are so important. Whatever values we embed — efficiency, growth, security, compliance — will be the values A.I.s achieve, by whatever means happen to work. A.I.s will be using techniques that no one — not even they — understand. And they will be honing them to generate better results, and then using those results to iterate further.
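The trial-and-error loop described above can be sketched as a simple multi-armed bandit. This is a minimal illustration, not any platform's actual code: the post types, click rates, and parameter values below are invented for the example. The algorithm tries everything, keeps whatever raises its one metric, and never forms any model of why a given choice works.

```python
import random

ARMS = ["news", "friends", "ads", "ex_partner_photos"]  # hypothetical post types

# Hypothetical engagement probabilities -- unknown to the algorithm itself.
TRUE_CLICK_RATE = {"news": 0.10, "friends": 0.20,
                   "ads": 0.05, "ex_partner_photos": 0.45}

def run_bandit(steps=20000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: explore at random, otherwise exploit what works."""
    rng = random.Random(seed)
    counts = {a: 0 for a in ARMS}
    values = {a: 0.0 for a in ARMS}  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:            # try everything...
            arm = rng.choice(ARMS)
        else:                                 # ...and hang on to what works
            arm = max(ARMS, key=lambda a: values[a])
        reward = 1.0 if rng.random() < TRUE_CLICK_RATE[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return values, counts

values, counts = run_bandit()
best = max(ARMS, key=lambda a: values[a])
print(best)  # the arm the metric rewards most, whatever it happens to be
```

The loop converges on showing ex-partner photos not because it "knows" anything about jealousy, but because that arm happens to move the one number it was told to maximize. Swap in a different metric and it will just as blindly optimize that instead.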
We already employ A.I. systems to evaluate teacher performance, mortgage applications, and criminal records, and they make decisions just as racist and prejudiced as those of the humans on whose judgments they were trained. But the criteria and processes they use are deemed too commercially sensitive to be revealed, so we cannot open the black box and analyze how to correct the bias. Those judged unfavorably by an algorithm have no means to appeal the decision or learn the reasoning behind their rejection. Many companies couldn’t ascertain their own A.I.’s criteria anyway.
As A.I.s pursue their programmed goals, they will learn to leverage human values as exploits. As they have already discovered, the more they can trigger our social instincts and tug on our heartstrings, the more likely we are to engage with them as if they were human. Would you disobey an A.I. that feels like your parent, or disconnect one that seems like your child?
Eerily echoing the rationale behind corporate personhood, some computer scientists are already arguing that A.I.s should be granted the rights of living beings rather than being treated as mere instruments or slaves. Our science fiction movies depict races of robots taking revenge on their human overlords — as if this problem is somehow more relevant than the unacknowledged legacy of slavery still driving racism in America, or the 21st-century slavery on which today’s technological infrastructure depends.
We are moving into a world where we care less about how other people regard us than how A.I.s do.