Former Secretary of State Henry Kissinger, in an essay in June’s issue of The Atlantic, joined a growing chorus of people warning us against a future shared between humans and advanced (i.e. so-called “general” or “superintelligent”) artificial intelligence (AI).
This group includes the late physicist Stephen Hawking, who said AI “could spell the end of the human race,” Tesla and SpaceX founder Elon Musk, who has compared the creation of AI to “summoning the demon,” and Microsoft founder Bill Gates, who recently wrote of AI that he “[doesn’t] understand why some people are not concerned.”
Our most popular science fiction — “The Terminator”, “2001”, “Blade Runner”, “The Matrix”, “Black Mirror”, “Battlestar Galactica”, “Westworld” — of course plays off these fears, making it easy to imagine a future world where humans will be subjugated to “the machines.”
AI is both awe-inspiring and intimidatingly capable. Even now, our infant AI — it's only a few decades old, after all — can be inspirational: it can save lives, take care of us as we age, make us laugh, and improve our gaming skills. But it can also be weaponized, take our jobs, or mistakenly run us over in a car.
The fact that it can do all of these things now, in its infancy, makes AI the defining issue of our time. We might as well be talking about the future of everything — look no further than the changing nature of the Fortune 500, or where China is investing in its quest to be the sole superpower.
Curious about how the AI conversation was landing with a broader audience, people outside the trenches of this discussion, I spoke with my mother. The conversation was illuminating, and a little alarming. For her, and I presume for others, the nuance of the issue is entirely lost in the sensational, fear-mongering headlines and dystopian entertainment.
Kissinger and the rest of the chorus are correct that research and discussion around AI safety and risk should be a top priority. Fortunately, there are many smart, thoughtful, well-funded efforts to do just that (see one of many compelling examples from Oxford University's Future of Humanity Institute). The increased interest, funding, and effort in this area are encouraging and necessary.
However, there is a void of voices explaining to non-technical folks like my mom the incredible potential of AI, and how it may be uniquely important in helping humanity thrive in a vastly more complex and fast-paced future. Our thriving depends upon our co-evolution with AI (see my latest post) and upon making the journey symbiotic rather than competitive.
While I have long disagreed that fear-mongering would yield an optimal path for AI development, I came away from my conversation with my mom convinced that it's not just harmless noise. This very chatter could well prevent us from understanding, exploring, and adopting AI in the ways we most need it.
A primary reason why fear-mongering is a mistake: invoking fear does little but trigger our fight-or-flight response. Fear collapses all of our concerns down to the present situation and prevents us from thinking long-term or rationally. It engages a cortisol-fueled, get-out-of-Dodge response that shuts down all but the essential functions. When you have a gun held to your face, all you care about is not getting shot.
When we consider the future of the human species, we can do better than relying upon the least evolved, amygdalar, reptilian parts of our brain.
Kissinger’s recent interest was piqued by Google’s AlphaGo and its victories over top human players in the complex board game of Go. Its successor, AlphaGo Zero, showed even greater brilliance when it taught itself to play knowing only the rules of the game, without any starting human assumptions. In a matter of days, its contributions eclipsed thousands of years of accumulated human genius.
In its victory, the AI played what Kissinger calls “strategically unprecedented moves,” which were alarming to him, not because he cares about board games, but because he sees them as a proxy for geopolitics. During the Cold War, for example, he argued, the U.S. and the Soviet Union were playing both real (Fischer v. Spassky) and political (Korea, Vietnam) chess against each other, with countries as squares and armies as pieces.
Kissinger is correct to consider the impact of AI that can beat humans at board games, not by being better or faster than humans, but by inventing entirely new ways to play.
He is wrong, however, to join in on the fear-mongering. The more considered response to advances in AI is exercised caution and thoughtfulness, coupled with an embrace of its unique abilities.
As Demis Hassabis, himself a chess prodigy and the head of Google’s DeepMind, has said of AlphaZero’s chess play: “It’s like chess from another dimension.”
This is incredibly exciting! AI offers an entire universe of expansive moves and strategies that are as yet undiscovered by our human intelligence!
Rather than jump on the fear wagon, might this not encourage us to check our egos at the door and join the dance party?
On one level, AlphaGo was a brilliant PR move on Google’s part — it sent so many shudders through Asia that China has more than doubled down on AI. But looked at another way, Google was tragically wrong to frame the situation as Human v. AI. We are not enemies. We can be on the same side.
What if we applied the same brilliant AI tools to our biggest problems? What else are we missing in science, politics, law, morality, medicine, and economics, and in the battle against climate change? Historically, we’ve paid a very steep price for progress — inequality, wars, crusades, and persecution. Undoing human belief systems is like trench warfare. We can be, and usually are, myopic, closed off, neophobic, stubborn, and selfish.
Might we be able to combine human and AI genius and, together, come up with what would otherwise take centuries of iterative human work? Could AI be the best antidote to the very things that most hold us back? Could it be the breakthrough we need to get to the next levels of understanding, expansive thought, and self-awareness?
Billions of years of evolution and modern-day capitalism make us think that competition is the default relationship between any two things, especially things as different as human and machine. Competition is so deeply embedded in our DNA that we fail to imagine a world that could prioritize harmony over all else.
The chorus’ goal is not the total cessation of AI development but the movement of AI toward so-called “goal and value” alignment with humans — basically, making sure that AI is constrained so that it understands and preserves key features of “our humanness” while avoiding unexpected, catastrophic outcomes.
But who is to say what those values are? We’ve already seen that AI is less capable when it learns from human strategies and assumptions (AlphaGo vs. AlphaZero). So whose goals and values do we align to? The ones within ourselves, which allow us to function and maintain a constant, individual identity? The ones between each other, which allow us to function reasonably well as social beings? The ones between us and our planet? Or between us and all living things? (Akin to the Gaia hypothesis.)
Here we have a broader set of “goal and value alignment” considerations:
(1) Individual Goals/Values: Every single human on this planet is riddled with inner inconsistencies, biases, and conflicting goals and values. Which values do we use? Which conflict, motivation, or morals do we include in AI?
(2) Social Goals/Values: Our social world is tolerable and functional, but human history makes me skittish about encoding those goals and values into AI. Also, whose social values do we use? If we had done this 200 years ago, we would have coded slavery and the denial of women’s rights into the AI. If we took a snapshot of today’s values, there would still be racism, sexism, and xenophobia. As Steven Pinker would say, we’re better than we used to be. But we are by no means perfect yet.
(3) Planetary Goals/Values: One of the greatest threats to humanity is the destruction of our planet. Surely our present goals and values with respect to the Earth are not parameters we hope AI would carry forward.
(4) Other Living Things’ Goals/Values: Will we treat AI the way we treat other domesticated creatures? Factory farms, slaughter, and a disregard for their emotional and intellectual well-being? Serious question, not a judgment: will we? Do we align AI to those values and teach it that it is perfectly acceptable to treat certain creatures a certain way?
If we’re going to ring alarm bells about AI, let’s focus our attention on the more immediate and known risk: humans. It is we who cannot be trusted to responsibly deploy AI to maximally increase human thriving. So perhaps our fears should be better placed — around how we can avoid imposing ill-informed assumptions, beliefs, and values upon AI, while creating safeguards to prevent human-initiated misuse.
The reality is that we need AI: to solve our most vexing problems, and to acquire the wisdom to navigate a fast-charging future far too complex for us or our institutions to handle alone.
I’m reminded of the classic Ingmar Bergman movie, “The Seventh Seal”, in which a man plays a game of chess against a character representing Death while the bubonic plague ravages Europe. If the man wins, he lives. If he loses, he dies.
The fear-mongering would have us imagine that our scenario is similar, with machines on one side versus humanity on the other. But this is wrong. It is humans together with AI on one side, and our species’ extinction on the other. We are playing with AI against Death.
Ask yourself: if the game were actually chess, and our entire species’ survival were on the line, who would you rather have sitting in the chair with us — the world’s top AI chess program or the top human? I believe it would be foolish, immoral, and suicidal to turn down the advice, counsel, and wisdom of an AI so clearly better than we are at complex problem solving.
At the end of our conversation, I told my mom that instead of feeling scared, I’m most interested in asking AI: “What strategically unprecedented moves can you offer today?”
On the other side of that question might be a future so fantastic that none of us can even imagine how good it will be.
In the end, my mom agreed. I’ll call that a good start.