What Deep Blue Tells Us About AI in 2017
Steven Levy

Kasparov and Strategy in an AI World

I follow Garry Kasparov on Twitter and have long been fascinated by the assumptions of chess — a rule-based system, not unlike our coding universe — and how brilliance within a rule-based system failed to translate into direct strategic effectiveness in a more open system: politics.

To be honest, the realization shook me to my core.

The now-famous story Kasparov has told about Putin and the chessboard was also recently cited by Russian expat journalist Masha Gessen in giving advice to journalists and fellow politicians in the age of Trump:

“Okay, well, she opened E2 to E4 and he knocked all the figures off the chess board. He knocked the bishop off the chess board and he knocked the knight off the chess board.”

As a kid, I liked chess and studied it. Later, as a professor, I operated under the principle that one could become smarter by training the mind and wits, and that practice (mental gymnastics, piano finger exercises, and the like) would give me and my students an edge.

And, like Kasparov, I have experienced great frustration over the last 20 years. I've discovered that in a time when power is held by the righteously average or the downright ignorant (as in the Northern European "Dark Ages," when people of learning were reduced to servant roles, lowly scribes for the often-illiterate holders of power), too much smarts, too much wit, even too much strategy, is actually un-strategic and counter-productive.

The spoils do not actually go to those who offer the greatest merit; they go to the schmoozers, the “game-players,” the scammers, the back-stabbers, the coalition-builders (in the darkly political, smoke-filled room sense of the word, the “room where it happened” sense).

So what are the motivations for striving, for developing wits, smarts, and strategy? Or for learning ANY rule-based system that can be gamed? The idea of a game-player implies some explicit or implicit rule set. In basketball, you can't run with the ball, and you can't step outside the boundaries of the court. Also, you shouldn't foul, except strategically.

Kasparov’s Putin, Gessen’s Trump, would walk onto the basketball court, take the ball, threaten the center, and fire the coach.

And AI? Neal Stephenson (The Diamond Age) has described AI as the logical extension of what is at its core a rule-based system, no matter how complex the extensions become. The Diamond Age itself is a light little book (compared to Stephenson’s others), primarily a plot in service of a wonderful extended metaphor about AI.

The test, the big question, is whether AI can master the ineffable, the Buttle/Tuttle (Brazil) moment of non-rule-based unpredictability, AND use the wild cards in a strategic (which is ultimately rule-based) way. It’s why the notion of “being human,” or even “more human than human” (Blade Runner), or incorporating creativity into AI is the holy grail for the ultimate Turing Test.

Humans, meanwhile, can barely manage strategy in the face of anarchic authoritarianism and fascist systems of strongman control. You might call it the “Bully Game.”

Will someone teach Deep Blue how to game the mafia? The Yakuza? The Russian mob? These certainly are rule-based systems, to some extent, even if the rules involve knocking the bishop off the chessboard.
