Stages of AI in Business and Beyond: The Final Lesson from Chess

BCG GAMMA editor
GAMMA — Part of BCG X
9 min read · Mar 6, 2018

By Philipp Gerbert and Daniel Schlecht

Many pioneers of computer science and artificial intelligence (AI) played chess and extensively researched the game. For about 40 years, chess was the reference system for progress in AI — similar to E. coli or Drosophila in genetics. Many people think the story ended when Deep Blue beat chess world champion Garry Kasparov in the famous rematch in 1997.

Instead, the “defeat” sparked a fruitful 20-year era of shifting man-machine collaboration that culminated in December 2017 when AlphaZero demonstrated what we call “hyperlearning,” playing chess in a remarkably human-like style. The history of man-machine interaction in chess over this era contains many lessons for AI in business and beyond.

“Computer” chess began with a hoax, the “Mechanical Turk,” a chess-playing “machine” constructed in 1770 in Vienna. In fact, a hidden human was manipulating the machine. Nonetheless, the Mechanical Turk made various tours of Europe and even played Benjamin Franklin and Napoleon Bonaparte before it was destroyed by fire in 1854. Despite its fraudulent history, the stunt did inspire the longstanding quest to build a machine capable of beating man in chess.

When computers finally became a reality, pioneers Alan Turing and Claude Shannon were among the earliest researchers to write chess programs. They were followed by Allen Newell and Herb Simon, and later Ken Thompson and John McCarthy, the organizer of the seminal 1956 “Summer Research Project,” a brainstorming session at Dartmouth College that is often considered the birth of AI.

Chess quickly evolved into a reference system for exploring and testing AI systems. As the “queen of games,” chess was seen as both a complex reasoning problem and the ultimate stretch of intelligence. It had the virtue of relative simplicity compared to other potential references — 32 pieces, 64 squares, and an outcome of win, draw or loss.

The human element of chess was also appealing. The relative strengths of opponents were calibrated through their ratings, and the game had been universally known and studied in the Western world for centuries.
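The rating system referenced here is the Elo model, in which the rating gap between two players maps to an expected score, and ratings are adjusted after each game toward the actual result. A minimal sketch in Python — the function names and the K-factor of 20 are illustrative choices, not part of any official implementation:

```python
def elo_expected(r_a, r_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=20):
    """Return updated ratings after one game.
    score_a is 1.0 for a win by A, 0.5 for a draw, 0.0 for a loss."""
    e_a = elo_expected(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# A 200-point rating gap implies roughly a 76% expected score:
print(round(elo_expected(2800, 2600), 2))  # → 0.76
```

This calibration is part of why chess made man-machine progress so easy to measure: a program's strength could be stated on the same scale as any human player's.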

So what were the stages of AI in chess, and what can they teach us about AI in business and beyond?

Alan Turing finished his chess program in 1950 but did not live to see it implemented. He incorrectly predicted machines would beat humans within ten years. The real competition was initiated in 1968, when McCarthy met the 23-year-old Scottish chess champion David Levy and bet £500, the equivalent of close to £10,000 today, that his program would beat Levy within 10 years. However, in 1978 David Levy won the match in Toronto and indeed kept winning many more — until finally losing in 1989, 21 years after the original encounter.[1]

At that time, chess started to decouple from AI research.[2] In chess, strategies that relied on memory and brute force — efficiently refining search algorithms for effective moves rather than strategic intent — consistently started to beat the best humans. (For an inside look at the most prominent first win of a computer against the world champion in the 1997 Deep Blue–Kasparov rematch, see Kasparov’s Deep Thinking.[3]) The half-century of competition between man and machine effectively ended in 2005 with Hydra’s crushing 5.5 to 0.5 win against world-class player Michael Adams.
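The brute-force approach these programs refined is essentially minimax search with alpha-beta pruning, which skips branches of the game tree that cannot change the result. A minimal sketch in Python over a hand-built tree — the leaf values are made up for illustration:

```python
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Alpha-beta search over a game tree given as nested lists;
    leaves are static evaluations (positive favors the maximizer)."""
    if isinstance(node, (int, float)):  # leaf: return its evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # remaining siblings cannot affect the result
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:  # prune
                break
        return value

# Levels alternate max / min / max; the minimax value of this tree is 6.
tree = [[[5, 6], [7, 4, 5]], [[3]], [[6], [6, 9]]]
print(alphabeta(tree))  # → 6
```

Real engines add vast opening and endgame libraries and heavily tuned evaluation functions on top of this skeleton — raw search rather than strategic intent, which is exactly what disappointed the general-intelligence researchers.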

These victories were disappointing to researchers who were more interested in advancing the goal of “general intelligence” than brute-force victories. While the press sometimes compared Kasparov to the folk hero John Henry, the steel-driving man who symbolized the transition from muscular strength to steam engines, during this first chapter of the story computerized chess was simply relying on a different type of strength, and not a very elegant one.

The State of AI in Business

In business, 20 years after machines started beating man in chess, many AI applications are surpassing human capabilities. When computers interact with customers through recommendation and personalization engines, AI generally performs at a speed and scale that exceed human capability. Similarly in radiology, machine diagnoses are approaching human abilities and — with enough training data — becoming increasingly accurate. Many of these applications leverage the strong non-linear optimization ability of AI or recent advances in machine vision and language. Among the most sophisticated applications are self-driving vehicles, where the quest to exceed human abilities is on.

In all these areas, machines solve tasks differently from humans — often still relying on computational power more than inference or insight. Today, AI machines normally complement rather than replace humans, leading to endless conjecture on how man-machine collaboration might evolve.

Chess as a Leading Indicator for (Narrow) Business Applications

Once again, chess has been leading the way and providing clues to what may happen in narrow business applications of AI. The use of computers in chess did not end in 1997.

Promoted by Kasparov, “advanced chess,” in which world-class chess players could seek the assistance of computers during the game, emerged. In today’s business context, this would be called “augmentation” of human work by AI. Later “freestyle chess” — in which man and machine could collaborate in any conceivable way — took hold. The heyday of freestyle chess occurred during the famous 2005–2008 PAL/CSS tournaments on the Playchess server.

The big surprise in the first tournament of the series was that two amateurs from New Hampshire, Steven Crampton and Zackary Stephen, won using three standard PCs, beating computer-assisted grandmasters and human-aided top chess computers. Crampton and Stephen essentially merged human and machine intelligence. Relying on an intensely trained process, either of the two humans or one of the computers took the lead depending on the position on the board.

Both “augmented intelligence,” with AI assisting people, and Crampton and Stephen’s “cyborg” approach have often been cited as blueprints for powerful man-machine collaboration in business.[4][5] And indeed, for roughly ten years such combinations arguably played the highest quality chess. However, one of the big lessons is that, in the zero-sum game of chess, machines take over more and more tasks. Eventually, humans, while still having some superior insights, can no longer cope with the speed of the machine and contribute meaningfully to a real game, explaining why advanced chess competitions have gone out of fashion.

At the same time, all serious chess players rely on augmentation by machines in preparing for traditional chess tournaments. In the professional world, one can foresee similar developments. In radiology, for example, machines will likely move from offering assistance, to second opinions, to first opinions. In the long run it will conceivably only make sense to keep a “human in the loop” as a common-sense control.

Some business areas, such as the auctioning of online advertisements or algorithmic securities trading, have already reached the automation stage. In many of these instances, machine logic is still clearly distinguishable from human logic, relying on superfast but primitive statistical ratings. Many observers believe that this is the steady state in man-machine interactions. Machines will provide scale and speed, while humans will offer insights and training.

This conventional wisdom was upset on the chessboard in December 2017.

In its quest for general artificial intelligence, DeepMind, a company owned by Alphabet, has revived interest in games as a learning ground for AI. AI itself has also significantly evolved with the increasing mastery of efficient deep learning. DeepMind’s core interest is in “reinforcement learning,” in which computers do not receive explicit external feedback until the end of the game. Reinforcement learning had early successes in Atari games, but the most widely publicized win was DeepMind’s conquest of Go, a popular game in Asia, which is played on 361 intersections and is more complex than chess, despite its simple rules.

DeepMind’s victory in Go included an interesting twist that is relevant for future stages of AI. AlphaGo was originally trained on the best human Go games until it beat the European champion in late 2015. By then the machine had run out of top-level games for further training. The ingenious authors of the program solved this roadblock by letting AlphaGo play millions of games against itself. In March 2016, AlphaGo defeated Lee Sedol of South Korea, a top player, and in 2017 the Chinese champion Ke Jie.

These victories contained an important lesson for many AI applications. If programmers can create a virtual training environment, in which the machine no longer requires human input, AI can enter a hyperlearning stage.
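The self-play loop behind this hyperlearning stage can be illustrated on a much smaller game than Go. The toy sketch below is an assumption-laden stand-in for DeepMind's actual method: it uses simple Monte Carlo value updates (not deep networks or tree search), and two copies of the same learner play Nim — take 1 to 3 stones, whoever takes the last stone wins — with no human examples at all:

```python
import random

def self_play_nim(episodes=20000, stones=10, max_take=3, alpha=0.5, eps=0.2):
    """Learn Nim (last stone taken wins) purely from self-play.
    Q[(s, a)] estimates the value of taking a stones from a pile of s,
    from the perspective of the player to move."""
    Q = {}
    for _ in range(episodes):
        s = stones
        history = []  # (state, action) for both players, alternating
        while s > 0:
            actions = list(range(1, min(max_take, s) + 1))
            if random.random() < eps:  # explore
                a = random.choice(actions)
            else:  # exploit current estimates
                a = max(actions, key=lambda m: Q.get((s, m), 0.0))
            history.append((s, a))
            s -= a
        # The player who took the last stone wins: propagate the result
        # backwards through the game, flipping sign between the players.
        reward = 1.0
        for (s, a) in reversed(history):
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (reward - q)
            reward = -reward
    return Q

random.seed(0)
Q = self_play_nim()
# Optimal play leaves the opponent a multiple of 4: from 5 stones, take 1.
print(max(range(1, 4), key=lambda m: Q.get((5, m), 0.0)))
```

Because the rules of the game serve as a perfect simulator, the learner needs no human input whatsoever — the essence of the virtual training environment described above, just at toy scale.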

However impressive its victory, AlphaGo was still built on extensive training by humans and in-depth optimization to the specifics of the game.

In December 2017 DeepMind took a step toward more independent intelligence when it introduced AlphaZero, a program that was taught only the rules of chess, Go, and shogi, and learned the games exclusively by playing against itself. AlphaZero proceeded to win against the best machines: Stockfish in chess, AlphaGo in Go, and Elmo in shogi.

We already knew that machines played well, so what is interesting about the “better mousetrap” demonstrated by AlphaZero? Let us again focus on chess, and what might be the final lesson it teaches us about AI. Stockfish competed comparatively well. Over a match of 100 games, AlphaZero won 25 with the white pieces and 3 with the black pieces, with the remaining 72 games ending in a draw. AlphaZero also profited from potentially unfair advantages, such as a superior Google cloud environment, and special rules: the machines, for example, could not rely on libraries of opening moves, and each move had to be completed within one minute. These limitations prevented Stockfish from using many of its specific strengths.

Nonetheless, two aspects of AlphaZero’s performance shocked the chess world — and have potentially broad implications for business:

  • AlphaZero learned to play chess in the incredibly compressed time of just four hours by playing 300,000 games against itself. No training by humans or other machines was involved.
  • Its style was strikingly human. The typical machine-logic traces of the brute-force approach were missing. Indeed, AlphaZero evaluated about 1,000 times fewer moves per second than Stockfish (about 80,000 versus about 70 million), and its play was often intensely strategic.

While we have yet to see further matches, there is little doubt that AlphaZero has ended the days of cyborg chess: Its human-like style — combined with its speed and staggering lack of flaws — renders a collaboration with people pointless.

The Danish chess grandmaster Peter Heine Nielsen commented in a BBC interview: “I always wondered how it would be if a superior species landed on earth and showed us how they played chess. Now I know.”

Clearly, chess is played in a relatively simple and well-defined environment that hardly resembles the complex and often muddy real world. But there are a few undeniable insights:

  • In well-defined environments, the actions of machines — quite apart from their speed — can become very human-like. A machine can act creatively and strategically, and can display strong judgment of connections and potential advantages that only play out over a comparably long time frame.
  • In virtual environments of self-training machines, computers can enter a hyperlearning phase with learning cycles occurring at machine speed — about one million times faster than human neural speed.

All this might seem irrelevant for the business world, where companies are still struggling to successfully apply AI to seemingly simple real-world processes or offerings. But it might pay to watch out for disruptive approaches: Virtual environments for hyperlearning can be envisioned in the real world. Flight simulators are already quite good at training human pilots and might be adapted to train machines. Also, the recently popular generative adversarial networks (GANs) aim to create virtual learning environments where algorithms can train one another by solving simple problems — such as creating and discriminating fake images of faces.[8] It is imaginable that the engineering of simple machines or parts can be hyper-learned in such virtual environments, creating a decisive competitive advantage.

We should not succumb to simplistic extrapolations or fears about the real world. Humans remain unique in being able to “go meta,” leaving the constraints of the environment and reframing the task. But inside well-defined domains, including many functional business tasks, machines will play increasingly large and relevant roles. We need to challenge simplistic claims — often based on current performance — about the future role of AI and the ‘obvious’ work split between man and machine. It might be hard for anyone who does not play chess to appreciate what just happened with AlphaZero. And it might be the very last lesson chess can teach us about AI. But it is certainly an important one.

Philipp Gerbert is a Senior Partner at BCG, focusing on the impact of AI in business. He is a physicist, holding a PhD from MIT. In his youth, he was the German Youth Champion in chess and played Kasparov in the Youth World Championship.

Daniel Schlecht is a partner at BCG, leading Digital and AI in Energy. He is an International Master in chess and played against the current chess world champion Magnus Carlsen in 2004. He is an electrical engineer, holding a PhD from the RWTH Aachen.

BCG GAMMA is a global entity of BCG dedicated to analytics, data science, and artificial intelligence.