AlphaGo: The Ultimate Go Master

For more than 2,500 years, humans have been playing and studying the incredibly complex game of Go. In a matter of months, AI learned how to do it better.

At “The Future of Go Summit” in Wuzhen, China, Google’s Go-playing machine AlphaGo topped the world’s №1 human player, Ke Jie (柯洁), winning the first two games of their three-game match on Thursday, May 25. It was a historic moment: a machine decisively surpassing the top human player in the most complex strategy board game ever devised.

Ke Jie, the 19-year-old Chinese Go master, did his utmost to stop AlphaGo, but the machine prevailed by a half-point margin in the first game and won the second by resignation.

In the press conference following the first game, Ke marvelled at AlphaGo’s strength, calling it “The God of Go.”

For many in the Go community, AlphaGo’s triumph was not surprising. Unveiled in January 2016 by DeepMind, a Google-owned AI company in London, AlphaGo was initially trained by supervised learning on records of online human games. In its latest incarnation, AlphaGo trained through reinforcement learning, playing millions of games against itself. These learned components are combined with Monte Carlo Tree Search, a simulation-based search algorithm for decision processes, to improve its selection of each move.
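The Monte Carlo Tree Search component can be illustrated on a far simpler game than Go. The sketch below is a plain UCT search with random playouts and no neural networks; the toy take-away game, the function names, and the parameters are all illustrative assumptions, not DeepMind’s code:

```python
import math
import random

# Toy take-away game (illustrative only, far simpler than Go):
# players alternately remove 1 or 2 stones from a pile, and whoever
# takes the last stone wins. Piles that are multiples of 3 are lost
# for the player to move.

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile = pile            # stones remaining after `move`
        self.parent = parent
        self.move = move            # move that produced this state
        self.children = []
        self.untried = legal_moves(pile)
        self.wins = 0.0             # wins for the player who just moved
        self.visits = 0

    def ucb_child(self, c=1.4):
        # UCB1 rule: trade off observed win rate against an
        # exploration bonus for rarely visited children.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_pile, iterations=2000, rng=random):
    root = Node(root_pile)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down via UCB1 while fully expanded.
        while not node.untried and node.children:
            node = node.ucb_child()
        # 2. Expansion: add one child for an untried move.
        if node.untried:
            m = rng.choice(node.untried)
            node.untried.remove(m)
            child = Node(node.pile - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: finish the game with uniformly random moves.
        pile, moves_made = node.pile, 0
        while pile > 0:
            pile -= rng.choice(legal_moves(pile))
            moves_made += 1
        # An even number of playout moves means the player who moved
        # into `node` also made the winning final move.
        result = 1.0 if moves_made % 2 == 0 else 0.0
        # 4. Backpropagation: flip perspective at every level.
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1.0 - result
            node = node.parent
    # The recommended move is the root child visited most often.
    return max(root.children, key=lambda ch: ch.visits).move
```

Each iteration runs selection, expansion, a random playout, and backpropagation. In AlphaGo, the random playout and the uniform move choices are replaced by its trained value and policy networks, which is what makes the search tractable on a 19×19 Go board.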

AlphaGo has been unstoppable. In a Seoul showdown that made front-page headlines across East Asia in March 2016, AlphaGo took on Korea’s Lee Sedol, one of the best Go players of the past decade, and won four of the five games. After the match, the South Korean Go Association awarded AlphaGo a Nine-Dan professional rank, the highest possible. Nine months later, AlphaGo appeared on online Go servers under the alias Master, compiling a 60–0 record against top players, including Ke, whom it beat three times.

Ke, who after Lee Sedol’s loss had brashly declared that he would never lose to AI, has now admitted that the future belongs to AI. The night before his game with AlphaGo, he posted on Weibo, China’s Twitter: “Win or lose, these will be my last three games playing against AI.”

Ke Jie plays an opening move in game two.

10X less computing power

AlphaGo was powerful last year but had weaknesses. During its fourth game against Lee Sedol, the Korean Go master made an unusual move that flummoxed AlphaGo. The Go AI responded with several “buggy” moves before resigning.

DeepMind has improved AlphaGo’s algorithms while reducing the computational power it requires. In the Wuzhen games with Ke, AlphaGo showcased a more mature performance. “In the past, it had some weaknesses. But now I feel its understanding and judgement of the game is beyond our ability,” said Ke after the first loss.

David Silver, the lead researcher on AlphaGo at DeepMind, said AlphaGo is driven by a new and more powerful architecture. It now plays on a single TPU-based machine in the Google cloud, with 10X less computing power and better algorithms than it used in the match with Lee Sedol.

Silver’s PhD mentor Martin Müller of the University of Alberta, a leading authority on computer Go, said AlphaGo’s performance has risen to a new level.

“I do not know the details, and am eagerly waiting for DeepMind to release that information,” said Prof. Müller. “But I heard that the most important improvement is how the training games for the machine learning are created.”

Martin Müller (right) watches the second game between Ke Jie and AlphaGo with a Synced journalist in Beijing.

Humans are learning from AlphaGo

AlphaGo is not simply an adversary to human players. On the contrary, its unique understanding of the game offers new interpretations and inspiration from which human players can benefit.

Defeated by Master online, Ke studied how the program plays Go and adjusted his own strategy. In the first game of this match, for example, he employed a “3–3 point” strategy, an unusual opening that AlphaGo had played regularly during its 60-game winning streak. Ke had already experimented with the same unorthodox opening against human opponents earlier this year.

Demis Hassabis, Founder and CEO of DeepMind, says the goal of AlphaGo and DeepMind is to help humans with AI.

“We want to use AlphaGo as a tool for the Go community to improve that knowledge about the game,” said Hassabis in the press conference following the first game. “The reason ultimately we develop the technology is to use them more widely in areas of science and medicine, and to try to help human experts in those areas make possible breakthroughs.”

What’s next?

Today’s AlphaGo victory marks the end of an era. There are parallels with the 1997 showdown between World Chess Champion Garry Kasparov and IBM’s chess-playing computer Deep Blue, after which it became clear that no one could beat computers at chess. Human players will continue challenging AlphaGo and other Go-playing programs, but will henceforth approach such games as learning experiences.

Meanwhile, competition between different Go-playing machines is intensifying. Jueyi (绝艺, “Fine Art”), a Go-playing AI developed by Chinese tech giant Tencent, won the 10th Computer Go UEC Cup in Japan this March. It is expected to challenge AlphaGo in the future, and the Go community will be watching. And learning.

Author: Tony Peng | Editors: Michael Sarazen