The latest AI can work things out without being taught

Learning to play Go is only the start

The Economist
7 min read · Oct 23, 2017
South Korean professional Go player Lee Sedol is seen on a TV screen during the Google DeepMind Challenge Match against Google’s artificial intelligence program, AlphaGo. (AP Photo/Ahn Young-joon)

In 2016 Lee Sedol, one of the world’s best players of Go, lost a match in Seoul to a computer program called AlphaGo by four games to one. It was a landmark, both in the history of Go and in the history of artificial intelligence (AI). Go occupies roughly the same place in the culture of China, Korea and Japan as chess does in the West. After its victory over Mr Lee, AlphaGo beat dozens of renowned human players in a series of anonymous games played online, before re-emerging in May to face Ke Jie, the game’s best player, in Wuzhen, China. Mr Ke fared no better than Mr Lee, losing to the computer 3–0.

For AI researchers, Go is equally exalted. Chess fell to the machines in 1997, when Garry Kasparov lost a match to Deep Blue, an IBM computer. But until Mr Lee’s defeat, Go’s complexity had made it resistant to the march of machinery. AlphaGo’s victory was an eye-catching demonstration of the power of a type of AI called machine learning, which aims to get computers to teach complicated tasks to themselves.

AlphaGo learned to play Go by studying thousands of games between expert human opponents, extracting rules and strategies from those games and then refining them in millions more matches which the program played against itself. That was…

