Go On…

Go board, Hoge Rielen, Belgium. (Source.)

I stayed up late last night watching a livestream of a game of Go being played in Korea. I don’t play Go, yet I’m quite certain that I’ll remember last night for the rest of my life. Playing against world champion Go player Lee Sedol was a computer program built by the team at DeepMind, a company that Google acquired in 2014.

Experts as recently as last fall believed that computers were still a decade away from mastering Go. As powerful as today's computers are, there are more legal positions in Go than there are atoms in the observable universe (and roughly 10¹⁰⁰ times as many possible moves as in chess), which means that until now, computers were believed to be simply incapable of brute-forcing the game by analyzing every possible move and predicting what would happen next.
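To get a feel for that scale, here is a quick back-of-the-envelope check. Each of the 361 points on a 19×19 Go board can be empty, black, or white, so 3³⁶¹ is a simple upper bound on the number of board configurations, and the observable universe is commonly estimated to hold about 10⁸⁰ atoms:

```python
# Back-of-the-envelope scale check: each of the 361 points on a 19x19
# Go board can be empty, black, or white, so 3**361 is an upper bound
# on the number of board configurations. The observable universe is
# commonly estimated to contain about 10**80 atoms.
configurations = 3 ** 361
atoms = 10 ** 80

print(len(str(configurations)))  # 173 digits, i.e. about 10**172
print(configurations > atoms)    # True, by a margin of ~10**92
```

(The count of strictly *legal* positions is smaller than 3³⁶¹, since some configurations can never occur in play, but it is still astronomically larger than the number of atoms.)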

AlphaGo from DeepMind changed the approach to playing Go, combining deep neural networks (systems that learn from experience rather than by being programmed to perform specific tasks) with a technique called Monte Carlo tree search to narrow the search to the most promising moves. (I'm woefully oversimplifying things; you should read DeepMind's blog post from January, which goes into more detail.) AlphaGo learned from past games of Go as well as from its own experimentation, and it gets smarter every time it plays.
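To make "Monte Carlo tree search" a little less abstract, here is a minimal, generic MCTS sketch on a toy game (players alternately remove 1–3 stones from a pile; whoever takes the last stone wins). This is not AlphaGo's implementation: AlphaGo additionally uses neural networks to guide the search and evaluate positions, while this sketch uses plain UCB1 selection and random rollouts.

```python
import math
import random

# Generic Monte Carlo tree search on a toy subtraction game:
# players alternately take 1-3 stones; taking the last stone wins.
# This is an illustrative sketch, not AlphaGo's actual algorithm.

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones          # stones left in the pile
        self.player = player          # player to move (1 or 2)
        self.parent = parent
        self.move = move              # move that led to this node
        self.children = []
        self.wins = 0.0               # wins for the player who moved INTO this node
        self.visits = 0
        self.untried = [m for m in (1, 2, 3) if m <= stones]

def ucb1(parent, child, c=1.4):
    # UCB1 balances exploiting high win rates with exploring rarely-visited moves.
    return child.wins / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def rollout(stones, player):
    # Play uniformly random moves to the end; return the winner.
    while stones > 0:
        m = random.randint(1, min(3, stones))
        stones -= m
        if stones == 0:
            return player             # this player took the last stone
        player = 3 - player
    return 3 - player

def mcts(stones, player, iterations=2000):
    root = Node(stones, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ucb1(node, ch))
        # 2. Expansion: add one untried move as a new child.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, 3 - node.player, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new position.
        if node.stones == 0:
            winner = 3 - node.player  # the previous player took the last stone
        else:
            winner = rollout(node.stones, node.player)
        # 4. Backpropagation: credit each node's mover with wins.
        while node:
            node.visits += 1
            if node.parent and node.parent.player == winner:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move, a standard MCTS choice.
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(10, 1))  # suggested first move for player 1
```

The key idea, which carries over to AlphaGo, is step 1: instead of exhaustively enumerating every continuation, the search spends its budget on the subtrees that look most promising so far. AlphaGo replaces the random rollouts and uniform move candidates with neural-network evaluations, which is what makes the approach tractable for Go.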

The result is a computer that can not only evaluate an astonishingly complex set of possible moves (without having to analyze every one of them), but also predict what its human opponent will likely do and make plays based on those predictions. In October, DeepMind's program beat a ranked Go player, a first for a computer. This week's opponent, Lee Sedol, is ranked #4 in the world and was confident going into the match that he'd win 5–0 or 4–1, in part because he'd seen AlphaGo's game play from last fall and knew that it had made mistakes. What he apparently failed to take into account was how much better AlphaGo has become since October.

What’s newsworthy isn’t just that AlphaGo has won its first two matches, but what this means for the acceleration of the pace at which computers are getting smarter. The world’s experts in A.I. thought we were still 10 years away from computers smart enough to win at Go. Instead, we have a computer that beat a ranked opponent, then got even smarter over the next several months, and is now on the verge of defeating the world’s #4 player in a best-of-5 match.

The next ten years of A.I., instead of working toward a computer that can play Go, will build on top of one that has already mastered it using a novel combination of approaches. That's why I stayed up late last night watching the livestream on my phone: it's not every day you get to watch the future actually come at you faster than anyone expected.


If you want to read more on what those next 10 years might look like, this 2-part essay from Wait But Why is well worth your time:

Game 3 between AlphaGo and Lee Sedol is on Friday night (Pacific time; Saturday in Korea). You can watch the livestream here.