Why it matters that computers can now beat humans at Go.

As of Saturday, Lee Sedol has lost the first three games of his five-game match against AlphaGo.

I started playing Go in my twenties, in part because it was a game at which computers couldn’t beat humans.

I was in high school when Garry Kasparov lost in chess to the “Deep Blue” computer. Already an avid coder at that point, I could easily understand how Deep Blue’s software was written.

But Go is a totally different beast. In chess, there are only a few reasonable moves in any position, so the computer can look forward many, many moves. In Go, the decision tree is more like a forest. To put it numerically, chess has roughly 10^120 possible games, whereas Go has roughly 10^761.
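To get a feel for why the Go tree is so much bushier, here is a rough back-of-the-envelope sketch in Python. The branching factors (about 35 reasonable moves per chess position, about 250 per Go position) and game lengths (about 80 and 150 moves) are commonly cited approximations, and different ways of counting give different totals, so treat the output as an illustration of the gap rather than an exact figure.

```python
# Rough game-tree size: (average choices per move) ** (average game length).
# The branching factors and game lengths below are commonly cited approximations.
chess_tree = 35 ** 80    # chess: ~35 choices per move, ~80 moves per game
go_tree = 250 ** 150     # Go: ~250 choices per move, ~150 moves per game

print(f"chess tree is roughly 10^{len(str(chess_tree)) - 1}")
print(f"Go tree is roughly    10^{len(str(go_tree)) - 1}")
print(f"Go's tree is about 10^{len(str(go_tree // chess_tree)) - 1} times bigger")
```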

This week’s news that a computer program beat one of the world’s top Go players is fascinating because it signals a turning point in machine intelligence. Google is making these strides using GPUs, a type of hardware that has become increasingly important in technology and one that I’ve been experimenting with lately. I’ll return to my thoughts on GPUs later in this post.

But first, I wanted to state in plain English why it matters that a computer can win at Go: to win at Go, the computer had to learn to think. The number of options on the Go board makes it impossible to analyze all possible moves and pick the best one, the way computers do in chess. Instead, the computer had to learn to play like a human - with judgment and intuition.

For Google to get a computer to this point is hugely important for artificial intelligence. All the years of work that went into making chess programs better didn’t accomplish much else that was useful, because brute-force search doesn’t transfer well to other problems. Teaching a computer to play Go, however, will lead to all kinds of other applications: it’s less about a particular game and more about computer thought.

Appropriately, the Google company that built the computer is called DeepMind. Google bought that company two years ago, after it had created a computer that could learn to play seven classic Atari 2600 video games. What makes DeepMind interesting is that it plays Go the same way a human does: by learning what positions "look good" at a qualitative level, and by playing the game over and over again.
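To make "learning what positions look good" a little more concrete, here is a minimal, purely illustrative sketch in Python; it is emphatically not AlphaGo’s architecture. A toy evaluator scores flattened board positions with a simple logistic model, gets trained on (position, outcome) pairs, and then picks the move whose resulting position scores highest. The board encoding and the randomly generated training data are placeholders standing in for real games.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "position" is a flattened 9x9 board with values in {-1, 0, +1}.
# Real systems learn from actual games; this random data is only a placeholder.
N_FEATURES = 81

def random_position():
    return rng.integers(-1, 2, size=N_FEATURES).astype(float)

# Synthetic training set: positions paired with game outcomes (1 = win, 0 = loss).
X = np.array([random_position() for _ in range(1000)])
y = rng.integers(0, 2, size=1000).astype(float)

# Logistic "value function": score(position) ~ chance that the side to move wins.
w = np.zeros(N_FEATURES)

def score(position):
    return 1.0 / (1.0 + np.exp(-position @ w))

# A few passes of plain gradient descent on the log-loss.
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.01 * X.T @ (p - y) / len(y)

def choose_move(candidate_positions):
    """Pick the move whose resulting position 'looks best' to the learned evaluator."""
    return max(candidate_positions, key=score)

best = choose_move([random_position() for _ in range(5)])
print("evaluator picked a position scored at", round(float(score(best)), 3))
```

Something like AlphaGo replaces the logistic model with deep neural networks and the random data with millions of positions from human games and self-play, but the shape of the idea is the same: judge positions, don’t enumerate the whole tree.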

As some of DeepMind’s researchers wrote in an interesting research paper a few years ago, their system “can infer simple algorithms such as copying, sorting and associative recall from input and output examples.”
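For a flavor of what "inferring an algorithm from input and output examples" means, here is a tiny illustrative sketch (not DeepMind’s code): for the copy task, the training data is nothing more than pairs in which the target equals the input, and the system’s job is to discover the copying rule from those pairs alone.

```python
import random

def copy_task_examples(n_examples=3, max_len=5, n_symbols=8):
    """Generate (input, target) pairs for the copy task: the target equals the input."""
    for _ in range(n_examples):
        seq = [random.randrange(n_symbols) for _ in range(random.randint(1, max_len))]
        yield seq, list(seq)  # the learner never sees the rule, only pairs like these

for inp, out in copy_task_examples():
    print("input:", inp, "-> target:", out)
```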

GPUs are a critical part of DeepMind’s AI success. This is exciting because GPUs, or graphics processing units, were once used mainly for comparatively frivolous features like the imagery and physics calculations in video games. GPUs are in all of our computers and phones, but they have been a sort of sidekick to the CPU, which is the primary calculation engine of any computer.

So it is sort of cool that something once used mainly for 15-year-old boys’ video game fantasies is now becoming critical to advances in artificial intelligence.

Companies like Google are increasingly moving workloads that used to run on CPUs over to GPUs and finding enormous performance benefits. Google has built lots of predictive technologies, like the speech recognition software in phones, and with GPUs the company is able to speed up the pace at which it trains its systems. This matters in voice recognition, for instance, because a new phone user wants the phone to pick up on the quirks of his speech as fast as possible. (Google Senior Research Fellow Jeff Dean spoke about this a year ago at a GPU conference.)

The reason GPUs work so well for fast training of machine learning systems is that they are designed to carry out an enormous number of tasks at once. Think of a GPU as an army. All the soldiers work away in tandem at whatever problem is put before them. Individually, they’re not that fast. But there’s a LOT of them. This sets GPUs apart from CPUs, because the CPU is more like one military general working on his own. CPUs are very smart and very fast at any single task, but they run only a handful of tasks in parallel.
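You can get a feel for this trade-off without a GPU at all. The sketch below, in plain Python with NumPy, compares a one-value-at-a-time loop against the same arithmetic expressed as a single array operation applied to every element at once; the second style is the kind of data-parallel work GPUs are built for (here it still runs on the CPU, so it is only an analogy for the programming model, not a GPU benchmark).

```python
import time
import numpy as np

x = np.random.rand(5_000_000)

# "General" style: handle one value at a time, in order.
start = time.perf_counter()
result_loop = [v * 2.0 + 1.0 for v in x]
loop_time = time.perf_counter() - start

# "Army" style: one instruction applied to every element at once.
start = time.perf_counter()
result_vec = x * 2.0 + 1.0
vec_time = time.perf_counter() - start

print(f"one at a time: {loop_time:.3f} s")
print(f"all at once:   {vec_time:.3f} s")
```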

But for all the progress in working with GPUs, there are still some serious shortcomings, which I have recently been working to address in two significant ways:

First, I am trying to make GPUs easier to use. Right now, people programming GPUs have to jury-rig their code to make it work. They can write in Python, but they have to sprinkle in all sorts of C++-style annotations and make adjustments to speak to the GPU. I’m working on a new way to let people write code for GPUs using more “normal” Python code. My goal is to give people the same performance benefits of GPUs while removing a lot of the work they currently do to optimize the code.
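For a concrete picture of the adjustments I mean, here is a sketch using Numba’s CUDA support, one existing tool for writing GPU kernels in Python. It is only an illustration of the current state of things, not the project I’m describing, and it needs the numba package plus a CUDA-capable GPU to actually run. Notice how much GPU bookkeeping, like thread indices, block sizes, and bounds checks, leaks into what is conceptually a one-line addition.

```python
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)            # which "soldier" (thread) am I?
    if i < x.size:              # bounds check: the grid may be bigger than the data
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.ones(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

# The programmer, not the language, has to choose a launch configuration.
threads_per_block = 256
blocks_per_grid = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks_per_grid, threads_per_block](x, y, out)
```

A NumPy user would just write `out = x + y`; the goal is to keep that style and still get GPU performance.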

Second, I am trying to enable GPUs to run faster. One of the things that is truly tricky about the GPU “army” (the parts of the GPU carrying out parallel tasks) is that the compute units all have to perform roughly the same task at once. (What I’m talking about is known as SIMD.) Groups of “soldiers” must all execute the same instructions in lock-step. If part of the army finishes its task first, it must sit around idle while the rest of the army keeps working. When that happens, everything slows down enormously, which is terrible, because the whole point of a GPU is speed. I’m working on a very cool idea that will make it much easier to write programs that don’t suffer from this problem.
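Here is a small, purely illustrative Python simulation of that lock-step problem (no real GPU involved): each “soldier” (lane) needs a different amount of work, but the group can only advance at the pace of the slowest one, so the faster lanes rack up idle cycles.

```python
# Simulate a group of SIMD "soldiers" (lanes) that must march in lock-step.
# The group advances at the pace of the slowest lane, so fast lanes sit idle.
lanes_work = [3, 3, 3, 40, 3, 3, 3, 3]   # steps of useful work each lane needs

steps_marched = max(lanes_work)           # lock-step: everyone waits for the slowest
useful_work = sum(lanes_work)
total_cycles = steps_marched * len(lanes_work)
idle_cycles = total_cycles - useful_work

print(f"cycles spent: {total_cycles}")
print(f"useful work:  {useful_work}")
print(f"idle cycles:  {idle_cycles}")
print(f"utilization:  {useful_work / total_cycles:.0%}")
```

In this toy example, one slow lane drags utilization below 20 percent; that is the slowdown described above.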

Stay tuned -- I will write more on this in the future.

But for now, I’m heading off to Pandanet to play some Go against my fellow humans. Maybe our defeated champion Lee Sedol will be there.