AlphaGo vs. Lee Sedol 3:0

Machines have conquered the last games. Now comes the real world.

why is it so important?

  • This is the first time an artificially intelligent system has topped one of the very best at Go, which is exponentially more complex than chess and requires an added level of intuition — at least among humans.
  • In one sense, this is a game. But the match also represents the future of Google. The machine learning techniques at the heart of AlphaGo already drive so many services inside the Internet giant — helping to identify faces in photos, recognize commands spoken into smartphones, choose Internet search results, and much more.

AI tipping point

  • AlphaGo had won once playing black and twice playing white. Just a few days earlier, most in the Go community were sure this wasn’t possible.
  • The victory shows how much faster AI is progressing than expected. Just two years ago, most experts believed that another decade would pass before a machine could claim this prize.

why Lee Sedol?

  • The Korean-born Lee Sedol is widely regarded as the top Go player of the last decade, having won more international titles than all but one other player.
  • He is currently ranked number five in the world. According to Demis Hassabis, who leads DeepMind, the Google AI lab that created AlphaGo, the team chose the Korean for this all-important match because they wanted an opponent who would be remembered as one of history’s great players.

Lee speaks

  • “I am in shock. I can admit that,” Lee Sedol said after Game One. “But what’s done is done.”
  • “I’ve never played a game where I felt this amount of pressure, and I wasn’t able to overcome this pressure,” Lee said at a post-game press conference.

how does it work?

  • Essentially, DeepMind’s researchers taught AlphaGo to play the game by feeding thousands upon thousands of human Go moves into its neural networks.
  • But then, using a technique called reinforcement learning, they matched AlphaGo against itself. By playing match after match on its own, the system could learn to play at an even higher level — perhaps at a level that eclipses the skills of any human. (A rough sketch of both training phases follows this list.)
  • During the match, the commentators even invited DeepMind research scientist Thore Graepel onto their stage to explain the system’s rather autonomous nature. “Although we have programmed this machine to play, we have no idea what moves it will come up with,” Graepel said. “Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands — and much better than we, as Go players, could come up with.”
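
To make the two training phases above a little more concrete, here is a minimal, illustrative sketch in PyTorch (not DeepMind’s code): a small policy network is first trained to imitate expert moves with a cross-entropy loss, then refined with a simple REINFORCE-style policy gradient on self-play outcomes. The 9x9 board, the network shape, the random stand-in data, and the placeholder game loop and scoring are all assumptions made for brevity; real Go rules, the value network, and the tree search used in the full system are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD = 9                        # assumed small board for brevity; AlphaGo played on 19x19
N_POINTS = BOARD * BOARD

class PolicyNet(nn.Module):
    """Maps a board position to a distribution over possible moves."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_POINTS, 128), nn.ReLU(),
            nn.Linear(128, N_POINTS),
        )

    def forward(self, boards):   # boards: (batch, N_POINTS)
        return self.net(boards)  # raw move logits

policy = PolicyNet()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# --- Phase 1: supervised learning from human games -------------------------
# Random tensors stand in for "thousands upon thousands of human Go moves":
# each position is paired with the move a human expert (hypothetically) chose.
human_positions = torch.randn(256, N_POINTS)
human_moves = torch.randint(0, N_POINTS, (256,))

for _ in range(10):              # a few passes, purely illustrative
    loss = F.cross_entropy(policy(human_positions), human_moves)  # imitate the human move
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Phase 2: reinforcement learning via self-play -------------------------
def self_play_game(policy, n_moves=20):
    """Samples moves from the policy for both sides on a toy board.
    Returns the log-probabilities of the sampled moves and a +/-1 outcome.
    The outcome is a random placeholder; a real system would score the
    finished Go position and credit each player's moves with that player's
    own result."""
    board = torch.zeros(1, N_POINTS)
    log_probs = []
    for _ in range(n_moves):
        logits = policy(board.clone())  # clone so later in-place edits don't disturb autograd
        dist = torch.distributions.Categorical(logits=logits)
        move = dist.sample()
        log_probs.append(dist.log_prob(move))
        board[0, move] = 1.0            # place a stone (no captures, no real Go rules)
    outcome = 1.0 if torch.rand(()).item() < 0.5 else -1.0  # placeholder result
    return torch.stack(log_probs), outcome

for _ in range(10):              # a few self-play games
    log_probs, outcome = self_play_game(policy)
    loss = -(outcome * log_probs).sum()  # REINFORCE: make winning move sequences more likely
    opt.zero_grad()
    loss.backward()
    opt.step()
```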