Byte Size Weekly #9

Jordan Abramsky
Published in QTMA Insights
3 min read · Oct 24, 2017


Is this the beginning of Skynet?

Google’s AI Taught Itself to Become the World’s Best Go Player

What Happened?

A self-taught computer has become the world’s best player of Go, the board game often dubbed the world’s most complex, without any input from human experts. The machine, called AlphaGo Zero, was developed by Google’s DeepMind subsidiary, led by Demis Hassabis and acquired by Google for $500 million in 2014. The original AlphaGo learned by analyzing thousands of games between the world’s top human players to discover winning moves. The new version, AlphaGo Zero, uses zero human expertise and starts out knowing only the rules and objective of the game. DeepMind’s chief executive, Demis Hassabis, stated that:

It learns to play simply by playing games against itself, starting from completely random play. In doing so, it quickly surpassed human level of play and defeated the previously published version of AlphaGo by 100 games to zero.

How Does It Work?

By not using human data in any fashion, the AI creates knowledge for itself from a blank slate. Within a few days, the computer had not only learned the ancient Chinese game from scratch but surpassed thousands of years of accumulated human wisdom about it. The algorithm combines simulations of future moves with a neural network that estimates which moves give the highest probability of winning. AlphaGo runs on a distributed system of processors rather than a supercomputer. It takes a description of the Go board as input and processes it through layers containing millions of neuron-like connections. This neural network is updated continually over millions of training games, producing a slightly stronger system each time. AlphaGo Zero learned to discover new strategies for itself by playing thousands of games against itself and adjusting those connections using a trial-and-error process known as reinforcement learning.
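To make that self-play loop concrete, here is a heavily simplified, hypothetical sketch in Python. It is not DeepMind’s code: a tabular value function stands in for the deep neural network, a one-step lookahead stands in for the full Monte Carlo tree search, and tic-tac-toe stands in for Go. All names and parameters below are illustrative assumptions.

```python
# Minimal self-play reinforcement-learning sketch (illustrative only, not DeepMind's code).
# A tabular value function replaces AlphaGo Zero's deep network, and one-step
# lookahead replaces its Monte Carlo tree search; the game is tic-tac-toe.
import random

EMPTY, X, O = 0, 1, 2
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, v in enumerate(board) if v == EMPTY]

values = {}              # state -> estimated probability that player X wins
ALPHA, EPSILON = 0.3, 0.1

def value(board):
    return values.get(tuple(board), 0.5)   # unseen positions start at 50/50

def choose(board, player):
    """Simulate each candidate move and pick the one whose resulting position
    has the best estimated win probability (with occasional random exploration)."""
    legal = moves(board)
    if random.random() < EPSILON:
        return random.choice(legal)
    def score(m):
        nxt = board[:]; nxt[m] = player
        v = value(nxt)
        return v if player == X else 1.0 - v
    return max(legal, key=score)

def self_play_game():
    """Play one game against itself, then nudge the value of every visited
    position toward the final outcome: trial-and-error reinforcement learning."""
    board, player, history = [EMPTY] * 9, X, []
    while moves(board) and winner(board) is None:
        board[choose(board, player)] = player
        history.append(tuple(board))
        player = O if player == X else X
    w = winner(board)
    outcome = 1.0 if w == X else 0.0 if w == O else 0.5
    for state in history:
        values[state] = value(state := list(state)) if False else \
            values.get(state, 0.5) + ALPHA * (outcome - values.get(state, 0.5))

# Train purely from self-play, starting from (near-)random play.
for _ in range(20000):
    self_play_game()
print("positions evaluated:", len(values))
```

In AlphaGo Zero the lookahead is a full Monte Carlo tree search and the lookup table is replaced by a single deep network that outputs both move probabilities and a win estimate, but the self-improvement loop has the same shape: play against yourself, then adjust the evaluator toward what actually happened.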

The picture below shows AlphaGo Zero’s evolution over the course of three days.

Source: FT

What Does This Mean?

While games are a great platform for developing and testing AI algorithms, the ultimate goal is to apply these techniques to social and corporate problems that combine pattern recognition with forward planning. Examples include an improved “intelligent personal assistant” for smartphones, or combining medical scans with other patient data to diagnose disease and decide on treatment.

Think of a child who learns about the world by assimilating masses of sensory data from scratch; in the same way, DeepMind’s algorithms can be applied to pools of unstructured information to produce deep insights and predictions, which could have huge business and technology impacts.

This breakthrough suggests that AI can quickly match and then exceed the skill of a human without needing to learn from one. One of the founders of DeepMind has predicted that human-level AI will arrive by 2030. However, the human brain is extremely complex, with a network of roughly 100 billion neurons and 1,000 times as many synapses, all working simultaneously, and most neuroscientists say a system of this complexity cannot be rivalled by software any time soon.
