DeepMind and the Art of Replacing Humans with Code.

Dr. Adam Hart
the digital ethicist
5 min read · Jan 20, 2020

In November 2019, 9th-dan Go master Lee Sedol retired from the South Korean professional Go community, citing his 2016 defeat by AlphaGo.

In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov 3½–2½ (two wins to one, with three draws). Today, even a small fraction of the unthrottled compute on your smartphone will provide a challenging game for those of us who do not frequent the chess halls.

In 2015, DeepMind’s AlphaGo beat the European Go champion Fan Hui 5–0, and in 2016 it beat 18-time world champion Lee Sedol 4–1. Then in 2017, the machine AlphaGo Zero learnt to play the game from scratch, purely by playing against itself.

AlphaGo is Google DeepMind’s flagship effort to tackle games of increasing complexity as fuel for its artificial-intelligence research, with purported humanity-changing benefits.

This research has risen in importance through technology’s ability to generate big data, and has been fundamentally enabled by the availability of graphics processing unit (GPU) compute power and the mathematics of deep neural networks, which at its most basic level rests on forward and backward propagation and gradient descent, a form of calculus.
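
The forward pass, backward pass, and gradient-descent update mentioned above can be sketched in miniature. This is an illustrative toy, not DeepMind’s code: a single weight `w`, a quadratic loss, and a hand-computed derivative standing in for backpropagation.

```python
# Toy gradient descent: one weight, a quadratic loss with its minimum at w = 3.
# All names (loss, grad, learning_rate) are illustrative, not from any real codebase.

def loss(w):
    # "forward" pass: evaluate the loss for the current weight
    return (w - 3.0) ** 2

def grad(w):
    # "backward" pass: derivative of the loss with respect to the weight
    return 2.0 * (w - 3.0)

w = 0.0                  # initial weight
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * grad(w)   # step downhill along the gradient

print(round(w, 4))       # converges towards the minimum at 3.0
```

A real deep network repeats exactly this update across millions of weights, with the derivatives computed automatically by backpropagation rather than by hand.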

Unlike in 1997, when a prohibitively expensive supercomputer was used, deep neural network computations today rely on large-scale, distributed, “elastic” cloud compute built from “commodity” data-centre hardware. As an example, you can see one of Google’s data centres in Hamina, Finland here.

The big brain behind this outfit is Demis Hassabis: of Greek Cypriot and Chinese Singaporean heritage, obviously supremely mathematically gifted, and the beneficiary of a great education.

According to Wikipedia, Demis was a child prodigy in chess, reaching master level at the age of 13; he is a world-class games player and holds a PhD in cognitive neuroscience. He founded DeepMind in 2010, and just four years later, in 2014, the company was acquired by Google for a reported £400M.

He’s not short of brains or a quid then.

Now, this all sounds great, especially for Demis’s personal wellbeing. And certainly Google’s ethics statements, with their photographs of concerned-looking actors or co-opted staff and their vague motherhood statements about engagement with ethics experts and the public, amount to a marketing effort: a token nod to the latent danger and human boredom inherent in this trajectory.

But let’s focus on a key public statement from AlphaGo Zero’s machine “achievement”:

“This powerful technique is no longer constrained by the limits of human knowledge [1]. Instead, the computer program accumulated thousands of years of human knowledge[2] during a period of just a few days[3] and learned to play Go from the strongest player in the world, AlphaGo.

AlphaGo Zero quickly surpassed the performance of all previous versions and also discovered new knowledge[4], developing unconventional strategies and creative new moves, including those which beat the World Go Champions Lee Sedol and Ke Jie. These creative moments[5] give us confidence that AI can be used as a positive multiplier[6] for human ingenuity.”

https://deepmind.com/research/case-studies/alphago-the-story-so-far#alphago_zero

Analysing this statement as a fragment of their public discourse, and especially their choice of language:

  1. Limits of human knowledge — this is a kind of sci-fi inspired statement, assuming that “in a galaxy far far away” we will be like the Federation and everything will be altruistic. Clearly the real limits of human knowledge have not been hit, since a human team created the algorithm in the first place.
  2. Thousands of years of human knowledge — well, Lee Sedol is 36 now, and still retains his 9th-dan rank. He didn’t have thousands of years of Go instruction, so what does this statement mean? Or did he stand on the shoulders of Go giants, like Demis and his team did?
  3. Just a few days — true in elapsed time, but how many petaflops of compute were made available, and at what cost?
  4. New knowledge — this is the goal of every human who studies and obtains a PhD, and new strategies previously unthought of are fine, but is it knowledge if a human brain cannot understand how it was derived? In mathematical proofs or other theses, a peer review is required. Who can peer review Dr. AlphaGo Zero — itself? And if quantitatively the machine beat Lee Sedol, are he or the other dan-level Go masters qualified to peer review?
  5. Creative moments — the machine has created new strategies, but this is a result of exhaustively calculating combinatorial permutations at compute scale, not “inspiration” or even “desperation” such as drove the Manhattan Project. This fundamentally is saying, “if I can analyse all permutations, I have created”. Created what exactly?
  6. Positive multiplier — if it cannot be understood, and it is a fundamental property of DNNs that they are a black box, then how can this multiply human understanding and insight?

The most obviously disturbing statement is #4. So, is it new knowledge that has been created, or is it like a bomb, where a human just looks at the external results from the outside and says: job done, the explosion worked, grandmaster thrashed? And is new knowledge like a vase that just sits on a shelf looking vase-like, or is it the use the vase is put to that is its definition?

AlphaGo Zero can therefore also be perceived as a destructive kind of achievement, and it raises obvious questions about the motives behind it.

In an energy-bound system like a human brain, or an economy with finite resources, priorities are set for what is done and what is not done. This team decided to invest in tackling a complex game that is “a googol times more complex than chess”. And won. Destroyed the human who took “thousands of years” to learn this game.
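
The “googol times more complex than chess” claim is usually grounded in a back-of-the-envelope comparison of state spaces, which is easy to reproduce. The figures below are rough, commonly cited upper bounds, not exact counts (the true number of legal Go positions is closer to 2.1 × 10^170).

```python
# Rough state-space comparison behind the "googol times more complex" claim.
# Both bounds are crude estimates, used only to illustrate the orders of magnitude.
import math

go_positions = 3 ** 361      # 19x19 board: each point empty, black, or white (upper bound)
chess_positions = 10 ** 47   # rough upper bound often quoted for chess

go_exp = math.floor(math.log10(go_positions))
print(f"Go positions: ~10^{go_exp}")       # ~10^172
print(f"Go / chess:  ~10^{go_exp - 47}")   # ~10^125, i.e. more than a googol (10^100)
```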

This achievement can also be seen as a loss of the richness of experience and what makes us human, the human spirit.

The published learning from AlphaGo and AlphaGo Zero has certainly contributed to progressing the AI agenda for what is a very small and inwardly focussed AI research community, and, like any technological revolution, it will be up to lawmakers and market economies, both free and controlled, to say what the purpose and uses of this progress mean.

Apart from Google’s undisclosed agenda for DeepMind’s research, I suspect this is also allowed to happen because of its potential for militarisation, as the “AI arms race” between the US and China (and Russia) keeps cropping up in the AI narrative, much like Boston Dynamics’ initial association with DARPA.

It is plausible that, as an ever-evolving face of digitally inspired living, the general human population will be subject to rapidly increasing, invasive, technology-based controls, of which the code inside a black-box machine like AlphaGo Zero is but one example. This is supported by centuries-old, if not millennia-old, sovereign legal agendas that seek ever-greater government of populations and loss of freedom of speech, for self-justification and taxation reasons.

The other aspect of this evolution is that this kind of future looks possibly awfully boring: futures where machines we can’t understand do perplexing, inexplicable things, all because they’re following a non-human decision tree grounded in a mechanistic philosophy of humanity that is compatible with computing ability, not humanist sensibility.
