Why the AlphaGo story made me scared of (the people behind) AI

by Dr Carmel Kent, Head of Educational Data Science at EDUCATE Ventures Research

The AlphaGo documentary from 2017 is a fascinating movie. It has heroic characters, both human and non-human, a twist in the story, a climax and a resolution that leaves you with many more questions than answers.

Above all, there is drama, and what fantastic drama it is. Way beyond a 2,500-year-old game, and well above the countless people who have devoted their lives to studying Go strategy or AI, the movie puts a spotlight on the narrative of humans vs. machines. And it clearly shows how the machines won: 4:1, to be precise.

The movie’s narrative, like the media coverage of the Lee Sedol-AlphaGo match held at the Four Seasons Hotel in Seoul in 2016, doesn’t just present a technological breakthrough. It is much more dramatic than that. It depicts the beginning of a new era in which the lines between humans and AI have been crossed. It pictures a new level of threat, way beyond games and narrow AI. It plants a huge doubt in its viewers’ hearts. As Fan Hui, the European Go champion and one of AlphaGo’s human trainers, puts it so articulately, it makes us doubt humanity, doubt what it means to be human, and, most of all, doubt ourselves. It is very hard to watch this movie without feeling loss and sadness.

It also left me very sad and concerned: not about a war that humanity had lost, nor about AI outsmarting us, as I don’t see that coming any time soon. I was very concerned, however, about the humanity behind AI, and about how colonial and oppressive we might become.

Go is an extremely complex game. The number of possible positions that can result from any move is enormous and easily overwhelms human working memory. This means the human capacity to look ahead and plan is very limited, which is why Go rewards strong intuition rather than raw mathematical calculation. Go (nicknamed “hand talk” in Japanese) is rooted in East Asian culture and is considered to embody a form of communication.
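To give a rough sense of the combinatorial explosion described above, here is a back-of-envelope sketch in Python. The figures are commonly cited approximations, not exact values: a typical Go position offers roughly 250 legal moves and a game runs about 150 moves, versus roughly 35 legal moves and 80 moves for chess.

```python
import math

# Rough, widely cited estimates (approximations, not exact values):
# Go: ~250 legal moves per position, games of ~150 moves.
# Chess: ~35 legal moves per position, games of ~80 moves.
go_branching, go_length = 250, 150
chess_branching, chess_length = 35, 80

# Game-tree size ~ branching_factor ** game_length.
# Compare via log10, since the numbers themselves are astronomically large.
go_digits = go_length * math.log10(go_branching)
chess_digits = chess_length * math.log10(chess_branching)

print(f"Go game tree:    ~10^{go_digits:.0f} positions")
print(f"Chess game tree: ~10^{chess_digits:.0f} positions")
```

By this crude estimate, Go’s game tree contains around 10^360 positions, dwarfing chess’s roughly 10^124: far beyond what any lookahead, human or mechanical, can exhaustively search.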

Trained to win

AlphaGo does not develop intuition, nor is it capable of doing so. AlphaGo does not care about communication. AlphaGo, as the DeepMind engineers explain, is trained solely to win. The margin by which it won or lost, the new strategies it revealed, the disheartenment and the elation, the astonishing speed with which Sedol went from being sure he would win, to apologising for that confidence, to concluding that “winning this [one] time, it felt like it was enough”: none of it has the slightest significance to AlphaGo’s models.

AlphaGo has been trained on far more hours of Go than any human being could ever play, and it can calculate far more moves ahead. Yet after Sedol’s famous Move 78, AlphaGo’s inability to adapt sent its estimated odds of winning plunging from a comfortable 70%.

When it lost to Sedol in the fourth game of the match, the algorithm simply displayed the message: “AlphaGo resigns. The result ‘W+Resign’ was added to the game information.” For AlphaGo, it is a probability game. For Sedol and Hui, it is a whole life’s experience hanging by a thread, and a new learning experience about themselves. Either way, AlphaGo won an unfair game at best, or, simply put, a different one.

The humanness of Move 78, and of what follows it, is fascinating, though: the commentators’ embarrassed giggles when they cannot trace the logic of AlphaGo’s moves from that moment on, and the anxiety on the faces of DeepMind’s engineers when they realise that neither the machine nor they themselves can really explain, or be accountable for, its behaviour. “… I knew AlphaGo somehow became crazy, but I didn’t realize why”, one engineer reflects, and another says, “I really don’t know what AlphaGo is trying to do here…”. A third DeepMind engineer sums it up: “as it turns out, none of us know Go well enough to accurately judge what AlphaGo is doing”.

At this point, references to Shelley’s Frankenstein and his monster running out of control sprang to mind, and I was relieved to remind myself that, despite all the drama, this was just a game. The risks that could flow from AlphaGo’s lack of accountability and explainability are huge. The one the movie highlights most readily is how the emotions of Go players and fans were toyed with. But what would have happened if the same approach had been taken to develop a medical diagnostic system? I could not shake the feeling that something had gone terribly wrong, and it was not Move 78.

Invented in China and believed to be the oldest board game still played today, Go was considered one of the four essential arts of a cultured Chinese scholar. It is a two-player abstract strategy game in which each player aims to surround more territory than their opponent. To me, the movie shows exactly how quick we humans can be to surrender other humans’ cultural territory.

Compared with chess, Go was slow to be adopted in the West. This has been attributed to its reliance on intuition, its abstractness, and its lack of a climactic ending (unlike checkmate in chess). The modern Western lens on AI is also very different from the Eastern one. It usually rests on threatening mental models of AI, influenced by pop-culture references such as The Terminator and The Matrix, but also Frankenstein. Unlike East Asian culture, Western culture tends to frame AI around horror and the extinction of humanity.

When covering the Sedol-AlphaGo match, the Chinese press was more likely than the American press to frame AI as a non-threat. This was despite, or perhaps thanks to, the Chinese press’s deeper understanding of the game’s complexity (the American press tended to compare Go to chess), its cultural appreciation of Go, and the American press’s fondness for high drama.

This clash of cultures made me realize how wrong DeepMind, a London-based startup (later bought by Google) that pursued the idea of developing an AI merely to beat humans at Go, had been, and how disrespectful that concept was to Go’s values of diligent learning and of developing players’ understanding of themselves. Chen Lei, CEO of Beijing-based Wantong Technology, commented on this cultural clash, saying that “one characteristic of Eastern wisdom is its tendency towards the qualitative rather than the quantitative, whereas computers are purely quantitative.” How else can we explain Sedol’s urge to throw himself into the fourth and fifth games, when the match had already been decided? AlphaGo, by definition, would not have bothered.

Gender bias

For almost the entire movie, DeepMind comes across as an all-boys club of white alpha males, led by Cambridge graduates who, years before, already “couldn’t stop thinking about how this was the one game of intellect that machines had never cracked”. This begs the question of whether a revolution for humanity could really come out of such a narrow slice of it. Academic studies have repeatedly shown that groups with higher collective intelligence also tend to have higher average social sensitivity and a higher proportion of women.

This evidence is well known to Google and DeepMind executives. Yet the all-male culture we see in AlphaGo has not changed since. In 2018, Google reported that only 10% of its AI workforce was female. The world beyond Google is not much better: the World Economic Forum reported in the same year that only 22% of professionals working in AI and data science are women. The reasons are no mystery. Women in the sector are likely to be paid less than men, and routinely self-report having fewer skills than men despite holding higher formal qualifications. Men, on the other hand, are shown to be more confident about applying for roles for which they meet only part of the job criteria. When the industry resembles the make-up of the DeepMind team described above, it is no wonder that women working in AI leave the industry and change roles more often than men do. How can a field as susceptible to bias as data science and AI be driven by a workforce whose demographics are so skewed?

At the end of the documentary, one of the commentators summarizes Sedol’s experience by stating that “His humanness was expanded after playing this inanimate creation”. I must say — mine was undoubtedly narrowed.
