AlphaGo’s failure is a significant triumph
Everybody learnt something different from the high-profile Go match between AlphaGo and Lee Sedol. Some scientists had to eat their words after proclaiming that humans would rule Go for a few more decades. AI enthusiasts felt vindicated, while Go players were more interested in the moves themselves.
And the move that captured all the attention is undoubtedly move 78 in game 4. The match could easily have gone 5–0 for AlphaGo, but Lee’s spark of brilliance saved the day. The computer initially missed the significance of the move, then followed up with a series of blunders that decided the game. There were other great moves from both sides, but move 78 stood out for “forcing AlphaGo into a panic”.
This statement is apt but nonsensical. DeepMind’s AI might be highly sophisticated, but the feeling of anxiety could not possibly be within the specifications of its neural networks. There is much speculation as to what could have caused AlphaGo’s mistakes, with some commentators even proposing that the moves were a deliberate strategy to throw Lee off. Without insider knowledge from DeepMind’s engineers, it is hard to pin down the cause, but everyone is certain that panic was not it.
Or is it? Indeed, if we can accept anthropomorphization of AlphaGo for a moment, panic does appear to be the best description for how the loss unfolded. Even after underestimating Lee’s brilliant move, AlphaGo didn’t lose the game straight away — the game of Go is much too complicated for that. It was when AlphaGo realized its chances of winning had diminished, as tracked by an internal estimate within the AI, that the program began the series of “bizarre” moves that plunged its game into an abyss. (To its credit, it put up a good fight afterwards. Lee didn’t exactly have a walkover and came close to forfeiting the game on time.)
At least one expert believes that this weirdness in the AI’s behavior may be explained by a weakness of the Monte Carlo tree search algorithm. Due to the mathematical complexity of the game, computers have to simplify the analysis process where possible. In human terms, the algorithm performs a certain degree of “guess-timation” to focus on promising leads. This may be its weakest link, as getting surprised by an unexpected move can throw subsequent calculations off track. Indeed, that sounds like a reasonable explanation.
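To make the “guess-timation” concrete, here is a minimal sketch of plain Monte Carlo tree search (the UCT variant with random playouts) applied to a toy Nim game, where players alternately take one or two stones and whoever takes the last stone wins. This is an illustration of the general technique only — AlphaGo’s actual search additionally used policy and value networks, and all names here are invented for the example. Note how the algorithm samples random games rather than exhaustively analyzing the tree: moves it rarely explores are exactly where a surprise can lurk.

```python
import math
import random

class Node:
    """One game position in the search tree."""
    def __init__(self, stones, player, parent=None):
        self.stones = stones    # stones remaining in the pile
        self.player = player    # player to move (0 or 1)
        self.parent = parent
        self.children = {}      # move taken -> child Node
        self.visits = 0
        self.wins = 0.0         # wins for the player who moved INTO this node

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def ucb1(child, parent_visits, c=1.4):
    # Balance exploitation (observed win rate) against exploration
    # (bonus for moves that have been tried only rarely).
    return child.wins / child.visits + c * math.sqrt(
        math.log(parent_visits) / child.visits)

def rollout(stones, player):
    # Play random moves to the end; return the winner (who takes the last stone).
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return player
        player = 1 - player

def mcts(root_stones, root_player, iterations=3000):
    root = Node(root_stones, root_player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while node.stones > 0 and len(node.children) == len(legal_moves(node.stones)):
            node = max(node.children.values(),
                       key=lambda ch: ucb1(ch, node.visits))
        # 2. Expansion: add one untried move.
        if node.stones > 0:
            move = random.choice(
                [m for m in legal_moves(node.stones) if m not in node.children])
            child = Node(node.stones - move, 1 - node.player, parent=node)
            node.children[move] = child
            node = child
        # 3. Simulation: random playout from the new position.
        if node.stones == 0:
            winner = 1 - node.player   # previous mover took the last stone
        else:
            winner = rollout(node.stones, node.player)
        # 4. Backpropagation: update statistics along the path.
        while node is not None:
            node.visits += 1
            if node.parent is not None and node.parent.player == winner:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move at the root.
    return max(root.children, key=lambda m: root.children[m].visits)
```

With four stones on the table, taking one stone leaves the opponent a losing pile of three, and the search converges on that move even though it never proves it exhaustively — its confidence is statistical, which is precisely why an out-of-distribution surprise can derail it.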
Yet, one cannot escape the irony of an emotionless algorithm stumbling after being thrown a curveball. Panic, fear, anxiety — these emotions have long been considered antitheses to our logical minds when dealing with emergencies and unexpected situations. It is often presumed that a calm and measured response would be the path to finding an optimal solution to any tough problem. Now we have the perfect counterexample: a cold hard calculating machine showing signs of panic despite being incapable of harboring emotions.
Of course, the idea that panic and anxiety adversely affect performance, especially in emergency situations, remains sound. It is their origin that deserves re-examination. The biology of humans and higher animals is sufficiently complex to support high-level decision making and to feel emotions, but the situation for other species, such as invertebrates, is far murkier. Nonetheless, many “lower” animals demonstrate responses reminiscent of panic, or at least anxiety, under stressful situations. However, when even an AI demonstrates such panic-like responses, it shows that external evaluation of a subject’s behavior may not be accurate enough to assess its state of mind — that is, if it has a mind to start with. Furthermore, it raises doubt over the cause-and-effect relationship between the emotional state of panic and the inappropriate response to an unexpected situation. Humans are well known for our capacity to rationalise and draw inferences to fit our pre-existing concepts; it would not be too much of a stretch to suggest that the emotion of panic helps justify a poor decision-making process. My nerves are to be blamed, not my brain!
We are only 16 years into the 21st century, and humanity has lost the game of Go to computers. But, in building an AI that exceeds human expectations, we might stand to gain some insight into our own psyche.