Can AI solve the world’s hardest Go problem?

AI is good at Go, but how good?

Since the impressive feats of DeepMind’s AlphaGo in 2015 and 2016, the use of AI in Go has continued to rise. Go-playing AIs have reached such levels of superhuman performance that they now compete in their own tournaments and are used on online Go sites to provide analysis and teaching. But how does AI fare when posed a different kind of problem? Rather than just playing the game starting from an open board, how does it get on when facing human-designed Go problems, and can it solve the world’s hardest Go problem?

This is a summary of the amazing work of others: see this thread.

I was not involved in this project in any way.

I’m merely a (bad) Go player and AI enthusiast.

An introduction to Go puzzles

Go originated in China over 2,500 years ago, so we’ve had a pretty long time to get good at the game. As part of the accumulated wealth of human knowledge about Go, we have developed Go puzzles — situations where stones are already placed on the board, and an objective is given. A common type of Go puzzle is centred around life-and-death scenarios, known as tsumego. In these puzzles, there is often only a single “line” of moves that leads to the correct solution, so the challenge is for players to discover the correct sequence of moves and hopefully learn principles that can then be applied in real games. Below is an example tsumego, where the player plays as black with the objective to “live” — play stones in such a way that white cannot capture (completely surround) black’s stones. The solution for this puzzle is only three moves (two for black and one for white). Although simple, it highlights important pieces of strategy such as sacrificing the stone on F19 and forming two eyes to keep the group alive.

Black to live. An example tsumego taken from Cho Chikun’s Encyclopedia of Life and Death, the second most viewed set of puzzles on OGS, a popular online Go website.

These problems are often designed to represent real situations seen in actual games of Go, i.e., if you want to become a better Go player, do tsumego (something I’m particularly guilty of not doing). However, these puzzles can be pushed to the extreme to challenge and train professional Go players, becoming incredibly complex and difficult (far, far beyond the simple example given above). Complex puzzles have many possible lines that can be explored and played out, making it incredibly difficult to determine the solution. Before exploring the world’s hardest Go problem, we first need to discuss what we mean by “solving” a puzzle.

What does it mean to “solve” a Go puzzle?

While on the surface this might seem like a self-explanatory question, there’s actually a bit more involved in definitively solving Go puzzles. I’m coming at this from the perspective of “solved games”. The solution we’re looking for is a set of moves that secures a win regardless of any possible moves made by the opponent. In the case of Go puzzles, winning means achieving the given objective (which could be a variety of different things; life-and-death puzzles are only one example). Given that the solution has to hold up against all possible opponent moves, solutions that only work if the opponent makes a certain move are not solutions at all. For some games, complete solutions are known: noughts and crosses (tic-tac-toe) is a prime example, always ending in a draw if neither player makes a mistake. More complex games have also been solved; for example, it is known that the first player can always force a win in Connect Four.

A solved game. The complete game tree for noughts and crosses (reduced by symmetry). From this, it’s possible to show that the game will always end in a draw if both players don’t make mistakes.
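
To make the idea of exhaustively solving a game concrete, here is a minimal minimax sketch in Python (my own illustration, not code from any of the projects discussed here) that searches the full noughts and crosses game tree and confirms that perfect play from both sides ends in a draw:

```python
from functools import lru_cache

# Board: a tuple of 9 cells, each 'X', 'O' or ' '. 'X' always moves first.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, to_play):
    """Game-theoretic value from X's point of view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    empties = [i for i, cell in enumerate(board) if cell == ' ']
    if not empties:
        return 0  # board full with no winner: a draw
    other = 'O' if to_play == 'X' else 'X'
    children = [value(board[:i] + (to_play,) + board[i + 1:], other) for i in empties]
    # X picks the move that maximises the value, O the one that minimises it.
    return max(children) if to_play == 'X' else min(children)

print(value((' ',) * 9, 'X'))  # prints 0: perfect play from both sides is a draw
```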

Applying this same sort of reasoning to Go is impossible: the game tree is just too big. For a standard 19x19 game, there are 361 possible opening moves and then 360 possible responses, giving 129,960 possible board states after only two moves (and games can sometimes go on for more than 300 moves). But that’s for the complete game; what about Go puzzles? In simple puzzles, where the board size is significantly reduced and there are already stones on the board, it is possible to exhaustively explore the game tree and come up with a definitive solution (although solution diagrams often only focus on interesting or tricky lines). Beyond these “simple” (I often find them frustratingly difficult) problems though, is it possible to find definitive solutions that work in all scenarios? Enter the world’s most difficult Go problem.
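
To get a feel for how quickly the numbers grow, here is a back-of-the-envelope calculation (my own, deliberately crude upper bound that ignores passes, captures and illegal moves):

```python
# Rough upper bound on the number of distinct move sequences on a 19x19 board:
# each successive move has one fewer empty point available than the last.
def move_sequences(depth, points=19 * 19):
    total = 1
    for move in range(depth):
        total *= points - move
    return total

print(move_sequences(2))   # 129,960 board states after two moves (361 * 360)
print(move_sequences(10))  # already around 3.3e25 sequences after just ten moves
```

Even this crude bound makes it clear why exhaustive search is hopeless for the full game.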

The infamous Igo Hatsuyoron 120

Go problems are often compiled into collections exploring different aspects of the game: life-and-death, ko, capturing races, and so on. Of the many collections studied throughout the long life of Go, Igo Hatsuyōron is considered the most difficult. Originally compiled in 1713, it contains 183 problems of the highest complexity. One problem in particular stands out and has been awarded the title of “the most difficult problem ever”. A book has even been written devoted solely to this problem, representing a collaborative effort to find a solution (The most difficult problem ever: Igo Hatsuyoron 120 by Thomas Redecker). And here it is:

Igo Hatsuyoron 120. The most difficult problem ever. Fancy a Go?

Immediately, it should be obvious that this problem is vastly different to the first example I gave. Firstly, it covers the entire board, and secondly, there are a lot more stones to deal with. The objective is for black to win, and the current main variation (more on this later) involves over 200 moves. Again, the crux of the problem is: can black win regardless of what white does? While this problem has been extensively studied by humans, I’m interested in the recent revelations regarding the use of AI to solve it.

If anyone can, KataGo can

We know AI can play Go well. Like, really well. But can it do Igo Hatsuyoron 120? The interesting thing about the initial application of AI to this problem is that it couldn’t. Existing Go AI systems were trained to play the actual game (i.e., starting from an empty board and learning through self-play). While Igo Hatsuyoron 120 is a legitimate position that could arise in a real game, it involves lots of unusual shapes and difficult fights, and these tripped up the AI systems. As such, a system needed to be trained specifically to solve this problem. The latest effort to do so occurred in February 2022.

What AI system was up to this challenge? Enter KataGo, an open-source Go bot. KataGo is trained in the same way as AlphaZero, but has several improvements beyond the original work of DeepMind. The nice thing about open-source software is that it can be adapted and used by anyone; the creators of KataGo state that “any researcher/enthusiast should be able to train a neural net from nothing to high amateur dan strength on the full 19x19 board”. A user of the LifeIn19x19 forum, Friday9i, recently completed a project training a specific version of KataGo just for Igo Hatsuyoron 120. This project was supported by Thomas Redecker (one of the world experts on Igo Hatsuyoron 120 and the author of the book mentioned above) and David Wu (the creator of KataGo).
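
For the curious, KataGo also ships a JSON-based analysis engine that can be driven from a script, which gives a flavour of how a fixed whole-board position can be handed to the bot for evaluation. The sketch below is my own illustration rather than anything from Friday9i’s project: the binary, config and network paths are placeholders, the stones shown are not the real problem, and you should check the field names against the analysis engine documentation for your KataGo version.

```python
import json
import subprocess

# Placeholder paths: point these at your own KataGo binary, analysis config and network file.
KATAGO_CMD = ["katago", "analysis",
              "-config", "analysis_example.cfg",
              "-model", "kata-network.bin.gz"]

# A single query for KataGo's JSON analysis engine. In a real experiment,
# "initialStones" would hold the full Igo Hatsuyoron 120 setup; only two
# placeholder stones are shown here.
query = {
    "id": "igo-hatsuyoron-120",
    "initialStones": [["B", "Q16"], ["W", "D4"]],  # placeholders, not the real problem
    "moves": [],
    "rules": "japanese",
    "komi": 0,
    "boardXSize": 19,
    "boardYSize": 19,
    "analyzeTurns": [0],
    "maxVisits": 1000,
}

proc = subprocess.Popen(KATAGO_CMD, stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
out, _ = proc.communicate(json.dumps(query) + "\n")

# Each output line is a JSON response; print KataGo's top few suggested moves.
for line in out.splitlines():
    response = json.loads(line)
    for info in response.get("moveInfos", [])[:3]:
        print(info["move"], info["winrate"], info["scoreLead"])
```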

From an AI perspective, this project is a really interesting iterative process combining the prowess of deep learning with existing human knowledge. To force KataGo to learn about difficult lines and overcome weaknesses, around 20,000 initial Igo Hatsuyoron 120 positions were shown to the AI. This ensured that the self-play element of the AI development didn’t get “stuck” — it forced exploration of the game tree. Furthermore, initial models were evaluated and fed an additional 50,000 positions to further force the AI to explore the entire tree, aiming to come up with a more definitive solution to the problem.
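
Stripped of KataGo’s specifics, the core trick is simply to start a fraction of self-play games from curated positions rather than an empty board. The sketch below is a hypothetical illustration of that idea (the helpers and data formats are stand-ins I made up, not KataGo’s real training code):

```python
import random

# Hypothetical stand-ins for a real self-play pipeline; a production system
# would use real board states, a neural network and a full game engine.
def empty_board():
    return "empty 19x19 board"  # placeholder position

def play_self_play_game(network, start_position):
    # Placeholder: a real implementation would run MCTS self-play and
    # return (position, policy, outcome) training records.
    return [(start_position, "policy", "outcome")]

def generate_training_data(network, n_games, seed_positions, seed_prob=0.25):
    """Mix ordinary self-play with games seeded from hand-picked positions.

    Seeding a fraction of games from curated positions forces the network to
    visit deep, unusual parts of the game tree (such as the difficult lines of
    Igo Hatsuyoron 120) that self-play from an empty board rarely reaches.
    """
    records = []
    for _ in range(n_games):
        if seed_positions and random.random() < seed_prob:
            start = random.choice(seed_positions)   # curated problem position
        else:
            start = empty_board()                   # normal game from scratch
        records.extend(play_self_play_game(network, start))
    return records

# Example: seed a quarter of games from (placeholder) Igo Hatsuyoron 120 positions.
seeds = ["hatsuyoron-120 main line", "hatsuyoron-120 side line"]
data = generate_training_data(network=None, n_games=8, seed_positions=seeds)
print(len(data), "training records generated")
```

The seeding probability and the pool of positions are the knobs that control how hard training is pushed towards the problem’s tricky lines.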

The final verdict

The final version of KataGo for Igo Hatsuyoron 120 took a staggering 11 months of training. The result is actually somewhat surprising: white wins! The best black can do is lose by one stone (assuming a komi of zero). The solution found is a variation of the previous best solution, which was discovered (without AI) several decades ago. This ties into what we discussed above: the aim of solving a problem such as this is not to find a single line where black wins, but to explore the game tree of possible moves while optimising the play of both black and white. Interestingly, the network was also very strong at normal Go, so it had clearly learnt generalisable rules that apply to the real game (further supporting the argument for human players to practise puzzles!).

So, has the problem been solved? Friday9i is “90% confident” that this is the best possible solution. Thomas Redecker is confident that KataGo plays black’s best line, and even with several alternative variations of this line, the final result remains a loss for black by one stone. This project is a fantastic application of AI to a specific problem, creatively combining deep learning with human knowledge to reach a new, stronger solution. However, without some remarkable advances in computing technology, we’ll likely never know whether this is the absolute best solution.

Joseph Early

PhD student at the Alan Turing Institute and the University of Southampton. Machine Learning and Explainable AI. www.jearly.co.uk
