Distilling the genius of human players into quantum error correction
Last year I made a citizen science game. It lets players have a go at cleaning the errors out of a quantum computer, a process we call decoding. They are given a problem that is quite easy to understand, but for which we don’t know good solution methods. By finding ways to get high scores, the players can actively take part in scientific research.
This problem is presented in the form of a simple puzzle game, based on a grid of numbers. It isn’t quite like any existing game, though it shares some similarities with Threes!, 2048 and Sudoku. I call it Decodoku.
Based on the methods developed by Decodoku players last year, I decided to make a bot. It was designed around the strategies of our high scorers, with the aim of achieving comparable scores itself. It could then be incorporated into decoding algorithms for real quantum computers.
It hasn’t achieved this quite yet. It gets fairly decent scores, but is certainly not as intelligent as the players who inspired it. Even so, it has found an important use: to greatly enhance the game.
The original version of Decodoku has a simple text-and-pictures tutorial. Players were given a little lecture on how to decode, and then left to get on with it. The current version is quite different. The tutorial is interactive, offering moves suggested by the bot. Players can learn on the job, with hints inspired by previous high scorers.
The original version also didn’t give players much help in trying to explain their method. Save files let players share the complete story of how they got their high score, but the burden of explaining why they made each move was still fully on the player.
This bug is one that we view as a feature. The aim of Decodoku is to let players have a taste of being a researcher, and struggling to figure things out is very much part of that.
Even so, it is good to have a wise and experienced supervisor to bounce ideas off. This is another role that the bot can play, at least to some extent. By comparing their moves to those of the bot, seeing when they agree and disagree, players might be better able to understand what they are doing and why.
To help the players with this, it might be good to see what the bot is actually thinking. That’s what we’ll be looking at for the rest of this article.
The basics of Decodoku
First we should take a minute to explain what job the bot (and the player) is expected to do.
In Decodoku, there are a bunch of numbers on a grid. Each belongs to a group. The numbers from each group add up to a multiple of 10.
Over time, the numbers change. New groups come. Old groups get bigger. Once one gets big enough, the game ends.
Your job is to prevent this. Keep the groups small by moving numbers around. Typically the groups will grow a little faster than you can deal with, so you are doomed to a Game Over at some point. But the longer you stave it off, the higher your score.
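The core rule can be captured in a couple of lines. This is a toy illustration, not the game's actual code: a group is assumed to be given simply as the list of numbers it contains.

```python
def is_valid_group(numbers):
    """A group's numbers always add up to a multiple of 10."""
    return sum(numbers) % 10 == 0

print(is_valid_group([7, 3]))     # 7 + 3 = 10, a multiple of 10
print(is_valid_group([9, 5, 6]))  # 9 + 5 + 6 = 20, also a multiple of 10
```

This is also why combining numbers helps: when numbers merge into a multiple of 10, they disappear from the grid.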
In tutorial games you are shown what the groups are. The image below is an example of this. Different groups are shown in different colours.
You won’t usually have enough moves to clear up everything, so you’ll need to prioritize to get a high score. That’s where the strategy comes in.
In a normal game you also have to work out which numbers belong to which groups. A hard game is the same, except that one square will always lie about its contents. But we won’t dwell much on these here, because the bot is designed specifically for tutorial games.
What the bot thinks
The bot thinks about each number, and asks the following three questions.
- How many numbers are in the same group? This includes the number itself. For example, the light blue 7 in the picture above is in a group with two numbers, and the green 9 is in a group with three.
- Where is the centre of the group? The "middle point" of a group isn’t uniquely defined, so we use the centroid. To see how this is defined for weird shapes like the United Kingdom, see here.
- How far is the number from its centre? Though this might seem quite straightforward, there are multiple ways to define distance. The bot uses the Manhattan distance, which is just the vertical distance plus the horizontal distance, because this reflects how the numbers can actually move.
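These three per-number questions are straightforward to compute. Here is a minimal sketch, assuming a group is given as a list of (row, column) cells; the function names are my own, not the bot's actual code.

```python
def group_size(group):
    """How many numbers are in the group (including the number itself)."""
    return len(group)

def centroid(group):
    """Centre of the group: the average of its cells' coordinates."""
    rows = [r for r, c in group]
    cols = [c for r, c in group]
    return (sum(rows) / len(group), sum(cols) / len(group))

def manhattan(a, b):
    """Vertical distance plus horizontal distance."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def distance_from_centre(cell, group):
    """How far a number sits from its group's centroid."""
    return manhattan(cell, centroid(group))
```

Note that the centroid can land between squares, which is fine: the Manhattan distance works just as well for fractional coordinates.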
The bot also thinks about each pair of numbers, and how they relate to each other. For each pair, it asks the following six questions.
- How far apart are they? We again use the Manhattan distance.
- Are they part of the same group? Moving numbers from different groups together forces their groups to merge, and hence get bigger. So it would seem like a good idea to only combine numbers from the same group.
- How far apart are they if you go via the centre of the group? To get this, just sum the distances to the centre of the group for both the numbers in the pair. It will be high if one is far out from the centre, or if both are a little far out. If both are quite far out, it will be very high, even if they are right next to each other.
- Do they add up to 10? Moving numbers together makes them add up, and multiples of 10 disappear. So if a pair add to 10, combining them would get rid of them entirely.
- How many helpful neighbours do they have? Perhaps just this pair doesn’t add up to 10. But does one have a neighbour that could make up the difference? How many such neighbours are there?
- How many neighbours does each have, not including the other? Are the two numbers each other’s neighbours in an island far from anyone else? Or does one have lots of other options?
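Most of these pairwise questions also reduce to a few lines each. The sketch below is an assumed representation, not the bot's source: `groups` maps each cell to a group id, `values` maps each cell to its number, and `neighbours` maps each cell to a set of nearby cells.

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def same_group(a, b, groups):
    """Are the two numbers part of the same group?"""
    return groups[a] == groups[b]

def distance_via_centre(a, b, centre):
    """Sum of both numbers' Manhattan distances to the group centre."""
    return manhattan(a, centre) + manhattan(b, centre)

def adds_to_ten(a, b, values):
    """Would combining the pair make a multiple of 10, removing them?"""
    return (values[a] + values[b]) % 10 == 0

def helpful_neighbours(a, b, values, neighbours):
    """Count neighbours of the pair that could make up the difference."""
    needed = (-(values[a] + values[b])) % 10
    return sum(1 for n in neighbours[a] | neighbours[b]
               if n not in (a, b) and values[n] == needed)
```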
Is this everything the bot needs to know? Probably not. If you think other factors are important, test your theory by playing the game, and tell us what you find.
What the bot does
Given all this information, the bot ranks the pairs. For the best-ranked pairs, the bot will suggest moving them closer together.
The ranking I chose is a bit strange, but I’ll try to explain it.
- Pairs that are part of the same group are better than those that aren’t.
- Pairs separated by a distance less than 4 are better than those separated by higher distances. The smaller the distance, the better. This prioritizes pairs that are very close. Why is 4 special? It isn’t! Other ways of doing this might well be better.
- For pairs with a distance of 4 or greater, bigger distances are better. If all close pairs are dealt with, this rule makes sure we don’t forget to bring in things that have floated far away.
- The larger the distance via the centre of the group, the better. If all else is equal, we focus on pairs that are both fairly far from the centre of their group.
- Pairs for which one has other neighbours are the best. Pairs for which neither has other neighbours are good too. Pairs for which both have other neighbours are okay. This prioritizes reining in numbers that are escaping from their groups over getting rid of isolated pairs. Pairs in the middle of a big patch of numbers are dealt with last.
In this ranking, earlier rules take precedence over later ones. So even if a pair seems great according to rules 2–5, if it doesn’t satisfy rule 1 it will be ranked lower than any pair that does.
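One way to implement this kind of precedence is a lexicographic sort key: Python compares tuples element by element, so earlier fields automatically outrank later ones. The field names below are my own reading of the five rules, not taken from the bot's source, and each field is arranged so that smaller means better.

```python
def rank_key(pair):
    # Rule 1: same-group pairs come first.
    same = 0 if pair["same_group"] else 1
    # Rules 2 and 3: below distance 4, closer is better;
    # at 4 or above, farther is better.
    d = pair["distance"]
    dist_rank = (0, d) if d < 4 else (1, -d)
    # Rule 4: a larger distance via the group centre is better.
    via_centre = -pair["distance_via_centre"]
    # Rule 5: exactly one number with other neighbours beats
    # neither having any, which beats both having some.
    others = (pair["a_has_others"], pair["b_has_others"])
    if others.count(True) == 1:
        neigh = 0
    elif not any(others):
        neigh = 1
    else:
        neigh = 2
    return (same, dist_rank, via_centre, neigh)

# The best pairs come first after sorting:
# ranked = sorted(pairs, key=rank_key)
```

Because tuple comparison only looks at a later field when all earlier ones tie, this reproduces the "earlier rules take precedence" behaviour without any explicit weighting.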
You might disagree with how this ranking is defined. You might notice that some of the information the bot was interested in has been left out. You might have ideas on how this can be improved. I hope so! Test your theories by playing the game, and tell us what you found out.
Over to you
I don’t expect Decodoku players to worry too much about the details of the bot. I don’t expect them to try making a bot themselves.
What I would like them to think about is what questions they ask when confronted with a Decodoku grid, and how they use that information. This article gives an example of this, but players don’t need to be as formal. Any tips and tricks might help us to write a great algorithm for quantum error correction. Let us know what you find out.