Magic: the Turing Machine

Sean Li
Published in henngeblog · 9 min read · Feb 20, 2020

What are you about to read? Mainly something to do with this article:

“Magic: the Gathering” is officially the world’s most complex game.

What does that even mean? What a clickbait-y headline! What does ‘official’ mean here? What is complexity?

I’m not going to answer those questions, but I’ll start by answering this:

What is Magic: the Gathering?

Magic: the Gathering (Magic or M:tG for short) is a trading card game. You take the role of a powerful wizard, summoning creatures and casting spells in order to overcome your opponents.

It’s a game of many pieces and objects, where you use strategy and tactics to try and outwit your foes. There are many different resources to manage, and you are given many opportunities to interact with your opponents.

I’ve been playing it for nearly 20 years now, and competitively for some of that time. That’s how I came across the article: of course, people who know me sent it to me as soon as they saw it.

Here’s me at a couple of tournaments:

GP Nagoya 2016 https://mtg-jp.com/coverage/gpnag16/article/0016430/
The 6th God of Limited Tournament (2019) https://article.hareruyamtg.com/article/35648/#3

So that’s Magic. Let’s get back to looking at the article.

What’s the article about?

The article reports on the findings of a study posted in 2019:

The study by Churchill et al. embeds a Turing machine into a game of Magic: the Gathering to show that it is Turing complete. They state that determining the outcome of a game of Magic is equivalent to solving the ‘famous’ halting problem. They hypothesize that:

Determining the outcome of a game of Magic: the Gathering in which all remaining moves are forced is undecidable.

What does this all mean and why should we care? Let’s take it a step at a time, with some definitions of the jargon just mentioned.

Definitions and background

A Turing machine is an abstract machine invented by Alan Turing in 1936. A Turing machine consists of:

  1. A tape of unbounded length, made of cells that may be blank or have symbols written on them.
  2. A machine state, drawn from a finite set of states, that the machine keeps track of as it runs.
  3. A set of instructions that tells the machine what to do at each step: what to write, how to move the tape, and whether or not to change state, based on the symbol read and the current state.
  4. A read/write head that reads one cell of the tape at a time and then carries out whichever instruction matches.

With just these components, a Turing machine can be made to simulate any algorithm, given the right set of states, instructions and symbols. In other words, a Turing machine can compute anything a modern computer can.
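To make those four pieces concrete, here is a minimal sketch of a Turing machine simulator in TypeScript. The states, symbols and example rules are made up purely for illustration and have nothing to do with the paper’s construction.

```typescript
// A minimal Turing machine simulator. The types mirror the four components
// listed above; the example rules at the bottom are made up for illustration.
type Sym = "0" | "1" | "_"; // "_" stands for a blank cell
type Move = "L" | "R";

interface Rule {
  state: string;     // state the machine must be in for this rule to apply
  read: Sym;         // symbol that must be under the head
  write: Sym;        // symbol to write into the cell
  move: Move;        // direction to move the head
  nextState: string; // state to switch to afterwards
}

function run(input: Sym[], rules: Rule[], start: string, halt: string): Sym[] {
  // The tape is a map from position to symbol, so it can grow without bound.
  const tape = new Map<number, Sym>(input.map((s, i) => [i, s] as [number, Sym]));
  let head = 0;
  let state = start;

  while (state !== halt) {
    const read = tape.get(head) ?? "_"; // unwritten cells are blank
    const rule = rules.find(r => r.state === state && r.read === read);
    if (!rule) throw new Error(`no rule for state ${state} reading ${read}`);
    tape.set(head, rule.write);
    head += rule.move === "R" ? 1 : -1;
    state = rule.nextState;
  }
  return [...tape.entries()].sort(([a], [b]) => a - b).map(([, s]) => s);
}

// Example machine: scan right, flipping every 0 to 1, and halt at the first blank.
const flipZeros: Rule[] = [
  { state: "scan", read: "0", write: "1", move: "R", nextState: "scan" },
  { state: "scan", read: "1", write: "1", move: "R", nextState: "scan" },
  { state: "scan", read: "_", write: "_", move: "R", nextState: "done" },
];
console.log(run(["0", "1", "0"], flipZeros, "scan", "done")); // [ "1", "1", "1", "_" ]
```

The whole machine is nothing more than a lookup table of rules plus a loop, which is exactly why it can be smuggled into such unlikely places.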

A system is Turing complete if it can compute anything a Turing machine can. Almost all modern programming languages are Turing complete.

The Turing machine was originally conceived whilst attempting to prove whether or not undecidable problems exist. These are problems where it is impossible to create an algorithm that will always give a correct yes or no answer: problems that cannot be solved by a computer, even with unlimited time and processing power.

Turing proved that these problems do exist, by showing an example — the halting problem. It essentially boils down to:

Can an algorithm be created that looks at a program and its input, and decides whether or not it finishes running?

Turing showed that the answer to this is “No”. You can create an algorithm that is correct for some programs and some inputs, but not one that is correct for every program and every input. The simplified reason is that you could feed any such algorithm a program that does the exact opposite of whatever it predicts.
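The argument can be sketched in code. Suppose we were handed a halts function that always answered correctly; the paradox function below (the classic construction, written here as a TypeScript sketch around a purely hypothetical halts) could then never be classified consistently.

```typescript
// Pretend this exists and is always right: does `program`, run on `input`,
// eventually finish? (It is deliberately only declared, never implemented.)
declare function halts(program: string, input: string): boolean;

// A program that does the exact opposite of whatever `halts` predicts
// when it is asked about itself.
function paradox(source: string): void {
  if (halts(source, source)) {
    // predicted to finish, so loop forever instead
    while (true) { /* never stops */ }
  }
  // predicted to loop forever, so stop immediately instead
}

// Running paradox on its own source code contradicts the prediction either
// way, so no all-knowing `halts` can exist in the first place.
```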

I’m not going to go into great detail here, as I don’t think more words will make it easier to understand, but here’s a video more specifically on the halting problem. The University of Cambridge also has a nice little page with diagrams that might help with conceptualising Turing machines, and here’s a video that sums up a lot of the above terminology.

So, what about the paper? Churchill et al. aimed to show that, similar to the halting problem, there is no algorithm that can take any game of Magic as its input and determine if there is a winning move. Basically, they set out to show that no such algorithm could exist.

How do they embed a Turing machine into a game of Magic?

Magic has an extremely detailed comprehensive rule book of over 230 pages, describing the rules engine that underlies the game. Fortunately, you don’t need to know its exact ins and outs to play, as the engine runs invisibly under the surface. The game is usually played with many shortcuts, and the rules are just there to explain exactly how every mechanic works, removing ambiguity about how effects play out.

Magic has been around for over 25 years and, as such, the number of unique cards has passed 20,000. There is a huge range of mechanics and such a wide variety of effects that some very complex constructions can be built. All of this, together with the deep rules engine, makes it possible to embed a Turing machine in the game.

As we talked about above, there are some things that need to be encoded into the game to set up the Turing machine:

  1. The unlimited length of tape to perform computations on and with.
  2. The state of the machine.
  3. The set of instructions that tell the machine what to do in each computational step, depending on what is read from the tape and the state of the machine.
  4. The read/write head of the machine that performs the computational steps.

One of the unique features of the game is that there are virtual game objects, ‘tokens’, that are generated by regular card effects. These can be represented by anything and don’t need to literally exist in the same way that a deck of ‘real’ card objects does. This means that in a game of Magic, there can be an unlimited number of game objects.

Two cards used in the study that create virtual ‘token’ objects, and are used for the tape and the machine state.

This, along with the multitude of characteristics that Magic cards can have, allows the authors to construct the unbounded tape used in the Turing machine. The state of the machine is stored using two sets of very similar, but deliberately different, cards. You can see a couple of the cards above.

The set of instructions used in a Turing machine are a set of conditional statements that cause different behaviours based on the state of the machine and the current cell of the tape being read. In Magic, many cards have ‘triggered abilities’, which are equivalent to conditional statements. These start with the words “at”, “whenever” or “when”. Using a carefully selected array of cards and effects, the authors of the paper are able to create the set of instructions needed for the Turing machine.

Examples of ‘triggered abilities’
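As a loose illustration of that analogy (my own simplification, not the card set or encoding used in the paper), a triggered ability and a Turing machine instruction share the same “when this condition holds, apply this effect” shape:

```typescript
// Both a triggered ability and a Turing machine instruction are a condition
// paired with an effect. The "card" below is a made-up paraphrase for
// illustration, not an actual card or the paper's encoding.
interface DeathEvent { creatureColor: string }

interface Trigger {
  condition: (event: DeathEvent) => boolean; // "Whenever [this] happens..."
  effect: (event: DeathEvent) => void;       // "...do [that]."
}

const tokenMaker: Trigger = {
  // "Whenever a black creature dies, ..."
  condition: event => event.creatureColor === "black",
  // "...create a new token" (standing in for writing the next tape cell)
  effect: () => console.log("create a token"),
};

function onCreatureDeath(event: DeathEvent, triggers: Trigger[]): void {
  // The rules engine checks each triggered ability against the event, just as
  // a Turing machine looks up the instruction for its current state and symbol.
  for (const t of triggers) {
    if (t.condition(event)) t.effect(event);
  }
}

onCreatureDeath({ creatureColor: "black" }, [tokenMaker]);
```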

The read/write head is constructed by assembling a set of cards that forces moves to occur. Executing a computation step involves forcing a player to cast a specific spell, which sets off a cascade of effects that resolve in sequence. Churchill et al. chose to encode these four components into a two-player game of Magic, as that is the format most commonly played in tournaments.

With these parts in place, the Turing machine has been encoded. All that’s left is to let it run. Neither player has any decisions to make, and the game proceeds in a predetermined fashion. This Turing machine sometimes stops, and other times it continues in an infinite loop. Despite every move being forced and determined, there is no general way to predict whether or not the machine will stop. Determining the outcome of the game is equivalent to determining whether a Turing machine halts.

By demonstrating this, Churchill et al. show that their hypothesis holds: in some situations during a game of Magic, it is impossible to determine the correct move.

Why is this special?

It is an incredibly unlikely scenario, but it can be created using a tournament-legal Magic deck and obeys the rules exactly — it can be set up in an actual tournament game. They use this as a basis for suggesting that Magic is the most computationally complex game that is played in a real-world setting.

Compare this to incredibly complex games such as Chess or Go, where the set of possible positions and moves is finite, even though the number of combinations and permutations is astronomically large. We may not currently have the computing power to brute-force them, but they are theoretically computable.

This is mainly significant for further algorithmic game theory research, as it goes against leading formal theories of games, which assume that real-world games are computable. It is less practical: the outcome of the study doesn’t teach us how to play Magic any better.


Additionally, Magic being Turing complete means that it can technically be used to compute anything a computer could compute. Of course, this is highly impractical and you would have to find a way to encode the algorithm you wanted and a way to decipher the result. The idea is nice, though.

What do I personally think?

I think it’s a neat study and has obviously taken a lot of work. I guess that by the definition of computational complexity, Magic might be “officially the world’s most complex game”, but that claim still irks me. It is definitely very complex, and probably impossible to play perfectly, but I do feel that games like Chess are harder to play well. The conclusion of the study is less practically focused, however, and is mainly significant from a research perspective.

Wrapping up

Whilst reading, I found some lists about other systems that are ‘unintentionally’ Turing complete. Some examples:

  1. Encoding a Turing machine in Microsoft Excel using only formulas.
  2. A Microsoft PowerPoint presentation file that uses AutoShapes, animations and hyperlinks to perform computational tasks. While not technically Turing complete, as it requires user input, the creator has written an amusing paper about this and made an enjoyable YouTube video of the presentation.
  3. TypeScript is a modern, statically typed, Turing complete programming language. Its type system exists just to ensure type safety for the language’s code, yet an issue on the language’s GitHub repository, “Typescript’s Type System is Turing Complete”, shows that even this supporting type system can be used to perform computations (a small sketch of the idea follows after this list).
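As a tiny taste of what that means, here is a well-known style of type-level trick (a standalone illustration, not taken from the linked issue): addition computed entirely by the type checker, with no code running at all.

```typescript
// A tuple type of length N, built by the type checker recursing on itself.
type BuildTuple<N extends number, T extends unknown[] = []> =
  T["length"] extends N ? T : BuildTuple<N, [...T, unknown]>;

// "2 + 3" computed purely at the type level: concatenate a 2-tuple and a
// 3-tuple, then read off the length. No code runs; the compiler does the work.
type Five = [...BuildTuple<2>, ...BuildTuple<3>]["length"];

const ok: Five = 5;      // type-checks, because Five resolves to the literal 5
// const bad: Five = 4;  // the compiler would reject this line
```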

Looking into this paper gave me a new appreciation for these ideas and opened my eyes to concepts I wasn’t aware of. I hadn’t really thought about some problems being non-computable; I was under the impression that a computer could solve anything given enough time and power.

I want to end by sharing a video about computational complexity and problems. Churchill et al.’s paper mentioned terms including NP and coNP-complete, prompting me to have a little look into these. I’m glad I did, as the video was incredibly inspiring and insightful, and I definitely recommend watching it.
