This is the transcript of a talk given at the PCG workshop at the FDG 2020 conference. You can find the paper it’s based on here. It’s all about procedural generation, which is basically making content (like terrain, levels, puzzles, etc) for things that need content (like games).

I’m James Wootton from IBM Quantum, where we work on quantum computation. This is a new technology, so you’ve probably not heard it being applied to procedural generation before. I’ve been looking into how it might be done, and that’s what I’m here to tell you about!

First, I’ll need to convey some idea of what quantum computers are and how helpful we can expect them to be. I’ll spare you from all the details about how they work. Suffice it to say that they represent a completely different model of computation than both conventional digital computers and analogue computers. They don’t just speed up normal software through some quantum magic. Instead they require a completely different approach to software, as well as different hardware to run it on.

It turns out that quantum computation could one day perform some tasks far more efficiently than normal computation. This will enable us to solve problems that are currently intractable. After decades of research we now know many examples of quantum algorithms that will provide these “Quantum Advantages”. They relate to a range of possible applications, including some that could be relevant for procedural generation: solving optimization problems, solving constraint satisfiability problems, performing graph-theoretic analysis, and more.

The hardware required for quantum computation has also been in development over the last few decades, but building it is a tricky business. Imperfections are inherent, and our favorite methods to work around them require huge overheads in software.

To be specific, one important parameter in describing a quantum device is the number of qubits. These are basically bits, but implemented in a quantum way. To run these flagship algorithms we need many thousands of qubits, which is not something we will have for a good few years.

There are a bunch of other important considerations which mean that we really shouldn’t compare devices with qubit number alone.


But since we won’t be drilling down into those details, let’s just continue to use qubit number as our main metric of comparison.

We don’t currently have many thousands of qubits, or even one thousand. We don’t even have 100. The largest device that can be used at the moment has 65 qubits. Though these devices are not enough to run the flagship quantum algorithms we know and love, they do have the potential to offer results that conventional computers cannot. This is demonstrated by the hardness of emulating them: you would need to book the world’s largest supercomputer for a couple of days to even make an attempt.

So, we live in interesting times! The hardware can offer unique answers, but our decades of research have not prepared us to ask the right questions. A major focus of current research is therefore to try and figure out how to do something useful with these machines as soon as we can.

As well as the biggest and best of current devices, there are also many at the more modest level of around 20 qubits. And there are also a whole bunch with up to 15 qubits that have been made available to the public for free by us at IBM.

Also, we have been building some great emulation software to aid in design and testing. Though it is a massive computational task to emulate more than 50 qubits, even your laptop wouldn’t have much trouble with up to around 20 qubits.

This all combines to make ~20 qubits an important landmark. Whether we use real quantum hardware or emulation, it is really easy to run quantum software targeted at this level.

So, what does this all mean for procedural generation? Well, that depends on which of these three eras you are looking at: today, the upcoming era where devices have hundreds of qubits, or the future where devices have over a thousand qubits.

Once we get quantum hardware that can run standard quantum algorithms, we’ll get new tools to use for procedural generation. So, we think that quantum computers of that time will be able to achieve unique things that are useful for this field.

At the other extreme, quantum software targeted at the 20-qubit level cannot do anything unique. The very fact that we can run it on an emulator means that you don’t even need the quantum hardware. But can the quantum software nevertheless do something useful? Can we write a 20-qubit quantum program that will give results that a real-life end user can use in a real-life situation?

If we can achieve usefulness with 20 qubits, then we can aim to keep on expanding that usefulness as we target ever larger quantum hardware. The middle era becomes one of exploring ever more sophisticated ways for quantum computation to offer useful things for procedural generation. We’ll begin by making tools that we probably wouldn’t have thought of if we weren’t thinking in terms of quantum software. Then at some point we’ll move on to things that we can’t do without using quantum hardware.

Reaching such a point is known as ‘Quantum Advantage’, and is a big deal in the quantum computing community. Actual users, however, will care more about whether it is actually useful to run any given piece of quantum software. They won’t care about the digital vs. quantum hardware wars going on in the background. They just want the results at the end. So, it’s achieving usefulness with quantum software that I prefer to focus on.

But let’s not get ahead of ourselves. The first step towards this era of expanding usefulness is to prove that even quantum software for up to 20 qubits can be useful. So let’s do that!

Now’s the time to get specific. Procedural generation means that I need to generate something. But what? Terrain generation is a classic, so let’s start with that!

Perlin noise is a ubiquitous tool for procedural terrain generation, and it would be great to make something equally useful with quantum software. So that was the aim I started with.

There is also another, much less sophisticated method for terrain generation: create a bunch of random points, concentrated where you want high ground, and then apply a blur effect. Implementing something like this seemed like a more attainable goal for a first step, so that’s what I did.
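As a point of reference, the classical version of this recipe is easy to sketch. Here is a minimal Python illustration (the function name, grid size and box blur are just illustrative choices, not the quantum method described in this talk):

```python
import numpy as np

def simple_terrain(size=32, n_points=40, blur_passes=3, seed=0):
    """Toy terrain: scatter random points, then blur them into smooth hills.

    This is a plain classical sketch of the 'random points + blur' recipe.
    """
    rng = np.random.default_rng(seed)
    height = np.zeros((size, size))
    # Concentrate the points near the centre, where we want high ground
    rows = rng.normal(size / 2, size / 6, n_points).astype(int) % size
    cols = rng.normal(size / 2, size / 6, n_points).astype(int) % size
    height[rows, cols] = 1.0

    for _ in range(blur_passes):
        # 3x3 box blur with wrap-around edges
        height = sum(np.roll(np.roll(height, dr, 0), dc, 1)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)) / 9
    return height

terrain = simple_terrain()
```

The result is a height map whose high ground sits wherever the seed points were concentrated.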

For this I started by developing a way to encode height maps as quantum circuits (which are the fundamental elements of quantum software). These have some similarities to those of the Boolean circuit model of digital computation. The qubits are the bits, and they pass through various operations as the computation proceeds.

Once we can encode height maps as circuits, we can manipulate them. The encoding I used focuses on squeezing as many points into as few qubits as possible. This doesn’t leave a lot of flexibility to offer much high-level control. However, pretty much any change you make to the circuit will induce some kind of interference effect.

Mostly I work with operations that can be thought of as partial forms of the Boolean NOT gate. When applied to normal bits, the NOT flips the value between 0 and 1. With quantum gates we can parameterize this to perform half a NOT, or a quarter, or any fraction you want.

If we apply this operation to all our qubits, then we can look at how the height map changes as we increase the fraction. We start off seeing something like a blur effect. Interference effects then begin to take over, creating patterns that we wouldn’t get from a simple blur.
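To make this concrete, a partial NOT can be written as a rotation of the qubit’s state vector (in Qiskit this is the parameterized x rotation, `rx`; the helper name below is just for illustration):

```python
import numpy as np

def partial_not(fraction):
    """A 'fraction of a NOT': rotation by fraction * pi around the x axis.

    fraction=1 gives a full NOT (up to a global phase); fraction=0.5 is half a NOT.
    """
    theta = fraction * np.pi
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

half = partial_not(0.5)
full = half @ half           # two half-NOTs make a whole NOT
zero = np.array([1, 0])      # the state |0>
print(np.abs(full @ zero) ** 2)   # output probabilities after a full NOT
print(np.abs(half @ zero) ** 2)   # 50/50 after half a NOT
```

Applying half a NOT twice gives exactly one NOT, but applying it once leaves the qubit balanced between 0 and 1, which is where the interference effects come from.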

In the example shown here, we start off with two seemingly arbitrary points in (a). They start to blur, but then the interference effects kick in. This results in a pattern emerging: in this case the chessboard pattern of (f). This particular result is explicitly due to the two pixels I started off with. If you use a different seed, you’ll see a different journey.

As well as generating these patterns, I have also used them. My main use is as a substitute for high-frequency Perlin noise in terrain generation. So, you make the main profile of, for example, an island through some other method. Like the one in (c) here. Then you make a seed set of pixels, as in (a), and quantum blur it to generate a pattern like the one in (b). Then you splat the pattern all over the profile to create your final terrain.

Here you can see a 2D example, as well as a 3D rendering that was used in our education game QiskitBlocks. Details such as grass type and tree placement in the 3D rendering were also generated using the quantum process.

Since RGB images are basically just three height maps stuck together, we can also manipulate images. This could be simply to make weird-looking images using the interference effects. A more sophisticated use is to encode a pair of images and induce a teleportation-like effect between them. Quantum teleportation is an actual phenomenon in quantum systems, where information is transferred between qubits, so it is something that we can actually write a program for. Of course, it’s not the same as teleportation in Star Trek, but we can use it to make transition animations.

The idea behind the encoding can also be used for other forms of data. I’ve tried music

and a Mario level:

Evidently I can use it to make content for some nice tweets. But is the method actually useful? For this I need external validation.

There are two major examples of external use. One is a game studio that I’ve been working with, who are using the method for elements of procedural generation in an upcoming game. The game has a sci-fi setting, with the quantum method here injecting a genuine flavor of science into their fiction.

The other is that the artist Libby Heaney used some of these ideas as the starting point for some of her own art.

The fact that the results emerge from quantum software is something that is highlighted in both these use cases. There is some form of what I call ‘algorithmic provenance’ going on here: It is important to the user that these results originate from the quantum region of algorithm space, and aren’t just sparkling linear algebra.

This is not a sustainable feature. There won’t always be people wanting to use quantum software just because it is quantum. But it is a start, so I’ll take it. These use cases show that the quantum blur method is useful to external parties, at least to some degree.

In conclusion, the era of using quantum software to create actually useful methods for procedural generation starts now!

With this method described here, it is better to use emulation than real quantum hardware, but that will start to change soon. In fact, I’ve already used one of our biggest devices in a completely different method for PCG.

Introducing: A Quantum Procedure for Map Generation

So expect to see me again in the future, still banging the drum of quantum computing.

Until then, you can try out using the quantum blur effect yourself. It’s native to Python, but we have a Unity plugin too. Hopefully these can help provide even more ways to be useful!

Introducing: Procedural generation using quantum computation was originally published in Qiskit on Medium, where people are continuing the conversation by highlighting and responding to this story.

Hi, I’m James Wootton. I work at IBM Research. We build quantum computers and I used a couple of them to do some map generation for games. That’s what I’m going to tell you about today.

So what are quantum computers?

Well first let’s think about our normal digital computers for a moment. These express information in bits, and any algorithm can be compiled down into very simple operations on one or two bits. In saying this, I’m really thinking of the Boolean circuit model as a model for what a digital computer is. As an example, the simplest of these Boolean operations is the NOT gate, which just flips a single bit between 0 and 1.

Quantum computers are the same, except that the bits are stored in a way that can be described with quantum mechanics. Quantum mechanics is a more general theory than the usual Boolean logic that we use to describe bits. So this gives us more freedom, and allows us to manipulate the information in new and different ways. We get a set of basic quantum operations that we can use to build algorithms in very different ways. Some problems that are intractable for digital computers, because they need unreasonable time or resources to run, will become solvable with quantum computation.

This all requires much larger and better quantum hardware than we currently have. However, we do have prototype devices that can be used even now. Anyone can go to quantum-computing.ibm.com, create a quantum program, and send it off to our labs to run. It’ll go through control hardware which turns it into a series of microwave pulses. These then go into a fridge which cools a quantum chip down to nearly zero Kelvin, runs the program, and then sends back an output. This gets decoded by the control equipment and turned into nice, familiar 0s and 1s for you to look at.

So what does this hardware look like?

It is made up of ‘qubits’. These are quantum bits: quantum systems used to store bit values.

We can manipulate single qubits and also pairs of qubits in various ways. Some look like Boolean operations (like NOT and XOR), and others don’t. However, for the two qubit operations, we are limited in our choice of which pairs of qubits can be used. This is governed by the so-called ‘coupling map’ of the device. For example, in the device called *Rochester* that you can see here, the qubit labelled 0 can only directly interact with qubits 1 and 5.

Also, every manipulation we can perform, as well as the act of reading out the result, is subject to imperfections. Everything that can go wrong, will go wrong with some probability. This means that, if we were to attempt the arbitrarily long and complex processes found in most of the algorithms we know, we can expect to get nonsensical results.

To get sensible results from near-term hardware, we can try to base our algorithms explicitly on the things that quantum computers do naturally. One example is the generation of quantum interference effects. Doing this with a quantum computer is almost as easy as doing it with water: like throwing a rock into a pond!

Another example is the simulation of quantum dynamics. Qubits are quantum particles that evolve and interact according to the operations applied to them. If we want to study a process that acts like this, then it is trivially easy to get a quantum computer to help.

The application that we will look at now is exactly of this form. We will find something that we want to simulate for the purposes of procedural generation, argue that it is naturally captured by the dynamics of qubits, and then implement the simulation on a quantum computer.

To explain how, let’s have a closer look at qubits (although, admittedly, not close enough).

The contradiction at the heart of qubits (which we won’t have time to resolve) is that:

•They cannot store any more information than a single bit.

•Their internal state is more complex than that of a single bit.

So they are more than just a bit, while being no more than just a bit. Because of quantum!

The internal state of a qubit can be described in terms of three real numbers, ⟨X⟩, ⟨Y⟩ and ⟨Z⟩, each between +1 and -1. These are subject to the constraint ⟨X⟩² + ⟨Y⟩² + ⟨Z⟩² ≤ 1, which you can see here on the slide. This description is exactly the same as that of a much more familiar system: a ball. Any point on or in a sphere of unit radius can be described in terms of the Cartesian coordinates x, y and z. None of these may exceed the radius in either direction, and the distance from the center shouldn’t exceed the radius either. This is all exactly captured by the same conditions.

From this coincidence we get a nice spherical visualization of the qubit, which is known as the ‘Bloch sphere’.

Let’s just focus on the value of ⟨Z⟩. If we ask the qubit for an output, this value will tell us how likely we are to get a 0 or 1. For ⟨Z⟩=1, when we are at the north pole, we are certain to get 0. For ⟨Z⟩=-1 (the south pole) we are certain to get 1. For ⟨Z⟩=0 (any point along the equator) the outcome will be random. Anything else will be also random, but with a bias towards the closest pole.

The reverse of this is how we can calculate what the value of ⟨Z⟩ is for a qubit: run the process many times, determine what the probability is for the 0 and 1 outcomes, and use this to estimate ⟨Z⟩.
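In code, this estimation is a one-liner over the counts dictionary that a quantum job returns. A minimal sketch (the function name is illustrative):

```python
def expectation_z(counts):
    """Estimate <Z> for one qubit from measurement counts.

    counts: a dict like {'0': 412, '1': 612}, as returned by a quantum job.
    <Z> is the probability of outcome 0 minus the probability of outcome 1.
    """
    shots = sum(counts.values())
    return (counts.get('0', 0) - counts.get('1', 0)) / shots

print(expectation_z({'0': 1000}))           # -> 1.0 (north pole: always 0)
print(expectation_z({'0': 500, '1': 500}))  # -> 0.0 (equator: fully random)
```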

Note that the algorithm for map generation we present here depends on the values of the variables estimated in this way, not on the random outcomes themselves. It is therefore a (mostly) deterministic process.

For n qubits we need 4ⁿ variables (which is exactly why these things are hard to emulate on standard digital hardware).

For example, ⟨ZZ⟩ describes how likely the outputs of two qubits are to agree. ⟨ZZ⟩=1 means definitely agree (both 0 or both 1). ⟨ZZ⟩=-1 means definitely disagree (so 01 or 10). The rest of the variables describe other important details regarding the way in which the qubits are correlated.
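The same counts-based estimation works for ⟨ZZ⟩: agreeing outcomes count as +1 and disagreeing ones as -1. A minimal sketch along the same lines:

```python
def expectation_zz(counts):
    """Estimate <ZZ> for a pair of qubits from measurement counts.

    counts: a dict like {'00': 500, '11': 500}. Agreeing outcomes
    ('00', '11') contribute +1; disagreeing ones ('01', '10') contribute -1.
    """
    shots = sum(counts.values())
    total = sum(n if bits[0] == bits[1] else -n for bits, n in counts.items())
    return total / shots

print(expectation_zz({'00': 500, '11': 500}))  # -> 1.0 (always agree)
print(expectation_zz({'01': 500, '10': 500}))  # -> -1.0 (always disagree)
```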

These correlations also satisfy constraints. Some are similar to the single qubit ones, allowing us to draw more spheres. There are also things like the monogamy of entanglement: Put simply, the more correlated two qubits are with each other, the less they can be correlated with other qubits.

We are going to use qubits to generate geopolitical maps. This will be done by simulating the sort of process you see in a game like ‘Civilization’: a bunch of settlers around the world all suddenly decide to found nations. These then grow and interact.

A quantum process is used to decide how the nations will act. Each nation will correspond to one qubit. The ⟨X⟩, ⟨Y⟩ and ⟨Z⟩ variables correspond to three different policies that a nation can pursue: attack, defense, and expansion into unclaimed territory.

We would expect that no realistic nation can maximally do all three things at once. Limited resources will mean that priorities need to be decided upon. In this way, the natural constraint on the state of a qubit automatically restricts the simulation to realistic behavior for nations.

Also, nations do not make their decisions independently. The variables describing correlations will cause their decisions to be correlated also. The constraints on correlations will also ensure that the behavior remains realistic. For example, the monogamy of entanglement will ensure that two nations at war with each other will focus their decisions on each other.

Admittedly, the constraints of quantum states are not a perfect reproduction of ‘The Art of War’. Someone not motivated by a love of quantum computing probably wouldn’t have come up with this method. But it seems to work quite nicely.

The quantum decision making process is embedded into a ‘game’. Nations get turns in which they can place a city on the map. Cities exert influence over surrounding territory. This then determines which nation controls territory, and can even result in cities changing loyalty.

These results are then fed back into the simulation. If a nation loses territory, for example, its qubit state will be manipulated to make it more defensive. If a city changes hands, the correlations between the two nations involved will be altered to reflect a state of war.

All of this is implemented in Python. The ‘qiskit’ framework is used to create and run the quantum jobs. More specifically, I built a package called ‘QuantumGraph’ on top of Qiskit, optimized exactly for these kinds of applications.

It is suitable for all your procedural generation needs! (or at least some of your quantum ones)

Put it all together, and we have a process that generates geopolitical maps growing over time.

If you hand over control of a nation to a player, you also have a game. The quantum process then serves as the AI for opponents (and also for an advisor to the player).

So how intelligent is this AI? Admittedly, it is not the best you will have ever seen. But it should at least be better than random decision making.

To test this, we ran a process in which all nations are initialized in the state with ⟨X⟩=⟨Y⟩=⟨Z⟩. Since they have no bias to any particular policy, they make random decisions at first. Half of the nations (which we call the ‘standard’ nations) then undergo the standard process in which gains and losses of territory are fed back into the qubit states. This is not done for the other half (the ‘opponent’ nations). So these remain random, and insensitive to what is going on.

We’d expect to see the standard nations doing better, because they base their decisions on what is actually happening. They should therefore gain more territory than their opponents. That’s exactly what we see when we emulate how the process would run on perfect quantum devices, using standard digital computers. This is only possible when we have small numbers of nations.

Note that I tried to use the word ‘simulate’ in the paper to refer to quantum computers being used for simulation tasks, and ‘emulate’ for when a digital computer is used to reproduce the results we would expect from quantum devices. But when doing this, I forgot about the titles of the figures, as you see here. They should actually refer to emulation.

Now for real hardware: 28 nations on the 28 qubit device called ‘Cambridge’, and 53 nations on the 53 qubit ‘Rochester’. In both cases, evidence of the ‘intelligence’ remains. It is not quite as clear as before, perhaps due to the effects of noise.

What we have seen here is an initial and rudimentary example of current quantum devices doing a task that is relevant for procedural generation, and for games.

If we look to the history of using computers for games, what can we compare it to? The goal would be to develop a quantum milestone equivalent to 1962's *Spacewar!*: the first example of a novel and unique game that was made possible through the new technology.

Is this map generation process an example of novel and unique procedural generation, or an AI for a novel and unique game?

Probably not.

Perhaps a more realistic comparison is the Checkers AI developed at IBM in the 1950s: more about testing the capabilities of the new technology than giving users something that they need.

So, in conclusion, quantum computers exist. They are currently limited, but we can use their intrinsic ability to perform quantum simulations for procedural generation.

What about their ability to generate interference effects? Can that be used for procedural generation too? You can find out about that in my talk in the PCG workshop at FDG next month!

Introducing: A Quantum Procedure for Map Generation was originally published in Qiskit on Medium.

First, we need to be realistic with our scope. No world crisis will be solved with a current quantum computer. No fortunes will be made. When thinking of these current devices as a computer, you have to think of them as pretty bad computers. The limited number of qubits means that they can’t do much, and the effect of errors means they can’t do even that very well.

Instead, we can think of these devices as pieces of experimental physics equipment that we can run tests on. From that perspective, their size and precision are very impressive. Never have we had such control over such a large quantum system. They represent the frontier of what we have been able to probe experimentally. With them, we have an opportunity to test whether quantum theory continues to hold in this unexplored territory.

It was in 2016 that IBM handed this opportunity over to the world. At that time, I worked at the University of Basel and primarily did research on quantum error correction — ensuring that we don’t lose quantum information to the effects of noise and quantum decoherence. So I started to probe this particular corner of quantum theory: if we implement elements of quantum error correction on real devices, do they behave as we would expect them to?

I wrote some papers with those results. These are now among the more than 200 papers that have been written by scientists around the world who have performed experiments on IBM’s quantum devices.

Experimenting with the world’s most advanced technology

Using prototype quantum computers for science is obviously something that is easier when you are a professional scientist. However, given the right resources and tools, this is something that everyone else can have a go at as well.

This was one of the main motivations for creating the topological_codes module in Qiskit. The module has all of the basic tools you need to create experimental tests of quantum error correction. Specifically, it helps set up and analyze instances of the so-called ‘repetition code’, the simplest example of something that uses the principles of quantum error correction to store information and protect it from errors. The repetition code uses multiple physical qubits to simulate a logical bit, rather than a logical qubit. To store a bit you simply repeat the desired value many times, to make it unlikely that the majority of copies will get garbled by errors. It is simple enough that it can be run on current devices, and used to see how well the ideas of quantum error correction actually work.
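The underlying idea is easy to sketch classically. The snippet below shows just the majority-vote principle, not the actual topological_codes API:

```python
def encode(bit, distance):
    """Encode a logical bit as `distance` repeated copies."""
    return [bit] * distance

def decode(bits):
    """Recover the logical bit by majority vote."""
    return int(sum(bits) > len(bits) / 2)

codeword = encode(1, 5)   # [1, 1, 1, 1, 1]
codeword[0] = 0           # noise flips one copy...
codeword[3] = 0           # ...and another
print(decode(codeword))   # -> 1: the majority vote still recovers the bit
```

The quantum version has to do this without ever copying or directly reading the encoded information, which is where the real machinery of quantum error correction comes in.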

To see it in action, you can read the relevant section of the Qiskit textbook. You can also now check out our paper, “Benchmarking near-term devices with quantum error correction”, which has just been published in the journal *Quantum Science and Technology*.

This paper aims to serve as an introduction to what quantum repetition codes are all about, and how to implement them with topological_codes. It also shows an example of the software in use, by doing the largest test of quantum error correction performed so far: a repetition code with 43 qubits.

If you want to discover more about all this, just check out the paper. It has been written for a broad audience, and so it isn’t too full of technobabble. It’s also open access, so you needn’t worry about a paywall.

To give you a taste, let’s take a look at some results from the 53 qubit device called *Rochester*. Unfortunately, this specific device is not accessible to the public. But it is accessible to me, so I couldn’t resist the urge to put it to the test.

The main result was to look at how well repetition codes of different sizes can protect information. For this, we make repetition codes and use them to store a bit value (0 or 1). Then we read out the bit value from the code. In a perfect world, we read out exactly what we put in. However, due to errors, sometimes this will go wrong. The probability of this is called the ‘logical error probability’, which we use as our main measure of how well the code is working.

The theory of quantum error correction tells us that making bigger codes should give us better protection against errors. In fact, we should find that the logical error probability decreases exponentially. Let’s see what actually happens.

Here the ‘code distance’ is a measure of how large the code is. So we do indeed see that the logical error probability decreases with code size, and does so in a way that is consistent with the exponential decay we expect. This is true whether we are doing the test by storing a bit value of 0 (blue) or 1 (orange).
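Under the idealized assumption that every qubit suffers an independent bit flip with the same probability p, this suppression can be computed exactly: a majority vote only fails when more than half of the copies are flipped. A minimal sketch:

```python
from math import comb

def logical_error_probability(d, p):
    """Logical error rate of a distance-d repetition code (d odd) under
    independent bit flips with probability p: the majority vote fails
    when more than half of the d copies are flipped."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range(d // 2 + 1, d + 1))

for d in (1, 3, 5, 7):
    print(d, logical_error_probability(d, 0.1))
```

In this toy model the logical error probability falls off exponentially with the distance. Real devices are bumpier, as we’ll see next.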

In addition, we get to see some of the details of real devices that theorists like me sometimes forget about. The decay of logical error probability is not smooth, but instead rather bumpy. This is due to the fact that different qubits suffer errors of different types and different strengths. So if making our code bigger means using some dodgy qubits, things might not improve as much as we’d like.

Also we see that it is easier to store a 0 than a 1. This is due to the fact that we use a lot of qubits in the state |0⟩ to encode the 0, and a lot in state |1⟩ to encode a 1. The superconducting qubits we use prefer to be in the |0⟩ state rather than the |1⟩ state, so more errors cause a |1⟩ to flip to |0⟩ than vice-versa. The results we see are due to exactly this.

You don’t need 53 qubits to get insights like these. Repetition codes can be run just as well on 5 qubits or 15, which is what is available on the public devices. They can be used to test how well you can store a bit value if you use only the best qubits, or how much of an effect the errors will have if you use the worst. You could also add in extra noise using tricks like the ones below.

- The id gate makes the qubit pause for a moment and risk experiencing some stray interaction with its environment.
- Doing any gate and then immediately undoing it gives you the errors associated with that gate (make sure to put a barrier in between so the compiler doesn’t just remove them).

You could even start thinking about new ways of decoding the results that come out, and contributing them to the topological_codes module for other people to use.

In summary, by running repetition codes, you can start getting insights into how our devices work. You’ll be able to see the effects of errors and how error correction is able to deal with them. By comparing against simulations, you’ll be able to investigate whether these devices really can be tamed in the way that we require to achieve fault-tolerant quantum computers.

Whether you are a high school student or someone doing a research project for their masters, you can do experimental tests of quantum physics from your own home. All the hardware and software you need is free and accessible through the IBM Quantum Experience.

So why not see what you can discover? And perhaps get to know a quantum device even better than the people who made it!

Here’s How to Test Error Correction on an IBM Quantum Computer was originally published in Qiskit on Medium.

Quantum computers are a new technology! Games are great! Let’s put them together! This is what I’ve been looking into for the past few years, and many others have now heard the call and joined the fun.

The History of Games for Quantum Computers

Nevertheless, you might be wondering why anyone would bother, and how they’d go about doing it. To answer this, I think it’s best to go back to the dawn of all computer games: the 1950s.

Games have been around for as long as humanity. Computers are a more recent phenomenon. It was the 1950s when they really began to coalesce.

The first example was Bertie the Brain, exhibited at a trade show in 1950.

It used vacuum tubes and light bulbs and played *Tic-tac-toe*. Not because it offered a better play experience than using pen and paper, but because it helped to show off the vacuum tubes. The fact that I’m writing about it now, 70 years later, shows that it certainly had some impact. It showed us the first possible reason for making games with computers: to showcase new technology.

Then there was *Nimrod* in 1951. Again vacuum tubes and light bulbs, but this time the game was *Nim*. It was shown at the Festival of Britain with a stated aim, among other things, to

…illustrate the algorithm and programming principles involved.

This gave us a second reason to make games with computers: for education. Programming would have seemed a strange and arcane art in those days. How better to make it relatable than to show it playing a game?

Next was a research project in 1952 at Cambridge University.

The idea was to study human-computer interaction, done using another implementation of *Tic-tac-toe* (or *Noughts and crosses*, as it is known in Cambridge). It again used vacuum tubes, but this time had cathode ray displays: the first graphical upgrade of the games industry! Nevertheless, the driving principle was research, giving us another reason to make games with computers.

Another project to note was that done at IBM Research in the 1950s: making an AI to play Checkers and beat a human opponent.

This would be something that IBM would repeat decades later, using Deep Blue to beat Garry Kasparov at chess, and then Watson to beat champions at Jeopardy! DeepMind has also been prominent in this area, using AlphaGo to beat a champion at *Go*. All of which are more examples of using games to showcase technology.

There were more games in the 1950s, but the ones here show what the decade was all about. Whether it was to showcase the technology, to educate about the technology or to research the technology, games were used to help computers rather than the other way around.

The exception was 1958’s *Tennis for Two*, a game made for fun and a sign of what was coming.

In 1962, a team at MIT decided to put their new PDP-1 to good use. They embarked on a project that would:

- Test out the new device (and later, new installations);
- Showcase its capabilities;
- Be fun!

They created a game. But it wasn’t just an existing game, with a computer-based implementation tacked on as a gimmick. It was something new: space battles without the need to build your own space ship. It was called *Spacewar!*

This was the first clear example of computers doing something for games. They were used to make something new, and something fun. Functions made for trigonometric calculations were repurposed to let the player experience flying around a star.

It wasn’t the only game this decade to be set in space. There was also the original *Lunar Lander*, as well as the unimaginatively named *Space Travel*. More down to Earth was an economics simulator — *The Sumerian Game* — which was an early ancestor of games like *SimCity*.

In each case, the computer was used to do exactly what computers were made for: crunching numbers and performing simulations. But rather than being used for payroll calculations or planning a real trip to the moon, the computers were used to provide unique play experiences.

Not many people know about *Spacewar!* or *Space Travel*. Any familiarity with *Lunar Lander* is usually due to one of its later incarnations, rather than the 1960s text-based original. This wasn’t due to any secrecy on the part of the developers: they usually freely shared the source code. It was more because few had the room for a room-sized computer, never mind the money.

The 1970s is when it became possible to bring the hardware to the masses. The first arcade game was *Computer Space,* conceived of as a coin-operated *Spacewar!*, that ran on hardware dedicated only to the game. The first generation of home consoles soon followed, including the *Magnavox Odyssey,* *Atari Home Pong* and more.

With these breakthroughs, computer games became something you could sell to the masses, either in an arcade or at home. The games industry had truly begun. You probably know the rest.

So far, I’ve assumed you know what computers are. Probably a good assumption, since most of us now use them every day. However, we probably don’t spend much time thinking about how they work at the most fundamental level.

For example, what is the simplest thing a computer can be programmed to do? Most might think of something like adding two numbers together. But even that has its complexities: What kind of numbers are they? How are they encoded? What does the addition look like when it actually runs through the microchips?

For the truly simplest elements of computation, we need to think in terms of bits. All information is encoded using bits, and all information is processed using many simple manipulations of bits known as logic gates.

For example, the so-called NAND gate is a simple comparison of two bit values. It outputs 0 if they are both 1, and 1 otherwise. Amazingly, every possible program can be compiled down into many applications of the NAND gate. If you don’t believe me, there is a course that takes you all the way from NANDs to Tetris!
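To make this concrete, here’s a small Python sketch (the function names are mine) that builds the other standard gates, and even a one-bit adder, out of nothing but NAND:

```python
def nand(a: int, b: int) -> int:
    """NAND: outputs 0 only when both inputs are 1."""
    return 0 if (a == 1 and b == 1) else 1

# Every other gate can be built from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """A one-bit adder, ultimately made of nothing but NANDs."""
    return xor_(a, b), and_(a, b)  # (sum, carry)

print(half_adder(1, 1))  # 1 + 1 = binary 10, i.e. sum 0, carry 1
```

Counting how many `nand` calls a computation needs is exactly the kind of resource accounting that computational complexity formalizes.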

For every problem that computers can solve, the so-called ‘computational complexity’ is a measure of the resources we need to crack it. Essentially, it is like asking how many NAND gates we would need. Some problems take more than others. For example, adding two *n*-digit numbers has a complexity that rises linearly with *n*: doubling the length of the numbers doubles the effort. Multiplying numbers (at least in the way taught in primary school) takes quadratic complexity, so doubling the number of digits would quadruple the effort. There are some problems for which the computational complexity increases much faster than this. For these, we can easily run into seemingly straightforward problems that cannot be solved within the lifetime of a human, even with a planet-sized supercomputer!

Simulating molecules is one such example. Just writing down the quantum states of atoms is something that you need exponentially many bits for. Processing them isn’t any more efficient. Even quite common molecules in our everyday life, such as caffeine, are beyond our ability to fully study with computers.

The difficulty of simulating molecules within a human lifetime is a rather extreme example, but the limitations of computers arise in many other cases too. Whatever you are doing with computers, you will run into issues with computational complexity sooner or later.

Game design is a field that can be particularly sensitive to computational complexity, since we need to be very careful with how long things take. Even calculating the inverse square root of a number, a relatively easy task, can take too long when we need to do a lot of them. This motivated the use of some almost magical methods in *Quake III Arena*, to get it done fast enough to render the graphics.
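For the curious, that famous trick can be sketched in Python (the original was C; this version just reinterprets the float’s bits the same way, magic constant and all):

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """The Quake III 'fast inverse square root', sketched in Python."""
    # Reinterpret the 32-bit float's bits as an integer
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    i = 0x5f3759df - (i >> 1)  # the famous magic-number shift
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One Newton-Raphson iteration refines the rough estimate
    return y * (1.5 - 0.5 * x * y * y)

print(fast_inv_sqrt(4.0))  # close to 1/sqrt(4) = 0.5
```

In Python this is slower than just calling `x ** -0.5`, of course; the point of the original was to avoid an expensive division and square root on 1990s hardware.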

Whether in game design or some other application of computing, software developers have become adept at finding workarounds like this. Sometimes it comes in the form of new hardware, such as graphics cards, dedicated to solving certain types of problems.

This is exactly how we hope to solve the problem of molecular simulation. We want to find a new form of hardware that is perfectly suited to storing and processing information about quantum states. For this, all we need to do is take something that works in a quantum way anyway, and harness it as a computer. These proposed devices started to be seriously studied in the 1980s, and were given the name ‘quantum computers’. It was soon discovered that they are useful for more than just quantum-related work, but could speed up our computations in all sorts of different areas.

Like normal computers, quantum computers are based on the idea of bits that are manipulated by simple logic gates. All quantum programs are just great big piles of these basic operations.

The difference is that the evolution of the states is described by quantum physics. This gives us a few extra kinds of basic operation, with which we can find completely new paths from input to output. The number of quantum logic gates we need to solve certain problems can be dramatically fewer than when using the logic gates of normal computers. Because of this, programs that would take impractically long to run can be made practical.
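To get a feel for what these extra operations look like, here’s the most famous quantum gate, the Hadamard, simulated with plain NumPy (this is a simulation on a normal computer, not real quantum hardware):

```python
import numpy as np

# A qubit state is a 2-component complex vector; |0> = (1, 0)
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate puts a qubit into an equal superposition of 0 and 1
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

superposed = H @ ket0
probs = np.abs(superposed) ** 2  # measurement probabilities: 50/50
print(probs)

# Applying H twice makes the two paths interfere back to a definite |0>
back = H @ (H @ ket0)
print(np.abs(back) ** 2)
```

That second result is the key point: the two branches of the superposition interfere, cancelling out one outcome entirely. Choreographing this kind of interference is what quantum algorithms are built from.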

To run this quantum software, we need quantum hardware. It’s taken a few decades, but we’ve finally gotten to the point of making prototype versions of this hardware. The quantum bits, also known as *qubits*, can be made using tiny superconducting circuits that have been cooled down to near absolute zero.

Since 2016, these are available on the cloud for anyone to play with at the IBM Quantum Experience. Since 2017, it has been possible to interface with them programmatically using a Python API.

This marked the beginning of a new era. We can start making games with quantum computers!

In 2017 I started making games that use quantum computers. Why? Because I could!

The first one worthy of note — which I have now retconned as the first ever — was a simple version of Battleships.

The display was just ASCII in a terminal. The gameplay was no better than you could do with pen and paper. Rather than being made to be played, it was made to be the basis of a blog post.

How to program a quantum computer

The idea was to give an example of quantum programming that wasn’t a complex algorithm, but instead something more tangible. The states of a qubit were used to encode whether a ship had been sunk or not. Quantum gates were used to implement attacks. Without realizing it at the time, I had repeated the history of *Nimrod*. Just as normal programming seemed like an arcane art then, quantum computing can seem so now. So I made a game to “illustrate the algorithm and programming principles involved.” It was a quantum game made for the purpose of education.

My next major game was *Quantum Awesomeness*. This was based on the fact that there are many aspects of a quantum processor that determine how useful it is: number of qubits, which pairs of qubits we are allowed to apply a two-qubit gate to, and what imperfections are present. To help people get an idea of how different devices compare, I made a puzzle game.

The game was designed so that the puzzles were based on the specifications of the device used to run it. The more qubits you had, and the more complex the layout of possible two-qubit gates, the better the puzzles would be. The more imperfections there were, the less it would seem to make any sense. So to compare devices and see which was best, one only needed to see which was the most fun. It’s not as informative as looking at benchmarking data, but it is a lot more accessible to the public.

The easiest way to see it in action is to read one of my blog-based play-throughs.

Getting to know your quantum processor

This game was designed so that the player didn’t need to have direct access to the devices. Instead I would use my own access to generate a bunch of puzzles. I also made some graphs and wrote a paper. So this was an example of making quantum games for the purpose of research. Again, repeating the history of the 1950s.

The next game to use a real quantum device was made in 2019, by a team at a quantum game jam in Helsinki. It was a card game in which each player has a qubit and each card represents a quantum gate. The quantum computer was used at the very end to judge who played their gates best.

It’s a fun game to play, but it also helps to teach people about quantum gates. Instead of getting them to sit through a lecture or read through a textbook, playing the game helps us explain the basics of quantum computing in a more friendly atmosphere. Another quantum game made for education.

The motivation behind these games was exactly the same as in the 1950s: education and research. They also served the other big purpose of 1950s games, to showcase the new technology, since their very existence helped to get the word out in the form of blog posts, tweets and more.

Now it’s time to get serious. What can quantum computers actually do for games?

Like everything, they bring some benefits but also some design constraints. For one thing, the need to cool the hardware to near absolute zero means that you’ll never get one inside your Switch. They’ll live on the cloud, and so probably aren’t something you’ll want to send jobs to every frame. For any game that might use a quantum computer, the vast majority of it will still run on a normal computer. We just need to find a small corner that quantum computers can excel at.

As with any attempt to find applications for quantum computers, we also need to think about the different eras that the technology will go through. Our main aim is a future era, where all imperfections are removed by error correction, where you’ll have more qubits than you’ll ever need, and where you’ll be able to do any two-qubit gates you want. This is the era of fault-tolerant, scalable quantum computing. Don’t expect to see it during the next decade!

What we’ve had since 2016 is what has been called the ‘quantum ready’ era. Prototype devices have been made and put on the cloud for everyone to use. But most can’t do anything that you couldn’t emulate on your phone while reading this article.

Between these two eras is something a bit more mysterious. We’ll have quantum devices that are too complex to simulate, but which do not have the capability to run the standard textbook algorithms developed over the last few decades. This is the so-called ‘NISQ era’. We’ll need some hindsight to determine whether it has already begun, is beginning now, or will begin in a few years. Nevertheless, finding potential applications of quantum computers for this era is currently a big area of research: by universities, large companies and even start-ups.

In my opinion, an ideal type of application to look at is one where experimentation can be started in the quantum ready era, made even more sophisticated in the NISQ era, and then become fully awesome in the fault-tolerant era. I also like to encourage people outside the field to get involved, so something with a gentle learning curve would also be very useful.

I am very much of the opinion that procedural generation is the answer: Starting off simply and branching out in complexity; providing tools for hobbyists to play around with; absorbing any wait time for quantum results in a loading screen; and making nice content to tweet about.

I could go on about this endlessly, but instead I’ll direct you to where I go on about this endlessly.

For current and near-term applications, the most important factor is that we need to be easy on the quantum processors. We can’t get them to do highly abstract things that require huge numbers of gates. The small imperfections in each will just pile up to give us nonsense at the end. Instead, we need to let the quantum computers be their natural quantum selves, and find applications for that.

My first idea was to take one of the most natural processes for quantum systems: making interference patterns. By devising a method for encoding images in quantum states, these interference effects could be used for simple image manipulation.

For example, here’s the process turning a couple of pixels into a chessboard, and finding some interesting patterns along the way.

One possible use of these patterns is to create interesting textures for procedurally generated terrain. So that’s what I did.

Is this a revolution in terrain generation? I don’t think so. Are AAA studios offering me huge wads of cash to put this in their games? Not that I’ve noticed. It is the first step towards figuring out where quantum computers could be useful in games, but there is still a long way to go.

Before taking the next step along this road, there are a few more things we can do with this quantum image encoding. For example, in quantum computers we often use teleportation-like effects. These are not just sci-fi, but something you can do with a simple quantum program. Applying them to an encoded image can give an interesting transition effect.

The method is not limited to images. It can be adapted to encode and perturb many different forms of information. I’ve dabbled with music

and even Mario levels.

These didn’t give the most beautiful symphony, or the best possible Mario level. But they were fun to do!

My current project is something a bit more sophisticated: I’m trying to get a quantum computer to play a *Civilization*-like game.

The hope is that this might be the *Spacewar!* of quantum games: The first example of the new technology contributing something truly unique. Though perhaps it is more likely to be the quantum equivalent of the 1950s checkers AI: something done by some guy at IBM to show off. Either way, it’s a step in the right direction: into (or towards) the 1960s.

The quantum tool behind this project will be open-source and freely available for everyone to play with. It could be used as part of procedural generation projects or function as a rudimentary AI. As in the 1960s, the spirit of the next decade should be one of experimentation and collaboration. Near-term quantum computers are for game jams rather than AAA studios.

There will also be papers and graphs. They are seldom sexy, but they keep us rigorous. Here’s a sneak preview.

If the history of computer games began in the 1950s, then the 1940s was its prehistory. One relic of this era was *Turochamp*, a chess AI developed by Alan Turing and David Champernowne. It had one big problem: it was too complex for the computer hardware of the time. It was run in 1948, but with a human manually simulating the process.

This too has a parallel in recent experiments with quantum games. It is easier to simulate simple quantum programs on a normal computer than to run them on real quantum hardware. This allows us greater flexibility to explore what qubits can do, within the familiar and fun context of hacking together simple games.

Many games have been made in this way, such as *QPong* (a quantum version of *Pong*), *Qiskit:Blocks* (building quantum programs in a *Minecraft*-like world), and many more. Most of the procedural generation techniques I talked about before also ran on simulators rather than real devices.

So if you want to experiment with the brave new world of quantum computing, why not try making a simple game yourself? I introduce some of the tools you might use in the following Twitter thread.


Next you’ll need to use a tool that allows you to create and manipulate quantum programs. As someone who works on a framework called Qiskit, I’d suggest Qiskit. Here are the basics.

To use Qiskit, you can either install it or use our web-based interface. Installation is done with pip install qiskit. If that makes no sense to you, you’d probably be better off with the web version.

Now you have the resources needed to write quantum software, and then run it either on simulators or on real prototype quantum processors (made by IBM, and available on the cloud).

To get ideas on what to do with this new superpower, you could check out our textbook

or this gamified tutorial

GitHub: qiskit/qiskit-community-tutorials/master

Or a whole bunch of other stuff at our main website

If quantum AI is what you are interested in, I actually don’t know much about that. But I have colleagues who do, and they wrote this example of quantum-enhanced support vector machines

For a quantum classifier, there’s also something on our blog.

Whether you are attending a quantum hackathon, or just want a quantum project to work on at home, I have some ideas for you!

What follows is by no means a comprehensive account of everything useful or interesting that can be done with quantum computing. It is completely biased towards the things that I am personally interested in. So if you want to know more about any of them, feel free to ask me.

Making a game doesn’t necessarily mean making something big and fancy. Just pushing a pixel around can be enough, and it can be a fun and creative way to turn knowledge about quantum gates into something tangible.

If you want some pointers on how to get started, here’s a twitter thread with some ideas and resources.

There are two approaches that can be taken to combine quantum with games. One is to ask “What can games do for quantum computing?” From this starting point, you could design a game that helps teach something about quantum programming. This could be a game that is explicitly educational. It could also be one that uses a quantum effect to implement the central game mechanic, allowing a player to interact with quantum mechanics in a playful way. Here are some examples of these types of game.

The other approach is to ask “What can quantum computing do for games?”. In this case, the aim is to find something that computer games need, and that only quantum computation can provide. You won’t be able to implement anything with such a ‘quantum advantage’ quite yet, but you could create a proof-of-principle version based on a small-scale example. One of my own efforts towards this is the topic below.

Procedural generation is when we design algorithms to design things for us. It is often used to create content such as puzzles and maps for use in games, or to create realistic textures for use in CGI. It can also be used to create images, animations, music, stories, or pretty much anything.

For example, here’s an island I made.

In the future, quantum computers could be of great use in the creation and analysis of procedurally generated content. But we can also create interesting and unique results with current quantum hardware and simulators. I know because I have proof-of-principle projects to prove just that principle. See the blog posts below for more details.

- Introducing: Procedural generation using quantum computation
- Introducing: A Quantum Procedure for Map Generation

These are just me dipping a toe in the vast ocean of possibilities for quantum procedural generation. So why not see what you can come up with?

We are not yet at the point where quantum error correction can be used to reduce noise in quantum computation. Nevertheless, small-scale implementations of QEC can and have been done. One motivation for this is to test how well current hardware can implement the basic tools and techniques required.

It is with this aim in mind that QEC-based tools have been added to Qiskit Ignis.

Here’s How to Test Error Correction on an IBM Quantum Computer

Specifically, this package is designed to allow people to implement a certain family of quantum error correcting codes — the topological codes — in order to test prototype quantum hardware.

Currently, the tools in Ignis allow you to create quantum repetition codes, and then decode the output to see how well errors were corrected. There are many ways in which this can (and will) be expanded, including new quantum error correcting codes, different ways of testing a quantum computer using the existing codes, and different decoding algorithms. All are potential projects for a hack.

The idea of creating new decoders is especially interesting to me. I created a citizen science project in 2016 in which the process of creating new decoders was made into a game called Decodoku.

Players came up with various heuristic algorithms, which had some interesting properties. These will be incorporated into Ignis, allowing them to be used on all the codes that get implemented within the TopologicalCodes package. If you design a new decoding algorithm, it could share the same destiny.

There are many more potential projects that you can do. From the hardware level of writing new backends for Qiskit and designing your own gates in Qiskit Pulse, to high-level applications for chemistry or material design. For inspiration, check out recaps of our flagship hackathons: the Qiskit camps.

- A randomly generated puzzle needs to be solvable;
- A randomly generated level needs to be completable;
- Randomly generated terrain should not have features that trap the player.

When trying to ensure that these conditions are satisfied, we can easily run into computational problems that simply take too much time or computer memory to solve.

For example, suppose we wanted to make a puzzle game based on the traveling salesman problem. In this we would generate a map with a set of random cities, and challenge the player to find the shortest route that visits them all.

Finding such a route is well known to be a computationally intractable problem. As the number of cities increases, the computational effort required to find the correct solution increases exponentially. We can therefore easily generate puzzles for which we cannot calculate the best solution. And if we don’t know the best solution, we can’t tell the player how well they’ve done. It would not be a good game.
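To see why, here’s a brute-force solver sketched in Python (the function names are just for illustration). It checks every ordering of the cities, so the work grows factorially with the number of cities:

```python
import itertools
import math
import random

def tour_length(order, dist):
    """Total length of a round trip visiting cities in the given order."""
    n = len(order)
    return sum(dist[order[i]][order[(i + 1) % n]] for i in range(n))

def best_tour(cities):
    """Brute-force shortest round trip: checks all (n-1)! orderings."""
    dist = [[math.dist(a, b) for b in cities] for a in cities]
    n = len(cities)
    # Fix city 0 as the start, to avoid counting rotations of the same tour
    rest = min(itertools.permutations(range(1, n)),
               key=lambda r: tour_length((0,) + r, dist))
    return (0,) + rest

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(8)]
print(best_tour(cities))  # 7! = 5040 orderings; at 20 cities it's ~10^17
```

Eight cities is instant; each extra city multiplies the work, which is exactly the blow-up that makes rating the player’s solution infeasible for larger puzzles.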

While trying to find a workaround for this problem, there are a few more conditions for our random generation that need to be taken into account:

- Each random sample should give a unique player experience;
- That experience should be fun!

These can be even trickier to satisfy, since here we run into the “10,000 bowls of oatmeal” problem.

I can easily generate 10,000 bowls of plain oatmeal, with each oat being in a different position and different orientation, and *mathematically speaking* they will all be completely unique. But the user will likely just see *a lot of oatmeal*.

- Kate Compton

This shows the danger of content that is too random. Though each sample is unique, it is unique in ways that humans don’t notice or care about.

We get a similar problem for content that is not random enough. Humans are adept at pattern recognition. If we simply remix the same few blocks of content in a simplistic way, players will soon see through the facade. The illusion of infinite content will be broken.

From the computing perspective, being ‘not random enough’ is a good thing. If generation of puzzles, levels or other content has some structure to it, analysis of the content can make use of that structure. This makes it easier to ensure that our puzzles are solvable, our levels are completable, and to compare our player’s solutions to the best that is possible. Unfortunately, the same is also true for the player: once they see the structure, things can become boring.

Balancing all of these conflicting requirements means that high quality random content generation can be a challenging task!

Quantum computation is a new architecture for computing, allowing completely different approaches to be taken in designing and implementing algorithms. It has the power to solve problems that were previously intractable. Though hardware is currently still in the prototype stage, we already know exactly what the software will be able to do. This means we can start to look at how it might be applied to the problems of random content generation.

For example, let’s think again about our traveling salesman game. The complex optimization problem on which this is based can be solved with a significant speedup when using quantum computing. So we can generate a map however we like and give it to the player. While they are working out their solution, we can use a quantum computer (via IBM’s cloud quantum computing services, for example) to determine the best solution possible. Then we can use this information to rate the player’s efforts.

The same is true of many other kinds of problems that are relevant for randomly generated content. By expanding on the kinds of puzzles and levels that can be analyzed, quantum computing will allow greater freedom in creating algorithms to generate this content.

Quantum computers can also help in the generation of random content itself. These devices can be thought of as extremely fancy random number generators, which can generate random distributions that no normal computer ever could.

For example, Shor’s algorithm takes an integer and computes its prime factors. At the heart of this is essentially a quantum process that takes an integer as a seed and then spits out random factors of that integer. Computing these factors would take exponential effort for a normal computer, and so it becomes practically impossible for large numbers. But quantum computers will be able to do it without breaking a sweat.

Shor’s algorithm shows that quantum computers will be able to produce random solutions to computationally hard problems. Generating good puzzles, levels or terrain for a game can also mean producing random solutions to computationally hard problems. The two are therefore well poised to work well together. So now we can start exploring how to do it.

In the rest of this article we will create a proof-of-principle. We’ll make some randomly generated but player-pleasing content that can be used within a game. We’ll show that it can be done in the current era of quantum computing, which means using simulators and prototype quantum hardware. And we’ll show that it can be done fast enough to fit in a loading screen.

With this we’ll show that the era of generating random content in a quantum way is here. Not coming soon. Not decades away. It’s something you can start doing now!

Admittedly, we won’t see anything that normal computers couldn’t do too. The aim instead is a proof-of-principle: to show that playable content can be quickly generated, and to serve as a starting point for others to start experimenting with what quantum could do for them.

Since this is going to involve some coding and the occasional bit of mathematical notation, we’ll move over to a Jupyter notebook. You’ll find it in the box below (or here, if the embedding doesn’t work for some reason).

So there we have it: islands made with quantum computing. Available now for game designers everywhere.

Creating infinite worlds with quantum computing was originally published in Qiskit on Medium, where people are continuing the conversation by highlighting and responding to this story.

For me, one of the best things about games like Minecraft or Civ is the ability to explore randomly generated landscapes (and occasionally blow stuff up). This is all done through the magic of procedural generation.

One algorithm that recently caught my eye is called *Wave Function Collapse*, because it is inspired by ideas from quantum mechanics. From now on, we’ll just call it *WFC*.

Since my job is to write software for quantum computers, I instantly saw an opportunity. I would upgrade WFC from being just quantum-inspired, to truly being quantum.

To understand how I did this, we first need to understand the algorithm and its quantum inspiration. The WFC algorithm owes its name to the fact that each random choice is an event rather like a measurement of a quantum object. These objects can, in some sense, be doing multiple mutually exclusive things at once. This is called a *quantum superposition*.

For example, suppose you had a bunch of cats which obeyed the laws of quantum mechanics, and then shot at them with a quantum bullet. This bullet, while flying towards the cats, could be in a superposition of every possible trajectory at once. Then, once it hits, the superposition is spread to the cats too. Each will be simultaneously alive and dead, depending on which path the bullet took. But since there was only one bullet, only one cat will be dead. So the cats aren’t each in an independent superposition. Instead their fates are entangled together in a superposition of all possible victims.

Once a big, lumbering, non-quantum object such as ourselves comes along, the superposition will disappear. When we look at a cat, it has to decide whether it is alive or dead. This process of randomly dropping out of a superposition is the collapse of the wave function.

This all gives us a nice method for generating a collection of cats, where one of them is dead.

- Take some quantum cats
- Shoot them with a quantum bullet
- Observe them one-by-one to see whether each is dead or alive
- …
- Profit!

The WFC algorithm takes this quantum inspiration and turns it into a powerful and useful method for procedural generation. Rather than just creating piles of abused cats, its actual function is to take bitmaps, and randomly riff on them to create larger images. The wave function collapse event is simulated by standard pseudorandom number generators. Nothing actually quantum ever happens.

To port it to run on a quantum computer, and hence make use of real collapses of wave functions, we need to make some choices. First, do we want a fancy algorithm that will run on the large, fault-tolerant devices of the future, or do we want something that we can run today? Though the former is a very interesting possibility, I chose the latter.

For this, we just need to change one little thing. We need to find the point at which WFC makes its random choice, and replace it with a quantum random number generator. This will do exactly the same job, but it will do it using the genuine randomness that comes from collapsing wave functions in a quantum computer.

So that’s what I did. The quantum computer I targeted was IBM’s 5-qubit prototype device, provided to the public over the cloud. The software I used was Qiskit, IBM’s Python package for creating and running jobs on quantum computers.

The method was exactly the same as explained in this tutorial on performing random number generation with Qiskit. Specifically, I wrote a simple Python class which, when initialized, asks IBM’s quantum device to set up a superposition of the two bit values 0 and 1, observe it, and then repeat the process thousands of times. The result is a list of thousands of bits, each the result of a wave function collapse. This list is then used for the random numbers required in WFC.
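The class itself isn’t shown here, but its role is easy to sketch. Below is a purely classical stand-in that shows how a buffer of measured bits can be served up as the random numbers WFC asks for. The class name and the `_refill` method are hypothetical; in the real version, `_refill` would be replaced by the Qiskit job that prepares a superposition, measures it, and repeats.

```python
import random  # stands in for the quantum device in this sketch

class QuantumRandom:
    """Serve random numbers from a buffer of pre-measured qubit outputs.

    In the real version, the buffer is filled by running a one-qubit
    superposition-and-measure circuit thousands of times on a quantum
    device via Qiskit. Here that step is faked with `random`, so the
    sketch stays self-contained.
    """
    def __init__(self, n_bits=8192):
        self.bits = self._refill(n_bits)

    def _refill(self, n):
        # Stand-in for the quantum job: each bit would be the result
        # of one wave function collapse.
        return [random.randint(0, 1) for _ in range(n)]

    def random_int(self, n_bits=16):
        # Pop bits off the buffer and glue them into an integer.
        value = 0
        for _ in range(n_bits):
            value = (value << 1) | self.bits.pop()
        return value
```

WFC then draws from `random_int` wherever it would otherwise call a pseudorandom generator.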

To test it out, it seemed fitting to get it to riff on the image of a cat. Here is the result. I’m not sure whether they are alive or dead.

You just need to head over to the website, create an account, and you’ll be given a blank canvas on which to create something quantum.

But what to create? What is a quantum program anyway? In fact, what is a quantum computer?

These questions can be answered by making comparisons to standard digital computers. Unfortunately, most people don’t actually understand how digital computers work either. So in this article we’ll look at the basic principles behind these devices. And to help us transition over to quantum computing later on, we’ll do it using the composer.

The first thing we need to know about is the idea of *bits*. These are designed to be the world’s simplest alphabet. With only two characters, 0 and 1, we can represent any piece of information.

One example is numbers. In European languages, numbers are usually represented using a string of the ten digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. Each digit in the string represents how many times the number contains a certain power of ten. For example, when we write 9213, we mean

9000 + 200 + 10 + 3

or, expressed in a way that emphasizes the powers of ten

9×10³ + 2×10² + 1×10¹ + 3×10⁰

Though we usually use this system based on the number 10, we can just as easily use one based on any other number. The binary number system, for example, is based on the number two. This means using the two characters 0 and 1 to express numbers as multiples of powers of two. For example, 9213 becomes 10001111111101, since

9213 = 1×8192 + 0×4096 + 0×2048 + 0×1024 + 1×512 + 1×256 + 1×128 + 1×64 + 1×32 + 1×16 + 1×8 + 1×4 + 0×2 + 1×1

Here these numbers are all powers of 2 (8=2³, 4096=2¹² etc).

A string of bits is called a *binary string*. From now on, we’ll write them in a special font to avoid confusion.

9213 = 10001111111101
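You can check this conversion with Python’s built-in binary formatting:

```python
n = 9213
binary = format(n, 'b')          # binary representation as a string
assert binary == '10001111111101'

# Reconstruct the number from its powers of two, just as in the sum above.
total = sum(2**i for i, bit in enumerate(reversed(binary)) if bit == '1')
assert total == n
```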

Binary strings can be used to represent more than just numbers. For example, there is a way to represent any text using bits. For any letter, number or punctuation mark you want to use, you can find a corresponding string of at most eight bits using this table. This choice is quite arbitrary, but it is a widely agreed upon standard, and was used to transmit this article to you through the internet.

This is how all information is represented in computers. Whether numbers, letters, images or sound, it all exists in the form of binary strings.

Now let’s write a binary string using the composer. We’ll use the qubits of the composer as if they were just normal bits, and not do anything quantum with them.

The picture above shows what is known (for historical reasons) as a *circuit*. It depicts the story of what happens to the qubits during the program, from left to right. In this particular example, not much is happening. We set up eight qubits and number them from 0 to 7 (because that’s how programmers like to do things).

After setting up the 8 qubits, we immediately measure them. This is simply the process of seeing which bit value they represent. Qubits are always initialized to give the answer 0 if you immediately measure them. So if we press ‘simulate’, we should get the bit string 00000000: a 0 from each qubit.

Programs made on the composer give us a histogram. This is because sometimes quantum computers have some randomness in their results. But since we aren’t doing anything quantum, we get one result with certainty: exactly the 00000000 that we expected.

To encode a different string, we need what is known as a *NOT gate*. This is the most basic operation that you can do in a computer. It simply flips the bit value: 0 becomes 1 and 1 becomes 0. On the composer, we do this by dragging over the green box marked *X*. Let’s do it on qubit 7.

Now our computer outputs the string 10000000 instead.

The bit we flipped, which comes from qubit 7, lives on the far left of the string. This is because the composer numbers the bits in a string from right to left, with bit 0 at the far right. If this convention seems odd to you, don’t worry. It seems odd to lots of other people too, and some prefer to number their bits the other way around. But this system certainly has its advantages when we are using the bits to represent numbers, because it means that qubit 7 tells us how many 2⁷s we have in our number. So by flipping this bit, we’ve now written the number 128 in our simple 8-bit computer.

Now try out writing another number for yourself. You could do your age, for example. Just use Google to find out what the number looks like in binary (if it includes a ‘0b’, just ignore it), and then add some 0s to the left side if you are younger than 64.
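The lookup and padding steps can also be done in Python. Using a made-up age of 27 for illustration:

```python
age = 27                      # a hypothetical example age
binary = format(age, 'b')     # Python's bin(27) would give '0b11011'
padded = binary.zfill(8)      # pad with 0s on the left to fill 8 qubits
print(padded)
```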

Now we know how to encode information in a computer. The next step is to process it. To take an input that we have encoded, and turn it into an output that we need.

For that we need a problem to solve. Let’s do some basic maths. In primary school you will have learned how to take large mathematical problems and break them down into manageable pieces. For example, how would you go about solving the following?

9213

+ 1854

= ????

One way is to do it digit by digit, from right to left. So we start with 3+4

9213

+ 1854

= ???7

And then 1+5

9213

+ 1854

= ??67

Then we have 2+8=10. Since this is a two digit answer, we need to carry the one over to the next column.

9213

+ 1854

¹

= ?067

Finally we have 9+1+1, and get our answer

9213

+ 1854

¹

= 11067

This may just be simple addition, but it demonstrates the principles behind all algorithms. Whether the algorithm is designed to solve mathematical problems or process text or images, we always try to make things easy with small and simple steps.

To see what the smallest and simplest steps look like, let’s do the above addition problem again, but in binary.

10001111111101

+ 00011100111110

= ??????????????

Note that the second number has a bunch of extra 0s on the left. This just serves to make the two strings the same length.

Our first task is to do the 1+0 for the column on the right. In binary, as in any number system, the answer is 1. We get the same result for the 0+1 of the second column.

10001111111101

+ 00011100111110

= ????????????11

Next we have 1+1. As you’ll surely be aware, 1+1=2. In binary, the number 2 is written 10, and so requires two bits. This means that we need to carry the 1.

10001111111101

+ 00011100111110

¹

= ???????????011

The next column now requires us to calculate 1+1+1. This means adding three numbers together, so things are getting complicated for our computer. But we can just break things down into smaller chunks again. To do it in a way that only ever requires us to add two bits together, we can start with just the first two 1s.

1

+ 1

= 10

Then we add the final 1 to the result using our usual method of going through the columns.

10

+ 01

= 11

The final answer is 11 (also known as 3).

Now we can get back to the bigger problem. With the answer of 11, we have another carry bit.

10001111111101

+ 00011100111110

¹¹

= ??????????1011

So now we have another 1+1+1 to do. But we already know how to do that, so it’s not a big deal.
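If you’d like to check where this is all heading, Python can add the two binary strings directly (the `0b` prefix marks a binary literal):

```python
a = 0b10001111111101   # 9213
b = 0b00011100111110   # 1854
total = a + b          # 11067
print(format(total, 'b'))   # '10101100111011'
```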

In fact, everything that’s left is something we already know how to do. This is because, if you break everything down into adding just two bits, there are only four possible things you’ll ever need to calculate. Here are the four basic sums (we’ll write all the answers with two bits to be consistent).

0+0 = 00

0+1 = 01

1+0 = 01

1+1 = 10

If our computer can do these, it can add anything. This is called a *half adder*. Let’s make one with the composer.
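Before building it from quantum gates, here is the same half adder written as ordinary Python: the rightmost bit of the answer is an XOR of the inputs, and the leftmost (carry) bit is an AND.

```python
def half_adder(a, b):
    """Add two bits, returning (carry, sum)."""
    return a & b, a ^ b

# The four basic sums, written as two-bit answers:
for a in (0, 1):
    for b in (0, 1):
        carry, s = half_adder(a, b)
        print(f"{a}+{b} = {carry}{s}")
```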

Since we are using the composer, our half adder will look a little something like this.

The two bits we want to add are encoded in the qubits 0 and 1. The above example encodes a 1 in both these qubits, and so it seeks to find the solution of 1+1. The result will be a string of two bits, which we will read out from the qubits 2 and 3. All that remains is to fill in the actual program, which lives in the blank space in the middle.

The basic operations of computing are known as *gates*. We’ve already used the NOT gate, but this is not enough to make our half adder. We could only use it to manually write out the answers. But since we want the computer to do the actual computing for us, we’ll need some more powerful gates.

To see what we need, let’s take another look at what our half adder needs to do.

0+0 = 00

0+1 = 01

1+0 = 01

1+1 = 10

The rightmost bit in all four of these answers is completely determined by whether the two bits we are adding are the same or different. So for 0+0 and 1+1, where the two bits are equal, the rightmost bit of the answer comes out 0. For 0+1 and 1+0, where we are adding different bit values, the rightmost bit is 1.

To get this bit of our solution correct, we need something that can figure out whether two bits are different or not. We call this an *XOR gate*: the exclusive OR, which outputs 1 exactly when its two inputs differ.

In the composer, the job of the XOR gate is done by the *controlled-NOT* gate, also known as the *CNOT*.

This is applied to a pair of qubits. One acts as the *control qubit* (this is the one with the little dot). The other acts as the *target qubit* (with the big circle).

There are multiple ways to explain the effect of the CNOT. One is to say that it looks at its two input bits to see whether they are the same or different. Then it writes over the target qubit with the answer. The target becomes 0 if they are the same, and 1 if they are different.

Another way of explaining the CNOT is to say that it does a NOT on the target if the control is 1, and does nothing otherwise. This explanation is just as valid as the previous one (in fact, it’s the one that gives the gate its name).
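Both explanations describe the same truth table, which we can sketch with plain bits in Python (a classical stand-in, not the full quantum gate):

```python
def cnot(control, target):
    """The control passes through unchanged; the target is overwritten
    with 'are the two bits different?', i.e. control XOR target.
    Equivalently: flip the target if and only if the control is 1."""
    return control, target ^ control

for control, target in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(control, target, '->', cnot(control, target))
```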

Try the CNOT out for yourself by trying each of the possible inputs. For example, here’s a circuit that tests the CNOT with the input 01.

If you set this up, you’ll find that the output is 11. We can think of this happening because of either of the following reasons.

- The CNOT calculates whether the input values are different and finds that they are, which means that it wants to output 1. It does this by writing over the state of qubit 1 (which, remember, is on the left of the bit string), turning 01 into 11.
- The CNOT sees that qubit 0 is in state 1, and so applies a NOT to qubit 1. This flips the 0 of qubit 1 into a 1, and so turns 01 into 11.

For our half adder, we don’t want to overwrite our input. Instead, we want to write the result on a different pair of qubits. For this we can use two CNOTs.

Qubit 2 is where the rightmost bit of our output will live, and so it is exactly where we want the same-or-different information to end up. This is what is done by the combined effect of the two CNOTs shown above. If you are not convinced, try it out for all the different inputs that you can encode on qubits 0 and 1.

We are now halfway to a fully working half adder. We just have the other bit of the output left to do: the one that will live on qubit 3.

If you look again at the four possible sums, you’ll notice that there is only one case for which this bit is 1 instead of 0: 1+1=10. It happens only when both of the bits we are adding are 1.

To calculate this part of the output, we could just get our computer to look at whether both of the inputs are 1. If they are, and only if they are, we need to do a NOT gate on qubit 3. That will flip it to the required value of 1 for this case only, giving us the output we need.

For this we need a new gate: like a CNOT, but controlled on two qubits instead of just one. This will perform a NOT on the target qubit only when *both* controls are in state 1. This new gate is called the *Toffoli*. For those of you who are familiar with Boolean logic gates, it is basically an AND gate.
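As with the CNOT, the Toffoli’s effect on plain bit values can be sketched classically:

```python
def toffoli(a, b, target):
    """Controlled-controlled-NOT: flip the target only when both
    controls are 1. The flip amount is exactly the AND of the controls."""
    return a, b, target ^ (a & b)

# Starting from target = 0, the Toffoli writes a AND b onto the target.
print(toffoli(1, 1, 0))
```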

To use the Toffoli in the composer, you’ll need to use the ‘Advanced’ checkbox. This gives you a few new gates, one of which is called ‘ccX’. This is another name for the Toffoli, since it can also be thought of as a controlled-controlled-NOT. When you drag it onto your circuit, you’ll get three boxes marked ‘a’, ‘b’ and ‘c’. The ‘a’ and ‘b’ boxes are the controls, so we drag them to our input qubits. The ‘c’ box is the target, so we drag it to qubit 3. Then we have our half adder.

In this example we are calculating 1+1, because the two input bits are both 1. Let’s see what the result is.

We get 10, which is the binary representation of the number 2. We have built a computer that can solve the famous mathematical problem of 1+1!

All that’s left is to try it out with the other three possible inputs, and show that our algorithm gives the right results for those too. Perhaps the easiest way to try this is to use the ‘Qasm Editor’ rather than the graphical interface of the composer. Just paste in the following.

include "qelib1.inc";

qreg q[4];
creg c[2];

// We want to calculate a + b, for two bits a and b
// For a=0, remove the following line. For a=1, leave it.
x q[0];
// For b=0, remove the following line. For b=1, leave it.
x q[1];

cx q[1],q[2];
cx q[0],q[2];
ccx q[0],q[1],q[3];

measure q[2] -> c[0];
measure q[3] -> c[1];

The half adder contains everything you need for addition. With the NOT, CNOT and Toffoli gates, we can create programs that add any set of numbers of any size. You could try making some more complex adders yourself, or check out some examples in Qasm form.

These three gates are also enough to do everything else in computing. In fact, we can even do without the CNOT, and the NOT gate is only really needed to create bits with value 1. The Toffoli gate is essentially the atom of mathematics. It is the simplest element into which every other problem-solving technique can be compiled.

In quantum computing, we split the atom.

The atoms of computation was originally published in Qiskit on Medium, where people are continuing the conversation by highlighting and responding to this story.

A billion qubits working in complete isolation will never achieve anything. They need to talk to each other. This means that they need to be connected by so-called *controlled operations*. Every device has its own rules for which pairs of qubits can be connected in this way. The better the connectivity of a device, the faster and easier it will be for us to implement powerful quantum algorithms.

The nature of errors is also an important factor. In the near-term era of quantum computing, nothing will be quite perfect. So we need to know what kinds of errors will happen, how likely they are, and whether it is possible to mitigate their effects in the applications we care about.

These are the three most important aspects of a quantum device: qubit number, connectivity and noise level. To have any idea of what a quantum computer can do, you need to know them all.

So let’s make a game that runs on a quantum device and shows all these things directly. By just playing the game, the player will see exactly how big and connected the device is. By comparing a run on the real device with one on a simulator, they’ll see exactly how potent the noise is. Then by playing again with some error mitigation, they’ll get an idea of how much useful information can be salvaged even in the era of slightly noisy quantum computers. We’ll call this game *Quantum Awesomeness*, and we’ll play it on the IBM device known as ‘ibmq_16_melbourne’.

The image above gives a quick guide to what’s going on in the *Melbourne* device. There are 14 qubits, numbered from 0 to 13, shown by coloured circles. The qubits that can talk to each other via a controlled operation are shown connected by lines, each of which has been given a letter for a name.

The most useful thing you can do with controlled operations is create and manipulate entanglement. Only with this can we explore the full space of possibilities that is open to our qubits, and hope to do things that are practically impossible for classical computers.

The simplest kind of entanglement involves just two qubits. It will cause each to yield outputs that are random, but with correlations between them. For example, consider the following program on a couple of qubits.

This is a circuit diagram: a history of the qubits in our program told from left to right.

Here we perform an rx operation for the angle π/2, which results in what is essentially half an x gate. This is an example of what we called a ‘partial NOT’ in the *Battleships with partial NOT gates* game. Instead of flipping the qubit all the way from |0⟩ to |1⟩, it parks it in a quantum superposition state in-between.

The operation acting on both qubits is a controlled-NOT, which applies a NOT to the bottom qubit only when the top one is in state |1⟩. Since the top one is in a superposition of both possibilities, the effect of the controlled-NOT is to spread the superposition to the bottom qubit as well: a combined superposition of both qubits being |0⟩ and both being |1⟩.

The final part of the circuit is to extract a simple bit from each qubit: |0⟩ becomes 0, |1⟩ becomes 1, and a superposition becomes a random choice of one or the other. But even though both qubits will give a random result in this case, they are sure to always agree.

To verify this, let’s run it. The result I got was,

{'11': 503, '00': 521}

Out of the 1024 samples that the program was run for, all results came out either `00` or `11`. And we have approximately equal numbers of samples for each. All as predicted.

By replacing the π/2 with other values, we can change the nature of the superposition. A lower value will produce results more biased towards outputs of 0, and a value closer to π will result in a bias towards 1s. This means we can vary the probability with which each qubit will give a 1. But whatever value we choose, this form of circuit ensures that the results for the two qubits will always agree.
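To see why the two outputs always agree, here is a small pure-Python statevector sketch of the circuit. It is an illustration, not Qiskit code; the four amplitudes are ordered as the outcomes 00, 01, 10, 11.

```python
import math

def rx(state, theta, qubit):
    """Apply an rx rotation to one qubit of a two-qubit statevector."""
    c = math.cos(theta / 2)
    s = -1j * math.sin(theta / 2)
    new = list(state)
    for i in range(4):
        if not (i >> qubit) & 1:      # pair up basis states that differ
            j = i | (1 << qubit)      # only in this qubit's bit
            new[i] = c * state[i] + s * state[j]
            new[j] = s * state[i] + c * state[j]
    return new

def cnot(state, control, target):
    """Flip the target qubit in every basis state where the control is 1."""
    new = list(state)
    for i in range(4):
        if (i >> control) & 1:
            new[i] = state[i ^ (1 << target)]
    return new

state = [1, 0, 0, 0]                  # start in |00>
state = rx(state, math.pi / 2, 0)     # half a NOT on the control qubit
state = cnot(state, 0, 1)             # spread the superposition
probs = [abs(a) ** 2 for a in state]  # probabilities for 00, 01, 10, 11
print(probs)                          # roughly [0.5, 0, 0, 0.5]
```

Only the agreeing outcomes 00 and 11 have any probability, each close to one half, matching the counts above.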

In this game we will make lots of these pairs of entangled qubits across the device. To do this, we must first choose a way to pair up the qubits monogamously (possibly leaving some left over). This pairing will be chosen randomly, and the whole point of the game is for the player to guess what the pairing was.

Once we have the random pairing, we’ll basically run the quantum program above on each independent pair. Though we will introduce one difference: for each pair we’ll randomly choose a different value for the rx operation, so the degree of randomness shared by paired qubits will differ from pair to pair.

When we run the circuit, the result will be a string of 14 bits, with one bit describing the output of each qubit. Since we run it for many samples to take statistics, the full result will be a list of all the bit strings that came out, along with the number of times that each occurred.

Since the aim of the game is for the player to deduce the pairing from the output, we could just dump all this data on them. But that probably wouldn’t be very fun. Instead, we can just focus on the important points:

- What is the probability of each qubit giving an output of 1 instead of 0?
- What is the probability that the two qubits of each connected pair disagree with each other?
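Both statistics are easy to pull out of the counts dictionary that a Qiskit job returns. Here is a sketch, assuming bit 0 is the rightmost character of each bit string:

```python
def statistics(counts, n_qubits, edges):
    """Turn a {bitstring: occurrences} dict into the two game statistics:
    P(output 1) per qubit, and P(disagreement) per connected pair."""
    shots = sum(counts.values())
    prob_one = {q: 0.0 for q in range(n_qubits)}
    disagree = {edge: 0.0 for edge in edges}
    for bitstring, n in counts.items():
        bits = bitstring[::-1]          # bit 0 is the rightmost character
        for q in range(n_qubits):
            if bits[q] == '1':
                prob_one[q] += n / shots
        for a, b in edges:
            if bits[a] != bits[b]:
                disagree[(a, b)] += n / shots
    return prob_one, disagree

# the two-qubit result from earlier in the article
prob_one, disagree = statistics({'11': 503, '00': 521}, 2, [(0, 1)])
print(prob_one, disagree)
```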

We can then put that information on our image of the device. For example, here’s one particular set of runs.

The number displayed on each qubit is the percentage of samples for which the result was 1. The number on each connection is the percentage of samples for which the corresponding pair of qubits had outcomes that disagreed. With this information, we can easily find the pairs of qubits that were entangled: either by looking for the qubits that share the same probability of outputting a 1, or finding the pairs that never disagreed.

The above data was extracted from a simulator. Now let’s try it on the real quantum device.

Here the results are not so clear as for the simulator. The effects of noise are much stronger, making it much harder to identify the pairs.

But all is not lost! We can do some error mitigation. We know that the output should have a certain structure, so we can search for that structure and use it to clean up the result.

Here’s a very simplistic way to do that. First, each qubit will look at all its neighbours and see which one it agrees with most. It’ll then assume that this most agreeable qubit is its partner. To try and balance out the errors in its results, we’ll then replace the probability of getting an output of 1 for that qubit with the average from them both.
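This mitigation step can be sketched in a few lines of Python. The data structures here are hypothetical stand-ins: a dict of per-qubit probabilities of outputting 1, and a dict of per-edge disagreement probabilities.

```python
def mitigate(prob_one, disagree, edges):
    """For each qubit, find the neighbour it disagrees with least,
    assume that neighbour is its partner, and replace the qubit's
    P(1) with the average over the pair."""
    mitigated = dict(prob_one)
    for q in prob_one:
        candidates = [(disagree[e], e) for e in edges if q in e]
        if not candidates:
            continue                       # an unconnected qubit
        _, best_edge = min(candidates)     # least disagreement wins
        partner = best_edge[1] if best_edge[0] == q else best_edge[0]
        mitigated[q] = (prob_one[q] + prob_one[partner]) / 2
    return mitigated

# toy example: qubits 0 and 1 are a well-correlated pair, qubit 2 is noisy
toy_prob_one = {0: 0.40, 1: 0.50, 2: 0.90}
toy_disagree = {(0, 1): 0.05, (1, 2): 0.50}
mitigated = mitigate(toy_prob_one, toy_disagree, [(0, 1), (1, 2)])
print(mitigated)
```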

Though things have improved, they’ve not been made perfect by any means. This is because our error mitigation scheme is pretty simple, and is just bolted on at the end of the process. Error mitigation can have powerful effects, but it is most effective when built into the quantum program itself (something you could experiment with yourself if you’d like to expand upon this project).

For now, let’s just play the game using these mitigated results.

Our job is to look at the numbers on the qubits, and try to find pairs that either have the same number, or at least have numbers that are as close as possible. The two 48s on the left seem like a good start. So let’s go for pair *A* as one we think is entangled.

The two 46s of pair *H* are the same as each other, and also quite distinct from their neighbours. So we’ll go for that one too.

Then pair *E* seems pretty certain.

And pair *C*.

Now we’ve reached a point where noise makes it a little hard for us. Should the 42 be paired with the 55, or the 56? Since the 55 is closer in value, we’ll go for pair *O*.

Finally we are left with two qubits that must have been paired with nothing. That will always happen on this device due to the way its connectivity graph is structured. So we have reached the solution! But was it the correct one?

Let’s look again at the noiseless simulation, where the correct pairing was much easier to see.

The pairs that show perfect agreement here are exactly the ones we just chose. So we were indeed correct. We played a game on a quantum computer, and we won!

If you want to get started with quantum programming, why not take a look at the source code for this game, available on the Qiskit tutorial.

We’ve even collected a few ideas for ways you could extend it:

- Write an algorithm to play the game. This would guess the pairs as correctly as possible by cleverly analyzing the data.
- The puzzle is very easy to solve when there is no noise. You could introduce different difficulty levels.
- The error mitigation used here is very simplistic. You could try to implement something more sophisticated, such as the ‘robust phase estimation’ described in this talk.
- Could more complex entangled states, such as ones with three qubits, be used to ensure that no qubits on a device are left out of the fun?

Have fun, with this and our other quantum games!

Getting to know your quantum processor was originally published in Qiskit on Medium.
