Quantum Error Correction using Qiskit

Hirmay Sandesara
Published in The Startup · Aug 13, 2020

If you have heard about quantum computers, you have likely heard about quantum error correction. If not, here is a quick recap: quantum computing is the use of quantum mechanical phenomena such as superposition, interference, and entanglement to perform computation, and the machines that do this are known as quantum computers. Now that we have briefly established what a quantum computer is, let’s understand what quantum error correction is.

But before we discuss quantum error correction, we should understand what error correction is.

Due to the current pandemic, many of us have been in an online meeting on a platform like Zoom. Suppose someone, let’s say Alice, has a bad mic, a bad internet connection, or both. Alice and Bob are in a Zoom meeting, and Bob asks Alice, do you want pineapple on pizza? She says no (like a normal human being, no offense!) but Bob is unable to understand what she is saying.

An easy solution to this problem is to just tell Alice to repeat herself, so Bob will hear mostly ‘no’s with a few random ‘yes’s thrown in. Bob can now say with confidence that she said no. This is the basic idea behind repetition codes, which are one of the many error correction codes. With this encoding of our message, it has become tolerant of small faults. This example contained:

  • Input: the information we have to protect (yes or no for pineapple on pizza).
  • Encoding: transforming the information to make it easier to protect (making Alice repeat herself).
  • Errors: random perturbations of the encoded message (network issues and a bad mic).
  • Decoding: working out what the input was (deducing that the input was ‘no’ because it has the highest number of counts).

The basic idea is that in computation, errors are introduced whenever we operate. We need to keep correcting errors as they are introduced. This can be done by constantly decoding and re-encoding.
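To make this concrete, here is a minimal classical sketch (plain Python; the function names are my own) of encoding, adding random errors, and majority-vote decoding for a repetition code:

```python
import random

def encode(bit, repetitions=5):
    """Repetition encoding: send the same bit several times."""
    return [bit] * repetitions

def add_noise(bits, p=0.1):
    """Flip each transmitted bit independently with probability p."""
    return [b ^ 1 if random.random() < p else b for b in bits]

def decode(bits):
    """Majority vote: the value seen most often is taken as the input."""
    return 1 if sum(bits) > len(bits) / 2 else 0

received = add_noise(encode(0))
print(received, '->', decode(received))   # e.g. [0, 1, 0, 0, 0] -> 0
```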

Readers familiar with quantum computing can ignore this note.

Note: From now on, there may be some things that readers who aren’t familiar with the basics of quantum computing won’t understand. For them, I will put hyperlinks where they can read about those topics in detail. You can also look at the Qiskit Textbook, where these topics are explained in detail. If you feel you need help with some topic, please put those concerns in the comments and I’ll edit my article accordingly.

Unfortunately, this method works only for bits, not for qubits.

Why can’t we do it? Due to the no-cloning theorem, we can’t do something like this:

|α>⇏|α>⊗|α>⊗|α>

Let’s say we wanted to encode some superposition state; we could do it this way:

α|0>+β|1>→ α|000>+β|111>
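In Qiskit, this encoding is just two CNOT gates; a minimal sketch (the rotation angle below is arbitrary and only there to create some superposition) might look like this:

```python
from qiskit import QuantumCircuit
import numpy as np

qc = QuantumCircuit(3)
qc.ry(2 * np.pi / 5, 0)   # prepare some superposition a|0> + b|1> on qubit 0
qc.cx(0, 1)               # spread it across qubits 1 and 2 with CNOTs:
qc.cx(0, 2)               # a|0> + b|1>  ->  a|000> + b|111>
```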

Now decoding it destroys the superposition state. To solve one issue we have caused another, and to solve this one we need to be more careful with measurement. But wait, what issue are we solving?

Qubits in real life don’t behave exactly the way they should: each operation is slightly wrong, qubits experience something called decoherence (which can be viewed as the loss of information from a system into the environment), and measurements can also be incorrect at times.

Source: ibmq_16_melbourne

If we run a circuit in Qiskit, you can easily see the effect of noise, as shown below.

[Figures: the circuit, its expected outcome, and the outcome on the ibmq_london device]
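If you want to try something similar yourself, a rough sketch using Qiskit’s Aer simulator with a hand-made noise model could look like the following (the error rates are invented for illustration, not taken from a real device, and import paths may differ slightly between Qiskit versions):

```python
from qiskit import QuantumCircuit, Aer, execute
from qiskit.providers.aer.noise import NoiseModel, depolarizing_error

qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

# Ideal run
ideal = execute(qc, Aer.get_backend('qasm_simulator'), shots=1024).result().get_counts()

# Noisy run: depolarizing errors on single- and two-qubit gates (illustrative values)
noise = NoiseModel()
noise.add_all_qubit_quantum_error(depolarizing_error(0.01, 1), ['h'])
noise.add_all_qubit_quantum_error(depolarizing_error(0.05, 2), ['cx'])
noisy = execute(qc, Aer.get_backend('qasm_simulator'),
                noise_model=noise, basis_gates=noise.basis_gates,
                shots=1024).result().get_counts()

print('ideal:', ideal)   # only '00' and '11'
print('noisy:', noisy)   # a few '01' and '10' counts creep in
```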

Due to these errors, algorithms like Grover’s, Shor’s, Bernstein-Vazirani, etc., can’t work properly. Even if you wanted to run an algorithm, say Grover’s, on those noisy qubits in Qiskit, you’ll get something like this:

[Figures: Grover’s algorithm run on ideal qubits vs. on noisy qubits]

As you can see, in the case of perfect qubits you can say with certainty that the answer corresponds to 101 and 110, whereas in the case of noisy qubits you can’t say for sure what the answer is. Thus we need qubits that are ideal, or at least somewhat close to ideal. Those qubits are called logical qubits. The downside is that for every logical qubit we need many physical qubits. Getting even a handful of logical qubits would require hundreds of physical qubits, which even the biggest quantum computers don’t have. Using QEC in algorithms is therefore a long-term goal, along with the algorithms that require it.

Coming back to being more careful with our measurements: what do we even mean by that? The basic idea is that we can extract some information out of those qubits without actually learning their encoded information. If we just know which qubits have a different value to the rest, that is enough for us. To do this, we need an extra qubit for every pair of qubits, initialized in state |0> and used as the target for two CNOT gates. We’ll look at an example with two qubits in Qiskit.

[Figures: the parity measurement when both qubits are the same (|00> or |11>) vs. when one is different from the other (|01> or |10>)]
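As a sketch, such a two-qubit parity check with an ancilla could be written like this in Qiskit (the helper name is my own):

```python
from qiskit import QuantumCircuit, Aer, execute

def parity_check(data_state):
    """Measure whether two data qubits agree, using an ancilla qubit."""
    qc = QuantumCircuit(3, 1)              # qubits 0 and 1 are data, qubit 2 is the ancilla
    for i, bit in enumerate(data_state):   # prepare the requested basis state, e.g. '01'
        if bit == '1':
            qc.x(i)
    qc.cx(0, 2)                            # add qubit 0's value to the ancilla
    qc.cx(1, 2)                            # add qubit 1's value to the ancilla
    qc.measure(2, 0)                       # ancilla reads 0 if they agree, 1 if they differ
    backend = Aer.get_backend('qasm_simulator')
    return execute(qc, backend, shots=1024).result().get_counts()

print(parity_check('00'))   # {'0': 1024}
print(parity_check('01'))   # {'1': 1024}
```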

These measurements are repeated throughout the computation and are used to identify where errors likely occurred and how to remove their effects. A classical decoding algorithm is used for this procedure. With this, we can protect against bit-flip errors (0 flipped to 1) for an arbitrarily long time. This kind of measurement is called a syndrome measurement, and its result is known as the syndrome. Syndromes are a way to check that all the qubits are doing what they should be; they provide the clues that allow us to detect and correct errors. Their form changes from code to code, but their job is always the same, and all codes have them. But how does one go about decoding them?
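For the three-qubit repetition code the decoding step is a simple lookup: the two syndrome bits point at the single qubit most likely to have flipped. A classical sketch (the table and names are my own conventions, not from any particular library):

```python
# Syndrome bits: (parity of qubits 0 and 1, parity of qubits 1 and 2)
SYNDROME_TABLE = {
    (0, 0): None,   # no defect detected
    (1, 0): 0,      # only the 0-1 check fired  -> qubit 0 most likely flipped
    (1, 1): 1,      # both checks fired         -> qubit 1 most likely flipped
    (0, 1): 2,      # only the 1-2 check fired  -> qubit 2 most likely flipped
}

def correct(bits, syndrome):
    """Flip the bit the syndrome points to (classical post-processing)."""
    target = SYNDROME_TABLE[syndrome]
    if target is not None:
        bits[target] ^= 1
    return bits

print(correct([1, 0, 0], (1, 0)))   # -> [0, 0, 0]
```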

We’ll first look at a very simple case where errors occur only between parity measurements and never during them (i.e., the syndrome measurements themselves are perfect). For now we are just trying to identify the errors, not correct them. Errors create pairs of defects, and the decoder’s job is to find a minimal pairing of those defects.

Source: IBM Quantum

In the picture, the left side is where the errors happen and the right side is where we take those errors into account. Suppose that all the qubits are initialized to 0. The steps run in sequence from bottom to top and the positions run from left to right. In step 1 we see that positions 3,4 and 4,5 disagree, so we log those defects. In step 2 we see that positions 7,8 and 8,9 disagree, so we log these defects, but we do not log the ones from step 1 again. In step 3, positions 4 and 5 are now equal, which you might think is a good thing, but it is inconsistent with the steps seen before: this change must itself be due to an error, so we take it into account as well. This process is repeated many times to build up a good picture of the errors.

We’ll now look at a somewhat more realistic case where the syndrome measurements themselves are imperfect (they will randomly lie). You can see in step 2 that positions 8 and 9 are reported as different even though that isn’t true. Pairing is now a 2D problem, and majority voting will not work here.

Source: IBM Quantum

One of the best ways to visualize this problem is via graphs, where defects are nodes and the number of errors required to link each pair gives the edge weight. A likely set of errors corresponds to a minimum weight perfect matching of the graph.

Source: IBM Quantum
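As a rough sketch, such a matching can be computed with networkx; the defect names and edge weights below are invented purely for illustration (negating the weights turns a maximum-weight matching into a minimum-weight one):

```python
import networkx as nx

# Defects are nodes; edge weights are the number of errors needed to link each pair
G = nx.Graph()
for a, b, w in [("d1", "d2", 1), ("d1", "d3", 3), ("d1", "d4", 4),
                ("d2", "d3", 2), ("d2", "d4", 3), ("d3", "d4", 1)]:
    G.add_edge(a, b, weight=-w)   # negate so a max-weight matching minimizes total weight

# maxcardinality=True forces every defect to be paired
pairing = nx.max_weight_matching(G, maxcardinality=True)
print(pairing)   # e.g. {('d2', 'd1'), ('d4', 'd3')}
```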

Look at the picture below to understand how this works. First it says 0=0, but in the line below it says 0≠0; the same is the case for 1 in the first and second lines. In the second image below, we visualize these errors according to the procedure described above and arrive at these results.

Source: IBM Quantum
Source: IBM Quantum

Understanding how to manipulate logical qubits is crucial. This is done with logical operations, each of which is made up of many physical operations. For example, doing a logical X is very easy compared to doing a logical Ry. To do an Ry gate you have to take the code apart and put it back together again, and you’ll remember that when the qubits are not encoded they are not resistant to noise. Things get complicated from here. The limited set of fault-tolerant logical gates is therefore a major problem with the repetition code. Not to forget, the code only allows us to detect and correct bit flips, and only bit flips. Even though we made sure it doesn’t cause superpositions to collapse, it doesn’t protect them either. We conclude that the repetition code cannot give us fault-tolerant quantum computation. The question that comes to everyone’s mind, then, is: which code does give us fault-tolerant quantum computation?
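To make the “logical X is easy” point above concrete, here is a minimal sketch for the three-qubit repetition code, where the logical X is just a physical X on every data qubit:

```python
from qiskit import QuantumCircuit

# On the three-qubit repetition code, logical X flips the whole codeword:
# a|000> + b|111>  ->  a|111> + b|000>
logical_x = QuantumCircuit(3)
for q in range(3):
    logical_x.x(q)
```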

The problem with the repetition code is that it treats z basis states very differently from x and y basis states.

To solve these issues we have something called The Surface Code. But what exactly are surface codes?

Surface codes are a family of quantum error-correcting codes defined on 2D lattices of qubits, responsible for detecting and correcting errors. We define observables for the vertices and for the smallest closed loops enclosing the region between four lattice sites (plaquettes).

v is for vertex and p is for plaquette

Let’s quickly look at the plaquette syndrome and the vertex syndrome. The plaquette syndrome is similar to the two-qubit measurements in the repetition code, the difference being that we measure the parity around plaquettes in the lattice, which, if you recall, can be done with CX gates and an extra qubit. Vertex syndrome observables can also be measured using CX gates and an ancilla, but here we look at the |+> and |−> states and count the parity of the number of |−>s. If you want to learn more about this, you can read this research paper and this video by Andrew Houck.
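As a rough sketch (not a full surface-code layout: qubits 0-3 stand in for the four data qubits around a single plaquette or vertex, and qubit 4 is the ancilla), the two checks could be written like this:

```python
from qiskit import QuantumCircuit

# Plaquette (Z-type) check: accumulate the Z-parity of the four data qubits on the ancilla
plaquette = QuantumCircuit(5, 1)
for d in range(4):
    plaquette.cx(d, 4)
plaquette.measure(4, 0)

# Vertex (X-type) check: put the ancilla in |+>, let it control CXs onto the data qubits,
# then rotate back and measure; the result is the parity of the number of |−>s
vertex = QuantumCircuit(5, 1)
vertex.h(4)
for d in range(4):
    vertex.cx(4, d)
vertex.h(4)
vertex.measure(4, 0)
```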

Wrapping things up, we learnt that quantum error correction is the detection of errors caused by noise or decoherence and the reconstruction of the input without those errors in a quantum computer. We need quantum error correction because without it most algorithms built for quantum computers wouldn’t work. We also saw the limitations of the repetition code, and thus the need for something like surface codes. I would like to end this article with something truly beautiful that James Wootton said,

It is not good when things are too easy because they are easy for errors too!

PS: Many parts of this article are based on notes I created during James Wootton’s lecture, where he explained these topics very well to me. I hope that I have done at least half a decent job of explaining them to you. You can see his thought-provoking articles here, and again a big thank you to him.

