Untangling qubits

by the QWA writing squad*

Quantum computing is all the rage these days, and followers of tech news will be used to hearing all about qubits, the basic logical units of quantum computers. In our previous article, we introduced the idea of the Bloch sphere as an abstract — mathematical — way of thinking about qubits, and what happens to them during a quantum computation. That picture is very powerful, because it applies to any qubit, regardless of its physical implementation.

But then the question remains: what is a qubit?

In this article, we set out to give some identity to qubits: we’ll describe what they really are in the physical world. You’ll know by now that in an abstract sense, qubits consist of two basic logical states, |0> and |1>, and that you can have any superposition of those two states. But what physical systems can actually exhibit that kind of behaviour? Where do you find such hardware in nature?

It turns out there are numerous candidate systems for implementing qubits. In this article, we shall focus on what are arguably the two most advanced and popular platforms at present — trapped ions, and superconducting circuits.

Trapped ions

As a physical theory, quantum mechanics was developed in large part to describe how atoms absorb and emit light, or radiation. It is perhaps not too surprising, then, that atoms themselves are a leading candidate for building quantum computers. To appreciate how atoms can be used as qubits, let’s go back to basics for a second. In high school physics or chemistry, we learn that an atom consists of a positively charged nucleus at the centre, and a set of negatively charged electrons orbiting around it. The system is held together by electromagnetic forces, in a conceptually similar way to how the Earth moves around the Sun under the influence of gravity.

The great paradigm shift in atomic theory at the turn of the 20th century — led by physicists such as Max Planck, Niels Bohr, and Albert Einstein — was the realisation that electrons can orbit the nucleus in only a limited set of ways. If you could trace the trajectory of an electron in an atom, you would find its motion is restricted to following a set of very specific patterns. Each of these patterns has a characteristic energy, and since they form a discrete set, the electron energy is constrained to take on a discrete (quantised) set of values. In common physics parlance, we often refer to the orbitals simply as ‘energy levels’. Schematically, we like to represent them as a series of lines, as shown in Figure 1 below for three levels. The numbers between each pair of levels indicate the energy difference between them.

Figure 1: Energy levels, and the energy differences between them

By now, you’ll be used to the fact that at QWA we like a good musical analogy, and it just so happens there’s a very apt one for this occasion. As anyone who has ever played a piano will attest, the instrument can produce only a discrete set of frequencies. Play an E, and then an F, and you’ll hear the difference in pitch very clearly. But not even Beethoven himself was able to access the infinitely many incremental tones between the notes E and F. In this sense, atoms are similar to pianos: the electron ‘tones’ (energy levels) are discrete.

You might now anticipate where we are going with this. Qubits are systems with two discrete logical states that can be put into quantum superposition. To use atoms as qubits, we can simply take the lowest energy level of the outermost (‘valence’) electron — commonly referred to as the ground state — to represent a |0>, and the second lowest energy level — the first excited state — to represent a |1>. If you read our previous article, you’ll recognise that we can associate the electronic ground state with the north pole of the Bloch sphere, and the first excited state with the south pole.
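For readers who like to see things concretely, here is a minimal numerical sketch of this encoding (plain Python with numpy; the vectors represent the abstract qubit states, not a model of any particular atom):

```python
import numpy as np

# The two logical states of our qubit, written as vectors:
ket0 = np.array([1.0, 0.0])  # electronic ground state -> |0> (north pole)
ket1 = np.array([0.0, 1.0])  # first excited state -> |1> (south pole)

# An equal superposition of the two energy levels.
plus = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities are the squared amplitudes, and always sum to 1.
probs = np.abs(plus) ** 2
print(probs)  # -> [0.5 0.5]
```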

“Great”, you will be thinking at this stage, “but aren’t atoms just ludicrously small and hard to work with? How on Earth do you build a computer with that kind of hardware?” Fantastic question! One way to address this challenge is to use electrically charged atoms — known as ions — where one electron is removed to give the atom an overall positive charge. These ions can then be pushed, pulled, and dragged around using electric and magnetic fields, just like you can drag a paper clip across a table with a nearby magnet. Over the years, physicists have become really good at shuttling ions around, and pinning them down to well-defined positions. It is now possible to assemble many tens of ions in highly ordered spatial configurations, like the linear chain shown in Figure 2 below. This is the basic architecture of trapped ion quantum computers.

Figure 2. Left: ions trapped in electromagnetic fields (image credit: Institute of Theoretical Physics, Innsbruck). Right: fluorescence images of trapped ions.

“That’s pretty cool”, you might say, and we’d most certainly agree with you! “But, how do you implement those quantum gates you talked about in the last article? If the ion is in the ground state |0>, how do I make it go to the excited state |1>? How do I create the famous superposition of ground and excited states, which is somehow |0> and |1> at the same time?”


Physically, quantum gate operations on ions are performed using laser pulses. A laser is a stable source of monochromatic light, meaning it can produce only a single, well-defined tone. As a musical instrument, it would be pretty boring! However, this is precisely what we need for manipulating the quantum state of atoms and ions.

We start with an ion in its ground state |0>, and expose it to a laser whose tone matches the energy gap between |0> and |1>. If the pulse lasts for just the right duration (a so-called pi pulse), the initial |0> is flipped all the way to |1>. What about the superposition state? We create this simply by making the pulse shorter: halving the duration, for instance, leaves the qubit in an equal superposition of |0> and |1>.
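To make this concrete, here is a toy numerical sketch of a resonant pulse, in which the angle theta stands in for the pulse duration (an illustrative parametrisation, not the equation of any specific experiment):

```python
import numpy as np

def pulse(theta):
    """Effect of a resonant laser pulse on a qubit state vector.

    theta is proportional to the pulse duration: theta = pi flips
    |0> fully to |1>; theta = pi/2 creates an equal superposition.
    (A standard rotation matrix, used here purely for illustration.)
    """
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

ket0 = np.array([1.0, 0.0])

full_flip = pulse(np.pi) @ ket0      # a 'pi pulse': |0> -> |1>
half_flip = pulse(np.pi / 2) @ ket0  # half the duration: equal superposition

print(np.abs(full_flip) ** 2)  # close to [0, 1]
print(np.abs(half_flip) ** 2)  # close to [0.5, 0.5]
```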

“One last thing” you ask, recalling what you learned from our previous article. “To be able to do universal computation, I need to be able to do a gate that acts on two qubits — how do you make that happen?” You’re right! To be able to perform any arbitrary computation, we need to be able to modify the state of one ion (the target qubit) based on the state of another ion (the control qubit).

The first — and conceptually the simplest — proposal for implementing two-qubit operations in trapped ion systems is known as the Cirac-Zoller gate, named after the two physicists who invented it. The cute trick underpinning their scheme is to use the collective motion of the entire ionic chain as a data bus, allowing information to be passed between the control and target qubits.

In the first step, a laser pulse is shone at the control qubit. The pulse is designed so that if the qubit is in state |0>, nothing happens, but if it is in state |1>, the ion starts jiggling around. Since it's a charged particle, the control ion pushes and pulls all the other ions in the trap along with it, and as a result the whole chain ends up sloshing back and forth together, from left to right. In the second step, we shine a laser at the target qubit. This pulse is designed so that if the ion is moving, the state of the qubit is flipped, but if it's stationary, the state remains unchanged. The net result: the target flips only when the control started out in |1>.
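Assuming the chain's motion starts and ends at rest, as in the ideal scheme, the net effect of this sequence on the two qubits is a controlled-NOT (CNOT) gate. A little numerical sketch:

```python
import numpy as np

# Net effect of the pulse sequence on the two qubits: the target flips
# only if the control is |1>. Basis ordering: |00>, |01>, |10>, |11>,
# written as |control, target>. The motional 'bus' is omitted because
# it ideally returns to rest at the end.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket10 = np.array([0, 0, 1, 0])  # control in |1>, target in |0>
print(CNOT @ ket10)  # -> [0 0 0 1], i.e. |11>: the target has flipped
```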

Now that we’ve got our heads around trapped ion quantum computers, which use real atoms (Made by Nature) as qubits, let’s delve into the other leading platform for quantum computers at present — superconducting qubits, which use artificial atoms (Made by Humans in a Lab).

Superconducting circuits

Quantum mechanics is most commonly associated with atoms, molecules, and photons (particles of light). Electrical circuits behaving in a quantum mechanical way would probably not immediately come to mind. Well — surprise, surprise — this is precisely what underpins our second example of qubit implementations. For around 20 years, physicists have been working to build electrical circuits that mimic the behaviour of atoms, which can then be used as qubits for quantum computing. Nowadays, commercial players are also working to make superconducting qubits a reality, with companies such as IBM, Rigetti, Google, Intel, and Alibaba pouring resources into the construction of ever larger and better devices. A picture of a 5-qubit device made by IBM is shown in the left part of Figure 3.

It turns out there is a zoo of different ways of building this class of qubits, so here we’ll focus on a rather generic description that captures the most important intuition. Let’s begin with an electrical circuit that engineers will know very well — the LC resonator (shown on the right of Figure 3), which consists of an inductor (L) and a capacitor (C). In this circuit, energy is exchanged back and forth continuously between the two circuit elements: in the capacitor, energy is stored in an electric field, while in the inductor it is stored in a magnetic field.

Figure 3: A 5-qubit superconducting chip, Credit: IBM (L); A schematic representation of an LC circuit (R).

Remarkably, if you make the circuit and components small enough — typically around a micrometer in size — quantum mechanical behaviour can start to emerge. Analogous to the way electrons in real atoms occupy a discrete set of energy levels, the total energy in the circuit is also ‘quantised’. You can only increase the energy if you throw in exactly the right amount to jump between these levels. Yes, you guessed it — we’ll use these energy levels as a basis for encoding qubits.

To make this work in practice, it turns out we need to do more than just shrink the circuit. As the title of this section suggests, we will need the magic of superconductivity, too. Superconductors are materials that allow electric current to flow with zero resistance, provided they are cooled below a certain ‘critical’ temperature.

Above the critical temperature, electric current — which is just a flow of electrons in a wire — will experience resistance. Think of the electrons as people moving through a crowd, constantly pushing and bashing into each other, expending energy in the process. For electrons in metals, energy is similarly lost from the circuit by heat dissipation. However, for superconducting materials below their critical temperature, this kind of dissipation does not occur. The electrons move through the material in a highly orderly manner, like subway commuters obediently following instructions to stick to one side of station corridors. Crucially, by operating below the critical temperature, quantum information encoded in the circuit’s energy levels is not disturbed.

Superconductivity plays a second vital role in the operation of these circuit-based qubits. The energy levels of an LC circuit — like the one above — form a ladder, whose rungs are perfectly equally spaced — see Figure 4(a) below. All the excited states are perfect overtones of the ground (lowest energy) state: going from |0> to |1> requires the same energy as going from |1> to |2>, which requires the same energy as going from |2> to |3>, and so on. This kind of energy level structure is called harmonic, but it is actually terrible if you want to build a qubit!

Figure 4: Harmonic (a) and anharmonic (b) energy level structures

“Why so?”, you ask inquisitively. To answer this, let’s first discuss how we do a flip operation in this circuit, taking |0> to |1>. We achieve this by delivering electrical pulses, analogous to the way we shine a laser at ions. The energy of a ‘photon’ in the circuit must equal the difference in energy between the levels |0> and |1>, also analogous to the case of ions. The intuition behind using pulses of different durations to create different superpositions of |0> and |1> applies here, too.

Now, to your question. The photon energy needed to go from |0> to |1> is identical to that needed to go from |1> to |2>. If our electrical pulse contains two photons, as it may sometimes do (we typically know only the average number of photons), then we may end up making a double transition, first from |0> to |1>, and then from |1> to |2>. “But qubits don’t have a state |2>”, you say. That’s precisely right! If we want to make qubits, we’ll need to stem the flow of energy into the second excited state. What could we use to do this?

This is where superconductivity works its magic again. If we swap the ordinary, plain vanilla inductor in our LC circuit for a device called a Josephson junction — two pieces of superconductor sandwiching a very thin layer of insulating material — we break the harmonicity of the circuit’s energy levels. They might then look something more like Figure 4(b). You’ll see that the energy gap between |0> and |1> now differs markedly from that between |1> and |2>. We say that the energy level structure is now anharmonic.

Why does this help? Well, suppose the system is in state |0>, and a first photon excites it to |1>. If a second photon then comes along with the energy of the transition from |0> to |1>, it can’t make the system flip from |1> to |2> — it doesn’t have the right energy! We then have a nice qubit we can work with, consisting of the two lowest energy states, |0> and |1>.
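Here's the idea in numbers — with toy energy values, made up purely to show the shapes of the two ladders in Figure 4:

```python
# Toy energy levels, in arbitrary units (illustrative numbers only).
harmonic = [0.0, 5.0, 10.0, 15.0]   # bare LC circuit: equally spaced rungs
anharmonic = [0.0, 5.0, 9.0, 12.0]  # with a Josephson junction: spacing shrinks

def gaps(levels):
    # Energy differences between neighbouring rungs of the ladder.
    return [b - a for a, b in zip(levels, levels[1:])]

print(gaps(harmonic))    # -> [5.0, 5.0, 5.0]: a photon resonant with |0>->|1>
                         #    is also resonant with |1>->|2>
print(gaps(anharmonic))  # -> [5.0, 4.0, 3.0]: that same photon can no longer
                         #    drive the |1>->|2> transition
```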

What makes a good qubit, anyway?

The fact that people are building qubits from very different physical systems raises an obvious question: which is best? Why would you choose one over the other? We’ll wrap up by giving a flavour of the considerations involved, and of how the two platforms described in this article compare to one another.

One major consideration is how susceptible the qubits are to errors. For example, a qubit can flip between |0> and |1> spontaneously and uncontrollably, because quantum systems are very easily affected by their surrounding environment. Stray electric or magnetic fields from nearby devices, or even radiation from outer space, could deliver the energy needed to excite a |0> to a |1>.

For any type of qubit, there is a characteristic time — known as the coherence time — after which we can be confident an error has occurred. The coherence time sets a limit on how long you can reliably use a qubit in a quantum computation. For trapped ions, your ‘window of opportunity’ is relatively long, lasting up to hundreds of seconds. Superconducting qubits are more susceptible to errors: their coherence times are on the scale of 100 microseconds, orders of magnitude shorter than for trapped ions.

A second consideration is how many gate operations you can perform on a qubit — or pair of qubits — within the coherence time. The more gates you can perform, the more complicated your quantum computations can become, and the more sophisticated the things you can do. Here the tables are turned: for a single trapped ion qubit, typical gate times are around 20 microseconds, while superconducting systems come in at around 100 nanoseconds, and hence offer much faster gate clock speeds.
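Dividing coherence time by gate time gives a back-of-the-envelope count of how many gates fit inside the window of opportunity. Using the ballpark figures above (real devices vary):

```python
# Ballpark figures from the text, all in seconds.
ion_coherence, ion_gate_time = 100.0, 20e-6
sc_coherence, sc_gate_time = 100e-6, 100e-9

ion_gates = round(ion_coherence / ion_gate_time)
sc_gates = round(sc_coherence / sc_gate_time)

print(ion_gates)  # -> 5000000: millions of gates fit in the ion's window
print(sc_gates)   # -> 1000: roughly a thousand for superconducting qubits
```

So although superconducting gates are far faster, the much shorter coherence time means fewer of them fit into a single run.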

A third issue is how good the gate operations themselves are. Suppose that, instead of perfectly flipping a qubit from |0> to |1>, our laser pulse brings the qubit to the state:

A qubit state very close to being a |1>, but not quite there yet.

This is very close to being a |1>, but it’s not quite there. As a result, any subsequent steps in the computation will carry this error forward. For trapped ions, operations on single qubits can be up to 99.999% accurate. In superconducting circuits, the figure is around 99.9% (erroneous once in every 1000 gates — you can see the numbers for yourself on IBM’s device characteristics).
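A quick calculation shows why these accuracy figures matter so much. Under the simplifying assumption that gate errors are independent, the chance a circuit runs error-free is the per-gate accuracy raised to the power of the number of gates:

```python
# Probability that a 1000-gate circuit runs without a single gate error,
# assuming independent errors (a simplification of real devices).
ion_fidelity = 0.99999  # trapped ions, per single-qubit gate
sc_fidelity = 0.999     # superconducting circuits, per single-qubit gate
n_gates = 1000

print(round(ion_fidelity ** n_gates, 3))  # -> 0.99: almost certainly error-free
print(round(sc_fidelity ** n_gates, 3))   # -> 0.368: more likely than not to fail
```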

A research group at the University of Maryland has recently released a comprehensive experimental comparison of these two architectures. Together with collaborators, the team compared the performances of two 5-qubit processors: a superconducting chip from IBM, and an ion trap system built by the Maryland group. The paper and the results can be freely accessed here. The Maryland team is led by Prof. Chris Monroe, founder and Chief Scientist at IonQ, a spinoff of the research lab that is building a full-stack trapped ion quantum computer.

There are numerous other factors that must be borne in mind when assessing the suitability of a platform for quantum computing. The ability to scale the device up to larger and larger sizes, with many more qubits, is perhaps the most obvious. In this regard, superconducting qubit systems are easier to scale than trapped ions, thanks to existing manufacturing techniques for electrical circuitry. On the other hand, because they are artificial and thus suffer from minor fabrication imprecisions, no two superconducting qubits are identical. This is in stark contrast to any two ions of the same atomic species, which are 100% identical.

We’ll revisit these considerations in future articles on quantum computing. It is worth pointing out, however, that at this stage — as with any nascent technology — there remain many unknowns, and it is far too early to say which platform will ‘win’ in the long run. We have focused on the leading candidates right now, but several alternative platforms could become major contenders in the future. These include photonic devices (such as those being developed by the startup Xanadu), topological quasi-particles (being pursued by Microsoft), quantum dots (pursued by Intel), and nano-diamonds.

The last three articles have covered some of the basics of quantum computing. In the next post, we’ll complete our introductory series on quantum technologies by looking at how quantum effects can be exploited in measurement devices, whose precision can exceed what is possible with classical hardware.

QWA is helping to bridge the gap between quantum and business. The go-to for providing suggestions, feedback and questions is our email address info@quantumwa.org.

* Alba Cervera Lierta, Tommaso Demarie, and Ewan Munro.