How The First Superconducting Qubit Changed Quantum Computing Forever

Published in Qiskit · Sep 28, 2022

By Robert Davis, Technical Writer, IBM Quantum and Qiskit

The qubits that power today’s quantum computers come in many different forms. Some quantum processors use photonic qubits, which are made up of single photons of light. Others are based on trapped-ion qubits, which store and process information using charged atoms suspended in an electromagnetic field. Among the most mature architectures is the superconducting qubit. This is a fact that many of us in the quantum community take for granted today, but for the researchers operating at the dawn of quantum computing in the mid-to-late 1990s, it might have come as quite a surprise.

Illustration of a four-qubit superconducting quantum processor fabricated by IBM Quantum circa 2017.

Superconducting qubits are part of a broader family of models that comprise what we call “solid-state” quantum computation — quantum computers that do not rely upon moving parts, and whose construction borrows heavily from fabrication techniques developed for solid-state classical computation. Twenty-five years ago, superconducting qubits and solid-state qubits in general were thought to be little more than a pipe dream. Even after researchers published a demonstration of the first superconducting qubit in 1999, some in the quantum community spent years arguing that the new qubit regime did not constitute a truly quantum system.

Since then, however, superconducting qubits have evolved to become one of the primary forms of qubit technology used by many of the biggest quantum computing companies in the world, including IBM, Google, and others. To understand why that’s the case, we’ll need to look closer at how superconducting qubits work, and how they differ from the physical realizations of qubits that preceded them.

A Primer on Superconductivity

At the most basic level, a superconducting qubit is simply a circuit loop with an electrical current traveling around it. That circuit is made up of metals that become superconducting — i.e., able to conduct current without resistance — when cooled below a certain critical temperature. The current is made up of “Cooper pairs,” a particular kind of electron pairing that only occurs in these superconductive materials.

According to the standard BCS theory of superconductivity, once a material is cooled below its critical temperature, an effective attraction between electrons, mediated by vibrations of the metal’s crystal lattice, overcomes their usual repulsion. This leads to the formation of Cooper pairs, and those paired electrons are able to flow freely through the superconducting material without scattering.

One reason superconductivity is so valuable for qubits is that current flowing through a metal with no electrical resistance dissipates no energy. On paper, at least, the flow of current through a superconductor is lossless: the circuit neither heats up nor leaks energy into its environment, both useful properties for constructing qubits.

But more importantly, the superconducting electrical circuits in our quantum processors are quantized, meaning that they follow the rules of quantum mechanics and take on only discrete states. We can then control and measure the states of these qubits using physical operations like the application of microwave pulses or magnetic flux pulses. And thanks to ongoing research, we can now reliably predict the quantum properties of these electrical circuits from classically computable quantities such as impedance.
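To make that link between classical circuit quantities and quantum energy levels concrete, here is a minimal Python sketch. The component values are illustrative assumptions rather than figures from any particular device; the physics is just that of a quantized LC oscillator, whose levels are spaced by ħω with ω = 1/√(LC).

```python
import numpy as np

hbar = 1.054571817e-34  # reduced Planck constant (J*s)

# Illustrative component values, not taken from any particular device
L = 10e-9    # inductance: 10 nH
C = 0.4e-12  # capacitance: 0.4 pF

omega = 1.0 / np.sqrt(L * C)  # resonance frequency (rad/s)
Z = np.sqrt(L / C)            # characteristic impedance (ohms)

# A quantized LC circuit is a harmonic oscillator: its allowed energies
# are E_n = hbar * omega * (n + 1/2), so adjacent levels sit hbar*omega apart.
print(f"resonance frequency: {omega / (2 * np.pi) / 1e9:.2f} GHz")
print(f"characteristic impedance: {Z:.0f} ohms")
print(f"level spacing (hbar * omega): {hbar * omega:.3e} J")
```

With these values the circuit resonates at roughly 2.5 GHz, squarely in the microwave band that superconducting processors use for control and readout.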

Why Superconducting Qubits Stand Apart

The superconducting qubit may be one of the most widely used qubit models in the field of quantum computing, but it is far from the oldest. In fact, superconducting and other solid-state qubit models did not make their debut until years after the earliest quantum computers were built in the late 1990s. Those first machines included trapped-ion quantum computers and liquid-state nuclear magnetic resonance (NMR) computers, whose qubits differ significantly from superconducting models.

Trapped-ion quantum computers are perhaps a little simpler to understand: ions are trapped in an electromagnetic field, quantum information is stored in the electronic states of those ions, and lasers implement the quantum gates. Meanwhile, in an NMR quantum computer, the spin states of the atomic nuclei in certain synthetic or naturally occurring molecules function as qubits. To operate a liquid-state NMR computer, researchers suspend large ensembles of these molecules in a liquid solvent and manipulate their nuclear spin states by applying an external magnetic field. This ensemble approach makes it nearly impossible to read an individual bitstring out of an NMR computer; instead, the machine outputs averages over the whole ensemble of molecules.

The NMR model is a form of “molecular” quantum computing, which is categorically distinct from the solid-state paradigm that gave rise to the first superconducting qubits. Molecular quantum computers can take a number of different forms, and even today molecular qubits offer some key advantages over solid state alternatives. One advantage is that, unlike solid-state qubits, molecular qubits within a given quantum processor are always identical. Another advantage has to do with qubit coherence times, which are a major challenge in quantum computing hardware design. Because they are relatively isolated from the environment that surrounds them, the atomic nuclei that make up individual molecular qubits enjoy a degree of natural (albeit limited) protection from quantum decoherence.

Superconducting qubits, by contrast, are manufactured, so no two are exactly identical. They may be small, but they are still large enough that fabrication inevitably introduces variations from one qubit to the next, which must be accounted for through system calibrations.

However, those same manufacturing processes also give superconducting qubits important advantages over molecular models. Since many of the fabrication techniques we use to build them have existed and undergone continual refinement since well before the first quantum computers were built, some researchers argue that superconducting qubits will be easier to build at scale, and that they will ultimately enable more fine-grained computational operations.

Similarly, the strong coupling between superconducting qubits and their environment may cause faster decoherence and demand more error correction than in molecular qubits, but it also makes superconducting qubits much easier to control with microwave signals. What’s more, their relatively short coherence times are offset by the fact that quantum logic gates generally run much faster in superconducting systems.

The First Superconducting Qubits

By the turn of the 21st century, many in the research community believed that liquid-state qubits like those used in NMR quantum computation would likely never be scalable enough to create working quantum computers with more than a few dozen qubits. Researchers of the day had found it all but impossible to address individual NMR qubits without affecting others in the system, making it difficult to implement quantum logic gates. They also had no means of re-initializing individual qubits during computations, an important step in implementing effective error correction schemes.

In a 2000 paper, IBMer David P. DiVincenzo noted that, unless these challenges were addressed, “NMR [would] never be a scalable scheme for quantum computing.” That prediction proved quite accurate. NMR quantum computers would continue to play an important role in the field of quantum computing — helping researchers develop useful control techniques that we still use today. However, in general, they began to fall out of favor in the early 2000s as the research community became increasingly interested in exploring the potential of solid state and trapped ion qubits.

In one paper from that era, Oxford University physics professor Jonathan A. Jones commented, “it is widely agreed that if a general purpose quantum computer ever enters widespread use it will almost certainly be based on a solid-state approach.” (Emphasis added.) In another paper, he noted that at least one proposal for a solid-state quantum computer “keeps many of the advantages of NMR, but also manages to tackle some of the most serious difficulties.”

Proposals for solid-state models like the superconducting qubit drew widespread attention during this period. However, even in the years immediately following the debut of the superconducting qubit, at least some members of the quantum community still doubted that the technology would ever advance far enough to be useful for quantum computation.

In part, that’s because superconducting qubits are so much larger than molecular qubits, operating at what is technically the macroscopic scale. Look closely at many of today’s superconducting quantum processors and you can see the qubits with the naked eye, no microscope required. By the late 1990s, there was already some evidence that the laws of quantum mechanics could apply to macroscopic systems, such as experiments demonstrating quantum tunneling across Josephson junctions, which pointed to quantized behavior in superconducting electrical circuits.

The big breakthrough, which today we generally acknowledge as the first superconducting qubit experiments, came in the spring of 1999, when a team of Japanese researchers led by physicist Yasunobu Nakamura published a paper demonstrating the first functional superconducting qubit. Their invention would eventually come to be known as the “charge qubit,” the first of many superconducting qubit designs.

A charge qubit is essentially a quantum LC circuit — i.e., a circuit consisting of an inductor (the “L”) and a capacitor (the “C”), two different kinds of energy storage devices. An inductor stores energy in a magnetic field, while a capacitor stores energy in an electric field.

In the charge qubit circuit diagram shown below, the C represents the capacitor and the Ej represents the Josephson Junction — a quantum mechanical device that serves as the qubit’s inductor. (One can also build a charge qubit with a Josephson Junction and no dedicated capacitor, but we’ll explain that in more detail shortly.) The V0, for voltage, is the qubit’s energy source. The black lines that connect the capacitor, Josephson Junction, and voltage gate represent the superconducting metal that makes up the circuit itself.

Circuit diagram of a superconducting charge qubit. Credit: Srjmas, CC BY-SA 4.0, via Wikimedia Commons.

To understand how a charge qubit works, it helps to think of the circuit as being divided into two regions. The small region on the bottom left of the circuit diagram that stretches from the bottom plate of capacitor C to the Josephson Junction Ej is the superconducting “island.” The rest of the circuit, including the voltage source, is a superconducting “reservoir.” In a charge qubit, the zero and one states that form the basis of all quantum computations correspond to the “charge states” of the island region, i.e., the absence or presence of excess Cooper pairs in the island. For this reason, the superconducting island is also known as a “single Cooper-pair box.”
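We can make the charge states concrete with a small numerical sketch. The standard textbook Hamiltonian for a Cooper-pair box is H = 4·E_C·(n − n_g)² − (E_J/2)·Σ(|n⟩⟨n+1| + h.c.), where n counts excess Cooper pairs on the island, n_g is the offset charge set by the gate voltage, E_C is the charging energy, and E_J is the Josephson energy. The code below is a generic illustration of that model in arbitrary energy units, not a reconstruction of any particular device.

```python
import numpy as np

def cooper_pair_box_levels(EC, EJ, ng, n_cut=10):
    """Energy levels of the standard Cooper-pair box Hamiltonian,
    diagonalized in the charge basis n = -n_cut ... n_cut.

    EC: charging energy, EJ: Josephson energy, ng: gate (offset) charge.
    Energies are in whatever units EC and EJ share.
    """
    n = np.arange(-n_cut, n_cut + 1)
    # Diagonal part: electrostatic energy of n excess Cooper pairs on the island
    H = np.diag(4.0 * EC * (n - ng) ** 2)
    # Off-diagonal part: Josephson tunneling couples neighboring charge states
    hop = -0.5 * EJ * np.ones(len(n) - 1)
    H += np.diag(hop, 1) + np.diag(hop, -1)
    return np.linalg.eigvalsh(H)

# Charge-qubit regime: charging energy dominates (EC >> EJ), so the two
# lowest levels are dominated by the 0- and 1-excess-pair charge states.
levels = cooper_pair_box_levels(EC=1.0, EJ=0.1, ng=0.5)
print("qubit transition energy E1 - E0:", levels[1] - levels[0])
```

Note how the gate charge n_g enters the Hamiltonian directly: this is the mathematical expression of the voltage control described above.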

For a set of Cooper pairs to travel from the voltage gate to the superconducting island, they must first tunnel through the Josephson Junction, which is made up of a very thin layer of insulating material sandwiched between two layers of superconducting material. Notably, this arrangement gives Josephson Junctions a small amount of self-capacitance, making it possible to build a charge qubit consisting of a Josephson Junction, a voltage gate, and no dedicated capacitor.

Cooper pair electrons are able to tunnel through the Josephson Junction because of the “Josephson effect,” a physical phenomenon named for Welsh physicist Brian D. Josephson, who in 1962 predicted that — under certain conditions — pairs of electrons could pass through a non-superconducting material placed between two superconductors. Careful adjustments to the voltage make it possible to control whether the island is in the zero state, the one state, or a superposition of the two.

It is only possible to encode these states because, much like the electrons in a naturally occurring atom, Cooper pairs operating in a superconducting circuit with a Josephson Junction have discrete energy levels. This is one of the reasons that superconducting qubits are often called “artificial atoms.” Meanwhile, Josephson Junctions are crucial in this regard because they function as what we refer to as nonlinear inductors.

What’s a nonlinear inductor? Well, in a regular, linear inductor the magnetic flux is strictly proportional to the current flowing through it, so a charge qubit built with one would behave as a simple harmonic oscillator. Its energy levels would be equally spaced, and attempts to encode a particular state could easily push the qubit into an energy level that isn’t useful for computation (e.g., the two state).

Nonlinear inductance breaks that proportionality, and the energy levels are therefore unequally spaced. As a result, one can cleanly isolate two of the several energy levels to serve as the zero and one states without pushing the qubit into another, undesirable state. This is one of the essential ingredients that make superconducting qubits work.
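For a feel of the numbers, the well-known perturbative expansion for a Josephson circuit deep in the E_J ≫ E_C regime gives E_n ≈ −E_J + √(8·E_C·E_J)·(n + 1/2) − (E_C/12)·(6n² + 6n + 3). The first two terms alone describe a harmonic oscillator with equally spaced levels; the E_C correction is the junction’s nonlinearity. A quick sketch with illustrative energy values:

```python
import numpy as np

# Approximate levels of a Josephson circuit with EJ >> EC; the sqrt term
# alone would give a harmonic oscillator (equal spacing), and the EC
# correction encodes the junction's nonlinearity.
def level(n, EC, EJ):
    return (-EJ + np.sqrt(8 * EC * EJ) * (n + 0.5)
            - (EC / 12) * (6 * n**2 + 6 * n + 3))

EC, EJ = 0.25, 12.5  # illustrative energies (arbitrary units)
for n in range(3):
    print(f"{n} -> {n + 1} spacing:", level(n + 1, EC, EJ) - level(n, EC, EJ))
# Each spacing shrinks by roughly EC, so a drive tuned to the 0 -> 1
# transition is off-resonant with 1 -> 2 and leaves the |2> state alone.
```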

Dawn of the Superconducting Quantum Computing Era

The invention of the charge qubit marked a monumental step forward in the history of quantum computing, one that opened the door to a new era in quantum hardware design. However, in looking back at how we arrived at the first charge qubit, we’ve told only one very important piece of the greater story of superconducting qubits — and that story is far from over.

As one might expect, the first superconducting qubits were very limited in their capabilities. Most of the earliest superconducting qubit designs maintained coherence for less than a nanosecond (0.000000001 seconds). That figure has improved dramatically over the years, with coherence times now approaching a full millisecond. This is thanks in no small part to advances in researchers’ understanding of decoherence, as well as more sophisticated approaches to microwave engineering, new techniques for protecting qubits from their environment, and progress in circuit quantum electrodynamics (cQED) — an entire field of study dedicated to the interaction of nonlinear superconducting circuits with quantized electromagnetic fields — among other things.

Measurement also proved to be a significant challenge. Early qubit readout techniques were highly destructive. They usually involved connecting the qubit to a superconducting quantum interference device (SQUID), a sensitive magnetic field detector whose operation would send a great deal of harmful quasiparticles (i.e., broken Cooper pairs) and heat through the system. The advent of cQED not only produced new architectures for quantum computing, but also introduced the possibility of non-destructive qubit readout, virtually eliminating quasiparticles and heat dissipation near the qubit.

In the years following the publication of the Nakamura paper, the research community would make many other advances, and create many superconducting alternatives to the original charge qubit, such as the phase qubit and the flux qubit. This work would eventually result in the invention of the superconducting transmon qubit, which has become the model of choice for much of today’s quantum computing industry. Indeed, we could devote an entirely separate blog to exploring the impact of the transmon qubit. The transmon model improves upon the original charge qubit’s Cooper pair box design by adding a large shunt capacitor to the circuit, which protects the qubit from charge noise.
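A quick way to see the shunt capacitor’s effect numerically: it lowers the charging energy E_C, pushing the ratio E_J/E_C up, and the qubit’s transition energy then barely depends on the gate charge n_g — which is exactly what “protection from charge noise” means. The sketch below reuses the same textbook charge-basis Hamiltonian as the Cooper-pair box example earlier, again with arbitrary illustrative energies.

```python
import numpy as np

def e01(EC, EJ, ng, n_cut=20):
    """0 -> 1 transition energy of the Cooper-pair box / transmon Hamiltonian."""
    n = np.arange(-n_cut, n_cut + 1)
    H = np.diag(4.0 * EC * (n - ng) ** 2)          # charging energy
    hop = -0.5 * EJ * np.ones(len(n) - 1)          # Josephson tunneling
    H += np.diag(hop, 1) + np.diag(hop, -1)
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

ngs = np.linspace(0.0, 1.0, 51)
for EC, EJ, label in [(1.0, 1.0, "charge-qubit regime, EJ/EC = 1"),
                      (0.2, 10.0, "transmon regime, EJ/EC = 50")]:
    f01 = np.array([e01(EC, EJ, ng) for ng in ngs])
    # Charge noise shifts ng, so a flat E01-vs-ng curve means a stable qubit
    print(f"{label}: E01 varies by {f01.max() - f01.min():.5f} across ng")
```

The trade-off is a modest reduction in anharmonicity, which is why transmon designs keep E_J/E_C large but not unbounded.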

We still have many technological mountains to climb before we are able to achieve truly useful, fault-tolerant quantum computing — where quantum computers have qubits that are sufficiently protected from environmental noise, and circuits that are capable of implementing effective error correction schemes. In recent months we’ve seen impressive advances in quantum error correction and error mitigation, as well as the proliferation of improved processor architectures that significantly reduce the occurrence of errors. However, there is much more to accomplish, and achieving our goals will require a tremendous amount of work from a great many quantum researchers and engineers.

But as the field continues to press forward, it is important to look back and appreciate just how far we have come. In the past twenty years, we have invented many technologies that were once thought to be impossible, and found solutions to many hardware engineering problems that were long thought to be intractable. It’s impossible to say for certain what the future will bring, but this history should give us confidence that we will continue to make progress, and that we will continue to see incredible advances in quantum hardware.
