# The Quantum Quantity Hype — What it Is and Why it Needs to be Addressed

I recently came across an article by Inside IBM Research, and was pleased to see that they were finally addressing the Quantum Elephant in the Room. **Stop Obsessing About Qubits. Their Number Alone Doesn’t Matter**, the title read.

*I smiled and thought to myself:*

“Finally, they’re airing out their dirty laundry and saying what needs to be said.”

But as I continued to read, the headline and the article began to diverge. The article addresses the issue of noise, and how noise affects the error rates of qubits. It was a good start, but then it took a turn for the worse. Their solution was to fix the problem by adding more qubits. Ironically, that solution runs counter to the originally posed problem. **“Qubit number alone doesn’t matter, so what are we doing instead? Increasing the total number of qubits, of course!”**

The author did a great job of presenting the problem:

> Just like a car with a 1,000-horsepower engine is useless if you can’t corner or brake without crashing — a million qubits won’t bring you an inch closer to building a fully-functional quantum computer.

The author then goes on to explain that noise needs to be reduced in order to fix errors at the qubit level, but that there are physical limitations to noise reduction given the physics of superconducting circuits. The only way to overcome the physical noise barrier is to use *Error Correction*. With Error Correction and Millions of Qubits, some day, we will have (cue Dr. Evil Music)…1,000 logical qubits…or will we?

# Qubits as a Vanity Metric

The total number of qubits is nothing more than a vanity metric these days. It’s not uncommon to see hardware companies making grandiose announcements on the total number of qubits they’re deploying on their devices. Even in the article above, IBM Research shows the following image with **1 Million Qubits** being the goal by 2030:

A lot of talk about error correction and qubits, but no mention of the actual error rates…fancy that. Digging further into papers published by the team at IBM, we see 1% error rates, yielding a Quantum Volume of 64. I’ll explain why gate fidelities are important a little bit later in this article.
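For context, Quantum Volume is reported as a power of two: QV = 2^n, where n is the width of the largest “square” (n qubits by n layers) random circuit the device can run successfully. A quick sketch (the helper name is mine, not IBM’s) shows why a Quantum Volume of 64 points at only a handful of usable qubits:

```python
import math

def effective_width(quantum_volume):
    """Quantum Volume is defined as 2**n, where n is the width (and depth)
    of the largest square random circuit the device passes. So the number
    of genuinely usable qubits is just log2 of the reported QV, regardless
    of how many physical qubits sit on the chip."""
    return int(math.log2(quantum_volume))

print(effective_width(64))   # a QV-64 machine behaves like ~6 usable qubits
```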

**PsiQuantum Raises $450 Million to Build its Quantum Computer**

> The funds will be used to expand its team, which currently has about 150 people, and to build a 1 million-quantum-bit machine, said Jeremy O’Brien, co-founder and chief executive of the Palo Alto, Calif.-based company.

> To reach this goal, we’re on a journey to build 1,000,000 physical qubits that work in concert inside a room-sized error-corrected quantum computer. That’s a big leap from today’s modestly-sized systems of fewer than 100 qubits.

**SeeQC: 1 million qubit quantum computers: moving beyond the current “brute force” strategy**

So as you can see, one million qubits seems to be the holy grail of quantum computing, but I believe the hype around quantity will soon have its day of reckoning.

# When is One Million Qubits Actually Valuable?

Before one considers which Quantum Computer on the market is the most state-of-the-art, one must first look at the most important quality of that device: its **Gate Fidelities**.

As you can see in the chart above, a device’s Algorithmic Qubits saturate as a function of total physical qubits. Thus, you get diminishing returns as you scale those physical qubits.

If, however, we start improving those error rates:

All of a sudden, the story becomes far more interesting. The lower the error rates, the fewer physical qubits you actually need. A device with 100 physical qubits and a 0.1% error rate can therefore solve more problems than a device with one million physical qubits and a 1% error rate.
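One way to see this saturation is a toy model in the spirit of algorithmic-qubit benchmarks: assume an n-qubit benchmark circuit needs on the order of n² two-qubit gates, and that it only succeeds while the accumulated gate error stays under a fixed budget. The function and the unit error budget below are illustrative assumptions, not any vendor’s published formula:

```python
import math

def algorithmic_qubits(physical_qubits, error_rate, error_budget=1.0):
    """Toy model: an n-qubit benchmark circuit uses ~n^2 two-qubit gates,
    so it only succeeds while n^2 * error_rate <= error_budget. Usable
    width therefore saturates at sqrt(budget / error_rate), no matter
    how many physical qubits the device carries."""
    depth_limited = int(math.sqrt(error_budget / error_rate))
    return min(physical_qubits, depth_limited)

# A million 1%-error qubits saturate well below a hundred 0.1%-error ones:
print(algorithmic_qubits(1_000_000, 0.01))  # saturates at 10
print(algorithmic_qubits(100, 0.001))       # reaches 31
```

Under this (deliberately crude) model, scaling physical qubits past the error-rate ceiling buys you nothing, while a 10x improvement in error rate roughly triples the usable width.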

# But What About Error Correction?

Error correction is an incredibly powerful tool that we need in order to achieve Broad Quantum Advantage. However, when the underlying error rates are high, Error Correction is not the saving grace it’s made out to be. The chart below illustrates how error correction can improve a device across the aforementioned gate error regimes.
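The overhead problem can be sketched with the standard back-of-envelope surface-code scaling: logical error rate falls off as roughly 0.1 · (p/p_th)^((d+1)/2) for code distance d, and a distance-d patch costs about 2d² physical qubits. The threshold value, target rate, and constants below are textbook-style approximations chosen for illustration, not any device’s measured numbers:

```python
import math

def physical_qubits_per_logical(p, p_th=0.01, target=1e-12):
    """Rough surface-code estimate: logical error per round
    ~ 0.1 * (p / p_th) ** ((d + 1) / 2), and a distance-d patch
    needs ~2 * d**2 physical qubits. Returns the overhead needed
    to push the logical error rate below `target`."""
    if p >= p_th:
        return math.inf  # at or above threshold, correction can't win
    d = 3
    while 0.1 * (p / p_th) ** ((d + 1) / 2) > target:
        d += 2  # surface-code distances are odd
    return 2 * d * d

for p in (5e-3, 1e-3, 1e-4):
    print(f"p = {p}: ~{physical_qubits_per_logical(p)} physical per logical")
```

The point of the sketch: the closer your physical error rate sits to the threshold, the more violently the per-logical-qubit overhead blows up, which is exactly the regime the “just add more qubits” strategy lives in.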

As you scale physical qubit count, you also scale cost. Therefore, if your physical qubits maintain a high error rate, you’re still going to run into scalability problems, even with error correction. Take a look at one of IBM’s latest prototypes:

This thing is an absolute monster. Now imagine an entire football field of these things…this is how IBM plans on getting to 1 million qubits. The higher the error rates, the smaller the gains as you scale your system. At high error rates, you need unreasonably large deployments just to achieve marginal gains.

According to IBM Research:

> And we have to make sure that the fridge — the cryostat — doesn’t collapse. That could happen, if we were to continue adding, by brute force, more and more superconducting circuits to the bottom of the fridge. A cryostat with even one logical qubit made of, say, 500 physical qubits would be a structure of half a ton — and that is simply unfeasible.

This is why error correction is not the silver bullet it’s made out to be. Even if you error-correct your devices, as long as they still have less-than-stellar error rates it becomes incredibly costly to scale them. As the complexity of the problems you are trying to solve scales linearly, your overall costs scale exponentially, which rather defeats the purpose of building Quantum Computers to begin with.

Quantum Computing is therefore a delicate balancing act…the power of your device increases exponentially with algorithmic qubits, but the ratio of algorithmic to physical qubits decreases as a function of error rates. **The industry is betting that it can make noisy devices work by throwing millions of qubits at the problem, because it doesn’t see a trajectory for getting those errors down.** That’s why we hear all this talk about building football-field-sized quantum computers, and why Quantum Computing is still in the realm of science fiction.

But I do see a trajectory for getting the error rates down. This will allow the industry to take advantage of these exponentials rather than fighting them, and to focus on building small machines that can outperform a supercomputer.

# So What’s the Correct Approach to Tackle This Problem?

If you’re thinking, **“well, just build better qubits,”** you’re absolutely right! IBM, SeeQC, Google, and all the other incumbents talk about building millions of qubits, because they *cannot* build better qubits.

In 2013, superconducting qubits looked really good on paper because they promised to scale à la Moore’s Law. Superconducting qubits are manufactured using the same processes as today’s classical silicon chips, so the promise was that qubits would scale the same way transistors do. But then physics got in the way, and noise started to scale as well. Now the industry seems to want only to throw money at the problem rather than go back to the drawing board.

**One candidate that is showing a whole lot of promise is trapped ions.** Trapped ions are more stable and have better connectivity to other qubits than their superconducting counterparts. As a whole, they have lower error rates, and you can keep them in a coherent quantum state for orders of magnitude longer than superconducting qubits.

Many of the problems we want to solve today only require a few hundred high quality qubits. It is abundantly clear that the solution MUST focus on building better qubits, rather than filling warehouses with power hungry devices.

The day of reckoning is finally upon us, and the battlefront of Quantum Computing is forever changed. It’s time to go back to the drawing board and back to basics. The future of Quantum Computing is quality over quantity, and only then will we see Quantum Computers small enough to fit on the surface of a postage stamp, not filling up football fields.