What Are Quantum Computers Good for?

Mike Olson
Published in The Startup
Dec 17, 2020


This is part one in a series I’m writing on quantum computing. Part two is here.

I’ve spent some of my pandemic time thinking about quantum computers.

Researchers in academia and at companies like Google, Microsoft, Amazon, IBM, Rigetti and D-Wave have been working on them for some time. They’ve made enough progress that you can actually swipe your credit card and rent time on a genuine quantum machine in the cloud, where you can write and run an honest-to-goodness quantum computer program.

Despite that, if you pick a computer programmer at random and ask her what quantum computers are for, she’ll hem and haw and then venture, hesitantly, that quantum computer programs can find prime factors of big numbers pretty well, and that that’s going to totally break encryption on the internet. If you press her, she’ll admit that, even so, encryption on the internet still seems to be working okay.

So what’s the deal? Now that we have all this hardware, where the heck is the software?

We aren’t smart enough

I wrote my first computer program in 1977. At that time, I had nearly forty years’ worth of tooling, experience and advice to build on. Beginning in World War II, people made computers and used them to break German ciphers and plot out bomb trajectories. They figured out shortcuts and made themselves tools to make those jobs easier. The process snowballed as governments and industries built and bought computers to count the census and manage inventory. More shortcuts, more tools made the computers easier to use, and capable of more.

Those decades of experience allowed 1977 me to write code on my Apple II that played music, and a graphical game that let you bet on simulated horse races. I even added music to the horse racing game.

The German cipher people and the bomb trajectory people probably didn’t think much about a Midwestern teen toggling a speaker, using pseudo-random numbers, updating arrays or painting pixels in a screen buffer. My code relied on clever hacks, algorithms, data structures and ways for humans to interact with computers. Those had accreted slowly over the four intervening decades.

Most of all, I had concepts and programming languages that let me think about software architecture in order to build my game. I could borrow ideas from other systems. Those intellectual frameworks just didn’t exist in the 1940s. They had to be invented.

We’re at the dawn of the quantum computing era today. It’s the equivalent of 1942, and the earliest quantum practitioners are laboriously writing their equivalents of deciphering code and trajectory computation, stringing together low-level operations on these new machines. We lack the high-level tools and abstractions that have made us so good at using digital computers.

I say that we aren’t smart enough, and I mean it, but I mean mostly that we aren’t smart enough yet. We need many more smart folks working with these machines early in their careers, to develop years’ worth of intuition and experience to guide them as they build quantum software. We badly need them to invent new tools, languages and abstractions, so that the generations of programmers that follow don’t need to keep track of so much detail, to think so close to the hardware, when designing their programs.

Quantum computers are weird

I’m reasonably confident we will do that learning and inventing, but it’s a whole bunch of new work. The abstractions we’ve invented for traditional computing are of little use in quantum programming.

I’ll spare you a tour of quantum mechanics. It’s hard to be correct about the science and still interesting to non-specialists. But quantum computers are profoundly different from the kind we are used to.

Normal computers work on bits, which can be one or zero. We have a ton of experience working with bits.

Quantum computers work on qubits. Qubits aren’t just different from bits — they’re different from everything else of which you have direct personal experience. Actual qubits in a quantum computer are electrons oscillating in a superconductor cooled to near absolute zero, or ytterbium atoms bounced around in a vacuum by lasers, or something else you can’t actually pick up. They behave according to the laws of quantum physics, which I have already promised not to bore you with.

Unlike bits, qubits are inherently continuous. They encode a “relative phase,” which can sit anywhere along a continuum, not just at one or zero. They are inherently random — if you measure one in the right way, you have no idea in advance what value you’ll get. You can control the probabilities as precisely as you like, but you have to wait until you look to know what’s inside. A qubit carries both a relative phase and a value; a bit just stores one thing.
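To make that concrete, here’s a minimal sketch using Google’s open-source Cirq library (my choice for illustration; the ideas aren’t tied to any one toolkit). It puts a single qubit into an equal superposition and measures it a thousand times. Each individual measurement is random, but the probabilities are under our control:

```python
import cirq

q = cirq.LineQubit(0)
circuit = cirq.Circuit(
    cirq.H(q),                    # put the qubit in an equal superposition
    cirq.measure(q, key='m'),     # looking at it forces a zero or a one
)
result = cirq.Simulator().run(circuit, repetitions=1000)
print(result.histogram(key='m'))  # roughly {0: ~500, 1: ~500}
```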

And to bend your mind the most, qubits can be entangled. Two or more can be bound together so that they behave as a single system. If you then separate them and measure one on its own, the results on all the entangled others are correlated with it, no matter how far apart they are. This idea was sufficiently offensive to Albert Einstein that he derided it as “spooky action at a distance,” and insisted that quantum mechanics must be incomplete because of it.
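In code, entangling two qubits takes just one more gate. A sketch, again in Cirq: after the CNOT, the two qubits always agree when measured, even though each individual outcome is random:

```python
import cirq

a, b = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.H(a),                    # superpose the first qubit
    cirq.CNOT(a, b),              # entangle the second with the first
    cirq.measure(a, b, key='m'),
)
result = cirq.Simulator().run(circuit, repetitions=1000)
print(result.histogram(key='m'))  # only 0 (binary 00) and 3 (binary 11) show up
```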

Continuous values, inherent randomness, entanglement — not at all clear how you program music or horse racing out of pieces like that. We need a generation or two of day-in, day-out work by smart people to get used to them, and to figure out what the new abstractions for quantum should be.

GPUs

Even though we aren’t smart enough, yet, there is an example of special-purpose computing that’s making a difference in the world already, and that I believe is a good model for how we’ll use quantum computing.

Graphics Processing Units, or GPUs, showed up a couple of decades ago to make the mayhem in your game of Grand Theft Auto more realistic. The technology wasn’t weird, the way quantum computers are. GPUs are good at doing lots of medium-precision math very, very quickly. That helps if you are trying to draw shapes on a screen fast enough to fool human eyeballs. It has turned out, serendipitously, that GPUs are also good at the kind of math that current-generation AI systems need to do.

Nobody, though, builds whole computers just out of GPUs. They’re generally added, in modest numbers, to existing traditional machines. Most of the code that developers write runs on the regular computer. When the program needs to paint a car crash in a hurry, it shoots a request off to the GPU, which does its electrical magic and delivers a bunch of polygons back, stat.
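That offload pattern looks something like this in code. Here’s a sketch using the CuPy array library (my choice for illustration; it assumes an NVIDIA GPU):

```python
import numpy as np
import cupy as cp  # GPU-accelerated array library; needs an NVIDIA GPU

# Most of the program lives on the ordinary CPU...
positions = np.random.rand(1_000_000, 3).astype(np.float32)

# ...but the heavy, parallel arithmetic ships out to the GPU.
gpu_positions = cp.asarray(positions)                  # copy to GPU memory
gpu_distances = cp.linalg.norm(gpu_positions, axis=1)  # runs on the GPU
distances = cp.asnumpy(gpu_distances)                  # copy the answer back
```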

I expect that we’ll see quantum computers appear first, and perhaps only, as “quantum processing units” attached to existing computer architectures in much the same way. Most software will still be built from the kinds of algorithms we are used to already. When a program needs to ask a quantum question and compute a quantum answer, it will farm that work out to the QPU.
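There’s no standard QPU interface yet, but the shape of such a program is easy to guess. Here’s a sketch, with the quantum step standing in as a simulated Cirq circuit (the function name and the division of labor are my invention):

```python
import cirq

def ask_quantum_question(n_qubits: int, shots: int):
    """The hypothetical 'QPU call': sample from a small quantum circuit."""
    qubits = cirq.LineQubit.range(n_qubits)
    circuit = cirq.Circuit(
        [cirq.H(q) for q in qubits],
        cirq.measure(*qubits, key='m'),
    )
    return cirq.Simulator().run(circuit, repetitions=shots)

# The classical program does its usual work, asks its quantum question...
samples = ask_quantum_question(n_qubits=4, shots=100)
# ...then goes back to ordinary classical post-processing.
print(samples.histogram(key='m'))
```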

Beyond NISQ

While it’s true that real and interesting hardware is available today, it’s really still the early days for that, too. Current-generation systems are called “noisy intermediate-scale quantum” computers, or NISQ, by practitioners in the field.

It’s noisy because there are lots of things that interfere with qubits. A passing cosmic ray can mess with them. They’re quantum, and if you know anything about that cat in the box, with the radium and the cyanide, you know that when quantum stuff interacts with classical stuff, quantumness goes away and the cat dies about half the time. Quantum computers are made of lots of classical-mechanical pieces; keeping qubits isolated is hard.

They’re intermediate scale because the biggest ones today have only fifty or so qubits. That’s partly because supercooled superconductors are bulky and complicated and fiddly to work with, but it’s also because when you get a whole bunch of quantum particles together, they sort of ease into being classical physical systems.

Researchers are pretty sure that they can get to hundred-, thousand- and eventually million-qubit machines over the course of the next ten years or so. That’s a considerable hill to climb, but the people making the promises are the same smart engineers who have to do the work, so I am inclined to believe them.

A million qubits is a good number. First of all, a million entangled qubits would give you some amazingly complex parallel computational capacity. Besides that, though, with a million qubits, you could start to spend some of them on redundancy and error correction — watching for the effects of those cosmic rays. A million qubits would get past intermediate scale, and error correction would get past noisy, and suddenly you wouldn’t be NISQ, you’d just be Q.
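Quantum error correction is subtler than anything classical (you can’t simply copy a qubit), but the budgeting idea already shows up in the classical repetition code: spend three noisy bits to store one logical bit, and a majority vote washes most errors out. A sketch:

```python
import random

def encode(bit):            # spend three physical bits on one logical bit
    return [bit, bit, bit]

def noisy(bits, p=0.1):     # each bit flips independently with probability p
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):           # majority vote recovers the logical bit
    return int(sum(bits) >= 2)

trials = 100_000
errors = sum(decode(noisy(encode(0))) != 0 for _ in range(trials))
print(errors / trials)      # about 0.028, well below the raw error rate of 0.1
```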

The very first digital computers were built out of vacuum tubes, relays, magnetic donuts and wires. They sucked down massive amounts of electricity, failed due to things like friction and heat, and occupied whole rooms.

We got lucky, though. We discovered semiconductors and invented transistors, and those two advances let us get rid of a huge amount of other gear. They let us build digital computers in a new way. That’s why you have a phone that is actually a supercomputer that fits in your pocket.

No one is forecasting any similar advance in materials or electronics for quantum machines. High-temperature superconductors might help, if we got lucky, but they’re not essential to get to a million qubits. No miracles are expected, but none are required.

Nevertheless…

We are not, as I say, smart enough to use quantum computers, yet. Folks are working on the tools and languages and abstractions while you read this. I am optimistic that, in forty years — even in ten! — we’ll be much smarter.

In the meantime, though, we do know enough to describe the rough shape of a quantum problem.

Qubits are continuous, random and can be entangled. We can imagine algorithms that would benefit from those properties, even if we are not yet sure of how to compose the actual program.

In 1982, the physicist Richard Feynman wrote:

Nature isn’t classical dammit, and if you want to make a simulation of Nature you better make it quantum mechanical, and by golly it’s a wonderful problem because it doesn’t look so easy.

Feynman was right most of the time. This seems like a safe bet. Using the quantum properties of qubits to build models of quantum systems in nature ought to work. We can expect new insights and big advances in quantum chemistry, for example; protein folding and photosynthesis are just two physical processes that depend on quantum behavior.

Theoretical computer scientists talk about “NP-hard” problems — problems so hard that, as far as we know, classical digital computers and algorithms take far, far too long (like, lifetime-of-the-universe long) to find answers.

Quantum computers may give us a way to attack some of those problems, either on a solid theoretical basis, or merely in practice. If we can model the system we want to analyze with a collection of entangled qubits, and if we can use inherent randomness to sample possible solutions, we might be able to guess at a correct answer, or even an approximate answer, quickly.
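Here is a classical caricature of that sample-and-check pattern, using a toy subset-sum problem. Checking a guess is cheap; the pain is in how many guesses you need, and the hope is that a quantum sampler could concentrate probability on the winners:

```python
import random

numbers = [13, 42, 7, 29, 55, 3, 18, 91]
target = 120

def random_guess():           # sample a candidate solution at random
    return [n for n in numbers if random.random() < 0.5]

for _ in range(10_000):
    guess = random_guess()
    if sum(guess) == target:  # verifying a guess is the easy part
        print("found:", guess)
        break
```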

This seems abstruse, and it is, right now. We need to know whether the guess about using quantum algorithms on NP-hard problems is right, in which cases, and subject to what constraints. But if it turns out to be true, then we’ll be able to use computers in whole new areas where they simply don’t work today.

Finding the prime factors of large numbers is a problem of this flavor — not known to be NP-hard, but intractable for classical machines at the sizes that matter. You’ll remember that our imaginary programmer heard that quantum computers were good at this. That’s true. If we have million-qubit machines finding prime factors in ten years, then the encryption that the internet relies on today will stop working.
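The quantum magic in Shor’s algorithm is a single step: finding the period of a^x mod N quickly. Everything around it is ordinary classical number theory. Here’s a sketch with the quantum step replaced by brute force (fine for 15, hopeless for a thousand-digit number):

```python
from math import gcd

def find_period(a, n):
    """Brute force. This is the step a quantum computer does fast."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_style_factor(n, a=2):
    if gcd(a, n) > 1:
        return gcd(a, n)          # lucky: a already shares a factor with n
    r = find_period(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None               # unlucky choice of a; try another
    return gcd(pow(a, r // 2) - 1, n)

print(shor_style_factor(15))      # 3
```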

Don’t worry about that, though. We have other kinds of encryption that don’t care about prime factoring, and we’ll switch to them safely ahead of time. In fact, we already know that quantum systems give us new tools to encrypt and secure data and communications, and that’ll be a rich area of research and practice.

The field of machine learning is already an area of intense interest. Current-generation ML systems rely on a (repeated) step called optimization, which uses some moderately complicated matrix arithmetic. There are cases where current-generation quantum machines can do that math well, and I would not be surprised if, before long, some of Google’s and Amazon’s recommendation engines touch their quantum computers in order to suggest web sites or tennis shoes to you.
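To give a flavor of what “optimization” means here, a tiny classical example in NumPy: gradient descent fitting a linear model, which is matrix arithmetic repeated until the parameters stop improving. It’s steps like these that quantum linear-algebra routines aspire to speed up:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # input features
y = X @ np.array([1.5, -2.0, 0.5])      # targets from a known linear rule

w = np.zeros(3)                         # parameters to learn
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
    w -= 0.1 * grad                     # take a small downhill step
print(w)                                # close to [1.5, -2.0, 0.5]
```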

Quantum computing researchers have introduced a new notion, quantum supremacy: the point at which a quantum system demonstrably does something that no classical system could do in any feasible amount of time. Current-generation systems have achieved quantum supremacy for a small number of carefully chosen problems.

We need to find more problems where we can prove quantum supremacy. We need to figure out if solving those problems is actually useful.

Modeling nature, attacking intractable classical computations, encryption and machine learning and more are all likely to rely on quantum computers, ten years hence. We’ll certainly know more.

I’m keen to see what those ten years teach us. I look forward to reading the algorithms book, learning the language and firing up the text editor that were all created exactly for quantum programming.

Acknowledgments and further reading

I read a ton of stuff over the last months, but won’t try to provide a comprehensive bibliography here. Instead, I’ll acknowledge the people and the specific media that taught me the most.

Ashlee Vance wrote an article about a quantum computer company that got me wondering what quantum computers were actually good at. I suggested he write an article to explain it to me. He declined, so I launched my own effort. Could have saved me some trouble, man.

Mike Miller pointed out that if you want to understand what quantum programs are for, you should learn to write quantum programs.

That led me to Programming Quantum Computers by Eric Johnston, Nic Harrigan and Mercedes Gimeno-Segovia. This was a great text, and the cloud simulator was tremendous as a learning tool. I confess it took me an embarrassingly long time to get through the material. Quantum computers really are weird.

My pal Eli Collins at Google pointed me at key publications and projects. Those were directly useful and good jumping-off points into the literature broadly.

The best general introduction I found to the topics I’ve covered here was John Preskill’s Quantum Computing in the NISQ Era and Beyond. This paper was first delivered as a keynote address at the “Quantum Computing for Business” conference in December 2017, and I wish I had been in that room.

Hartmut Neven’s presentation at Google’s 2020 Quantum Summer Symposium is a fantastic follow-on to the Preskill paper. It captures the advances in the state of the art in the ensuing two and a half years, and lays out the roadmap to a million qubits that I described, above.

Updates:

Several copy edits for grammar and clarity. Turns out you shouldn’t proofread your own text.

Added link to part 2 in the series.
