Quantum Computing

Jlspursfan · Published in The Startup · Oct 28, 2019 · 7 min read

An Introduction for Programmers

Today’s computers don’t increase efficiency by solving complicated problems better than humans do. They work by breaking complicated tasks down into lots of simple ones; the advantage computers have is that they complete these minute tasks much more quickly than we can. The limitation of classical computers is that these tasks must occur in sequence, so as a problem scales in complexity, or a database in size, the time it takes to reach a solution scales with it. In many cases (which often represent major thresholds in scientific study or technological innovation), the scale of the problem makes it computationally infeasible for even the most powerful supercomputers to reach a solution within several lifetimes. Quantum computing can surpass the limitations of sequential processing by taking advantage of some of the more interesting aspects of quantum mechanics: superposition, entanglement, and interference.

— HOW IT WORKS —

To explain those phenomena, let’s first take a step back. When we say a computer divides complicated tasks into simple ones, what’s the simplest possible task to complete? Choosing between two options (‘A or B’, ‘True or False’, ‘Heads or Tails’): a binary problem. In computing, binary code (represented as ‘1 or 0’) literally tells the switches in a computer’s circuitry to be ‘On or Off.’ Now, although these binary ‘solutions’, or bits of information, can be communicated in incredibly rapid succession, they must still be read one after the other. A quantum computer can use a more efficient approach. The quantum equivalent of a bit is the qubit: essentially a particle that is ‘loaded’ with information which can be measured.
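To make the contrast concrete, here is a minimal sketch in Python (using NumPy; the representation is an assumption of this example, not a real quantum device): a classical bit holds exactly one of two values, while a qubit’s state is described by two complex amplitudes whose squared magnitudes must sum to 1.

```python
import numpy as np

# A classical bit is exactly one of two values.
bit = 0  # or 1, never both

# A qubit is described by two complex amplitudes (alpha, beta):
# alpha weights state |0>, beta weights state |1>.
qubit = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)], dtype=complex)

# The squared magnitudes are the measurement probabilities,
# and they must sum to 1 (the state is normalized).
probs = np.abs(qubit) ** 2
print(probs)        # [0.5 0.5] -- equal chance of reading 0 or 1
print(probs.sum())  # 1.0
```

This equal-amplitude state is the 50/50 case; the next section covers why the split need not be even.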

Classical Binary Bits vs Quantum Bits

While a bit (or classical particle) must exist in one binary state or the other, a qubit can exist in a quantum state called superposition, in which it essentially exists in both states at the same time. Because quantum mechanics is largely a game of probabilities, the chances of measuring the qubit in State A or State B may be 50/50, but they may also be 70/30, 10/90, or any pair of percentages. You can therefore imagine the qubit’s state as a point on a spectrum, or on the surface of a sphere with States A and B at opposite poles. Either way, superposition is a quantum property that allows a particle to be in multiple states at once, i.e. multiple possibilities exist at the same time. In terms of searching for a solution to a problem, this means a qubit can follow multiple routes at a time, whereas a bit must follow one at a time.
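Those uneven odds correspond to amplitudes whose squared magnitudes give the probabilities. A hedged sketch of the 70/30 case, again in NumPy: each simulated measurement collapses the qubit to one outcome, and repeated measurements reproduce the underlying odds.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Amplitudes are square roots of the desired probabilities:
# |alpha|^2 = 0.7 for state |0>, |beta|^2 = 0.3 for state |1>.
qubit = np.array([np.sqrt(0.7), np.sqrt(0.3)])

probs = np.abs(qubit) ** 2  # [0.7, 0.3]

# Each measurement yields 0 or 1 with those odds; a single
# measurement never reveals the amplitudes themselves.
samples = rng.choice([0, 1], size=10_000, p=probs)
print(samples.mean())  # fraction of 1s, close to 0.3
```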

Dijkstra’s algorithm in action

So, in the case of Dijkstra’s algorithm for finding the most cost-effective route to a destination, instead of exploring each possible path separately (which classical computers are restricted to doing), a set of qubits in superposition can, in principle, explore multiple possible paths simultaneously, reaching the best solution much faster. As the solution to a problem becomes more complex, or the number of inputs becomes larger, the runtime of a classical computation increases dramatically. For certain problems, however, a quantum calculation’s runtime grows far more slowly with the scale of the problem, making it comparatively more efficient.
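The scaling gap is easiest to quantify for unstructured search (a different problem than Dijkstra’s shortest paths, but the canonical example of a proven quantum speedup): a classical scan over N items needs on the order of N checks in the worst case, while Grover’s quantum search needs only about (π/4)·√N iterations. A quick back-of-the-envelope comparison:

```python
import math

# Worst-case query counts for unstructured search over N = 2^n items:
# classical scan ~ N checks; Grover's algorithm ~ (pi/4) * sqrt(N) iterations.
for n_qubits in (10, 20, 30):
    n = 2 ** n_qubits
    classical = n
    quantum = math.ceil((math.pi / 4) * math.sqrt(n))
    print(f"{n_qubits} qubits: {classical:>13,} classical checks "
          f"vs {quantum:>7,} quantum iterations")
```

At 20 qubits the gap is already roughly a million checks versus about eight hundred iterations, and it widens quadratically from there.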

But time is a major issue when it comes to taking advantage of the superposition of quantum particles, because the property breaks down when the qubit comes into contact with any aspect of the device(s) used to measure it. This is due to a physical phenomenon known as the observer effect. Although a particle exhibits both wave-like and particle-like behavior simultaneously, when measured it records only one behavior or the other, and which of the two gets recorded depends on what observation is being made. This creates a bit of an issue when it comes to finding out what information a qubit is carrying in its quantum state.

The second physical property of quantum mechanics, and probably the weirdest, is one that helps us work around the observer effect: entanglement. Physicists have observed (and experimentally verified) this phenomenon, in which two particles can be inexplicably correlated with each other regardless of distance: measure one, and the other’s quantum properties mirror it instantly, without any known medium or signal (though no usable information actually travels between them). Using this mirroring, the computer’s observational component can measure one particle and immediately learn correlated information about its entangled partner(s). Now, manipulate several (or dozens of) qubits into a single entangled state, and we have a network whose joint state spans 2^n possibilities at once (where n is the number of qubits in the network), with all of the qubits’ outcomes correlated with one another.
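The simplest entangled state, the two-qubit Bell state, can be simulated with a plain statevector (a NumPy sketch of the measurement statistics, not a real quantum system): the two qubits are individually random, yet their outcomes always agree.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Bell state (|00> + |11>) / sqrt(2): amplitudes over the four
# two-qubit basis states 00, 01, 10, 11.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(bell) ** 2  # [0.5, 0, 0, 0.5]

# Sample joint measurements: outcome index 0 -> '00', 3 -> '11'.
outcomes = rng.choice(4, size=1000, p=probs)
first_qubit = outcomes // 2   # high bit
second_qubit = outcomes % 2   # low bit

# Each qubit alone reads 0 or 1 at random, but the pair
# always agrees: the outcomes are perfectly correlated.
print(np.all(first_qubit == second_qubit))  # True
```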

So, what can this network do with the shared information it carries to reach the solution? The third physical aspect of quantum mechanics helps answer that. Interference is a property of the wave-like behavior of particles. When two waves are ‘in phase,’ i.e. their crests and troughs line up, they complement each other and amplify the shared wave pattern. This is constructive interference. Destructive interference occurs when two waves are at opposing positions within the wave pattern (out of phase), dampening or even cancelling each other out. A quantum algorithm is arranged so that computational paths leading to wrong answers interfere destructively and cancel out, while paths leading to the correct answer interfere constructively and are amplified, signalling that the information those states carry is the solution the system is looking for.
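Grover’s search algorithm is the textbook use of this cancellation. A toy two-qubit statevector sketch (NumPy arrays standing in for a quantum register; the marked index is an arbitrary choice): after one oracle-plus-diffusion step, the wrong states interfere destructively to zero while the marked state’s probability is boosted to 1.

```python
import numpy as np

n_states = 4  # 2 qubits -> 2^2 basis states
marked = 2    # index of the 'correct' answer (arbitrary for this demo)

# Start in an equal superposition over all four basis states.
amps = np.full(n_states, 1 / np.sqrt(n_states))

# One Grover iteration: oracle, then diffusion.
amps[marked] *= -1             # oracle flips the sign of the solution
amps = 2 * amps.mean() - amps  # diffusion: reflect amplitudes about the mean

# Destructive interference zeroed the wrong states; constructive
# interference pushed the marked state's probability to 1.
print(np.abs(amps) ** 2)  # [0. 0. 1. 0.]
```

For four states a single iteration lands on the answer exactly; larger registers need roughly (π/4)·√N iterations.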

— WHERE ARE WE NOW? —

There are still major obstacles to overcome before the potential of these quantum networks can be realized. While a quantum computer of sufficient size can solve certain computing problems much faster than any classical computer (a capability known as quantum advantage), the largest stable quantum systems today are too small for their solutions to be commercially useful. So, just scale up, right?

Adding qubits to an entangled system is actually very difficult because the network is so fragile. The first pair of qubits in a quantum computing system was entangled in 1998, in a joint effort by IBM, Oxford, Berkeley, Stanford, and MIT. Twenty years later, Google holds the record for the largest operational quantum chip at only 72 qubits.

Although entanglement gives us a (partial) solution to the observer effect, disruption of the necessary quantum state still occurs and limits the useful life span of quantum properties. Unless a quantum system can reach the solution before it falls out of superposition (a process called decoherence), the whole computation fails. So, while potential computing power increases with the number of qubits in the network, that potential is useless unless we can also decrease the likelihood of outside influences knocking the qubits out of superposition.

Current attempts to lower this ‘error rate’ use lasers, magnetic fields, and superconductors to create environments that extend the life span of the quantum state (a life span measured in microseconds). As error rates improve, certain thresholds will allow further breakthroughs in observing system behavior and in developing quantum algorithms based on those observations. A few of the key players in the industry already allow general access to their quantum computing networks via the cloud, in order to democratize the R&D process for these systems.

Once enough progress is made, and a stable qubit network of sufficient size and low enough error rate is achieved, quantum computers will (in theory) not only solve classical problems much more quickly; they will be able to solve problems that classical machines simply cannot solve at all. This milestone is referred to as quantum supremacy, and it is thought to be the point at which the proverbial floodgates open for the field and theoretical computer science is turned on its head. However, some claim that supremacy is impossible because certain physical laws and theories simply don’t allow scaling to that extent.

— WHAT’S POSSIBLE? —

Once supremacy is reached, which may still be several years away (assuming it’s possible at all), the computational power achieved will open insights into complex problems across many fields of research and technological advancement. Not only will running queries across massive and complex data sets be far more efficient, advancing the field of machine learning at an unprecedented rate, but the vastly improved ability to simulate complex molecular structures and their behavior could revolutionize medicine. Such simulation power can also be used to maximize energy efficiency across industries and technologies; this comes in addition to quantum computing itself being more energy-efficient for solving classical problems at scale.

Quantum computers won’t replace classical computers (nor most classical programming needs); instead, they will mostly be used in conjunction with today’s machines. A couple of fields, though, are likely to change dramatically. The rate of progress and the future possibilities for AI and machine learning are likely to expand drastically when combined with the power of quantum computing. And cybersecurity will almost certainly shift largely (if not completely) toward quantum technologies, as even the best classical encryption techniques available today are thought to be crackable by the quantum systems expected within the next decade or so. This particular advantage (although there are many) is probably the biggest driving factor in today’s race to reach quantum supremacy.
