Driving factors of Quantum Computing | Quantum Computing Primer — Chapter 2
Two key factors are driving quantum computing: physical limitations, as transistor sizes approach the atomic scale, and theoretical limitations in our algorithms that prevent us from solving complex real-world problems. In this chapter we will look at both in detail (again, in a curated fashion).
If you’re new to quantum computing, please go through Chapter 1 to get a handle on the terminology.
Table of Contents
- A Changing Computing Landscape
- Computing Beyond Moore’s Law
- Theoretical Limitations
- The Rise of Quantum Computing
A changing computing landscape
Before we can understand quantum computing and its applications, we must take a look at how its predecessor — classical computing (transistor-based computing) — has reached its limits.
Note: classical bits (stored on transistors) are the basic units of information processing in a classical computer.
They are essentially electronic on/off switches embedded in microchips that flip between 0 and 1 to process information. The more transistors on a chip, the faster the chip can process electrical signals, and the more capable the computer becomes.
COMPUTING BEYOND MOORE’S LAW
In 1965, Intel co-founder Gordon Moore observed that the number of transistors per square inch on a microchip had doubled every year since the integrated circuit was invented in 1958, while the cost per transistor had fallen by half. This observation is known as Moore’s Law.
Moore’s Law is significant because it means that computers get smaller, faster, and cheaper over time.
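To make the doubling rule concrete, here is a minimal sketch in Python; the Intel 4004 starting point and the two-year doubling period are illustrative assumptions rather than figures from this chapter.

```python
# A minimal sketch of Moore's Law as a doubling rule. The Intel 4004
# starting point (1971, ~2,300 transistors) and the two-year doubling
# period are illustrative assumptions, not figures from this chapter.

def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Project transistor counts under a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```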
However, Moore’s Law is slowing down (some say to a halt), and consequently, classical computers are not improving at the same rate they used to.
Intel, unsurprisingly, has relied on Moore’s Law to fuel chip innovation for the last 50+ years. Now, Intel, along with other computer manufacturing giants, has suggested that transistor-based computing is approaching a wall.
Transistor-based computing is approaching a wall. — Intel
Sometime in the 2020s — if we want to continue to reap the benefits of exponential growth in computing power — we will have to find a fundamentally different way of processing information.
Theoretical Limitations
In 1981, the Nobel laureate Richard Feynman asked, “What kind of computer are we going to use to simulate physics?” In the same speech, he argued:
Nature isn’t classical, dammit, and if you want to make a simulation of Nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.
Let’s illustrate the difficulty with a real example. Fertilizer production accounts for approximately 1.2% of the world’s energy consumption. The current Haber-Bosch synthesis process uses a metal catalyst to combine hydrogen with nitrogen at high temperature and pressure to form ammonia.
Unfortunately, this process consumes a lot of energy. Soil bacteria, by contrast, produce nitrogenase enzymes that pull nitrogen from the air to make ammonia at ambient temperature, and scientists have been studying this nitrogen-fixing process for a while.
However, while our computers can process petabytes of information (1 petabyte = 1,000,000 GB), they can barely simulate the chemical interactions of the iron-sulfur (Fe4S4) cluster of the nitrogenase enzyme, a very small portion of the enzyme containing only four iron and four sulfur atoms. As the number of atoms increases, the number of interactions grows exponentially and becomes intractable for classical methods. This exposes a fundamental question about our current computers: how can we speed up computation?
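Where does the exponential wall come from? Describing n interacting two-level systems (electron spins, for instance) exactly takes 2^n complex amplitudes, so memory alone explodes long before processing power becomes the issue. A rough sketch, assuming 16 bytes per double-precision complex amplitude:

```python
# Back-of-the-envelope memory cost of exactly simulating n interacting
# two-level quantum systems: the state vector holds 2**n amplitudes.

def memory_bytes(n, bytes_per_amplitude=16):
    """Memory for a full state vector of n two-level systems
    (16 bytes per amplitude assumes double-precision complex numbers)."""
    return (2 ** n) * bytes_per_amplitude

for n in (8, 30, 50, 100):
    print(f"{n:>3} particles -> {memory_bytes(n):.3e} bytes")
# 50 particles already need ~18 petabytes of memory; at 100 the
# requirement exceeds any conceivable classical machine.
```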
For example, A.I. training processes huge amounts of data, and we want to parallelize that processing as much as possible. With a dual-core CPU, we can process two tasks at a time. But even with an 8-core CPU, this is far from adequate for many A.I. problems.
(Figure: scheduling tasks on a dual-core CPU)
Therefore, A.I. models are trained on GPUs (graphics processing units) instead. A high-end GPU has over 4,000 cores that can run the same pipeline of operations on thousands of data elements simultaneously.
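As a minimal sketch of this kind of data parallelism, here is a stand-in workload fanned out over a pool of worker processes with Python’s standard multiprocessing module (the task itself is a made-up placeholder):

```python
# Data parallelism on CPU cores: independent tasks split across
# a fixed pool of workers, two at a time on a dual-core machine.
from multiprocessing import Pool

def task(x):
    # Stand-in for one unit of training work.
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    inputs = [200_000] * 8
    with Pool(processes=2) as pool:  # a dual-core schedule: 2 tasks at a time
        results = pool.map(task, inputs)
    print(len(results), "tasks completed")
```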
But this strategy still has a significant limitation. For many real-life problems, the complexity grows exponentially (rather than linearly) with the size of the input.
This is the reason behind the scalability issue in the fertilizer example.
If a problem requires an exponential amount of data to describe, that is bad news: it will likely also take an exponential number of operations to manipulate that data.
As Feynman put it:
If doubling the volume of space and time means I’ll need an exponentially larger computer, I consider that against the rules.
Data parallelization improves performance only linearly, so it is not a cure for problems with exponential complexity. We need new concepts to break the curse.
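A quick sketch shows the mismatch: if the work grows as 2^n with input size n, a 4,096-core GPU (an illustrative figure) only divides the runtime by a constant, which a handful of extra inputs wipes out.

```python
# Linear parallel speedup versus exponentially growing work.

def time_units(n, cores):
    """Idealized runtime: 2**n operations split evenly across cores."""
    return 2 ** n / cores

for n in (20, 30, 40):
    print(f"n={n}: 1 core -> {time_units(n, 1):.1e}, "
          f"4096 cores -> {time_units(n, 4096):.1e}")
# Going from n=20 to n=40 multiplies the work by 2**20 (about a
# million); 4,096 cores claw back only a factor of 4,096.
```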
Enter quantum computing.
THE RISE OF QUANTUM COMPUTING
Quantum computers could offer a huge efficiency advantage for solving certain types of computations that stump today’s computers — and would continue to stump them even if Moore’s Law were to carry on indefinitely.
For starters, think about a phone book, and imagine you have a specific number to look up in that phone book. A classical computer searches the phone book line by line until it finds and returns the match. A quantum computer running a search algorithm such as Grover’s could, in theory, find the entry in roughly the square root of the number of steps a classical machine needs, by assessing many entries in superposition at once. That is not instantaneous, but for large phone books the speedup is dramatic.
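The gap is easy to quantify. On average, a classical scan of N entries takes about N/2 checks, while Grover’s algorithm needs on the order of √N steps (π/4 · √N is the standard iteration count). A small comparison:

```python
# Unstructured search: average classical checks versus Grover steps.
import math

def classical_checks(n_entries):
    return n_entries / 2  # average-case linear scan

def grover_steps(n_entries):
    return math.pi / 4 * math.sqrt(n_entries)  # standard Grover count

for n in (10**3, 10**6, 10**9):
    print(f"N={n:>13,}: classical ~{classical_checks(n):,.0f}, "
          f"Grover ~{grover_steps(n):,.0f}")
```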
Problems like these, which require finding the best combination of variables under constraints, are often called optimization problems. They are some of the most complex problems in the world, and solving them better could have game-changing benefits.
Imagine you are building the world’s tallest skyscraper, and you have a budget for the construction equipment, raw materials, and labor, as well as compliance requirements. The problem you need to solve is determining the optimal combination of equipment, materials, and labor to maximize your investment. Quantum computing could help factor in all of these variables to plan massive projects most efficiently.
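A toy version of this kind of planning problem shows why brute force stops scaling; every name and the cost function below are made up for illustration:

```python
# Brute-force optimization: enumerate every combination of choices
# and keep the cheapest. Feasible at 12 combinations, hopeless at 2**k.
from itertools import product

equipment = ["crane A", "crane B"]
materials = ["steel X", "steel Y", "steel Z"]
labor = ["crew 1", "crew 2"]

def cost(combo):
    # Stand-in objective; a real model would price each choice.
    return sum(len(item) for item in combo)

best = min(product(equipment, materials, labor), key=cost)
print("best combination:", best)
# Here there are only 2 * 3 * 2 = 12 combinations, but with k
# independent binary choices the count is 2**k.
```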
Optimization problems arise across industries, including software design, logistics, finance, web search, genomics, and more. While the toughest optimization problems in these industries stump classical computers, they may be well suited to quantum machines.
Quantum computers also improve along a different axis than classical computers, whose progress relies mainly on advances in the materials and fabrication of transistors and microchips.
Quantum computers do not use transistors (or classical bits). Instead, they use qubits.
Qubits are the basic units for processing information in a quantum computer.
A qubit is not limited to being 0 or 1; it can occupy a superposition that has properties of both values simultaneously. Right away, there are a whole lot more possibilities for performing computations.
Additionally, quantum computers rely on naturally occurring quantum-mechanical phenomena known as superposition and entanglement. These phenomena, when harnessed for computing purposes, can dramatically speed up certain immense computations.
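For a concrete picture of a qubit, here is a minimal NumPy sketch: a qubit state is a pair of complex amplitudes, and measurement probabilities are the squared magnitudes of those amplitudes.

```python
# A qubit as a two-component complex vector.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # the classical-like 0 state
ket1 = np.array([0, 1], dtype=complex)  # the classical-like 1 state
plus = (ket0 + ket1) / np.sqrt(2)       # an equal superposition of both

print("P(0), P(1):", np.abs(plus) ** 2)  # [0.5 0.5]

# n qubits need 2**n amplitudes, which is where the extra
# computational possibilities come from.
three_qubits = np.kron(np.kron(plus, plus), plus)
print("amplitudes for 3 qubits:", three_qubits.size)  # 8
```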
Some of the most advanced quantum computing chips available today, under development by Berkeley-based startup Rigetti Computing, make use of up to 19 qubits, and the company is working toward a 128-qubit chip targeted for late 2019.
However, the race to build the most powerful quantum computer with the most qubits has been underway since at least the late 1990s.
In 1998, researchers at Oxford University in the UK announced a breakthrough: the ability to compute with two qubits. Fast forward to 2017, and IBM demonstrated computation on 50 qubits. The qubit count grew 25x in roughly 20 years, a seemingly slow start compared with today’s pace of advancement.
In 2018, Google demonstrated information processing on 72 qubits, and in August of that year Rigetti Computing announced plans for a 128-qubit quantum chip.
Steve Jurvetson, managing director of the investment firm Draper Fisher Jurvetson and an investor in the quantum computing company D-Wave Systems (an early leader specializing in hybrid quantum-classical machines), dubbed this steady increase in the capacity of quantum computers “Rose’s Law.”
Rose’s Law for quantum computing parallels the idea behind Moore’s Law for semiconductor processor development. In short, quantum computers are already getting really fast, really quickly.
This concludes Chapter 2, where we quickly went through the factors pushing quantum computing forward.
In Chapter 3, we will look at the types of quantum computing.