The ice-cream sales rep problem
Imagine you want to set up a new business for next summer. You have made a first investment in buying an ice-cream truck and now you are organizing the routes that you will follow. How do you design the optimal route that passes through all neighbourhoods of your city?
The first thing you may think about is finding the shortest path. If your city has 3 neighbourhoods, this is a straightforward calculation: you just compute the distance of every possible path and choose the shortest one. Labelling the neighbourhoods from 1 to 3, your computation reduces to checking the distance from your home to 1 to 2 to 3; or to 1 to 3 to 2; or to 2 to 3 to 1; or to 2 to 1 to 3; or to 3 to 1 to 2; or to 3 to 2 to 1. There are no more possibilities.
But maybe your city has 5 neighbourhoods. That means more possible paths: 5! = 120 of them. For 20 neighbourhoods you have 20! = 20x19x18x…x3x2x1 ≈ 2.43x10¹⁸ different paths. As you can see, this problem gets complicated very quickly. In fact, this is a well-known problem, the travelling salesman problem (TSP), and it belongs to the NP-hard complexity class: a class of problems for which no efficient algorithm is known.
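To see the brute-force approach in action, here is a small Python sketch for the 5-neighbourhood case. The distance matrix is entirely made up for illustration:

```python
from itertools import permutations

# Hypothetical symmetric distances between home (0) and 5 neighbourhoods (1-5).
dist = [
    [0, 2, 9, 10, 7, 4],
    [2, 0, 6, 4, 3, 8],
    [9, 6, 0, 8, 5, 6],
    [10, 4, 8, 0, 6, 9],
    [7, 3, 5, 6, 0, 2],
    [4, 8, 6, 9, 2, 0],
]

def route_length(order):
    """Total distance of the route home -> order[0] -> ... -> order[-1] -> home."""
    stops = (0,) + order + (0,)
    return sum(dist[a][b] for a, b in zip(stops, stops[1:]))

orders = list(permutations(range(1, 6)))  # every way to order the 5 stops
best = min(orders, key=route_length)
print(len(orders))                        # 5! = 120 candidate routes
print(best, route_length(best))
```

Checking all 120 orderings is instant here, but the same loop over 20 neighbourhoods would have to examine about 2.43x10¹⁸ orderings, which is hopeless.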
There are classical algorithms that try to find an optimal solution to this problem but, in order to reduce the number of operations needed, these algorithms find the correct solution only with a certain probability, which is still better than nothing. But what if we add more variables to the problem? For instance, besides the shortest distance, we may be interested in finding the fastest route. This depends on distance but also on road quality: if the shortest path takes us up a mountain road, we may not be able to drive faster than a certain speed, which will increase the duration of our route.
Even harder: what happens if, after making the effort to compute the best route, the city council decides to close some road for works? We need an algorithm that, given the possible paths, distances, times, etc., finds the optimal route that passes through all neighbourhoods.
Encode solutions into physical systems
Physical systems tend to settle into their lowest energy state, called the ground state. This is easy to picture: what is easier for you, to climb up a mountain or to walk down it? Going up is costly for us, since we are increasing our potential energy (the energy associated with gravity), so our physical system, our body, prefers to stay in a lower energy state: going down.
So we can encode the information of our optimization problem into a physical system in such a way that its ground state is the solution of our problem. Physical systems are described by a mathematical function called the Hamiltonian. From the Hamiltonian we can derive the properties of the system: its energies, equations of motion, etc.
As an example, let’s consider the Hamiltonian of one of the simplest models in condensed matter physics: the one-dimensional antiferromagnetic classical Ising model. This model describes the physics of a chain of spins, which we can interpret as microscopic magnets that can be in one of two states: +1 or -1. It is the simplest model that explains magnetism. The Hamiltonian of this model is the sum of the interactions between all nearest-neighbour spins, that is

H = S1S2 + S2S3 + … + SN S1 (the chain is closed into a ring, so the last spin SN also interacts with the first),
where each spin S can have two values, +1 or -1. This Hamiltonian corresponds to the energy of the chain: each combination of values of all the spins has an energy value. In particular, the ground state of this model is the one with all spins antiparallel. This state, (+1)(-1)(+1)(-1)… or 010101… in binary notation, has energy -N, where N is the number of spins in the chain. Any other assignment of spin values leads to a higher-energy state.
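A few lines of Python make this energy bookkeeping concrete. This sketch takes the chain closed into a ring, so the last spin also interacts with the first:

```python
def ising_energy(spins):
    """Energy of the antiferromagnetic Ising chain H = sum_i S_i * S_{i+1},
    with periodic boundary: the last spin also interacts with the first."""
    n = len(spins)
    return sum(spins[i] * spins[(i + 1) % n] for i in range(n))

n = 6
antiparallel = [(-1) ** i for i in range(n)]  # +1, -1, +1, -1, ...
print(ising_energy(antiparallel))             # -N = -6, the ground-state energy
print(ising_energy([1] * n))                  # all spins aligned: +6, the maximum
```

Every antiparallel pair contributes -1 to the sum, and there are N such pairs around the ring, hence the ground-state energy -N.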
We can complicate this model by changing the interaction strength between spins (for instance, we can decide that spin 1 and spin 2 interact with twice the strength of all the others, introducing a factor of 2 in front of the S1S2 term in the Hamiltonian above) or by introducing other terms in the Hamiltonian that can change the energy.
Coming back to the ice-cream sales rep problem, we can introduce constraints: each time a constraint is not satisfied, we add a term to the Hamiltonian that increases the energy, a penalty. These constraints encode the problem itself. For example, the interaction between spins could be proportional to the distance travelled: less distance, less energy. The optimal solution of the problem will therefore be the lowest-energy state of the system, and the spin values of that state encode the solution.
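As an illustration, here is a sketch of such a penalized cost function in Python. The encoding (one list entry per stop) and the penalty weight are hypothetical choices for illustration, not the standard TSP-to-Ising mapping:

```python
def tsp_energy(x, dist, penalty=10.0):
    """Hypothetical cost function: x[t] is the neighbourhood visited at step t.
    Travelled distance contributes to the energy; constraint violations
    (visiting a neighbourhood more or fewer times than once) add a penalty."""
    n = len(dist)
    energy = sum(dist[x[t]][x[t + 1]] for t in range(len(x) - 1))
    for city in range(n):
        visits = x.count(city)
        energy += penalty * (visits - 1) ** 2  # zero only if visited exactly once
    return energy

# Toy 3-neighbourhood distance matrix (made-up numbers).
dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
print(tsp_energy([0, 1, 2], dist))  # valid tour: pure distance, 2 + 6 = 8
print(tsp_energy([0, 1, 1], dist))  # repeats 1, skips 2: distance 2 + two penalties = 22
```

With this shape of cost function, any state that violates a constraint sits at a higher energy than every valid tour, so the ground state is guaranteed to be a valid tour of minimal distance (provided the penalty weight is large enough).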
This is precisely an example of a simulation: instead of going through all the possible paths one by one, we look for other ways to simulate the same behaviour.
The physical system that we construct artificially to simulate the possible solutions of a problem has states with different energies, and some of them can be very close to the lowest-energy state. We then need to work out how to “drive” the system to the lowest-energy state, where the optimal solution is encoded. There are many classical strategies to do so: random search, simulated annealing, gradient descent, etc. But, as happens in other cases, quantum strategies can play a relevant role.
Quantum annealing and adiabatic quantum computation
Let’s summarize what we are trying to do. We have encoded an optimization problem with several variables (distance, time, etc.) into a physical system by constructing an artificial Hamiltonian, the function that describes the system and includes all constraints of the problem. The optimal solution of our problem corresponds to the minimum-energy state of the system. We therefore need a method that drives the system to its minimum-energy state.
The space of energy states of a physical system resembles a landscape of mountains and valleys. Each valley is an energy minimum, and the valleys are separated from each other by peaks that represent energy maxima. The solution corresponds to one of these valleys, the lowest one. How can we get there? How can we bypass the intermediate peaks that separate us from the lowest valley?
A classical computation method tackles this by “climbing over” the high-energy solutions. The method starts by increasing the energy (temperature) and then lets the system cool down gradually in order to find the path to the minimum. This is called simulated annealing, in analogy with annealing in metallurgy. However, the method can easily get stuck in a local minimum, that is, a solution with the locally lowest energy but not the true global minimum. This is where the quantum version of the method can be very convenient.
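Simulated annealing itself fits in a few lines of Python. This sketch runs it on the antiferromagnetic Ising chain from before; the cooling schedule and all parameters are arbitrary choices for illustration:

```python
import math
import random

random.seed(1)

def energy(spins):
    """Antiferromagnetic Ising chain, closed into a ring (ground state: -N)."""
    n = len(spins)
    return sum(spins[i] * spins[(i + 1) % n] for i in range(n))

def simulated_annealing(n=10, steps=5000, t_hot=5.0, t_cold=0.01):
    spins = [random.choice([1, -1]) for _ in range(n)]
    for step in range(steps):
        # Exponential cooling schedule from t_hot down to t_cold.
        t = t_hot * (t_cold / t_hot) ** (step / steps)
        i = random.randrange(n)
        before = energy(spins)
        spins[i] *= -1                      # propose flipping one spin
        delta = energy(spins) - before
        # Metropolis rule: always accept downhill moves; accept uphill
        # moves with probability exp(-delta / t), which shrinks as t drops.
        if delta > 0 and random.random() >= math.exp(-delta / t):
            spins[i] *= -1                  # reject the move: flip back
    return spins

result = simulated_annealing()
print(result, energy(result))               # a low-energy configuration
```

Early on, the high temperature lets the system hop over energy barriers; as the temperature drops, it settles into a nearby valley, which may or may not be the global minimum.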
Let’s prepare the system in a quantum superposition of many possible solutions of the problem. We manipulate the system so that the probability of each solution evolves during the annealing process, with lower-energy solutions becoming more and more likely until the optimal solution has the highest probability of appearing when we measure. The quantum tunnelling effect allows the system to “pass through” the high-energy “peaks” instead of climbing over them as in simulated annealing. Quantum annealing is driven by an external magnetic field, which plays the role of temperature in simulated annealing: the quantum annealing starts with a high magnetic field (high temperature) and ends with a low magnetic field (low temperature).
However, this method also has a non-vanishing probability of getting stuck in a local minimum, or even worse: how do we know when we have reached the lowest-energy solution? Is there any way to check that we found it? There is, but we need another quantum method to do so: the quantum adiabatic algorithm.
In adiabatic quantum computation, we start in the ground state of some well-known physical system that is easy to prepare, with Hamiltonian H0. Then we evolve the Hamiltonian of this system adiabatically (very slowly) until it transforms into the problem Hamiltonian, H1. In mathematical terms, we construct the following Hamiltonian:

H(s) = (1 - s) H0 + s H1, with the parameter s running from 0 to 1.
Initially, we prepare the ground state of this Hamiltonian with s=0, so H=H0. Then we increase the parameter s a little and find the ground state of H again. We repeat this process until we arrive at s=1 and, therefore, H=H1. If the evolution is performed slowly enough, the adiabatic theorem guarantees that the state at the end of the computation is the ground state of H1, that is, the optimal solution.
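We can mimic this procedure numerically with a toy two-qubit example. Assume H0 is minus a sum of Pauli-X operators (its ground state is the uniform superposition, easy to prepare) and H1 is a made-up diagonal problem Hamiltonian whose lowest entry marks the optimal solution; exact diagonalization stands in for the physical evolution:

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])      # Pauli-X
i2 = np.eye(2)
h0 = -(np.kron(sx, i2) + np.kron(i2, sx))    # easy Hamiltonian H0
h1 = np.diag([3.0, 2.0, 0.0, 1.0])           # made-up problem Hamiltonian H1;
                                             # minimum at basis state index 2

probs = []
for s in np.linspace(0, 1, 6):
    h = (1 - s) * h0 + s * h1                # H(s) = (1 - s) H0 + s H1
    vals, vecs = np.linalg.eigh(h)           # exact diagonalization
    ground = vecs[:, 0]                      # lowest-energy eigenvector
    probs.append(abs(ground[2]) ** 2)        # weight on the optimal basis state
    print(f"s={s:.1f}  P(optimal)={probs[-1]:.2f}")
```

At s=0 the ground state is the uniform superposition (the optimal state has probability 1/4), and at s=1 the ground state is exactly the optimal basis state: following the instantaneous ground state of H(s) carries the system from an easy-to-prepare state to the solution.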
The term “quantum annealing” is often used interchangeably to describe the quantum annealing process that we have explained above, adiabatic quantum computation, or a process that involves both annealing and adiabatic evolution.
Equivalence between annealing and gate-based quantum computation
In all of our previous articles we have talked about quantum computation in terms of quantum gates, in analogy with classical computation, which works with logic gates. Now we have just described other methods to compute using the properties of quantum mechanics. Which method is better? Can we solve an optimization problem using a gate-based quantum computer? Can we run a quantum algorithm on a quantum annealer? The answers to these questions are subtle and a little technical: specialized readers can check the Adiabatic Quantum Computation review by Tameem Albash and Daniel A. Lidar (arXiv link).
The short answer is that adiabatic quantum computation is (polynomially) equivalent to gate-based computation if it uses what is called non-stoquastic Hamiltonians. And gate-based quantum computation can simulate any Hamiltonian with a finite number of quantum gates and, therefore, perform adiabatic quantum computation. So yes, both ways to compute quantum mechanically are equivalent but under some circumstances that are not always fulfilled.
It is important to point out the word “polynomially”. Yes, both types of quantum computation are equivalent, but a polynomial number of extra operations is needed to realise this equivalence. Polynomial is much better than exponential, and for that reason we regard it as efficient, but it is not free. If we want to run a quantum algorithm such as Shor’s factorization, we can technically do it with an adiabatic quantum computer, but it will cost more in terms of energy and steps than with a gate-based device. Conversely, if we want to solve an optimization problem by finding the ground state of some artificial physical system, we can do it using quantum gates, but in general the effort will be greater than with an adiabatic quantum computer.
However, with all this said, it does not mean that we cannot use a gate-based quantum computer to solve optimization problems. There are quantum algorithms that can solve some problems of this kind, in particular the variational quantum algorithms introduced in the previous article.
Pros and cons of quantum annealing
We shouldn’t forget that we are talking about a future technology still under development, so many pros and cons could change as research moves forward.
An obvious pro of quantum annealing is the kind of problems it can solve. Optimization problems are probably the most common problems affecting us nowadays; it is no coincidence that NASA, Google and Volkswagen are investing in this kind of technology. Applications include task scheduling, machine learning, and traffic-flow problems.
Another pro is that it is apparently easier to run computations with a larger number of qubits, since qubit control does not need to be as strict as in gate-based quantum computation. For that reason, there already exist quantum annealers with more than 1000 qubits.
Although we have presented this as a pro, it can turn into a con. It is still not clear how to prove quantum advantage in this kind of quantum computation; in fact, it has not yet been proved that quantum annealing gives an advantage over classical optimization algorithms. One reason is that error-correction protocols have not been developed for these devices, so the advantage as the number of qubits increases is unclear.
Only a few companies are investing in quantum annealing or adiabatic quantum computation. Interestingly, D-Wave, the first company to launch a commercial quantum computer, develops this kind of technology; they currently have quantum annealers with more than 2000 qubits. After D-Wave announced their first products, other organizations became interested and bought quantum annealers, among them NASA Ames, Google, and Volkswagen, as well as some universities and research centers. It is interesting to note that Google is developing its own quantum annealers at the same time that it is constructing gate-based quantum computers.
Bottom line: quantum annealing is an attractive field with few players at this point. It therefore offers a great opportunity for new entrants, such as Qilimanjaro, the Barcelona-based European adiabatic quantum computing start-up.
After this article in the QWA series on quantum annealing, we are ready to discuss a very important issue in quantum computation that has been mentioned in this and previous articles: quantum advantage. But before that, our next article will introduce a potential application of quantum computing: predicting economic crashes. Stay tuned!
QWA is helping to bridge the gap between quantum and business. The go-to for providing suggestions, feedback and questions is our email address firstname.lastname@example.org.