It’s time to start preparing for Y2Q

Quantum computers are coming; we need to be prepared before they arrive

Jirka Bulrush
5 min read · Jun 24, 2022
IBM Quantum computer. Image source: IBM Research, distributed under a CC BY 2.0 Generic license.

Y2Q is an acronym coined for the approaching day zero of modern cryptographic security: the date when the primary methods used to secure communication over the internet become obsolete. And it's coming soon.

“Y2Q” stands for “years to quantum” and is an explicit reference to Y2K. The difference is that, unlike Y2K, there is no predetermined date on which it will occur. The point at which it occurs depends on when a capable-enough quantum computer is built. By then, we must already have converted all of our encryption to methods that quantum computers cannot break. Otherwise, the fallout could amount to a societal collapse.

What is a quantum computer?

Left: The “Bloch sphere” is an abstract representation of a qubit’s state, which exists in a superposition of one and zero until it is “measured”, collapsing it to a single value of one or zero. Quantum computers are probabilistic; strictly speaking, each output has a likelihood of being the correct answer. Source: Cf nmr, released under a CC BY-SA 3.0 license. Right: a quantum circuit diagram. Like classical computers, quantum computers use logic gates to perform calculations on input. Quantum computers can use different types of logic gates than classical computers. Source: Omnissiahs Hierophant, released under a CC BY-SA 4.0 license.

A quantum computer is a machine that exploits the unique quirks of quantum mechanics (namely superposition and entanglement) to solve problems. Quantum computers use the fundamental particles that make up the universe, such as electrons and photons (or quasi-particles), to find answers to problems. Rather than using bits that each represent a single one or zero, they use qubits (quantum bits), which represent a probabilistic superposition of one and zero. By exploiting superposition together with entanglement (the correlation of one qubit’s state with another’s), quantum computers can perform different types of calculations, including some that are virtually impossible to carry out on a contemporary computer, retronymically called a “classical computer”. (Technically, any computable problem can be solved on a classical computer, or any Turing-complete machine, but the time required can be so enormous that solving it is, in practice, impossible.)
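To make the measurement idea concrete, here is a minimal classical sketch (in Python, using NumPy) that samples measurement outcomes of a single qubit prepared in an equal superposition. It only illustrates the statistics described above; a classical simulation like this is not how a quantum computer actually computes.

```python
import numpy as np

# Toy classical simulation of measuring one qubit (illustration only --
# a real quantum computer does not work by sampling on a classical machine).

# A qubit state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1.  Here: an equal superposition of 0 and 1.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)

probabilities = [abs(alpha) ** 2, abs(beta) ** 2]  # Born rule

# "Measuring" collapses the state to 0 or 1 with those probabilities.
rng = np.random.default_rng()
samples = rng.choice([0, 1], size=1000, p=probabilities)

print("fraction of 1s measured:", samples.mean())  # roughly 0.5
```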

This is problematic for us, since modern internet communications rely on exactly these kinds of hard problems to encrypt information: specifically, problems involving integer factorization, elliptic curves, and discrete logarithms.

Whilst quantum computing was initially thought by some scientists to be impossible after it was theorized in the early 1980s, the first demonstration took place at the University of California, Berkeley in 1998. Currently, companies such as IBM, Google, and Honeywell have working quantum computers with hundreds of qubits, and governments such as those of the United States, Russia, and China have working ones as well. So far, these computers are limited both by the number of qubits and by short coherence times (the amount of time the qubits can remain isolated from interactions with the environment). However, as soon as one quantum computer with sufficient computing power is developed, information sent over the internet can no longer be trusted to be secure. Before that happens, we must switch all encryption methods on the internet to ones that are quantum-proof (based on problems that cannot be reasonably solved by either classical or quantum computers).

What is encryption?

Encryption is built upon information theory. Cryptographic algorithms convert a string into an apparently random string that cannot be decoded without knowing the key. Image source: Johannes Landin, released under a CC BY-SA 3.0 license.

Encryption is a method that takes an encoded representation of a message and converts it into a different, apparently random representation. Anyone who sees the encrypted representation cannot make out its meaning and has no way of determining the original unless they have the encryption key. This allows us to send messages over the internet securely: nobody can intercept a message and know what it means. It is used for all modern internet communications. One use is internet banking. If the encryption method used by a bank were to become insecure, then the money in that bank would become insecure. It would be like a bank building where people’s cash was stored in a safe whose combination everyone knew. This is why we must begin converting all encryption methods to ones that cannot be broken by quantum computers.
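As a toy illustration of the idea, the sketch below XORs a message with a random shared key: the ciphertext looks like noise to anyone without the key, and the same key recovers the original. This is purely illustrative; real systems use vetted algorithms such as AES inside protocols like TLS, not hand-rolled XOR.

```python
import secrets

# Toy illustration of symmetric encryption (a one-time-pad-style XOR).
# It only demonstrates that ciphertext is unreadable without the key;
# do not use anything like this for real security.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"transfer $100 to account 4821"
key = secrets.token_bytes(len(message))  # shared secret, same length as message

ciphertext = xor_bytes(message, key)     # looks like random bytes
recovered = xor_bytes(ciphertext, key)   # XOR with the same key undoes it

print(ciphertext.hex())
print(recovered)                         # b'transfer $100 to account 4821'
```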

The most common form of public-key encryption on the internet is based on prime number factorization, where information can only be decrypted by knowing the prime factors of a very large number. For very large numbers, there is no known method for determining the prime factors in a reasonable amount of time (and it is generally thought, though not proven, that no efficient algorithm for this problem exists). Determining the prime factors with a classical computer therefore essentially requires a “guess-and-check” approach (there are cleverer algorithms, but they all take dramatically longer as the numbers grow; more technically, the best known algorithm runs in “sub-exponential” time). For a small number, the prime factors can be found quickly, but as the numbers get larger, the time required grows much faster than linearly; for any problem where this is the case, choosing a large enough instance guarantees virtual unsolvability on a classical computer. We can therefore exploit the difficulty of this problem to encrypt information: select two prime numbers, multiply them, and know that, as long as the primes are large enough, they cannot reliably be recovered from the product within the lifetime of the universe.
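Here is a toy sketch of that idea using the textbook primes p = 61 and q = 53; real keys use primes hundreds of digits long, which is what makes the factoring step infeasible. It is a bare-bones RSA-style construction, simplified for illustration.

```python
from math import gcd

# Toy RSA-style example with tiny primes, to show how the scheme leans on
# the difficulty of factoring.  Real keys use primes hundreds of digits long.

p, q = 61, 53               # two (secret) primes
n = p * q                   # public modulus, 3233
phi = (p - 1) * (q - 1)     # Euler's totient of n
e = 17                      # public exponent
assert gcd(e, phi) == 1     # e must be coprime with phi
d = pow(e, -1, phi)         # private exponent (modular inverse of e)

message = 65
ciphertext = pow(message, e, n)     # encrypt with the public key (e, n)
decrypted = pow(ciphertext, d, n)   # decrypt with the private key d
assert decrypted == message

# An attacker who can factor n into p and q can recompute phi and d,
# which is exactly what a large quantum computer would make feasible.
```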

Quantum computers work differently, and they can solve certain types of problems in a much more reasonable amount of time. For the factorization problem, there is a quantum algorithm, called Shor’s algorithm, that can solve it efficiently: the time it takes grows only polynomially in the number of digits of the product, rather than sub-exponentially in its size (there is also a quantum variant of the elliptic-curve method, called GEECM, which speeds up finding smaller factors). This invalidates the cryptographic method, because anyone who wants to decode the message can run the algorithm on a quantum computer to find the factors.
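The sketch below shows the number-theoretic core that Shor’s algorithm relies on, with the quantum period-finding step replaced by a brute-force classical search (that search is exactly the part a quantum computer performs exponentially faster). It factors the same toy modulus from the previous example.

```python
from math import gcd
from random import randrange

# Sketch of the classical post-processing in Shor's algorithm.  The quantum
# speedup lies entirely in find_order(), which is done here by brute force.

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n) -- brute force stand-in."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_factor(n: int) -> int:
    """Return a nontrivial factor of n (n odd, composite, not a prime power)."""
    while True:
        a = randrange(2, n)
        d = gcd(a, n)
        if d > 1:
            return d                 # lucky guess: a already shares a factor
        r = find_order(a, n)
        if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
            continue                 # unusable a, try another one
        return gcd(pow(a, r // 2, n) - 1, n)

print(shor_factor(3233))             # prints 61 or 53
```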

How long do we have?

Quantum computing is still a nascent technology. There is still a long way to go until quantum computers are useful in any capacity. However, the technology has grown quickly over the past decade. Estimates are that a quantum computer will need at least a few thousand logical qubits (error-corrected qubits that can actually be used for calculations; many more physical qubits are required, because most are consumed by error correction) and much longer coherence times than are currently available before it can break encryption.

Moore’s law describes the empirical observation that computing power roughly doubles every two years. Plotted here are the number of qubits of the largest quantum computer demonstrated each year. However, quantum computing power is more complex than a simple qubit count; coherence time and how the qubits can interact also matter. Image created by author, released into the public domain.

The problem is well known, and several organizations, including RSA Security, Microsoft, NIST, and the United States government, have been discussing solutions and implementation strategies. A memorandum from the Biden Administration last month acknowledged the need to move to quantum-resistant cryptography.

The Cloud Security Alliance has launched a countdown-to-Y2Q clock. They estimate roughly 8 years until quantum computers can break modern cryptography, though this is only an estimate. Given the time a transition would take and the uncertainty around the date, it is better to start sooner rather than later. The consequences would be much worse than Y2K.
