Quantum Computing’s Implications on Current and Future Cryptographic Implementations
Authors: Andrew Emmel and William Liu
Quantum computing is a major breakthrough in computer science and has experienced tremendous growth in the past five years, as evidenced by heavy investment from companies such as IBM and Google. By harnessing this technology, tasks such as molecular modeling, simulating quantum physics, and even classically challenging problems like the traveling salesman problem could see major speedups. However, one must also consider the potential security implications of this new field. With quantum computing, our current cryptosystems will need to change drastically to remain safe. In this blog post, we will look into which classical cryptographic implementations could be trivialized by quantum algorithms, and at potential new cryptographic solutions that will protect information in the post-quantum era.
Unsafe Classical Cryptosystems
One of the most commonly used schemes for asymmetric cryptography is RSA. To summarize the setup: choose two large primes p and q, compute n = pq and φ(n) = (p − 1)(q − 1), pick a public exponent e coprime to φ(n), and derive the private exponent d ≡ e⁻¹ (mod φ(n)). The public key is (n, e) and the private key is d; encryption is c = mᵉ mod n and decryption is m = cᵈ mod n.
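As a concrete illustration, here is a toy RSA round trip in Python; the primes and exponents are textbook-sized and purely illustrative (real deployments use 2048-bit moduli and padding schemes such as OAEP):

```python
# A toy RSA setup with tiny primes; all numbers here are illustrative only.

def rsa_keygen(p, q, e=65537):
    """Generate a toy RSA key pair from primes p and q."""
    n = p * q
    phi = (p - 1) * (q - 1)        # Euler's totient of n
    d = pow(e, -1, phi)            # private exponent: e*d ≡ 1 (mod phi), Python 3.8+
    return (n, e), d

def encrypt(m, pub):
    n, e = pub
    return pow(m, e, n)            # c = m^e mod n

def decrypt(c, d, pub):
    n, _ = pub
    return pow(c, d, n)            # m = c^d mod n

pub, priv = rsa_keygen(61, 53, e=17)   # n = 3233, a classic textbook example
c = encrypt(65, pub)
assert decrypt(c, priv, pub) == 65
```

Recovering `priv` from `pub` requires factoring n = 3233 into 61 × 53, trivial here but infeasible classically at real key sizes.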
The main difficulty here is that integer factorization cannot (as far as we know) be solved in polynomial time; otherwise, it would become quite easy to recover the private key from public knowledge. Currently, the best known classical algorithm is the general number field sieve, which runs in sub-exponential (but still super-polynomial) time.
In quantum computing, this situation changes with Shor's Algorithm, which factors integers in O(log(N)² log(log(N)) log(log(log(N)))) time. The algorithm has both a classical and a quantum element. The classical component is some number theory that reduces factoring to finding the period of a function, which the quantum component can compute quickly, using a number of qubits only roughly proportional to log₂(N) (the bit length of the number), thanks to the superpositional nature of quantum computing.
For the classical component: pick a random a < N that is coprime to N, and find the period r of the function f(x) = aˣ mod N. If r is even and a^(r/2) ≢ −1 (mod N), then gcd(a^(r/2) − 1, N) and gcd(a^(r/2) + 1, N) yield nontrivial factors of N; otherwise, try another a.
For the quantum component: prepare a superposition over all exponents x, compute aˣ mod N into a second register, and apply the quantum Fourier transform; measuring then reveals the period r through interference.
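The classical reduction above can be sketched in Python, with a brute-force period finder standing in for the quantum subroutine (the brute force takes exponential time; only a quantum computer makes this step fast):

```python
from math import gcd
from random import randrange

def find_period(a, N):
    """Stand-in for the quantum subroutine: find the order r of a mod N
    by brute force (exponential classically; fast on a quantum computer)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N):
    """Classical wrapper of Shor's algorithm: reduce factoring to period finding."""
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d                 # lucky guess already shares a factor with N
        r = find_period(a, N)
        if r % 2 == 1:
            continue                 # need an even period
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue                 # a^(r/2) ≡ -1 (mod N) gives only trivial factors
        return gcd(y - 1, N)         # nontrivial factor of N

f = shor_classical(15)
assert f in (3, 5)
```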
This efficient period finding is the key to breaking the difficulty of integer factorization, which in turn breaks RSA, with serious implications for security given how widespread asymmetric schemes are.
Other commonly used asymmetric methods that are in danger include Diffie-Hellman (DH) and Elliptic Curve Cryptography (ECC). These two methods do not rely on the classical difficulty of integer factorization, but rather on the difficulty of solving the discrete logarithm problem (DLP). To break DH, the goal is to find x given p, g, and g^x mod p. To break ECC (which actually relies on the Elliptic Curve Discrete Logarithm Problem [ECDLP]), the goal is to find x given points P and Q = xP on a given elliptic curve over a given modulus. Both of these forms of DLP, however, can be solved by a variant of Shor's algorithm using the same idea of discovering periodicity.
One way of solving DLP with Shor is the following: define f(a, b) = gᵃ · yᵇ mod p, where y = g^x is the public value. Since y = g^x, the function satisfies f(a + x, b − 1) = f(a, b), so it is periodic under a hidden shift that encodes x, and the quantum Fourier transform can recover that shift.
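We can verify this hidden periodicity classically on toy numbers (the prime, generator, and secret exponent below are made up for illustration):

```python
# The quantum DLP attack exploits that f(a, b) = g^a * y^b mod p is periodic:
# since y = g^x, f(a + x, b - 1) = f(a, b), so shifting (a, b) by (x, -1)
# leaves f unchanged. A quantum Fourier transform recovers this hidden shift;
# here we just verify the periodicity classically on toy parameters.

p, g = 23, 5          # small prime and a generator mod p (toy values)
x = 9                 # the secret exponent an attacker wants
y = pow(g, x, p)      # the public value g^x mod p

def f(a, b):
    return (pow(g, a, p) * pow(y, b, p)) % p

# f is invariant under the shift (a, b) -> (a + x, b - 1)
for a in range(5):
    for b in range(1, 5):
        assert f(a, b) == f(a + x, b - 1)
```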
Because of how quantum computing can easily calculate the periodicity of functions, these common cryptosystems will be broken. However, contrary to a common fear, quantum computing does not break all of classical cryptography. Symmetric schemes such as AES and various hashing algorithms are all generally safe, or could at the very least be strengthened.
“Safer” Classical Cryptosystems
A classical approach to breaking these remaining systems would involve brute force, which becomes very infeasible with longer keys or passwords. The quantum algorithm known as Grover's Algorithm speeds up a search over N unsorted items from O(N) to O(sqrt(N)). It operates by first initializing a set of qubits in uniform superposition (using the Hadamard operator), and then applying an oracle operator over them; this oracle operator reflects the desired state in the opposite direction (like marking a winner). Then, we repeatedly apply an amplitude amplification operator, which is 2|ψ⟩⟨ψ| − I, where |ψ⟩ is the uniform superposition.
This reflects states across the overall average amplitude. After about sqrt(N) steps, all the other amplitudes will become close to 0, allowing us to easily find the desired state.
Illustration: qubit amplitudes before and after the oracle's phase flip.
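Since Grover's Algorithm only manipulates amplitudes, the whole procedure can be simulated classically for small N. This NumPy sketch searches 16 items for one marked index (the size and target are arbitrary toy choices):

```python
import numpy as np

# Simulating Grover's search over N = 16 items, looking for index 11.
# The state vector holds one amplitude per item; the oracle flips the sign
# of the marked item and the diffusion step reflects every amplitude about
# the mean, which is the amplitude amplification described above.

N, target = 16, 11
state = np.full(N, 1 / np.sqrt(N))      # uniform superposition (Hadamard on all qubits)

iterations = int(round(np.pi / 4 * np.sqrt(N)))   # ~sqrt(N) Grover iterations
for _ in range(iterations):
    state[target] *= -1                 # oracle: mark the winner with a phase flip
    state = 2 * state.mean() - state    # diffusion: reflect about the average

probabilities = state ** 2              # measurement probabilities
assert probabilities.argmax() == target
```

After only 3 iterations (versus 8 expected classically on average), the marked item's probability is above 95%.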
By applying this quadratic speedup to brute-force search, any currently vulnerable symmetric algorithms can be remediated simply by using longer keys. Double the key length, and Grover's Algorithm's runtime against it matches what a classical brute-force attack achieves today. This is a far milder problem than Shor's, which turns a super-polynomial-time task into a polynomial-time one. Regardless, readers can rest easy for now, as none of these attacks are feasible until we have far more powerful quantum computers with many more qubits.
Although quantum computers do indeed break a lot of our existing cryptographic methods, there are still limitations to quantum computing that allow some incredibly difficult math problems to remain difficult. This ushers in the era of post-quantum cryptography, which differs from quantum cryptography in that it focuses on creating algorithms that run on classical computers yet cannot be defeated by a quantum computer. With this in mind, the National Institute of Standards and Technology (NIST) has already started a process of evaluating algorithms to be standardized should quantum computers become capable of breaking today's encryption and signatures. This process has reached round 3, where the candidates have been narrowed down to 15 methods, the most prevalent types being lattice-based and code-based.
Lattice-Based Cryptography
Lattice-based cryptography methods are based on the mathematical concept of a lattice, an n-dimensional grid of regularly spaced points stretching out to infinity. Think of a piece of dot-grid paper, but infinitely big, and you've got a lattice!
Since lattices are infinite, they can't be represented directly in computer memory. The solution, since lattices are regular in their spacing, is to represent a lattice as a set of n n-dimensional basis vectors. These basis vectors, combined in integer multiples, can reach any point on the original lattice. One thing to note is that lattices aren't limited to one unique set of basis vectors; a single lattice can be represented using many different bases. Now, for lattice-based cryptography methods to exist, there has to be some fundamentally hard mathematical problem, and for lattices the most famous is the Shortest Vector Problem (SVP).
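As a quick sanity check of this non-uniqueness, the following sketch (with made-up basis vectors) shows two very different-looking bases generating the same 2-D lattice:

```python
from itertools import product

# Two different-looking bases for the same 2-D lattice (made-up numbers).
# basis_b comes from basis_a via an invertible integer change of basis:
# (11, 8) = 5*(1, 2) + 2*(3, -1) and (17, 13) = 8*(1, 2) + 3*(3, -1),
# so both generate exactly the same infinite set of points.
basis_a = [(1, 2), (3, -1)]
basis_b = [(11, 8), (17, 13)]

def points(basis, r=60):
    """All integer combinations c1*b1 + c2*b2 with |c1|, |c2| <= r."""
    return {(c1 * basis[0][0] + c2 * basis[1][0],
             c1 * basis[0][1] + c2 * basis[1][1])
            for c1, c2 in product(range(-r, r + 1), repeat=2)}

def window(pts, w=10):
    """Restrict to a small box so the finite enumerations are comparable."""
    return {p for p in pts if max(abs(p[0]), abs(p[1])) <= w}

assert window(points(basis_a)) == window(points(basis_b))
```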
Illustration with blue signifying basis vectors and red for the shortest vector achieved by those basis vectors
SVP starts with a set of long basis vectors (i.e. basis vectors that point far away from the origin), and asks for some integer combination of them that is really short: a lattice point as close as possible to, but not on, the origin.
Example of an SVP solution with a set of long basis vectors.
This problem scales in hardness with the dimension of the lattice: with n dimensions, the best known attacks take roughly 2ⁿ times longer, and no quantum algorithm is known to crack it (as of yet…). This makes it very useful for cryptography. For the sake of simplicity, and to stoke the flames of ingenuity, let's describe a naive approach to using SVP (or rather, its very similar derivative, the Closest Vector Problem) for symmetric-key encryption.
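To get a feel for why this is only tractable in low dimensions, here is a brute-force SVP search over a toy 2-D basis (the vectors are made up; the same search in hundreds of dimensions would be hopeless):

```python
from itertools import product

# Brute-force SVP in 2 dimensions: given a "bad" (long, skewed) pair of
# basis vectors (made-up numbers), try small integer combinations and keep
# the shortest nonzero lattice point found. This only works because n = 2;
# the search space blows up exponentially as the dimension grows.
b1, b2 = (11, 8), (17, 13)

best, best_len = None, float("inf")
for c1, c2 in product(range(-50, 51), repeat=2):
    if c1 == 0 and c2 == 0:
        continue                         # the zero vector doesn't count
    v = (c1 * b1[0] + c2 * b2[0], c1 * b1[1] + c2 * b2[1])
    length = (v[0] ** 2 + v[1] ** 2) ** 0.5
    if length < best_len:
        best, best_len = v, length

print(best)      # a short vector, ±(1, 2), hiding inside the long basis
```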
Alice wants to send a secret message to Bob. First, she publicly shares a lattice in the form of some terrible (long, non-orthogonal) basis vectors B for some lattice L, but privately, she and Bob know a preferable basis b (short, orthogonal). Alice then encodes her message as an integer vector v. She multiplies v by B and randomly perturbs the resulting point just enough that an adversary knowing only basis B cannot round it back to the original lattice point, while Bob, using the good basis b, can recover the closest lattice point, and hence the original message. Alice can then share this result with Bob, knowing that Bob can decrypt it without fear of an adversary learning its contents.
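A minimal sketch of this idea, in the style of the GGH cryptosystem, might look like the following; the bases, message, and noise are all made-up toy values:

```python
import numpy as np

# Toy lattice encryption in 2 dimensions. R is the private "good" basis,
# B = U @ R the public "bad" basis; U is unimodular (det = 1), so both
# matrices generate the same lattice. All numbers are illustrative.
R = np.array([[4.0, 1.0], [1.0, 5.0]])   # private: short, nearly orthogonal
U = np.array([[2.0, 1.0], [3.0, 2.0]])   # unimodular scrambling matrix
B = U @ R                                # public: long, skewed basis

def encrypt(m, noise):
    # Encode the message as coefficients of the bad basis, then perturb the
    # lattice point slightly so only the good basis can round it back.
    return m @ B + noise

def decrypt(c):
    # Babai rounding with the good basis: snap to the nearest lattice point,
    # then convert its good-basis coordinates back to message coordinates.
    y = np.round(c @ np.linalg.inv(R))                  # nearest point, good coords
    return np.round(y @ np.linalg.inv(U)).astype(int)   # back to message coords

m = np.array([3.0, -2.0])
c = encrypt(m, noise=np.array([0.3, -0.4]))
assert (decrypt(c) == m).all()
```

An adversary rounding with the bad basis B instead of R would usually snap to the wrong lattice point, which is exactly the asymmetry the scheme relies on.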
The only fault in this is if the adversary has a really good lattice reduction algorithm, something that takes terrible basis vectors and turns them into something more preferable. However, lattice reduction scales exponentially with the dimension of the lattice and remains unaffected by quantum advancements. One additional note: lattice-based cryptography enjoys "worst-case hardness," meaning that solving a randomly chosen instance of a lattice problem is provably about as hard as solving the hardest instance. Compare this with factorization, which is only hard on average, since you have the possibility of being handed a really easy number to factor.
Code Based Cryptography
Code-based cryptography, the other most prominent type of post-quantum cryptography, instead utilizes linear error-correcting codes (EC codes), like the Hamming code, to encrypt messages as part of an asymmetric key scheme. This basis in linear codes makes decoding without the key difficult (which gives it quantum resistance) while keeping encryption fast compared to RSA. The most famous method, thanks to its long history and its unique use of randomness as a foundation, is the McEliece cryptosystem, named after Robert McEliece, who first described it in 1978.
Breakdown of the McEliece cryptosystem
With the McEliece method, there are three initial public parameters: n, k, and t. n and k determine the sizes of the matrices used, and t is the number of errors corrected by the chosen EC code. These parameters also determine the size of the public key, which is k × (n − k) bits long. For this to be safe against quantum computers, the recommended parameters are n = 6960, k = 5412, and t = 119. To generate keys under this method:
- Alice must first choose an [n, k] linear code C that can correct t errors; this EC code must also possess an efficient decoding algorithm, which we'll call A, that she keeps secret.
- With this, Alice generates G, a [k, n] generator matrix for C, and keeps it secret.
- Alice then selects a random [k, k] invertible binary matrix S (i.e. there exists some matrix S⁻¹ with which S multiplies to the identity matrix)
- As well as a random [n, n] binary permutation matrix P (which contains exactly one 1 in each row and each column, and zeroes everywhere else).
- Alice then computes the [k, n] matrix Ĝ = SGP.
- With these matrices and the decoding algorithm A, Alice now has her public key (Ĝ, t) and her private key (S, P, A).
For example, let’s say Bob wants to send her an encrypted message:
- He must first encode his message, m, as a binary string of length k
- Then, compute the vector c′ = mĜ
- After that, he generates a random error vector z of length n and weight t; that is, a zero vector with exactly t 1's placed at random positions.
- By computing c = c′ + z, Bob gets the encrypted message he can send to Alice.
For Alice to decode it:
- Multiply the encrypted message, c, with the inverse of her permutation matrix P to get ĉ = cP⁻¹
- Using her decoding algorithm A, turn ĉ into m̂
- She can then recover the original message by multiplying m̂ with the inverse of S: m = m̂S⁻¹
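Putting the whole key generation, encryption, and decryption flow together, here is a toy McEliece instance built on the [7, 4] Hamming code (t = 1); the scrambler S and permutation P are made-up toy values, and real deployments use binary Goppa codes with n in the thousands:

```python
import numpy as np

# Toy McEliece over the [7, 4] Hamming code (t = 1), following the steps
# above. Everything here is deliberately tiny and purely illustrative.

G = np.array([[1,0,0,0,1,1,0],      # generator matrix of Hamming(7,4)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],      # parity-check matrix (columns = syndromes)
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def hamming_decode(c):
    """Decoding algorithm A: correct up to t = 1 flipped bit."""
    s = H @ c % 2                       # syndrome of the received word
    if s.any():                         # nonzero syndrome -> locate the error
        err = next(i for i in range(7) if (H[:, i] == s).all())
        c = c.copy(); c[err] ^= 1
    return c[:4]                        # systematic code: message = first 4 bits

S = np.array([[1,1,0,0],[0,1,1,0],[0,0,1,1],[0,0,0,1]])       # invertible scrambler
S_inv = np.array([[1,1,1,1],[0,1,1,1],[0,0,1,1],[0,0,0,1]])   # its inverse mod 2
perm = np.array([6,4,0,5,2,1,3])                              # permutation P (toy)

G_hat = (S @ G % 2)[:, perm]            # public key: Ĝ = SGP, along with t = 1

# Bob encrypts: c = mĜ + z, with z a random weight-t error vector
m = np.array([1,0,1,1])
z = np.zeros(7, dtype=int); z[2] = 1    # one flipped bit (t = 1)
c = (m @ G_hat + z) % 2

# Alice decrypts: undo P, run the decoder A, then undo S
c_hat = np.empty(7, dtype=int); c_hat[perm] = c    # ĉ = cP⁻¹
m_hat = hamming_decode(c_hat)                      # recovers m̂ = mS
assert ((m_hat @ S_inv) % 2 == m).all()            # m = m̂S⁻¹
```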
A key pillar of the McEliece method is choosing the right EC code, and fortunately, the random binary Goppa codes used in McEliece's original paper have withstood intense scrutiny for over 40 years and are recognized as fairly quantum-resilient. These binary Goppa codes, though complicated in their construction, have an incredibly high error-correction capacity, which lets the legitimate decoder strip out the deliberately introduced errors upon decryption. The security of the McEliece cryptosystem relies on the binary Goppa codes and the randomness inherent in the scheme, and no efficient quantum attack on it is known, even though we've had more than four decades to find one.
As quantum computing grows in popularity, the security risks and benefits it brings will become an important issue. As this brief post demonstrates, quantum algorithms such as Shor's and Grover's speed up classically hard computing tasks, making cryptosystems such as RSA, DH, and ECC vulnerable and requiring others, such as AES and various hash functions, to increase their key or output lengths. To adapt to this changing field, there has been significant development and mathematical research into post-quantum cryptosystems that will resist quantum attacks. As discussed above, such systems include lattice-based and code-based cryptography, and there are several other resistant mathematical approaches such as multivariate and hash-based schemes. Perhaps the most famous example on the quantum side is the Quantum Key Distribution scheme known as BB84 (as discussed in lecture), which can detect tampering through a statistical analysis of the correctly guessed bases between the communicators, and which prevents copying via the no-cloning theorem. However, even this scheme has a vulnerability in the form of the photon number splitting attack, where an eavesdropper can split away extra photons emitted during the key exchange and thus defeat the scheme once the sequence of bases is revealed. Overall, the future of cryptosystems, whether in implementation or security, will be fascinating to watch as the field of quantum computing continues to develop.