Lately, the question of how close we are to quantum computing (QC) becoming mainstream has been gaining momentum. Clearly, as we get closer, it is reasonable and even urgent to strengthen our systems’ resistance to post-quantum attacks, some of which, as we can now envision, would leverage a future quantum computer’s capability to decrypt data collected today. Against this backdrop, lattice-based cryptography has been in the news a lot. So what is it, and what is its potential for our post-quantum future?
Cryptography and Security Proofs
Cryptography comes in different flavors. For the longest time, at least as far back as Julius Caesar, encryption schemes and other cryptographic tools mostly followed ad hoc designs. Security was mainly based on intuition and heuristics. However, in the late 1970s and early 1980s, a new, more principled and rigorous methodology emerged.
The key idea is the following. The field of mathematics is full of very difficult problems which humanity has collectively failed to solve despite many decades, centuries or even millennia of trying. Now imagine that we could build an encryption scheme in a way that breaking its security requires solving one of these very hard problems. This would turn all the seemingly wasted efforts devoted to that problem by mathematicians present and past into evidence of the security of our newly built encryption scheme. In fact, all those failed attempts can now be seen as attempts to break the encryption scheme’s security.
Of course, this is easier said than done. Indeed, one of the most difficult tasks here is to find a suitable unsolved mathematical problem. On the one hand, to give us the desired confidence in its difficulty, the problem should be as widely known and well studied as possible. On the other hand, it must be flexible enough that we can turn it into something useful.
Factoring, Discrete Logarithms & Quantum Computers
It turns out that a great source of difficult problems is a branch of mathematics called number theory. For one thing, it is amongst the oldest branches of mathematics. For example, Euclid, Pythagoras and other ancient Greeks were avid number theorists; even the Babylonians worked on some basic number theory.
Despite its age, number theory is full of very simple and highly flexible (and thus easy to work with), yet still unsolved problems. In fact, a small collection of closely related problems has played a crucial role in modern public key cryptography. Probably the simplest of these problems that can be described in basic school math terms is the Factoring problem.
In the last four decades, many widely used public key cryptographic schemes have been designed based on the difficulty of factoring and other similar problems. These include RSA, the Diffie-Hellman key exchange, ECDH, ECDSA, Ed448, and Ed25519. These schemes are now used by most of our day-to-day higher-level security applications, including Wickr, SSH, TLS, HTTPS, Bitcoin, PGP, and VPNs.
Each of these basic schemes comes equipped with a “security proof”: a formal mathematical proof showing that breaking the scheme’s security is at least as hard as solving one of the fundamental math problems, like factoring. Such proofs play a crucial role in helping us build the necessary confidence in the security of these most basic of cryptographic building blocks. As the security of much of our digital infrastructure rests, among other things, on the watertight security of those building blocks, confidence in their proofs is vital.
But here’s the problem: if someone ever builds a large-scale quantum computer, this entire narrative collapses. Although we haven’t quite built one yet, we already know how quantum computing could be used. What’s more, one of the first new things we realized we could do with QC is to solve the factoring problem using a technique called Shor’s Algorithm. In fact, pretty much every other number-theoretic problem we have been using over the last 40 years as a foundation for our security turns out to be easy to solve given a large-scale quantum computer, usually using some variant of Shor’s Algorithm. In other words, if an adversary has access to such a device, the security proofs become completely vacuous. After all, what’s the point in proving that a scheme is at least as secure as some mathematical problem if that problem is now easy to solve?
Now you might be thinking: “wait, just because the proof is vacuous doesn’t mean that there isn’t some other new proof that still shows the schemes to be secure.” Alas, in pretty much all cases, it is actually quite easy to see just how a quantum computer could really be used to break the security of the cryptographic scheme. In other words, there never will be a security proof, because these schemes simply aren’t secure against attackers with quantum computers.
Here is what this boils down to: what the world needs now is a new crop of mathematical problems and matching cryptographic schemes which are hard to solve (and thus secure) even in the face of quantum computing.
Lattices and Short Vectors
Fortunately, over the last decade, cryptographers and mathematicians have been hard at work finding such problems and turning them into useful cryptographic schemes. In fact, there are several different families of math problems currently being investigated with each taken from a different branch of mathematics.
For the purposes of this post, I will focus on the problem family used in what is usually referred to as lattice-based cryptography. The name comes from the fact that crypto schemes in this area come equipped with security proofs relating their security to hard math problems about lattices. Let me describe the most important problems in lattice-based crypto; along the way, I’ll also introduce a couple of useful terms.
Lattices: So what are lattices? A lattice can basically be thought of as any regularly spaced grid of points stretching out to infinity. For example, here are two different 2-dimensional lattices.
Vector: A vector is nothing more than a fancy name for a point, a tuple of numbers called the coordinates of the vector. So (2,3) is a particular 2-dimensional vector as it has 2 coordinates, and a lattice is a collection of evenly spaced vectors. A special vector of interest is the origin which is the vector with all coordinates set to 0. For example, in 3 dimensions, the origin is (0, 0, 0). We say that a vector is long if it is far away from the origin. Conversely, a vector is short if it is close to the origin.
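The notions of “long” and “short” here are just Euclidean distance from the origin. A minimal sketch in Python (the helper name `length` is my own, used only for illustration):

```python
import math

def length(v):
    """Euclidean distance from the origin to the point v."""
    return math.sqrt(sum(x * x for x in v))

print(length((0, 0, 0)))   # the origin: 0.0
print(length((2, 3, 6)))   # sqrt(4 + 9 + 36) = 7.0
```

A vector like (2, 3, 6) is then “longer” than (2, 0, 0) simply because it sits further from the origin.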
Basis: Lattices are infinitely large objects, but computers only have a finite amount of memory to work with. So we will need a succinct way to represent lattices if we are going to use them in cryptography. For this, we use what is called a basis of a lattice. A basis is a small collection of vectors that can be used to reproduce any point in the grid that forms the lattice.
Let’s look at the case of a 2-dimensional lattice, a grid of points on a flat surface like a piece of paper. First, choose two points, neither of them the origin, that don’t lie on a single line going through the origin. For example, we could choose (2,0) and (0,2). Here is how these points can be used to generate a third point:
- First we choose two whole numbers; say, 3 and -1.
- We multiply the coordinates of the first point by 3 to get the point (6,0) and the coordinates of the second point by -1 to get (0,-2).
- Now we add the results together to get our new point (6,-2).
Using this method, we can use (2,0) and (0,2) to generate an entire grid of evenly spaced points, namely all those with coordinates (x,y) such that both x and y are even numbers where we also count 0 as an even number. In other words, the basis consisting of vectors (2,0) and (0,2) generates the lattice points with even coordinates. The idea is that by choosing a basis we have actually chosen an entire lattice, namely the one whose points are generated by the vectors in the basis. Crucially, in contrast to an infinite grid of points, a basis is a simple finite object which we can represent in a computer’s memory.
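The recipe above can be sketched in a few lines of Python (`combine` is a hypothetical helper name of my own, not standard terminology):

```python
def combine(basis, coeffs):
    """Form the integer combination coeffs[0]*basis[0] + coeffs[1]*basis[1] + ...
    of the basis vectors, coordinate by coordinate."""
    dim = len(basis[0])
    return tuple(sum(c * v[i] for c, v in zip(coeffs, basis))
                 for i in range(dim))

basis = [(2, 0), (0, 2)]
# The worked example from the text: 3*(2,0) + (-1)*(0,2) = (6,-2).
print(combine(basis, (3, -1)))  # (6, -2)
```

Running `combine` over all pairs of whole-number coefficients traces out exactly the grid of points with even coordinates.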
An important thing to notice is that any given lattice doesn’t have just one basis. In fact, it has many bases. For example, instead of using (2,0) and (0,2) above, we could have just as well chosen (-2,0) and (0,-2), or even (4,2) and (2,2). In each case, the vectors end up generating the exact same grid of points.
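We can check this claim directly for the bases above by enumerating the points each one generates with small coefficients and comparing them inside a fixed window around the origin (a rough sanity check rather than a proof; the coefficient bound is chosen large enough to fill the window):

```python
def span(basis, bound):
    """All points a*basis[0] + b*basis[1] with whole-number coefficients
    a, b in the range [-bound, bound]."""
    return {(a * basis[0][0] + b * basis[1][0],
             a * basis[0][1] + b * basis[1][1])
            for a in range(-bound, bound + 1)
            for b in range(-bound, bound + 1)}

# Compare the generated grids inside a small window around the origin.
window = {(x, y) for x in range(-4, 5) for y in range(-4, 5)}
same = span([(2, 0), (0, 2)], 6) & window == span([(4, 2), (2, 2)], 6) & window
print(same)  # True: both bases generate the even-coordinate grid
```

The seemingly different basis (4,2), (2,2) works because each original basis vector is itself reachable: (2,0) = 1·(4,2) − 1·(2,2) and (0,2) = −1·(4,2) + 2·(2,2).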
We say that a basis is short if it consists of short vectors (see the blue basis in Figure 1 below). Conversely, we say that a basis is long if it consists only of long vectors (see the green basis in Figure 2 below). It turns out that short bases are much more useful than long ones when it comes to solving the types of hard lattice problems cryptographers are interested in.
So what would a hard lattice problem look like? Given what we have learned so far, these problems are now very simple to state. For example, here is the “Shortest Vector Problem,” probably the single most important problem in lattice-based crypto: given a (long) basis of a lattice, find a short nonzero vector in that lattice.
At first glance, this might seem like a relatively easy problem to solve. However, notice that the basis we are given consists of long vectors. So it’s not immediately clear how they can be combined to generate a point with only small coordinates, namely a short vector. Moreover, when it comes to cryptography, we’re really talking about lattices in much higher dimensions (e.g. 10,000 instead of just 2 as in our examples above). That means that our points actually have 10,000 coordinates, not 2. So finding a combination of the basis vectors that simultaneously makes all 10,000 generated coordinates small turns out to be quite hard, so much so that we don’t know how to solve the problem quickly even with a quantum computer, let alone a conventional one.
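As a toy illustration (not a practical algorithm), here is a brute-force search for a shortest nonzero vector in 2 dimensions, starting from a long basis for the even-coordinate lattice. The search space grows as (2·bound+1) to the power of the dimension, which is exactly why this approach collapses in the high dimensions cryptography uses:

```python
import math

def shortest_vector(basis, bound=10):
    """Brute-force a shortest nonzero lattice vector by trying every
    integer combination of the two basis vectors with coefficients in
    [-bound, bound]. Only feasible in toy dimensions like 2."""
    best, best_len = None, math.inf
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            if a == 0 and b == 0:
                continue  # skip the origin: we want a nonzero vector
            v = (a * basis[0][0] + b * basis[1][0],
                 a * basis[0][1] + b * basis[1][1])
            if math.hypot(*v) < best_len:
                best, best_len = v, math.hypot(*v)
    return best

# A "long" basis for the even-coordinate lattice from earlier.
long_basis = [(6, 2), (8, 2)]
print(math.hypot(*shortest_vector(long_basis)))  # 2.0, e.g. the vector (2, 0)
```

Note that if we had been handed the short basis (2,0), (0,2) instead, no search would be needed: the shortest basis vector is already an answer. Real lattice-reduction algorithms such as LLL are far more sophisticated than this sketch, and in high dimensions even they fall short.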
Cryptographers also use two more important and very closely related hard lattice problems to base security on.
It’s worth noting that each of these problems is actually quite easy to solve quickly, even without a quantum computer, if the lattice is described using a short basis instead of a long one.
So Why Lattices?
So what is so great about lattice-based cryptography anyway? Well, out of the various candidate families of hard problems, lattice-based crypto offers several important and very attractive features.
First, it is fair to say that these are the most well-understood and widely studied family of hard math problems. Lattices have been studied by mathematicians going back at least as far as the early 1800s, when they were considered by Carl Friedrich Gauss, one of the greatest mathematicians of all time. Of course, Gauss and others had no concept of a quantum computer and so were hardly trying to develop a quantum algorithm to solve the lattice problems we are interested in today. Yet these older mathematical works still serve as sources of deep understanding and powerful insights into what we can and cannot do when it comes to working with lattices. After all, although Euclid had no concept of a modern computer, his roughly 2,300-year-old works on number theory still form the core of some very important algorithms we use today (e.g. the aptly named Euclid’s Algorithm).
Second, lattice problems are proving to be incredibly versatile in terms of the types of cryptographic schemes they allow us to build. In fact, not only can we replace essentially all of our currently endangered schemes, but lattice problems even allow for entirely new classes of extremely powerful cryptographic tools for which we simply have no analogs based on factoring or any other hard mathematical problem, including the competing families of hard problems which we believe are still unsolvable with a quantum computer.
Third, lattice-based cryptographic schemes make up the lion’s share of the scientific publications in the field of so-called “post-quantum” cryptography. That is the field of cryptography concerned with building schemes secure against attackers armed with powerful quantum computers. Indeed, if you look at the entrants to the post-quantum competition run by the US National Institute of Standards and Technology (NIST), which is focused on standardizing new post-quantum secure cryptography, you will notice that the largest family of submissions consists of lattice-based schemes.
Finally, the last property of lattice-based problems worth highlighting is somewhat technical in nature. For many cryptographers, it is perhaps the strongest argument in favor of using lattice problems for cryptography.
To understand this property, we must think about different kinds of assumptions. Stating that the mathematical problems used by cryptographers are hard to solve is a type of assumption, since we have no hard proof of such claims, just a lot of failed attempts at solving them. Moreover, if we are going to base our security on an assumption, then it makes sense to use the weakest, hence most plausible, assumption possible. For example, a system that is secure based on the assumption that at least one participant is honest would be preferable to a different system which is secure only if we assume that a majority of participants are honest. Assuming there is at least one honest participant is a weaker assumption (i.e. easier to satisfy or more plausible to hold) than assuming a majority of participants are honest.
Normally, in crypto, we assume that our underlying math problems are hard to solve on average. That means we assume that if you are given a randomly chosen instance of the problem, then it is highly probable that it will be hard to solve. In the case of factoring, for example, the assumption is that it is hard to factor an integer n = p × q if p and q were chosen at random. This type of average-case hardness stands in contrast to what is called worst-case hardness. A problem is worst-case hard if at least one instance of the problem is hard to solve.
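To make the factoring example concrete, here is a toy sketch: build an instance n = p × q and recover the factors by trial division. The primes here are tiny and fixed purely for illustration; the average-case assumption concerns large random primes, for which this kind of search becomes hopeless:

```python
def factor(n):
    """Recover a factorization n = p * q by trial division. Takes roughly
    sqrt(n) steps, which is utterly impractical for the huge numbers
    (thousands of bits) used in real factoring-based cryptography."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime

p, q = 101, 103   # toy primes; real instances use huge random primes
n = p * q         # the public instance: 10403
print(factor(n))  # (101, 103)
```

The hardness assumption is precisely that when p and q are large random primes, no algorithm, this one or any cleverer one, recovers them from n in any reasonable amount of time.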
The important point here is that average-case hardness is a much stricter, and thus more difficult to satisfy, type of hardness than worst-case hardness. For example, it is possible that there are some exceedingly rare corner-case instances which are hard to solve, while the vast majority of instances are trivially easy to solve.
Such a problem would still be worst-case hard but not average-case hard, because a random instance would almost never end up being one of those corner cases. However, any problem that is average-case hard is most certainly also worst-case hard. The crux of the matter is that assuming a particular problem is worst-case hard is a much weaker, and thus more plausible, assumption to make than assuming it is average-case hard. It turns out that for the type of lattice-based problems we use in crypto, we are usually able to prove the security of our schemes based only on the worst-case hardness of the problems rather than their average-case hardness. In fact, it was exactly this insight by Miklós Ajtai in 1996 that kicked off the entire field of lattice-based cryptography in the first place.
It is important for all networks and technology platforms to consider and build safeguards to protect today’s data against future post-quantum attacks. The team at Wickr is actively investing in research and development today to assess existing lattice-based algorithms for their efficiency and implementation. Stay tuned for more on this.
For example, in computer science, when we say that NP is not equal to P, what we really mean is that there is some NP problem such that, for any algorithm attempting to solve it, there is always at least one instance of the problem that the algorithm does not solve in polynomial time.