Taking into account quantum encryption (part 1)
With a lot of recent developments in quantum computing, I thought it would be interesting to write an article on quantum encryption and how it could affect communications in worldbuilding. This first part covers how quantum computing works, using a little bit of math that I'll introduce along the way.

What are you talking about?
Classical computers encode information using bits, which can take the value of 0 or the value of 1 — much like flipping a coin (heads or tails), or turning a lightbulb on and off. In a computer, bits are often represented by voltage — a higher voltage corresponds to 1, and a lower voltage corresponds to 0 (though of course there are other possible representations).
Quantum computers, though, are fundamentally different. They encode information using quantum bits, or qubits. Qubits can hold many different values — 0, 1, or anywhere in between. You can picture this by thinking of flipping a coin. Instead of just heads or tails, imagine rotating it:

It can still be tails or heads, but can also be various degrees of rotation. It turns out that this difference can mathematically be represented using vectors and linear algebra.
Linear algebra
Imagine I showed you an arrow and asked you to describe its position to me.

It can be kind of hard to describe. What if, instead, you and I agreed on some sort of system for describing the position of this arrow? Maybe something where we can describe its position with just a couple of numbers — how far to the left or right it is, and how far up or down it is.

René Descartes is one step (okay, a lot of steps and a couple of centuries) ahead of us. He created the Cartesian coordinate system, which we can use to describe the position of our arrow. We’ll place the vector on the coordinate plane such that the tail of the vector is at the origin, and we’ll define the vector by the coordinates of where the tip is located.

From here, it’s fairly easy to define simple operations. Adding is done by adding the components of the vectors. For example, (2,3)+(4,2) = (6,5). You can think of this as moving the tail of the second vector to the tip of the first, and drawing the new vector from the origin to the tip of the second vector. Subtracting works in much the same way. Multiplication by a number rather than a vector (such a number is called a scalar, and it can be an integer, a real number, a complex number, or almost anything) stretches or shrinks the length of the vector, and multiplying by a negative number flips its direction. For example, 5*(2,3) = (10,15), and -2*(2,3) = (-4,-6). Interestingly, combining multiplication and addition lets us write vectors in a “simplified” format. We do this by using unit vectors.

As the image above shows, these are vectors that extend one “grid” step in a single direction — one step along the x-axis, or (1, 0) (this is called i-hat; unit vectors are often written with a small caret, or “hat”, above them), one step along the y-axis, or (0,1) (called j-hat), and so forth into higher dimensions. Using these, we can rewrite any vector as scalar multiples of the unit vectors added together. For example, I can write (2, 3) as 2(1,0)+3(0,1). (This is writing the vector as a linear combination of the unit vectors.)
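These operations are only a couple of lines of code. Here’s a quick sketch in plain Python, using lists for vectors (the helper names vadd and vscale are just made up for this example):

```python
# Vectors as plain Python lists; operations act component by component.
def vadd(u, v):        # vector addition
    return [a + b for a, b in zip(u, v)]

def vscale(c, v):      # multiplication by a scalar
    return [c * x for x in v]

print(vadd([2, 3], [4, 2]))   # [6, 5]
print(vscale(5, [2, 3]))      # [10, 15]
print(vscale(-2, [2, 3]))     # [-4, -6]

# (2, 3) written as a linear combination of the unit vectors:
i_hat, j_hat = [1, 0], [0, 1]
print(vadd(vscale(2, i_hat), vscale(3, j_hat)))  # [2, 3]
```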
Qubits can be represented as vectors like this (we’ll get to that in a little bit) but what about gates? Do they have an analogue? The answer is of course yes. Vectors can be transformed by something called a matrix, or matrices, plural. The basic idea is this. Imagine instead of a normal Cartesian coordinate grid, we wanted to use a different grid — maybe something like the one below.

What we then do is lay it on top of our original grid and see where the unit vectors land in the new system. These new unit vectors form the columns of a matrix, and multiplying this matrix by a vector amounts to re-weighting those new unit vectors by the vector’s components, transforming the vector into the new coordinate system (also known as a basis).

When you multiply two matrices together, you combine their two transformations into one. With these two components, matrices and vectors, a lot of the basics of quantum computing can be understood. (Interestingly enough, this system of matrices and vectors can also be used to solve linear equations in many variables, and it shows up all over physics and math.) If you’d like to learn more about linear algebra, as well as see some great animations that explain the above in more depth, see 3Blue1Brown’s video series on linear algebra.
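To make this concrete, here’s a small sketch in plain Python (the matvec and matmul helpers are invented for this example; a matrix is stored as a list of rows, and its columns are where the unit vectors land):

```python
# Multiply a matrix by a vector: each output component is a row dotted
# with the vector, which re-weights the matrix's columns (the new unit
# vectors) by the vector's components.
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Multiply two matrices: this composes the two transformations.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# A 90-degree rotation: i-hat lands on (0, 1), j-hat lands on (-1, 0),
# so those are the columns.
R = [[0, -1],
     [1,  0]]

print(matvec(R, [2, 3]))  # [-3, 2]: the vector, rotated 90 degrees
print(matmul(R, R))       # [[-1, 0], [0, -1]]: two rotations make 180 degrees
```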
Theory
The question, now, is how exactly this all connects with quantum computing. For that, a wee bit of history. When quantum mechanics was first being discovered (or invented, depending on your philosophical attitude), there were two main formalizations of the theory. One, using primarily calculus, was wave mechanics, discovered by Schrödinger. Another, using primarily linear algebra, was matrix mechanics, discovered by Heisenberg, Born, and Jordan.
Both were shown to be equivalent, but the point here is that linear algebra can describe how small particles and atoms work (and, indirectly, us, though the effects aren’t noticeable at our scale). Quantum computing, which uses small particles, atoms, and whatever else may be able to exhibit the necessary quantum effects, therefore can be described using linear algebra.

A qubit, then, can be represented by a vector, and a gate by a matrix. There are a few requirements we need not get into here to make sure this method also follows the laws of quantum mechanics (roughly, the vectors must have length 1 and the matrices must be unitary), but this is the basic math behind the system. Just as all gates in classical computers can be built up from combinations of NAND gates, all gates in quantum computers can be built up from single-qubit gates and the CNOT gate, or controlled-NOT gate (more on that in a minute).
Bra-ket notation
Here’s the thing. We’ve been talking about vectors using component notation, as in (5, 1, 2) or (0, 1). But this means we’re writing the vectors with respect to a particular basis; that is, we’re assuming that someone else knows our definitions of i-hat and j-hat, or whatever our basis vectors might be. The same goes for matrices: when we write them in component notation, we assume a basis. Another problem arises when you’re dealing with vectors and matrices of high enough dimensions that it becomes a pain to write them out.
Instead, we can use bra-ket notation (the term, strange as it may seem, comes from splitting the word “bracket”: |x> is a “ket”, and its counterpart <x| is a “bra”). In bra-ket notation, |x> represents a column vector, like we’ve been discussing. A lowercase letter, like a, can be used to represent a scalar quantity. Capital letters, like H or A or U, can be used to represent matrices. In this way, the “standard form” of a qubit (in a single-qubit system) is often written

a|0> + b|1>

where a and b are (possibly complex) numbers whose squared magnitudes add up to 1.
Here, we’re using a single-qubit system. If the qubit is equal to 1|0> + 0|1>, it is in the zero state, and if it is equal to 0|0> + 1|1>, it is in the one state. If both terms have nonzero numbers in front of them, the qubit is in a superposition of the two — one of those in-between states.
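In the plain-Python sketch, these three cases look like this (a qubit is just a two-component list of amplitudes in the |0>, |1> basis):

```python
zero = [1, 0]          # 1|0> + 0|1>: the zero state
one  = [0, 1]          # 0|0> + 1|1>: the one state
s = 2 ** -0.5          # 1/sqrt(2), about 0.707
plus = [s, s]          # a superposition with equal weight on both states

# Valid qubit states are normalized: the squared magnitudes sum to 1.
for q in (zero, one, plus):
    print(abs(q[0]) ** 2 + abs(q[1]) ** 2)  # 1.0 (up to rounding)
```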
Quantum gates
There are several different quantum gates we’ll look at to get a feel for how the system works. The first is the NOT gate, or the Pauli-X gate. (We’ll be using component notation now, to see how gates affect the qubits a little more easily.) The NOT gate is defined as

X = [ 0  1 ]
    [ 1  0 ]
Try working out what it does yourself before you scroll down. (Try multiplying it by a couple different vectors, like (0,1) or (1,0).)
Okay, basically, it acts a lot like a classical NOT gate. If the qubit is in the one state, it flips it to the zero state, and vice versa. But what if the qubit is in a superposition of the two? Remember the superposition can be written a|0> + b|1>. In this case, the constants a and b are flipped, so the qubit is in a new superposition: b|0> + a|1>.
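Here’s a minimal check in plain Python, reusing a hand-rolled matvec helper (the amplitudes 0.6 and 0.8 are an arbitrary normalized example):

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

X = [[0, 1],
     [1, 0]]                  # the NOT (Pauli-X) gate

print(matvec(X, [1, 0]))      # [0, 1]: zero state flips to one state
print(matvec(X, [0, 1]))      # [1, 0]: one state flips to zero state
print(matvec(X, [0.6, 0.8]))  # [0.8, 0.6]: the amplitudes a and b swap
```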
Another standard gate is called the Hadamard gate. The Hadamard gate is useful in a couple of ways. First, applied to a qubit in the zero state, it leaves the qubit exactly in between the zero state and the one state. Second, when applied to a qubit together with a CNOT gate, it creates a Bell state, which is useful for many reasons.
It is represented by the matrix

H = 1/√2 [ 1   1 ]
         [ 1  -1 ]
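Applying it in the same plain-Python sketch shows the first property, plus a bonus: applying H twice gets you back where you started, since the Hadamard gate is its own inverse.

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

s = 2 ** -0.5              # 1/sqrt(2), about 0.707
H = [[s,  s],
     [s, -s]]              # the Hadamard gate

halfway = matvec(H, [1, 0])
print(halfway)             # [0.707..., 0.707...]: exactly in between 0 and 1
print(matvec(H, halfway))  # [1.0, 0.0] (up to rounding): back to the zero state
```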
The last major gate we’ll look at is the CNOT, or controlled-NOT gate. Note that there are many other gates, but for space purposes we’ll only cover these here. The CNOT gate, unlike the other gates we’ve looked at, is a two-qubit gate. It is represented by the following matrix:

CNOT = [ 1  0  0  0 ]
       [ 0  1  0  0 ]
       [ 0  0  0  1 ]
       [ 0  0  1  0 ]
Basically, it applies the NOT gate to the second qubit (the target qubit) if the first qubit (the control qubit) is in the one state. Using this gate plus single-qubit gates, one can construct any other gate.
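In the sketch below, a two-qubit state is a four-component vector over the basis |00>, |01>, |10>, |11>, with the first qubit as the control:

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

# Basis order: |00>, |01>, |10>, |11>
print(matvec(CNOT, [0, 1, 0, 0]))  # |01> stays |01>: the control qubit is 0
print(matvec(CNOT, [0, 0, 1, 0]))  # |10> becomes |11>: control is 1, target flips
```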
Circuit notation
All these gates can be represented easily using something called circuit notation. All gates are represented using blocks, which are placed on lines representing qubits. Double lines represent classical bits. For example, here’s a simple circuit:

All qubits start in the zero state. The first qubit has a Hadamard gate applied to it. The first qubit is treated as the control qubit for a CNOT gate, with the target qubit being the second. The second qubit is then treated as the control qubit for another CNOT gate, with the third qubit as the target.
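As a sketch of what this circuit computes, here’s a tiny state-vector simulator in plain Python (the helper functions are invented for this example, and qubit 0 is taken to be the leftmost bit of the basis-state label):

```python
# A 3-qubit state is a list of 8 amplitudes, one per basis state
# |000>, |001>, ..., |111>.

def apply_1q(gate, q, state, n):
    # Apply a 2x2 gate to qubit q of an n-qubit state.
    shift = n - 1 - q
    out = [0.0] * len(state)
    for i, amp in enumerate(state):
        bit = (i >> shift) & 1
        for new_bit in (0, 1):
            j = (i & ~(1 << shift)) | (new_bit << shift)
            out[j] += gate[new_bit][bit] * amp
    return out

def apply_cnot(control, target, state, n):
    # Flip the target bit of every basis state whose control bit is 1.
    out = [0.0] * len(state)
    for i, amp in enumerate(state):
        if (i >> (n - 1 - control)) & 1:
            i ^= 1 << (n - 1 - target)
        out[i] += amp
    return out

s = 2 ** -0.5
H = [[s, s], [s, -s]]

state = [1.0] + [0.0] * 7           # all three qubits start in the zero state
state = apply_1q(H, 0, state, 3)    # Hadamard on the first qubit
state = apply_cnot(0, 1, state, 3)  # CNOT: control qubit 0, target qubit 1
state = apply_cnot(1, 2, state, 3)  # CNOT: control qubit 1, target qubit 2
print(state)  # amplitude 1/sqrt(2) on |000> and on |111>, zero elsewhere
```

The circuit leaves the three qubits in an equal superposition of |000> and |111>, an entangled three-qubit state.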
You can create some of your own circuits using simulators like Quirk (my personal favorite; it’s also on GitHub, so you can look at the code if you wish) or IBM’s Quantum Experience (which also has some nice explanations of gates and algorithms to read, and an accompanying forum).
Measurement
I lied earlier when I said the last gate we would cover was the CNOT gate. Measurement can technically be called a gate, though it can’t be represented by a matrix. You may be wondering why you can’t just store an infinite amount of information in a single qubit, since it can take on what is basically an infinite number of states, not just two. The answer is that you can store the information; you just can’t access it. Measurement is to blame.
Basically, measurement sends the current state to the zero state or the one state, with probabilities determined by how close the current state is to each: for a qubit a|0> + b|1>, the probability of reading zero is |a|^2 and the probability of reading one is |b|^2. If the state is really close to the one state, there’s a high probability that after measurement it will end up there, though there’s a chance, however slight, that it will end up in the zero state. So not only do you get just a zero or a one when you read a qubit; measurement also permanently changes the state of the qubit to zero or one.
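A sketch of this rule in plain Python (the measure helper is made up for illustration; the probabilities come from the squared amplitudes, per the previous paragraph):

```python
import random

def measure(state):
    # Read a|0> + b|1>: outcome 0 with probability |a|^2, else outcome 1.
    # The state collapses to whichever outcome was read.
    a, b = state
    outcome = 0 if random.random() < abs(a) ** 2 else 1
    return outcome, ([1, 0] if outcome == 0 else [0, 1])

random.seed(0)                        # fixed seed so the run is repeatable
counts = [0, 0]
for _ in range(10000):
    outcome, collapsed = measure([0.6, 0.8])  # P(0) = 0.36, P(1) = 0.64
    counts[outcome] += 1
print(counts)  # roughly [3600, 6400]
```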
It’s the same sort of idea as in Schrödinger’s thought experiment. The cat is in a superposition of two states — alive and dead — until someone observes (measures) the cat’s state. The superposition then ‘collapses’, and we observe either a dead cat or a live cat. Of course, the idea of the superposition collapsing is called the Copenhagen interpretation, and it’s only one of several, but this isn’t an article about the philosophy behind quantum mechanics. Suffice it to say: measurement does weird stuff.
Entanglement
The last thing I’ll talk about here is entanglement. Einstein called it “spooky action at a distance”, and it is a really weird phenomenon. Measuring one particle of an entangled pair seems to instantly affect the state of the other, no matter how far apart the two are, and seemingly faster than the speed of light (though, notably, this can’t be used to send messages faster than light).
Why is this relevant to quantum computing? Well, two qubits can be entangled using a Hadamard gate and a CNOT gate (creating a Bell state). That’s useful for quantum communications, cryptography, and a super-dense encoding scheme, all of which we’ll get into in part two.
In part 2 of this article, we’ll talk about the usefulness of quantum computing, how quantum computers are constructed, and tie it all together for quantum cryptography and communications. There will also be some good resources listed if you’re interested in learning more about quantum computing.

