Published in The Startup

# What Is a Qubit?

Welcome to Introduction to Quantum Computing. I am your guide, Associate Professor Chris Ferrie, a researcher in the UTS Centre for Quantum Software and Information. This is Lecture 2. It would probably be a good idea to have read the previous lectures before continuing.

**What did you learn last week?**

Last week you reviewed the necessary maths to get started on quantum computing. You also had a high-level introduction to the field. Most importantly, you had your first taste of quantum programming by creating “hello quantum world” in several quantum programming languages.

**What will you learn this week?**

This week you will meet the qubit — the basic unit of quantum information. You will perform computations using Dirac notation, the secret weapon of the quantum ninja. By applying gates and measurements to your qubit, you will know how to perform a single qubit quantum computation.

**What will you be able to do at the end of this week?**

At the end of this module, you should be able to answer these questions:

- What is quantum information and how is it represented?
- What is a quantum state?
- How are quantum logic operations represented?
- What is a quantum circuit and how is it analysed?
- How do you “read” quantum information?

# Information is not physical

Information is an overloaded word with many meanings depending on context. Here we will take a computational view. An abstract model of a computation looks like this: Input → Compute → Output. The input is one from a finite set of possibilities. We can always uniquely identify which it is with bits, answers to yes/no questions. A sequence of n bits can represent 2ⁿ different possibilities. So, no matter what the input to a computation is, we can always think about it in terms of bits without losing anything. The same goes for the output. There are many concrete representations of bits, the most common being sequences of 0’s and 1’s.
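Since a sequence of n bits labels 2ⁿ possibilities, a quick Python sketch (an illustration, not part of the lecture) makes the counting concrete:

```python
from itertools import product

# A sequence of n bits can label 2**n different possibilities.
n = 3
labels = ["".join(bits) for bits in product("01", repeat=n)]

print(len(labels))            # 8, i.e. 2**3
print(labels[0], labels[-1])  # 000 111
```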

We can phrase computational questions within this model. For example, we can ask what sorts of operations preserve bits. This would tell us all the “allowed” computations that turn sequences of input bits into output bits. But we might also ask which, if any, algorithms can be implemented efficiently. If every algorithm required a number of steps that grew exponentially as the input size grew, computer science would be a non-starter. But even if there were algorithms that can be implemented efficiently (there are of course), it would probably still be the case that performing the computation by hand would be impractical. No one wants to add two 1 million digit numbers, even if “addition” is efficient. So we might then ask if a physical machine can be built to perform very large computations for us. And indeed they can — that’s digital electronics.

Now let’s ask a similar set of questions, but not of bits. Let’s imagine an abstract model of computation where the inputs are sequences of complex vectors. Let’s take the simplest non-trivial case of 2-dimensional vectors.

The notation |𝜓⟩ anticipates qubits, but for now they are just abstract vectors, like our abstract bits from before. So we ask first, what sorts of operations preserve these vectors — that is, take input vectors and produce output vectors. Let’s constrain the search to those things that are linear and preserve the length of the vector as well. Now we ask if any of these operations can be implemented efficiently in the sense that the number of elementary steps doesn’t grow exponentially as the number of input vectors increases. Of course we can create our own efficient algorithms by design (they might not do anything interesting, but we can make them). Even then, though, the fact remains that as the input size increases, our capacity to calculate things “by hand” becomes limited. In this case, those same digital machines could do the computation for us, but it turns out that they cannot do so efficiently. So, the question becomes, can a physical machine be built to perform the computation for us efficiently?
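As a sketch of the constraint just described, a linear, length-preserving operation on 2-dimensional complex vectors is a 2×2 unitary matrix, and unitarity is easy to check numerically (NumPy here is my choice of tool, not something from the lecture):

```python
import numpy as np

# A linear, length-preserving map on 2-dimensional complex vectors
# is a 2x2 unitary matrix: U†U = I.
U = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

# Check unitarity: conjugate transpose times U gives the identity.
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True

# Length preservation on an arbitrary unit vector.
psi = np.array([0.6, 0.8j])
print(np.linalg.norm(U @ psi))  # 1.0 up to floating point
```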

And, indeed, it seems the answer is yes! That machine would be a quantum computer. The 2-dimensional vectors can be physically represented by quantum mechanical systems, such as atomic energy levels, spin, polarisation, and many more. But that is in some sense beside the point. The point is, whereas we think of bits as “information”, this new model of computation with vectors has a new type of computational information. Since it can be represented with 2-dimensional quantum systems, we call them “quantum bits”, or qubits.

Wow. That was a long way to go just to get to qubits, which I’m sure you’ve already heard of. The reason I want you to think about this perspective is that it avoids the usual unnecessary attachment of qubits to physical things, which very quickly leads to confusion since we cannot really think concretely about “quantum things” having definite and objective properties. In any case, if you have been following, “quantum information” is specified in qubits — 2-dimensional complex vectors with unit norm.
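To make the definition concrete, here is a minimal NumPy sketch of a qubit as a 2-dimensional complex vector rescaled to unit norm (the particular numbers are arbitrary):

```python
import numpy as np

# A qubit state is any 2-dimensional complex vector with unit norm.
v = np.array([1 + 1j, 1 - 1j])   # some complex vector, not yet normalised
psi = v / np.linalg.norm(v)      # rescale to unit norm

print(np.linalg.norm(psi))  # 1.0
```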

Next time you hear that bits are 0 or 1 while qubits are 0 and 1 at the same time, think about how illogical that statement is and how utterly useless it would be if you tried to use it when programming a quantum computer.

Bits are variables that can take one of two values. The state of a bit is the particular value it has. Similarly, qubits are 2-dimensional complex vectors with unit norm. The state of a qubit is the particular vector value the qubit has. Sometimes we are sloppy with the language and use these things interchangeably. Sorry about that.

# Dirac’s legacy: would the real quantum bits please stand up

When you ask a quantum information theorist what powers quantum computation you’ll get a myriad of answers — superposition, parallelism, entanglement, coherence, yadda yadda yadda. I will say that what powers quantum computation — at least as far as it can get now — is Dirac notation.

Dirac notation, also called bra–ket notation, is a way to write objects in linear algebra. We already used it above when we introduced qubits with |𝜓⟩. This is called a “ket”. The vertical bar “|” and the right angle bracket “⟩” come together to form the ket. The thing on the inside is just a label. The ket | ⟩ represents a vector in a particular complex linear space. Other notations for the same object use underlines, arrows, or boldface.

However you denote your vectors, you need to be able to distinguish them from the coefficients. Kets are perfect for this. They also make it convenient to enumerate bases in binary. The binary labels are associated with the standard basis of the concrete vector space of column vectors with complex entries. This is called the computational basis in quantum computing. When we expand a vector in this basis and find more than one non-zero component, we call that linear combination of basis vectors a superposition. In addition to |0⟩ and |1⟩, we have labels for some other special vectors: the “plus state” |+⟩, the “minus state” |−⟩, and some others.
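As an illustration, the computational basis and the plus and minus states can be written down directly as column vectors; this is a NumPy sketch, not notation from the lecture:

```python
import numpy as np

# Computational basis: |0> and |1> as column vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The "plus" and "minus" states are superpositions: more than one
# non-zero component in the computational basis.
plus = (ket0 + ket1) / np.sqrt(2)    # |+>
minus = (ket0 - ket1) / np.sqrt(2)   # |->

print(plus)   # both components equal to 1/sqrt(2)
```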

In this canonical representation, used extensively in introductions to linear algebra, row vectors are created with the conjugate transpose, which swaps rows for columns and complex conjugates each entry of the vector. Strictly speaking, these are not vectors in the same space, but they can be placed in one-to-one correspondence with them. In Dirac notation, they are called “bras”, and are written as ⟨ |, with an appropriate label as needed.

Abstractly, ⟨𝜙| maps any vector |𝜓⟩ to the inner product between |𝜙⟩ and |𝜓⟩, which in Dirac notation is ⟨𝜙|𝜓⟩. The computational basis is an orthonormal basis — ⟨0|0⟩ = ⟨1|1⟩ = 1 and ⟨0|1⟩ = ⟨1|0⟩ = 0 — which facilitates quick calculations in Dirac notation. For example, for |𝜓⟩ = 𝛼|0⟩ + 𝛽|1⟩, the squared norm of the vector is ⟨𝜓|𝜓⟩ = (𝛼*⟨0| + 𝛽*⟨1|)(𝛼|0⟩ + 𝛽|1⟩) = |𝛼|² + |𝛽|².
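The same calculation can be checked numerically; note that NumPy’s `vdot` conjugates its first argument, which matches turning a ket into a bra:

```python
import numpy as np

# <phi|psi> is the inner product; np.vdot conjugates its first
# argument, matching the bra's conjugate transpose.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

alpha, beta = 0.6, 0.8j
psi = alpha * ket0 + beta * ket1

# Orthonormality of the computational basis: <0|1> = 0.
print(np.vdot(ket0, ket1))      # 0
# Squared norm: <psi|psi> = |alpha|**2 + |beta|**2.
print(np.vdot(psi, psi).real)   # 1.0
```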

# To change a qubit

Those pictures we drew above are called circuits, both in classical computation and quantum computation. Each line, or wire, represents a qubit, which can be labelled at various points to identify what the state of the qubit is. Circuits represent how the qubit state changes, one step at a time, from left to right. The “steps” are unitary matrices, those that preserve the norm of the vector. In the context of computation, these are called gates.
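Reading a circuit left to right corresponds to multiplying gate matrices onto the state, one step at a time. A minimal NumPy sketch, using the X and Hadamard gates as stand-ins for an arbitrary circuit:

```python
import numpy as np

# Each gate is a unitary matrix; a circuit applies them in sequence.
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = np.array([1, 0], dtype=complex)  # start the wire in |0>
for gate in [X, H]:                    # the circuit --X--H--
    psi = gate @ psi

# X|0> = |1>, then H|1> = |->, i.e. (|0> - |1>)/sqrt(2).
print(psi)
```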

There are lots of important examples and you will get to know them quite intimately. Some quantum gates act just as logical operations would act on the binary digits labelling the computational basis states. In some sense, these don’t add much beyond classical computation, though they are important parts of more complicated quantum circuits.
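For instance, the X gate acts on the computational basis states exactly as classical NOT acts on a bit; a small check in NumPy:

```python
import numpy as np

# X acts like classical NOT on the basis: X|0> = |1> and X|1> = |0>.
X = np.array([[0, 1], [1, 0]], dtype=complex)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

print(np.allclose(X @ ket0, ket1))  # True
print(np.allclose(X @ ket1, ket0))  # True
```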

Other gates, like the relative phase gate, modify superpositions. By adding minus signs, components of vectors can start to cancel. This phenomenon is called interference, and without it the computation could be easily simulated with a classical algorithm. Sometimes it is said that quantum computation is just classical probabilistic computation with negative numbers.
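A small worked example of this cancellation: a Z (relative phase) gate sandwiched between two Hadamard gates sends |0⟩ to |1⟩, because the minus sign it introduces makes components cancel. A NumPy sketch:

```python
import numpy as np

# Z flips the sign of the |1> component; between two Hadamards the
# resulting minus sign makes components cancel: H Z H |0> = |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

print(np.allclose(H @ Z @ H @ ket0, ket1))  # True
```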

None of the above gates generate superpositions starting from basis states. Probably the most important gate is the Hadamard gate, which generates the states |+⟩ and |−⟩ from |0⟩ and |1⟩. A great many quantum algorithms, and all that you will meet, start by changing the canonical |0⟩ state to a “uniform superposition”. We will see next week how this generalises to multiple qubit states.
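A quick check that the Hadamard gate does what is claimed, again in NumPy:

```python
import numpy as np

# The Hadamard gate maps basis states to the uniform superpositions:
# H|0> = |+> and H|1> = |->.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

print(np.allclose(H @ ket0, plus))   # True
print(np.allclose(H @ ket1, minus))  # True
```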

# Born to rule

The last topic we need to discuss is the infamous quantum measurement. Regardless of the state |𝜓⟩ = 𝛼|0⟩ + 𝛽|1⟩ the qubit is in, “looking” at the qubit only ever results in |0⟩ or |1⟩. In physics, “looking” is called measurement, and its randomness reflects the fundamentally probabilistic nature of quantum mechanics, which continues to cause much consternation among philosophers. For us, this is the model we must use, and so we will use it without much fuss. Of course, the result isn’t completely random and depends on the current state of the qubit we intend to measure. If the qubit state is |𝜓⟩ = 𝛼|0⟩ + 𝛽|1⟩, then the probability that |0⟩ will be observed is Pr(0|𝜓) = |𝛼|² and similarly Pr(1|𝜓) = |𝛽|². Of course one of the alternatives must occur, so |𝛼|² + |𝛽|² = 1, which is where the unit norm and its preservation come from. When analysing a general quantum circuit, the numbers you need at the end of the calculation are these probabilities.
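The Born rule is easy to simulate: square the magnitudes of the amplitudes to get the outcome probabilities, then sample. A NumPy sketch with arbitrarily chosen amplitudes:

```python
import numpy as np

# Born rule: measuring psi = alpha|0> + beta|1> gives outcome 0 with
# probability |alpha|**2 and outcome 1 with probability |beta|**2.
alpha, beta = 1 / np.sqrt(3), np.sqrt(2 / 3) * 1j
psi = np.array([alpha, beta])

probs = np.abs(psi) ** 2
print(probs)        # [1/3, 2/3]
print(probs.sum())  # 1.0, as it must be

# Simulate many measurements with these probabilities.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=10_000, p=probs)
print(samples.mean())  # close to 2/3
```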

The very last point is that after measurement the qubit is definitely in the state corresponding to the outcome observed. For example, if a qubit in state |𝜓⟩ was measured and |0⟩ observed, then the state after the measurement is |0⟩. This is independent of the initial state |𝜓⟩, and thus “looking” at quantum information is not a reversible process. This abrupt change is sometimes called collapse. And that is what I’m going to do if I write one more wor…
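Collapse can be folded into the same simulation: a measurement returns an outcome and replaces the state with the corresponding basis vector, so repeating the measurement always repeats the outcome. A sketch using a hypothetical `measure` helper (my naming, not the lecture’s):

```python
import numpy as np

ket = [np.array([1, 0], dtype=complex),   # |0>
       np.array([0, 1], dtype=complex)]   # |1>

def measure(psi, rng):
    """Sample an outcome via the Born rule; return it with the
    post-measurement ("collapsed") state."""
    probs = np.abs(psi) ** 2
    outcome = rng.choice([0, 1], p=probs)
    return outcome, ket[outcome]

rng = np.random.default_rng(1)
psi = np.array([1, 1]) / np.sqrt(2)       # |+>
outcome, post = measure(psi, rng)

# Measuring the collapsed state always repeats the first outcome.
outcome2, _ = measure(post, rng)
print(outcome == outcome2)  # True
```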

# Resources

A good reference for all the standard quantum gates used in quantum information is the Wikipedia entry on Quantum logic gates.

We have played a little fast and loose with “linear algebra”. The topic is much deeper than matrices and vectors. Research in quantum information science often requires a more abstract understanding of linear algebra. If you are up for a challenge, John Watrous’s text The Theory of Quantum Information is an excellent resource for learning the tools of the quantum information scientist.



Quantum theorist by day, father by night. Occasionally moonlighting as an author. csferrie.com