Tensor Products — Linear Algebra for QC

In the last post of our “Linear Algebra for QC” series, we explored the important concepts surrounding eigenvectors and eigenvalues. This post is dedicated to understanding the tensor product and finally seeing its application in Quantum Computing. Let’s get started!

Harshit Gupta
Quantum Untangled
12 min read · Apr 13, 2021


The Tensor Product

This article revolves around how we look at things when quantum systems go beyond a single element. Before starting with the intuition behind tensor products, let us take a small mathematical detour and have a quick look at things called bases. Readers comfortable with vector spaces and basis vectors may feel free to skip ahead, but in my opinion, recapitulation always helps!

Basis Vectors

If you have followed our Linear Algebra series, you may well be familiar with the concept of vectors, scalars and matrices. Let us also look at what a vector space is and what constitutes a basis.

Simply put, a vector space is a collection of vectors, and basis vectors are the fundamental constituents of that collection. Without going into the concrete definition of how vector spaces are defined, they can be imagined as a “container” of vectors. There are certain properties that its elements must satisfy (you can take a look at the full list here), but for right now just knowing that they contain vectors will suffice.

The basis is something fundamental. If each element v ∈ V (vector v belongs to space V) can be written as a linear combination (a sum of scalar multiples) of some set of vectors belonging to the same space V, and those vectors are linearly independent, then that set of vectors is called a basis set. A basis does not have to be orthogonal in general, but the bases used in quantum computing usually are: for any pair of distinct vectors in the set, their dot product is 0. Let us see an example —

Let us take a 2-dimensional complex vector space V where the vectors can be of the form

Vector belonging to space V

such that each a, b ∈ C. Now, imagine that you write them as

Different representation of vector

Since the initial vector was a general 2-D vector, this representation would be valid for all vectors of the 2-D space V. So, let us take

Basis vectors for V

as our basis vectors for now.

Also, both of the above vectors belong to the space V. So, the first condition is fulfilled. If you remember how the dot product works, we have —

Dot product of bases

which proves that it is a valid basis set for the vector space V. You may also easily prove that

Basis for 3-D vector space

forms a basis set for a 3-D vector space.
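Since the figures carry the actual vectors, here is a small numerical sketch of the same checks, assuming the standard basis {(1, 0), (0, 1)} for the 2-D case and the analogous basis for 3-D:

```python
import numpy as np

# Standard basis for the 2-D space V
e0 = np.array([1, 0])
e1 = np.array([0, 1])

# Orthogonality: the dot product of distinct basis vectors is 0
print(np.dot(e0, e1))  # 0

# Any vector (a, b) with complex a, b is the linear combination a*e0 + b*e1
a, b = 3 + 2j, -1j
v = a * e0 + b * e1  # gives the vector (a, b)

# The same orthogonality check for the standard 3-D basis
basis_3d = np.eye(3)
for i in range(3):
    for j in range(i + 1, 3):
        assert np.dot(basis_3d[i], basis_3d[j]) == 0
```

The same loop works for the standard basis of any dimension, which is one reason it is the default choice.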

The following is a useful analogy for thinking about basis vectors. Imagine that a country only has currency available in denominations of 50 and 100 units. So, although money can only be dealt in forms of —

Money being dealt in currency basis

it turns out that whatever transaction happens can be expressed as a linear combination of 50 and 100 units of currency. This is like saying that 50 and 100 form a basis for the 2-dimensional currency space!

Now, why do we care about basis vectors?

This is because every quantum state, by definition, belongs to a vector space. Since a quantum state may take any arbitrary value, it can be difficult to record it accurately. A basis provides a way to write down quantum states in a well-defined manner: we only need to look at how close the state is to each basis vector.

With the above knowledge, we can now look at how we represent quantum states in a basis and then ultimately go on to combine them. Since the quantum states we deal with belong to a 2-D complex vector space, they can be represented as a linear combination of its basis vectors (as we proved above). This representation is a convention followed in quantum mechanics and is grounded in empirical evidence.

Representing a quantum state in the 2-D basis
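As a sketch of this representation (the particular values of α and β here are illustrative; in quantum mechanics the coefficients must additionally satisfy |α|² + |β|² = 1):

```python
import numpy as np

# Computational basis states |0> and |1>
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A quantum state |psi> = alpha|0> + beta|1>
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = alpha * ket0 + beta * ket1

# The squared magnitudes of the coefficients sum to 1
print(np.abs(alpha) ** 2 + np.abs(beta) ** 2)  # ≈ 1.0
```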

Now that you are familiar with how quantum states are represented in a basis let us move on to how we may combine two or more states. Let us start with an analogy.

Suppose you have a quantum die and a quantum coin. We know that a normal six-sided die can have only one of six possible outcomes when rolled, and the same is the case for the coin, i.e. it can only have two outcomes — heads or tails. Their quantum versions are quite similar.

The states of a quantum coin can be represented in the basis —

Basis states for quantum coin

where the basis states are represented in Dirac notation — which is just a convenient way to write quantum states.

The states of the quantum die on the other hand can be represented as a linear combination of the following six basis states —

Basis states for quantum die

So we say that any state of the quantum coin can be represented as a linear combination of |H> and |T> viz — α|H> + β|T> where α,β ∈ C.

The states of the die, on the other hand, can be represented as a linear combination of its six basis states, e.g. α|1> + β|3> + γ|4>, where all coefficients are complex and, in this particular example, the coefficients of the remaining basis states happen to be zero.

This analogy can help you understand bases and how they help in the representations of quantum systems.

If you have been paying close attention, you’ll notice that we have only taken a single system into account during our discussions. You may think, “wait, if a die has 6 possible states, shouldn’t that be six systems?” The point is that the die requires 6 basis vectors to define its one state. The die is what constitutes a system, not the basis vectors.

This is all fine and good, but where are tensor products and how do they fit into everything we discussed? Let us find out.

The intuition behind Tensor products

What if we want to combine systems?

Think of tensor products as a way of capturing all possible combinations of basis vectors. Bases of what? Different systems. As we saw above, basis vectors are representatives of how we define a system. So, it makes sense to define a combination of systems as a combination of the constituents of those systems, since any other component (vector) can be described in terms of those fundamental components. The following analogy should make clearer what we mean by combinations.

Suppose you wanted to eat a meal consisting of 3 things — a fruit, a snack and a dessert. The fruit course has a choice of 3 options: apples, oranges or bananas. The snack has options of a burger, pizza or pasta, and finally the dessert has 2 options, ice cream or custard (quite fulfilling, I suppose!).

In how many possible ways can you eat your meal?

Let’s define an order for our three courses as follows —

An analogy for how tensor products work where items are the different bases.

How do you go about selecting a meal then? You first select a fruit, then a snack and lastly a dessert. Referencing the image, one option may be that you eat an apple, a pizza and then ice cream, or it may be that you like pasta more and you eat that instead of the pizza.

This is how we can visualize tensor products. The three courses that you have just eaten can be thought of as 3 different vector spaces, each with its own set of choices, or basis vectors. The tensor product of the courses would capture all the possible ways you may be allowed to eat the meal, given that you follow a horizontal ordering (fruit, snack, dessert) and a specified vertical ordering inside your system.
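The counting behind the meal analogy can be sketched with `itertools.product`, which enumerates exactly these ordered combinations (the menu items below are the ones from the analogy):

```python
from itertools import product

fruits = ["apple", "orange", "banana"]
snacks = ["burger", "pizza", "pasta"]
desserts = ["ice cream", "custard"]

# Every ordered choice of (fruit, snack, dessert)
meals = list(product(fruits, snacks, desserts))

print(len(meals))  # 18, i.e. 3 * 3 * 2
print(meals[0])    # ('apple', 'burger', 'ice cream')
```

The size of the combined space is the product of the sizes of the individual spaces, which is exactly the dimensionality rule we will see for tensor products below.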

The Mathematics

The term tensor product contains the word tensor. What is that? A tensor is a container which can house data in N dimensions. Tensors are in fact generalizations of matrices to N-dimensional spaces, which is why the term is often used interchangeably with matrix (a matrix being specifically a 2-dimensional, or rank-2, tensor).

Let us formally define the tensor product now and dissect it bit by bit, following with some examples.

The tensor product V ⊗ W of two vector spaces V and W is a vector space, equipped with a bilinear map {(v, w) → v ⊗ w} from the Cartesian product V × W to V ⊗ W.

The above statement means that any two vector spaces V and W can be combined with the tensor product to generate a new vector space V ⊗ W. Now, this vector space (which we’ll call Z for now) contains vectors which are formed by taking all the possible combinations of vectors from V and W, and generating a new vector through the operation ‘⊗’ between them.

Without going too deep into mathematical notation, just remember that another way to think of tensor products is through the combination of basis vectors. The basis of the new tensor product space is essentially the Cartesian product of the basis sets of the vector spaces V and W. Let us see a concrete example to clear things up a bit.

We bring back the quantum die and the quantum coin to illustrate the combination of multiple systems. The first system is our die and the second, the coin. Let C be the vector space of the coin, with the basis set {|H>,|T>}. Also, let the die be represented by the vector space D, which has the basis set {|1>,|2>,|3>,|4>,|5>,|6>}.

Now, given that we have the individual systems, let us look at what the combined system would look like. Following up from our above discussion, this system is defined as the vector space D ⊗ C. Some of the basis states for the space would look like |1> ⊗ |T> or |2> ⊗ |H>. The following table captures the possibilities quite nicely —

The combined basis vectors for the coin-die system

Note that the dimensionality of this product space is m × n, where m is the number of basis vectors of the first vector space and n that of the second.

The tensor product is also defined on elements of a vector space, and note that, unlike matrix multiplication, the elements’ dimensions do not have to be compatible. This means that a 4 x 1 vector can be ‘tensored’ with a 3 x 1, 5 x 1, or 6 x 1 vector too. In fact, any tensor of shape (n, m) can be ‘tensored’ with any other tensor of shape (p, q) to produce a new tensor of shape (np, mq). Also, like matrix multiplication, a thing to note is that in general a ⊗ b ≠ b ⊗ a. The following example makes clear how the calculation of a tensor product goes.
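NumPy’s `np.kron` (the Kronecker product) computes this operation on arrays, so the shape rule and the non-commutativity are easy to check; a small sketch with made-up values:

```python
import numpy as np

a = np.array([[1], [2], [3], [4]])  # shape (4, 1)
b = np.array([[5], [6], [7]])       # shape (3, 1)

# A tensor of shape (n, m) tensored with one of shape (p, q)
# gives a tensor of shape (n*p, m*q)
print(np.kron(a, b).shape)  # (12, 1)

# The tensor product is not commutative in general
print(np.array_equal(np.kron(a, b), np.kron(b, a)))  # False
```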

Suppose you have two vectors —

Vectors a and b

and you wish to calculate the tensor product of these two vectors, i.e. a ⊗ b. The way to capture all the combinations is, as we defined above, to make pairwise multiplications of the elements of the vectors. Another useful way to represent that is highlighted in the following procedure.

The vector b is treated as a single ‘element’ and multiplied by each element of the first vector. Then, each of these is expanded: the component of the first vector multiplies the whole of the second vector, element-wise. This image should make the above steps clearer —

Step 1 — Treat the second vector as an element
Step 2 — Expand the vector
Step 3 — Multiply and get the result

The final results for tensor products can get a bit scary, especially when matrices get involved, but always remember to treat the second vector as an element; the rules are pretty straightforward and will lead you to the correct result.
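The step-by-step procedure above is exactly what `np.kron` does: each entry of the first vector scales a full copy of the second vector. A sketch with made-up numbers:

```python
import numpy as np

a = np.array([1, 2])
b = np.array([3, 4])

# Steps 1-3: each element of `a` multiplies the whole of `b`
result = np.kron(a, b)
print(result)  # [3 4 6 8], i.e. [1*b, 2*b] laid side by side

# The same result built manually from the "treat b as an element" rule
manual = np.concatenate([x * b for x in a])
print(np.array_equal(result, manual))  # True
```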

Now that we have the knowledge of tensor products under our belt, let us go look into how they are employed in quantum computing.

Application in Quantum Computing

Superposition, entanglement, and quantum interference are the phenomena which actually make the quantum model of computation more powerful than the classical one. How do we leverage these tools? Unless we expand our systems to multiple qubits, we can’t really employ them to our advantage. So, how do we manipulate or study multi-qubit systems?

The answer to that is tensor products.

This is actually one of the fundamental postulates of quantum mechanics: if you have n systems with individual quantum states —

Multiple quantum systems

The combined system describing these n systems as one unified quantum state is defined by the tensor product of all the states —

The combined quantum system

Think of it like having n particles in a closed box. How do you capture every possible combination of that group of particles? This echoes the very definition of tensor products and their capacity for describing combinations of multiple systems, and is thus the reason why physicists use tensor products to characterize a multi-qubit system.

As an example, consider two qubits in the |0> and |1> states —

How do we combine the state of these two qubits? We take the tensor product of the two and if, say, the first qubit is in |0> and the second qubit is in |1> state, we get —

The final state vector

This state vector signifies that the system is described by a 4-dimensional basis. Why? Well, there are 4 possible combinations for a 2-qubit system: |0>|0>, |0>|1>, |1>|0> and |1>|1>. We can also have systems of more than 2 qubits, where an n-qubit system is represented by a 2^n-sized vector, but for simplicity we look at 2 qubits.
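The combined state can be computed directly; a sketch using the standard computational basis:

```python
import numpy as np

ket0 = np.array([1, 0])  # |0>
ket1 = np.array([0, 1])  # |1>

# Combined two-qubit state |0> tensor |1>, written |01>
state = np.kron(ket0, ket1)
print(state)  # [0 1 0 0], a vector in the 4-dimensional basis

# For n qubits the state vector has 2**n entries, e.g. 8 for three qubits
print(np.kron(np.kron(ket0, ket1), ket0).shape)  # (8,)
```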

Finally, let’s look at how the operations done on qubits are transformed when systems go beyond single element. If you are familiar with the basic gates of quantum computing, you would be familiar with the bit flip gate or the X gate. For those of you who aren’t, just think of it as —

What if we wanted to apply gates to our multi-qubit system? Let’s take the system from above — the first qubit is in the |0> state and the second in the |1> state. If you want to apply an X gate on the first qubit only, how would you go about it? Again, using tensor products.

Any combination of gates being applied individually to a multi-qubit system is again defined as a tensor product of the gates, applied to the whole system. Why this is true can be seen from the following identity —

Identity for tensor product of products

The identity can be proved by simply evaluating both sides and comparing the corresponding elements, for general vectors and matrices. Instead of going into that, we’ll see an example of how the X gate can be applied to a single qubit in a multi-qubit system.

Assume that the X gate is to be applied to the first qubit and the second qubit is to be left as it is. We can model it as —

Applying X gate on the first qubit
Composed circuit in IBMQ experience
The tensor product for the matrices

as the identity (I) simply models the ‘do nothing’ result we want on the second qubit. With a little bit of calculation you may confirm that it results in the following state —

Resultant State
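The whole calculation, including a numerical check of the identity (A ⊗ B)(v ⊗ w) = (Av) ⊗ (Bw), can be sketched as:

```python
import numpy as np

ket0 = np.array([1, 0])  # |0>
ket1 = np.array([0, 1])  # |1>
X = np.array([[0, 1], [1, 0]])  # bit-flip gate
I = np.eye(2)                   # "do nothing" on the second qubit

# Apply X to the first qubit of |0>|1> via the tensor product of gates
state = np.kron(ket0, ket1)      # |01> = [0 1 0 0]
result = np.kron(X, I) @ state
print(result)  # [0. 0. 0. 1.], i.e. |11>

# Identity check: (X tensor I)(|0> tensor |1>) == (X|0>) tensor (I|1>)
print(np.allclose(result, np.kron(X @ ket0, I @ ket1)))  # True
```

The first qubit is flipped from |0> to |1> while the second is untouched, giving |11> as expected.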

Although we admit that the actual intuition behind ‘combinations’ gets muddled quickly as the number of elements grows, what is really important in the initial stages is the basic rules for evaluating, working out, and playing around with tensor products.

To conclude, the tensor product is a tool to break through into the domain of multiple elements and thus expand our abilities to model and study larger systems. In this post, we looked at what tensor products mean intuitively, dug into the math, and finally explored how multiple qubits are represented mathematically. I hope the essence of what tensor products mean was not lost in the mathematics and you had fun reading it.

Stay tuned for more by Quantum Untangled and keep exploring!
