Linear Algebra for Quantum Computing — Part 1

Navaneeth Dinesh
Published in Analytics Vidhya · 8 min read · Mar 24, 2021

The primary objective of this article is to familiarise you with the basics of linear algebra needed to understand how qubits behave. Covering all of linear algebra is beyond the scope of a single article, so we will limit our discussion to the key concepts necessary to get started with quantum computing.

Vectors and vector spaces

Let’s start by exploring vectors. As most of you might have learned in high school, a vector can be defined as a quantity that has both magnitude and direction. Graphically, the 3-dimensional vector (1, 4, 5) drawn from the origin can be represented as in the figure below.

Image courtesy: Mathisfun

A collection of vectors, together with rules for adding and scaling them, constitutes a vector space. A vector space V can be formally defined as a set of vectors satisfying two conditions:

  1. For any two vectors |a⟩, |b⟩ ∈ V, their vector addition yields another vector in the same vector space, i.e., |a⟩ + |b⟩ = |c⟩, where |c⟩ ∈ V.
  2. For any vector |a⟩ ∈ V and any scalar n ∈ F, their scalar multiplication, denoted n|a⟩, also belongs to the same vector space, i.e., n|a⟩ ∈ V.
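To make these two closure conditions concrete, here is a small sketch using NumPy (the vectors and scalar chosen here are arbitrary examples, not from the article):

```python
import numpy as np

# Two vectors from the same 3-dimensional real vector space
a = np.array([1.0, 4.0, 5.0])
b = np.array([2.0, -1.0, 0.5])

# Closure under vector addition: the sum is again a 3-dimensional vector
c = a + b
print(c)        # [3.  3.  5.5]

# Closure under scalar multiplication: n|a> is again in the same space
n = 3.0
d = n * a
print(d)        # [ 3. 12. 15.]
```

Both results have the same shape as the inputs, so they live in the same space, which is exactly what the two conditions demand.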

As far as quantum computing is concerned, we are interested in state vectors. As their name suggests, state vectors are vectors that point to a specific point in space and thereby determine the state of a system, in this context a quantum state. Each orientation of the state vector corresponds to a state, and all such state vectors together constitute what is known as a state space. Such a state space, with all possible orientations of a single-qubit state vector, can be represented as a Bloch sphere as shown below.

Note that a Bloch sphere has a radius of 1 unit. We will soon explore why this is the case when we talk about Hilbert Space.

Matrices and matrix operations

A Matrix can be defined as a rectangular array of numbers arranged into rows and columns. For a matrix with ‘m’ rows and ’n’ columns, the dimension of the matrix is given by m x n. They are generally used as operators that transform a vector into another vector. To understand how they are applied to vectors, let’s define some operations on the matrix.

  1. Matrix addition: This operation is straightforward and performs element by element addition of two matrices. The dimension of both the matrices should be the same for this operation.
  2. Matrix multiplication: For a valid matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resultant matrix will be of dimension r1 x c2, where r1 is the number of rows in the first matrix and c2 is the number of columns in the second matrix. This operation is illustrated in the figure below.
Image courtesy: Matthew Scroggs
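The dimension rule can be checked with a quick NumPy sketch (the entries here are arbitrary illustrative numbers):

```python
import numpy as np

# A 2x3 matrix times a 3x2 matrix: the columns of the first (3)
# match the rows of the second (3), so the product is valid
A = np.array([[1, 2, 3],
              [4, 5, 6]])          # r1 x c1 = 2 x 3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])           # r2 x c2 = 3 x 2

C = A @ B                          # resulting dimension: r1 x c2 = 2 x 2
print(C.shape)                     # (2, 2)
print(C)                           # [[ 58  64]
                                   #  [139 154]]
```

Each entry of C is the dot product of a row of A with a column of B, e.g. C[0, 0] = 1·7 + 2·9 + 3·11 = 58.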

3. The inverse of a matrix: The inverse of a matrix A, denoted A⁻¹, is another matrix that satisfies AA⁻¹ = A⁻¹A = I, where I is the identity matrix.

Calculating the inverse of a matrix is seldom needed in quantum computing, but you can learn more about it here.

4. Transpose of a matrix: It is a matrix that is formed by flipping a matrix across its diagonal, turning all the rows into columns, and vice versa.

Image courtesy: Penn State Science

Now that we have explored some basic operations on matrices, let us take a look at Hermitian and unitary matrices. We come across these two types of matrices frequently when performing quantum computations.

A Hermitian matrix is a matrix that is equal to its own conjugate transpose. So the obvious question is: what is the conjugate transpose of a matrix? The conjugate transpose of an m-by-n matrix is obtained by transposing it into an n-by-m matrix and then taking the complex conjugate of each entry. (Note: the complex conjugate flips the sign of the imaginary part of a number.)

A hermitian matrix. Image courtesy: ShareTechnote
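As a concrete check, here is a short NumPy sketch using the Pauli-Y matrix, a standard example of a Hermitian matrix (the choice of Pauli-Y is mine, not the article's):

```python
import numpy as np

# The Pauli-Y matrix has purely imaginary off-diagonal entries
Y = np.array([[0, -1j],
              [1j,  0]])

# Conjugate transpose: transpose, then complex-conjugate each entry
Y_dagger = Y.conj().T

# Hermitian means the matrix equals its own conjugate transpose
print(np.allclose(Y, Y_dagger))    # True
```

Transposing swaps the off-diagonal entries and conjugating flips their imaginary signs back, so the two operations together return the original matrix.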

Hermitian matrices find most of their applications in quantum mechanics. But there is another variety of matrices, called unitary matrices, which is equally relevant in both quantum mechanics and quantum computing.

A unitary matrix is defined by a closely related condition: instead of being equal to its conjugate transpose, a unitary matrix has an inverse equal to its conjugate transpose, i.e., U⁻¹ = U†.

The basic idea is that evolution of a quantum state by application of a unitary matrix “preserves” the norm (magnitude) of the quantum state.

Due to the above-stated reason we use unitary matrices to represent quantum gates.
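Both properties, unitarity and norm preservation, can be verified numerically. The sketch below uses the Hadamard gate as the example unitary (the particular state vector is an arbitrary illustration):

```python
import numpy as np

# The Hadamard gate, a common single-qubit quantum gate
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

# Unitary means U† equals U⁻¹, i.e. U† U = I
identity = H.conj().T @ H
print(np.allclose(identity, np.eye(2)))   # True

# Applying a unitary preserves the norm of a state vector
state = np.array([0.6, 0.8j])             # a normalized state
print(np.linalg.norm(H @ state))          # 1.0 (up to rounding)
```

Because the output of a gate is again a unit-norm vector, it is itself a valid quantum state, which is precisely why quantum gates must be unitary.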

Vector space and associated terminologies

A field is a set (often denoted F) that has two binary operations defined on it: + (addition) and · (multiplication). So for any a, b∈F, a+b, and a · b are elements of F.

A vector space over a field F, or an F-space, is a set (often denoted V ) which has a binary operation + (vector addition) defined on it, and an operation · (scalar multiplication) defined from F × V to V. So for any v, w ∈ V, v+w is in V, and for any α ∈ F and v ∈ V, α·v ∈ V. In this context, an element α of F is called a scalar.

Consider the vectors |v₁⟩, |v₂⟩, …, |vₙ⟩. A linear combination of these vectors, defined as a sum of scalar multiples of the vectors, gives another vector, say |v⟩, which can be written as |v⟩ = c₁|v₁⟩ + c₂|v₂⟩ + ⋯ + cₙ|vₙ⟩, where each cᵢ ∈ F.

Now let us assume that we have a vector space V over a field F. Let S be a set of vectors belonging to a subspace Vₛ such that Vₛ ⊂ V. We say that the vectors of S span the subspace Vₛ if every vector in Vₛ can be written as a linear combination of the vectors in S. Such a set S is called a spanning set.

A set of vectors |v₁⟩, …, |vₙ⟩ is said to be linearly dependent if at least one of the vectors in the set can be written as a linear combination of the others. Equivalently, there exist coefficients bᵢ ∈ F such that:

b₁|v₁⟩ + b₂|v₂⟩ + ⋯ + bₙ|vₙ⟩ = 0,

where at least one of the bᵢ coefficients is non-zero. If no such coefficients exist, the vectors are said to be linearly independent.
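One practical way to test linear independence is via the matrix rank, as in this NumPy sketch (the specific vectors are made up for illustration):

```python
import numpy as np

# Stack vectors as rows; the matrix rank counts how many of them
# are linearly independent
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2                  # deliberately dependent on v1 and v2

dependent_set = np.stack([v1, v2, v3])
print(np.linalg.matrix_rank(dependent_set))    # 2: v3 adds nothing new

independent_set = np.stack([v1, v2])
print(np.linalg.matrix_rank(independent_set))  # 2: fully independent
```

A set of k vectors is linearly independent exactly when the rank of the stacked matrix equals k; here the three-vector set only has rank 2, exposing the dependence.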

In quantum computation, we often encounter the |0⟩ and |1⟩ qubit states. In fact, all other qubit states can be represented as linear combinations of these two vectors, so they form one possible basis. Formally, a basis is a linearly independent spanning set. The coefficients of the linear combination are referred to as the components or coordinates of the vector with respect to the basis.

A vector space can have several bases; however all the bases have the same number of elements, called the dimension of the vector space.
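The |0⟩/|1⟩ basis can be written out explicitly, as in the sketch below (the particular amplitudes a and b are arbitrary examples):

```python
import numpy as np

# The basis states |0> and |1> as column vectors
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Any qubit state is a linear combination a|0> + b|1>
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = a * ket0 + b * ket1

# Recover the components (coordinates) with inner products;
# np.vdot conjugates its first argument, matching <0|psi> and <1|psi>
print(np.vdot(ket0, psi))   # a
print(np.vdot(ket1, psi))   # b
```

Because the basis has two elements, this single-qubit state space is 2-dimensional, matching the definition of dimension above.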

Hilbert Spaces

As Wikipedia formally defines it,

A Hilbert space is a vector space equipped with an inner product, an operation that allows defining lengths and angles.

In the context of quantum mechanics and quantum computation, the inner product between two state vectors is an extension of the dot product that returns a scalar quantity representing the amount to which the first vector lies along the second vector.

Before we move on to the representation of the inner product, let us briefly look at the Dirac notation for vectors of a Hilbert space, which is the standard way of writing quantum states in quantum mechanics. Consider two vectors a and b. In Dirac notation, the “ket” |a⟩ denotes the column vector whose entries are the components a₁, …, aₙ.

In addition to the “ket”, we also have the “bra” ⟨a|, which is obtained by forming a row vector and complex-conjugating the entries (this is equivalent to taking the conjugate transpose of the ket).

Now the inner product of two vectors a and b can be represented intuitively using the bra-ket notation as ⟨a|b⟩.

One of the most important conditions for a Hilbert space to represent a quantum system is that the inner product of any state vector with itself must be 1. If ψ represents a quantum state, then we can mathematically frame the above statement as ⟨ψ|ψ⟩ = 1. This condition is popularly known as the normalization condition. In other words, all quantum states are normalized.
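Both the bra-ket inner product and the normalization condition are easy to compute, as this NumPy sketch shows (the example vectors are mine, chosen for illustration):

```python
import numpy as np

# np.vdot conjugates its first argument, so np.vdot(a, b)
# computes exactly the bra-ket inner product <a|b>
a = np.array([1, 1j]) / np.sqrt(2)
b = np.array([1, 0], dtype=complex)

print(np.vdot(a, b))             # <a|b> = 1/sqrt(2)

# Normalization condition: <psi|psi> must equal 1 for a quantum state
psi = np.array([0.6, 0.8j])
print(np.vdot(psi, psi).real)    # 1.0
```

Note that forgetting the conjugation (e.g. using np.dot on complex vectors) gives the wrong answer; the conjugate is what makes ⟨ψ|ψ⟩ real and non-negative.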

We have already seen the Bloch sphere used to represent a state space, and we mentioned that its radius is one. This is because the Bloch sphere, together with the inner product between qubit state vectors, represents a Hilbert space. All possible normalized pure states are illustrated on the surface of the sphere, and since they are normalized, the radius of such a sphere must be 1.

Now that we have seen Hilbert spaces, it is time to see their relationship with unitary matrices. An interesting property of unitary matrices, as discussed earlier, is that they preserve the inner product: no matter how you transform a vector under a sequence of unitary matrices, the normalization condition still holds.

This means that unitary evolution sends quantum states to other valid quantum states. For a single-qubit Hilbert space, represented by the Bloch sphere, unitary transformations correspond to rotations of state vectors to different points on the sphere, not changing the length of the state vector in any way.
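The claim about sequences of unitaries can be checked directly; the sketch below applies two standard gates in succession (the gate sequence X-then-H is an arbitrary example):

```python
import numpy as np

# Two single-qubit gates: X (bit flip) and H (Hadamard)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = np.array([1, 0], dtype=complex)    # start in |0>

# Apply the unitaries one after another; the state stays normalized
for gate in (X, H):
    psi = gate @ psi
    print(np.vdot(psi, psi).real)        # ~1.0 after every gate
```

After X the state is |1⟩, and after H it is (|0⟩ − |1⟩)/√2: a rotation to a different point on the Bloch sphere, with the norm untouched at every step.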

I think we have reached the saturation point. That’s enough information to cover in a single article. We will continue our discussion on linear algebra in the second part of this article. Until then stay hungry stay foolish!

References

  1. Qiskit textbook — https://qiskit.org/textbook/ch-appendix/linear_algebra.html
  2. Wikipedia — Hilbert space, basis
