Computational Linear Algebra: Trace and Determinant

Unraveling the Mysteries of Trace and Determinant in Computational Linear Algebra

Monit Sharma
7 min read · Jun 12, 2023

Welcome back to our Computational Linear Algebra blog series! In this article, we will explore two important concepts: the trace and the determinant. These mathematical operations provide valuable insights into the properties and behavior of matrices. Understanding the trace and determinant allows us to analyze matrix transformations, compute eigenvalues, and solve a variety of computational problems. This is the lightest chapter so far, covering only the basic definitions of the trace and determinant. So, let’s dive in and unravel the mysteries of these fundamental concepts.

The Trace Operator

The trace is the sum of all values on the main diagonal of a square matrix:

tr(A) = a₁₁ + a₂₂ + ⋯ + aₙₙ

NumPy provides the function np.trace() to calculate it:

A = np.array([[2, 9, 8], [4, 7, 1], [8, 2, 5]])
A

array([[2, 9, 8],
       [4, 7, 1],
       [8, 2, 5]])

A_tr = np.trace(A)
A_tr

14
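As a quick sanity check, the trace is simply the sum of the diagonal entries, which np.diag extracts:

```python
import numpy as np

A = np.array([[2, 9, 8], [4, 7, 1], [8, 2, 5]])

# The diagonal entries are 2, 7 and 5, so the trace is 14
print(np.sum(np.diag(A)))  # 14
```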

The trace can be used to express the Frobenius norm of a matrix. The Frobenius norm is the matrix analogue of the Euclidean norm for vectors. It is defined by:

‖A‖_F = √(tr(AAᵀ))

We can check this with np.linalg.norm():

np.linalg.norm(A)

17.549928774784245

The Frobenius norm of A is 17.549928774784245. Computing it through the trace gives the identical result:

np.sqrt(np.trace(A.dot(A.T)))

17.549928774784245

np.sqrt(np.trace(A.dot(A.T))) == np.linalg.norm(A)

True

(In general, comparing floating-point results with == is fragile and np.isclose is the safer check, but here the two computations agree exactly.)

Since the transposition of a matrix doesn’t change the diagonal, the trace of the matrix is equal to the trace of its transpose:

tr(A) = tr(Aᵀ)
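We can verify this numerically with the matrix A from above:

```python
import numpy as np

A = np.array([[2, 9, 8], [4, 7, 1], [8, 2, 5]])

# Transposition leaves the diagonal untouched, so both traces are 14
print(np.trace(A))    # 14
print(np.trace(A.T))  # 14
```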

Trace of a Product

The trace is invariant under cyclic permutations of a matrix product:

tr(ABC) = tr(CAB) = tr(BCA)

Example 1

Let’s verify this property with three 2×2 matrices A, B, and C:


A = np.array([[4, 12], [7, 6]])
B = np.array([[1, -3], [4, 3]])
C = np.array([[6, 6], [2, 5]])

np.trace(A.dot(B).dot(C))

531



np.trace(C.dot(A).dot(B))

531


np.trace(B.dot(C).dot(A))

531

The above code confirms that all three cyclic permutations give the same trace, 531.

The trace provides several important properties and applications:

  1. Sum of Eigenvalues: The trace of a matrix is equal to the sum of its eigenvalues. Eigenvalues represent the scaling factors by which eigenvectors are transformed when multiplied by a matrix. The trace provides a quick way to obtain the sum of eigenvalues without explicitly calculating them.
  2. Similarity Invariant: The trace of a matrix remains the same under similarity transformations. That is, if two matrices A and B are similar (i.e., B = P⁻¹AP, where P is an invertible matrix), they have the same trace.
  3. Matrix Operations: The trace can be used to simplify matrix operations. For example, the product of two matrices A and B satisfies tr(AB) = tr(BA). This property helps simplify computations involving trace and matrix products.
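One caveat worth checking numerically: the trace is invariant under cyclic permutations only, not arbitrary reorderings. A quick sketch with the matrices from the earlier example:

```python
import numpy as np

A = np.array([[4, 12], [7, 6]])
B = np.array([[1, -3], [4, 3]])
C = np.array([[6, 6], [2, 5]])

# Cyclic permutations all give the same trace
print(np.trace(A @ B @ C))  # 531
print(np.trace(C @ A @ B))  # 531

# A non-cyclic reordering generally gives a different value
print(np.trace(A @ C @ B))  # 438
```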

The Determinant

We saw previously that a matrix can be seen as a linear transformation of the space. The determinant of a matrix A is a number corresponding to the multiplicative change you get when you transform your space with this matrix.

A negative determinant means that there is a change in orientation (and not just a rescaling and/or a rotation). A change in orientation means for instance in 2D that we take a plane out of these 2 dimensions, do some transformations and get back to the initial 2D space.

The determinant of a matrix can tell you a lot of things about the transformation associated with this matrix

Let’s do basic imports and make the plotting function.



import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

# Plot style
sns.set()
plt.rcParams['figure.figsize'] = (4, 4)

def plotVectors(vecs, cols, alpha=1):
    """
    Plot a set of vectors.

    Parameters
    ----------
    vecs : array-like
        Coordinates of the vectors to plot. Each vector is in an array. For
        instance: [[1, 3], [2, 2]] can be used to plot 2 vectors.
    cols : array-like
        Colors of the vectors. For instance: ['red', 'blue'] will display the
        first vector in red and the second in blue.
    alpha : float or list
        Opacity of the vectors (a single value, or one value per vector).

    Returns
    -------
    fig : instance of matplotlib.figure.Figure
        The figure containing the vectors.
    """
    # Draw the axes in light grey behind the vectors
    plt.axvline(x=0, color='#A9A9A9', zorder=0)
    plt.axhline(y=0, color='#A9A9A9', zorder=0)

    for i in range(len(vecs)):
        if isinstance(alpha, list):
            alpha_i = alpha[i]
        else:
            alpha_i = alpha
        # Each vector is drawn as an arrow from the origin
        x = np.concatenate([[0, 0], vecs[i]])
        plt.quiver([x[0]], [x[1]], [x[2]], [x[3]],
                   angles='xy', scale_units='xy', scale=1,
                   color=cols[i], alpha=alpha_i)

Example 1.

To see how a matrix transformation changes areas, we will use simple squares in 2 dimensions, starting from the unit square spanned by the two unit vectors i and j, which has area 1.

Let’s start by creating both vectors in Python:

orange = '#FF9A13'
blue = '#1190FF'

i = [0, 1]
j = [1, 0]

plotVectors([i, j], [[blue], [orange]])
plt.xlim(-0.5, 3)
plt.ylim(-0.5, 3)
plt.show()

We will apply the matrix

A = [[2, 0],
     [0, 2]]

to i and j. You can notice that this matrix is special: it is diagonal, so it will only re-scale our space, with no rotation. More precisely, it will re-scale each dimension the same way because the diagonal values are identical. Let’s create the matrix A:

A = np.array([[2, 0], [0, 2]])
A

Now we will apply A on our two unit vectors i and j and plot the resulting new vectors:

new_i = A.dot(i)
new_j = A.dot(j)
plotVectors([new_i, new_j], [['#1190FF'], ['#FF9A13']])
plt.xlim(-0.5, 3)
plt.ylim(-0.5, 3)
plt.show()

As expected, we can see that the square spanned by i and j didn’t rotate, but the lengths of i and j have doubled. We will now calculate the determinant of A:

np.linalg.det(A)

4.0

Example 2.

Let’s see now an example of negative determinant.

We will transform the unit square with the matrix

B = [[-2, 0],
     [0, 2]]

whose determinant is -4:

B = np.array([[-2, 0], [0, 2]])
np.linalg.det(B)

-4.0

new_i_1 = B.dot(i)
new_j_1 = B.dot(j)
plotVectors([new_i_1, new_j_1], [['#1190FF'], ['#FF9A13']])
plt.xlim(-3, 0.5)
plt.ylim(-0.5, 3)
plt.show()

We can see that the matrices with determinant 4 and -4 modified the area of the unit square in the same way.

The absolute value of the determinant shows that, as in the first example, the area of the new square is 4 times the area of the unit square. But this time, it was not just a rescaling: the orientation was reversed. This is not obvious with only the unit vectors, so let’s transform some random points. We will use the matrix

C = [[-1, 0],
     [0, 1]]

which has a determinant equal to -1 for simplicity.

# Some random points
points = np.array([[1, 3], [2, 2], [3, 1], [4, 7], [5, 4]])
C = np.array([[-1, 0], [0, 1]])
np.linalg.det(C)

-1.0

Since the absolute value of the determinant is 1, the area of the shape will not change. However, since the determinant is negative, we will observe a transformation that we can’t obtain through rotation:

newPoints = points.dot(C)

plt.figure()
plt.plot(points[:, 0], points[:, 1])
plt.plot(newPoints[:, 0], newPoints[:, 1])
plt.show()

You can see that the transformation mirrored the initial shape.
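We can make the area and orientation claims concrete. The helper below, polygon_area, is one introduced here for illustration; it applies the shoelace formula to a polygon’s vertices. Applied to the unit square and its image under the matrix B from Example 2, the signed area is multiplied by exactly det(B), and the sign flip records the change of orientation:

```python
import numpy as np

def polygon_area(pts):
    """Signed area of a polygon (vertices in order) via the shoelace formula."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

# Unit square, vertices listed counter-clockwise
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
B = np.array([[-2, 0], [0, 2]])

transformed = square @ B.T  # apply B to every vertex

print(polygon_area(square))       # 1.0
print(polygon_area(transformed))  # -4.0: area scaled by 4, orientation flipped
```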

The determinant possesses the following significant properties and applications:

  1. Matrix Invertibility: A matrix is invertible (or non-singular) if and only if its determinant is non-zero. Determinants help identify whether a matrix can be inverted or not. If det(A) ≠ 0, then A has an inverse, and det(A⁻¹) = 1/det(A).
  2. Volume Scaling: The determinant of a matrix represents the scaling factor by which a linear transformation changes the volume of a region in space. If the determinant is positive, the transformation preserves orientation and volume. If it is negative, the transformation reverses orientation.
  3. Linear Independence: The determinant helps determine whether a set of vectors is linearly independent. If the determinant of the matrix formed by these vectors is non-zero, they are linearly independent.
  4. Eigenvalues: The determinant of a matrix A is equal to the product of its eigenvalues. It plays a crucial role in calculating eigenvalues and eigenvectors.
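The invertibility and eigenvalue properties are easy to verify numerically. Here is a small sketch using an arbitrary invertible matrix chosen for the example:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])  # det(A) = 2*3 - 1*1 = 5

d = np.linalg.det(A)

# Invertibility: det(A) != 0, and det(A^-1) = 1/det(A)
print(np.linalg.det(np.linalg.inv(A)))  # approximately 0.2 = 1/5

# The determinant equals the product of the eigenvalues
print(np.prod(np.linalg.eigvals(A)))    # approximately 5.0
```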

Conclusion

The Trace and Determinant are fundamental concepts in Computational Linear Algebra that provide valuable insights into the properties and behavior of matrices. The trace captures the sum of diagonal elements and offers connections to eigenvalues and matrix operations. On the other hand, the determinant provides information about matrix invertibility, volume scaling, linear independence, and eigenvalues.

Understanding these concepts equips us with powerful tools to analyze matrices, solve systems of linear equations, compute eigenvalues, and perform various transformations. As we progress further in this blog series, we will continue to explore more fascinating topics in Computational Linear Algebra.

We have seen that the determinant of a matrix is a special value telling us a lot of things on the transformation corresponding to this matrix. Now hang on and go to the next chapter on the Principal Component Analysis (PCA).
