Python and Physics: Matrix Math Cheatsheet
As you advance through your STEM courses, one math topic keeps appearing: matrices. In this article, we will go through some examples of how to use Python to solve common matrix math problems.
Straight Into the Code
To start off, we should import NumPy along with a few functions from numpy.linalg that will help us with some of the more complex topics, such as eigenvalues and determinants. While these can be solved by lengthy hand calculations, these imports will let us get through things much more quickly.
import numpy as np
from numpy.linalg import det
from numpy.linalg import eig
from numpy.linalg import inv
Scalar Multiplication
First, let’s start off with some simple multiplication, such as flipping a matrix’s signs from positive to negative.
a = np.reshape([2, 3, 4, 5], (2, 2)) #Making a 2x2 matrix
print(a)##Output
[[2 3]
[4 5]]
To multiply, we just use the asterisk.
symbol_change = -1 * a
print(symbol_change)##Output
[[-2 -3]
[-4 -5]]
To multiply by a constant, we do the same thing.
constant_multiplication = 2 * a
print(constant_multiplication)##Output
[[ 4 6]
[ 8 10]]
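One thing to watch: with two matrices of the same shape, the asterisk multiplies element by element, not matrix-style. A minimal sketch of the difference:

```python
import numpy as np

a = np.reshape([2, 3, 4, 5], (2, 2))
b = np.reshape([1, 0, 0, 1], (2, 2))  # the 2x2 identity, as a plain array

elementwise = a * b   # multiplies only the matching entries
matrixwise = a @ b    # true matrix multiplication

print(elementwise)  # [[2 0], [0 5]]
print(matrixwise)   # [[2 3], [4 5]] -- a times the identity is a
```

So the asterisk is safe for scalars, but for matrix-times-matrix we need the @ operator from the next section.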
Matrix Multiplication/Dot Product
Now we move on to matrix multiplication, i.e. the matrix dot product. This time we will try matrices with different dimensions.
b = np.reshape([1,2,3,4,5,6],(2,3))#Matrix with 2 rows and 3 columns
print(b)
print('----')
c = np.reshape([7,8,9,10,11,12],(3,2))#3 rows and 2 columns matrix
print(c)##Output
[[1 2 3]
[4 5 6]]
----
[[ 7 8]
[ 9 10]
[11 12]]
NumPy does have a matmul function, but Python’s @ operator signifies matrix multiplication and calls the same routine.
matrix_dot_product = b@c #@ symbol does our matrix multiplication
print(matrix_dot_product)##Output
[[ 58 64]
[139 154]]
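The @ operator only works when the inner dimensions agree: the columns of the left matrix must match the rows of the right. A quick sketch using the same b and c:

```python
import numpy as np

b = np.reshape([1, 2, 3, 4, 5, 6], (2, 3))      # 2 rows, 3 columns
c = np.reshape([7, 8, 9, 10, 11, 12], (3, 2))   # 3 rows, 2 columns

# (2x3) @ (3x2) -> 2x2; the inner dimensions (3 and 3) match
product = b @ c
print(product.shape)                          # (2, 2)
print(np.array_equal(b @ c, np.matmul(b, c)))  # True -- @ and matmul agree

# b @ b would raise a ValueError: inner dimensions 3 and 2 don't match
```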
Matrix Inverse
Using the inv function that we previously imported, we can easily invert matrices. To verify that it works, we will multiply our inverse with the original matrix to see if it forms the identity matrix. A word of caution: at times, the off-diagonal values of the identity matrix can come out slightly non-zero due to floating-point precision, so be a bit mindful if you are doing calculations that need high precision.
print(a)
inverse = inv(a)
print('---')
print(inverse)
print('---')
identity = a@inverse
print(identity)##Output
[[2 3] #Original
[4 5]]
---
[[-2.5 1.5] #Inverse
[ 2. -1. ]]
---
[[1. 0.] #Verified that it does output our identity matrix
[0. 1.]]
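Because of that floating-point caveat, comparing the product against an exact identity matrix can fail even when the inverse is correct. np.allclose is the safer check; a minimal sketch:

```python
import numpy as np
from numpy.linalg import inv

a = np.reshape([2, 3, 4, 5], (2, 2))
identity = a @ inv(a)

# Exact equality is fragile with floats; allclose tolerates tiny round-off
print(np.allclose(identity, np.eye(2)))  # True
```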
Determinants
Another function that we imported, det, will help us solve for determinants. For this example, we will go through a 2x2 matrix and a 3x3 matrix.
b = np.reshape([4,6,3,8],(2,2))
print(b)
determinant = det(b)
det_format = float("%.2f" % det(b))
print('---')
print(determinant,'/',det_format)
c= np.reshape([6,1,1,4,-2,5,2,8,7],(3,3))
print('---')
print(c)
print('---')
det_c = det(c)
print(det_c)##Output
[[4 6]
[3 8]] #Original 2x2 matrix
---
14.000000000000004 / 14.0 #Determinants, second value formatted
---
[[ 6 1 1]
[ 4 -2 5]
[ 2 8 7]] #Original 3x3 matrix
---
-306.0 #Determinant
As mentioned before, floating-point precision sometimes gives us values that we don’t expect, as is the case for this determinant. In the det_format line, I used Python’s string formatting to keep things under two decimal places, along with float() to make sure the result doesn’t stay a string.
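An alternative that skips the string detour entirely is Python’s built-in round, which trims the same round-off while keeping the value a float. A minimal sketch:

```python
import numpy as np
from numpy.linalg import det

b = np.reshape([4, 6, 3, 8], (2, 2))

raw = det(b)             # carries float round-off, e.g. 14.000000000000004
cleaned = round(raw, 2)  # rounds to two decimal places, stays numeric
print(cleaned)  # 14.0
```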
Solving for Ax = B
A more practical use of the functions we’ve been using is solving a system of equations. By transforming the system into the Ax = B form, we can solve for x by multiplying both sides by the inverse of A. Since A times its inverse is just the identity matrix, to solve for x we just need to multiply the inverse of A by B.
a = np.reshape([1,1,1,0,2,5,2,5,-1],(3,3))
print(a)
print('---')
b = np.reshape([6,-4,27],(3,1))
print(b)
print('---')
solving_x = inv(a) @ b
print(solving_x)##Output
[[ 1 1 1]
[ 0 2 5]
[ 2 5 -1]] #Our A
---
[[ 6]
[-4]
[27]] #Our B
---
[[ 5.]
[ 3.]
[-2.]] #Our X
To verify, we multiply A by x to see if we get our B.
print(a@solving_x)##Output
[[ 6.]
[-4.]
[27.]] #Works!
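Worth knowing: NumPy also ships np.linalg.solve, which solves Ax = B directly without forming the inverse, and is generally the more accurate and efficient route. A sketch comparing it with the approach above:

```python
import numpy as np
from numpy.linalg import inv

a = np.reshape([1, 1, 1, 0, 2, 5, 2, 5, -1], (3, 3))
b = np.reshape([6, -4, 27], (3, 1))

x_inverse = inv(a) @ b           # the explicit-inverse approach above
x_solve = np.linalg.solve(a, b)  # dedicated solver, no explicit inverse

print(np.allclose(x_inverse, x_solve))  # True -- both give x = [5, 3, -2]
```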
Transpose and Conjugate
For the transpose, we take advantage of NumPy’s built-in .T attribute.
print(a)
print('---')
transpose = a.T #All we need to get the transpose of a Numpy array
print(transpose)##Output
[[2 3]
[4 5]]
---
[[2 4]
[3 5]] #Switched out values over the diagonal
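The transpose works on non-square matrices too: rows become columns, so a 2x3 matrix becomes a 3x2. A quick sketch with the earlier b:

```python
import numpy as np

b = np.reshape([1, 2, 3, 4, 5, 6], (2, 3))  # 2 rows, 3 columns
print(b.T.shape)  # (3, 2): the dimensions swap
print(b.T)
# [[1 4]
#  [2 5]
#  [3 6]]
```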
For the conjugate, we use NumPy’s conjugate function; we can either type out conjugate completely or just use conj. To write imaginary numbers, Python uses j instead of the more commonly known i.
#j for our imaginary number, 1j is equal to i.
a = np.reshape([1+1j, -2+1j, 3-1j, 4-1j], (2,2))
print(a)
print('---')
conjugate = np.conj(a)
print(conjugate)##Output
[[ 1.+1.j -2.+1.j]
[ 3.-1.j 4.-1.j]]
---
[[ 1.-1.j -2.-1.j]
[ 3.+1.j 4.+1.j]] #Changed the sign of our imaginary numbers
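In physics these two operations often appear together as the conjugate (Hermitian) transpose, written A†. With NumPy that is just chaining the two commands we already have; a minimal sketch:

```python
import numpy as np

a = np.reshape([1 + 1j, -2 + 1j, 3 - 1j, 4 - 1j], (2, 2))

hermitian_transpose = a.conj().T  # conjugate each entry, then transpose
print(hermitian_transpose)
# [[ 1.-1.j  3.+1.j]
#  [-2.-1.j  4.+1.j]]
```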
Eigenvalues and Eigenvectors
Finally, we will use the last function that we imported, eig. The eigenvalues come back in a straightforward array, but for the eigenvectors we will have to play around a bit.
a = np.reshape([-6,3,4,5],(2,2)) #2x2 matrix
print(a)
print('---')
values, vectors = eig(a) #Values and vectors given by eig function
print(values)
print('---')
print(vectors)##Output
[[-6 3] #Original Matrix
[ 4 5]]
---
[-7. 6.] #2 eigenvalues
---
[[-0.9486833 -0.24253563] #2 Vectors, each vector is a column
[ 0.31622777 -0.9701425 ]]
If we were to use vectors[0] to get our first vector, we would actually get the first row, which is not the first eigenvector, since each eigenvector is a column. So to get the first vector, this is what I did.
#Assigning the first values of the rows as the vector
vector_one = np.reshape([vectors[0][0],vectors[1][0]],(2,1))
print(vector_one)##Output
array([[-0.9486833 ],
[ 0.31622777]])
To check that these really are eigenvalue and eigenvector pairs, we multiply the original matrix by the vector and see if the result matches the eigenvalue multiplied by that same vector.
print(a@vector_one) #Matrix multiplied by vector
print(values[0]*vector_one) #First eigenvalue by the vector##Output
[[ 6.64078309]
[-2.21359436]]
[[ 6.64078309]
[-2.21359436]] #They match!
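A shortcut worth knowing: NumPy’s column slicing, vectors[:, 0], grabs the first eigenvector directly without the reshape gymnastics, and all the pairs can be verified at once via A V = V diag(λ). A sketch:

```python
import numpy as np
from numpy.linalg import eig

a = np.reshape([-6, 3, 4, 5], (2, 2))
values, vectors = eig(a)

# vectors[:, 0] slices out the first column, i.e. the first eigenvector
first_vector = vectors[:, 0]
print(np.allclose(a @ first_vector, values[0] * first_vector))  # True

# Check every pair at once: A V should equal V diag(values)
print(np.allclose(a @ vectors, vectors @ np.diag(values)))  # True
```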
And there you have it! A nice collection of examples in Python to help out those who need to write programs that deal with matrices. There are still plenty of other concepts to visit, such as trace and orthogonality to name a few, but this should be a step in the right direction for most needs! Hope you enjoyed this one!