Finding the Inverse of a Matrix Using Gauss-Jordan Elimination and the Adjoint Matrix Method

Pollux Rey
8 min read · Dec 17, 2019


Foundation

Skip this part if you’re already confident in your knowledge of matrices up to, but not including, the inverse of a matrix.

Suppose that we have a system of n linear equations with n unknowns of the form

a_{1,1}x_{1} + a_{1,2}x_{2} + … + a_{1,n}x_{n} = b_{1}
a_{2,1}x_{1} + a_{2,2}x_{2} + … + a_{2,n}x_{n} = b_{2}
⋮
a_{n,1}x_{1} + a_{n,2}x_{2} + … + a_{n,n}x_{n} = b_{n}

where the a’s are the coefficients, the x’s are the unknowns, and the b’s are the constants. We can represent this system with matrices; the matrix form of the system is AX = B.

We call the matrix of coefficients A, the column of unknowns X, and the column of constants B. The coefficient matrix A is square since it has n-by-n entries. The n-by-1 matrix X is called the solution vector, and multiplying A by X gives the n-by-1 constant vector B.

The system of linear equations can have a unique solution, infinitely many solutions, or no solution at all. Graphically (in two variables), we get a unique solution if the lines intersect in exactly one point; infinitely many if they all represent the same line; and no solution if the lines are parallel to each other. The figure below illustrates this:

Graphical representation of unique, infinitely many, and no solution, respectively.

On the other hand, let’s look at the number of solutions to a linear system using matrices. Let’s explain it one by one using examples.

Unique Solution

If we have this augmented matrix (meaning the coefficient matrix and the constant vector are attached together, with a line as their separator),

and the resulting row reduced matrix, using Gauss-Jordan Elimination, is

then the solution is unique because the number of pivots (the leading 1s on the left side of the augmented matrix) is equal to the number of columns of the coefficient matrix (see Rank of a Matrix). If we plug the solution into the system, the left-hand side checks out with the constants on the right.

Infinitely Many Solutions

Given the matrix

its reduced row echelon form is

As you can see, the resulting matrix has a zero row, which is an indication that the system has infinitely many solutions. Also, the number of pivots is less than the number of columns of the coefficient matrix, so at least one variable is free. The solutions we get are

and with a little bit of rearrangement we get

where x_{3} is a free variable and can be any real number.

If we let x_{3} = 1, then x_{1} = -2 and x_{2} = -1. If we let x_{3} = 3, then x_{1} = 0 and x_{2} = 1. Plug in each set of values and you’ll get the same constants as in the given matrix.

No Solution

Let’s take a matrix in RREF (reduced row echelon form), say,

Notice that the last row has a non-zero value in the constant vector but all zeros in the coefficient matrix.

That row is illegal because it claims that 0·x_{1} + 0·x_{2} + … + 0·x_{n} equals a non-zero constant, which is impossible. The system therefore has no solution.
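To make the three cases concrete, here is a minimal sketch in Python (pure standard library; the function name `classify` and the example matrices are my own illustrations, not from this article) that row-reduces an augmented matrix with exact fractions and counts pivots:

```python
from fractions import Fraction

def classify(augmented):
    """Row-reduce an augmented matrix [A | b] and report the number of solutions."""
    rows = [[Fraction(x) for x in row] for row in augmented]
    m, n = len(rows), len(rows[0]) - 1   # n unknowns; last column is the constant vector
    pivots = 0
    for col in range(n):
        # find a usable pivot in this column, at or below the current pivot row
        piv = next((r for r in range(pivots, m) if rows[r][col] != 0), None)
        if piv is None:
            continue
        rows[pivots], rows[piv] = rows[piv], rows[pivots]
        p = rows[pivots][col]
        rows[pivots] = [x / p for x in rows[pivots]]        # scale the pivot to 1
        for r in range(m):
            if r != pivots and rows[r][col] != 0:           # clear the rest of the column
                f = rows[r][col]
                rows[r] = [x - f * y for x, y in zip(rows[r], rows[pivots])]
        pivots += 1
    # a row that is all zeros in A but non-zero in b is the "illegal" row described above
    if any(all(x == 0 for x in row[:n]) and row[n] != 0 for row in rows):
        return "no solution"
    return "unique" if pivots == n else "infinitely many"
```

For example, `classify([[1, 1, 2], [1, 1, 3]])` returns `"no solution"`, since row reduction produces the contradictory row 0 = 1.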

Determinant of a Matrix

As we mentioned earlier, if the rank of a matrix is equal to its number of columns, then the system has a unique solution. Besides that, another way to find out whether a linear system has a unique solution is its determinant (see the formal definition of the Determinant of a Matrix). Basically, you choose a row or a column of the matrix. Then, get the determinant of each entry’s submatrix — pretend that the row and column the entry belongs to do not exist. After you get these values for all entries of your chosen row/column, multiply each determinant by its corresponding entry, add them up (mind the signs!), and you’ll get the determinant of the matrix.
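The procedure just described is cofactor expansion, and it can be sketched recursively. A minimal version in Python (the function name `det` and the choice of the first row as the expansion row are mine):

```python
def det(a):
    """Determinant of a square matrix by cofactor expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # submatrix with row 0 and column j removed ("pretend they do not exist")
        sub = [row[:j] + row[j + 1:] for row in a[1:]]
        # (-1)**j supplies the alternating sign ("mind the signs!")
        total += (-1) ** j * a[0][j] * det(sub)
    return total
```

For instance, `det([[1, 2], [3, 4]])` gives -2, which is non-zero, so that matrix is invertible.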

If the determinant of the matrix is not equal to zero, it means that the matrix is invertible.

Inverse of a Matrix

We know that

AX = B

If the determinant of the coefficient matrix A, det(A), is non-zero, then A has an inverse. If we multiply both sides of the equation above by A^{-1} on the left, it follows that

A^{-1}AX = A^{-1}B
IX = A^{-1}B
X = A^{-1}B

This means that we can find the solution of the system using the inverse of the matrix, provided that B is given. In this article, we will present two techniques for getting it: Gauss-Jordan Elimination and the Adjoint Matrix Method.
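For instance, once A^{-1} is in hand, solving the system is just a matrix-vector product. A small sketch (the 2-by-2 system here is a made-up example, not one from this article):

```python
def mat_vec(m, v):
    """Multiply matrix m by column vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in m]

# example system: 2x + y = 3 and x + y = 2, so A = [[2, 1], [1, 1]] and B = [3, 2]
a_inv = [[1, -1], [-1, 2]]       # inverse of A, computable by either method below
x = mat_vec(a_inv, [3, 2])       # X = A^{-1}B gives [1, 1], i.e. x = 1, y = 1
```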

Gauss-Jordan Elimination

We have seen above that when A is multiplied by its inverse, the result is the identity matrix I (a bunch of 1s on the main diagonal of the matrix, surrounded by 0s). Mathematically,

AA^{-1} = I

If we exchange the positions of A and its inverse,

A^{-1}A = I

If we think about it, this is just like solving for X. In AX = B, A is still A but B is now I, and instead of X we are now solving for A^{-1}. What we did earlier was augment A with B and use Gauss-Jordan elimination to get X. That’s exactly what we do here: we put A and I together as the augmented matrix [A | I] and row-reduce it; when the left half becomes I, the right half is A^{-1}.
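The steps above can be sketched in Python. This is a minimal implementation (exact fractions, no pivoting strategy beyond finding a non-zero entry; the function name is mine) that augments A with I and row-reduces:

```python
from fractions import Fraction

def inverse_gauss_jordan(a):
    """Invert a square matrix by row-reducing the augmented matrix [A | I]."""
    n = len(a)
    # augment A with the identity matrix on the right
    aug = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(a)]
    for c in range(n):
        # find a row with a non-zero pivot in column c and swap it into place
        piv = next((r for r in range(c, n) if aug[r][c] != 0), None)
        if piv is None:
            raise ValueError("matrix is singular (det = 0), no inverse exists")
        aug[c], aug[piv] = aug[piv], aug[c]
        p = aug[c][c]
        aug[c] = [x / p for x in aug[c]]                 # scale the pivot row so the pivot is 1
        for r in range(n):
            if r != c and aug[r][c] != 0:                # clear column c in every other row
                f = aug[r][c]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[c])]
    # the left half is now I, so the right half is A^{-1}
    return [row[n:] for row in aug]
```

For example, `inverse_gauss_jordan([[2, 1], [1, 1]])` returns `[[1, -1], [-1, 2]]` (as exact Fractions).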

Adjoint Matrix Method

Let’s first lay down some terminologies.

We have mentioned before how to get the determinant of a matrix, but we’ll now dive deeper into what determinant is composed of.

Minor of a matrix

The minor of a matrix, M_{i,j}, is the determinant of the submatrix obtained when you ignore the values in the ith row and jth column of your matrix.

Cofactor of a matrix

The cofactor of a matrix, C_{i,j}, is a signed minor. It follows the formula

C_{i,j} = (-1)^{i+j}·M_{i,j}

Determinant of a matrix

Suppose that we have an n-by-n square matrix A with entries a_{i,j}. The determinant of A is

det(A) = a_{i,1}C_{i,1} + a_{i,2}C_{i,2} + … + a_{i,n}C_{i,n}

if we want to expand along the ith row, or

det(A) = a_{1,j}C_{1,j} + a_{2,j}C_{2,j} + … + a_{n,j}C_{n,j}

if we want to expand along the jth column. Expanding each C_{i,j} results in

det(A) = a_{i,1}(-1)^{i+1}M_{i,1} + a_{i,2}(-1)^{i+2}M_{i,2} + … + a_{i,n}(-1)^{i+n}M_{i,n}

or

det(A) = a_{1,j}(-1)^{1+j}M_{1,j} + a_{2,j}(-1)^{2+j}M_{2,j} + … + a_{n,j}(-1)^{n+j}M_{n,j}
To further understand the three concepts, let’s answer the following

(Elementary Linear Algebra by Ron Larson, 8th edition, page 116)
Find the minors and cofactors of the matrix

Adjoint of a Matrix

The transpose of the matrix of cofactors is called the adjoint of a matrix. That is, you create a new matrix whose entries are the cofactors of the corresponding entries of the old matrix, then place the values of the first row of that resulting matrix into the first column, and so on. Mathematically speaking,

adj(A) = [C_{i,j}]^{T}

How do we get the inverse of a matrix with this method?

Consider the product of A and adj(A). The entry in row i, column j of A·adj(A) is the ith row of A multiplied by the jth column of adj(A), and since adj(A) is the transpose of the cofactor matrix, that column holds the cofactors of the jth row of A:

[A·adj(A)]_{i,j} = a_{i,1}C_{j,1} + a_{i,2}C_{j,2} + … + a_{i,n}C_{j,n}

The only way to get det(A) is when i = j, since then this sum is exactly the cofactor expansion of det(A) along the ith row. Otherwise, it is the expansion of a determinant with two identical rows, so it equates to 0 (see Laplace’s Expansion of Determinants). So the main diagonal of the product of A and adj(A) is det(A) and everything else is 0:

A·adj(A) = det(A)·I

Dividing both sides by det(A), which is non-zero since A is invertible, and comparing with AA^{-1} = I, we get

A^{-1} = adj(A)/det(A)

So, for this method, to get the inverse of a matrix, we take its adjoint and divide it by its determinant.
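Putting the pieces together, here is a sketch of the adjoint method in Python (function names are my own; the cofactor matrix is built straight from the minor and cofactor definitions above):

```python
from fractions import Fraction

def det(a):
    """Determinant by cofactor expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] *
               det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

def inverse_adjoint(a):
    """A^{-1} = adj(A) / det(A), where adj(A) is the transpose of the cofactor matrix."""
    n = len(a)
    d = Fraction(det(a))
    if d == 0:
        raise ValueError("matrix is singular, no inverse exists")
    # cofactor C[i][j] = (-1)**(i + j) * minor M[i][j]
    cof = [[(-1) ** (i + j) * det([row[:j] + row[j + 1:]
                                   for k, row in enumerate(a) if k != i])
            for j in range(n)] for i in range(n)]
    # transpose the cofactor matrix (the adjoint) and divide every entry by det(A)
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]
```

Running `inverse_adjoint([[2, 1], [1, 1]])` returns `[[1, -1], [-1, 2]]`, the same answer Gauss-Jordan elimination gives.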

Example

Find the inverse matrix, using the two methods, and use it to solve the following system of linear equations

Gauss-Jordan vs. Adjoint Matrix Method

For a 3-by-3 matrix, computing the unknowns using the latter method might be easier, but for larger matrices the Adjoint Matrix Method is much more computationally expensive than Gauss-Jordan Elimination, because building the matrix of cofactors means computing a determinant for every entry. Beyond these two, there are better ways to solve for the unknowns, such as LU Decomposition, or simply Gaussian Elimination with back substitution.

Sources

Elementary Linear Algebra by Ron Larson

Advanced Engineering Mathematics by Alan Jeffrey
