# [2/3] A Complete Guide to Gaussian Elimination

## Part 2 of 3: How we express elimination and row exchanges as matrix multiplications.

We covered the base concepts of Gaussian elimination in the first entry of this miniseries. That article covered how we use Gaussian elimination to turn our coefficient matrix A into an upper triangular U so that we can employ back-substitution to solve the system, and it also explored the singular cases that can arise.

This article series can be thought of as a companion to Prof. Gilbert Strang’s course 18.06 on Linear Algebra — more specifically, the first four lectures.

## What are the Operations of Elimination?

There are two main operations in elimination: the *elimination* operation (the combined multiplication/subtraction step) and *row exchanges*.

What we are trying to do is express these operations as matrix multiplications. To be specific, what matrix E do we have to multiply A by to do elimination? What about a row exchange? This will help us get closer to a consistent formula to carry out Gaussian Elimination.

## Elimination as a Matrix Multiplication

Let’s look at how we can express elimination as a matrix multiplication using a simple (2 x 2) system.

What matrix E takes our A to the next step of elimination? Here, what matrix multiplies the first row by two and subtracts it from the second row?

I’ll supply the answer and then walk through it. We call this matrix, whose purpose is to turn the 4 into a 0 in *A* (and thus turn it into the upper triangular U), E_21, since it makes the element in the (row 2, column 1) position into a 0. The E is short for *elementary matrix* — by definition, an elementary matrix is a matrix that differs from the identity in only one entry. As you will see, that is exactly what these are.

Let’s look at how this multiplies with a matrix A to do a step of elimination.

Since the top row stays the same after elimination, the first row of E is the same as the identity matrix. In the second row, we put -2 in the (2, 1) position. When we multiply this out, it multiplies each item in row 1 by 2 and subtracts it from the corresponding item in the second row, which is the same as subtracting twice the first row from the second.

This matrix, when multiplied by our right-hand side b, also has the same effect of subtracting twice the top row from the bottom row. Thus, we multiply both A and b by E_21 to get U and c. Algebraically, *Ax = b → EAx = Eb → Ux = c*.
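Since the original matrices aren’t reproduced here, the following sketch uses a hypothetical 2 x 2 system chosen to match the step described above: subtracting 2 times row 1 from row 2 zeroes out the (2, 1) entry. The specific values of A and b are assumptions for illustration only.

```python
import numpy as np

# Hypothetical 2x2 system matching the step above: subtracting
# 2 * (row 1) from row 2 zeroes out the (2, 1) entry.
A = np.array([[2.0, 1.0],
              [4.0, 3.0]])
b = np.array([3.0, 5.0])

# E_21 is the identity with -2 placed in the (2, 1) position.
E21 = np.array([[1.0, 0.0],
                [-2.0, 1.0]])

U = E21 @ A   # upper triangular: [[2, 1], [0, 1]]
c = E21 @ b   # the right-hand side transforms the same way: [3, -1]

print(U)
print(c)
```

Note that the same E_21 acts on both A and b, which is exactly the algebraic statement *EAx = Eb → Ux = c*.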

Let’s look at a more detailed (3 x 3) example that requires three different elimination matrices.

If we want to turn this into the upper triangular, we have to do three separate steps of elimination. Let’s do these in the way familiar to us first, and then create the corresponding elimination matrices.

I won’t worry about the right hand sides here — if you record all your eliminations on the left hand side, you can carry out those same eliminations on the right hand side after the fact — in fact, this is how most programming languages like MATLAB carry out Gaussian Elimination.

The **first step** is multiplying the first row by 2 and subtracting it from the second row. For the **second step**, we can also make the (3, 1) position 0 by multiplying the first row by 2 and subtracting it from the third row. Doing both of these steps gets us to:

For the **third and final step**, the multiplier is -1: we multiply the second row by -1, so that when it is subtracted from the last row, the entry in the (3, 2) position becomes 0.

We have arrived at our upper triangular matrix U. Now, how do we do these three steps of elimination using matrices? Specifically, we use three matrices: E_21 (for making the (2, 1) place 0), E_31 (for making the (3, 1) place 0), and E_32 (for making the (3, 2) place 0).

We would multiply these three elementary matrices in the same order that we did elimination. First, the two in the first column, then the one in the second column.

Now, what are these elementary matrices? I’ll show them all at the same time and briefly explain each of them; the idea is intuitive and you’ll catch on quickly.

These are what each of the elementary matrices looks like. It is hard to explain what they do beyond what I have done already, since the main way to understand how they work, or rather, to *believe* that they work, is to multiply them yourself. So I’d heavily recommend taking the two minutes to multiply **A** by each of these three elementary matrices (in the order expressed above) and watch A become the upper triangular U.

A helpful general rule, if you have not figured it out already: *to multiply some row i by some multiplier k, and to subtract this from some row j, the associated elementary matrix E_ji should contain -k in the jth row and ith column.* Read that over a few times (or better, follow along in an example yourself) to digest it a little better.
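The rule above translates directly into code. The helper name `elementary_matrix` is my own invention, but the construction (identity with -k at position (j, i)) is exactly the rule as stated:

```python
import numpy as np

def elementary_matrix(n, j, i, k):
    """Return the n x n elementary matrix E_ji that multiplies row i
    by k and subtracts it from row j (rows numbered from 1)."""
    E = np.eye(n)
    E[j - 1, i - 1] = -k   # -k in the jth row, ith column
    return E

# Subtract 2 * (row 1) from row 2 of a 3 x 3 matrix:
E21 = elementary_matrix(3, 2, 1, 2)
print(E21)
# [[ 1.  0.  0.]
#  [-2.  1.  0.]
#  [ 0.  0.  1.]]
```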

Since matrix multiplication is associative ((AB)C = A(BC)), we can multiply these three elementary matrices into a single matrix E that takes our matrix A to upper triangular U in a single operation.
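Since the article’s 3 x 3 matrices aren’t reproduced here, this sketch uses a hypothetical A chosen so the multipliers are 2, 2, and -1, matching the three steps described above. It checks both the step-by-step product and the single combined E:

```python
import numpy as np

# Hypothetical A (an assumption for illustration) whose elimination
# multipliers are 2, 2, and -1, matching the steps described above.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 4.0]])

E21 = np.array([[1.0, 0.0, 0.0], [-2.0, 1.0, 0.0], [0.0, 0.0, 1.0]])   # row2 -= 2*row1
E31 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [-2.0, 0.0, 1.0]])   # row3 -= 2*row1
E32 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 1.0]])    # row3 -= -1*row2

# Step by step, in elimination order...
U_steps = E32 @ (E31 @ (E21 @ A))

# ...or, by associativity, as one combined matrix E:
E = E32 @ E31 @ E21
U = E @ A

print(U)
assert np.allclose(U, U_steps)
assert np.allclose(np.tril(U, -1), 0)   # U is upper triangular
```

Printing E here also illustrates the point made below: its entries (notably the -4 in the (3, 1) spot) are not obvious before multiplying the three matrices out.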

Here, our E (and the calculation to get E) would look something like this:

The problem with calculating this E is that it’s hard to know what E will look like before multiplying it out. This makes computing it hard, and our solution to this will be presented in the final article in this series, where we propose the most computationally efficient (and often used) way to solve a system of equations.

Before that, we need to see the matrix equivalent of one other important elimination step: row exchanges.

## Row Exchanges as a Matrix Multiplication

Just like how elementary matrices carry out elimination, **permutation matrices** carry out row exchanges. Permutation matrices all somewhat resemble distorted identity matrices — this makes sense when you think about it, as a row exchange doesn’t change any of the actual content in the matrix; it just swaps the rows’ locations.

Here’s an example of a matrix that swaps the 1st and 2nd rows of a 3 x 3 matrix.
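Since the figure isn’t reproduced here, a quick NumPy sketch shows such a swap matrix in action (the values in A are arbitrary placeholders):

```python
import numpy as np

# Permutation matrix that swaps rows 1 and 2: it is just the
# identity matrix with its first two rows exchanged.
P12 = np.array([[0.0, 1.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0]])

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

print(P12 @ A)   # rows 1 and 2 are exchanged; row 3 is untouched
```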

A permutation matrix must have a single 1 in every row and every column, and a 0 everywhere else. You can multiply these out to see that it works; again, it’s kind of the only way to ‘get’ these things.

The number of permutation matrices for an n x n system is n factorial (n!), counting the identity matrix, which is a permutation matrix that does nothing. Therefore, the number of permutation matrices for a 3 x 3 system is 6. Here are all of them; a quick examination should make what each one does apparent.
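Since the figure showing all six isn’t reproduced here, one way to generate them is to permute the rows of the identity matrix, which is a sketch of the counting argument above:

```python
import numpy as np
from itertools import permutations

# Every 3 x 3 permutation matrix is the identity with its rows
# reordered, so there are 3! = 6 of them.
I = np.eye(3)
perm_matrices = [I[list(p)] for p in permutations(range(3))]

print(len(perm_matrices))   # 6
for P in perm_matrices:
    print(P)
```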

Permutation matrices have a few interesting properties. First of all, they’re a ‘family’ of matrices — any permutation matrix times another permutation matrix is again a permutation matrix within that same ‘set’. Additionally, the inverses of permutation matrices are their transposes.
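Both properties are easy to verify by brute force over all six 3 x 3 permutation matrices, which is a sketch of the claims rather than a proof:

```python
import numpy as np
from itertools import permutations

I = np.eye(3)
perms = [I[list(p)] for p in permutations(range(3))]

def is_perm(M):
    """A permutation matrix has exactly one 1 in each row and column."""
    return (np.isin(M, (0.0, 1.0)).all()
            and np.allclose(M.sum(axis=0), 1)
            and np.allclose(M.sum(axis=1), 1))

for P in perms:
    for Q in perms:
        assert is_perm(P @ Q)          # closed under multiplication
    assert np.allclose(P.T @ P, I)     # the inverse is the transpose
print("both properties hold for all 3 x 3 permutation matrices")
```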

That’s all for now. In the next article, we’ll manipulate these matrices to find the most efficient way to solve these systems using inverses and other tools.

*Adam Dhalla is a high school student out of Vancouver, British Columbia, currently in the STEM and business fellowship **TKS**. He is fascinated with the outdoor world, and is currently learning about emerging technologies for an environmental purpose. To keep up, follow his **Instagram** and his **LinkedIn**. For more similar content, **subscribe to his newsletter here**.*