Matrix Exponential with Functional JS

Riky Perdana
Technological Singularity
6 min read · Apr 17, 2024


From 3Blue1Brown, the math prophet

There was an interesting question someone posted on Quora: "What's the point of matrices?". Then someone named Alex Farrugia came along in a hilarious manner and gave an answer of top-tier quality. This post is partly a tribute to that glorious answer. I suggest you read it first, then come back here.

Suppose that we have a group of office workers whose relationships are described in this list:

  1. Albert is close to both Bob and David (A: B, D)
  2. Bob is close to Albert, Charlie, and Elaine (B: A, C, E)
  3. Charlie is close to Bob (C: B)
  4. David is close to both Albert and Elaine (D: A, E)
  5. Elaine is close to both Bob and David (E: B, D)

As there are 5 people, it's intuitive to represent them in a 5×5 matrix and let each crossing cell denote the relationship between two of them. Let 1 represent that a relationship exists and 0 that it doesn't. So the representing matrix should look like the one below:

relations = [
  // A, B, C, D, E
  [0, 1, 0, 1, 0], // A
  [1, 0, 1, 0, 1], // B
  [0, 1, 0, 0, 0], // C
  [1, 0, 0, 0, 1], // D
  [0, 1, 0, 1, 0]  // E
]
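The same matrix can be derived programmatically from the statement list. A small sketch, assuming the statements are stored in a plain object (the `closeTo` name and shape are my own, not from the article):

```javascript
// Hypothetical representation of the five statements above
const closeTo = {
  A: ['B', 'D'],
  B: ['A', 'C', 'E'],
  C: ['B'],
  D: ['A', 'E'],
  E: ['B', 'D']
}

const people = Object.keys(closeTo) // ['A', 'B', 'C', 'D', 'E']

// For each pair (p, q), write 1 when q appears in p's statement, else 0
const built = people.map(p =>
  people.map(q => +closeTo[p].includes(q))
)
// built reproduces the relations matrix above
```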

Why does it have to be square? Can't it be 4×5, or maybe 6×5? A matrix that isn't square is called a non-square matrix, and as far as I'm concerned that kind of matrix wouldn't be feasible for the operations that follow. If it were only 4×5, then someone among them wouldn't be represented in the matrix; and if there were 6 relationship statements while there are only 5 people, then someone's statement must be redundant, or even worse, inconsistent. So the squareness isn't there for nothing; keep it in mind.

The prior matrix is conventionally called an adjacency matrix, one that translates a graph of relationships into a matrix, a data structure better suited for computation. According to Estrada & Hatano's research, computing the matrix exponential of that matrix gives us rich information such as centrality and communicability. Centrality means that each diagonal cell represents how central a person (or a thing) is within the group; the higher the number, the more central they theoretically are. Communicability means that each column represents the closeness of the person that column stands for to the people in the other rows. Farrugia provided the matrix exponential of the prior matrix but didn't bother to elaborate on how he acquired it. Nevertheless, this is how it looks:

expRelations = [
  // A      , B      , C      , D      , E
  [2.43683, 2.04721, 0.74302, 1.83401, 1.43683], // A
  [2.04721, 3.13063, 1.62081, 1.48604, 2.04721], // B
  [0.74302, 1.62081, 1.64460, 0.42640, 0.74302], // C
  [1.83401, 1.48604, 0.42640, 2.38761, 1.83401], // D
  [1.43683, 2.04721, 0.74302, 1.83401, 2.43683]  // E
]

If you take a good look at the diagonal cells running from upper left to bottom right, these are supposed to represent each person's centrality within the group. If we sort them in descending order, we get Bob at the top of the centrality list, followed by Albert, Elaine, David, and Charlie. I can intuitively agree with this measurement, for according to the statements themselves, Bob was mentioned most frequently and Charlie least. According to the same theory, each column measures someone's closeness to the others. For example, Charlie is supposed to be closer to Bob than to anyone else in the group, as Bob's row holds the highest value in Charlie's column. I can also intuitively agree with this, as the statements show that Charlie is only close to Bob.
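The ranking described above can be read straight off the diagonal. A small sketch using the expRelations values from the text (the `names` array is my own labeling):

```javascript
const names = ['Albert', 'Bob', 'Charlie', 'David', 'Elaine']
const expRelations = [
  [2.43683, 2.04721, 0.74302, 1.83401, 1.43683],
  [2.04721, 3.13063, 1.62081, 1.48604, 2.04721],
  [0.74302, 1.62081, 1.64460, 0.42640, 0.74302],
  [1.83401, 1.48604, 0.42640, 2.38761, 1.83401],
  [1.43683, 2.04721, 0.74302, 1.83401, 2.43683]
]

// Pair each name with its diagonal entry, then sort descending by centrality
const ranking = names
  .map((name, i) => ({ name, centrality: expRelations[i][i] }))
  .sort((a, b) => b.centrality - a.centrality)
  .map(p => p.name)
// ranking: ['Bob', 'Albert', 'Elaine', 'David', 'Charlie']
```

Note that Albert and Elaine tie at 2.43683; JS's stable sort keeps Albert first.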

Assuming that this whole ordeal is true, I can only imagine how powerful this matrix exponential technique could be. Not only in the study of epidemiology, as depicted in the original answer, but in many other fields that deal with relationships, especially those concerned with centrality and communicability. I was awed and immediately scoured the internet and the math books on my shelf. Most said that the matrix exponential could be obtained through well-established math software such as MATLAB, GNU Octave, and Wolfram Alpha. I don't mind using any of those, by the way, but I wanted to approach this beast myself, the way I face Dark Souls bosses.

The Original Formula

Small but nasty beast to defeat
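The formula image doesn't survive in text form; presumably it showed the standard definitions, reconstructed here from how the next paragraph describes them:

```latex
e = \lim_{n\to\infty}\left(1 + \frac{1}{n}\right)^{n},
\qquad
e^{A} = \lim_{n\to\infty}\left(I + \frac{A}{n}\right)^{n},
\qquad
e^{A} = \sum_{k=0}^{\infty}\frac{A^{k}}{k!}
      = I + A + \frac{A^{2}}{2!} + \frac{A^{3}}{3!} + \cdots
```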

Calculating the square of a number is as easy as multiplying it by itself. Calculating 2⁴ is as easy as 2 × 2 × 2 × 2 = 16. But we all know that e is an irrational constant, so how should we imagine raising it to the power of a matrix? The second equation above suggests a method of exhaustion that yields an increasingly accurate result as the number of iterations grows. The third equation suggests that a sequence of matrix powers, each divided by the factorial of its index, should do the trick. Now let's translate that method into representative JS code:

matrixAdd([
  makeIdentity(matrix.length),
  matrix / facto(1), // schematic: "/" here means dividing every cell
  matrixMuls([matrix, matrix]) / facto(2),
  matrixMuls([matrix, matrix, matrix]) / facto(3),
  // ... and so on to infinity
])
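Before lifting this to matrices, the same truncated series can be sanity-checked in the scalar case. A quick warm-up of my own, not from the original answer:

```javascript
// Scalar version of the same truncation: e^x ≈ Σ x^k / k! for k = 0..depth
const facto = n => n <= 1 ? 1 : n * facto(n - 1)

const expo = (depth, x) => [...Array(depth + 1).keys()]
  .reduce((acc, k) => acc + x ** k / facto(k), 0)

expo(10, 1) // ≈ 2.7182818, very close to Math.exp(1)
```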

Should we manually extend the inputs to achieve the best result? Why not just create a function that does exactly that?

makeIdentity = size => makeMatrix(
  size, size, (i, j) => +(i === j)
)

matrixPow = (pow, matrix) =>
  matrixMuls(makeArray(pow, x => matrix))

facto = n => n <= 1 // n <= 1 so facto(0) also terminates
  ? 1 : n * facto(n - 1)

matrixExpo = (depth, matrix) => matrixAdd([
  makeIdentity(matrix.length),
  ...makeArray(depth, i => matrixMap(
    matrixPow(i + 1, matrix),
    ({j}) => j / facto(i + 1)
  ))
])

makeIdentity is a function which, given a number, yields a new square matrix of the required size with 1s along the diagonal. matrixPow multiplies a given matrix by itself the specified number of times. facto is the good old factorial in recursion. matrixExpo is a function which, given a matrix, yields its exponential, with the accuracy determined by the specified depth. Now let's see how it deals with the original problem:

Getting more accurate with more iterations

matrixExpo(3, relations)  // more like the bull's ass than the bull's eye
matrixExpo(5, relations)  // pretty close already
matrixExpo(10, relations) // already fits in merely 10 iterations

None of those three runs took more than a second, which means the algorithm isn't that computationally heavy in the first place, compared to, say, matrix inversion via Laplace expansion with its O(n!) time complexity. Not every matrix exponential case will be satisfied with 10 iterations; the required depth probably grows with the size of the given matrix. At least it proves that the code works.
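To check the claim end to end, here is a self-contained sketch condensing just the helpers matrixExpo needs (a `sum` helper, used by the multiplication, is my own addition) and comparing two entries of the depth-10 result against the values quoted earlier:

```javascript
const makeArray = (n, cb = i => i) => [...Array(n).keys()].map(cb)
const makeMatrix = (len, wid, fill) =>
  makeArray(len).map(i => makeArray(wid, j => fill ? fill(i, j) : 0))
const sum = arr => arr.reduce((acc, inc) => acc + inc, 0)

const matrixAdd = matrices => matrices.reduce((acc, inc) =>
  acc.map((row, ix) => row.map((v, jx) => v + inc[ix][jx])))
const matrixMuls = matrices => matrices.slice(1)
  .reduce((acc, inc) => makeMatrix(acc.length, inc[0].length,
    (ix, jx) => sum(acc[ix].map((k, kx) => k * inc[kx][jx]))
  ), matrices[0])

const makeIdentity = size => makeMatrix(size, size, (i, j) => +(i === j))
const matrixPow = (pow, matrix) => matrixMuls(makeArray(pow, () => matrix))
const facto = n => n <= 1 ? 1 : n * facto(n - 1)
const matrixExpo = (depth, matrix) => matrixAdd([
  makeIdentity(matrix.length),
  ...makeArray(depth, i => matrixPow(i + 1, matrix)
    .map(row => row.map(v => v / facto(i + 1))))
])

const relations = [
  [0, 1, 0, 1, 0],
  [1, 0, 1, 0, 1],
  [0, 1, 0, 0, 0],
  [1, 0, 0, 0, 1],
  [0, 1, 0, 1, 0]
]
const e10 = matrixExpo(10, relations)
// e10[1][1] (Bob–Bob) ≈ 3.13063, e10[0][0] (Albert–Albert) ≈ 2.43683
```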

I found an online matrix calculator that also provides the matrix exponential, at EMathHelp. According to the results it shows, it appears to use a different algorithm, yet our results are closely similar. There you go, your new weapon called matrixExpo. Thanks for the read.

Codes Recap

/* All the required functions from the prior article */

deepClone = obj => JSON.parse(JSON.stringify(obj))

// Small helper used by matrixMuls; it was missing from the original listing
sum = arr => arr.reduce((acc, inc) => acc + inc, 0)

makeArray = (n, cb = i => i) =>
  [...Array(n).keys()].map(cb)

makeMatrix = (len, wid, fill) =>
  makeArray(len).map(i => makeArray(
    wid, j => fill ? fill(i, j) : 0
  ))

matrixMap = (matrix, cb) =>
  deepClone(matrix).map((i, ix) => i.map(
    (j, jx) => cb({i, ix, j, jx, matrix})
  ))

matrixSize = matrix =>
  [matrix.length, matrix[0].length]

matrixAdd = matrices =>
  matrices.reduce((acc, inc) => matrixMap(
    acc, ({j, ix, jx}) => j + inc[ix][jx]
  ), makeMatrix(...matrixSize(matrices[0])))

matrixMuls = matrices =>
  deepClone(matrices).slice(1)
    .reduce((acc, inc) => makeMatrix(
      acc.length, inc[0].length,
      (ix, jx) => sum(acc[ix].map(
        (k, kx) => k * inc[kx][jx]
      ))
    ), deepClone(matrices[0]))

/* New codes in this article */

makeIdentity = size => makeMatrix(
  size, size, (i, j) => +(i === j)
)

matrixPow = (pow, matrix) =>
  matrixMuls(makeArray(pow, x => matrix))

facto = n => n <= 1
  ? 1 : n * facto(n - 1)

matrixExpo = (depth, matrix) => matrixAdd([
  makeIdentity(matrix.length),
  ...makeArray(depth, i => matrixMap(
    matrixPow(i + 1, matrix),
    ({j}) => j / facto(i + 1)
  ))
])

All the code is also available in my GitHub Gist.
