DotJS 2019 — Code complexity by Vladimir Agafonkin

Kaja Polkowska
DAZN Engineering
Jan 20, 2020

In the spirit of performance, I came back from dotJS 2019 in Paris revisiting what I know about improving code based on its complexity. dotJS is one of the largest JavaScript conferences, covering topics from frameworks to performance (see more: https://www.dotjs.io/).

One of the optimisation-focused talks at the conference elaborated on something we all know but sometimes forget: simpler code is not only more accessible but quite often more efficient.

Vladimir Agafonkin, the author of the Leaflet library used in many modern maps, argues that a considered, well-designed JS implementation can be almost as fast as lower-level languages like C++ or Rust.

DotJS, Paris, 2019

At DAZN we currently have the opportunity to redesign the code the way we would like to work with it in the future. My team is in the process of creating a new version of DAZN with a significant focus on TV streaming devices. However, working on TVs comes with many challenges. One of the most obvious is their browsers and memory, or, to be frank, the almost complete lack of both in so many cases.

Thus anything beyond linear complexity becomes a potential performance threat. Complexity analysis is one of the tools that helps keep this under control: it characterises how your code's running time scales with input, so you can recognise over-coded hotspots once you have identified them.

As Vladimir said, slow code is not bad code; it is code that does more work than needed. He also suggested a couple of simple-to-implement solutions.
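To illustrate "doing more work than needed", here is a small sketch of my own (not from the talk): the same computation repeated inside a loop, and the same result with the invariant work hoisted out.

```javascript
// Doing more work than needed: the same parsing happens on every iteration.
function totalSlow(prices) {
  let total = 0;
  for (let i = 0; i < prices.length; i++) {
    const config = JSON.parse('{"multiplier": 2}'); // identical result each time
    total += prices[i] * config.multiplier;
  }
  return total;
}

// Same result, with the invariant work hoisted out of the loop.
function totalFast(prices) {
  const config = JSON.parse('{"multiplier": 2}');
  let total = 0;
  for (const price of prices) {
    total += price * config.multiplier;
  }
  return total;
}
```

Both functions return the same answer; the second simply stops repeating work whose result never changes.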

The first step is to investigate all of your dependencies to check what is going on in your code. This is key, as you can have perfectly designed components that are dragged down by a dependency doing the exact opposite.

The next step is understanding how performance scales with input size:

Constant complexity — Preferred solution and rarely a bottleneck. With constant time complexity, no matter how big your input is, it will always take the same amount of time to calculate.

function constantComplexity(n) {
  return n + 1;
}

Linear complexity — You can find it anywhere you have a loop. In this scenario, the time to complete grows in direct proportion to the input size. This type of complexity can easily grow into a higher-order one when loops end up nested.

function linearComplexity(n) {
  for (let i = 0; i < n; i++) {
    console.log(i);
  }
}

Quadratic complexity — Really slow. You will find it everywhere you see two nested loops, and it usually means that your code can be improved.

This kind of complexity can be entirely accidental, as the loops involved may live in different files or dependencies (read more: https://accidentallyquadratic.tumblr.com/).

for (let i = 0; i < n; i += c) {
  for (let j = 0; j < n; j += c) {
    // + some constant complexity expressions
  }
}

Cubic complexity — Is to performance what a black hole is to the Universe:

if (i < 0) {
  if (storage.length === 0) storage.push([x]);
  return;
}
for (let n = storage.length, a = 0; a < n; ++a) {
  let sa = storage[a];
  if (sa[0] === i) {
    for (let b = a + 1; b < n; ++b) {
      let sb = storage[b];
      if (sb[sb.length - 1] === x) {
        storage[a] = sa = sb.concat(sa);
        return;
      }
    }
    sa.unshift(x);
    return;
  }
}

We often don't realise that many common built-in methods can significantly decrease the speed of our code due to their complexity:

Vladimir Agafonkin, 2019, Slides from dotJS
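The slide itself isn't reproduced here, but a classic instance of this pitfall is calling Array.prototype.indexOf (an O(n) scan) inside a loop, which quietly turns a single pass into O(n²). A sketch of my own, not from the talk:

```javascript
// indexOf scans the result array on every iteration, so this is O(n²).
function dedupeQuadratic(items) {
  const result = [];
  for (const item of items) {
    if (result.indexOf(item) === -1) result.push(item);
  }
  return result;
}

// A Set tracks seen values with O(1) lookups, keeping the whole pass O(n).
function dedupeLinear(items) {
  const seen = new Set();
  const result = [];
  for (const item of items) {
    if (!seen.has(item)) {
      seen.add(item);
      result.push(item);
    }
  }
  return result;
}
```

Both produce the same output; the difference only shows up as the input grows, which is exactly why this kind of slowdown is easy to miss in testing.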

Here comes another wonderful realisation: simplified code becomes accessible in places where internet connections and devices might not always be what we are used to.

Working with low-capacity TVs gives us an incredible opportunity to design less over-coded applications that will be much more accessible all over the world. Carefully designed, well-refactored code is going to radically improve the accessibility of the applications we are working on.

Link to Vladimir's slides (Vladimir Agafonkin, dotJS 2019): https://speakerdeck.com/mourner/fast-by-default-algorithmic-optimization-in-practice-dotjs-2019?slide=27
