Isomorphic Bias: Browser & Node.js Performance

Some time ago I saw a claim that a particular library was the fastest at what it did. Having some spare weekend time, I decided to take up the challenge and see if I could write something faster. Two hours later I had something faster and smaller … or so I thought. I sent it to the claimant and promptly received a response with test results showing it was not faster. The results were puzzling because the claimant’s results for his own library were similar to my results for his library, whereas he showed mine as much slower.

After a little digging I discovered the claimant had not provided the correct initialization flag to my code. In response, he effectively said that his position on testing was that things should be sufficiently simple that a naive programmer would get the best performance. I took the lesson, changed the design of my code and re-submitted it. Once again, his tests showed my code was slower.

Two days later I woke up in the middle of the night, “Aha! I may be isomorphically focused, but I am browser biased!” I made my way swiftly to the keyboard and 20 minutes later had confirmed my hypothesis. I was running my performance tests in a browser and the claimant was running his in Node.js.

Indeed, I am isomorphically focused and start almost all development of core functions in a browser because it is more constraining than Node.js. But over the next week I modified my testing framework for several projects so that I had index.html and index.js files in both my functional and performance test directories. In both cases, loading index.html in the browser loads the index.js file to provide functional and performance perspectives. Running mocha index.js in the functional test directory or node index.js in the performance test directory provides proof of isomorphic functionality and comparables for performance in Node.js.
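A minimal sketch of what such a dual-environment performance file might look like (the function names here are illustrative, not the author's actual framework): the same file can be loaded from index.html via a script tag or run directly with node index.js, since it relies only on APIs common to both environments.

```javascript
// perf/index.js — a minimal isomorphic timing sketch.
// Uses only Date.now and console.log, which exist in both browsers and Node.js.
function timeIt(label, fn, iterations) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  const ms = Date.now() - start;
  console.log(label + ": " + ms + "ms for " + iterations + " iterations");
  return ms;
}

// Works unchanged whether loaded by index.html or run via `node index.js`.
timeIt("array sum", () => {
  let sum = 0;
  for (let i = 0; i < 1000; i++) sum += i;
}, 10000);
```

Running the same file in both environments is what surfaces the browser/Node.js differences discussed below.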

What I discovered was quite surprising. I expected Node.js and Chrome to be similar in performance; after all, they are both driven by the V8 engine. Truth is, I actually expected Node.js to be faster since it lacks UI overhead and I think of it as “closer to the hardware”. Unfortunately, it seems to be far slower! Chrome and the other web browsers tend to be faster. This led me to thinking about browser differences. All of the performance tests I found focused on higher level test suites intended to simulate something akin to real-world behavior. Nobody seemed to be focusing on lower level functions. This is fine for application builders, where evidence is that micro-optimization has little payoff. But it seems to me that library builders have a responsibility to ensure that application builders are relying on the most performant code possible.

Over time I had taken up the habit of replacing almost all code blocks using for(<counter>;<limit>;<increment>), for ... in, or for ... of with their equivalents using forEach, map or reduce, depending on what I was trying to accomplish. I had also willy-nilly been converting var to const and let. This generally produced safer and more readable code. I began to wonder if this was wise from a performance perspective. So, I started to explore the performance of lower level routines like for(<counter>;<limit>;<increment>), for ... in, for ... of, while, do, map, forEach etc.
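To make the rewriting habit concrete, here is the kind of substitution described above, shown three ways for a simple summation (the variable names are this sketch's own):

```javascript
const values = [1, 2, 3, 4];

// The classic counter loop being replaced.
let total1 = 0;
for (let i = 0; i < values.length; i++) {
  total1 += values[i];
}

// The reduce equivalent: accumulator starts at 0, adds each element.
const total2 = values.reduce((sum, v) => sum + v, 0);

// The forEach equivalent: mutate an outer accumulator per element.
let total3 = 0;
values.forEach(v => { total3 += v; });
```

All three produce the same result; the question raised below is what each form costs.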

I found that results vary dramatically from browser to browser and no browser is consistently faster, although Chrome and Firefox are usually the best bets. I also found that some of the built-in functions like map or forEach are not as optimized as one would think. In some cases re-writing built-ins results in faster code, even if function calls (which always incur a cost) continue to be used.

Here are some findings:

  1. forEach, map, and reduce generally run faster if you define the callback before you call them, i.e. outside their lexical scope.
  2. forEach and map add overhead unless you need the index or iterable provided as the second and third arguments to the callback. Consider writing your own fastForEach or fastMap if you want the protection of a closure but don’t need multiple arguments, e.g.

const fastForEach = (array, f) => {
  let len = array.length;
  for (var i = 0; i < len; i++) { f(array[i]); }
};

  3. let is slightly slower than var … at least in some situations. It is hard to verify all situations.
  4. Write your initial code using forEach and then back-down to faster code once you are through initial testing … a browser-specific optimizing down-compiler would be nice ;-).
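A micro-benchmark along the lines of the findings above might look like the following sketch (bench and the timing approach are illustrative, not a rigorous harness): it defines the callback once, outside the lexical scope of the calls, per finding 1, and compares the built-in forEach against a plain loop and the single-argument fastForEach from finding 2.

```javascript
// Single-argument forEach replacement, as in finding 2 above.
const fastForEach = (array, f) => {
  const len = array.length;
  for (let i = 0; i < len; i++) f(array[i]);
};

// Callback defined once, outside the calling scope (finding 1).
let sink = 0;
const callback = v => { sink += v; };

const data = new Array(1000).fill(1);

// Crude wall-clock timing; real benchmarks need warm-up and many samples.
function bench(label, fn, runs) {
  const start = Date.now();
  for (let r = 0; r < runs; r++) fn();
  console.log(label, Date.now() - start, "ms");
}

bench("for loop",    () => { for (let i = 0; i < data.length; i++) callback(data[i]); }, 10000);
bench("forEach",     () => data.forEach(callback), 10000);
bench("fastForEach", () => fastForEach(data, callback), 10000);
```

Relative results will vary by engine and version, which is exactly the point of running the same file in each browser and in Node.js.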

To see a graphical display of some of the performance findings for intersection, union, memoization, looping and lookup across browsers, visit JSBenchmarks.

The above optimizations and more have been applied to the most recent release of intersector. These were made to better support the Hypercalc multi-user, multi-dimensional spreadsheet engine. The intersector library now seems to be the fastest JavaScript array intersection available (pretty important for big data analytics).
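For readers curious what a fast intersection can look like, here is one common pattern (purely illustrative; this is not intersector's actual implementation): build a Set from the smaller array for O(1) membership tests, then filter the larger one.

```javascript
// Illustrative Set-based intersection (not intersector's algorithm).
// Builds the lookup Set from the smaller array, filters the larger.
function intersect(a, b) {
  const [small, large] = a.length <= b.length ? [a, b] : [b, a];
  const lookup = new Set(small);
  return large.filter(v => lookup.has(v));
}

console.log(intersect([1, 2, 3], [2, 3, 4])); // elements common to both
```

Note this preserves the larger array's order and will repeat duplicates found in it; a production library has to decide how to handle both.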

Finally, along the way I discovered that the Google Closure Compiler does not produce the smallest code when it comes to un-transpiled ES6. I suggest trying JSCompress.