Experiments in optimising three.js

For today’s Cardboctober hack, I decided to do some investigation instead.

Square roots

The first thing I wanted to test is the .length() method. The computation for the length of a vector uses Pythagoras’s theorem, which you may remember requires a square root.

Calculating a square root is much more complex than basic arithmetic like multiplication or addition because the result has to be reached iteratively: the basic principle is to make a guess, then repeatedly refine it by squaring the guess and comparing the square to the original number. Many developers try to avoid square roots because of this cost.
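To make the iterative idea concrete, here is a minimal sketch using Newton's method, one common guess-and-refine approach (this is for illustration only — it is not necessarily what JavaScript engines do under the hood):

```javascript
// Hypothetical helper: approximate a square root by repeatedly
// refining a guess until it stops changing meaningfully.
function sqrtNewton(n) {
  if (n < 0) return NaN;
  if (n === 0) return 0;
  var guess = n;
  for (var i = 0; i < 50; i++) {
    // Newton's step: average the guess with n / guess.
    var next = 0.5 * (guess + n / guess);
    if (Math.abs(next - guess) < 1e-12) break;
    guess = next;
  }
  return guess;
}
```

Each pass through the loop is extra work, which is why square roots have a reputation for being expensive relative to a single multiply.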

One example of this optimisation is when we want to compare two lengths. Because the square root function is monotonic, you can avoid computing it when you only need the comparison. Take the following example, where we want to find out if the length of a vector is less than a certain radius:

var radius = 50;
if (someVector.length() < radius) { ... }

You can instead compare the length squared to the radius squared. three.js has a nice utility method lengthSq() for exactly this:

var radiusSquared = Math.pow(50, 2);
if (someVector.lengthSq() < radiusSquared) { ... }
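The equivalence can be sketched in plain JavaScript, using a hypothetical lengthSq helper standing in for the three.js method:

```javascript
// Stand-in for THREE.Vector3's lengthSq(): the squared length,
// i.e. Pythagoras without the final square root.
function lengthSq(v) { return v.x * v.x + v.y * v.y + v.z * v.z; }

var v = { x: 30, y: 40, z: 0 }; // length is exactly 50
var radius = 50;

// Both comparisons give the same answer, because sqrt is monotonic:
console.log(Math.sqrt(lengthSq(v)) < radius);  // false
console.log(lengthSq(v) < radius * radius);    // false, no sqrt needed
```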

Well, that was the theory… I ran some tests on jsperf and found that square roots in JavaScript are blindingly fast. My machine can run about 1 billion square roots per second.

Looking into this further, I found that Math.sqrt is as fast as Math.pow and basic multiplication: Squaring vs. square rooting. So there’s really no need to try to optimise away square roots in JavaScript, which is good news for three.js development.
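For a rough sense of how such a comparison can be timed, here is a naive benchmark sketch (an assumed setup, not the jsperf methodology — jsperf guards against engine pitfalls far more carefully than this):

```javascript
// Naive micro-benchmark: time N calls of fn and return the elapsed ms.
function bench(label, fn) {
  var N = 1e7, sum = 0;
  var start = Date.now();
  for (var i = 0; i < N; i++) sum += fn(i + 1);
  var ms = Date.now() - start;
  // Log the checksum so the engine can't dead-code-eliminate the loop.
  console.log(label + ': ' + ms + 'ms (checksum: ' + sum + ')');
  return ms;
}

bench('Math.sqrt', function (x) { return Math.sqrt(x); });
bench('x * x', function (x) { return x * x; });
```

Absolute numbers from a loop like this vary wildly between engines and runs, which is exactly why untested performance intuitions are so unreliable.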

Object instantiation

The next thing I wanted to look at is object instantiation vs. reuse. Many modern JavaScript APIs have adopted a functional style, where each method creates a new instance of an object.

For example, Sylvester.js is a vector maths library in which each method creates a new instance. In the following example, c is a new Vector, while a and b remain unchanged:

var a = Vector.create([1, 2, 3]);
var b = Vector.create([4, 5, 6]);
var c = a.add(b);

This style of API is generally very helpful: it prevents confusing side effects and promotes a clear coding style. However, in intensive applications it can cause problems in JavaScript. Constantly creating objects takes processing time and memory, and ultimately forces the garbage collector to clear out more data, a process that can be really disruptive for real-time applications.

For this reason, three.js has an API which mutates objects rather than creating new instances. It actually has a lot of helpful utility methods for working with existing instances; for example, a.copy(b) copies the data of b into a. If you want a new instance, you have to explicitly tell three.js to .clone() the object.
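The reuse pattern typically looks like this. Here a tiny stand-in class mirrors the mutating copy/add/clone shape of THREE.Vector3, so the sketch is self-contained (the update function and its arguments are hypothetical):

```javascript
// Minimal stand-in for THREE.Vector3's mutating API.
function Vec3(x, y, z) { this.x = x; this.y = y; this.z = z; }
Vec3.prototype.copy = function (v) { this.x = v.x; this.y = v.y; this.z = v.z; return this; };
Vec3.prototype.add = function (v) { this.x += v.x; this.y += v.y; this.z += v.z; return this; };
Vec3.prototype.clone = function () { return new Vec3(this.x, this.y, this.z); };

// Allocate the temporary once, outside the hot path…
var temp = new Vec3(0, 0, 0);

function update(position, velocity) {
  // …and reuse it every frame instead of cloning a fresh vector,
  // so the render loop allocates nothing for the garbage collector.
  temp.copy(velocity);
  position.add(temp);
}
```

Keeping scratch objects like temp at module scope is the usual three.js idiom for avoiding per-frame allocations.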

I created a test with some simple three.js operations: one test case created several new data objects, while the other only reused objects. The reuse case came out 20–40% faster depending on browser. If we compare .clone() and .copy() directly, clone is about 85% slower.

My machine can still run 145 million .clone() operations per second, though, so you can get away with a few thousand in your render loop!

Lessons learned

I expected this to be a simple exercise validating my assumptions; what I got was a reminder that developers are weirdly superstitious, and that untested optimisation is micro-optimisation.