Chasing Numbers

How we lose our way when our only goal is a bigger benchmark

I’m regularly surprised by what people value – in a stack, a technology, or a solution. And I’m most consistently surprised by how often and to what exclusion people value performance.

It comes up in a lot of cases. I’ve seen people chasing GraphQL architectures for sites they haven’t built yet, adding caching to optimise sites with zero users, or hunting for the optimal database structure for maximum performance and ease of future migration before they even start developing. It seems so futile.

People often spend too much time up front trying to solve problems they don’t even have yet. Don’t. – Basecamp

For nearly all of us, nearly all the time, the actual problem is building things and getting them finished. Not solving performance issues. So it’s surprising to see so many people so focused on performance.

For the most part, backend languages and frameworks have internalised this lesson. Rails, Django, and Laravel enthusiastically embrace a “good enough” mentality, willingly trading outright performance for developer experience and rapid application development tooling. ORMs are a perfect example of this compromise: they produce not-particularly-optimised (and often duplicated) queries, but through a DSL that’s consistent, easy to maintain, and quick to build with.
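To make that compromise concrete, here’s a minimal sketch of the classic N+1 query pattern that an ORM’s convenient loop-over-objects style can quietly generate, next to the hand-written join it replaces. Plain Python and sqlite3 stand in for a real ORM here, and the tables and names are invented for illustration.

```python
import sqlite3

# In-memory database standing in for an app's real datastore.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT,
                        author_id INTEGER REFERENCES authors(id));
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO posts VALUES (1, 'Hello', 1), (2, 'World', 2), (3, 'Again', 1);
""")

def titles_with_authors_n_plus_one():
    # What a naive ORM loop often does under the hood: one query for
    # the posts, then one extra query per post to fetch its author
    # (N+1 queries in total).
    result = []
    for title, author_id in conn.execute(
            "SELECT title, author_id FROM posts ORDER BY id"):
        (name,) = conn.execute(
            "SELECT name FROM authors WHERE id = ?", (author_id,)).fetchone()
        result.append((title, name))
    return result

def titles_with_authors_joined():
    # The hand-optimised equivalent: a single join.
    return list(conn.execute(
        "SELECT p.title, a.name FROM posts p "
        "JOIN authors a ON a.id = p.author_id ORDER BY p.id"))

# Both return the same rows; the ORM-style version just costs more queries.
assert titles_with_authors_n_plus_one() == titles_with_authors_joined()
```

Both versions are correct, and for most apps the extra queries simply don’t matter; that’s the trade the frameworks are making on your behalf (and mature ORMs offer escape hatches, like Django’s `select_related`, when it does matter).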

Most backend frameworks and languages are relatively poorly optimised. That’s not particularly controversial. For every person smugly stating that Rails wasn’t fast enough for Twitter, there are a dozen who can roll their eyes and point out that it was fast enough for them to become Twitter. Ditto PHP and Facebook.

And yet the frontend world seems to be besieged by people obsessing over JavaScript benchmarks. The benchmark of benchmarks appears to be Stefan Krause’s, whose collection of performance metrics is by far the most comprehensive and unbiased. I would not for a second want to disparage Stefan’s exceptional work. His benchmarks are unbiased, important, and (most importantly) really interesting.

But it is worrying how much of the frontend community treats these and other performance benchmarks as the only meaningful metric by which to compare frameworks.

For a start, benchmarks themselves need to be taken with a grain of salt. They are artificial, and do not necessarily reflect the experience of a real-world user. Nor do they necessarily reflect the requirements of a real-world app. If your app regularly adds 1,000 rows to a table already containing 10,000 rows, by all means factor that in; otherwise, is it a big deal? Most apps run relatively simple CRUD operations, and handling XHR requests gracefully and presenting an intuitive interface and animations matters vastly more than which framework swaps out an arbitrarily large chunk of DOM fastest.

Additionally, the benchmarks reveal some very interesting things about how smaller “library” frameworks scale. It’s worth noting that while React is quite fast on its own, React+Redux takes a noticeable performance hit, and using MobX instead drops it considerably further: in fact, to the same overall average performance numbers as the whole of the supposedly “sluggish monolith” Ember. Svelte manages impressive performance, but its file size has been shown to scale quite badly with additional functionality. The benchmarks don’t show how some hyper-fast libraries like Vue and Inferno perform when bundled with routers, state management, and other features, but a more feature-complete app comparison with the Angular 2s of the market might well show a significant gap closing.

The fact is, it’s very rare for outright performance to be a major consideration in any current framework. I don’t mean to be dismissive, but there’s a pretty good chance your application would run just fine in Angular 1 or Knockout.

Developers should be considering a framework’s performance, sure. But it should be one metric among many, and often among the less important: industry usage, developer ergonomics, architecture, availability of libraries, documentation, cost of upskilling, existing team experience, ease of onboarding, testability, new-hire availability, platform maturity, and so on.

There are some codicils to all of the above that need to be mentioned.

  1. There absolutely are cases where performance is paramount and known in advance to be a factor. This is in no way a criticism of that fact, or of people preparing sincerely for genuine performance considerations. It applies to existing apps with known performance requirements, or to new applications with unusual rendering or display requirements.
  2. It should be acknowledged that performance is not necessarily about outright performance of the application, but about having headroom for future changes, additions, or increases in scale.
  3. Front-end JS and backend code are very different in that you can’t easily profile an individual user’s experience, and can’t easily add performance enhancements such as caching or new instances. This forces you to assume a low performance ceiling, especially if catering for mobile users.


By all means consider the performance of a framework. But not to the exclusion of other metrics, including your own precious mental health. A “good enough” performer that’s well documented and supported, and that focuses on the experience of the developer as much as that of the user, is a good option. And a better option than a cutting-edge screamer with one question on Stack Overflow and a single “Getting Started” article on Medium.