Node.js web frameworks are slow

Current Node.js web frameworks are slower than the bare http module by a factor of 2 or more. That means you get less than half the requests per second that Node.js is capable of, and each request takes at least twice as long (in some instances much longer).

Node.js’s web server can do so much more, but the current web frameworks are lagging behind in comparison.

The frameworks I am talking about are specifically Express, Koa, and Hapi (and of course the other frameworks that are based on them).

This seems startling, so to put some perspective on things we need actual numbers. If a simple request is handled in ~2ms by the Node.js http module, then Express (and Koa, Hapi) would normally take 4ms to 6ms on average.

So at minimum 2ms to 4ms of overhead is added to each request by these frameworks. Does it matter? Well, it’s always been understood that faster perceived page load times mean a better user experience, so I think so. That’s one of the main reasons so much effort has been put into server-side rendering, HTTP enhancements for servers and clients, minifying, etc.

The test setup

The purpose of these tests is to gauge the minimum overhead of each framework. So all tests are kept as bare as possible, to show the minimum cost of simply running the framework.

I ran my tests using wrk, an HTTP benchmarking tool. The exact settings are “wrk -d 7 -c 50 -t 8 …”, which means 8 threads, 50 open connections, and a duration of 7 seconds.

Note: The exact numbers may differ if you try to run it, but the relative performance difference should still hold true. That is, you may see more or fewer requests per second, but compared to the others, the percentage difference should stay roughly the same.

The tests were run using Node.js 6.3.1.

All tests are based on a simple GET / request returning a 200 “Hello World” response with nearly identical response headers (see the notes below for differences). NODE_ENV is set to production in case it matters, and all tests are warmed up first and run several times, taking the best result.
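For reference, the baseline is only a few lines of code. Here is a minimal sketch of what a bare http module test like this looks like (the port and Content-Type header here are my assumptions, not the exact benchmark file; the real setup files are linked in the notes):

// A minimal bare-http “Hello World” server, as a sketch of the baseline test.
// Port 3000 and the Content-Type header are assumptions for illustration.
const http = require("http")

http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" })
  res.end("Hello World")
}).listen(3000)

wrk is then pointed at the running server with the settings above.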

The results

Node.js (bare, no framework or libraries, just http module)
Requests/sec: 21,813.05 — Latency (Avg): 2.19ms

Express (4.14.0) single route
Requests/sec: 8,565.67 — Latency (Avg): 5.60ms

Express (4.14.0) single route with ETag disabled (not shown in graph)
Requests/sec: 9,664.54 — Latency (Avg): 4.96ms

Koa (2.0.0) single route
Requests/sec: 10,709.89 — Latency (Avg): 4.46ms

Hapi (14.1.0) single route
Requests/sec: 3,315.94 — Latency (Avg): 14.43ms

The bare Node.js http test serves as our baseline.

As you can see, the framework tests all add overhead, even though they send back the same simple Hello World response.
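To give a sense of how minimal “single route” means here, this is roughly what the Express test looks like (a sketch, not the exact benchmark file; port 3000 is an assumption, and the real setups are linked in the notes):

// Roughly the shape of the Express single-route test: one GET / route,
// no middleware, sending back “Hello World”.
const express = require("express")
const app = express()

app.get("/", (req, res) => {
  res.send("Hello World")
})

app.listen(3000)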

Which is expected, but should it be that much?

Is that just the way it is? That is, in order to use a web framework, there needs to be a huge cost involved?

Historically this seems like the answer would be yes… but…

A different outlook

I’ve been working on a new web library as some of you may know. It’s called spirit.

It is written entirely from scratch, and I knew the implementation was reasonably fast when writing it, but I wasn’t sure what to expect (and performance was never a factor in starting the project).

Running the same tests for spirit with routing (via spirit-router):

spirit + spirit-router, single route
Requests/sec: 21,654.84 — Latency (Avg): 2.19ms

spirit + spirit-router, multiple routes
Requests/sec: 19,560.90 — Latency (Avg): 2.44ms
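For reference, the spirit + spirit-router test is about as minimal as the others. This sketch is based on my recollection of the spirit-router README, so treat the exact API shape as an assumption and see the setup files linked in the notes for the real benchmark code:

// Approximate shape of the spirit + spirit-router test (sketch based on the
// spirit-router README; port 3000 is an assumption).
const http = require("http")
const {adapter} = require("spirit").node
const route = require("spirit-router")

const hello = () => "Hello World"

const app = route.define([
  route.get("/", [], hello)
])

http.createServer(adapter(app)).listen(3000)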

Breaking it down

As you can see from the results, spirit (with its router) has very little overhead and stays on par with bare Node.js http, unlike all the other frameworks.

So it seems the issue is not with web frameworks as an idea, but with how they are implemented. That is, a framework doesn’t necessarily need to incur a significant cost just for being used.

In fact, the spirit-router test that uses multiple routes instead of a single route should be at a disadvantage (it has to look through multiple routes before it finds the right one), yet it still performs significantly better than all the other frameworks with a single route and doesn’t dip far below the Node.js http baseline.

Whether the few milliseconds of extra overhead on a request matter is certainly debatable. But it happens on every request, big or small, and as I mentioned, we already spend a lot of effort getting faster page load times in other areas (server rendering, minifying, etc.), some of which also only shave off a few milliseconds.

But we are probably more willing to do those things because they are straightforward choices (there isn’t really a choice to make), whereas bringing in a framework feels more like a matter of personal preference.

Though it doesn’t need to be that way. These results show that a framework can be lean, have minimal overhead, offer its own unique characteristics, and be called whatever you like; what really matters is the implementation.

Even taking spirit out of the results, the remaining results still vary, yet the frameworks all support the same features at the end of the day. The main difference is in how they were implemented.

The goal of spirit was never to be the fastest; it was more about unique features that the other frameworks didn’t have. Yet when I saw these results I was blown away by how differently they performed.

Even if web frameworks are of no interest to you, I hope this article still provides insight into the common misconception that frameworks always have to be costly.

Notes:

It should be emphasized that no middleware was used in any of the tests. The exact setup files for each test can be found here: https://github.com/spirit-js/spirit-router/tree/master/benchmarks/setups.

The bluebird library should be mentioned, as spirit uses Promises throughout; bluebird’s performance is amazing, and I highly recommend it.

The Express test with ETag disabled is not shown in the graph alongside the other Express test because it was run after I had already made the graph. The ETag-disabled results are the more accurate ones to use, but they are only marginally better and don’t change the way the graph looks as a whole.
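For reference, disabling the automatic ETag header in Express 4 is a one-line setting, along the lines of:

// Turn off Express 4’s automatic ETag generation for responses.
app.disable("etag")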

The Koa 2 test doesn’t actually use routing (so this should be thought of as a slight advantage for the Koa test).
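In other words, the Koa test is essentially a single middleware that always responds with the body, something like this sketch (port 3000 is an assumption; the real setup file is in the repo linked above):

// Approximate shape of the Koa 2 test: one middleware, no router.
const Koa = require("koa")
const app = new Koa()

app.use(ctx => {
  ctx.body = "Hello World"
})

app.listen(3000)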

Hapi with gzip disabled and a Content-Length set makes no difference. Removing the Vary header (which I couldn’t figure out how to do) might help.

Though in the grand scheme of things, these minor optimizations wouldn’t make any significant difference in the results.