Can Node.js run as fast as Go? — HTTP+DB reads case

Mayank Choubey
Tech Tonic
5 min read · May 22, 2024


Go, being compiled, is known to be much faster than interpreted environments like Node.js. This is certainly true for CPU-intensive cases. However, is it also true for I/O-intensive cases? We’ll try to find the answer in this article.

Use case

The use case is a simple server that does the following:

  • Get HTTP request
  • Extract userEmail parameter out of the request body (JSON)
  • Perform a database read for extracted email
  • Return user record in the HTTP response

A simple but very common case.

Frameworks

On the Node side, we’ll be using the following frameworks:

  • Fastify: This is one of the fastest popular production-grade frameworks (we haven’t chosen options that are faster but not production-grade)
  • Sequelize: This is one of the top choices for ORMs

On the Go side, we’ll be using the following frameworks:

  • Fiber v2: This is one of the fastest popular production-grade frameworks (Gin was another option)
  • Bun: This is one of the top choices for ORMs in the Go world

Test Setup

All tests have been executed on a MacBook Pro M2 with 16 GB RAM and 12 CPU cores (8 performance + 4 efficiency). The software versions are:

  • Node.js v22.2.0
  • Go 1.22.3

The load tester is a modified version of Bombardier that sends a random email in each HTTP request.

Test Data

The Postgres database is preloaded with ~100K user records. The table structure and a sample record are shown below:

# \d users
                       Table "public.users"
 Column |          Type          | Collation | Nullable | Default
--------+------------------------+-----------+----------+---------
 email  | character varying(255) |           | not null |
 first  | character varying(255) |           | not null |
 last   | character varying(255) |           | not null |
 city   | character varying(255) |           | not null |
 county | character varying(255) |           | not null |
 age    | integer                |           | not null |
Indexes:
    "users_pkey" PRIMARY KEY, btree (email)

# select count(*) from users;
 count
-------
 99999

# select * from users limit 1;
      email      |        first         |         last         |         city         |        county        | age
-----------------+----------------------+----------------------+----------------------+----------------------+-----
 ongbj@clb1a.com | 2f63ac8f31590d716243 | aed71a8d1868ac6eb032 | 836ddca891d3c46e24fe | bd53f8e8da17cededead |  25

Application Code

The Node.js code is as follows:
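As a stand-in for the embedded snippet, here is a minimal, dependency-free sketch of the same flow, with node:http and an in-memory Map playing the roles of Fastify and Sequelize. The names handleLookup and startServer are illustrative, not the original identifiers.

```javascript
// Dependency-free sketch of the request flow; node:http and an in-memory
// Map stand in for Fastify and Sequelize. Names are illustrative.
import { createServer } from "node:http";

// Stand-in for the users table (primary key: email); remaining columns omitted.
const users = new Map([
  ["ongbj@clb1a.com", { email: "ongbj@clb1a.com", first: "2f63ac8f31590d716243", age: 25 }],
]);

// Core of the use case: parse the JSON body, look the user up by email,
// and build the response. The real app does an indexed SELECT instead.
function handleLookup(rawBody) {
  const { userEmail } = JSON.parse(rawBody);
  const user = users.get(userEmail);
  return user
    ? { status: 200, body: JSON.stringify(user) }
    : { status: 404, body: JSON.stringify({ error: "user not found" }) };
}

// HTTP wiring; call startServer() to listen, mirroring fastify.listen().
function startServer(port = 3000) {
  return createServer((req, res) => {
    let raw = "";
    req.on("data", (chunk) => (raw += chunk));
    req.on("end", () => {
      const { status, body } = handleLookup(raw);
      res.writeHead(status, { "content-type": "application/json" });
      res.end(body);
    });
  }).listen(port);
}
```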

The Go code is as follows:
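Likewise, here is a standard-library sketch of the Go side, with net/http and an in-memory map standing in for Fiber and Bun. HandleLookup is an illustrative name, not the original identifier.

```go
// Standard-library sketch of the Go side; net/http and an in-memory map
// stand in for Fiber and Bun. Names are illustrative.
package main

import (
	"encoding/json"
	"io"
	"net/http"
)

// User mirrors the users table; remaining columns omitted for brevity.
type User struct {
	Email string `json:"email"`
	First string `json:"first"`
	Age   int    `json:"age"`
}

// Stand-in for the Postgres table (primary key: email).
var users = map[string]User{
	"ongbj@clb1a.com": {Email: "ongbj@clb1a.com", First: "2f63ac8f31590d716243", Age: 25},
}

// HandleLookup parses the JSON body, reads the user by email, and returns
// an HTTP status plus a JSON payload.
func HandleLookup(raw []byte) (int, []byte) {
	var req struct {
		UserEmail string `json:"userEmail"`
	}
	if err := json.Unmarshal(raw, &req); err != nil {
		return http.StatusBadRequest, []byte(`{"error":"bad request"}`)
	}
	u, ok := users[req.UserEmail]
	if !ok {
		return http.StatusNotFound, []byte(`{"error":"user not found"}`)
	}
	out, _ := json.Marshal(u)
	return http.StatusOK, out
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		raw, _ := io.ReadAll(r.Body)
		status, body := HandleLookup(raw)
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(status)
		w.Write(body)
	})
	// http.ListenAndServe(":3000", nil) // uncomment to serve on port 3000
}
```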

That’s all for the setup. Now let’s run a couple of interesting tests.

Test run 1

In this test, we’ll run 1M requests each for 50, 100, and 200 concurrent connections.

The results in chart form are as follows:

Let’s analyze these results. Go offers roughly 2.3x the throughput of Node.js: Go’s RPS averages around 37K, while Node’s averages around 16K. The difference is significant. In terms of resource usage, Go’s CPU usage is high while its memory usage stays very low. The high CPU usage indicates that Go is properly utilizing the available cores, and on memory there is no competition at all.

For this test of an I/O-intensive case, we can conclude that Node.js is considerably slower than Go.

Test run 2

Taking a cue from the previous results, we’ll do another run with Node.js using more CPU cores, which should (hopefully) give us better performance. The updated code is shown below. The only change is that the fastifyMain.mjs file is replaced by fastifyClusterMain.mjs.

fastifyClusterMain.mjs

import cluster from "node:cluster";
import Fastify from "fastify";
import { handleRequest } from "./fastifyController.mjs";

if (cluster.isPrimary) {
  // Fork one worker per core; all workers share port 3000
  for (let i = 0; i < 8; i++) {
    cluster.fork();
  }

  cluster.on("exit", (worker, code, signal) =>
    console.log(`worker ${worker.process.pid} died`),
  );
} else {
  const fastify = Fastify({ logger: false });
  fastify.post("/", handleRequest);
  fastify.listen({ port: 3000 });
}

In this test, we’ll again run 1M requests each for 50, 100, and 200 concurrent connections.

The results in chart form are as follows:

Note: CPU and memory usage is cumulative across all the processes in the Node.js cluster

Note: In cluster mode, Node.js makes up to 10 DB connections from each process, for a total of ~80 connections. This is much more than the 10 we’ve configured on the Go side. Something to be aware of, but we’ll ignore it for now.
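If one wanted to equalize the pools, Sequelize’s per-process pool size can be capped via the constructor’s pool options (pool.max and pool.min are real Sequelize options; the connection string and values here are illustrative, not from the benchmark):

```javascript
import { Sequelize } from "sequelize";

// Hypothetical adjustment: with 8 cluster workers, capping each worker's
// pool at 1 keeps the total (8) close to the 10 connections on the Go side.
const sequelize = new Sequelize("postgres://localhost:5432/benchdb", {
  pool: { max: 1, min: 0 }, // 8 workers x 1 connection each
  logging: false,
});
```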

This time, Node.js gives Go some real competition. Node.js’s RPS (~32K) is fairly close to Go’s (~37K). Node.js’s CPU usage is higher now (~400%), which is expected because 8 Node.js processes are running in the cluster. So far so good. The worst part for Node.js is memory usage: the cluster’s cumulative memory usage is ~1.1G, which is way too much compared to Go’s ~30M.

Overall

If we plot test runs 1 and 2 together, we get the following:

Conclusion

Node.js, even in cluster mode, is ~15% slower than Go’s single-process performance. And to get even that close to Go, Node.js has to use far more resources.

Thanks for reading!
