The NPM Libraries That Will Make Your Node.js Micro Services Faster

Tamar Twena-Stern
8 min read · Jun 26, 2024


Performance is a very broad area that spans many topics. We can talk about code optimizations, DB query optimizations, scaling your services horizontally, and much more. In this article, though, I focus on the layers of a Node.js micro service and on the architectural decisions you can make to improve performance in those layers. When writing my CRUD API in Node.js, are some frameworks more efficient than others? Are there architectural patterns that can give me performance gains?

To answer those questions, I examined the choice of web framework, the logging layer, and the data access layer, and came up with several recommendations that will serve developers designing a Node.js CRUD micro service.

Choose A Performance Efficient Web Framework

When writing your Node.js micro service, the first step is to choose a web framework for your APIs. This choice has a big impact on your application's performance, as the web framework is the foundation on which your code eventually runs. If the foundation is slow, it will drag down your application's performance drastically.

The most popular web framework for Node.js micro services is Express, with around 29 million weekly downloads on NPM. But performance-wise, is Express the best option?

Express vs Fastify

I compared Express with Fastify, a newer web framework that has been gaining popularity over the last year and a half. Fastify already has around 1.6 million weekly downloads on NPM, and it claims much better performance than Express.

The benchmark used 2 servers, one written with Express and the other with Fastify. To make the benchmark as accurate as possible, the servers had the same characteristics: both implemented the same API, inserting one record of a simple Person object with the same properties into MongoDB.
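To make the setup concrete, here is a minimal sketch of the two benchmarked servers. The exact benchmark code is not published here, so the route name, database name and Person fields below are illustrative assumptions.

```javascript
// Express variant (a sketch, not the exact benchmarked code)
const express = require('express');
const { MongoClient } = require('mongodb');

async function startExpressServer() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const people = client.db('bench').collection('people');

  const app = express();
  app.use(express.json());

  // POST /person inserts one simple Person document into MongoDB
  app.post('/person', async (req, res) => {
    const result = await people.insertOne(req.body);
    res.json({ id: result.insertedId });
  });

  app.listen(3000);
}

// Fastify variant implementing the same API
const fastify = require('fastify');

async function startFastifyServer() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const people = client.db('bench').collection('people');

  const app = fastify();

  // Fastify parses the JSON body by default; the returned object is serialized as JSON
  app.post('/person', async (request) => {
    const result = await people.insertOne(request.body);
    return { id: result.insertedId };
  });

  await app.listen({ port: 3001 });
}
```

Both variants keep the handler logic identical, so any throughput difference in the benchmark comes from the framework itself.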

The first image shows the Express micro service benchmark results; the second shows the Fastify results.

With Express, a total of 101,000 requests were sent during the benchmark, at an average of around 10,000 requests per second. With Fastify we reached 193,000 requests in total, at an average of around 17,500 requests per second.

Here is a chart that presents the performance improvement by looking at average requests per second.

According to the results, Fastify has a significant performance advantage: it proved to be about twice as fast as the Express server, which is the far more common choice.

There are 2 situations in which you need to keep these considerations in mind: starting a new project, or refactoring your micro service to use a different web framework. In those situations, performance should be high on your list, since the web framework can boost your application's performance much more than a lot of time spent on code and DB optimizations. To summarize, Fastify is the best choice when it comes to performance.

Performance Improvements On The Logging Layer

Choose A Performance Efficient Log Library

The plan is always to write an application that is stable. In reality there are bugs, crashes, lost data and other weird scenarios, and we must have logs to understand the root cause in production.

So logs are a must in our applications, and they save us a lot of debugging time in production, but they also cause performance degradation. If you think about it, it makes sense: you add more commands for your software to perform, and logs usually involve IO operations.

Here is a performance benchmark of 2 Node.js servers, both working with Express and MongoDB and implementing the same API. The first has one log line and the second has no logs at all. The benchmark was performed with NPM Autocannon.
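For reference, an Autocannon run can be driven programmatically roughly like this. This is a sketch: the connection count, duration and route are assumptions, not the exact settings used for the benchmarks in this article.

```javascript
const autocannon = require('autocannon');

async function runBenchmark() {
  const result = await autocannon({
    url: 'http://localhost:3000/person', // the benchmarked endpoint (illustrative)
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ firstName: 'Jane', lastName: 'Doe' }),
    connections: 100, // concurrent connections
    duration: 10,     // seconds
  });

  // Autocannon reports, among other things, average and total requests
  console.log('avg req/sec:', result.requests.average);
  console.log('total requests:', result.requests.total);
}

runBenchmark();
```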

Here you can see the server without logs. The average is ~8,600 requests per second, and the total for the benchmark is 86,000 requests.

Here you can see the benchmark results for the server with one log line, which uses the Winston NPM library. The average decreased to ~7,800 requests per second, and the total decreased to 79,000 requests.

The overall performance degradation was around 10%, as demonstrated in the chart below:

When choosing a logging library there are many considerations, but the performance of the log library itself should be an important one. It is recommended to choose the log library that adds the least overhead to your code.

To find the most efficient log library, the one with the least overhead, I benchmarked the same server with different NPM log libraries: Winston, Pino, Log4js and Bunyan. Here is the comparison between the libraries, alongside a server with no logging at all. The first graph compares the average requests per second, and the second graph compares the total requests.

To summarize: Pino is currently the most efficient library and adds the least overhead to your application code, making it the most recommended library performance-wise.
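Switching to Pino is straightforward; a minimal setup looks roughly like this (the level and the logged fields are illustrative):

```javascript
const pino = require('pino');

// Create a logger; by default Pino writes newline-delimited JSON to stdout
const logger = pino({ level: process.env.LOG_LEVEL || 'info' });

// Structured logging: pass an object with context plus a message
logger.info({ route: '/person' }, 'person created');
```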

Don’t Build The Log Messages On Your Application Layer — Let The Log Library Build The Messages

Another tip I would like to highlight here: build the log messages in the log library, not in your application.

The following image shows 2 examples of message building; in the first example, the message is built at the log library level and not in the application.

Why is this important?

Most log libraries will not build messages for disabled log levels. For example, if the application is running at the Error log level, messages at the Debug level will not be built, provided they are built by the log library through the API that takes parameters and assembles the message. However, if you concatenate the strings in your application code, they will be built regardless of the log level and of whether the message is actually printed.
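Here is the difference in code, using Pino as an example (the variable names and values are illustrative); the same principle applies to Winston, Log4js and the others:

```javascript
const pino = require('pino');

// The service runs at the 'error' level, so debug messages are never printed
const logger = pino({ level: 'error' });

const personId = 'p-123'; // illustrative values
const tenantId = 't-9';

// Let the library build the message: when the debug level is disabled,
// the placeholders are never formatted, so the call is essentially free.
logger.debug('inserted person %s for tenant %s', personId, tenantId);

// Build the message yourself: the string concatenation runs on every call,
// even though nothing is printed at the 'error' level.
logger.debug('inserted person ' + personId + ' for tenant ' + tenantId);
```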

It might look small to you, but performance-wise, on an application that writes even megabytes of log lines per day, it adds up.

The Data Access Layer

Prefer Using Native Driver Instead Of An ORM-Like DB Abstraction Tool

I remember that when I first became interested in Node.js, the first server tutorial I followed used Mongoose, and so did the first several servers I wrote. Mongoose is an object modeling tool that aims to simplify development with MongoDB. So what can go wrong when writing your application with DB abstraction tools like ORMs?

Data abstraction tools introduce an abstraction layer between the application code and the database, which adds overhead in translating object-oriented code to queries and vice versa. In many cases these tools struggle to optimize complex queries, or they do not support certain advanced database-specific features efficiently. In addition, the process of mapping database entities to application objects and back introduces overhead, especially when dealing with large datasets. This mapping can increase memory usage and processing time, hurting performance.
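To make the comparison concrete, here is a rough sketch of the two approaches for the same insert. The schema, collection and connection details are illustrative assumptions, not the exact benchmarked code.

```javascript
// Mongoose: the document goes through schema validation and object mapping
const mongoose = require('mongoose');

const Person = mongoose.model('Person', new mongoose.Schema({
  firstName: String,
  lastName: String,
}));

async function insertWithMongoose(doc) {
  // assumes mongoose.connect('mongodb://localhost:27017/bench') ran at startup
  return Person.create(doc);
}

// Native MongoDB driver: the document is written to the collection directly
const { MongoClient } = require('mongodb');

let people; // set once at startup

async function initNativeDriver() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  people = client.db('bench').collection('people');
}

async function insertWithNativeDriver(doc) {
  return people.insertOne(doc);
}
```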

But let's compare performance. I compared 2 servers, one working with Mongoose and the other with the native MongoDB driver. Both servers use the same web framework, Express, and each handles a POST request that inserts a simple object into the database. The first image shows the results for the Mongoose server, and the second shows the results for the server working with the native MongoDB driver.

We can see that removing Mongoose improved the baseline performance of the server by around 40%, as demonstrated in the chart below:

To summarize: even for a very simple scenario, you can improve your service's performance by around 40% by removing Mongoose. I know of many cases where developers prefer ORM-like tools for the sake of development simplicity, but as your scale grows and your data scenarios become more complex, this overhead impacts your application more and more, to the point where you might have to refactor the code to work directly with the DB.

DB Connection Pool Management On The Infrastructure Layer

When working with databases, you usually work with a connection pool: a fixed number of allocated connections to the database. Every time you need to perform a DB operation, you take an available connection from the pool and use it to run your query.

It is important to manage the connection pool in your infrastructure layer. You need to make sure developers are not able to open connections from other layers. The best way is to create an API that hands out available connections from the pool under the hood. The infrastructure layer also needs to terminate the pool when the micro service shuts down.
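A minimal sketch of such an infrastructure module might look like this. The module and function names (db.js, getCollection, closePool) and the pool size are illustrative, not an established API.

```javascript
// db.js: the only place in the service that talks to the MongoDB client
const { MongoClient } = require('mongodb');

let client;

async function connect(uri) {
  // The driver manages the connection pool internally; maxPoolSize caps its size
  client = new MongoClient(uri, { maxPoolSize: 20 });
  await client.connect();
}

function getCollection(dbName, collectionName) {
  // Callers get collections backed by the shared pool; they never open connections
  return client.db(dbName).collection(collectionName);
}

async function closePool() {
  // Called once when the micro service shuts down (for example on SIGTERM)
  if (client) {
    await client.close();
  }
}

module.exports = { connect, getCollection, closePool };
```

The service entry point then calls connect() at startup and wires closePool() into its shutdown path, for example in a SIGTERM handler, so the pool is released exactly once.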

If opening and closing DB connections is possible from every component in your data access layer, multiple connections can end up staying open because developers accidentally forget to close them. This leads to a connection leak, which will eventually cause resource exhaustion, performance degradation, memory leaks and more.

To summarize: when building your micro service and thinking about its architecture, design the connection management in the infrastructure layer, disable opening new connections from the data access layer, and close all connections when your service shuts down.

Summary

There are several points you can adopt from this article, when designing your Node.js micro service:

  1. Choose a performance-efficient web framework; currently Fastify is the leading library in this respect.
  2. Choose a performance-efficient log library that adds the least overhead to your code; NPM Pino is leading in this respect.
  3. Let your log library build the log messages for you.
  4. In the data access layer, prefer not to use ORM-like tools; work with the native DB driver instead.
  5. Manage DB connections with a connection pool. Opening and closing connections should be done in the infrastructure layer, to avoid connection leaks.



Tamar Twena-Stern

I am a software developer, manager and architect with experience in various technologies: server side, big data, mobile, web technologies, and security.