How to handle multiple API requests in your NodeJS Application

Abhinav C V
9 min read · Jun 17, 2023


Unlike at a hackathon, when you’re building a NodeJS API that’s going to be used in the real world, you want to make sure it can handle concurrency issues. Even though NodeJS is asynchronous by default, it has limitations when handling multiple requests that involve CPU-intensive tasks. By CPU-intensive, I mean performing cryptographic operations, processing image/video/audio files, parsing large amounts of data such as XML/JSON/YAML, mathematical computations, data compression, running machine learning models, etc.

NOTE:- NodeJS is asynchronous by default, which means it is already capable of handling multiple requests at once, but that only helps with I/O operations like HTTP requests, file system operations, database queries, real-time chat applications, etc. CPU-intensive tasks can still block it for a long time, and that’s why NodeJS provides certain tools, which we will look into right now.

So without further ado, here is a summary of a few of the methods you could implement in your NodeJS API:-

(fasten your seat belts, it’s gonna be a pretty long and informative one)

Redis Cache

If you have a set of data that you frequently fetch while loading your app, you probably want to cache that data instead of sending HTTP requests or running queries every time. Which is why I recommend you use Redis. It’s extremely simple to use and pretty user-friendly. It’s basically another database, separate from your main database, which stores all your cached data.

You will have to install Redis on your system, and then you can play with it using the following commands.

user@username:/mnt/c/Users/HP$ sudo service redis-server start
user@username:/mnt/c/Users/HP$ redis-cli
127.0.0.1:6379> set mykey "hello"
OK
127.0.0.1:6379> get mykey
"hello"

Now, to use it in your NodeJS app, you will have to install the redis package and create an instance of the Redis client in a file named redis.js (you can name it anything you wish)

import redis from 'redis'

// falls back to the default local Redis port
const REDIS_URL = process.env.REDIS_URL || 'redis://localhost:6379'

const client = redis.createClient({
  url: REDIS_URL
})

// register the handlers before connecting so no events are missed
client.on('error', (error) => {
  console.error('Redis client error:', error)
})

client.on('connect', () => {
  console.log('Connected to redis')
})

await client.connect()

export default client

Once you’ve done that, you can import the client and set a key by simply writing the following code

await client.set('mykey', 'hello')
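With get and set in hand, the usual pattern is “cache-aside”: check the cache first, and only hit the database (or an external API) on a miss. Here’s a minimal sketch; the getOrSet helper and the slowQuery function are my own illustrative names, and the Map-backed stub stands in for the Redis client so the snippet runs without a server (the real client’s get/set are async too, so it drops in directly).

```javascript
// Cache-aside helper: check the cache first, fall back to the
// expensive fetch, then populate the cache for next time.
// `cache` just needs async get/set, so the Redis client fits here.
async function getOrSet(cache, key, fetchFn) {
  const cached = await cache.get(key)
  if (cached !== null && cached !== undefined) return cached
  const fresh = await fetchFn()
  await cache.set(key, fresh)
  return fresh
}

// In-memory stub standing in for the Redis client (illustration only):
const stub = {
  store: new Map(),
  async get(k) { return this.store.has(k) ? this.store.get(k) : null },
  async set(k, v) { this.store.set(k, v) },
}

let dbCalls = 0
const slowQuery = async () => { dbCalls++; return 'hello' } // pretend database hit

console.log(await getOrSet(stub, 'mykey', slowQuery)) // 'hello' (from the "database")
console.log(await getOrSet(stub, 'mykey', slowQuery)) // 'hello' (from the cache)
console.log(dbCalls) // 1 — the query only ran once
```

The same helper works unchanged with the real Redis client, since its get returns null on a miss just like the stub.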

Node Cache

This is, again, simply a caching mechanism. Before we get into how to use the node-cache package, we need to talk about why it’s different from Redis.

Node cache is an in-memory cache, whereas Redis is stored “externally”. This means that once your node app is restarted, it loses its data, which then has to be cached again. Redis data, on the other hand, lives in its own server process until it’s deleted. A Redis cache can also be accessed from another device, since it’s basically a database, but a node cache lives only inside that specific NodeJS app, unless you expose a GET route in your app that serves the cached data.

Since we’ve discussed the differences, let’s see how you can implement it.

First of all, install the node-cache package and create an instance of the NodeCache object to set your keys

import NodeCache from "node-cache"
const myCache = new NodeCache()

// to set one element
const success = myCache.set("myKey", "hello")

// to set multiple elements (ttl is in seconds)
const obj = { my: "Special", variable: 42 }
const obj2 = { my: "other special", variable: 1337 }

const msetSuccess = myCache.mset([
  { key: "myKey", val: obj, ttl: 10000 },
  { key: "myKey2", val: obj2 },
])

// to get the data
const value = myCache.get("myKey")
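Under the hood, an in-memory cache like this is essentially a Map plus expiry bookkeeping. Here’s a toy sketch of what the ttl option is doing; TinyCache is my own illustrative class, not node-cache’s actual implementation.

```javascript
// A toy in-memory cache with per-key TTL, to illustrate the idea
// behind node-cache's ttl option. Not the real implementation.
class TinyCache {
  constructor() { this.store = new Map() }

  set(key, val, ttlSeconds = 0) {
    // ttl of 0 (or omitted) means "never expires"
    const expires = ttlSeconds > 0 ? Date.now() + ttlSeconds * 1000 : Infinity
    this.store.set(key, { val, expires })
    return true
  }

  get(key) {
    const entry = this.store.get(key)
    if (!entry) return undefined
    if (Date.now() > entry.expires) { // expired: evict and report a miss
      this.store.delete(key)
      return undefined
    }
    return entry.val
  }
}

const cache = new TinyCache()
cache.set('myKey', 'hello', 10)
console.log(cache.get('myKey'))  // 'hello'
console.log(cache.get('missing')) // undefined
```

Like node-cache, a missing or expired key comes back as undefined rather than throwing.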

Cluster

Normally, we run a single NodeJS app that receives every request. Imagine we could instead run around 8 copies of your app and place a load balancer in front that distributes requests to whichever copy is available. Well, that’s exactly what cluster does!

Unlike the rest, this is a built-in NodeJS module, which doesn’t require you to download any package.

One thing you need to note is that if you are running N NodeJS apps this way, data is not shared between any of them. Each runs separately with its own process id (you’ll see this later in the code).

Let me take the example of an Express-based app. Normally what we do is:-

import express from 'express'

const app = express()

const PORT = process.env.PORT || 5000

app.listen(PORT, () => console.log(`Server is running successfully on PORT ${PORT}`))

But while running a cluster, you need to run N such servers, where N is the number of CPUs in your system.

import cluster from 'cluster'
import os from 'os'
// app and PORT are set up as in the previous snippet

const numCpu = os.cpus().length

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`)
  // fork one worker per CPU
  for (let i = 0; i < numCpu; i++) {
    cluster.fork()
  }
  // replace any worker that dies
  cluster.on('exit', (worker, code, signal) => {
    console.log(`${worker.process.pid} has exited`)
    cluster.fork()
  })
} else {
  app.listen(PORT, () => console.log(`Server ${process.pid} is running successfully on PORT ${PORT}`))
}

The fork() function is what spawns a new worker, and if you noticed, it is called numCpu times. The primary process listens for all incoming connections and then distributes the load to its workers in a round-robin fashion. “Round robin” simply refers to an algorithm that distributes tasks evenly among a set of available resources/servers.

Worker Threads

Again, this is another built-in package provided by NodeJS.

So first of all, you need to understand that NodeJS is single-threaded by default. Being single-threaded means that your JavaScript runs on a single thread, the main thread. This main thread receives all requests and executes them in order, driven by the “event loop”. The event loop is what’s responsible for the asynchronous management of I/O operations such as network requests, file operations, and database queries.

Now, this comes with its own advantages and disadvantages. Error handling becomes critical when it’s single-threaded, because an uncaught error crashes the main thread, and with it the whole app. If it were multithreaded, only the thread where the error occurred would crash, and the rest would continue to work perfectly.

This is why NodeJS gives us a built-in module called worker_threads, which helps us convert our single-threaded node app into a multi-threaded application. Let’s see how we can implement this in our application:-

Main thread (calc.js):-

import { Worker } from "worker_threads"

const makeCalculation = async (req, res) => {
  try {
    const worker = new Worker('./worker.js', {
      workerData: {
        num: 10
      }
    })
    worker.on('message', (message) => {
      if (message.success) {
        res.send({ message: 'Successfully calculated', success: true, ans: message.ans })
      } else {
        res.send({ message: 'Calculation not possible', success: false })
      }
    })
    // an error inside the worker won't crash the main thread
    worker.on('error', (error) => {
      res.send({ message: 'Calculation failed', success: false })
    })
  } catch (error) {
    console.log('Error: ', error)
  }
}

export default makeCalculation;

Worker thread (worker.js):-

import { parentPort, workerData } from 'worker_threads';

const ans = workerData.num*10

parentPort.postMessage({message: 'Successfully calculated', success: true, ans: ans})

The way this works is: on the main thread, we create a new worker using the Worker() constructor, which accepts a filename as a parameter. We also pass data to this file, which can be accessed through the workerData object. The main thread then waits for a message event from the worker, and listening for the error event as well makes it easy to avoid crashing the app in case of any errors in the worker threads.
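A common convenience is to wrap the worker in a Promise so the caller can simply await the result. This is a sketch under my own naming (runWorker); the worker code is inlined with the eval: true option purely so the example is self-contained, whereas in a real app you’d keep it in worker.js as above.

```javascript
import { Worker } from 'worker_threads'

// Inline worker source (run with eval: true) so this sketch is
// self-contained; normally you'd pass a path like './worker.js'.
const workerCode = `
  const { parentPort, workerData } = require('worker_threads');
  parentPort.postMessage({ success: true, ans: workerData.num * 10 });
`

// Wrap the worker lifecycle in a Promise: resolve on the first
// message, reject on an error or a non-zero exit code.
function runWorker(num) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(workerCode, { eval: true, workerData: { num } })
    worker.on('message', resolve)
    worker.on('error', reject)
    worker.on('exit', (code) => {
      if (code !== 0) reject(new Error(`Worker exited with code ${code}`))
    })
  })
}

const result = await runWorker(10)
console.log(result.ans) // 100
```

Now a route handler can just await runWorker(...) in a try/catch instead of juggling event listeners.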

Microservice Architecture

Usually what we do is use a monolithic architecture, which is alright to an extent. But at some point, projects grow to have multiple tables, databases, etc., and become too big to handle in this type of architecture. This is why the concept of microservices was born.

As the word says, multiple services are created, each performing a particular task, and these services are connected to each other. Basically, we are distributing the application so that each service is responsible for a specific set of actions on a specific subject.

[Image: the distribution of microservices, each with its own database]

To give you an example, imagine you are creating a copy of Uber. You’d have a table for users and a table for drivers in a single database. This is what a hobbyist developer would do. But at the scale of Uber and its large network, this would never work. This is why they need a NodeJS app that focuses entirely on users and another app that focuses on drivers alone. These may be divided into even more apps in some cases. And note that a separate database has to be created for users and for drivers too.

Now, designing such an architecture can lead to problems such as data inconsistency, inter-service communication overhead, deployment management, etc., which should all be taken into consideration before implementing it. So very careful design and observation is required to do this.

The way to implement this would simply be to create 2 different applications that run on different ports. I’ll still give you the code so that you get a clear idea:-

users.js

const express = require('express');
const app = express();

app.listen(3000, () => {
  console.log('Users service is running on port 3000');
});

drivers.js

const express = require('express');
const app = express();

app.listen(4000, () => {
  console.log('Drivers service is running on port 4000');
});
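To see the idea end to end, here’s a minimal, runnable sketch of two services talking over HTTP, using only NodeJS’s built-in http module and the global fetch (which assumes Node 18+). The endpoints and data are hypothetical; in a real setup each service would be its own app with its own database, and Express would work just as well.

```javascript
import http from 'http'

// "Drivers" service: knows only about drivers.
const drivers = http.createServer((req, res) => {
  res.setHeader('Content-Type', 'application/json')
  res.setHeader('Connection', 'close') // let the demo process exit cleanly
  res.end(JSON.stringify({ drivers: ['driver-1', 'driver-2'] }))
})

let driversUrl // filled in once the drivers service is listening

// "Users" service: composes its response by calling the drivers
// service over HTTP; the two share no code and no database.
const users = http.createServer(async (req, res) => {
  const upstream = await fetch(driversUrl)
  const data = await upstream.json()
  res.setHeader('Content-Type', 'application/json')
  res.setHeader('Connection', 'close')
  res.end(JSON.stringify({ user: 'user-42', nearby: data.drivers }))
})

// Port 0 lets the OS pick free ports for the demo; in production these
// would be fixed ports (like 3000 and 4000 above) or service hostnames.
await new Promise((ok) => drivers.listen(0, ok))
driversUrl = `http://localhost:${drivers.address().port}`
await new Promise((ok) => users.listen(0, ok))

const res = await fetch(`http://localhost:${users.address().port}/ride`)
const reply = await res.json()
console.log(reply) // the user plus the drivers fetched from the other service

users.close()
drivers.close()
```

The key point is that the users service never touches the drivers database; it only speaks to the drivers service’s HTTP interface.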

Redis Lock

As I said, Redis provides lots of services, and locking is one of them.

First of all, locks are used when you want to prevent race conditions, where two processes work on the same data at once. Imagine you want to run process A, which has to use data that is produced by another process B that’s currently running. Process A has to wait until process B has completed.

To implement this, we use a Redis lock. When process B starts, it acquires a lock, and it keeps that lock until its job is done. Only once it has finished executing does it release the lock. And now process A can resume its job.

Let’s see this in code, which will probably make it much easier to understand:-

import redis from 'redis'

const client = redis.createClient()
await client.connect()

async function processA() {
  // NX: only set if the key doesn't exist, EX: expire after 10 seconds
  const result = await client.set('my_lock', 'locked', { NX: true, EX: 10 })
  if (result === 'OK') {
    console.log('Process A is executing...')

    setTimeout(async () => {
      await client.del('my_lock')
      console.log('Lock released by Process A.')
    }, 5000)

  } else {
    console.log('Process A is waiting for the lock...')
    setTimeout(processA, 1000)
  }
}

async function processB() {
  const result = await client.set('my_lock', 'locked', { NX: true, EX: 10 })
  if (result === 'OK') {
    console.log('Process B is executing...')

    setTimeout(async () => {
      await client.del('my_lock')
      console.log('Lock released by Process B.')
    }, 3000)

  } else {
    console.log('Process B is waiting for the lock...')
    setTimeout(processB, 1000)
  }
}

processA()
processB()

So here I’m running processA() and processB(). Once processA() begins, it sets a lock using the set() function.

“NX” stands for “Not Exists”: it makes sure the lock is only set if it doesn’t already exist. “EX” stands for “Expiry”, and 10 is the number of seconds after which the lock expires, so a crashed process can’t hold it forever.

Since processA() has acquired the lock, the result in processB() will be null, so it won’t be able to do its work, and it starts to wait.

NOTE:- JavaScript is single-threaded by default, so each function executes in order. In this example, processA() acquires the lock first simply because it is called first. To fully appreciate the use case of a Redis lock, you have to have a distributed instance of your application: imagine running processA() and processB() in separate applications. That is when you’d actually need to implement this.
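To watch the acquire/wait/release dance without a second application (or even a Redis server), here’s a small in-memory simulation of the same pattern. The names withLock, setNX, and the two jobs are illustrative: setNX mimics SET ... NX, and the sketch deliberately omits the EX expiry you’d want in production.

```javascript
// In-memory stand-in for Redis: setNX succeeds only if the key
// is absent, mirroring SET ... NX; del releases it.
const store = new Map()
const setNX = (key, val) => (store.has(key) ? false : (store.set(key, val), true))
const del = (key) => store.delete(key)
const sleep = (ms) => new Promise((ok) => setTimeout(ok, ms))

const log = []

async function withLock(name, job) {
  // retry until the lock is acquired, like the setTimeout loop above
  while (!setNX('my_lock', name)) {
    log.push(`${name} waiting`)
    await sleep(10)
  }
  log.push(`${name} acquired`)
  try {
    await job()
  } finally {
    del('my_lock') // always release, even if the job throws
    log.push(`${name} released`)
  }
}

// Run both "processes" concurrently: B must wait until A releases.
await Promise.all([
  withLock('A', () => sleep(30)),
  withLock('B', () => sleep(30)),
])
console.log(log.join(' | '))
// e.g. A acquired | B waiting | B waiting | A released | B acquired | B released
```

The log shows B spinning in its retry loop until A’s release, which is exactly what the Redis version does across two separate apps.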

To summarize…

We’ve looked at Redis Cache, Node Cache, Cluster, Worker Threads, Microservice Architecture, and Redis Lock. What I’ve written in this article is just an introduction to these topics. You’ll have to do your own research, visiting the official documentation, reference projects, YouTube tutorials, etc., to get an in-depth understanding of each topic before implementing them.

If you’ve made it till here, thanks for giving this a read, have a great day! :)
