Four tips to reduce latency in real-time applications

Hasan Tuna Küçükertaş
Teknodev IT Consulting
3 min read · May 18, 2022

Creating a real-time application seems easy at first glance: tutorials and examples are everywhere. But once the application handles high traffic, you have to pay attention to hardware usage.

We are currently developing a product named Spica Engine. If I have to explain it in one sentence, it is a backend development tool for developers.

We provide a Realtime Database with low latency and optimized hardware usage, built on MongoDB and NodeJS. But delivering “low latency and optimized hardware usage” is not that easy for apps with high traffic.

Our real-life scenario was a real-time quiz game: players must answer questions before a timeout, the player who answers incorrectly is eliminated, and the opponent wins.

We keep each match in a collection for the lifetime of the game session and listen to that collection with session-specific filters. Each time a player answers, we append the answer to the match document and then notify the pair of players. As you can guess, this means one filtered change stream per player pair.
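For concreteness, here is one hypothetical shape such a match document could take; the field names are illustrative, not Spica Engine’s actual schema:

```typescript
// A hypothetical match document; field names are illustrative,
// not Spica Engine's actual schema.
interface MatchDocument {
  _id: string; // game session id, used in the change stream filter
  players: [string, string]; // the pair of players
  answers: {
    playerId: string;
    questionId: string;
    answer: string;
    answeredAt: Date;
  }[]; // appended to on every answer
}
```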

This workload drove MongoDB’s CPU usage up sharply, so we had to make some performance improvements. In this article, I will share four tips that came out of that work.

Note: I am assuming you know that you can use change streams to access real-time data changes, and that you can pass a filter (an aggregation pipeline) to a change stream to receive only the changes that match it.
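For reference, here is a minimal sketch of such a filtered change stream using the official MongoDB NodeJS driver; the connection string, database, and collection names are placeholders:

```typescript
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
const matches = client.db("quiz").collection("matches");

// MongoDB evaluates this $match pipeline against every oplog entry,
// so each differently-filtered stream adds server-side work.
const stream = matches.watch([
  { $match: { "fullDocument.sessionId": "session-42" } },
]);

stream.on("change", (change) => console.log(change));
```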

1- Consider filtering on the NodeJS side instead of in MongoDB

This may sound wrong when you first read it: if MongoDB can filter better than your application code, why do it yourself? But as the MongoDB production recommendation for change streams says:

Avoid opening a high number of specifically-targeted change streams as these can impact server performance.

Change streams watch the oplog, a special collection that receives a new entry for every write. Every differently-filtered change stream you open tails this collection, so each oplog entry has to be evaluated against every filter. If you see MongoDB’s CPU usage climb whenever documents change frequently, this is the first thing to check.

Consider moving this workload from MongoDB to NodeJS. And keep in mind that this is not a search through thousands of documents: we only decide whether a single incoming change matches our filter.
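Here is a sketch of that idea, reusing the `matches` collection from the snippet above. The stream is opened without a pipeline and the match test runs in Node; `sessionOfInterest` and the notification step are placeholders:

```typescript
const sessionOfInterest = "session-42";

// No pipeline: MongoDB does no per-stream filtering for us.
const unfiltered = matches.watch();

unfiltered.on("change", (change) => {
  // We only test the single incoming change against our filter;
  // we never scan thousands of stored documents.
  if (
    change.operationType === "update" &&
    String(change.documentKey._id) === sessionOfInterest
  ) {
    // ...notify the pair of players here
  }
});
```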

2- Create one change stream per collection and attach the filters to it

As I mentioned before, creating a change stream for each filter on the same collection causes a high CPU load. But you still have many unique filters and need to notify users of the changes relevant to them.

Change streams are readable streams, so you can attach other consumers to them. For example, create a single change stream for the collection, create a handler for each filter, and attach the handlers to that stream, as in the sketch below. This reduces the stream count to one per collection, and each filter handler decides whether to forward a change to its listeners.
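A sketch of that fan-out, again assuming the `matches` collection from earlier; `FilterHandler` is an illustrative type of ours, not a driver API:

```typescript
import type { ChangeStreamDocument } from "mongodb";

type Listener = (change: ChangeStreamDocument) => void;

interface FilterHandler {
  matches: (change: ChangeStreamDocument) => boolean;
  listeners: Set<Listener>;
}

const handlers: FilterHandler[] = [];

// One change stream per collection, shared by every filter.
const shared = matches.watch();

shared.on("change", (change) => {
  // One pass over the attached filters instead of one change stream each.
  for (const handler of handlers) {
    if (handler.matches(change)) {
      handler.listeners.forEach((listener) => listener(change));
    }
  }
});
```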

3- Add new listeners to an existing filter instead of duplicating it

As long as a filter is already attached to the change stream, you don’t need to attach it again. Add the new listener to that filter’s listener set, and matching changes will be delivered to it just like to the others.
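Continuing the previous sketch, a filter can be looked up by a canonical key (its JSON form here) and reused when it already exists; `buildMatcher` is a hypothetical helper that turns a filter object into a predicate over a single change event:

```typescript
// Hypothetical helper: compiles a filter object into a predicate that
// tests a single change event (e.g. using a query-matching library).
declare function buildMatcher(
  filter: Record<string, unknown>,
): (change: ChangeStreamDocument) => boolean;

const handlersByKey = new Map<string, FilterHandler>();

function addListener(filter: Record<string, unknown>, listener: Listener) {
  const key = JSON.stringify(filter); // assumes deterministic key order
  let handler = handlersByKey.get(key);
  if (!handler) {
    // New filter: attach it to the shared change stream (tip 2).
    handler = { matches: buildMatcher(filter), listeners: new Set() };
    handlersByKey.set(key, handler);
    handlers.push(handler);
  }
  // Existing filter: just register one more listener.
  handler.listeners.add(listener);
}
```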

4- Manage these change streams, filters, and listeners

Because change streams, filters, and listeners are stored and reused, their lifecycles need to be managed carefully.

For example, when a client disconnects, remove its listener from the filter. If the filter then has no listeners left, detach it from the change stream; and if the change stream has no filters left, close it.
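Continuing the same sketch, tear-down runs in the reverse order of tip 3:

```typescript
function removeListener(filter: Record<string, unknown>, listener: Listener) {
  const key = JSON.stringify(filter);
  const handler = handlersByKey.get(key);
  if (!handler) return;

  handler.listeners.delete(listener);

  if (handler.listeners.size === 0) {
    // Filter has no listeners left: detach it from the change stream.
    handlersByKey.delete(key);
    handlers.splice(handlers.indexOf(handler), 1);
  }

  if (handlers.length === 0) {
    // No filters left on this collection: close the change stream.
    void shared.close();
  }
}
```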

Summary

I have tried to give you some tips for reducing resource usage while keeping latency low. Some of them are specific to MongoDB and NodeJS, but you can adapt the ideas to your own stack. I hope this was helpful, or at least gave you an idea.
