MongoDB in production: How connection pool size can bottleneck application scale

Kyle Martin
Jan 7 · 5 min read

Understanding how MongoDB connection pools and pool sizing work is a fundamental part of running an effective MongoDB client application at global scale.

This article looks at what connection pool size is in MongoDB and how we can test the effect the pool size setting has on the time taken to complete queries. If set incorrectly, the pool size can lead to hidden application bottlenecks, with complex queries slowing down simple ones.

As your application scales, the number of queries it sends to MongoDB increases. As the application grows in complexity, its query requirements become larger as well, and so does the potential for hidden bottlenecks.

Let’s say you’re working on an application with two types of users:

1) General users of the application, who expect fast loading times. These users account for the majority of application traffic.

2) Administrators of the system, who need to perform tasks that can span thousands of users and their related items. These administration actions are usage intensive and resource heavy.

The first user set (general users) are paying customers, of which there are thousands at any point in time. Any action these users take in the application, such as logging in, viewing pages, and using other application features, needs to be extremely fast. The database queries required for these features are very small, and should be very fast.

The application features used by the second user set (administrators) will generally involve working with large amounts of data, which in turn requires large and complex database queries.

After reports from users that the application has become slower, you start looking at application metrics. You find that MongoDB queries have suddenly become very slow, even for indexed findOne queries which used to be extremely fast.

To work out what’s happened here, we need to understand MongoDB pool sizing.

Example:

A full sample application can be found on GitHub; a sketch of it is shown below. This article's code samples focus on the Node.js MongoDB driver; however, the underlying concepts remain the same across the various MongoDB drivers.
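
The embedded sample itself isn't reproduced here, so the following is a minimal sketch of the setup described, rather than the author's original code. It assumes the Node.js MongoDB driver 3.x (where the option is called poolSize), Express, a local mongod, and a hypothetical users collection; emulating the slow query with $where and sleep() is just one way to fake a long-running operation.

// index.js -- minimal sketch, not the original sample application
const express = require('express');
const { MongoClient } = require('mongodb');

// Pool size is read from the environment so it can be changed per run.
const poolSize = parseInt(process.env.MONGO_POOLSIZE, 10) || 5;
const app = express();

MongoClient.connect('mongodb://localhost:27017', {
  useUnifiedTopology: true,
  poolSize, // called maxPoolSize in driver 4.x and in the connection string
}).then((client) => {
  const users = client.db('test').collection('users');

  // Emulates a query that takes roughly 5 seconds to complete. Requires
  // server-side JavaScript to be enabled on the MongoDB server.
  app.get('/slow', async (req, res) => {
    await users.findOne({ $where: 'sleep(5000) || true' });
    res.send('slow query done');
  });

  // A simple indexed lookup that should return within a few milliseconds.
  app.get('/fast', async (req, res) => {
    await users.findOne({});
    res.send('fast query done');
  });

  app.listen(3000, () => console.log(`Listening on 3000, pool size ${poolSize}`));
});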

In the above code sample, we have a minimal Express.js server with two endpoints, /slow and /fast. One database connection is created and then shared for all MongoDB queries; this is much more performant than opening and closing TCP connections for each and every query the application makes to the database.

As we are now sharing the database connection across the application, we need to understand how the pool size option of this connection comes into play. The pool size determines how many queries may be run over the connection in parallel, similar to the number of lanes in a tunnel. An incorrectly set value can bottleneck the whole application.

The /slow route uses an example query which will take 5 seconds to return a result — this is designed to emulate a slow query in your application such as an administrator loading every user, or a todo list application loading a large number of items.

The /fast route is an example query that should return within a small number of milliseconds — this is designed to emulate a very simple, yet critical user facing query in your application, for example, a general user logging in.

In the code sample, the environment variable MONGO_POOLSIZE can be used to dynamically change the pool size on application start. With this, we can begin to benchmark our application with various pool size configurations. To accurately test this, we can use the application load testing tool artillery.io.
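
The configuration file itself isn't reproduced here; a sketch of an artillery.io config matching the test described next might look like the following (the file name loadtest.yaml is taken from the commands below).

# loadtest.yaml -- sketch of the load test configuration
config:
  target: "http://localhost:3000"
  phases:
    - duration: 5      # run the phase for 5 seconds...
      arrivalRate: 1   # ...with one new virtual user arriving per second
scenarios:
  - flow:
      - get:
          url: "/slow"
      - get:
          url: "/fast"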

The above yaml configuration file can be used with the artillery.io CLI to run a performance test. This performance test will target our running application at http://localhost:3000 and emulate a new user arriving at the application every second, for 5 seconds. Each user will visit both the /slow and /fast endpoints and wait for a response from the server.

First, let’s run the loadtest using a pool size of 3. This means that our database connection will be able to run a maximum of three queries to MongoDB at one time.

$ MONGO_POOLSIZE=3 node index (starts the application with a pool size of 3)

$ artillery run loadtest.yaml

With the pool size set to three, all of the tests completed successfully, with a maximum response time of 9017ms (our slow query emulates one which takes 5000ms).

What happens when we lower our pool size to 1? This means that our database connection will only be able to run one MongoDB query at one time.

$ MONGO_POOLSIZE=1 node index

$ artillery run loadtest.yaml

Though every request succeeded, the maximum response time increased from 9017ms to 21142.7ms. Why? Our concurrent requests were blocked by the first slow query, so all of our fast queries suffered as a result.

What happens if we increase the pool size to 10? This means that our database connection will be able to run up to ten MongoDB queries at one time.

$ MONGO_POOLSIZE=10 node index

$ artillery run loadtest.yaml

Our maximum response time is now 5019ms — the time required to run our example slow query. This shows that no other queries were blocking the database connection during this test, meaning that our /slow and /fast endpoints provided the expected result to the user without being bottlenecked.

So, what is this pool size setting, and why is it stalling my application's ability to scale?

Picture the MongoDB connection as a tunnel. The pool size determines how many lanes of traffic can enter the tunnel at a time. If you have a one-lane tunnel with a bus and a Ferrari, the Ferrari can’t get through the tunnel until the bus has. This means that sometimes, a slow bus can make even a Ferrari slow right down if there aren’t enough lanes.

When we add extra lanes, i.e. increase our connection pool size, we make room for the Ferrari to speed through rather than being held up by the slow bus.

What should I do about it?

If you expect your application to require long-running queries or many concurrent queries, you should benchmark your application by simulating expected production load with a tool such as JMeter or artillery.io to investigate the impact of increasing the pool size for the MongoDB connection. The optimal pool size for your application will also need to take into account other clients sharing the same MongoDB server, and resource usage on the MongoDB server itself should also be monitored.
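
For reference, here is a sketch of how the pool size itself can be set with the Node.js driver; the option name depends on the driver version, so treat the exact values as illustrative rather than a recommendation.

const { MongoClient } = require('mongodb');

// Driver 3.x: the option is called poolSize.
const client = new MongoClient('mongodb://localhost:27017', { poolSize: 10 });

// Driver 4.x+ (and the standard connection string option): maxPoolSize.
const client4 = new MongoClient('mongodb://localhost:27017/?maxPoolSize=10');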

Tools such as Azure Application Insights can be used to instrument the MongoDB query performance in your application as detailed in this blog post.

