Gracefully Scaling to 10k PostgreSQL Connections for $35/mo, Part One

Alexander H. Black
FutureTech Industries
Jun 8, 2018 · 7 min read


On a day-to-day basis, our team focuses on designing and maintaining robust, planet-scale, high-availability services that fit our clients’ requirements while needing minimal upkeep. Use cases vary, but relational databases frequently arise as the strongest candidate. An application might use an ORM like SQLAlchemy, Sequelize, or Django, where the framework hides most of the details, or the code might consist of a handful of raw SQL queries. Several databases could fit the bill, but we’re fans of PostgreSQL, largely because of its reliability, flexibility, performance, and, for many of our clients, its ability to run on Kubernetes. Unfortunately for the aspiring web developer, or the poor soul on the DevOps team who received an incident alert at 2am, PostgreSQL is optimized for a few stateful, long-running sessions rather than for hundreds of processes connecting and disconnecting frequently in parallel, which is what most real-world deployments look like.

The shortcomings of a design built around infrequent connections and disconnections are multiplied when infrastructural tools like Kubernetes create and destroy connections regularly as part of deployments or rolling restarts. A default PostgreSQL configuration allows only 100 connections. The first thing to try is bumping max_connections, and it works, for a few days or weeks, until the problem repeats itself. The business grows, and peak traffic becomes heavier. How can infrastructure stay stable with so much overhead spent on making and breaking connections?!
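If you’re curious what that quick fix looks like, here’s a minimal sketch using psycopg2 against a hypothetical local server; the DSN is a placeholder, and remember that a new max_connections value only takes effect after a full restart:

```python
import psycopg2

conn = psycopg2.connect("postgresql://postgres@localhost:5432/postgres")
conn.autocommit = True  # ALTER SYSTEM can't run inside a transaction
with conn.cursor() as cur:
    cur.execute("SHOW max_connections;")
    print("current max_connections:", cur.fetchone()[0])
    # Writes to postgresql.auto.conf; requires a server restart,
    # not just a reload, before the new limit applies.
    cur.execute("ALTER SYSTEM SET max_connections = 500;")
conn.close()
```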

Taming The Beast

It turns out that unless an application takes advantage of some infrequently used features of PostgreSQL sessions, connections can be multiplexed through a central pool to reduce overhead substantially. That’s where PgBouncer comes in, keeping all your cooks in line.

Django and other web frameworks can be configured to open and close PostgreSQL connections around every request, and background processes, such as Celery or RQ workers, can use tens or hundreds of connections per running instance. Though that would appear to pose a problem, connection pooling upstream can resolve these concerns without code changes.
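To make that concrete, here’s a sketch of Django settings pointed at a pooler; the host name “pgbouncer” and the credentials are placeholders for whatever your deployment uses:

```python
# settings.py (sketch)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "app",
        "USER": "app",
        "PASSWORD": "secret",
        "HOST": "pgbouncer",  # the pooler, not PostgreSQL itself
        "PORT": "6432",       # PgBouncer's conventional listen port
        # 0 closes the connection at the end of every request;
        # PgBouncer absorbs the resulting churn cheaply.
        "CONN_MAX_AGE": 0,
    }
}
```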

Understanding The Solution

Most PostgreSQL queries and sessions in a well-designed application are short-lived, on the order of a couple of milliseconds, and return very small result sets. Most web apps also don’t take advantage of the stateful features of PostgreSQL sessions, or always use the same stateful parameters, like setting the timezone. As a result, a PostgreSQL connection pooler can open a few connections to the upstream database and multiplex client queries over them. This approach easily scales to tens of thousands of concurrent web processes and hundreds of thousands of incoming client connections while keeping a minimal footprint.

PgBouncer transparently sits between a web framework and its PostgreSQL instance. Clients connect to it exactly as though it’s a PostgreSQL database, sans the connection limit. Though this sounds amazingly easy in theory, it’s more involved to get right in practice. On top of that, deployment management frameworks such as Kubernetes add further layers of complexity.
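From the driver’s point of view, nothing changes but the address. A minimal psycopg2 sketch, with placeholder host and credentials:

```python
import psycopg2

# To the client, PgBouncer is just PostgreSQL on another port:
# 6432 is its conventional listen port, versus 5432 for the database.
conn = psycopg2.connect(
    host="pgbouncer", port=6432, dbname="app", user="app", password="secret"
)
with conn.cursor() as cur:
    cur.execute("SELECT now();")
    print(cur.fetchone()[0])
conn.close()
```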

This series lays out everything one needs to know to get PgBouncer working flawlessly on Kubernetes. Different configurations may have different complexities, but in our experience, this encompasses the majority of issues that arise.

Calculating Your Ideal Configuration: Instance Count

In most configurations, there should be a load balancer in front of the PgBouncer instances, much like web applications and related components. To scale any piece of software, calculating the number of copies to run is of utmost importance.

PgBouncer is more than happy to accept 10,000+ incoming client connections per instance while using only 512Mi of memory and about a third of a CPU core. What actually matters is how many queries need to run concurrently, and therefore how many upstream connections to the database you need. If a connected client tries to run a query when no connection is available in the pool, PgBouncer by default queues the query until a connection frees up. This is often the ideal behavior for production, although it can hide configuration issues in your development environment(s).
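You can watch that queueing happen through PgBouncer’s admin console, the special virtual database named pgbouncer. A sketch, assuming a user listed in stats_users and placeholder connection details:

```python
import psycopg2

conn = psycopg2.connect(host="pgbouncer", port=6432, dbname="pgbouncer",
                        user="stats", password="secret")
conn.autocommit = True  # the admin console doesn't support transactions
with conn.cursor() as cur:
    cur.execute("SHOW POOLS;")
    # cl_waiting counts clients queued for an upstream connection;
    # a persistently nonzero value means the pool is saturated.
    for row in cur.fetchall():
        print(row)
conn.close()
```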

Consider the following example. Let’s say we’re writing a notification publication service. Assuming the upstream PostgreSQL server is capable of supporting 500 incoming connections, the web app is a REST API whose most common endpoints run queries that take 5 query-milliseconds (QMS henceforth), and the load equates to roughly 10K requests per second. A framework for background tasks is also in use (our preference is Celery), with the following queues in a queueing solution like RabbitMQ or Redis:

  • The notifications queue: a fast queue that sends e-mails and other notifications, and needs to run up to 1000 tasks concurrently in the worst case to meet application latency requirements. These tasks run many short queries.
  • The billing queue: a slow queue that spends 90% of the time doing nothing, but runs long-running billing reports on the order of minutes per query once a day.

Starting with the web servers:

Each application thread can handle at most 1000 QMS of query time per second, because it’s not possible to spend more than a second running queries every second. This gives us 1000 QMS per second / 5 QMS per request = 200 requests per second as a theoretical max throughput per thread. However, this is the real world, so let’s cut that in half to 100 requests per second per thread. For the purposes of this example, how the threads are spread across instances is unimportant; the point is that each of them needs a database connection.

10,000 requests per second/100 requests per second per thread = 100 threads. So we need 100 upstream connections for the web servers.
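The same arithmetic as a runnable back-of-the-envelope sketch:

```python
QMS_PER_REQUEST = 5      # query time per request, in milliseconds
SAFETY_FACTOR = 2        # halve the theoretical throughput
TARGET_RPS = 10_000      # peak requests per second

theoretical_rps = 1000 / QMS_PER_REQUEST         # 200 per thread
realistic_rps = theoretical_rps / SAFETY_FACTOR  # 100 per thread
web_threads = TARGET_RPS / realistic_rps         # 100 threads
print(f"web tier needs {web_threads:.0f} upstream connections")
```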

At this point, it’s sane to ask “Why not just connect directly? Each of those requests is going to open and close a DB connection for 5 QMS of querying.” That’s 10,000 connect/disconnect pairs a second. PostgreSQL connection poolers such as PgBouncer are highly optimized for this use case, whereas PostgreSQL itself has to do computationally expensive work, such as starting and stopping processes.
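If you want to see the difference on your own stack, here’s a rough timing sketch; the DSNs are placeholders, comparing a direct connection against one through the pooler:

```python
import time
import psycopg2

def ms_per_connect(dsn: str, n: int = 100) -> float:
    start = time.perf_counter()
    for _ in range(n):
        psycopg2.connect(dsn).close()  # one full connect/disconnect pair
    return (time.perf_counter() - start) / n * 1000

for label, dsn in [
    ("direct", "postgresql://app:secret@postgres:5432/app"),
    ("pooled", "postgresql://app:secret@pgbouncer:6432/app"),
]:
    print(label, f"{ms_per_connect(dsn):.2f} ms per connect/disconnect")
```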

Furthermore, consider the notifications queue. There’s no way to allocate enough upstream connections for 1000 concurrent tasks to connect directly without PgBouncer, and there’s no simple math to determine the queue’s needs, but in this example, those tasks spend most of their time talking to external services. That throughput is achievable on 5 or 10 connections, say 20 in the worst case.

And this is where PgBouncer comes into its own. There’s no way in the Django and Celery stack to easily control when database connections are opened and closed. However, since PgBouncer can pool them together invisibly, idle connections that the application holds open don’t tie up upstream database connections (in transaction pooling mode, a server connection is only assigned to a client while a transaction is actually in flight).

Finally, we must take into account the billing queue. The billing queue is problematic because of its long-running queries, not its concurrency. PgBouncer can reuse connections that aren’t in use, but the billing queue holds them for minutes at a time. While we got away with only 20 connections for the notifications queue, the billing queue needs one per report. Otherwise, it’ll begin taking capacity from the notifications queue and the web servers by holding connections they need to meet the application’s requirements. In practice, staggering the billing reports solves this problem, but for the sake of this illustrative example, let’s assume the queue needs 10 connections.
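Staggering can be as simple as offsetting the schedules. A hedged Celery beat sketch; the task name and report regions are purely illustrative:

```python
from celery import Celery
from celery.schedules import crontab

app = Celery("billing")
app.conf.beat_schedule = {
    "billing-report-us": {
        "task": "billing.run_report",  # hypothetical task
        "schedule": crontab(hour=2, minute=0),
        "args": ("us",),
    },
    "billing-report-eu": {
        "task": "billing.run_report",
        # 30 minutes later, so the long-running reports never
        # hold pool connections at the same time.
        "schedule": crontab(hour=2, minute=30),
        "args": ("eu",),
    },
}
```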

So, our conclusion is that we need 130 upstream connections: 100 for the web servers, 20 for the notifications queue, and 10 for the billing queue.

Figuring out how many PgBouncers to split this across is more of an art than a science, and you should expect to tune your initial choices. There’s negligible harm in having too many. We have 500 database connections to play with, so we can safely use 400 of them: a Kubernetes rollout can briefly allow up to 2x the usual traffic, and we have to keep some connections open for PgBouncer rollover and maintenance tasks.

For an app such as this one, 40 PgBouncer instances, each configured to accept 10,000 incoming connections and open only 10 connections upstream, would be a good choice. If one of the PgBouncers fails, only 1/40th of the in-use database connections are lost. This configuration can also absorb 400,000 incoming connections from the app, meaning there’s plenty of headroom.
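A quick sanity-check sketch, mirroring the example’s numbers:

```python
SERVER_MAX = 500           # PostgreSQL max_connections
BUDGET = 400               # the share we allow ourselves to use
NEEDED_UPSTREAM = 130      # web (100) + notifications (20) + billing (10)
INSTANCES = 40
UPSTREAM_PER_INSTANCE = 10
CLIENTS_PER_INSTANCE = 10_000

assert INSTANCES * UPSTREAM_PER_INSTANCE <= BUDGET <= SERVER_MAX
assert INSTANCES * UPSTREAM_PER_INSTANCE >= NEEDED_UPSTREAM
print("blast radius of one failed PgBouncer:", f"1/{INSTANCES}")
print("total client capacity:", INSTANCES * CLIENTS_PER_INSTANCE)  # 400,000
```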

Wrapping Up

PostgreSQL is powerful, performant, relational, and capable of scaling to handle many use cases. Getting PostgreSQL up and running might be easy, but getting the configuration right for production inbound connections is an art in and of itself. We’re excited to publish Parts Two and Three of this series, which cover reserving connections for different components of a system, building a configuration file to fit your needs, and deploying to Kubernetes via Helm.

This is Part One of our series on tuning PgBouncer for ideal configuration. Check out Part Two over here, and if you haven’t, follow us on Medium for Part Three!

Update 06/22/2018: We’ve updated the article to more clearly explain the calculation process. Special thanks to Girish Sastry.

