Gracefully Scaling to 10k PostgreSQL Connections for $35/mo, Part Two

Alexander H. Black
FutureTech Industries
Jun 21, 2018 · 4 min read

Leave a comment below for a chance at $100 worth of Factoids, Bitcoin, or an Amazon, Google Play, or Steam gift card of equivalent value, details below!

We recommend checking out Part One of this series where we discuss the PostgreSQL pooling concepts available in PgBouncer, the same design that we use to serve our clients’ planet-scale needs here at FutureTech Industries.

The architecture we designed in our last article is incredibly powerful. It'll scale to tens of thousands of connections and processes with minimal intervention, and it covers the general concept of external connection pooling. However, while that configuration is ideal for many use cases, an application's design may involve several distinct components that need to be pooled independently for maximum resiliency and flexibility.

For example, in the demonstrative application we described in our last article, an excess of traffic in the notification queue could delay or prevent the billing queue from processing its queries. The web frontend, presumably the most visible part of the application, may error out if the backend holds all the upstream PgBouncer connections for extended periods of time. A bug in one component can starve other components of database connections, which isn't very fault tolerant. That's what we're setting out to tackle this time.

What if, instead of running one group of PgBouncers, we ran multiple groups of PgBouncers, isolated by component or functionality? That would mean one group each for web, notifications, and billing. This strategy works incredibly well, and it means we can effectively reserve connections for specific parts of the application.
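To make the isolation concrete, here's a minimal sketch of what each component's database target might look like with one PgBouncer group per component. The host names and credentials are hypothetical placeholders, not anything from our setup:

web:           postgres://app_user@pgbouncer-web:6432/appdb
notifications: postgres://app_user@pgbouncer-notifications:6432/appdb
billing:       postgres://app_user@pgbouncer-billing:6432/appdb

Each component only ever talks to its own group, so exhausting one pool can't take connections away from the others.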

Calculating Your Ideal Configuration: Settings

Basic Configuration

In general, any PgBouncer configuration has a handful of options that are settable in pgbouncer.ini. The first step involves configuring the [databases] section, which is too specific to individual use cases for this article to provide more direction.
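That said, a small illustrative entry may help orient you. This is only a sketch; the host, port, and database name are hypothetical placeholders:

[databases]
appdb = host=10.0.0.5 port=5432 dbname=appdb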

The most important setting we can configure, besides database connection info, is pool_mode = transaction. The default, session pooling, requires a client to disconnect before its connection can be reused by other clients. With transaction pooling, we get the behavior this article assumes: any connection that's not in a transaction can be used by whoever needs it next. PostgreSQL statements run outside a transaction are given their own for the duration of the statement; if your app doesn't use transactions, we're effectively multiplexing individual queries.

IMPORTANT: If transaction pooling is in use, your app must not use any session features whatsoever, or must always use the same parameters within the same pool.
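To illustrate what counts as a session feature, here are a few hypothetical statements that can misbehave under transaction pooling, because the next transaction may run on a different server connection (the table, channel, and lock names are made up for illustration):

SET search_path TO tenant_a;                                   -- session parameters don't follow the client
PREPARE get_user (int) AS SELECT * FROM users WHERE id = $1;   -- prepared statements live on the server connection
LISTEN new_orders;                                             -- LISTEN/NOTIFY needs a dedicated session
SELECT pg_advisory_lock(42);                                   -- session-level advisory locks outlive the transaction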

Next, set max_client_conn and max_db_connections to the per-instance values we calculated in the last article. For our example, that gives us:

max_client_conn = 10000
max_db_connections = 10

PgBouncer waits for a specified delay before closing idle server connections. The delay is 600 seconds by default, and any app of significant size won't leave connections idle that long. Though not strictly necessary, setting server_idle_timeout = 60 can free up resources on the database server during unusual workloads.

Platform Tuning

One of the most crucial settings we have available is listen_backlog. The PgBouncer manual refers you to the man page for the listen syscall, which is a less-than-helpful explanation. The specific underlying behavior depends on your operating system kernel and configuration, but the general idea is this: when you first connect to a server over TCP, the kernel puts your request to establish a connection in a queue. If this queue is full, it rejects the connection outright. PgBouncer eventually gets around to accepting more connections, thus removing them from the queue and communicating to the client that the connection was successful.

The problem is that PgBouncer's default is a conservative 128. When PgBouncer is occupied proxying other traffic, new connection accepts can lag behind. The queue then fills up, and clients are told their connection was refused. This is especially prevalent in apps like the above, which frequently open and close connections for only a couple of queries. Set this to at least 4096, but higher is better.
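As a sketch, assuming a Linux host, raising the backlog involves both the PgBouncer setting and the kernel cap that silently limits it (the values shown are illustrative):

listen_backlog = 4096

The kernel clamps the effective backlog to net.core.somaxconn, so raise that as well, for example with sysctl -w net.core.somaxconn=4096, and persist it in your sysctl configuration.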

A setting that you don't need to change but which is worth mentioning is tcp_keepalive. PgBouncer enables this behavior by default. Many PgBouncer setups involve traffic passing through one or more load balancers, such as Google Cloud's GCLB. If connections go idle for long enough, the load balancer may drop the connection without notifying either side. tcp_keepalive prevents this from happening, and shouldn't be turned off in most use cases.
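If a load balancer in your path has an unusually aggressive idle timeout, PgBouncer also exposes the keepalive timing knobs. This is only a sketch with illustrative values; by default these are 0, which defers to the operating system's settings:

; on by default; keeps idle connections alive through load balancers
tcp_keepalive = 1
; seconds of idle time before the first keepalive probe (illustrative value)
tcp_keepidle = 60
; seconds between subsequent probes (illustrative value)
tcp_keepintvl = 10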

Finally, the config reference mentions many settings that alter pooling behavior: default_pool_size, min_pool_size, reserve_pool_size, etc. These are worth exploring if your application has long periods of inactivity followed by extremely heavy peaks.
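As a rough sketch of how these fit together, the values below are purely illustrative and would need tuning against your own measurements:

; steady-state server connections per database/user pair
default_pool_size = 10
; keep at least this many connections warm during quiet periods
min_pool_size = 5
; extra connections allowed once clients have waited reserve_pool_timeout seconds
reserve_pool_size = 5
reserve_pool_timeout = 3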

Wrapping Up

PgBouncer offers a plethora of flexible options that adapt to almost any use case, and getting them right can be quite overwhelming. Some of the most important are max_client_conn, max_db_connections, and listen_backlog, which individually have minimal impact but collectively make tens of thousands of connections viable in the most minimal deployments. We're excited to publish Part Three, where we'll walk through a practical, high-availability deployment at scale with Kubernetes!

Thanks to everyone who made this possible! Follow us on Medium for Part Three, where we take all of the information we’ve gathered and use Kubernetes concepts to enhance and deploy our configuration!

If Factom or PostgreSQL pique your interest, you should check out our series on deploying HA Factom and article on calculating and tuning PgBouncer values.

Be sure to submit your comment here on Medium to enter into the raffle, which closes midnight PST on 06/28/2018. For a comment to be considered for raffle eligibility, it needs to adhere to Medium’s Rules and Content Guidelines and contribute to the discussion. We’ll reach out to the raffle winner during the week of 07/02/2018. Best of luck!

Co-authored by Austin Hicks, Madison Steiner, and the rest of the wonderful team at FutureTech Industries.
