Understanding NodeJS Clustering in Docker-Land
“NodeJS is single-threaded.”
Almost every conversation about Node starts with or contains this mantra. Advocates use it to describe its simplicity. Detractors use it to explain why it’s so terrible. Love it or hate it, it’s a fact of life every Node developer must understand and embrace to get the most value out of the platform.
Single-threading doesn’t mean Node can’t do many things at once — many articles have been written on taking Node to massive scales, such as Caustik’s famous “Node.js w/1M concurrent connections!” post (and 10M followup). Experienced Node developers know all about writing asynchronous code, dealing with callback hell, using Promises, and so on. What this statement REALLY means is that, without some additional work, only one CPU core will be used per process. That’s the real boundary.
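To make that boundary concrete, here is a minimal sketch (the port number and the busy-loop are purely illustrative): no matter how asynchronous the surrounding code is, CPU-bound work runs on the single JavaScript thread, pegs exactly one core, and holds up every other request in the process.

```js
// A minimal illustration of the single-core boundary (port is arbitrary).
const http = require('http');

http.createServer((req, res) => {
  // Synchronous, CPU-bound work: this busy-loop occupies the one JavaScript
  // thread, so it uses exactly one core and blocks every other request
  // until it finishes.
  let total = 0;
  for (let i = 0; i < 1e9; i++) total += i;
  res.end(`done: ${total}\n`);
}).listen(3000);
```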
Why not just run more instances of the application on each server? Daemon management tools like supervisord can handle this quite easily. Shouldn’t that be the end of the story?
If you were writing a batch data processor, it might even be that simple — but sadly, those aren’t the norm. The vast majority of applications need to listen for client network connections, and only one process may “bind” to a port at a time. Just search for “EADDRINUSE” on StackOverflow if you want to stroll through the hundreds of developers who have run into this challenge over the years.
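Here is a minimal sketch of that collision (the port number is arbitrary): run two copies of this script and the second one will fail to bind, because the first already owns the port.

```js
// Start this script twice: the second instance cannot bind the port.
const http = require('http');

const server = http.createServer((req, res) => {
  res.end(`handled by pid ${process.pid}\n`);
});

server.on('error', (err) => {
  // The second instance lands here with err.code === 'EADDRINUSE'.
  console.error('failed to bind:', err.code);
  process.exit(1);
});

server.listen(3000);
```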
Enter the Cluster Module
The Cluster module is a sometimes-misunderstood but very powerful feature of Node: it lets you easily spawn child processes and thus use more, or even all, of the available CPU cores on a server. That alone is helpful, but it’s not the end of the story — supervisord could have done that without any additional coding, right?
What’s special is that the Cluster module ALSO monitors its children for attempts to listen on a socket. If a child process attempts to bind, say, to port 80 (HTTP), the parent will do this on its behalf, passing incoming connections to the child. If more than one child binds to the same socket, the parent will load-balance the incoming connections, spreading the workload around and making good use of those CPU cores.
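Here is a minimal sketch of that behavior using the core cluster API (worker count and port are illustrative, and binding port 80 requires sufficient privileges): the parent forks one worker per core, each worker calls listen(), and the parent accepts connections and hands them out.

```js
// Parent/child split using the built-in Cluster module.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Parent: fork one worker per available CPU core.
  os.cpus().forEach(() => cluster.fork());

  // Replace any worker that dies so capacity stays constant.
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} exited; forking a replacement`);
    cluster.fork();
  });
} else {
  // Worker: "listen" on port 80. The parent actually owns the socket and
  // passes incoming connections to the workers (round-robin on most platforms).
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(80);
}
```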
Many frameworks make this a zero-effort task to set up. I’m personally a fan of ActionHero (for reasons I’ll discuss in a future post), and this is done simply by adding the command-line parameter “startCluster”. ActionHero will leverage the Node Cluster module to efficiently spread Web, WebSocket, and TCP client workload across any or all of the CPU cores you want.
Enter Docker: Exit the Cluster Module?
The world turns, and if you wait long enough, it turns all the way around. As soon as you start using Docker, the conversation suddenly shifts to a new way of managing resources.
Docker containers are streamlined, lightweight virtual environments, designed to pare processes down to their bare minimum. Processes that manage and coordinate their own resources are no longer as valuable. Instead, management stacks like Kubernetes, Mesos, and Cattle have popularized the idea that these resources should be managed infrastructure-wide: CPU and memory are allocated by “schedulers”, and network resources are managed by stack-provided load balancers.
In this type of environment, a process that attempts to use too many CPU cores can become a trouble-maker. Although there are some variations on this theme, standard practice is to have a process use just what it is assigned, and to let the management stack handle clustering by running more instances of that process. Port conflicts are not a factor because each Docker instance receives its own private IP, and the management stack provides a load balancer that does essentially the same thing Node’s Cluster module does (but often with much more configurability and sophistication).
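With Kubernetes, for example, the “cluster” moves out of the process and into the deployment manifest: you ask the scheduler for several single-core replicas and put a Service or load balancer in front of them. The names and numbers below are purely illustrative, not a recipe for any particular app.

```yaml
# A sketch of a Deployment that replaces in-process clustering:
# four single-core replicas, load-balanced by the stack instead of by Node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app            # hypothetical name
spec:
  replicas: 4                  # the scheduler runs four instances
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
        - name: my-node-app
          image: my-node-app:latest   # hypothetical image
          resources:
            limits:
              cpu: "1"         # one core per container; no Cluster module needed
          ports:
            - containerPort: 8080
```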
Architectural decisions are complicated and very opinionated, and vary for each application and environment. To conclude this write-up, I thought it would be helpful to boil this down to a simple table to help you, dear reader, decide whether to use the Cluster module or not. This should be accurate for the vast majority of the use-cases out there. Comments appreciated!
- Raspberry Pi: Nope
- Standard Server: Probably
- Docker Container: Probably Not