How we handle deploys and failover without disrupting user experience

Mixpanel Eng
Sep 28, 2012 · 4 min read

At Mixpanel, we believe giving our customers a smooth, seamless experience when they are analyzing data is critically important. When something happens on the backend, we want the user experience to be disrupted as little as possible. We've gone to great lengths to learn new ways of maintaining this level of quality, and today I want to share some of the techniques we're employing.

During deploys

Mixpanel runs Django behind nginx using FastCGI. Some time ago, our deploys consisted of updating the code on our application servers, then simply restarting the Django process. This would result in a few of our rubber chicken error pages when nginx failed to connect to the upstream Django app servers during the restart. I did some Googling and was unable to find any content solving this problem conclusively for us, so here's what we ended up doing.

The fundamental concept is very simple. Suppose that currently, the upstream Django server is running on port 8000. I added this upstream block:
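The snippet itself didn't survive this copy of the post, but the surrounding text pins it down: an upstream named `app` (the name used by `fastcgi_pass` below), with the live server on port 8000 and the spare on 8001 marked `down`. A reconstruction, assuming loopback addresses:

```nginx
upstream app {
    server 127.0.0.1:8000;        # live Django FastCGI server
    server 127.0.0.1:8001 down;   # spare port, flipped live on deploy
}
```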

So now, when we fastcgi_pass to app, all requests get sent to our Django server running on port 8000. When we deploy, we pull the most up-to-date code and start a new Django server on port 8001. Then we rewrite the upstream app block to mark 8000 as down instead of 8001, and we perform an nginx reload. The reload starts new worker processes running the new configuration, and once the old worker processes finish their in-flight requests, they are gracefully shut down, resulting in no downtime.
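The flip described above can be sketched as a few shell commands. This is a hedged illustration, not Mixpanel's actual deploy script: the config path is a temp file here, and the `sed` expressions assume the exact upstream layout shown earlier. In production you would first start the new Django FastCGI process on port 8001 (e.g. with Django's then-current `manage.py runfcgi host=127.0.0.1 port=8001`), then rewrite the real upstream file and run `nginx -s reload`:

```shell
# Stand-in for the real upstream config file (illustrative path/content)
conf=$(mktemp)
cat > "$conf" <<'EOF'
upstream app {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001 down;
}
EOF

# Swap which backend is marked down: 8000 goes down, 8001 goes live
sed -i 's/server 127.0.0.1:8000;/server 127.0.0.1:8000 down;/; s/server 127.0.0.1:8001 down;/server 127.0.0.1:8001;/' "$conf"
cat "$conf"

# In production, follow with a graceful worker swap:
#   nginx -s reload
```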

Another option to consider is using the backup directive instead of down. This causes nginx to automatically fail over to the servers marked with backup when connections to the other servers in the block fail. You can then deploy seamlessly by first restarting the backup server, and then the live one. The advantage here is that no configuration-file rewriting is required, nor any reloading of nginx. Unfortunately, failover is triggered by connection failures and timeouts, and some legitimate requests take longer than a second to resolve, resulting in a false positive for the original server being down.
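For comparison, the backup variant of the upstream block would look something like this (again assuming the loopback addresses and ports used above):

```nginx
upstream app {
    server 127.0.0.1:8000;          # primary server
    server 127.0.0.1:8001 backup;   # only receives traffic when 8000 fails
}
```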

Spawning is yet another option. Spawning can run your Django server monkeypatched with eventlet to provide asynchronous I/O, and it offers graceful code reloading: whenever it detects that any of your application's Python files have changed, it starts new processes using the updated files and gracefully switches all request handling over to them. Unfortunately, attempting this solution didn't work out for us, as somewhere within our large Django application we had some long-running blocking code. This prevented eventlet from switching to another execution context, resulting in timeouts. Nevertheless, this would still be the best option if you can make sure your WSGI application doesn't have any blocking code.

During data store failures

At Mixpanel, we employ a custom-built data store we call "arb" to perform the vast majority of queries that our customers run on their data. These machines are fully redundant and are queried through HTTP requests using httplib2. When a machine fails for any reason, we want to seamlessly detect the failure and redirect all requests to the corresponding replica machine. Doing this properly required some modification of the HTTPConnection class.

The main problem was that httplib2 supported only a single socket timeout parameter, used for both sending and receiving through the underlying socket. However, we wanted the initial connection to fail very quickly, while still having a long receive timeout, since a query over a large amount of data can legitimately take a long time. Luckily, httplib2 requests allow for passing in a custom connection type, as long as it implements the methods of httplib.HTTPConnection. Armed with this knowledge, we created our own subclass of HTTPConnection with a custom connect method. Prior to making the connection, we used settimeout on the socket object to lower the timeout to a short 1 second. If the connection was successful, we reverted the timeout back to the original setting.
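A minimal sketch of such a subclass is below. The class and exception names are assumptions (the post names ConnectTimeoutException but doesn't show the code), and it is written against Python 3's http.client, the successor to the httplib module the post refers to; the 1-second connect timeout matches the text:

```python
import socket
import http.client  # "httplib" in the Python 2 of the original post


class ConnectTimeoutException(Exception):
    """Raised when the initial TCP connect fails or exceeds the short timeout."""


class FastConnectHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection with a short connect timeout but a long read timeout."""

    connect_timeout = 1.0  # fail fast when the machine is down

    def connect(self):
        try:
            # The short timeout applies only while establishing the connection.
            self.sock = socket.create_connection(
                (self.host, self.port), timeout=self.connect_timeout)
        except OSError:  # socket.error is an alias of OSError in Python 3
            raise ConnectTimeoutException(
                "could not connect to %s:%d" % (self.host, self.port))
        # Connection established: restore the long timeout for the query itself.
        self.sock.settimeout(self.timeout)
```

A caller would catch ConnectTimeoutException, mark the machine as down, and retry the query against the replica, as described below.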

This way, if we get a socket.error exception on connect, a custom ConnectTimeoutException is raised and the machine being connected to is properly marked as down. One small drawback is that the failing request takes an additional second, but this only has to happen a small number of times before all future requests see the machine marked as down. For requests that time out on connect, we simply handle the ConnectTimeoutException and retry the query on the replica machine.

The takeaway here is to take advantage of the ability to change the socket timeout to check for an unresponsive machine. Often with systems that work with large volumes of data, long timeouts are required for database queries. But this is only necessary for established connections. When the connection is initially created, failing fast results in a better user experience, avoiding long delays when a machine goes down.

Originally published on September 28, 2012.
