Queuing and Batching on the Client and the Server

Mixpanel Eng
Feb 15, 2013

We recommend that our customers set up work queues and batch their messages as a way to scale up server-side Mixpanel implementations, but we use the same approach under the hood in our Android client library to scale downward to fit the constraints of a mobile phone: battery power and CPU.

The basic technique, where work to be done is discovered in one part of your application and then stored to be executed in another, is simple but broadly useful, both for scaling up in your big server farm and for scaling down to your customers’ smartphones.

When clients ask us for help in setting up the most effective implementation of Mixpanel on the server side, we generally suggest they put their Mixpanel events in a queue of some kind and send them in batches, outside of the context of a user interaction, rather than send out each event as it happens. This queuing and batching ensures you don’t keep your customers waiting while your app sends events to Mixpanel, and it lowers your IO overhead per event. As your application scales upward to serve more and more customers, this pattern becomes essential, not just for Mixpanel but for all sorts of tasks.

The queue and batch pattern informs how we write our server-side client libraries. You can see this in the structure of our server-side Java client library API. A typical use case might be:
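
Something along the lines of the sketch below, built around the Java library’s MessageBuilder, ClientDelivery, and MixpanelAPI classes (exact class names and signatures may vary by version; the project token, distinct id, and event names are placeholders):

```java
import org.json.JSONObject;

import com.mixpanel.mixpanelapi.ClientDelivery;
import com.mixpanel.mixpanelapi.MessageBuilder;
import com.mixpanel.mixpanelapi.MixpanelAPI;

public class BatchedTracking {
    // Placeholder: your Mixpanel project token.
    private static final String PROJECT_TOKEN = "YOUR_PROJECT_TOKEN";

    public static void main(String[] args) throws Exception {
        MessageBuilder messageBuilder = new MessageBuilder(PROJECT_TOKEN);

        // Build messages as interesting things happen; nothing goes over the wire yet.
        JSONObject props = new JSONObject();
        props.put("Plan", "Premium");
        JSONObject signedUp = messageBuilder.event("user-12345", "Signed Up", props);
        JSONObject loggedIn = messageBuilder.event("user-12345", "Logged In", null);

        // Queue the messages up in a ClientDelivery...
        ClientDelivery delivery = new ClientDelivery();
        delivery.addMessage(signedUp);
        delivery.addMessage(loggedIn);

        // ...and send the whole batch in a single request, outside of any user interaction.
        MixpanelAPI mixpanel = new MixpanelAPI();
        mixpanel.deliver(delivery);
    }
}
```

In a real application, the ClientDelivery would usually be filled by a worker draining a queue of some kind (a database table, a Redis list, or similar) rather than built inline like this.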

In server-side processes, queue and batch is all about scaling upward: allowing a single process to serve more and more customers. An individual smartphone application running on a single phone doesn’t serve more and more users as it grows; each user has their own device. However, power and bandwidth on those devices are often very precious, and applications have to stay responsive to user input. Queuing and batching can help here, too!

When you call .track() or .set() in our Android client library, the library pushes a message onto an internal queue. The call returns immediately, and your app is free to respond to the events or input that you wanted to track in the first place. The queue is consumed by a private thread in the library, which writes the messages to persistent storage. When enough messages are stored up (or when enough time passes), the worker thread pushes all of the stored messages to Mixpanel’s servers. When the worker thread doesn’t have any work to do for a while, it dies to conserve battery power. The end result is that your Android app remains responsive and uses less user data, radio power, and CPU time to send the information you want to Mixpanel.
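
In rough outline, the pattern looks like the sketch below. The class, method names, and numbers are made up for illustration, and the persistent-storage step is omitted; what matters is the shape: an in-memory queue, a lazily started worker thread, and a batched flush.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of a queue-and-batch tracker; not the Mixpanel library's actual internals.
public class EventQueue {
    private static final int BATCH_SIZE = 50;            // hypothetical flush threshold
    private static final long IDLE_TIMEOUT_MS = 60_000;  // worker dies after a minute of no work

    private final LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private Thread worker; // started lazily, allowed to die when idle

    // Called from the UI thread; just enqueues and returns immediately.
    public synchronized void track(String eventJson) {
        queue.offer(eventJson);
        if (worker == null) {
            worker = new Thread(this::drain, "tracking-worker");
            worker.setDaemon(true);
            worker.start();
        }
    }

    // Runs on the background worker: accumulate events and send them in batches.
    private void drain() {
        List<String> batch = new ArrayList<>();
        try {
            while (true) {
                String event = queue.poll(IDLE_TIMEOUT_MS, TimeUnit.MILLISECONDS);
                if (event == null) {
                    // Nothing to do for a while: let the thread die to conserve battery.
                    synchronized (this) {
                        if (queue.isEmpty()) {
                            worker = null;
                            break;
                        }
                    }
                    continue;
                }
                batch.add(event);
                if (batch.size() >= BATCH_SIZE) {
                    sendBatch(batch);   // one network request for many events
                    batch.clear();
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            synchronized (this) { worker = null; }
        } finally {
            if (!batch.isEmpty()) {
                sendBatch(batch);       // flush whatever is left before exiting
            }
        }
    }

    private void sendBatch(List<String> batch) {
        // A real implementation would POST the batch to the ingestion API here.
        System.out.println("Sending " + batch.size() + " events in one request");
    }
}
```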

Of course, this isn’t the end of the story of scale, either for your server-side apps or for our client-side libraries, but it’s a very simple pattern that makes a pretty big difference. Moreover, it’s a hint that the same tools we use to go big can help us out when we need to go small.

Originally published at https://engineering.mixpanel.com on February 15, 2013.
