Queuing and Batching on the Client and the Server

Mixpanel Eng
Feb 15, 2013 · 2 min read

We recommend that our customers set up work queues and batch their messages as a way to scale server-side Mixpanel implementations upward, but we use the same approach under the hood in our Android client library to scale downward, fitting the constraints (battery power and CPU) of a mobile phone.

The basic technique, where work to be done is discovered in one part of your application and then stored to be executed in another, is simple but broadly useful, both for scaling up in your big server farm and for scaling down to your customers' smartphones.

When clients ask us for help in setting up the most effective implementation of Mixpanel on the server side, we generally suggest they put their Mixpanel events in a queue of some kind and send them in batches outside of the context of a user interaction, rather than sending out each event as it happens. This queuing and batching implementation ensures you don't keep your customers waiting while your app sends events to Mixpanel, and it lowers your I/O overhead per event. As your application scales upward to serve more and more customers, this pattern becomes essential, not just for Mixpanel, but for all sorts of tasks.

The queue and batch pattern informs how we write our server-side client libraries. You can see this in the structure of our server-side Java client library API: a typical use case builds up messages as they occur and then delivers them all in a single batch.
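The original code sample has not survived here, so as a stand-in, this is a minimal, self-contained sketch of the build-then-deliver shape in plain Java. All names (`EventBatch`, `add`, `deliver`) are hypothetical illustrations of the pattern, not the actual Mixpanel library API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a batching analytics client:
// events are collected locally and delivered together.
public class EventBatch {
    private final List<String> pending = new ArrayList<>();

    // Queue an event; no network I/O happens here.
    public void add(String distinctId, String eventName) {
        pending.add(distinctId + ":" + eventName);
    }

    // Deliver every queued event at once, then clear the batch.
    // A real client would POST this payload in a single request.
    public List<String> deliver() {
        List<String> batch = new ArrayList<>(pending);
        pending.clear();
        return batch;
    }

    public static void main(String[] args) {
        EventBatch batch = new EventBatch();
        batch.add("user-1", "Signed Up");
        batch.add("user-2", "Viewed Page");
        // One delivery carries both events; afterwards the batch is empty.
        System.out.println(batch.deliver().size());
        System.out.println(batch.deliver().size());
    }
}
```

The point of the shape is that event creation and event delivery are separate steps, so the delivery can happen wherever and whenever is cheapest.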

In server-side processes, queue and batch is all about scaling upward, allowing a single process to serve more and more customers. An individual smartphone application running on a single phone doesn't serve more and more users as it grows; each user has their own device. However, power and bandwidth on those devices are often very precious, and applications have to stay responsive to user input. Queuing and batching can help here, too!

When you call .track() or .set() in our Android client library, the library pushes a message onto an internal queue and returns immediately, so your app is free to respond to the events or input you wanted to track in the first place. The queue is consumed by a private thread in the library, which writes the messages to persistent storage. When enough messages are stored up (or when enough time passes), the worker thread pushes all of the stored messages to Mixpanel's servers. When the worker thread doesn't have any work to do for a while, it dies to conserve battery power. The end result is that your Android app remains responsive and uses less user data, radio power, and CPU time to send the information you want to Mixpanel.
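That worker loop can be sketched with the standard library's `BlockingQueue`. The batch size, idle timeout, and class names below are made-up illustrations of the technique described above, not the library's actual internals:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative worker: drains a queue, flushes in batches,
// and lets its thread die once it has been idle for a while.
public class BatchingWorker implements Runnable {
    static final int BATCH_SIZE = 50;          // flush when this many messages are stored
    static final long IDLE_TIMEOUT_MS = 2000;  // exit after this long with no work

    final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    final List<String> stored = new ArrayList<>();

    // Called from the app thread: enqueue and return immediately.
    public void track(String event) {
        queue.offer(event);
    }

    void flush() {
        if (!stored.isEmpty()) {
            // A real library would POST the whole batch to the server here.
            System.out.println("flushing " + stored.size() + " messages");
            stored.clear();
        }
    }

    @Override
    public void run() {
        try {
            while (true) {
                // Block for up to IDLE_TIMEOUT_MS waiting for work.
                String msg = queue.poll(IDLE_TIMEOUT_MS, TimeUnit.MILLISECONDS);
                if (msg == null) {   // idle too long: flush leftovers, let the thread die
                    flush();
                    return;
                }
                stored.add(msg);     // the real library persists this to storage
                if (stored.size() >= BATCH_SIZE) {
                    flush();
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Because `track()` only enqueues, the calling thread never waits on storage or network I/O, and the timed `poll` gives the worker a natural way to notice it is idle and shut itself down.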

Of course, this isn’t the end of the story of scale, either for your server-side apps or for our client-side libraries, but it’s a very simple pattern that makes a pretty big difference. Moreover, it’s a hint that the same tools we use to go big can help us out when we need to go small.

Originally published at https://engineering.mixpanel.com on February 15, 2013.

