How to send 1000+ updates/sec per client (using Meteor) and stay alive

Omri Klinger
3 min read · Jan 25, 2016


The solution presented below has been tested aggressively, sharing a single subscription between 1000+ users while sending heavy loads of data to each.

The problem -

A Meteor subscription that is re-used by many clients is a common need for many apps. For example, an Uber-like app where a lot of users see cabs moving on the map (a geographic query that returns 2000 results, observed by polling, with each doc being updated once a second).
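For reference, this is roughly what the standard, un-optimized publication for such a scenario looks like (the Cabs collection, field names and bounding box are illustrative); every subscribing client gets its own merge-box copy of all 2000 documents, and every change is diffed and stringified once per client:

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

const Cabs = new Mongo.Collection('cabs');

// Server: the usual publication. Each subscribing client gets its own
// merge-box copy of the result set.
Meteor.publish('cabsInArea', function (area) {
  return Cabs.find(
    { location: { $geoWithin: { $box: area } } }, // geo query => polling observer
    { fields: { location: 1, status: 1 } }
  );
});

// Client: every user watching the map subscribes with their current bounds.
Meteor.subscribe('cabsInArea', [[32.0, 34.7], [32.2, 34.9]]);
```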

The optimization -

Disable merge box + “Batch publications”.

In detail -

First, we disable the merge box for the specified subscription. Why?

The merge box requires a lot of memory. For a subscription with 500 clients, the memory required to support it is 500 times the size of the data sent. Moreover, computing a diff and building the message separately for each client puts a heavy load on your CPU when your data is changing rapidly.
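As a rough illustration (the per-document size is an assumption, not a measurement): with the 2000-cab example above and ~1 KB per document, the merge box holds about 2 MB of state per client, so 500 clients cost roughly 1 GB of RAM just to remember what each client has already been sent, and every update triggers 500 separate diffs and 500 stringified DDP messages.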

So how do we do it? And what does “batch publication” mean?

In each publish function, after building the query (from the publication arguments, logic and user roles), we check whether this query is already being handled. If so, we add the publication instance to the query's listeners and send the current content of the multiplexer to the client in a single batch. We cache this batch message, so if another client joins between changes we can send it the message immediately. We send the message straight over the socket using a custom DDP message, ‘batchUpdate’.

On each change we compute the diff once (creating a new batch message) and send it to all the clients that are listening to the subscription.
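Here is a minimal sketch of the idea under some assumptions; it is not the author's actual patch. The sendBatchUpdate(connection, msg) helper is hypothetical and stands in for the custom ‘batchUpdate’ DDP message (which, as noted below, needs patches to ddp-client), and Cabs is the illustrative collection from the earlier snippet. Note that it never calls this.added/this.changed, so nothing goes through the merge box:

```js
const sharedQueries = new Map(); // queryKey -> { listeners, docs, handle }

Meteor.publish('cabsInArea', function (area) {
  const query = { location: { $geoWithin: { $box: area } } };
  const queryKey = JSON.stringify(query);

  let shared = sharedQueries.get(queryKey);
  if (!shared) {
    shared = { listeners: new Set(), docs: new Map() };

    // Build the batch message once and fan it out to every listener,
    // instead of diffing and stringifying per client.
    const broadcast = (batch) =>
      shared.listeners.forEach((conn) => sendBatchUpdate(conn, batch));

    // One shared observer per unique query.
    shared.handle = Cabs.find(query).observeChanges({
      added(id, fields) {
        shared.docs.set(id, fields);
        broadcast({ msg: 'batchUpdate', collection: 'cabsInArea', added: [{ id, fields }] });
      },
      changed(id, fields) {
        const doc = shared.docs.get(id);
        if (doc) Object.assign(doc, fields);
        broadcast({ msg: 'batchUpdate', collection: 'cabsInArea', changed: [{ id, fields }] });
      },
      removed(id) {
        shared.docs.delete(id);
        broadcast({ msg: 'batchUpdate', collection: 'cabsInArea', removed: [id] });
      },
    });
    sharedQueries.set(queryKey, shared);
  }

  // A joining client immediately gets the current content of the shared
  // observer as one cached batch.
  shared.listeners.add(this.connection);
  sendBatchUpdate(this.connection, {
    msg: 'batchUpdate',
    collection: 'cabsInArea',
    added: Array.from(shared.docs, ([id, fields]) => ({ id, fields })),
  });

  this.ready();
  // Real code would also stop the shared observer once the last listener leaves.
  this.onStop(() => shared.listeners.delete(this.connection));
});
```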

Because we are disabling the merge box, we have to control merges between different subscriptions manually on the client. One possible convention is to create a client collection named after the subscription, one per unique subscription.
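On the client, that convention could look like the sketch below. The onBatchUpdate(callback) hook is hypothetical (something the patched ddp-client would have to expose for the custom ‘batchUpdate’ message); the local collection simply mirrors whatever the batch says:

```js
import { Mongo } from 'meteor/mongo';

// One local (client-only) collection per unique subscription, since the
// merge box no longer merges documents across subscriptions for us.
const CabsInArea = new Mongo.Collection(null);

onBatchUpdate((msg) => {
  if (msg.collection !== 'cabsInArea') return;

  // Apply the whole batch synchronously; reactive views then re-render once
  // for the batch instead of once per document.
  (msg.added || []).forEach(({ id, fields }) => CabsInArea.upsert(id, { $set: fields }));
  (msg.changed || []).forEach(({ id, fields }) => CabsInArea.upsert(id, { $set: fields }));
  (msg.removed || []).forEach((id) => CabsInArea.remove(id));
});
```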

For queries that depend on the polling observer, there is an option to add an even better optimization: compute the batched message only once we have finished diffing all the changes. This solution requires a minor addition to the MDG mongo package (I will add this as a PR, in case my original DDP custom messages PR is merged).

This solution also works great with publishComposite :) and can save some computation when the publish function's query is built dynamically (by memoizing the query construction).
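A small illustration of that memoization (the buildCabsQuery helper and its arguments are hypothetical; the point is only that identical arguments re-use a previously built query object, which also makes the “is this query already handled?” lookup cheap):

```js
const queryCache = new Map();

// Memoized query construction: identical (area, role) inputs return the
// same query object, built only once.
function buildCabsQuery(area, role) {
  const key = JSON.stringify([area, role]);
  if (!queryCache.has(key)) {
    queryCache.set(key, {
      location: { $geoWithin: { $box: area } },
      ...(role === 'dispatcher' ? {} : { visible: true }),
    });
  }
  return queryCache.get(key);
}
```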

This optimization opens the door to many other optimizations that could be built on top of it, from a better representation of the batched data (Netflix's JSONGraph, or any other compression) to dynamic subscription routing (“SubscriptionRedirect”: sending the user to get the subscription from a server that is already handling it).

I see the following pros in this optimization:

1. Reduced server CPU usage:
  • No need to compute the merge-box diff for each user.
  • No need to stringify each message once per user listening to the subscription (DDPCommon.stringifyDDP can take some time when multiplied across many clients).

2. Lower server memory usage (with the merge box disabled, the memory used is the size of the sent data, not the sent data times the number of clients).

3. Better ramp-up time: a new user that joins a running subscription can immediately get the full image of the subscription.

4. Optimized client re-rendering: the client can handle the batch as it wants. For example, it can refresh the view after processing the whole batch instead of after each message.

5. It allows more creativity and is a base for further optimizations. For example: optimizing the batched message (JSONGraph, or shortening and grouping similar messages) or redirecting a subscription to the server that already handles it.

But there are still some cons:

  1. No merge box (you can't rely on Meteor to join your subscriptions and send only the delta; this can easily be solved by creating a client collection per subscription).
  2. Client re-connections need to be handled manually (for example, removing the data on subscription stop).
  3. Requires patches to some of the most important building blocks of Meteor (the ddp-client and mongo packages).
