Shindig: Subscription Caching and Latency Compensation

Chet Corcos
Nov 24, 2015


In the previous article I talked about ccorcos:any-db, a Meteor package for publishing and subscribing from any database. For many apps, this is all the reactivity you need. You can call Meteor.methods to write to the database and, after a round-trip to the server, you'll see the data pop up in your UI. But most quality apps these days have some kind of latency compensation to show in-flight data immediately after a user action. Otherwise, the user gets no immediate feedback that their action did anything.

Another important aspect of a performant app is caching data. When a user bounces from page to page, you shouldn't just throw away data that they may want to reference again in 30 seconds. So rather than unsubscribing immediately when we leave a certain view, we should set a timer to unsubscribe and cancel that timer if we come back to that view soon after. This logic, however, shouldn't live in the view, but in the subscriptions themselves.
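The delayed-unsubscribe idea above can be sketched in a few lines of plain JavaScript. This is just an illustration, not the actual package code; the names (createCache, ttlMs) are hypothetical:

```javascript
// Sketch of a subscription cache with delayed unsubscribe: clear() starts
// a timer, and calling get() for the same key before the timer fires
// cancels the unsubscribe, keeping the subscription alive.
function createCache(subscribe, ttlMs) {
  const entries = new Map(); // key -> { stop, timer }

  return {
    get(key) {
      let entry = entries.get(key);
      if (!entry) {
        // first request: actually subscribe
        entry = { stop: subscribe(key), timer: null };
        entries.set(key, entry);
      } else if (entry.timer) {
        // came back in time: cancel the pending unsubscribe
        clearTimeout(entry.timer);
        entry.timer = null;
      }
      return entry;
    },
    clear(key) {
      const entry = entries.get(key);
      if (!entry) return;
      entry.timer = setTimeout(() => {
        entry.stop(); // actually unsubscribe now
        entries.delete(key);
      }, ttlMs);
    },
  };
}
```

If the user navigates away and back within the TTL, no new subscription round-trip happens at all.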

Inspired by the Flux architecture, I created a concept on the client called a store, which caches subscriptions (or HTTP requests) and holds a parallel copy of the subscription data that you can manipulate to simulate changes without waiting for a round trip to the server. This is a really simple and expressive means of latency compensation, giving you free rein to simulate whatever you want on the client. When new data comes in from AnyDb, subscription.onChange simply overwrites the data in the store and you're left with the ground-truth data again.
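The overwrite-on-change behavior is the whole trick, so here's a minimal plain-JavaScript sketch of it (hypothetical names, not the package's actual internals):

```javascript
// Sketch of a store whose simulated edits live in the same slot as
// subscription data, so the next server payload simply replaces them
// with ground truth.
function createStore() {
  let data = [];
  const listeners = [];
  const notify = () => listeners.forEach((fn) => fn(data));

  return {
    get: () => data,
    watch: (fn) => listeners.push(fn),
    // optimistic local change; survives only until the next server payload
    update(transform) {
      data = transform(data);
      notify();
    },
    // called from subscription.onChange: server data always wins
    onChange(serverData) {
      data = serverData;
      notify();
    },
  };
}
```

There is no rollback bookkeeping: a simulated change doesn't need to be undone, because the next publication update replaces the whole result set anyway.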

Rather than dive deep into how I wrote this package, I think it's probably easier to understand if I just explain how it's used.

Stores have a very simple interface that works especially well with React. First you get data from the store. If there's no data in the store, you can fetch it. You can also watch for changes to that data, and when you're all done with it, you can clear it from the store, which sets a timer to unsubscribe unless you try to get the same data again before the timer runs out.

AnyDb.publish 'user', ({id}) ->
  Neo4j.getUser(id)

UserStore = AnyStore.createSubStore('user')

{data, fetch, watch, clear} = UserStore.get({id: userId})

if data
  # do something with this user
  listener = watch ({data}) ->
    # the user data changed!
else
  fetch ({data}) ->
    # now we can do something with this user
    listener = watch ({data}) ->
      # the user data changed!

listener.stop() # stop listening to changes
clear()         # clear the subscription

There’s also a special store that handles paging for you, and companion stores for caching HTTP requests (such as Facebook API calls in Shindig).

AnyDb.publish 'followers', ({query, paging: {limit}}) ->
  Neo4j.getUserFollowers(query, limit)

FollowerStore = AnyStore.createSubListStore('followers', {
  minutes: 1
  limit: 10
})

The only difference with the list-store version is that fetch will be undefined once you've reached the end of the list and there are no more results. Otherwise, calling fetch again and again will increment the limit sent to the publication and load more data.
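That paging contract can be sketched in plain JavaScript (hypothetical names; the real package wires this to the publication rather than a local function):

```javascript
// Sketch of list-store paging: each fetch bumps the limit sent to the
// publication; once a page comes back short, fetch becomes undefined to
// signal the end of the list.
function createListStore(fetchPage, pageSize) {
  let limit = pageSize;
  const store = { data: [] };

  store.fetch = () => {
    store.data = fetchPage(limit);
    if (store.data.length < limit) {
      store.fetch = undefined; // no more results: end of the list
    } else {
      limit += pageSize; // next fetch asks for one more page
    }
  };
  return store;
}
```

A view can then render a "load more" button simply by checking whether fetch is still defined.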

The HTTP version is also pretty much the same but expects an async fetching function as the last argument. Here's an example for fetching and caching user searches through Facebook.

FbUserSearchStore = AnyStore.createHTTPListStore 'fb-user-search', {
  minutes: 1
  limit: 10
}, ({query, paging: {limit, skip}}, callback) ->
  facebook.api 'get', '/search',
    fields: 'name,picture{url}'
    type: 'user'
    q: query
    limit: limit
    offset: skip
  , callback

The time (in minutes) and the limit / paging amount are configurable through the optional second argument and default to limit = 10 and minutes = 1.

Now that we have a copy of our subscription data that updates when new data comes in, all we have to do to latency compensate is transform this collection however we want; on the next update from the publication, that change will simply be overwritten by the new subscription data.

Store.update takes two arguments, both functions. The first filters the store's queries down to only the ones you want to update. The second transforms the matching collection. This is a perfect place to use my favorite utility library, Ramda.js. If you haven't heard of Ramda, then you definitely need to watch Hey Underscore, You're Doing It Wrong.

Meteor.methods
  follow: (followId) ->
    check(followId, String)
    check(@userId, String)
    if Meteor.isServer
      Neo4j.setFollow(@userId, followId)
      AnyDb.refresh('followerCount', R.equals({id: followId}))
    else
      UserStore.update(
        R.equals({id: followId})
        R.map(R.evolve({
          followerCount: R.inc
          unverified: R.always(true)
        }))
      )
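The two-argument update pattern is easy to see in isolation. Here's a plain-JavaScript sketch of a query-keyed store (hypothetical shape, and inline functions instead of Ramda so it's self-contained):

```javascript
// Sketch of Store.update(queryFilter, transform): the first function
// selects which cached queries to touch, the second transforms each
// matching result set in place.
function createQueryStore() {
  const cache = new Map(); // JSON query key -> { query, results }

  return {
    set(query, results) {
      cache.set(JSON.stringify(query), { query, results });
    },
    get(query) {
      const entry = cache.get(JSON.stringify(query));
      return entry ? entry.results : undefined;
    },
    update(queryFilter, transform) {
      for (const entry of cache.values()) {
        if (queryFilter(entry.query)) {
          entry.results = transform(entry.results);
        }
      }
    },
  };
}
```

The follower-count simulation then becomes a filter on the query plus a map over the documents, exactly the shape R.equals and R.map(R.evolve(...)) give you in the CoffeeScript above.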

The above example only shows how reactivity works for a user's follower count in Shindig. In reality, the app contains a lot more code specifying in great detail how every store, every publication, and every action interact to reactively update and latency-compensate data. This is a big area that could use some improvement. Rather than imperatively specifying how certain actions affect certain queries, I think it ought to be possible to declaratively specify some kind of dependency graph between queries and calculate which queries need to be refreshed and how stores need to be updated based on that graph. I haven't had time to figure this out, though, so if you're into hardcore computer science problems, help me out with this one!
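To make the open problem concrete, here's a very rough sketch of what a declarative version might look like (every name here is hypothetical; this is a starting point, not a solution):

```javascript
// Sketch: each action declares which query families it invalidates, and a
// single dispatcher walks that declaration instead of every method
// hand-coding its own refresh calls.
const dependencies = {
  follow: [
    {
      store: 'followerCount',
      // which live queries does this action touch?
      affects: (action, query) => query.id === action.followId,
    },
  ],
};

// Given an action and the set of currently-active queries, return the
// queries that need a refresh.
function invalidations(actionName, action, queries) {
  const deps = dependencies[actionName] || [];
  return queries.filter((q) =>
    deps.some((d) => d.store === q.store && d.affects(action, q.query))
  );
}
```

The hard part this sketch punts on is composing the graph transitively (queries that depend on other queries) and deriving the optimistic store transforms from the same declaration.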

So this basically concludes how I built the backend of Shindig. If you're thinking about using these packages, I want you to take a moment and check out Neo4j. It's a really awesome database from the future, and the people building it are incredibly supportive.

Don’t stop reading now, the fun is just beginning. Next I’m going to talk about how I built the front-end with React.
