What stack are we using to power Missive? As an email/chat app where hundreds, even thousands, of events occur at any given time for a single user… it must be crazy, right? It must be magical.
I wouldn’t go as far as saying that we have another dirty little secret, but we are as conservative on the back end as we are on the front end. The back end is mostly just plain Ruby workers and a RESTful Rails API.
We’re managing emails, live chat, and live read/archived status for each team member on all conversations with mostly GET requests. How do we not DDoS our own servers with hundreds of requests per second? Well, that’s the fun part: the architecture.
Keeping things fresh
One of the biggest challenges we encountered while building Missive was finding creative ways to keep the many front end clients up to date with our data store.
When using a RESTful API, if your clients don’t keep an open connection to it, they need to poll for changes every x seconds. This strategy is good enough for a lot of use cases, but in ours, we wanted to offer a live interface. Polling did not cut it.
To simulate that open connection and notify the clients of new content, we started using the Pusher platform. Every time a resource changes, we push a small message to each concerned client by making a single POST request to Pusher. Since each client has the responsibility to keep a persistent connection with Pusher, they can all receive that message instantly.
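A minimal sketch of that server-side broadcast, using the pusher Ruby gem’s trigger call (channel and event names here are illustrative, not Missive’s actual code; the trigger is injectable so the sketch runs without network access):

```ruby
# Sketch of the server-side notification step (assumed names).
# After a resource changes, one small message is POSTed to Pusher so every
# connected client learns about it instantly.
#
# In production this would be: Pusher.trigger(channel, event, payload)
# Here the trigger is passed in so the example runs without credentials.
def broadcast_change(channel, event, payload, trigger:)
  trigger.call(channel, event, payload)
end

# Record triggers instead of hitting the real Pusher API.
sent = []
recorder = ->(channel, event, payload) { sent << [channel, event, payload] }

# A mailbox was renamed on IMAP and synced to our servers:
broadcast_change("private-user-42", "mailboxes-updated", {}, trigger: recorder)
```

Note that the message itself carries no data: it is a pure invalidation signal, and the clients decide what to re-fetch.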
For example, when the name of a mailbox changes on IMAP, and the change is synced to our servers, we broadcast a mailboxes-updated Pusher message to the concerned clients.

The clients react by issuing a GET /mailboxes request. The API serializes and returns all mailboxes the user has access to, thus updating the one that changed on IMAP.
For resources that don’t change often, this simple approach is fun to work with: broadcast a generic message describing the changed resource, and all clients update themselves.
It’s not that simple
As good as this strategy is for resources that don’t change often, it would be catastrophic for endpoints with continuous changes, like marking an email as read, archiving, or posting chat comments. It would be unbearable if each of these POST actions from one user resulted in 20 GET requests when 20 of this user’s coworkers are online. Worse, if each of those 20 users reads and marks that conversation as read, we are potentially looking at 20 read actions * 20 users = 400 GET requests!
We can’t really rate limit these GET requests. Remember, we want a live app.
So, to make things live without flooding our API with GET requests, we make extensive use of another cool Pusher feature: peer-to-peer channels. Each client establishes a persistent connection with the other online members of its organization through that organization’s P2P Pusher channel.
Every time a live action is triggered, like posting a comment, the client broadcasts the action to the related organization channel. Each listening client renders the new comment in the related conversation using just the P2P-broadcasted data. When user A posts a comment, user B instantly sees it without querying the API.
The broadcast message also contains an action_id that is unique to each action. Each client that successfully processes the action (e.g. renders the new comment) stores this action_id.
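The client-side flow might look like this (our clients run JavaScript; this is the idea sketched in Ruby, and all names are illustrative):

```ruby
require "securerandom"
require "set"

# Illustrative client-side logic (not Missive's actual client code).
# When a user posts a comment, the client:
#   1. generates a unique action_id,
#   2. broadcasts the comment to the organization's P2P channel,
#   3. remembers the action_id so later server messages can be deduplicated.
class CommentClient
  attr_reader :processed_action_ids, :p2p_messages

  def initialize
    @processed_action_ids = Set.new
    @p2p_messages = [] # stands in for the Pusher client-event broadcast
  end

  def post_comment(body)
    action_id = SecureRandom.uuid
    @p2p_messages << { event: "client-comment-posted",
                       data: { body: body, action_id: action_id } }
    @processed_action_ids << action_id
    action_id # also appended to the POST /comments payload
  end
end

client = CommentClient.new
id = client.post_comment("Looks good to me!")
```

Other online clients render the comment straight from the broadcast data, without touching the API.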
Now that all clients have been instantly updated, the comment needs to be persisted on the server. The client makes a POST /comments request, appending the action_id to the payload.
Then the API persists the comment and broadcasts a conversations-updated message that also includes the client-provided action_id. Each client receiving the conversations-updated message can decide whether it needs to do a GET /conversations by looking at the given action_id. If the action_id is already in its cache, bingo! It doesn’t have to, because it already processed the action.
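The receive-side check is tiny; something along these lines (names assumed):

```ruby
require "set"

# Illustrative receive-side deduplication (assumed names).
# When a conversations-updated message arrives, a client only re-fetches
# the conversation if it did not already process the action itself.
def handle_conversations_updated(message, processed_action_ids)
  if processed_action_ids.include?(message[:action_id])
    :skip  # we originated (or already applied) this action via P2P
  else
    :fetch # something new: issue GET /conversations
  end
end

cache = Set.new(["a1b2"])
handle_conversations_updated({ action_id: "a1b2" }, cache) # => :skip
handle_conversations_updated({ action_id: "zzzz" }, cache) # => :fetch
```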
There are a few reasons why the API broadcasts a conversations-updated message to everyone after the client has already broadcast a peer-to-peer message. One of them: if a client does not have access to the conversation yet, it first needs to fetch it from the API.
Right now “some” of you might be thinking:
Aren’t you broadcasting all comments in a single shared P2P channel? How do you manage privacy and access?
Good question. It’s true that not everyone in an organization has access to all of the organization’s conversations. To provide that level of privacy while using the public organization channel, we encrypt the broadcasted data using a secret key unique to each conversation.
That secret key is provided by the API, so to decode any P2P action, you first need to fetch the related conversation and its secret key from the API.
Only clients with access to that secret key will be able to decode and process the comment.
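To make the idea concrete, here is what per-conversation encryption could look like using Ruby’s OpenSSL bindings with AES-256-GCM (this is a sketch of the concept, not necessarily Missive’s exact scheme):

```ruby
require "openssl"
require "json"
require "base64"

# Hypothetical per-conversation payload encryption (illustrative only).
# The API hands each authorized client the conversation's secret key;
# only those clients can decrypt P2P payloads on the shared channel.

def encrypt(payload, key)
  cipher = OpenSSL::Cipher.new("aes-256-gcm").encrypt
  cipher.key = key
  iv = cipher.random_iv
  ciphertext = cipher.update(JSON.generate(payload)) + cipher.final
  { iv: Base64.strict_encode64(iv),
    tag: Base64.strict_encode64(cipher.auth_tag),
    data: Base64.strict_encode64(ciphertext) }
end

def decrypt(message, key)
  cipher = OpenSSL::Cipher.new("aes-256-gcm").decrypt
  cipher.key = key
  cipher.iv = Base64.strict_decode64(message[:iv])
  cipher.auth_tag = Base64.strict_decode64(message[:tag])
  plaintext = cipher.update(Base64.strict_decode64(message[:data])) + cipher.final
  JSON.parse(plaintext)
end

conversation_key = OpenSSL::Random.random_bytes(32) # provided by the API
message = encrypt({ "body" => "Looks good to me!" }, conversation_key)
decrypt(message, conversation_key) # => { "body" => "Looks good to me!" }
```

A client without the key cannot decrypt the payload, so the shared channel leaks nothing about conversations it cannot access.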
When a client receives a conversations-updated message and doesn’t know about its action_id, it means there is something new and it needs to fetch it.
To do so, the client does a GET /conversations request with a modified_since=x param, where x equals the last time the client successfully queried that same endpoint.
Since a conversation can contain a very high number of entries (emails, comments), it would be very inefficient to serialize/deserialize all entries every time a conversation changes. To fix this, the back end also applies the modified_since value to the entries query, so unchanged entries are filtered out.
If a conversation contains 5,000 comments and only 3 have been created or modified since the last modified_since request, we will only serialize those last 3.
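In a Rails API this is roughly a where("updated_at > ?", modified_since) scope applied to both conversations and their entries. A self-contained sketch of the filtering idea (all data and names here are made up):

```ruby
require "time"

# Sketch of incremental serialization (illustrative data and names).
# Only entries touched after modified_since are serialized, so a
# 5000-comment conversation with 3 recent changes ships just those 3.
def entries_changed_since(entries, modified_since)
  entries.select { |entry| entry[:updated_at] > modified_since }
end

entries = [
  { id: 1, updated_at: Time.parse("2017-01-01 10:00") },
  { id: 2, updated_at: Time.parse("2017-03-05 09:30") },
  { id: 3, updated_at: Time.parse("2017-03-05 11:45") }
]

last_sync = Time.parse("2017-03-01 00:00")
changed = entries_changed_since(entries, last_sync)
changed.map { |e| e[:id] } # => [2, 3]
```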
We are proud of this architecture; it has proven itself to be resilient, simple, and fun to work with. Even if you are using an “old” framework that is not as shiny as the new kid on the block… there is always a creative way to make things work. In our case, we embraced the Pusher platform and implemented an architecture that minimizes the data exchanged between the front end and back end.
Combining an email client and a chat app sure looks like something fun to build, but it does come with many challenges. This post explored some of them… a tiny fraction! There are a lot more, and we plan to write about them too. Make sure to follow @missiveapp on Twitter so you don’t miss our next technical story.