Realtime Apps with Om Next, Untangled, and Datomic

Curren Toor
AdStage Engineering
Oct 13, 2016 · 5 min read

Many, if not most, web applications have a concept of shared resources. For example, at AdStage, we built a WYSIWYG tool for marketers called Report. With Report, marketers build dashboards to present their performance both internally and to their clients. Multiple users can access and edit the same dashboard.

Example Dashboard using Report.

Since we allow users in the same organization to access each other’s dashboards, we wanted our users to be able to edit the same dashboard simultaneously. Initially, we considered having some sort of version control system, but merge-conflict resolution is difficult to implement and not a pleasant user experience. We realized the ideal solution was to support real-time collaboration using websockets, much like Google Drive, where live edits are streamed to all other users who have access to the same dashboard.

Many web apps would benefit from this feature, but we do not see such implementations very often, likely due to performance issues and technical shortcomings in frameworks. To build this feature, the server needs to be able to handle a large number of concurrent connections. Also, many frameworks were not designed with this feature in mind. In the Clojure stack, we found that performance was not an issue, and even though there was no tailor-made solution for real-time collaboration, Om Next, Datomic, and Sente provided all the right abstractions to make this feature straightforward. In this post, we will go over how we built real-time collaboration at AdStage and why this implementation was easy.

Models

We have four models, namely organization, user, dashboard and widget. Relationships are depicted in the diagram below.

Models in Report.

In Datomic, we store these models as the following entities. Attributes and other models have been elided to make the example code cleaner.
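A minimal sketch of what such a schema might look like — the attribute names and types here are assumptions for illustration, not our actual schema:

;; Illustrative Datomic schema sketch; attribute names and types are assumptions.
[{:db/ident       :dashboard/id
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}
 {:db/ident       :dashboard/title
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}
 {:db/ident       :dashboard/organization
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :dashboard/widgets
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/many
  :db/isComponent true}
 {:db/ident       :widget/id
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}
 {:db/ident       :widget/title
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}
 {:db/ident       :user/adstage-id
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}
 {:db/ident       :user/organization
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}]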

Datomic

Datomic has some convenient properties we take advantage of. The high-level architecture looks something like this: reads and writes are handled separately. For reads, we query an immutable value of the database; writes go through a single-threaded transactor.

Basic Datomic architecture. Reads (green) are directly linked to storage, whereas writes (red) are processed through a transactor.

The Datomic API provides a blocking transaction report queue. This queue is populated with data structures representing database mutations whenever the transactor successfully processes a transaction. These data structures can be queried over. This queue will be the first point in the stack where mutations are sent out to connected users.

We invoke the transaction report queue on a separate thread and wrap it in a record that implements the com.stuartsierra.component/Lifecycle protocol.
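A sketch of that component is below; the output-chan field and the report->broadcasts helper (sketched after the broadcast format) are assumed names, not our exact code.

(ns report.tx-monitor
  (:require [clojure.core.async :as async]
            [com.stuartsierra.component :as component]
            [datomic.api :as d]))

;; Hypothetical helper, sketched further down in this post.
(declare report->broadcasts)

(defrecord TxReportMonitor [conn output-chan thread]
  component/Lifecycle
  (start [this]
    (let [queue  (d/tx-report-queue conn)   ; blocking queue of transaction reports
          worker (Thread.
                  (fn []
                    (loop []
                      ;; .take blocks until the transactor reports a new transaction
                      (let [report (.take queue)]
                        (doseq [broadcast (report->broadcasts report)]
                          (async/put! output-chan broadcast)))
                      (recur))))]
      (.start worker)
      (assoc this :thread worker)))
  (stop [this]
    (d/remove-tx-report-queue conn)
    (when thread (.interrupt thread))
    (assoc this :thread nil)))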

We transform the changes from the queue into a broadcast, which is a vector. The first element is a collection of user ids (we call them user/adstage-id). These are used to map change deltas to clients. The second element is a map with one key, an Om Next ident, and one value, the updated data. The broadcast looks like this.
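Here is a hypothetical example, with made-up ids and attribute names:

;; [set-of-user-ids  {om-next-ident  updated-data}]
[#{1001 1002 1003}
 {[:dashboard/by-id 42]
  {:dashboard/title   "Q3 Client Review"
   :dashboard/widgets [{:widget/id 7 :widget/title "Spend by Campaign"}
                       {:widget/id 8 :widget/title "Clicks over Time"}]}}]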

The user ids in a broadcast are retrieved using the Datomic entity API; we start at the modified resource, traverse up to the organization, and then get all the users in that organization. The message in a broadcast is built by querying over the latest database value with a pull expression.
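A sketch of that transformation, using the same assumed attribute names (it lives alongside the TxReportMonitor component above, where datomic.api is aliased as d); the real implementation covers more models and edge cases:

(defn report->broadcasts
  "Turn a transaction report into broadcasts, one per modified dashboard."
  [{:keys [db-after tx-data]}]
  (for [eid   (distinct (map :e tx-data))
        :let  [entity (d/entity db-after eid)]
        :when (:dashboard/id entity)]               ; only dashboards, for brevity
    (let [org      (:dashboard/organization entity)
          ;; reverse ref: all users whose :user/organization points at this org
          user-ids (into #{} (map :user/adstage-id) (:user/_organization org))
          message  (d/pull db-after
                           '[:dashboard/title
                             {:dashboard/widgets [:widget/id :widget/title]}]
                           eid)]
      [user-ids
       {[:dashboard/by-id (:dashboard/id entity)] message}])))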

After transforming transaction report queue changes into broadcasts, we put them on an output channel to be consumed by the Sente component.

Sente

We used Sente to handle the messy aspects of websockets. This library takes care of all the connection details, periodic pings, and identity management. It even falls back to long polling if websockets are unavailable.

Our setup is pretty standard, mostly adapted from the example project in the Sente repository. For the client-id we use <random-uuid>::<jwt-token>. We use a random uuid prefix so we can support streaming for the same user with multiple tabs open. The token has the :user/adstage-id attribute that we use to uniquely identify users. Lastly, we have built a component to process broadcasts coming from the TxReportMonitor component. Since this component deals with a lot of network I/O, we asynchronously process broadcasts in a core.async/go-loop.
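Server-side, the setup looks roughly like the following sketch; broadcast-chan, decode-jwt, and the :report/update event name are assumptions rather than our exact code.

(ns report.websockets
  (:require [clojure.core.async :as async :refer [go-loop <!]]
            [clojure.string :as str]
            [taoensso.sente :as sente]
            [taoensso.sente.server-adapters.http-kit :refer [get-sch-adapter]]))

(let [{:keys [ch-recv send-fn connected-uids
              ajax-post-fn ajax-get-or-ws-handshake-fn]}
      (sente/make-channel-socket-server! (get-sch-adapter) {})]
  (def ring-ajax-post      ajax-post-fn)
  (def ring-ajax-get-or-ws ajax-get-or-ws-handshake-fn)
  (def ch-chsk             ch-recv)
  (def chsk-send!          send-fn)
  (def connected-uids      connected-uids))

;; Hypothetical JWT-decoding helper, elided here.
(declare decode-jwt)

(defn client-id->user-id
  "Client ids look like <random-uuid>::<jwt-token>; the token carries
   the :user/adstage-id we use to identify the user."
  [client-id]
  (let [[_uuid token] (str/split client-id #"::")]
    (:user/adstage-id (decode-jwt token))))

(defn start-broadcast-loop!
  "Consume broadcasts from the TxReportMonitor and push each one to the
   connected clients whose user id appears in the broadcast."
  [broadcast-chan]
  (go-loop []
    (when-let [[user-ids message] (<! broadcast-chan)]
      (doseq [uid   (:any @connected-uids)
              :when (contains? user-ids (client-id->user-id uid))]
        (chsk-send! uid [:report/update message]))
      (recur))))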

On the client side, we use Sente in a way that mirrors the server. Our implementation is also similar to the example project except for the :chsk/recv handler. We apply an Om Next transaction on the reconciler with the new data from the server. Initially, we experimented with om.next/merge-state, but because of UI nuances, we wanted to merge the updates in a more controlled way.
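The handler looks roughly like this sketch; the reconciler var and the report/merge-update mutation (shown in the Om Next section below) are assumptions.

(ns report.client
  (:require [om.next :as om]
            [taoensso.sente :as sente]))

;; Assumed to be created during app boot.
(declare reconciler)

(defmulti event-msg-handler :id)

(defmethod event-msg-handler :chsk/recv
  [{:keys [?data]}]
  ;; ?data is the server event: [event-id payload]
  (let [[event-id payload] ?data]
    (when (= event-id :report/update)
      ;; payload is {ident updated-data}; merge it through a mutation
      ;; instead of om.next/merge-state for finer control over the UI.
      (doseq [[ident data] payload]
        (om/transact! reconciler
                      `[(report/merge-update {:ident ~ident :data ~data})])))))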

Om Next

In Om Next, you have a single global app state and a UI that, for the most part, renders as a function of that app state. The reconciler is the brain of the framework; it uses metadata, queries, and indexes on UI components to efficiently handle rendering and updates in both directions.

The three main components to an Om Next app.

The relevant mutations in Om Next look like this.
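A minimal sketch, assuming a normalized app state keyed by ident and the hypothetical report/merge-update mutation used above:

(ns report.mutations
  (:require [om.next :as om]))

(defmulti mutate om/dispatch)

(defmethod mutate 'report/merge-update
  [{:keys [state]} _ {:keys [ident data]}]
  {:action
   (fn []
     ;; merge the server's updated data into the normalized app state
     ;; at the given ident, e.g. [:dashboard/by-id 42]
     (swap! state update-in ident merge data))})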

Conclusion

Whenever a dashboard or widget change is processed by Datomic, the updated data is broadcast in real time to all clients that have access to that resource. Sente, Om Next, and Datomic were developed independently, but they composed together very easily. Not only was this approach flexible and performant, it was also easy to set up and understand, and, most importantly, easy to extend to other models in our application. Having the right abstractions makes implementing features like real-time collaboration much more tractable.
