Event Sourcing in React, Redux & Elixir — how we write fast, scalable, real-time apps at Rapport

Written by Dan Tame and Gary McAdam

Rapport Blog · Nov 8, 2018

At Rapport, our users depend on an experience that handles many simultaneous connections and delivers changes in real time. The app allows a team of people to join a real-time meeting, add cards for its agenda, vote on topics, edit card titles, add emoji reactions and navigate a stack of discussion items – and it's crucial that we deliver all these changes to every member of a meeting as quickly as possible.

Our front-end is built with React, Redux and Redux-Saga, and talks to an Elixir back-end over Phoenix Channels, a wrapper around WebSockets.

What's different about this compared to a normal React app is that all state changes in Redux are propagated to every connected client. What's more, we can replay any meeting to new members who join later, or rewind events and play them back to any point in time.

We achieve all of this using a technique called Event Sourcing. Rapport is entirely modelled on immutable events which contain instructions to change the state of the application.
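
As a rough illustration, an event is just a plain, immutable Redux action describing what happened. The shape below is a hypothetical sketch, not Rapport's actual schema:

```typescript
// Hypothetical shape of the immutable events (Redux actions) that drive the app.
// The names and fields are illustrative, not Rapport's actual schema.
interface MeetingEvent {
  type: string;      // e.g. "CARD_ADDED", "CARD_TITLE_EDITED", "VOTE_CAST"
  meetingId: string; // which meeting's event stream this belongs to
  userId: string;    // who performed the action
  timestamp: number; // when it happened, in milliseconds since the epoch
  payload: unknown;  // event-specific data, e.g. { cardId, title }
}

const example: MeetingEvent = {
  type: "CARD_ADDED",
  meetingId: "meeting-123",
  userId: "user-42",
  timestamp: Date.now(),
  payload: { cardId: "card-7", title: "Review the release checklist" },
};
```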

Event Sourcing

Every single action a user can take has corresponding events. We store these events in the order they were applied, which means we can jump to any arbitrary point in time and get the overall state of the meeting as it was at that moment just by passing the events through the normal reducers. It also means we can replay from the beginning of the event stream to get the latest version of the meeting's state. Really useful stuff for keeping users in sync and caught up to the latest version of the meeting in real time.
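
Here is a minimal sketch of that fold, re-using the illustrative MeetingEvent shape from above and assuming an events array that is already in chronological order; the reducer is deliberately simplified:

```typescript
// A simplified root reducer; the real one handles many more event types.
type Card = { id: string; title: string; votes: number };
type MeetingState = { cards: Card[] };

const initialState: MeetingState = { cards: [] };

function meetingReducer(state: MeetingState, event: MeetingEvent): MeetingState {
  switch (event.type) {
    case "CARD_ADDED": {
      const { cardId, title } = event.payload as { cardId: string; title: string };
      return { cards: [...state.cards, { id: cardId, title, votes: 0 }] };
    }
    case "VOTE_CAST": {
      const { cardId } = event.payload as { cardId: string };
      return {
        cards: state.cards.map((card) =>
          card.id === cardId ? { ...card, votes: card.votes + 1 } : card
        ),
      };
    }
    default:
      return state;
  }
}

// Latest state of the meeting: fold over the whole stream.
const latestState = (events: MeetingEvent[]) =>
  events.reduce(meetingReducer, initialState);

// State at an arbitrary point in time: fold over a prefix of the stream.
const stateAt = (events: MeetingEvent[], time: number) =>
  events.filter((e) => e.timestamp <= time).reduce(meetingReducer, initialState);
```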

But wait, it gets better! Since we have a chronological list of the events each user has performed in a meeting, we can construct different views of that data to serve different needs. One view lives in the Redux store on the frontend, so the UI can show the user all the correct information. Another use is creating views for analytical purposes: we can query the event stream to find out things like the following (one such query is sketched after the list):

  • how popular our features are
  • how long the discussion on each meeting item took
  • what the general sentiment of the reactions in the meeting was
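
For example, the third question could be answered with a small projection over the same stream. A hedged sketch, assuming an illustrative "REACTION_ADDED" event type with an emoji payload field:

```typescript
// An analytics read model derived purely from the event stream:
// tallies how often each reaction emoji was used in a meeting.
// "REACTION_ADDED" and the `emoji` field are illustrative names.
function reactionTally(events: MeetingEvent[]): Record<string, number> {
  return events
    .filter((e) => e.type === "REACTION_ADDED")
    .reduce<Record<string, number>>((tally, e) => {
      const { emoji } = e.payload as { emoji: string };
      tally[emoji] = (tally[emoji] ?? 0) + 1;
      return tally;
    }, {});
}
```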

The great benefit of Event Sourcing for us is that it scales tremendously well with new events and use cases. For example, if we were to add a feature to allow the auditing of meetings (who did what, when), it would be trivial to create another view of the data which included only those details from the existing event stream.
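
In sketch form, such an audit view is just one more projection over the events we already store (again re-using the illustrative event shape from the earlier sketches):

```typescript
// An audit trail ("who did what, when") derived from the existing event stream.
function auditLog(events: MeetingEvent[]): string[] {
  return events.map(
    (e) => `${new Date(e.timestamp).toISOString()}  ${e.userId}  ${e.type}`
  );
}
```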

Likewise, if a feature requires new events to be emitted in order to function, all the backend needs to worry about is persisting those events into the event log, which it already does for every other event we send through.

React and Redux Sagas

Rapport is written with a fairly typical React, Redux and Redux-Saga setup; we take advantage of sagas to listen to all events (actions) and effectively forward them, without modification, to the server.
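
Here is a simplified sketch of that forwarding saga, using redux-saga and the phoenix JavaScript client. The "/socket" path, the "meeting:<id>" topic, the "new_event" message name and the "meta.remote" convention are illustrative assumptions, not necessarily what Rapport ships:

```typescript
import { Socket } from "phoenix";
import { call, takeEvery } from "redux-saga/effects";

// Actions we want mirrored to the server carry an illustrative `meta.remote` flag.
interface RemoteAction {
  type: string;
  meta?: { remote?: boolean };
  [key: string]: unknown;
}

// Open the WebSocket connection and join the meeting's channel.
export function joinMeetingChannel(meetingId: string) {
  const socket = new Socket("/socket");
  socket.connect();
  const channel = socket.channel(`meeting:${meetingId}`);
  channel.join();
  return channel;
}

export function* remoteEventsSaga(channel: ReturnType<typeof joinMeetingChannel>) {
  // Listen to every dispatched action and forward it to the server unmodified.
  // (In practice some filtering is needed, e.g. to skip actions that were
  // themselves received from the server; the `meta.remote` flag here is an
  // illustrative way of marking the ones we want to send.)
  yield takeEvery("*", function* forward(action: RemoteAction) {
    if (action.meta?.remote) {
      yield call([channel, channel.push], "new_event", action);
    }
  });
}
```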

The server side in Rapport is our source of truth: it is where the meeting's event stream goes to and comes from. We persist events as they are sent up to the server so that any new client can receive the latest state of the meeting on join.

Communication between the client and server happens over WebSockets to keep everything as responsive as possible. This avoids HTTP round-trip overhead for the many small pieces of data going back and forth between the client and server.

Elixir

An example of how we persist Redux actions to Postgres using Phoenix Channels

Elixir is a great fit for us for a few reasons. The main motivating factor was its performance and cost footprint: we wanted to start from a position of cost efficiency while keeping the ability to scale the backend if required. We handle persistent connections for every meeting, so there is a lot more going on than in a traditional request/response API model. Luckily for us, Elixir (more specifically the Phoenix web framework) has been quite open about its performance metrics and benchmarks.

Although we were mainly concerned with choosing a stable base for our backend, there were also some additional benefits to choosing Elixir and Phoenix. The developer experience, for example, is almost unmatched by any other language in terms of standard tooling and features.

Elixir is a functional language which guides developers towards writing code that is naturally more testable and maintainable. It comes with a great standard library, and because it runs on the Erlang VM it is rock solid in terms of reliability. (If you want to know more about Elixir as a language, we recommend checking out the homepage and docs if you learn best by reading, or Elixir Sips if you prefer to watch short videos.)

Handling Meeting Joins

People can join and leave a meeting as they please, and we need to give new participants the latest version of the store to work from.

We achieve this by sending down a payload on join containing all the actions to date; these are then replayed in order through the reducers, which gives us the latest state of the meeting.

This is great because it allows us to re-use the exact same logic that would have been applied if the event had been received during the normal course of the meeting. There are some time-sensitive events which we choose not to replay, such as reactions to discussion items: they aren't important to the state of the meeting but need to happen in real time.
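
Here is a sketch of how that replay-with-filtering could look on the client. It is a variant of the join helper from the earlier saga sketch; the shape of the join reply, the store argument and the list of non-replayable action types are assumptions:

```typescript
import { Store } from "redux";
import { Socket } from "phoenix";

// Time-sensitive action types that only matter live; skipped on replay.
// "REACTION_ADDED" is an illustrative name.
const NON_REPLAYABLE = new Set(["REACTION_ADDED"]);

// On a successful join, the server replies with the meeting's events to date.
// We replay all but the time-sensitive ones through the normal reducers,
// simply by dispatching the events into the Redux store in order.
export function joinAndReplay(meetingId: string, store: Store) {
  const socket = new Socket("/socket");
  socket.connect();
  const channel = socket.channel(`meeting:${meetingId}`);

  channel.join().receive("ok", ({ events }: { events: MeetingEvent[] }) => {
    events
      .filter((event) => !NON_REPLAYABLE.has(event.type))
      .forEach((event) => store.dispatch(event));
  });

  return channel;
}
```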

Scalability

We believe this approach scales well to very large applications. Each component is isolated enough that it would be quite easy to migrate the data store from Postgres to something more log-based, such as Apache Kafka, for increased performance.

On the Elixir side of things we can scale horizontally to support more WebSocket connections and lower latency. The memory footprint of the application is very low (somewhere in the 12–25 MB range), so it feels like it would work very well as part of a Kubernetes cluster.

Our frontend is hosted on S3 and fronted by CloudFront, which should give us no scaling problems for the foreseeable future.

We're really happy with this approach. It feels very modular and lean, and it gives us the ability to iterate quickly and to analyse our un-aggregated event stream as we go.

See it in Action

Check out the product demonstration above to see an example of how meetings play out in real time. If you want to see this in action for yourself, head over to Rapport and start a meeting (no sign-up required!), then share it with some friends or colleagues, or just open it in a second private browsing tab to see how it handles multiple real-time connections.

Also, if you want to learn more about Event Sourcing, Elixir and Phoenix, see the References and Further Reading section below.

Thanks for taking the time to read this article. We publish regular tech blogs so if you like this, please ❤ and follow.

References and Further Reading
