Unidirectional Data Flow? Yes. Flux? I Am Not So Sure.
As is our custom, let’s talk about the future with a few colored rectangles and arrows. But first, what’s the problem?
I spent four years working on a sprawling Backbone app, and while there is plenty to love about Backbone, the Facebook crew has it right that Models and Views observing each other in a many-to-many way creates an absolute circus. Sure, cascading updates have a performance impact, but it is really the resulting application complexity that kills.
Meanwhile, I have been rapidly gravitating toward a functional style, preferring immutable data structures and pure functions to classes and model or collection methods. But we still need CRUD, and we need a pattern for flowing data updates to the UI.
Flux: A Pattern for Unidirectional Data Flow
As a complement to React, Facebook has proposed Flux as a starting point, though they describe it more as a pattern than a library, and at the recent ReactConf they already hinted at its impending deprecation. But first things first: here is their relatively well-known diagram of unidirectional data flow through Flux:
Views trigger actions, which are passed through the dispatcher, which is responsible for updating stores, whose data is bound to views.
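That loop can be sketched in a few lines of plain JavaScript. To be clear, this is a toy illustration of the pattern, not Facebook's Dispatcher implementation; all the names here (`dispatcher`, `todoStore`, the action types) are made up for the example:

```javascript
// Minimal Flux-style loop: action -> dispatcher -> store -> view.
var dispatcher = {
  callbacks: [],
  register: function (cb) { this.callbacks.push(cb); },
  dispatch: function (action) {
    this.callbacks.forEach(function (cb) { cb(action); });
  }
};

var todoStore = {
  todos: [],
  listeners: [],
  subscribe: function (listener) { this.listeners.push(listener); },
  emitChange: function () {
    var todos = this.todos;
    this.listeners.forEach(function (l) { l(todos); });
  }
};

// The store updates itself in response to dispatched actions...
dispatcher.register(function (action) {
  if (action.type === 'TODO_ADDED') {
    todoStore.todos.push(action.todo);
    todoStore.emitChange();
  }
});

// ...and the view re-renders from store data, never the reverse.
var rendered = [];
todoStore.subscribe(function (todos) { rendered = todos.slice(); });

dispatcher.dispatch({ type: 'TODO_ADDED', todo: 'write the post' });
```

The point of the ceremony is that the view never writes to the store directly; every mutation funnels through the dispatcher, so data only ever moves in one direction.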
The goal is to avoid that unholy mess we’ve experienced with Backbone Views and Models observing each other in that many-to-many web of chaos and its cycle of violence into existential despair.
The problem with the Flux diagram above, however, is that it makes no mention of interactions with external data sources — namely, “the server.” If you dig through the examples a bit, you find the authors distinguishing between view actions and server actions. In reality, the round trip experience inclusive of the server looks a bit more like this:
It is the view action that is sending information in two directions (notifying the dispatcher that a request has taken place while sending the request itself to the server). A little more complex, but we still have unidirectional data flow, so I am still listening.
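That two-direction fan-out looks something like the sketch below. `dispatch` and `sendToServer` are stand-ins I've invented for the example (a real app would use an actual Dispatcher and XMLHttpRequest/fetch), but the shape is the point: one user gesture, two outbound notifications:

```javascript
// A "view action" that fans out in two directions:
// it notifies the dispatcher immediately AND sends the request to the server.
var log = [];
function dispatch(action) { log.push(action.type); }

function sendToServer(message, onResponse) {
  // Stand-in for an Ajax call; responds synchronously here for simplicity.
  onResponse({ ok: true, saved: message });
}

function sendChatMessage(message) {
  // Direction 1: tell the dispatcher a request is in flight.
  dispatch({ type: 'MESSAGE_PENDING', message: message });
  // Direction 2: send the request itself to the server.
  sendToServer(message, function (response) {
    // The server's reply re-enters the system as another action.
    dispatch({
      type: response.ok ? 'MESSAGE_SAVED' : 'MESSAGE_FAILED',
      message: message
    });
  });
}

sendChatMessage('hello');
```

Note that the server response does not touch the store directly either; it comes back in as just another action, which is what keeps the flow unidirectional.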
Enter Relay and GraphQL
The challenge emerges as the hierarchy of React views grows. With each view requiring different information, it becomes a code maintenance nightmare to funnel all of the data needs up to controller views at the top, which ultimately are responsible for server and store interactions.
So the next iteration seems to be Relay and GraphQL (introduced at this year’s first ReactConf), an attempt to create a DSL for aggregating data-request needs.
You can find a killer talk about Relay and GraphQL here:
Perhaps this is a serious enough issue at scale, but for the rest of us plebeians it seems like a lot of work to consolidate data requests. Let’s not forget, React components can request their own data directly.
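Here is a React-free sketch of that idea: a component that owns its own data needs, with nothing funneled down from a parent. `fetchUser` and `UserBadge` are hypothetical names, and `mount`/`render` stand in for the real lifecycle hooks (`componentDidMount`, `render`) a React component would use:

```javascript
// A component that requests its own data in isolation.
function fetchUser(id, cb) {
  // Stand-in for an Ajax call like GET /users/:id; synchronous here.
  cb({ id: id, name: 'Ada' });
}

function UserBadge(userId) {
  var state = { user: null };
  return {
    mount: function () {
      // Analogous to componentDidMount: fetch on mount, no parent involved.
      fetchUser(userId, function (user) {
        state.user = user; // analogous to setState
      });
    },
    render: function () {
      return state.user ? state.user.name : 'loading';
    }
  };
}

var badge = UserBadge(42);
badge.mount();
```

Each component is a self-contained read path: it knows its endpoint, its loading state, and its rendering, and composing a page out of them requires no central data choreography.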
Another Kind of Unidirectional Data Flow
To that, the original gangster of unidirectional data flow:
More to my taste: what if we ease up on GETs from the server, leaning more heavily on what are essentially materialized views in serving layers like Redis and Firebase? In the case of Firebase (or home-grown solutions like ElephantDB delivered over WebSockets; you get my meaning) you get the added benefit of real-time data bindings.
Of course, the trick is to enforce unidirectional data flow by convention. Handlers within the React views make Ajax requests to a server which alone updates the caching layer.
Rather than having a hierarchy of React components funneling their data needs up to a view controller, why not have each React component worry about the data it needs in isolation, each its own punchy little titanium-plated micro-MVC… (I wouldn’t want to run into a pack of these composable bad mothers on the wrong side of the train tracks.)
Now we are getting somewhere. The result looks something like this:
But if we are going to build an army of React components on the front end supported by an army of micro-services on the backend, we won’t have one central server, and we don’t have one central caching layer, so this is a bit academic, isn’t it?
Nothing Says Unidirectional Data Flow Like the Event Stream
Enough foreplay. Here’s where I think we’re going to land:
I posit that in the emerging modern web application architecture:
- The Event Stream (i.e., logs from real-time distributed messaging systems such as Kafka or Kinesis) will become the source of record for historical data
- Real-time, asynchronous computation (such as Storm or Lambda) will, among other things, populate materialized views
- Composable UI components (React currently the obvious choice) will consume data from materialized views and push changes to the event stream (via backend APIs and micro-services performing validation, etc.)
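Stripped of the infrastructure, the shape of that pipeline is a fold over an append-only log. In the sketch below the log and the materialized view are both in-memory stand-ins (in production the log would be Kafka/Kinesis, the consumer Storm or Lambda, and the view Redis or Firebase), and the event shape is invented for the example:

```javascript
// The event stream as source of record, with a consumer
// folding events into a materialized view the UI reads.
var eventLog = [];          // append-only source of record
var materializedView = {};  // what the UI binds to

function publish(event) {
  eventLog.push(event);
  consume(event);           // in reality, asynchronous via Storm/Lambda
}

function consume(event) {
  // Fold each event into the current view state.
  if (event.type === 'BALANCE_CHANGED') {
    var current = materializedView[event.account] || 0;
    materializedView[event.account] = current + event.delta;
  }
}

// UI components write to the stream (via an API) and read from the view.
publish({ type: 'BALANCE_CHANGED', account: 'alice', delta: 100 });
publish({ type: 'BALANCE_CHANGED', account: 'alice', delta: -30 });
```

Because the log is the source of record, the materialized view is disposable: lose it, and you can rebuild it by replaying the log from the beginning.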
The key to success for this model is going to be round trip latency. Heroku suggests that, in the case of web requests, anything over 500ms should be avoided. Agreed.
This is achievable for reads in the model above. Similarly, this is achievable for the writes themselves to the APIs and micro-services. The real question is the latency from a successful write to the event stream through to the caching layer and back to the same view.
To wit, though the actual web request may have long since completed, for how long is the UI showing a pending state? Or if you optimistically display success — a disgusting, deplorable practice in my view — how long until the user finds out the chat message she thought was accepted…really wasn’t?
For the prototyping we have done so far (React => Express via HTTP => Firebase => React via the ReactFire mixin), we have no problem, but I am eager to measure how that changes as we grow and lean more heavily on messaging. For now `console.time` is working fine, but I still need to figure out how to instrument this sort of thing in New Relic, where we can track it like adults.
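For what it's worth, the ad-hoc measurement is just bracketing the round trip with a matched pair of `console.time`/`console.timeEnd` labels. `writeAndWaitForView` below is hypothetical: it should invoke its callback only once the writing view sees its own write come back through the caching layer (here it completes synchronously so the sketch is self-contained):

```javascript
// Timing the write -> event stream -> cache -> view round trip.
function writeAndWaitForView(done) {
  done(); // stand-in; the real round trip is asynchronous
}

var started = Date.now();
var elapsed = null;

console.time('write-to-view');           // label must match timeEnd below
writeAndWaitForView(function () {
  console.timeEnd('write-to-view');      // logs "write-to-view: Nms"
  elapsed = Date.now() - started;
});

// Heroku's guideline: web requests over 500ms should be avoided.
var withinBudget = elapsed !== null && elapsed <= 500;
```

The number worth watching is not the request's own duration but `elapsed`, the full loop back to the same view, since that is how long the pending state lingers on screen.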
More on that to come, but in the meantime I hope this helps!
If you found this interesting, please recommend it to others!