Why is Flow Important?

James Urquhart
Published in Digital Anatomy
Nov 26, 2018

I was honored to present on Flow Architectures at the O’Reilly Radar conference in San Francisco recently. If possible, I will post the video when it becomes available, but in the meantime I’ve put the slides here. The presentation covered what Flow is and what problems it creates for us to solve (based on my Five Facets of Flow Strategy post). It was the first time I had presented specifically on Flow, and as such I learned a lot about what works and what doesn’t.

While I got some good feedback about the Flow Architecture itself (which I will discuss in a separate post), the most important piece of feedback was that the presentation failed to make clear why Flow Architectures are important. That was valid criticism, so I thought I’d take a post to talk through why Flow will change the way we do things so much.

How Flow differs from past architectures

#1: From “stacks” to “graphs”

I wrote an earlier post that talked through the basic patterns of Flow Architectures (which, again, needs an update based on feedback — mostly that queues are not a necessary component for reasons that should be clear later in this post), in which I began with:

“As I look forward to the coming evolution of application architectures from client-server’s “stack” approach to a data flow centric model, I think it is important to have a ‘point on the horizon’ to shoot for.”

So, the first big difference between traditional distributed architectures and Flow is a movement from a “stack” model (like MVC or multi-tier architectures) to more of a graph model, in which mobile apps, web apps, services, data sources, event sources, and most other deployable components act like nodes in a graph.

It will bear some resemblance to what software engineers mean when they say “peer-to-peer”, but with some of those peers being large-scale data sources and services, large-scale messaging queues and pipes, and massive infrastructure-as-software elements, all running in data centers (and mostly in public clouds).

The graph won’t be “planned” or “designed”, but will evolve organically as different interests add or remove software or links between software components. This is also different from architectures that grew out of the client-server tradition, where change management was attempted through increasingly large, complex “applications” treated as a single deployable. Today we are seeing the challenges of both managing change within such an application itself and managing its integrations with other applications and supporting systems. Thinking in terms of a tightly controlled stack that handles dozens or hundreds of features limits your scale.

Flow enables smaller deployables, a greater breakdown of functionality among deployables, and greater flexibility in how integration takes place between those deployables. To be clear, I think there is still a pattern within that graph: “front ends” that are mostly dependent on other services, a class of services that handles the calculations needed for both data analysis and event routing, and a class of data services that depend on very little except the data stores themselves. That said, the rules for how these services are created and linked together will evolve greatly over the coming decade, and that prediction may turn out to be pretty naive.
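To make the graph idea a bit more concrete, here is a deliberately tiny sketch. The bus, topics, and node names are all invented for illustration; the point is only that deployables publish and subscribe to each other’s events rather than being wired into fixed tiers of a stack.

```python
# A minimal, hypothetical sketch of the "graph" idea: deployables register as
# nodes around an event bus and subscribe to each other's streams, rather than
# being hard-wired into a fixed tier. Topic and node names are illustrative.
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    """In-memory stand-in for whatever transport links the graph's nodes."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()

# A "data service" node emits events; it knows nothing about its consumers.
def emit_order(order_id: str, amount: float) -> None:
    bus.publish("orders.created", {"order_id": order_id, "amount": amount})

# A "calculation" node and a "front end" node each subscribe independently,
# so adding or removing an edge never requires redeploying the whole stack.
bus.subscribe("orders.created", lambda e: print("analytics saw", e))
bus.subscribe("orders.created", lambda e: print("dashboard saw", e))

emit_order("A-1001", 42.50)
```

Adding a third consumer here is a one-line change to the graph, not a change to the producer, which is the flexibility the stack model struggles to offer.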

#2: From “historical” to “real time”

Perhaps the biggest impact of Flow will be the ways it enables greater real-time processing of data streams (for some value of “real-time”). This, in turn, will accelerate the use of data in decision making and automation across business, government, consumer, and technology markets. The result will be entirely new business opportunities that we likely can’t even dream of today, as well as new challenges to many of the checks and balances we depend on in human society.

A grand statement, I know, but this is why the impact of Flow will likely be greater than even that of cloud computing. To demonstrate why I believe this is so, let me use an example from my days at SOASTA, the performance management and Real User Measurement (RUM) vendor that was acquired by Akamai in April of 2017.

SOASTA put a lot of effort into enabling close-to-real-time (sub-10-second delay) analysis of the massive amount of performance data it collected from browsers all over the world. More importantly, it succeeded in finding ways to correlate browser performance (such as various page load timings) with business metrics such as conversion or revenue velocity. The team also had the genius idea to put together a display product that would show that data (and related visualizations, such as globes with real-time page experience data and constantly updating charts) in one place.

Early versions of those displays were installed at a couple of friendly customer sites, in what amounted to a “POC” (performance operations center). Because the business data was being put into context in real time, within 48 hours at both customers an executive in charge of web marketing approached the team asking if they could station employees at desks within view of the display. In other words, user experience data bridged the gap between business imperatives and technical imperatives. It was amazing to see.

Real-time processing of data is important for more than human interpretation, of course. The financial industry has for years been using real-time stock data (in this case at millisecond delay scales) to enable automated arbitrage of trades, also known as high-speed trading. Being able to automatically take action as certain patterns emerge in a data stream has already revolutionized a key part of our world, and the idea is spreading quickly to healthcare, supply chain management, and even government. It doesn’t outright replace data lakes and “big data” analysis, but it does subsume many of those use cases.
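To illustrate the shape of that kind of automation (and not any real trading system), here is a toy sketch that scans a simulated price stream and emits an action whenever a spread pattern appears. Every number, threshold, and name is made up for illustration.

```python
# A toy sketch of "act when a pattern emerges in a stream": watch simulated
# prices from two venues and trigger an automated action whenever the spread
# between them exceeds a threshold. All values here are invented.
from typing import Iterable, Tuple

def arbitrage_signals(ticks: Iterable[Tuple[float, float]],
                      threshold: float = 0.05):
    """Yield an action whenever the price gap between venue A and venue B is wide."""
    for price_a, price_b in ticks:
        spread = price_a - price_b
        if abs(spread) > threshold:
            side = "buy B / sell A" if spread > 0 else "buy A / sell B"
            yield {"spread": round(spread, 4), "action": side}

ticks = [(100.00, 100.01), (100.10, 100.02), (100.12, 100.20)]
for signal in arbitrage_signals(ticks):
    print(signal)  # the action fires within the stream, not from a nightly batch report
```

The interesting part is not the arithmetic but the timing: the decision happens as the event arrives, which is exactly what batch-oriented “big data” pipelines cannot do.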

#3: From “disparate” to “holistic”

As we evolved distributed systems from the very early days of client-server to the API-centric, multi-tier, built-for-scale applications of the early part of the previous decade (more hyphens means more capability?), we continued to largely assume we could operate each individually deployed application as a stand-alone entity. We set up monitoring for that application, created operations rules for that application, planned infrastructure updates around that application, and so on.

Cloud, however, forced a shift in thinking, as we — by design — assigned responsibility for the infrastructure to an entirely different company without any explicit coordination with our own application timelines. Furthermore, we found that it was easier to scale if we deployed different components of a complex application individually, making monitoring and management an effort across different teams, technologies, and even companies.

Events available from cloud provider infrastructure are going to push this dynamic even further. Instead of identifying operations tools and practices for each individually owned deployable, the default is going to be to integrate that component into a mesh of tools and signals that span not only your company’s assets, but the Internet itself.

The availability of these signals (and supporting tools) will change the way we think about software management. By default, development and operations will increasingly be ongoing learning organizations that continuously observe, orient, decide, and act on context provided by hundreds, if not thousands, of relevant signals. Sound insane? It would be if it were just humans performing the OODA loop. But events are going to allow a new generation of systems entrepreneurs to build solutions that are complex-systems aware, intelligent, and able to simplify the practice immensely. Again, refer to the Five Facets of Flow post for more of my thoughts here.

(Real-time delivery, by the way, is one of the reasons why queues may not always be required in a Flow architecture: if latency is of the utmost importance, you may use a different mechanism to deliver events, including direct calls to APIs elsewhere. There are people much smarter than I am writing about this topic, however.)
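As a rough illustration of that direct-delivery alternative (the endpoint URL and event shape below are hypothetical), an event can be pushed straight to a consumer’s HTTP API rather than parked on a queue:

```python
# A minimal sketch of the latency trade-off: when a queue's buffering delay is
# unacceptable, an event can be delivered by calling the consumer's API directly
# (webhook-style). The endpoint is hypothetical; only the standard library is used.
import json
import urllib.request

def deliver_event_direct(event: dict, endpoint: str) -> int:
    """Push one event straight to a consumer's HTTP endpoint, with no queue in between."""
    request = urllib.request.Request(
        endpoint,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=2) as response:
        return response.status  # the consumer acknowledges synchronously

# deliver_event_direct({"type": "price.tick", "value": 100.12},
#                      "https://consumer.example.com/events")
```

The trade-off, of course, is that without a queue there is no buffer if the consumer is slow or offline, which is why the choice depends on how much latency matters.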

The Value of Flow

Understanding these elements of change now allows me to directly address the value that events at scale will provide to the technology community.

At its heart, the value generated here is best described as “speed to value”. The next decade will be spent exposing elements of our economy as events and APIs that enable applications and services to respond, in real time, to what is happening in other applications and services.

As new data is made available as event streams, and subscription to those streams becomes easier, more standard, and more secure, a myriad of new automation opportunities becomes possible. The more mundane uses include providing live transaction data to partners and regulators, or replacing existing batch reporting with real-time analytics. More interesting ideas include producers of valuable data (such as retail transaction data) selling that data to interested consumers such as vendors, industry analysts, or yet-to-be-created startups.
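As a small sketch of that batch-to-real-time shift (the event shapes and class name are invented for illustration), a reporting metric can simply be updated as each transaction event arrives instead of being recomputed overnight:

```python
# A small sketch of "replace batch reporting with real-time analytics": instead
# of summing yesterday's transactions in an overnight job, maintain a running
# total as each transaction event arrives. The stream source here is simulated.
from dataclasses import dataclass

@dataclass
class LiveRevenue:
    """Continuously updated metric a partner or regulator could subscribe to."""
    total: float = 0.0
    count: int = 0

    def on_transaction(self, event: dict) -> None:
        self.total += event["amount"]
        self.count += 1

metric = LiveRevenue()
for event in [{"amount": 19.99}, {"amount": 5.00}, {"amount": 120.00}]:
    metric.on_transaction(event)  # in production this would be a subscription callback
    print(f"live revenue: {metric.total:.2f} across {metric.count} transactions")
```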

The key here is that it will become not only orders of magnitude simpler to connect data to where it is needed most, but that the existence of these data streams will force those relying on that data to respond faster or risk being severely disrupted. For example, if you are an insurance company relying on traditional news sources for data about severe disasters, you might want to be among the first to subscribe if regional first responders begin posting data in real time in a secure, easily processed stream.

I hope this post goes a long way toward giving you a mental model for why Flow will be valuable to developers and organizations in the future. Obviously, there is much that needs to evolve from where we are today to create a clearer picture. On the other hand, there is much that complexity science, pioneering examples of Flow, and other sources can tell us about how Flow is emerging today. This is what I hope to write more about in the near future.

As always, I write to learn, so please provide your comments, disagreements, corrections, or other insight in the comments below, or on Twitter, where I am @jamesurquhart.
