Adding Context To The Wardley Map of Data Flow

James Urquhart
Published in Digital Anatomy
Jan 12, 2017

This post is a continuation of a process that I started in previous posts, including The Data Flow Value Chain and A Simple Wardley Map of Data Flow. If you haven’t been following the series, please take the time to read those posts first.

After completing a Wardley Map, where does one go next? How do you extract value from the jumble of components and value relationships depicted below?

Finding any information that would help one actually make smart strategic decisions seems almost impossible, right?

Climatic Patterns

The secret sauce in analyzing Wardley Maps comes from the process Simon Wardley outlines in wardleymaps, which was strongly influenced by the thinking of Sun Tzu in The Art of War. Read his post, On Being Lost, for the thought process that got him there; the basic concept is captured in the following diagram:

We now have a first iteration of purpose (“real-time business automation”) and a decent map of the “landscape” (the Wardley Map above), so the next step is to evaluate the climatic patterns that will affect our analysis.

But, what do we mean by “climatic patterns”? From Simon’s Exploring the Map:

Climatic patterns are those things which change the map regardless of your actions. This can include common economic patterns or competitor actions. Understanding climatic patterns are important when anticipating change. In much the same way Chess has patterns which impact the game. This includes rules that limit the potential movement of a piece to the likely moves that your opponent will make. You cannot stop climatic patterns from happening though, as you’ll discover, you can influence, use and exploit them.

Simon gives several examples in his post, but what I have found most useful is his fairly thorough categorization of climatic patterns provided in the following table:

Let’s choose a few of these, and see what our map might suggest those patterns will do to the way the market shapes up over time.

Everything evolves

The first pattern is the most directly obvious from the map format itself. Every technology component evolves over time from genesis to custom to product to utility/commodity. The pace and form of that evolution are almost impossible to predict, but the fact that the evolution will happen at some point can be counted on.

Here, our map is only somewhat interesting, because most of the components are already utility or commodity elements. Data flow is being built in the cloud (and is, in large part, enabled by cloud), so there isn’t much opportunity to disrupt through evolution. In fact, it could be argued that each utility component in our value chain is already an evolution from an earlier product-based approach towards a similar problem.

The clear exception to this is the set of custom deliverables built upon these services: the functions, data, protocols, and policies that define a specific data flow system. More on these as we go on.
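To make that distinction concrete, here is a minimal sketch (plain Python, not tied to any particular vendor's API) of the kind of custom deliverable I mean. The stream, the function runtime, and the delivery mechanism are assumed to be rented utilities; the event schema, business rule, and policy check below are hypothetical stand-ins for the pieces a business actually writes itself.

```python
# Hypothetical custom deliverable in a data flow system.
# The stream, function runtime, and delivery are assumed to be rented
# utilities; only the schema, rule, and policy below are custom.

from dataclasses import dataclass


@dataclass
class OrderEvent:
    """Custom event schema for this particular flow (illustrative only)."""
    order_id: str
    amount: float
    region: str


# Custom policy: which events this flow is allowed to act on.
ALLOWED_REGIONS = {"us", "eu"}


def handle(event: OrderEvent) -> dict:
    """Custom function a utility platform would invoke once per event."""
    if event.region not in ALLOWED_REGIONS:
        return {"action": "reject", "reason": "policy: region not allowed"}
    # Custom business rule: flag large orders for human review.
    if event.amount > 10_000:
        return {"action": "review", "order_id": event.order_id}
    return {"action": "approve", "order_id": event.order_id}


if __name__ == "__main__":
    print(handle(OrderEvent("o-123", 12_500.0, "us")))
```

Everything around that handler (ingesting the stream, scaling the invocations, delivering the result) is the commodity part of the map; the handler itself is the custom part that carries the business's differentiation.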

Characteristics Change

If, indeed, our utilities evolved from earlier approaches, how have they fundamentally changed from their predecessors? I see the answer as being made up of several elements:

  1. Less product, more utility. Businesses want less and less to handle their own “plumbing”. Data capture, processing, analytics and visualization/alerting should all be simple services to assemble as needed to address a problem.
  2. Less burden, more scale. A huge factor in how these services have changed from earlier attempts at handling streams of data is the scale at which they can operate with similar or even less expertise required to make them operational for a given problem space.
  3. Less context, more composition. As I noted in a post some time ago, in the long run composable architectures beat out those that force every use case to fit into a single solution model: enable the developer and/or user to define the model, assembling the pieces they need as they go (see the sketch below).

These characteristics are what make complex data flow applications a) possible, and b) flexible enough to meet a wide variety of needs.
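As a rough illustration of “less context, more composition”, consider the sketch below. The stage names (capture, enrich, alert) are hypothetical and don't correspond to any real service; the point is that each stage is an independent piece and the developer decides how to assemble them, rather than fitting the problem into a single pre-built solution model.

```python
# Sketch of composition: independent stages assembled into a pipeline
# by the developer, rather than a single fixed solution model.
from typing import Callable

Stage = Callable[[dict], dict]


def pipeline(*stages: Stage) -> Stage:
    """Compose stages left-to-right into one callable flow."""
    def run(event: dict) -> dict:
        for stage in stages:
            event = stage(event)
        return event
    return run


# Hypothetical stages standing in for utility services.
def capture(event: dict) -> dict:
    return {**event, "captured": True}


def enrich(event: dict) -> dict:
    return {**event, "score": event.get("value", 0) * 1.1}


def alert(event: dict) -> dict:
    if event["score"] > 100:
        event["alert"] = "threshold exceeded"
    return event


# Assemble only the pieces this particular problem needs.
flow = pipeline(capture, enrich, alert)
print(flow({"value": 120}))
```

Swapping, adding, or removing a stage changes the flow without rewriting the whole system, which is the agility advantage composition has over contextual, all-in-one designs.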

No one size fits all

That flexibility to meet widely varying needs is critical. So is the flexibility to handle everything from one data event a month to one hundred thousand a second. But perhaps equally important is the ability to support both widely varying experimental activities and regulated, SLA-driven production deployments.

swardley talks a lot about needing a variety of skill sets within an organization to address varying activities, ranging from research and invention (pioneer), to organization and productization (settler), to industrialization and standardization (town planner). In many ways, it is important to realize that for any given technical domain (business automation, for instance), there will be an equal need for components that can span these practices.

(This is yet another reason why composable systems outlast contextual ones, by the way: composable systems can be reassembled into a wider variety of processes with greater agility.)

Assumptions made in the architecture and development (or operations) model about who will use the system, for what purpose, and with what policies and restrictions are what get a software architecture into trouble. While I think this is one area where there is work left to be done, I also think it's clear that the “big three” cloud vendors, at least, get this.

Efficiency enables innovation

This is where things get really interesting. As the nature of these services gets better defined and more consistent from release to release (which, let's face it, may already be the case), the efficiency gained by taking a data flow approach will not only make some (most?) existing use cases faster and more easily updated, but will likely enable new use cases that will begin to drive new efficiencies.

At the heart and soul of the technology innovation curve that Wardley uses as a basis for his work is the simple concept that we can anticipate what will happen, but not necessarily exactly how or when it will happen. But a prerequisite for new major platforms is the sufficient evolution of the underlying value chain to support them.

So, as we look at “real-time business automation” and data flow, we are looking for ways in which the underlying value chain has (or has not) evolved enough to make new approaches technically and economically viable.

I think one can certainly argue that the services enabling real-time business automation are, in fact, well evolved (almost out of the gate). So, where do the innovations come from? Perhaps it starts with functions, data, protocols and policies…

Higher order systems create new sources of worth

Building on “efficiency enables innovation”, we can also see how these new innovations allow for some new value components to trigger new market opportunities. Perhaps a specific data analytics function makes certain new insights trivial for stock brokers. A new streaming protocol helps make supply chain financing almost completely automatic. A standard security policy set enables safer transactions with federal government programs.

In short, the higher order systems of worth that I think data flow will create are major advancements in the goal I stated for “real-time business automation”: the further automation of our economy.

Which gets to a key point that applying this climatic pattern brings to the forefront. It is now clear that I (purposefully) chose too general a “user need” at the beginning of the process to arrive at any specific strategic opportunities in this space. But it's great for making one thing clear:

The businesses that will create new value through data flow will almost certainly add value on top of the existing utility infrastructure.

There are whole posts to be written about that topic, and much of the remaining analysis of our map will reinforce this, but those who have read this far get a preview of this very important point.

No choice on evolution

But what happens if functions, protocols, policies, and data don't evolve beyond being custom in the vast majority of use cases? Well, this is where technology evolution has been very consistent over the last century or two. If a technology has high general utility, it will evolve from genesis to custom to product (or rental) to utility (or commodity). The terms might differ across technologies, but the general concept has been consistent.

(Simon Wardley did a ton of research on this point. See Everything Evolves from 2012, for a good starting point.)

The key thing is that these technologies will evolve, so how can you take advantage of that? This is where gameplay becomes important, which we will explore in depth in a later post.

Past success breeds inertia

So, if real-time is the future, the utilities are already there to support it, and much of enterprise software is perfect for this model, why don’t the existing enterprise software companies just take over and provide their crown jewels as data streams, functions, protocols and policy?

This is one of the key climatic patterns that opens opportunity for ambitious people to make money from any transition like this. They can’t. Or, stated more accurately, to do so would disrupt existing cash streams and priorities, put the accuracy of future sales predictions at risk, and require a retooling of not just the architecture, but of marketing, sales, business models, performance metrics and so on. It’s a huge endeavor.

Furthermore, if you ask enterprise customers today whether they want their favorite ERP application as a composable set of core services consuming a real-time data stream in the cloud, they'll likely ask “why?” The demand for traditional software products and large-grained, contextual software services will likely remain very high, relative to the models that data flow enables, for years to come.

This inertia creates the opportunity for risk takers to keep taking shots at the next step of evolution until the right approach and market readiness align. Then, the inevitable evolutionary step will occur (seemingly “overnight”) and big, smart businesses will be caught unable to stop the disruption.

How to counter this effect? Microsoft has the right idea with how they’ve approached building cloud services that completely disrupt their traditional software business:

  1. They saw the disruption coming, maybe not by using the mapping techniques we are using here, but through a thorough and honest evaluation of the changing world around them.
  2. They invested in research and development projects that created services that would ultimately disrupt their own products, sales model, and even channels.
  3. They continue to cycle through evaluating, experimenting, monetizing, and productizing the larger set of technologies required to make the new model possible.

Remember, Microsoft took a huge hit in the stock market (briefly) for being honest about the future of Windows and Office software. But now they are growing quickly, and are one of the leading cloud utilities. They look like geniuses now.

And, just to close the loop, notice the emphasis they are beginning to put on data flow: stream processing, functions, and so on. This is not a side project for them (nor for Google or Amazon).

Where to go from here

Our next step is to orient our plan of attack through applying some basic doctrine, or principles that can be used in a variety of business situations. I must admit that I’m still wrapping my head around some of these concepts, but I think you’ll see that these principles will help us pick a viable “point on the horizon” to shoot for.

In the meantime, as I’ve said many times before, I write to learn. I’d love your feedback, either in the response section below, or on Twitter where I am @jamesurquhart.
