A Simple Wardley Map of Data Flow

James Urquhart · Published in Digital Anatomy · Dec 30, 2016

Last week I demonstrated the process for defining a “simple” value chain for “real time business automation”, which is my code phrase for stream processing in a business setting. This week, I want to turn my attention to building a Wardley Map from that value chain so that we can begin to get clear(er) about where the opportunities lie.

If you are unfamiliar with Wardley Mapping, take a look at Simon Wardley's posts on the topic. He's basically writing "the book" on strategic mapping, chapter by chapter, right here on Medium. It's definitely a series to follow closely. Start at the beginning. This post brings us only to Chapter 2. There is much fun we will have in later posts, I assure you…

Where we left off…

The diagram we ended up with in the last post looks like this:

As I worked on the next steps for this post, I discovered some issues that I wanted to resolve before I moved forward:

  1. While I was sharp enough to create the custom elements that are built on both Digital Operations (policies) and Streaming Integration (streaming protocols), I failed to do so for perhaps the most important element of our value chain: the functions built on the Functional Platform.
  2. Several of the names were too generic, IMHO. I needed to be more specific about the capabilities consumed. Some of the name changes might be slightly controversial (such as insisting data storage is a utility), but I would argue that exceptions will actually be rare among the greenfield applications built in this solution space.

So, here’s the new diagram:

Creating the first map

Mapping the value chain to a Wardley Map is basically an exercise of moving the value chain elements to a graph that adds the state of each element's evolution. Evolution is measured on a scale that runs from "Genesis" to "Custom" to "Product" to "Utility/Commodity". Simon's explanation in his Chapters 1 and 2 does much more justice to why this scale reflects technology evolution patterns than I can. Read those chapters if you haven't already.

Now, there’s a lot to our “little” value chain, so let’s start by simply mapping each element to an evolutionary position. We’ll add the relationships between the elements later.

As you can see, there is a lot here. Again, you may argue with the exact placement of each element, and if so I welcome your feedback. But I believe this is pretty damn good, given the “squishy-ness” of some of these concepts.
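To make the kind of positioning I'm describing a bit more concrete, here is a rough sketch of how a few of these elements might be expressed as positions on the evolution axis. To be clear, this is a toy illustration of mine, not part of the map itself: the numeric positions are assumptions, and the stage boundaries are just an even split of a 0.0–1.0 axis.

```python
# A minimal sketch: map elements as positions on a 0.0-1.0 evolution axis
# (Genesis -> Custom -> Product -> Utility/Commodity). Positions are
# illustrative assumptions, not the canonical map.

EVOLUTION_STAGES = ["Genesis", "Custom-built", "Product (+rental)", "Utility/Commodity"]

def stage(position: float) -> str:
    """Translate a 0.0-1.0 evolution position into its named stage."""
    index = min(int(position * len(EVOLUTION_STAGES)), len(EVOLUTION_STAGES) - 1)
    return EVOLUTION_STAGES[index]

# Rough, debatable positions for a few of the elements discussed below.
elements = {
    "Real-time data":      0.30,  # highly custom, new sources appearing constantly
    "Data capture":        0.80,  # moving from product toward utility
    "Policies":            0.35,  # still highly custom
    "Functions":           0.30,  # custom today; this is the opportunity
    "Functional platform": 0.80,  # available as a utility
    "Compute & network":   0.90,  # utility
}

for name, position in sorted(elements.items(), key=lambda kv: kv[1]):
    print(f"{name:22s} {position:.2f}  ({stage(position)})")
```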

You will notice, however, that if I were to put the relationships back into the diagram, the nice clean schematic nature of the value graph we started with would be lost. Given this, I thought I'd step through the five top elements and their individual value chains one at a time. I started by color coding each of the elements.

Data Distribution

Beginning with data distribution (which is where the “real time data” nature of this problem is most evident), we have the following:

Thus, data distribution adds value to both real-time data and data capture (which also adds value to real-time data on its own). These components all add value to a basic compute and network utility. (As this is real-time distribution of data, I am not claiming a value add to data storage, though you could if you think it's needed in your context.)

Real-time data is highly custom, with completely new sources (and forms) being created all the time. However, data capture is increasingly moving from an OSS/product model (see API gateways, Apache Camel, etc) to a utility model (AWS IoT and API Gateways, Microsoft Azure IoT Hub, etc.).
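To illustrate what "data capture as a utility" looks like in practice, here is a minimal sketch of handing a captured event to a managed stream rather than running your own broker. It assumes AWS credentials are configured for boto3 and that a Kinesis stream named "realtime-events" (a hypothetical name of mine) already exists; any of the utility services named above would serve the same role.

```python
# A minimal sketch of data capture against a utility: push one captured
# real-time event into a managed stream. Stream name is hypothetical.
import json
import boto3

kinesis = boto3.client("kinesis")

def capture(event: dict) -> None:
    """Hand a single real-time event off to the distribution layer."""
    kinesis.put_record(
        StreamName="realtime-events",            # hypothetical stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("source", "unknown")),
    )

capture({"source": "sensor-42", "temperature_c": 21.7})
```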

Digital Operations

This is one of the more straightforward portions of the value chain, IMHO.

Digital operations is primarily (but not exclusively) a function of visualization and alerting that adds value to basic data capture via monitoring. Alerting also adds value to the concept of policies by either raising alarms or executing actions in response to policy violations.
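To pin down what I mean by that relationship, here is a minimal sketch: a policy is just a predicate over monitored data, and alerting either raises an alarm or executes an action when it is violated. The policy names, thresholds and sample shape are all hypothetical placeholders of mine.

```python
# A minimal sketch of the policy -> alerting relationship: a custom
# policy is a predicate over a monitoring sample, and alerting runs an
# action when the policy is violated. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    name: str
    violated: Callable[[dict], bool]   # predicate over one monitoring sample
    action: Callable[[dict], None]     # what to do on violation

def raise_alarm(sample: dict) -> None:
    print(f"ALERT: policy violated: {sample}")

policies = [
    Policy("p99-latency-under-200ms",
           violated=lambda s: s.get("p99_latency_ms", 0) > 200,
           action=raise_alarm),
]

def evaluate(sample: dict) -> None:
    """Run every policy against one monitoring sample."""
    for policy in policies:
        if policy.violated(sample):
            policy.action(sample)

evaluate({"service": "ingest", "p99_latency_ms": 340})
```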

Policies are still highly custom today. There are not many “standard” policy configurations, especially for real-time stream processing. However, alerting and monitoring are firmly part of the cloud platforms themselves (though they could use improvement). Visualization is quickly moving from an OSS/product model to a built-in cloud function, but is far from being there at this time. (“Maybe 2017 will be the year of Cloud Operations Visualization?”, he says with tongue firmly in cheek.)

Fast Processing

One of the more interesting and fundamental capabilities required to do "real time business automation" is fast processing. This is where an increasing number of fundamental cloud utilities come into play.

Fast processing (I argue) relies on the ability to run functions, which build value upon a functional platform that may or may not hide the use of a container or server platform, a data storage utility, and/or a compute and network utility.

Everything is available today as a utility except the functions themselves. As we shall see, there is opportunity here, but almost all such functions are custom today. In fact, quite a few people are experimenting with how they can push the envelope of functional systems and do new things.
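To ground what I mean by a "function" here, the sketch below shows one, assuming an AWS Lambda-style functional platform and the conventional Kinesis-to-Lambda event shape. The business rule inside it is a made-up placeholder; the point is that only this small custom piece sits above a stack of utilities.

```python
# A minimal sketch of a custom "function" deployed to a functional
# platform (an AWS Lambda-style service is assumed), with the container,
# storage and compute/network layers hidden beneath it. The rule applied
# to each event is hypothetical.
import base64
import json

def handler(event, context):
    alerts = []
    for record in event.get("Records", []):
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # The custom part: a business rule applied to each real-time event.
        if payload.get("temperature_c", 0) > 30:
            alerts.append(payload)
    return {"records_seen": len(event.get("Records", [])), "alerts": len(alerts)}
```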

Applications/Services

Another heavy user of basic compute utilities is the execution of services that consume, manipulate or disseminate real-time data from the data distribution functions.

Remember, I purposely kept this simpler in this map than I would in other contexts. Applications and services add value to basic container/server, compute/network and data storage utilities.
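As a sketch of what such a service might look like at its simplest, here is one that consumes a batch of real-time events and disseminates them into a storage utility. The bucket name and the idea of an already-fetched batch are hypothetical placeholders; the shape of the dependency on utilities is the point.

```python
# A minimal sketch of a service in this sense: code that consumes events
# from the distribution layer and leans on utilities (object storage
# here) rather than owning them. The bucket name is hypothetical.
import json
import boto3

s3 = boto3.client("s3")

def archive(events: list) -> None:
    """Disseminate a batch of real-time events into a storage utility."""
    body = "\n".join(json.dumps(event) for event in events).encode("utf-8")
    s3.put_object(Bucket="realtime-archive",    # hypothetical bucket
                  Key="batches/latest.jsonl",
                  Body=body)

archive([{"source": "sensor-42", "temperature_c": 21.7}])
```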

You might argue that there is a lot of genesis and custom work in these services, but I would argue that so many services are in existence today, either within a business or on the open Internet market (e.g. Twilio), that it's probably fair to say services are moving quickly into the product phase. Again, I'm happy to hear arguments about why that might not be true.

Streaming Integration

As I noted in the last post, I always feel a little on the fence about calling this out separately from fast processing as a whole, but there are potential differences in how these capabilities evolve. One argument for this is IFTTT: they have a huge library of standard integrations, but the library of recipes is still somewhat nascent and often requires modification for each specific use case.

Nonetheless, there is much overlap between integrations and functions in general.

Streaming Protocols probably deserves to be higher on the value scale than drawn here, but as you'll see when I put it all together, I fudged a bit to make things readable. Nonetheless, the relative positions remain valid. Streaming integration adds value to streaming protocols by leveraging functions and their underlying value chain.

At this point, many more streaming protocols are becoming "standard" than functions are. IoT, news feeds, electronic financial transaction protocols and even APIs are a big part of why integration is moving fairly quickly. The rest is just a restatement of the earlier fast processing analysis.
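To show the shape of the IFTTT-style recipe I have in mind, here is a minimal sketch: a standard trigger (an event arriving over some streaming protocol) wired to a standard action, with a small piece of custom glue in between. The trigger topics, condition and action names are hypothetical stand-ins for entries in an integration library.

```python
# A minimal sketch of an IFTTT-style streaming integration "recipe":
# (trigger, custom condition, standard action). All names hypothetical.
from typing import Callable, List, Tuple

Recipe = Tuple[str, Callable[[dict], bool], Callable[[dict], None]]

def turn_on_porch_light(event: dict) -> None:
    print(f"action: porch light on (triggered by {event['source']})")

recipes: List[Recipe] = [
    # (trigger topic, the custom glue each use case tends to need, action)
    ("home/sensors/porch", lambda e: e.get("motion_detected", False), turn_on_porch_light),
]

def dispatch(topic: str, event: dict) -> None:
    """Run every recipe whose trigger matches an incoming streamed event."""
    for trigger, condition, action in recipes:
        if topic == trigger and condition(event):
            action(event)

dispatch("home/sensors/porch", {"source": "motion-sensor-1", "motion_detected": True})
```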

Putting it all together

Hopefully breaking things down for you this way gives you a decent understanding of how each major element of this user need builds value on the components that enable it. The final step for this week is putting it all together, which certainly becomes harder to read.

I left the color coding so you could get a sense of where each relationship is coming from. While this map is maybe too dense to be a starting point, as we’ll see in the next post, it gives us some idea of how we might attack this market and take advantage of the strategic opportunities before us — and avoid the pitfalls that might tempt those without a map to follow.

Until then, let me know what you think. Am I on the right track? Have I got you interested in Wardley Mapping and the information it provides? (Remember, we haven’t even gotten to the good stuff yet.) Feel free to respond below, or reach out to me on Twitter at @jamesurquhart.
