Where do transformational cost reductions come from?

Alan Mitchell
Mydex
Dec 7, 2018

If order-of-magnitude reductions in the cost of key resources transform societies and economies, where do these cost reductions come from?

The short answer is simple: by changing the way a system works instead of trying to make an existing (wasteful) system work better; by moving the baseline rather than taking the baseline as a given and then trying to make it more ‘efficient’. In the case of personal data, our current system is organisation-centric. It is built on, and organised around, organisations collecting data about people and using this data to improve the way the organisation works (including, hopefully, the services it provides).

This organisation-centric architecture:

  • Creates huge amounts of duplicated effort (because many organisations are having to do the same things to serve the same individuals)
  • Creates layer upon layer of excess complexity (especially when it comes to managing relationships with individuals or sharing data between organisations)
  • Has in-built problems with the quality and reliability of the data it collects and uses (mainly because systems for sharing data are so clunky and complex). Poor quality data generates errors which reverberate across the system creating endless knock-on costs.
  • Builds misalignment into almost every process and activity (because each separate organisation is focused solely on ‘optimising’ its own activities in isolation from all the others)
  • Produces a mountain of opportunity costs, as huge amounts of time, energy and money that could be spent doing more productive things are invested in fire-fighting the issues outlined above.

The systemic cost reductions we (Mydex CIC) are working to deliver all flow from changing the way this system works: by creating a new, neutral infrastructure layer which, via Personal Data Stores, makes individuals the point where data about them is integrated, and which uses this infrastructure to enable safe, efficient data sharing between individuals and their organisational service providers.

That’s it, in a nutshell. The rest of this blog provides more detail on the opportunity by looking at the challenges faced by many of the service ‘clusters’ we are currently working with.

The challenge of systemic waste

‘Clusters’ are groupings of different organisations who, in one way or another, are involved in helping an individual undertake a task such as ‘get out of debt’, ‘treat cancer’ or ‘reduce fuel poverty’. The organisations may be in the private sector, the public sector or the third sector. It doesn’t matter. What does matter is that within clusters, a group of separate organisations each provide an essential component of the complete service the individual needs. No single organisation, working alone, can do it all. And to provide the complete service they need to share data.

This is where the problems start.

The first source of excess cost is duplicated effort. With many services, one of citizens’ biggest complaints is the way they’re forced to provide the same information again and again and again, just so each separate organisation can get the information it needs to do its job. If there are ten different organisations in the cluster, there will be ten different forms of some kind (paper or electronic), each collecting the same core data (as well as the specialist bits of data needed for that organisation’s particular task).

Because they are separate legal entities with different policies, processes and systems, none of them are designed to ‘talk’ to each other. Sometimes, these missed connections are infuriatingly simple. How about the cluster where, when collecting information about age, one organisation had one age range (16–21), another a different age range (18–24), and yet another different again (16–24)?

Sometimes, it involves a bigger challenge, such as each of the ten organisations creating and managing its own identities, reference numbers and login systems for the same individual, each with different usernames and passwords.

But the inability to coordinate can go even deeper. Because they are working as separate organisations, to deliver the efficient, effective ‘joined-up’ service everyone wants, they need to share personal data. But some organisations are reluctant to do so. They may regard the data as ‘theirs’ and feel proprietorial about it. They may be concerned about risks: what happens to the data once it leaves systems they’ve worked so hard to make secure? In addition, to comply with data protection regulations they need to get citizens’ consent for each piece of data shared, and that’s a headache in its own right.

So, often the data doesn’t get shared. Result? Things fall between stools, coordinating and integrating activities becomes a major headache, and mistakes get made: appointments are missed, the wrong decisions are made or can’t be implemented properly because the right information isn’t available. And people (that is, both the individual needing the service and the organisations providing this service) have to spend a lot of time firefighting. It is frustrating and stressful for all concerned.

One oft-touted solution to these problems is to set up a new joint venture or consortium. In theory, this means all the data can go into one, single larger system which, it is hoped, will streamline data sharing. But sharing data between the original separate organisations and the new JV remains a headache, and the quality of its data invariably suffers. Because it’s not seen as the ‘main’ system, the JV’s data isn’t kept as up to date and accurate as the data held by the separate organisations. And the reality is there will never be a big enough system capable of doing everything that needs to be done.

Result? Even more mistakes and inefficiencies. Even though these sorts of ‘solution’ are extremely time consuming and expensive to establish and run, we have never seen one that works or survives one organisation dropping out or another joining.

A further quality related problem is that even when data is shared, the organisation receiving the data is not always confident that the data is up to date and accurate. So they invest time and effort running checks. This, in turn, means that many processes that could and should be automated remain manual and therefore slow and expensive to operate.

Widespread duplication of effort; unnecessary and costly complexity; poor quality inputs and processes that require manual intervention or else cause errors; misaligned activities that fail to ‘join up’: these sources of cost, friction and risk are endemic not only in the clusters we work with but everywhere personal data is collected and used, resulting in endless frustration, hassle and irritation for the individuals seeking the service.

It’s not the fault of the clusters themselves or the people working within them. They do their level best to make the organisation-centric system they’ve been landed with work. But the fact is, it doesn’t. And it can’t.

That’s where order-of-magnitude cost reductions come from: by shifting from an approach that doesn’t work (and creates new cost, complexity and error every day) to one that does — for both the individual and the organisation.

Let’s take a few examples.

Duplication of effort

Before Henry Ford came along with his mass production lines, motor cars were made by hand. By craftsmen. Most of these craftsmen were exquisitely skilled. But because each part was made by hand, each part was different. And because each part was different, each time they wanted to fit one part to another, they had to re-work it: duplicate the effort. Nobody saw it at the time, but under this system, most of the cost that went into making a motor car didn’t come from the cost of making or even assembling the parts. It came from the cost of re-working them so that they could be assembled together.

In personal data today, we have endless duplication of effort — endless re-work — as countless different organisations all replicate and duplicate the same basic tasks, collecting the same information many times over, each one of them minting their own identities using their own identity processes, and so on. There is a huge amount of cost here, just waiting to be stripped out of the system by a simple design change: the ‘make once use many times’ approach of making individuals the point where data about themselves is integrated, via their own Personal Data Store.
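To make that concrete, here is a minimal sketch in Python of the ‘make once use many times’ pattern. The names (PersonalDataStore, grant_access, read) are invented for illustration, not Mydex’s actual API: the point is simply that data is recorded once in the individual’s own store, and every organisation in the cluster reads that single copy under consent instead of running its own collection process.

```python
class PersonalDataStore:
    """The individual's single point of integration for their own data."""

    def __init__(self, owner):
        self.owner = owner
        self.records = {}       # e.g. {"address": "1 High Street, ...", ...}
        self.consents = {}      # organisation -> set of fields it may read

    def put(self, field, value):
        # The data is recorded once, by the individual or a trusted source.
        self.records[field] = value

    def grant_access(self, org, fields):
        # Consent is granted once per organisation, per set of fields.
        self.consents.setdefault(org, set()).update(fields)

    def read(self, org, field):
        # Every organisation in the cluster reads the same single copy:
        # no re-collection, no re-keying, no divergent duplicates.
        if field not in self.consents.get(org, set()):
            raise PermissionError(f"{org} has no consent to read '{field}'")
        return self.records[field]


# Three organisations serving the same person share one record,
# instead of running three separate collection processes.
pds = PersonalDataStore("Jane")
pds.put("address", "1 High Street, Edinburgh")
for org in ("debt_adviser", "housing_association", "energy_supplier"):
    pds.grant_access(org, {"address"})
    print(org, "sees:", pds.read(org, "address"))
```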

Unnecessary complexity

In the early 20th century, Greater London boasted 65 electricity supply utilities, 49 different types of supply systems, 10 different frequencies, 32 voltage levels for transmission and 24 for distribution. Oh! There were also 70 different methods of charging and pricing. One by-product of this complexity was that if you wanted to move home, you couldn’t take your appliances with you, because they almost certainly wouldn’t work in the new location. Appliance manufacturers had to multiply their costs many times over to create products to fit the multiple different standards. Guess what? The supply and use of electricity was so complicated and expensive that hardly anyone bothered.

Eventually everyone realised that this complexity nightmare was killing the electrification opportunity. The solution was to standardise: to all work to the same standards so that the system as a whole worked smoothly. This was the standard ‘railway gauge’ solution re-applied. In the early days of rail, when many different train companies each built tracks using different gauges, most of the time and cost of rail travel wasn’t incurred in actually doing the travelling. It was incurred by the hassle and cost of shifting from one gauge system to another.

In personal data today, costly complexity comes from two sources: unnecessary ‘added’ complexity that could simply be stripped away, and necessary complexity that can nevertheless be handled much more efficiently.

Two examples of unnecessary added complexity are ludicrous hand-cranked consent/permission processes and identity processes. Both turn every organisation and every contract into the equivalent of a hard border where manual data inputting and/or checks (and perhaps even separate negotiations and adjustments) need to be made billions of times a day. These costs could be stripped away via the introduction of standardised Safe By Default consent processes, which eliminate the need for separate scrutiny of multiple different ‘contracts’, and by ‘make once use many times’ identities built out of Verified Attributes.
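As an illustration of what a standardised consent process could look like in practice, here is a rough sketch: a single machine-readable consent record that can be checked automatically, instead of a bespoke ‘contract’ scrutinised by hand. The field names and structure are assumptions made for this example, not a published Safe By Default or Mydex schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ConsentRecord:
    subject_id: str                      # the individual granting consent
    recipient: str                       # the organisation receiving data
    shared_fields: tuple                 # exactly which attributes may be shared
    purpose: str                         # the stated purpose of the sharing
    granted_at: datetime = field(default_factory=datetime.now)
    valid_for: timedelta = timedelta(days=365)

    def allows(self, requested_field, when=None):
        """One standard, automatic check replaces separate manual scrutiny
        of each organisation's bespoke 'contract'."""
        when = when or datetime.now()
        return (requested_field in self.shared_fields
                and when < self.granted_at + self.valid_for)

consent = ConsentRecord("jane-pds-001", "energy_supplier",
                        shared_fields=("address", "date_of_birth"),
                        purpose="fuel poverty assessment")
assert consent.allows("address")            # permitted, checked automatically
assert not consent.allows("bank_account")   # anything not listed is refused
```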

What about necessary complexity? The fact that multiple organisations in different industries use different IT systems, data schemas, protocols and so on means that, no matter how important (consent-based) data sharing may be, they struggle to ‘talk’ to each other easily. A common knee-jerk reaction is to create large projects which attempt to bang many heads together to create ‘one standard to rule them all’. Like the new Joint Ventures set up to ease data sharing between multiple agencies, these standard-setting initiatives invariably fail (after consuming huge amounts of time, money and energy). Why? Because each separate organisation, having invested in systems that work for it as a separate organisation, has no pressing reason to invest in additional or different systems that focus on the needs of external third parties.

But for a new PDS-based data infrastructure that makes individuals the point where data about themselves is integrated, building ‘translation services’ between different systems and protocols is a core part of its job. And there are huge efficiencies in doing so, because once it has created a ‘translation service’ from System A to System B, this translation service can be used for all such translation needs. A Personal Data Store has to enable interoperability if it is to provide the service it was set up for. And by adopting a ‘make once use many times’ approach, it makes interoperability efficient.
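Here is a rough sketch of how such a ‘translation service’ pays for itself: each system needs only one adapter into a neutral representation, built once, after which any record can flow between any pair of systems. Ten systems then need ten adapters rather than ninety bespoke pairwise translations. The schemas and field mappings below are invented purely for illustration.

```python
# Each adapter maps one system's local field names onto a single neutral
# representation. It is written once per system, not once per pair of systems.
ADAPTERS = {
    "system_a": {"surname": "family_name", "forename": "given_name", "post_code": "postcode"},
    "system_b": {"last_name": "family_name", "first_name": "given_name", "zip": "postcode"},
}

def to_neutral(system, record):
    """Translate a system-specific record into the neutral schema."""
    mapping = ADAPTERS[system]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def from_neutral(system, record):
    """Translate a neutral record back into a system-specific schema."""
    reverse = {v: k for k, v in ADAPTERS[system].items()}
    return {reverse[k]: v for k, v in record.items() if k in reverse}

# System A's record, translated once, can be delivered to System B (and C, D ...)
record_a = {"surname": "Smith", "forename": "Jane", "post_code": "EH1 1AA"}
print(from_neutral("system_b", to_neutral("system_a", record_a)))
# {'last_name': 'Smith', 'first_name': 'Jane', 'zip': 'EH1 1AA'}
```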

In this way, a PDS-based approach to data sharing cuts both sets of complexity costs (while improving data quality), thereby benefiting all parties.

Poor quality inputs and processes

In the 1930s, when the aviation industry was just getting going, an aeroplane’s engines could operate safely for about 50 hours before needing an overhaul. Today the average time between overhauls is 50,000 hours: a thousand-fold increase. Just imagine what our aviation industry would look like if every Jumbo jet had to be taken out of service for an overhaul after every 50 hours of flying! It certainly wouldn’t exist in its current form.

With personal data, we are still in the aviation equivalent of the 1930s. Many millions of data-driven processes for things like orders, applications, enrolments and bank ‘know your customer’ checks require verification of claims, and that verification remains largely manual because the accuracy of the data has to be checked: it is not reliable.

What if the Government mandated organisations to provide the individuals they serve with secure electronic tokens verifying attributes about them, which these individuals could store safely and easily in a Personal Data Store (PDS) and then share where and when they are needed? With this ‘make once, use many times’ approach, huge amounts of waste and bureaucracy could be eliminated from the system.
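For illustration, a token of this kind might look something like the sketch below, which uses a simple HMAC signature so the example is self-contained; a real scheme would use public-key signatures and an agreed token format, and the issuer, keys and field names here are all made up.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"council-demo-signing-key"   # held by the issuing organisation

def issue_attribute(subject_id, attribute, value):
    """The issuing organisation creates the token once; the individual stores
    it in their PDS and re-uses it wherever proof is needed."""
    payload = {"subject": subject_id, "attribute": attribute, "value": value}
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_attribute(token):
    """A relying organisation checks the signature instead of re-running its
    own manual verification of the underlying claim."""
    body = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])

token = issue_attribute("jane-pds-001", "address", "1 High Street, Edinburgh")
assert verify_attribute(token)          # accepted without manual checking
token["payload"]["value"] = "forged"
assert not verify_attribute(token)      # tampering is detected
```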

The manual costs of creating an identity, including paper-based verification of claims about things like address, currently hover at around £30 per unit. If this process were automated using safe sharing of Verified Attributes, these costs would fall to around 60p: a 98% cost reduction (which helps explain why attempts over the last 10 years to make money out of ‘a market’ for identity are continuing to fail). Similar order-of-magnitude savings could be reaped from virtually all currently manual data processing and verification tasks. Yet today, we carry on down the opposite road of ‘make once, and re-make again, and again, and again …’

Unreliable data creates further knock-on costs through error. When people have to fill in forms manually, transcription errors add inaccurate data into the system. In the UK, on average, people move home once every seven years. That means that, over a year, the address accuracy of every large database will decay by around 14% (roughly one record in seven goes stale). But database managers don’t know which addresses are out of date and which are still accurate, so they make the wrong decisions and take the wrong actions (such as sending things to the wrong address).

That’s just the first round of costs created by unreliable data. The second round is the money and time that has to be invested correcting the original mistaken action/decision. And on top of that, there is the money and time that has to be invested in (probably manual) data maintenance operations that are needed to reduce these error rates.

Yet it is within our grasp to follow the example of the aviation industry, with a thousand-fold fall in such costs: by using APIs to link the data held in organisations’ databases with individuals’ personal data stores, sharing updates so that the data always remains accurate, most of these costs can be eliminated and processes can be automated. Why, oh why, isn’t this happening?
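It is worth spelling out how little machinery this needs. The sketch below assumes a simple publish/subscribe pattern; the class and method names are illustrative rather than Mydex’s API. The individual updates a field once in their personal data store, and every connected organisation’s copy is refreshed automatically, so no copy silently goes stale.

```python
class PersonalDataStoreSync:
    """Illustrative publish/subscribe sync between a PDS and organisations."""

    def __init__(self):
        self.data = {}
        self.subscribers = {}    # field -> list of callbacks (organisations' update APIs)

    def subscribe(self, field, callback):
        # An organisation registers its update endpoint for a consented field.
        self.subscribers.setdefault(field, []).append(callback)

    def update(self, field, value):
        # One change at the source...
        self.data[field] = value
        # ...pushed to every subscriber, so every copy stays accurate.
        for notify in self.subscribers.get(field, []):
            notify(field, value)


org_db = {}                      # stands in for an organisation's database
pds = PersonalDataStoreSync()
pds.subscribe("address", lambda f, v: org_db.update({f: v}))
pds.update("address", "7 New Road, Glasgow")
print(org_db)                    # {'address': '7 New Road, Glasgow'}
```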

Misalignment

Creating waste from misalignment is as easy as falling off a log. There are just so many ways of doing it. You can make the wrong thing, or you can make too much of one thing and not enough of another (misaligning supply to demand). You can make the right thing but get it to the wrong place. Or you can get it to the right place but at the wrong time (poor logistics). And so on.

Getting exactly the right things to exactly the right places at exactly the right times turns out to be quite a challenge, and it is critical to transforming endemic waste into improved productivity.

Usually, doing so requires the creation of new coordination and logistics systems. By eliminating re-work via the use of standardised parts, and by getting the right work to the right person at the right time, Henry Ford reduced the time it took to make one axle from two and a half hours to 26 minutes. For engines, it went from ten hours to four. When Ford integrated many such advances into a single system, he reduced the costs of making a motor car by more than 75% (over ten years) and turned a plaything for the super-rich into a utility for all. In doing so, he created a world-changing mass market apparently out of thin air and instigated a revolution in production that would transform societies and economies around the world.

Ford wasn’t successful because he made better cars. He was successful because he made a better way of making cars. That’s exactly the challenge we face in personal data today: the need for a better way of ‘making’ the services that use personal data as a key input.

By dispersing individuals’ data into multiple separate data silos that never connect, our current data system has deep-seated misalignment built into the very way it works. By making individuals the point of integration of their own data, a personal data store-based personal information logistics infrastructure puts improved alignment into the heart of how the system itself operates.

Reducing opportunity costs and making possibilities

According to UNICEF, women in Africa spend the equivalent of 22,800 years of labour time every day collecting water; about an hour a day for each woman (yes, it is mainly regarded as ‘women’s work’). Just imagine all the things that could be done, all the wealth that could be created, if, every year, the women of Africa could save the 8.3 million years they collectively spend fetching water. That 8.3 million years’ worth of labour is just the direct, initial saving a proper water infrastructure would deliver.

Right now, in advanced industrial countries like the UK, when it comes to personal data we are in the same position as poor rural families in Africa fetching water in buckets. Only now, with the work that Mydex CIC is doing, is the infrastructure that will enable us to access and use our own data for our own purposes being built. As it’s built, it will free up huge amounts of resources enabling us to do oh! so much more, at oh! so much lower cost and effort.

Every economic breakthrough in history has remained invisible until people found ways to overcome structural barriers, transcend trade-offs and eliminate hidden institutionalised waste. In early 20th century London very few people could see the potential for a national energy grid. In early 20th century America there were 600 car-making companies. Not one of them saw the opportunities presented by the moving assembly line. Before Google, who would have predicted that making search quick and easy would spawn a business worth over $50bn?

In each case, people unleashed far-reaching transformation by doing the things outlined above: designing out duplication of effort and applying the principle ‘make once use many times’, removing unnecessary complexity through standardisation, improving alignment and reliability, and using these advances to automate processes and eliminate opportunity costs. The more they cut the systemic costs of doing the basics, the more they made new things possible.

Empowering individuals with their own data is a good idea for many reasons: moral and ethical, to spark innovation, to achieve a fairer distribution of rewards and benefits, and so on. But there is another compelling reason. Forget ‘monetisation’ of data. The personal, social and economic opportunities created by PDS-driven systemic cost reduction are where the future lies — and with it a fairer society offering equality of access and inclusion.
