Data Trusts and other shaggy dog shortcuts

Alan Mitchell
Mydex
Feb 18, 2019 · 7 min read

Over the last year or so a buzz has grown up around the concept of ‘data trusts’. The OECD has begun work on designing and building data trusts. And in its recent paper on the economic value of data, the Treasury says the UK Government is “exploring data sharing frameworks, such as data trusts, to allow and ensure the safe, fair and equitable data sharing between organisations”. [Note: only safe, fair and equitable data sharing between organisations, not safe, fair and equitable data sharing between organisations and individuals.]

The core idea of data trusts is that they are legal structures that provide independent third-party stewardship of data. These fiduciary trusts steward, maintain and manage how data is used and shared, based on legally accountable governance structures. Many people are now promoting data trusts as the solution to current abuses and imbalances of power and reward in the collection of personal data.

At one level, data trusts are a great idea. You could argue that Mydex CIC was the first ever data trust, with its data sharing agreements designed to help individuals keep control of their data, and its operating platform which is designed to redress imbalances of power in the collection and use of personal data. But on closer inspection we think that, as proposed, data trusts represent a shaggy dog shortcut.

Shaggy dog shortcuts

A shaggy dog shortcut is where people, on encountering a problem which seems rather big and daunting, decide to go for something that seems more manageable, that at least does something to address the problem. But once they embark on this shortcut they get lost in a maze of difficulties and complexities from which they never emerge. The supposed shortcut ends up being a shaggy dog story that goes on and on without end.

At Mydex, we fear that data trusts are becoming a bandwagon that could roll on for another ten years while getting us no closer to solving the core problems of personal data. Why ten years? Two to three years of hard work trying to get a variety of different data trusts going (including the full razzmatazz of conferences, speeches, papers and so on), plus another three to four years trying to make them work operationally, plus another four years seeking (unsuccessfully) to deal with the problems they create (at which point those that actually get off the ground end up being quietly dropped).

Five acid test questions

To help us assess the promise of data trusts, let’s use five very simple acid test questions that fall out of our proposed National Strategy for Personal Data.

When assessing any proposal for reform, just ask:

  1. Does it promote phoney, restricted ‘control’ (restricting ‘control’ to what organisations do with individuals’ data) or promote real control (enabling individuals to use their own data in any way they wish)?
  2. Does it perpetuate flawed notions of consent, or move towards Safe By Default?
  3. Does it depend on the formulation and implementation of better corporate policies (placing the initiatives with these corporations) or does it create an alternative architecture and infrastructure that directly empowers individuals?
  4. Does it reinforce the tethering of data storage and use, so that individuals’ data is always tethered to the organisation that originally collected it, or does it enable data to be untethered so that it can be used again by individuals for their own purposes?
  5. Does it perpetuate the status quo where citizens are treated as data subjects, or does it put citizen agency and empowerment at the heart of what it does?

Now let’s apply these five acid test questions to data trusts.

  1. Does it promote phoney or real personal control over data? Answer: only phoney control. All data trusts promise to do is impose some restrictions on what trust-subscribing organisations can do with individuals’ data. Data trusts do nothing to actually empower individuals with their own data.
  2. Does it perpetuate consent or move towards Safe By Default? Answer: 50/50. Depending on which model of data trust you look at, some seek to improve broken consent procedures, some edge towards a Safe By Default model — but with big dangers (see below).
  3. Does it depend on the implementation of ‘better corporate policies’ or does it create an alternative infrastructure and architecture that addresses the problem at root? Answer: data trusts focus on amending the status quo and ignore the need for structural reform.
  4. Does it reinforce the tethering of individuals’ data to particular organisations, or does it enable individuals’ data to be untethered and given back to the individual? Answer: Data trusts reinforce tethering and do nothing to free individuals’ data from corporate monopoly control.
  5. Does it perpetuate a status quo where citizens are treated as data subjects, or does it put citizen agency at the heart of what it does? Answer: Data trusts perpetuate the status quo, continuing to treat citizens as data subjects while in some cases positively denying citizen agency. For example, the ODI suggests data trust trustees should “take on a legally binding duty to make decisions about the data in the best interests of the beneficiaries”. Whatever you do, don’t let individuals make their own decisions!

The dangers of data trusts

We have five main concerns about the way data trusts are being promoted as a solution to the personal data problem.

First, as above, they do nothing to build the structural, systemic solutions we need. They are just sticking plaster solutions.

Second, they could waste huge amounts of time, energy, effort and money. Every new data trust would involve a mountain of work including: establishing the trust’s data sharing rules plus associated mechanisms for audit, enforcement, complaints handling, investigation and redress; defining the remits of trustees and recruiting them; negotiating with and recruiting corporate members of the trust; establishing the actual mechanisms by which data will be shared; agreeing budgets, establishing the right business model, and securing the funds to do all of the above. (One question we never see in discussions on data trusts is ‘Who is going to pay for it all?’)

Data trusts would also impose a new burden of work on citizens, who would now be expected to investigate and understand the particular data sharing and use policies that each trust has developed — thereby replicating the problems they currently face with privacy policies and consent, only at another level.

This mountain of work would be a big ask even if it only had to be done once. But with data trusts, it would have to be repeated potentially thousands of times as different data trusts are set up to do different jobs: a data trust for cancer treatment in Manchester, a data trust for disability services in Birmingham, a data trust to access individuals’ data for medical research, a data trust for the provision of integrated financial advice, a data trust for sharing information about pensions. And so on, and so on, and so on.

Third, there is a dangerous lack of clarity around the data trust concept — with the term ‘data trust’ already becoming a loose, fashionable label used to cover many different ideas. For example, some mooted data trusts focus solely on non-personal data, while others see data trusts as the way to tackle issues relating to personal data. Some see data trusts as a sort of collective bargaining mechanism where groups of citizens band together to get a better deal for their data. Others see them as impartial governance mechanisms where the trustees oversee a system of data sharing that is designed to generate fair rewards for all beneficiaries including, as the ODI puts it, “those who are provided with access to the data (such as researchers and developers) and the people who benefit from what they create from the data”.

This leads to our fourth problem: gaming. Those proposing data trusts as a solution clearly have good intentions. But other parties are already seizing on the idea with less than good intentions. They are eyeing the data trust concept as an opportunity for a new data landgrab. “Hey! Cool! Why don’t we set up a new data trust where we say to people ‘your data is safe with us’, and where we set the rules so that we can share people’s data willy-nilly without even having to ask for consent any more! What a lot of hassle that would save!” (Which raises a further knock-on question: who is going to police all these new data trusts, and how?)

Our fifth and final concern is very simple. All the above is unnecessary. A solution to the challenge of safe, trustworthy data sharing which protects individuals and puts them in control already exists: the Mydex platform already offers all the functions promised by data trusts without any of the additional costs or the (extremely high) risks of gaming. A fully fledged personal data infrastructure would offer many such services. Personal Data Stores place the individual at the centre, able to donate or share their data with anyone pursuing purposes the individual wants to support, in ways that provide the individual with ongoing control including updates on progress and use.

Getting the basics right

The personal data ecosystem right now is awash with potential shaggy dog shortcuts, all of them boiling down to the same basic error. They seek to fix the status quo by improving the ways in which organisations collect and use individuals’ data while failing to address the core structural, architectural issue — that individuals don’t have the practical means and ability to collect and use their own data for their own purposes independently of any data controller. As long as this continues, huge amounts of time, energy and effort are going to be wasted in initiatives that end up going nowhere. The only possible result would be to delay real progress and waste millions of pounds of public, private and third sector money.
