The majority of these questions and choices are well intentioned. They have been put in place by technologists and legislators to protect users from the harm that results when our personal data is abused. This is definitely worth doing because abuse of personal data is a real danger, and is only likely to grow as computing pervades more and more of our lives.
I respect and admire all the effort that’s been put into these legislative and technology solutions to date: a great deal of work has clearly gone in, and the intention is a good one. People deserve protection.
However, one of the most visible signs of victory in this battle for greater protection of people’s data has been a steady increase in the number of personal data choices that an average person is asked to make every week. Popups now abound asking for your permission to do this and that, some of them driven by company policies, others driven by legal requirements.
I’m increasingly of the mind that presenting individuals with ever more choices about how their data can be collected and used is just fundamentally not the best solution to the very real need to protect people from data abuse. The choices that people are asked to make about their personal data are already too numerous and already demand an unrealistic knowledge of the implications of sharing different kinds of data with different companies. And that’s before everyone’s lives are filled with internet-connected forks and the like, a shift which will result in people being posed ever more unanswerable personal data questions. We’re already past the point of decision-making overload, and we’ve hardly started yet.
I think people everywhere deserve to have meaningful, comprehensible and above all easy ways to make choices about their personal data in the digital era. The question is how to square the circle: how to get both usability (fewer popup permission questions) and better protection?
Having thought about this for some time, I decided it was important to discard any solutions for empowering users that simultaneously decrease usability (sorry, personal data lockers). And just saying ‘let companies or legislators decide’ assumes that people all want the same privacy protections, when views on appropriate personal data usage vary massively.
I believe that what this cutting-edge problem needs is a solution that involves one of the oldest and best ideas in the history of human collaboration: it is time to allow people to nominate trusted representatives who can make decisions about our personal data for us, so that we can get on with our lives.
The following is my attempt to describe what this means, and why I think it could lead to both fewer popup question dialogues, and overall greater personal data protection.
Introducing Personal Data Representatives
A Personal Data Representative would be an organisation or an individual who you trust to set good privacy and data sharing default settings on your behalf. At the simplest possible level, you would nominate a Personal Data Representative to make choices for you about which apps can do what with your data, and then you’d forget about it. The setting of permissions would happen invisibly, and you’d never have to think about them unless you went in and tweaked them manually.
A Personal Data Representative could be a big internet company, it could be a church, it could be a trade union, or it could be a dedicated rights group like the EFF or the ACLU. All a personal data representative would need to function is the ability to make thoughtful, values-driven choices about personal data default settings, and the ability to log those choices on a digital system that would talk to the rest of the world through an API.
As a user, I wouldn’t want to think about my personal data representative very much. At some point, maybe when I am setting up a new phone, I would be asked to choose a personal data representative who aligns with my values. Thereafter I probably wouldn’t think about it again, or notice it doing anything in particular. This is because the main consequence from the user’s perspective would be to reduce the number of personal data questions and choices I get asked — something that nobody will miss when they’re gone. And the user would pretty much never notice the abuse of their data that was prevented. After all, nobody celebrates not being mugged.
You might be wondering if this would rob users of autonomy, but that’s really not an issue at all. If I wanted to change any personal data settings in any app, I still could — the role of these personal data representatives is simply to set defaults in apps and systems, not to rob users of the power to make conscious decisions. Right now most apps have defaults anyway; they’re just set by the people who produce the service, rather than by individuals who I trust to watch my back. The idea behind personal data representatives is to let most people enjoy the benefits of a less hassle-filled digital life, whilst increasing the average level of their personal data protection by a significant degree.
How would this work at a technical level?
This isn’t an idea that comes complete with a draft RFC: it’s an idea being presented to see if such work is even worth doing. Here’s roughly how I was thinking it would work, behind the scenes.
- A data standard is agreed so that apps and services can communicate their plans for a user’s data to a server in machine-readable form. For example ‘This app wants to send this user’s exact location data to servers controlled by Google Inc’. Such a standard could very quickly become unwieldy, attempting to model all of human life, so ensuring that the standard only covers a limited problem space is going to be very important here.
- The staff or volunteers of a Personal Data Representative would meet to discuss an initial set of default permissions they want to see implemented, such as ‘Let any app owned by Facebook access the camera’. This decision is then entered into a software system that is capable of talking to other apps using the standard described above.
- The subscriber to the personal data representative service would need software running on their device that monitors for attempts to install new apps or visit new, untrusted web applications. There is a debate to be had about where in the stack such software should live, but I’m bypassing that for now.
- The very first time a user runs a newly installed app on their phone, structured data about the permissions that the app wants to have is sent to the Personal Data Representative server, via an API.
- The server would look at the list of permissions being requested by the app, compare these against the rules that had been entered by the representative’s staff, and then reply to the user’s device with one of three messages: ‘Go ahead’, ‘Stop now’ or ‘You may go ahead running this app, but only with these particular permissions enabled’.
- The app would receive the message from the server, and either run with the personal data settings set as the server requested, or refuse to run on the grounds that it had insufficient permissions to operate properly. In that case the user would be told ‘This app cannot be installed because your Personal Data Representative believes it will not treat your personal data in a way that you would approve of’. The user would then have the option to overrule the choice — the representative is a servant, not a master, after all.
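To make the steps above a little more concrete, here is a very rough sketch of the permission-check exchange in Python. Everything in it — the field names, the rule format, the wording of the three reply messages — is illustrative only, not a proposed standard; this is the kind of logic a representative’s server might run, not a specification of it.

```python
from dataclasses import dataclass

# Step 1: an app declares its plans for the user's data in machine-readable
# form, e.g. "this app wants to send exact location data to Google's servers".
@dataclass(frozen=True)
class PermissionRequest:
    app_owner: str    # e.g. "Facebook" -- illustrative value
    data_type: str    # e.g. "camera", "exact_location"
    destination: str  # e.g. "Google"

# Step 2: the representative's staff encode their values-driven defaults as
# rules. A "*" owner means the rule applies to any app. Hypothetical format.
RULES = {
    ("Facebook", "camera"): "allow",  # 'Let any app owned by Facebook access the camera'
    ("*", "exact_location"): "deny",  # never share exact location, whoever asks
}

# Steps 4-5: the server compares the requested permissions against the rules
# and replies with one of the three messages described above.
def decide(requests: list[PermissionRequest]) -> tuple[str, list[PermissionRequest]]:
    granted = []
    for req in requests:
        verdict = (RULES.get((req.app_owner, req.data_type))
                   or RULES.get(("*", req.data_type), "allow"))
        if verdict == "allow":
            granted.append(req)
    if len(granted) == len(requests):
        return ("Go ahead", granted)
    if not granted:
        return ("Stop now", [])
    return ("Go ahead with only these permissions enabled", granted)
```

Step 6 — running with the granted subset, refusing to run, or letting the user overrule — would happen on the device itself, outside this sketch.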
Who is this for? Would anyone actually want it?
I don’t think that personal data representatives are the kind of innovation that most people really ‘want’ in the way that they want a cool new selfie app. The people who I hope might be interested in this idea are technologists, designers, privacy advocates, legislators and internet policy people. Ultimately it’s they who have driven the technology and legislative progress on personal data so far, and they who will continue to make the really key decisions in future. In this way it’s an internet infrastructure proposal, not a product proposal.
On incremental introduction
One thing that I think is a strength of this idea is that it isn’t an ‘all or nothing’ idea that requires mass adoption before it creates any meaningful value. You could build bits independently, like the standard for expressing permission defaults, and implement them in small but real apps. For example, someone I talked to when writing this piece suggested that you could use the permissions standard to build a tool to allow you to give copies of your privacy settings to your friends: ‘Hey — here’s a copy of my Facebook privacy settings. Just click if you’d like to use the settings I use.’
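The settings-sharing tool suggested above could be one such small, independent piece. A sketch of what it might look like, assuming the permissions standard serialises to something JSON-like — the service name and setting keys here are invented for illustration:

```python
import json

# One person's privacy settings, expressed in a hypothetical
# machine-readable form based on the shared permissions standard.
my_settings = {
    "service": "Facebook",
    "settings": {
        "post_visibility": "friends_only",
        "face_recognition": False,
        "location_history": False,
    },
}

def export_settings(settings: dict) -> str:
    """Produce a shareable, machine-readable copy of the settings."""
    return json.dumps(settings, indent=2)

def import_settings(payload: str) -> dict:
    """What a friend's client would do on 'click to use these settings'."""
    return json.loads(payload)
```

The point of the sketch is only that a shared, machine-readable format makes ‘here’s a copy of my privacy settings’ a one-click exchange rather than a screenshot and a manual walkthrough.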
Objections and problems
I’m not so enthralled by my own idea that I can’t see it has various weaknesses. These include:
- It might not be possible to develop a security model that prevents apps from simply lying about what they plan to do.
- How do you stop the data standard becoming flabby, over-broad and unusable? Standards that try to model the whole world tend to fail.
- How do you persuade OS, app store or device manufacturers that it is in their own interests to explore or implement something like this?
- How do the personal data representatives pay for the staff to make the decisions about so many potential apps, services and permissions?
- Shouldn’t we be getting AIs to do this, not pathetic, fleshy humans?
- Shouldn’t we just have tough laws that protect everyone’s personal data from misuse? Why appoint special data representatives when we already elect real representatives to implement policies that protect us from bad stuff?
However, even if this proposal, as conceived, has some fatal flaw, I hope it can kick off a more general debate about whether there are other ways to bring about a world that contains both fewer popup choices and higher levels of real protection.