Giving evidence to a Parliament Committee
A few weeks ago I was invited by the Scrutiny Unit of Parliament to give evidence on Part 5 of the Digital Economy Bill, the section that addresses digital government. Yesterday afternoon I gave my evidence to the Committee. You can watch the video on parliament.tv; it's at 15:50.
I wanted to publish some of the notes I took into the session, partly because it's good to be transparent, and partly because there wasn't really time to unpick the implications of some of this in the session itself.
My notes on Part 5 of the Digital Economy Bill
In my opinion, Part 5 of the bill is inadequate. There is a lot in it that should be reconsidered and improved.
- To start with, there is no statement about transparency. Citizens aren’t guaranteed the mechanisms they need to understand how their personal data is being used, when it’s used, how many times it’s accessed, by whom and what insights are generated from it. The bill misses the opportunity for Government to set best practices in transparency.
- I also expected to see sections written on how citizens will be asked for consent for their personal data — but they’re just not there. By not making it explicit that citizens should understand what is happening, “consent” will just mean more legal jargon and tick boxes. I asked the committee if any of them had actually read a terms and conditions document recently: one had tried. T’s and C’s are the default way many services ask for consent, but their impenetrability effectively stops people from understanding what they’ve actually consented to (as I’ve said before, we need new models). The committee needs to consider what it means for people’s relationship with and trust in government if they don’t know (or can’t understand) what’s happening with their data. As Jeni Tennison from the ODI said — a key part of securing trust is explaining things clearly.
- The bill talks a lot about data sharing, but really it should be about access. “Sharing” suggests duplication of data — perhaps bulk duplication, instead of accessing minimum data using APIs or canonical government registers. Jeni Tennison and the Co-op’s Mike Bracken both spoke to this: the language feels out of date, and doesn’t reflect the trend towards data minimisation that government should be pursuing.
- The safeguards around storage and processing of data are missing. For example, encryption of data at rest is not made explicit. While some of that is about process rather than legislation, addressing the security of data in some way feels like an important thing to do, because without that detail it's hard to know what the bill really means.
- A clear definition of personal data is not given. Existing definitions already fall far short in terms of what can be considered personal data. We need better definitions so that product teams understand what kinds of permissions or design patterns are appropriate for access to a particular data type.
- The bill states that data sharing may take place when the purpose is to help an individual's "wellbeing", which is such a broad term that it's difficult to think of anything that couldn't be justified under it. As a designer this really concerns me, because we should be moving towards services that are designed to collect the minimum viable data, and where access to personal data is only for provision of the service. This bill suggests services that will be built around government needs rather than user needs.
How it felt to give evidence
Despite the limited time, we actually managed to cover a lot of ground. Speakers like Dr Edgar Whitley, Renate Samson from Big Brother Watch, Chris Taggart from OpenCorporates, Jeni and Mike all voiced concerns about the unfinished quality of the bill, pointing to transparency and consent in particular as areas that need more thought.
I was also pleased we were asked to address the accountability of algorithms. As Paul Nowak from the TUC alluded to, this is an area that touches huge amounts of public life where blanket publishing only manages to obscure the important detail.
People need to understand the decisions being made about them by automated decision making processes, and we ought to have the power to reveal the biases of those algorithms. We’re just wrapping up an internal project looking at the forthcoming GDPR legislation that touches on that. We’ll be sharing more about that next week.
This is the first time I’ve been asked to appear at something like this and I’m indebted to many people for giving me their time and their opinions: Jim Killock for helping me understand the process of giving evidence, Tom Loosemore for the advice, Richard Pope and Matt Sheret for making sure my notes made sense, IF’s Marine Schepens for coming along on the day and making sure I had everything I needed, and to Sam Smith for telling me to keep it simple and say it as it is. And to the many people who reached out to congratulate me after I gave my evidence — thank you for your support, it’s deeply appreciated.