“Data privacy is a collective concern”
Cross-posted from The Privacy Collective
As part of their campaign to lift the lid on data privacy violations, The Privacy Collective is asking some of the UK’s leading experts why this issue matters.
Ania Calderon is the Executive Director of the Open Data Charter, a movement committed to achieving a world where government information is used to empower people in order to build more prosperous, equitable and just societies. Here, she talks about finding the balance between openness and privacy, why the world’s data infrastructure was unprepared for the coronavirus pandemic, and why we need new types of data institutions.
Why does online privacy matter?
First, you have to define why you value privacy. Privacy has been described as an elusive social value that varies across cultures and changes as the world evolves. I like to think of it not as a fixed concept, but rather a practice or process, which, as historian Sarah E. Igo explains, requires us to define the boundaries between our private affairs and our public selves, as a core component of building our citizenship.
So much information about us is now being tracked online. As time passes, that gives government and private bodies an ever clearer picture of our behaviour and our social and private activities. It also puts at stake the idea of having authorship over how we project our public identity. We tend to focus a lot on privacy when we talk about data, because that’s where it really hits the individual level. But it’s also important to talk about collective data rights, and the impacts that are societal.
Tell me about the work you do at the International Open Data Charter.
We’re a global organisation that works with governments and civil society organisations to study the way information is collected, used and shared, and how that is regulated. We promote policies and practices that facilitate well governed data. We think that context really matters and promote “publishing with purpose” — you don’t treat data about health the same as you do data about transportation, for example. Once that clear purpose is defined, it’s helpful to think about how information can be governed in a way that balances both the benefits and the risks.
Data exists on a spectrum. We don’t want all data to be open and freely available. But data collection is an inherently political process and it manifests how power is distributed. Behind the creation of data, there’s always someone making decisions about what data to collect, how it’s structured, where it lives, who’s being left out, etc.
By opening up data, we aim to redress that power so that access to data is handled responsibly, safeguards are put in place where needed, and it is fairly distributed. It’s about setting out a path where the benefits of data can be stewarded for a public purpose, while also making sure that you’re safeguarding privacy and other fundamental rights.
How do you balance that call for openness with individual and collective data rights?
We spend a lot of time advocating for the need to balance openness and privacy, but actually doing it is the hard part. First, you need to define the types of risks and benefits, and the communities involved around that type of data. Some data should be open by default, some kept closed and some sensibly shared. Understanding where data sits across that spectrum is important.
A trustworthy data governance model that is dynamic and able to adapt is also important. It’s about being open and transparent about how you’re making those decisions and allowing the public and others to test your assumptions and influence the outcomes. It’s important to reach out to communities and groups that may be impacted by the way that data about them is used, uncover unintended consequences or unforeseen risks that may arise, and learn about people’s demands and concerns. This type of exercise should also be periodic, rather than a one-off, box-ticking exercise. You need to be able to monitor the impact at both the individual and collective level.
And finally, there are more practical measures and tools such as conducting privacy impact assessments, and using anonymisation techniques to publish data while protecting privacy. It’s a complex policy challenge, and there is no simple, one-size-fits-all solution.
You recently said that “good data is the life and death question now”. How has the coronavirus pandemic changed this conversation about data, and what are your thoughts about how personal data is currently being used?
The pandemic has really brought to the fore concerns that transparency advocates and privacy champions have long held. We’re seeing how important it is for government officials to have timely information to make decisions that are a matter of life and death, but also for the public to know and trust that information — i.e. how are those officials making those decisions, with what information, and what are the limitations around that data?
One of the biggest concerns that we have in seeing how data is currently being used to fight the pandemic — in the UK and around the world — is that it’s so clear now that we were not adequately prepared with the infrastructure or institutions that we need to support responsible flows of data and trustworthy data use. Governments and data stewards must be able to adapt and adjust as we see things change. We can no longer rely on monolithic institutions or infrastructure.
Data has been described as “the new oil”. With such value placed on data by platforms and commercial companies, how can we succeed in putting the power back into the hands of users?
I think there was a lot of focus at first on talking about data being the new oil. But I like the phrase from Martin Tisné at Luminate — that data is the new carbon dioxide. We now know that we may be more impacted by other people’s information than by our own. Withholding my own personal details matters less when other people who do consent can then be linked with me. We are starting to see privacy through this new lens, and it will require a collective action approach.
While legislation is an important aspect of tackling this, it cannot be the sole response. Beyond that top-down approach, we also have to think about the bottom-up tactics, tools and policies. And in that regard, it’s about shifting cultures and practice. Professor Sylvie Delacroix from the University of Birmingham has called for the appointment of data trustees to address the power imbalance between the platforms gathering this data, and our ability to be able to meaningfully define how that information is used.
We need more skills and training, but we also need to develop or adapt data institutions that are able to oversee these processes, as well as accountability mechanisms to ensure there’s transparency and rights to redress. That will all help us build a more trustworthy ecosystem.
What can people do to educate themselves and protect their online data today?
Definitely, aiming to be more data-literate helps, but I don’t think that’s the whole solution. I don’t think we should place the burden on individuals to acquire the skills that they need, or to contribute the time that it takes to read these really long privacy terms and conditions documents, for example.
Inevitably that would also create a divide between the data rich and the data poor, leading to further inequalities around the world. Instead, I think we should be making platforms more accountable for how they design and govern their technologies, in a way that gives their customers the agency to decide how data about them is used.
This article was originally published by The Privacy Collective. They’re holding big technology companies accountable for the misuse of people’s data — support their claim here.