We need privacy and data laws to tackle this global pandemic

Governments are increasingly using digital technologies and big data analytics to address the Covid-19 pandemic. At this stage of the pandemic, these technologies may not deliver on their promise, and they risk further entrenching surveillance in our societies and eroding the rule of law. It doesn’t need to be that way.

Beatriz Botero Arcila
Berkman Klein Center Collection
5 min read · Mar 18, 2020


Part of the world as we know it came to a halt last week. But digital technologies have played a crucial role in keeping many of us afloat. If you are lucky, like me, you’re in some form of self-isolation, but you’ve been able to keep up with much of your social and work or study interactions online. However, your hours may have been cut — or your business shut down — and you may be relying on gig-work or delivery services to continue working or keep part of your business going. Many of us are increasingly using iMessage, FaceTime or Whatsapp to check on our loved ones, far away.

Photo by @etiennegirardet

Digital technologies are not only keeping us connected; they are also being used to tackle the crisis. Apps and big data analytics are being used to track citizens’ movements and to identify and prevent transmission between known cases and the people they interact with — generally suppressing silent transmission by reducing contact between individuals (self-isolation, social distancing, heightened hygiene). China, for example, developed and deployed apps for people to log their health status, follow the spread of the disease and obtain circulation authorizations. Israel’s prime minister recently used his emergency powers to tap into cellphone data — previously a counter-terrorism measure — to follow the virus. South Korea developed an app to supervise people under quarantine. Taiwan, early on, integrated its national health insurance database with its immigration and customs databases to identify travelers who could be bringing the virus into the country.

The US may soon adopt some form of these technologies. On Tuesday, various news outlets reported that the US government had started talks with Google, Facebook and other tech companies “(…) about how they can use location data gleaned from Americans’ phones to combat the novel coronavirus, including tracking whether people are keeping one another at safe distances to stem the outbreak.” Firms like Palantir and Clearview AI — which have recently been under the public eye due to serious concerns about their privacy-invasive practices — also appear to be participating in these conversations with the government. Volunteers, meanwhile, are building “Coronapps” that would allow users to see if they have crossed paths with someone who may carry the virus, encouraging self-isolation and self-monitoring.* This kind of initiative was also suggested in an open letter signed by medical professionals, epidemiologists and technology leaders.

Scholars and activists working on privacy have raised concerns about these initiatives: they may reveal personal details of a patient’s life. If not widely used, they may create a false sense of safety, and if widely adopted they may create a collective panic that could bring its own harmful effects (though they may also create a healthy sense of awareness). There are also concerns about this data being fed back to governments and being used, later or in the meantime, for other, non-pandemic-related purposes.

In times of crisis, it is generally the case — and accepted — that civil and individual interests and rights are limited to accommodate measures intended to advance public interest goals. Measures like curfews, mandatory closing of businesses and travel restrictions fall within these lines. Limiting individual privacy rights could as well. These limitations, however, should be evaluated carefully, taking into special account their potential effectiveness, their cost, whether similar results could be achieved with less rights-invasive measures, and, I would argue, the effects they will leave behind when this crisis is over. Broad and vague surveillance authority would seriously compromise privacy rights and, in many constitutional democracies, the rule of law.

This kind of response may be coming too late and may not deliver on its promises right now. CIGI reported earlier this month that “The largest difference between an epidemic and a pandemic is the way we respond. Epidemic response focuses on isolation, surveillance and control — all in an effort to use social controls to contain and, hopefully, eliminate the pathogen. Pandemic response lets go of social controls, acknowledging their failure, and invests in public health infrastructure to begin systemic response.” A week ago, however, the World Health Organization declared this a global pandemic. Targeted measures — the ones enabled by surveillance apps — may not be what we need right now; we may simply need to impose some form of mandatory isolation, as various European countries and many US cities are doing. Additionally, technological solutions that are rolled out are hard to roll back: much like what happened after 9/11 and the war on terror, once we’ve managed to deal with this pandemic, we will be scared of the next one and the controls will stay. There’s a chance we’ll leave the apparatus in place to prevent the next one, and then we’ll get used to it. And, of course, a number of new companies will have made this part of their business model.

This is not to say governments should not use personal information to tackle the emergency, or that they should not collaborate with the private sector at all. We are most likely going to need technology to get us out of this one. Indeed, countries in Europe have also taken data-related measures to address the crisis. Germany, for example, inserted wording into its GDPR enabling legislation that specifically allows for the processing of personal data in the event of an epidemic, and Italy passed emergency legislation requiring anyone who has recently stayed in an at-risk area to notify health authorities, either directly or through their doctor.

My point is, rather, that if governments decide they do want to adopt personal-data-enabled solutions — for example, to take a targeted approach to the likely subsequent waves of the disease — these solutions need to be implemented carefully, hand in hand with strict rules about how the information they collect can be combined with other data, and limits on how, by whom and for how long it can be used. For example, policies that further limit personal privacy should restrict the use of the collected data exclusively to addressing this pandemic and to meeting objectives that would be very hard to achieve by any other means. Consent should not be the main privacy-protecting mechanism, and the limits on how the data can be used should not be ‘waivable’.

There is a good chance this will be a long, bumpy ride. Yet now is the time to think about what we might want to change for when we go back, slowly, to our daily lives, and of course, what the role of technology in that future should be. It doesn’t need to be an enhanced surveillance future. Maybe, as we see pollution levels drop, we’ll learn that we can travel less and use video conferences more. Maybe some cultural materials or textbooks could be free and open for all, always. Maybe, hopefully, we’ll finally come up with better personal data protection laws.

*I attended one of these meetings, and discussed and contributed to a document outlining some of the privacy risks these apps pose and some ways to minimize them.


Fellow at the Berkman Klein Center for Internet and Society at Harvard University and a doctoral candidate at Harvard Law School.