Can technology still save us in fighting the pandemic? Could it ever?
COVID-19 case counts in the United States are setting new records by the day. People in Black, Brown, and Indigenous communities across the country are still dying at disproportionate rates, extending a long history of structural bias, exclusion, and exploitation in medical care and treatment. The federal government stands unprepared and unwilling to respond in the face of record daily caseloads, and contact tracing only crawls forward months after the pandemic began. As the numbers keep climbing and a grim winter encroaches, we are left despondent, wondering when it all might end.
When the pandemic began, technology seemed to present a hopeful solution, with private companies, state and local governments, and epidemiologists rushing to harness the powers of big data and technology. In the months that followed, shelter-in-place orders were implemented, masks were donned (and politicized), and case and death counts kept rising. In response, technological interventions have proliferated, from virus-tracking apps to the repurposing of mobile location data to assess compliance with social distancing measures. These technologies held promise to better understand, track, and contain the spread of the virus. But did any of them work? For what purpose and for whom? And what lessons might we learn from them now?
“Technologies of Pandemic Control: Privacy and Ethics for COVID-19 Surveillance,” the latest report published by the CITRIS Policy Lab, sets out to answer these questions through an analysis of four technologies aimed at mitigating COVID-19 in the U.S.: exposure notification and digital proximity tracing tools intended to support manual contact tracing efforts, aggregated location data, symptom-tracking applications, and immunity passports. The report provides a snapshot of the robust debate around these four technologies conducted from June through August 2020 and investigates how these technologies work, the privacy and ethical questions they raise, the legislation governing their use, and their potential consequences. It asks: What precedent does the mass-scale use of technology for pandemic mitigation set? And what ethical questions does it raise for a post-pandemic future?
While the use of technology for pandemic response has the potential to make an impact, that impact is unlikely to come in the ways initially envisioned. The report argues that efforts should be shifted away from interventions that lack sufficient evidence of effectiveness, such as exposure notification tools, and toward addressing existing bottlenecks in the public health and logistics infrastructure, as identified by healthcare workers and public health officials at the forefront of response efforts.
Such technological interventions must function as components of a larger public health response, one that emerges from partnerships between technologists and public health practitioners committed to removing barriers to equitable testing, treatment, and comprehensive care. Resources should be redirected and reinvested into addressing these bottlenecks, supporting underfunded and under-resourced health facilities, and providing the financial and social support individuals need to self-isolate.
While much of the debate centers on the protection of user privacy, a crucial requirement for any technical intervention, the report also asks what a focus on individual privacy potentially obscures. COVID-19 has put into stark relief the structural racism that has long defined access to resources, adequate healthcare, testing, and treatment in the U.S. All four of the technologies featured in the report concentrate the potential risk of harm in low-income groups and communities of color, owing to the interventions’ reliance on smartphones, the potentially unrepresentative data they collect, and the likelihood of mission creep, increased surveillance, and appropriation by law enforcement.
While technology might have an important role to play in facilitating a large-scale public health response, this report indicates that it can only do so if attuned to, and designed to actively counteract, the clear racialized and classed disparities in pandemic impact that have continued unabated. We are invited to ask: What can we learn from the initial rush to technology, its subsequent tapering, and the existing hierarchies it replicates? How might technology be put to its best use, surveilling the disease rather than the population? Might there be hope yet?