When Tech-for-Good Backfires: Learning from Mistakes to Advise Covid-19 Innovation

A Case Study Analysis of Facebook’s Internet.org and the Democratic Party’s IowaReporterApp Initiatives

Sabina Beleuz
Legal Design and Innovation
29 min read · Jul 3, 2020

--

“Crisis begets innovation” is an often-uttered idiom of optimism in times of crisis. Whether or not the causal link rings true, the data is clear: great technological leaps have emerged from periods of economic and social downfall. The stock market crash of 2008, for example, gave way to a “banner year for new business start-ups in 2009.”[1] Looking further back, the Depression of 1873 gave way to decades of immense innovation, from the refinement and adoption of the lightbulb and electric power to the growth of urban railway transport systems.[2]

Medical crises such as the ongoing coronavirus pandemic may lead to similar surges in research into vaccines, treatments, and technology-enabled discovery using tools such as AI. We can already visualize this impact using published and preprint research rates as a proxy for innovation progress. The current pace of research output is unprecedented: literature on COVID-19 published since January 2020 has surpassed 23,000 papers and is doubling every 20 days.[3]

“Time trend on the accumulated number of records overall, in journals and in repositories,” from Torres-Salinas, Robinson-Garcia, Castillo-Valdivieso (2020)
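As a back-of-the-envelope illustration of what that growth rate implies (my own arithmetic, not a figure from the cited study), a fixed doubling time yields exponential growth of the form N(t) = N0 * 2^(t/20):

```python
# Back-of-the-envelope arithmetic (illustrative, not from the cited study):
# a corpus that doubles every 20 days grows as N(t) = N0 * 2^(t / 20).
def projected_papers(n0: float, days: float, doubling_time: float = 20.0) -> float:
    return n0 * 2 ** (days / doubling_time)

# Starting from ~23,000 papers, 60 more days of the same trend would imply
# roughly 184,000 papers (an eightfold increase).
print(round(projected_papers(23_000, 60)))
```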

Notably, this innovation push is being led not only by scholars but also by private actors. Tech giants, in particular, are getting heavily involved. We have seen progress both in facilitating data collection for medical research, such as through Verily’s Project Baseline case-mapping and testing initiative, and in supplementing that data collection with contact-tracing technologies, such as Apple and Google’s joint contact-tracing API. However, despite a landscape primed for innovation in the public interest and a surge of technological intervention during the crisis, we have little sense of whether this intervention will truly aid the crisis response, or whether it will have net positive ethical consequences for society.

Public health innovation, in particular, depends on intimate ties to sensitive user health data, geolocation information, and more that, if misused, put the privacy and security of immense populations at risk.

To prepare for the rise of these innovations and anticipate their worst-case impacts, should they have negative consequences for society, one must ask: when does technological innovation developed in the so-called public interest cause more harm than good, and what best practices distinguish the bad apples from their positive, lifesaving counterparts? To explore this question, I investigate two roles that technological intervention has played in the so-called public interest: providing access to information in a humanitarian context, through Facebook’s Internet.org, and facilitating data collection in an electoral context, through the IowaReporterApp in the 2020 Iowa Democratic Caucuses. In each case, I delve into the impacts these technologies had on the advancement of public welfare and investigate why such innovation, despite being leveraged in the name of the public interest and human rights, can utterly backfire. We can use these findings to generate questions that should be considered by developers of public interest technology, such as those creating contact-tracing efforts, government benefits-provision tools, and public health applications in the wake of COVID-19.

I. Private Technologist Intervention for Communication and Information Access

Internet.org: Facebook’s Attempt at A Globally-Accessible Internet

Since 2013, Facebook has attempted to connect people to the Internet globally through its public-interest initiative, Internet.org. The project involved both providing “stripped-down web services (including Facebook) available for free with an app,”[4] called Facebook Free Basics, to 65 developing nations, and working on innovative infrastructure-based solutions, such as the use of drones to provide connectivity to regions without access.

A billboard advertising Facebook’s Free Basics; sourced from Computer World

Internet.org: Practical and Ethical Concerns

While Internet.org may appear to be a valiant effort at providing Internet access to citizens of developing nations, one that could facilitate communication, organization, and education, there were inherent problems with its implementation. European Digital Rights (EDRi) and 64 other civil society organizations focused on Internet accessibility wrote an open letter voicing their concerns, arguing that Internet.org had several implications that could hinder human rights in the nations where it launched.

Snippet of the joint open letter detailing concerns about Internet.org; sourced from AccessNow.org.

Firstly, their concerns revolved around the fact that Internet.org, in both its marketing and its naming, misrepresented and overstated its offerings: users would not be able to access the whole Internet but rather a selected portion of it, curated by Facebook and local internet service providers.

The data suggesting this is a problem is clear: in many countries where Facebook Free Basics launched, users did not seem to realize that a broader Internet beyond Facebook existed. In Nigeria, India, Indonesia, and Brazil, over 50% of survey respondents agreed with the statement that “Facebook is the Internet.”[5] Based on impacts such as these, critics saw Internet.org as a “play to indoctrinate the developing world to a Facebook-controlled Internet.”[6]

This was coupled with a second set of concerns: that Facebook was creating a controlled “walled garden” of information, infringing upon net neutrality to create a two-tiered Internet.[7] In developing Free Basics, Facebook had corralled a set of applications meeting an approved list of technical requirements, many of them under the Facebook corporate umbrella, and packaged it as the de facto available Internet.[8] Zero-rating these services, that is, providing them freely without access or data charges, made Free Basics susceptible to discrimination by service providers, at risk of government control, and subject to weaker privacy protection than traditional internet services. Moreover, the lack of a clear governance structure dictating Internet.org’s relationship with regional Internet Service Providers (ISPs) meant that net neutrality was put at risk and that ISPs held the power to slow access speeds on a whim. Access to these free services, while seemingly beneficial, could foreseeably be throttled, and user data could be collected with fewer limitations given the limited security structures in place.

Demonstrators from Free Software Movement Karnataka protested Facebook’s Free Basics in January 2016; Sourced from Manjunath Kiran/AFP/Getty Images.

Lastly, criticism surrounded the fact that Facebook was co-opting a seemingly humanitarian mission to take advantage of a new potential user base in developing nations. Via Internet.org, Facebook could now funnel new users onto its services and profit off of collecting their data without properly informing vulnerable users regarding privacy concerns.

While not developed as a direct response to crisis, Facebook’s Free Basics allows access to digital communication applications, such as Facebook Messenger, that could theoretically be leveraged in times of crisis. Internet.org also teaches us about the shortcomings of public-interest technology projects when spearheaded by private actors. Although the project could be construed as a small step toward improved global connectivity, inherent issues in its approach reduced its effectiveness by undermining the overall equity of the solution. By focusing on providing access to its own services rather than supporting the underlying infrastructure for equal, non-tiered access to the internet as a whole, Facebook positioned Free Basics as a tool of what researchers call “digital colonialism.” While the application may provide free services, those services come in “exchange [for] extracting personal data, which are sold to advertisers while furthering tech companies’ authority over the production of knowledge…and capacity to influence behavior.”[9] This raises broader concerns about the role of public-interest technology built by foreign rather than domestic private technologists; we may ask whether parachuting into communities to build technological solutions is ethically fraught or simply practically ineffective.

Internet.org: Questions for Determining Best Practices

The Internet.org case raises questions that affect how technologists should think about public-interest initiatives in the realm of information access. The first set of questions revolves around transparency: how can public-interest technologists inform users about their limited offerings and ensure that users understand how their data is collected and used? Part of the solution to protecting user data could be technical: the implementation of Internet.org criticized by EDRi prohibited the use of Secure Socket Layer (SSL) and HTTPS encryption.[10] By allowing services to leverage these existing privacy protections, users could be assured that their data is protected in transit. Moreover, a broader shift in initiative messaging would be required to ensure that users are informed about what the platform truly provides (a curated selection of services rather than broad access to the Internet) and about how their data could be collected and used by services such as Facebook.
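To make the technical point concrete, here is a minimal Python sketch (my own illustration, using the third-party requests library and a placeholder URL, not any actual Free Basics endpoint) of the client-side guarantees that permitting SSL/HTTPS would restore: refusing plaintext connections and verifying server certificates.

```python
# Illustrative sketch: what a client can guarantee once HTTPS is permitted.
# Assumes the third-party `requests` library; the URL is a placeholder.
import requests

def fetch_securely(url: str) -> bytes:
    if not url.startswith("https://"):
        # Plaintext HTTP exposes user data to any intermediary on the path.
        raise ValueError("refusing plaintext HTTP connection")
    # requests verifies the server's TLS certificate by default (verify=True),
    # so an intermediary presenting an invalid certificate raises an SSLError.
    response = requests.get(url, timeout=10, verify=True)
    response.raise_for_status()
    return response.content

page = fetch_securely("https://example.org/")
```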

Secondly, we must consider: how can technologists account for cultural and regional differences that may impact project success? To avoid misinforming users about initiatives such as Internet.org, messaging should be localized and written by individuals immersed both culturally and linguistically in the respective launch region. Ideally, this messaging is combined with a concerted effort by on-the-ground social workers and community leaders to educate potential users not only on existing offerings but also on the risks associated with their use. While this is a multi-pronged endeavor, integrating community discourse into the launch plan of a technological intervention is mutually beneficial: more communication about services means not only more users on the platform but also more genuine understanding among those users of how they are interacting with the technology they choose to use.

Moreover, consideration of cultural norms should be integrated into development plans in order to foresee their impact upon launch. While we lack demographic data on the actual impact of Facebook’s Free Basics, there are clear instances where cultural norms affected who was included in and excluded from technological services; for example, because men enjoyed freer physical mobility unconstrained by gender-based restrictions, new WiFi hotspots in rural areas of Rajasthan, India, were dominated by male users, while female users relied on mobile data services.[11] Usage-based metrics may not reveal such differences, so cultural research in tandem with on-the-ground user testing is important to understand the realities of technological intervention in practice.

Thirdly, Internet.org raises the question: which form of intervention is more effective, top-down intervention or bottom-up support? In this case, I argue that Internet.org pursued the former, which may be less beneficial to users; Facebook parachuted into developing countries and provided free access to a select set of services. The latter, bottom-up support, could involve building the foundations for equitable internet access without any curation of accessible services. In practice, this could mean supporting technology like Alphabet’s Loon balloon initiative to bring WiFi capabilities to previously uncovered regions. To then provide Internet access in a capacity analogous to Internet.org’s mission, one could fairly monetize these offerings to provide access to the entire Internet without zero-rating or curating certain services from the top down. This is the method Mozilla used when partnering with Grameenphone in Bangladesh: users could download sponsored apps or watch advertisements to gain data coverage and access the unrestricted Internet at their own discretion, and partnerships were in place to offer free Internet access for a limited period when purchasing a new low-cost phone plan.[12] While there are qualms with this approach regarding socioeconomically driven vulnerability to advertising, it avoids the discrimination and net neutrality concerns posed by zero-rating and demonstrates that top-down curation need not be the only option. Providing support that lets individuals gain access to technology and decide themselves how to use it may be the more equitable and less ethically fraught solution.

Lastly, this case may raise concerns about the role of profit-making in the creation of public interest technology. Should public-interest initiatives be driven by profit-motivated companies, especially when those companies stand to earn from them? Many concerns around Facebook’s Internet.org revolved around the fact that, as a first mover in the many developing countries where it launched, Facebook would gain exclusive access to an immense number of individuals who could access only Facebook’s applications, along with a few other developers’. More users for Facebook means more data collected, and more profit from audience insights offered to advertisers.

This sparks a broader ethical debate about the role of profit-driven actors engaged in public service. It may be ethically ideal to posit that nonprofit organizations ought to lead the development of public interest technology, as in the Wikimedia Foundation’s development of Wikipedia Zero to offer zero-rated Wikipedia access. However, this position severely limits the resources and innovation available to advance public interest technology, and may choke the scale and long-term sustainability of the projects that can be launched. I argue that private technologists may still have a role to play in developing public interest technology, and that profit can be an incentive to both encourage and sustain that development. However, an ethically minded approach must be used to ensure that end users are supported rather than exploited by the final product, and that advancing the public interest remains a core mission of the technology in the first place.

II. Technology Partnerships for Data Collection in Elections

IowaReporterApp: Vote-Tallying for 2020 Iowa Democratic Caucuses

One example of electoral technology that suffered several vulnerabilities and put the credibility of the democratic process at risk is the 2020 Iowa Democratic Caucus smartphone application, known as the IowaReporterApp. Developed by a subcontracted private technology company called Shadow Inc. (now BlueLink), the IowaReporterApp was meant to tally votes for candidates and report them to Iowa Democratic Party headquarters in Des Moines.

Ideally, the app would have simplified the vote-tallying process, speeding up reporting and automatically calculating the number of delegates awarded to each presidential candidate using a backend formula, thereby hastening public reporting of results. In practice, however, the application was beset by technical and organizational difficulties.
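To make the idea of a “backend formula” concrete, below is a deliberately simplified Python sketch of proportional delegate allocation with a 15% viability threshold. It illustrates the general technique only; it is not Shadow’s actual code, and the real Iowa caucus rules (realignment rounds, tie-breaking, state delegate equivalents) are considerably more involved.

```python
# Simplified, illustrative delegate allocation: proportional shares among
# candidates clearing a 15% viability threshold, with largest-remainder
# rounding. Not Shadow's actual formula or the full Iowa caucus rules.
from math import floor

def allocate_delegates(votes: dict[str, int], total_delegates: int) -> dict[str, int]:
    turnout = sum(votes.values())
    viable = {c: v for c, v in votes.items() if v / turnout >= 0.15}
    viable_total = sum(viable.values())
    # Each viable candidate's exact proportional share, rounded down first...
    shares = {c: v * total_delegates / viable_total for c, v in viable.items()}
    awarded = {c: floor(s) for c, s in shares.items()}
    # ...then leftover delegates go to the largest fractional remainders.
    leftover = total_delegates - sum(awarded.values())
    for c in sorted(shares, key=lambda c: shares[c] - awarded[c], reverse=True)[:leftover]:
        awarded[c] += 1
    return awarded

print(allocate_delegates({"A": 120, "B": 90, "C": 25}, total_delegates=7))
# {'A': 4, 'B': 3}; candidate C falls below the 15% viability threshold
```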

Volunteer staffers who helped facilitate caucusing at sites ran into problems using the app from the very beginning. Because the app was not available on the Android or Apple app stores and had to be “side-loaded” via a third-party platform, staffers had to navigate downloading it and logging in with a security code that often didn’t work.[13] This raised initial concerns from critics, who suggested that the app’s absence from formal app stores indicated it had not been finished early enough to receive app store approval.

A screenshot of the IowaReporter app’s login error message; sourced from Vice.

Other staffers, finding the process simply “too cumbersome,” preferred instead to phone their precincts’ results in to caucus hotlines. However, caucus chairs often “found it took hours for any of the dozens of people at party headquarters in Des Moines to pick up the phone to receive the results.”[14] Problematically, this was the same hotline clogged by caucus-site staff who could not use the app due to technical difficulties. To add insult to injury, even where staffers were willing and able to use the app to collect voter data, the app reported only partial data to the Democratic Party due to a “coding issue in the reporting system,” causing delayed and potentially inaccurate results.[15]

Luckily, a paper trail remained, as all caucus votes were also recorded on paper. However, due to the mishaps in recording and transmitting caucus information, Democratic Party Chair Troy Price announced that the Party would have to “manually…[verify] all precinct results.”[16] Hence, the application failed both to improve electoral integrity and to make the tallying process more efficient, since the traditional paper and call-in methods were ultimately used.

IowaReporterApp: Practical and Ethical Concerns

From a privacy and security standpoint, many concerns about this type of app circulated well before the 2020 caucuses. Not only was the cybersecurity of such a sensitive application in question, but a lack of transparency surrounding development and testing procedures fueled concerns about its technical integrity.

During development, Chair Troy Price and the rest of the Iowa Democratic Party refused to reveal which private company was being subcontracted to create the application. Price also declined to state whether the application was being tested and verified for vulnerabilities by a third party, stating, “We want to make sure we are not relaying information that could be used against us,” for fear that malicious parties would jeopardize the system’s security if more details were revealed.[17] However, this purposeful opacity left developers unaccountable to criticism from cybersecurity experts and the broader public alike. University of Iowa computer science professor Doug Jones argued that:

“Drawing the blinds on the process leaves us, in the public, in a position where we can’t even assess the competence of the people doing something on our behalf.”[18] From the very beginning, then, the public could not weigh in on the development of a tool used to facilitate the democratic process.

This raised the likelihood of issues surfacing upon implementation and forfeited the accountability and public credibility that an open feedback period could have built. Moreover, many critics suggest that the IowaReporterApp was rushed to completion in just two months and severely under-tested for vulnerabilities; Christopher C. Krebs, director of the Department of Homeland Security’s cybersecurity agency, stated that “the mobile app had not been vetted or evaluated by the agency,” vetting that many experts would likely have expected had they known how limited pre-launch testing was.[19]

Similar issues have occurred with secretly developed voting systems in the past. For example, ORCA, Mitt Romney’s “online voter-turnout operation,” which also ran through a smartphone application, had similar issues with field workers unable to use their security PINs to log on.[20] As with the IowaReporterApp, had development been transparent and external parties involved in vulnerability testing, these bugs might have been found before launch. In this way, the security-through-obscurity fallacy that Shadow succumbed to contributed to its application’s vulnerabilities rather than protecting against them.

The secrecy around the development of the IowaReporterApp raises additional ethical concerns about the integrity and fairness of the electoral process, given the lack of knowledge about who was truly funding and developing the technology and whether political ties could compromise the application’s credibility as impartial. For starters, it was revealed that the development firm Shadow “was initially a tech firm named Groundbase, founded by Hillary Clinton campaign veterans.”[21] Prior to this project, Shadow had received funding for various projects from both Joe Biden’s[22] and Pete Buttigieg’s[23] campaigns. Upon investigation, it was also found that Shadow had close ties and a funding relationship with Acronym, a progressive nonprofit umbrella organization housing a PAC, several digital media companies, and a consulting strategy firm, among others.[24]

A screenshot of Shadow Inc.’s (now renamed to BlueLink) former website, which states that team members formerly worked on technology development for other political campaigns. Sourced from VentureBeat
The super PAC Pacronym, affiliated with Acronym, is required to report its funders to the Federal Election Commission. As a dark money 501(c)(4) nonprofit, Acronym, which helped fund Shadow — the IowaReporterApp’s developer — is not required to reveal funding and donation sources.

Finally, since Acronym’s legal status as a 501(c)(4) means it need not disclose its donors, it remains impossible to trace the roots of Shadow’s funding exactly.[25] In this capacity, as journalists Markay and Stein state, “Acronym takes on roles that appear to be in conflict: not just political vendor and vote tabulator, but also ostensibly-independent media mogul and Democratic activist.”[26] These political affiliations made it all the more important for the public to know that Shadow was developing this technology, and all the more crucial that funding sources be disclosed to preserve accountability in the political process despite potential biases.

Another factor that may have exacerbated the flaws in Shadow’s application was an apparent failure to learn from the successes and failures of similar past projects. A different mobile app for precinct chairs was used in the 2016 caucuses, developed by a company called InterKnowlogy in partnership with Microsoft.[27] However, unlike the year of lead time InterKnowlogy was given, including three months spent exclusively on product design, Shadow produced its app in only two months, severely restricting the time available for testing. Moreover, unlike InterKnowlogy’s protocol, which relied on automated telephone hotlines to record votes in case of an app malfunction, Shadow relied entirely on human operators. This left more room for human error and longer waits to reach a phone-line operator, both of which could have been avoided by analyzing and adopting the efficiencies of InterKnowlogy’s system.[28] Rodney Guzman, the CEO of InterKnowlogy, suggested that a “large percentage” of tallied votes were reported through these automated phone lines, which should have warned technologists pursuing similar projects that an optimized phone-in system remained vital despite the new technology.[29]

Beyond these technical concerns, there were many organizational issues with how the app was implemented. In particular, there was a lack of consideration for the prior knowledge of end users: the volunteers at caucus sites who were supposed to download, log in to, and record votes on the app. There were apparently “widespread reports that the precinct chairs weren’t adequately prepared or instructed on how to use the app.”[30] This posed high barriers to entry for volunteer precinct staffers who were less technologically comfortable and unfamiliar with processes such as side-loading applications; many precinct chairs gave up on or struggled with the application’s UX and were left with no sufficiently quick source of technical support to turn to.

IowaReporterApp: Questions for Determining Best Practices

The first question this raises is foundational to determining whether public interest technology should be built at all: is the proposed technological solution better, both ethically and practically, than the status quo alternative? The Atlantic argues that in the case of the Iowa caucuses, “There never should have been an app,” and that this technology could have been replaced by a simple, nontechnical adversarial confirmation system in which both campaign representatives and precinct officials tally their own results to confirm integrity.[31] The fact that precinct chairs were unwilling to adopt the app, and that it failed to improve efficiency or accountability when they did, might suggest that this technological solution did more harm than good, or at the very least that it was not the right moment to implement such vote-tallying technology without further prototyping. Despite the hype surrounding technological solutions, it is crucial to ask whether they are worthwhile in the first place; Barbara Simons, board chair of the electoral integrity nonprofit Verified Voting, argues that even technology-embracing experts oppose electoral technology since:

“The problem with cybersecurity is that you have to protect against everything, but your opponent only has to find one vulnerability to lead to a failure.” [32]

Relatedly, we must ask: are there instances where information is so sensitive and high-stakes that low-tech solutions may be more accountable than their digitized counterparts? There are examples of this ringing true beyond American borders.

A Biometric Voting Machine in use at a polling station in Ghana’s 2012 elections; Credit: Gabriela Barnuevo, sourced from Techpresident.com.

Digitized biometric identification and registration systems, such as the one implemented in Ghana in 2012, brought technical difficulties when they failed to identify thumbprints properly and few attendants were available in person to assist voters.[33] This exacerbated credibility concerns about mistallied votes in those elections, which might have been avoided by keeping a more trusted low-tech solution as the status quo, or at least as a backup.[34] Researchers also note that the increased costs of electoral technology deepen reliance on currently ruling regimes, or on foreign entities, to support elections, raising the risk of profiteering. For example, to support Kenya’s high-tech elections in 2013, the “Canadian government offered to help secure the loans needed to pay for the introduction of digital equipment…but only if it was purchased from an approved company under Canadian supervision.”[35] Increased dependence on, and potential interference from, other regimes can undermine electoral credibility, and this arrangement was accordingly considered “highly controversial.”[36] Evidently, the financial implications of introducing a new technological solution in the public interest must be weighed before deciding to do so.

Technologists must do the heavy lifting — the analysis of the pros and cons of intervention, the interviewing of key stakeholders, and the understanding of what it would take to truly make the tech-forward solution appropriate — before deciding to intervene.

Finally, Shadow’s ties to political campaigns and to Acronym, which may not be an entirely impartial actor given its position as a purveyor of political activism and media content, raise the question: are there instances when, because of affiliations that might appear to form conflicts of interest, technologists should abstain from developing public-interest technology? This circles back to Facebook’s role as a party profiting from its Internet.org initiative. The critical need for impartiality to ensure credibility in the electoral process, however, seems to create a situation where the answer may well be yes. At a bare minimum, technologists engaging in democratic processes such as elections ought to transparently disclose funding sources and any political or corporate affiliations, so that voters understand who is collecting and processing their data. One could argue that technologists in these situations need to go a step further and incorporate citizen voices in their development. The Electoral Knowledge Network echoes this, stating that:

“In cases where technology is potentially disputed, for example when it seems very costly or is not welcome by all stakeholders, information campaigns need to go one step further…to remind stakeholders why the technology was chosen, which trade-offs and options were considered, and how expected improvements to the electoral process outweigh the potential downsides. Such a campaign would not only be conducted around the election itself but begin in the early stages of considering and selecting a technological intervention, when relevant stakeholders should be given an opportunity to have their say.” [37]

In practice, this could look like the Democratic Party of Iowa disclosing its intent to partner with Shadow early in the process and holding a public comment period to gather feedback. It could also entail Shadow disclosing all funding sources and any political affiliations of its funders on its website, and allowing third-party cybersecurity organizations to test its tool for vulnerabilities. Building technology for democracy calls for a democratic development process, and these steps would allow electoral technology, if deemed valuable enough to implement, to approach this democratic ideal.

III. Looking Forward: Applying Best Practices to COVID-19 Intervention

Based on the two case studies presented, we can distill our findings about best practices for technological intervention into two broad buckets: transparency and equity. Both are critically important but not exhaustive; they act as umbrella terms for the central ethical issues that must be considered when implementing technological interventions during COVID-19.

Covid-19 Intervention: Transparency Considerations

In both the Internet.org and IowaReporterApp cases, a lack of transparency both undermined public credibility and raised ethical concerns. We can expect transparency to be a prominent ethical concern for the majority of contact-tracing and health-monitoring efforts, since sensitive user data must be handled for case-mitigation efforts to be effective, and tracking this data creates privacy risks.

Transparency thus needs to be two-fold: incorporated both in a technical capacity and in a user-centric context. First, on technical transparency: technologists should consider releasing their code as open source and making it auditable by third parties whenever possible without sacrificing security, so that stakeholders can raise concerns, test for vulnerabilities, and build off existing work. Whenever possible, private technologists should commit to initiatives such as the Open Covid Pledge and Open Source Against Covid-19 to make intellectual property freely available to external parties.

Beyond enabling collaborative development, this protocol would allow technologists, particularly those handling high-risk data such as geographic location and public health information, to avoid the security-through-obscurity fallacy that the IowaReporterApp fell prey to. This is especially important for contact-tracing technologies, which necessarily incorporate case-identification algorithms that can encode a variety of underlying biases. Thus, as researchers Dubov and Shoptaw argue:

“Algorithms that will operationalize any case identification intervention should be open to public scrutiny to ensure fairness, accuracy, and absence of bias.” [38]

The second prong of transparency is the responsibility to inform users candidly about the technology’s offerings, the risks associated with its use, and the entities that played a role in its development. For example, if a free application providing exclusively coronavirus-prevention information is made accessible to everyone with a mobile phone, even without a data plan, effort should go into ensuring that the service’s messaging is honest about its limited offerings. As the Internet.org case shows, it is better to be explicit about a service’s limitations and point users to further information than to use broad messaging that misleads them about the resources available to them. Guidance on where users can access more information and support, online and otherwise, should be provided in a culturally relevant context. This entails forming on-the-ground partnerships with local community leaders and public health officials, and gathering input from community stakeholders to ensure that resources are appropriately localized and trusted where the technology is implemented.

Similar on-the-ground, localized work is required to ensure that users fully understand the privacy risks of using such technology. For coronavirus-monitoring technologies such as contact tracing in particular, consent is a difficult topic, since the number of users involved directly determines how effective the initiative is at mitigating COVID-19 spread. While certain countries have implemented non-voluntary contact tracing, requesting voluntary consent from users should be a priority whenever possible to preserve individual autonomy. Where consent cannot be requested and an opt-in program is out of the question, as with Israel’s mandated case-monitoring system, “an ethical alternative is a third-party contact-tracing app freely downloaded by users who give their consent to location tracing and disclosure of information in a privacy-sensitive manner.”[39] This could be achieved by not revealing the personal identifiers of those diagnosed and instead using an anonymized, aggregated-data approach that merely informs someone if they have been in contact with an individual who may have had the virus.
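To make that pattern concrete, here is a minimal Python sketch, loosely modeled on the decentralized exposure-notification approach popularized by Apple and Google’s API: devices broadcast rotating random tokens, diagnosed users consent to publish theirs, and matching happens locally on each phone. The names and structure here are my own illustrative assumptions, not the actual protocol.

```python
# Illustrative sketch of privacy-preserving contact matching: anonymous
# rotating tokens, local matching, no central contact graph. Loosely modeled
# on decentralized exposure notification; not the actual Apple/Google protocol.
import secrets

class Device:
    def __init__(self) -> None:
        self.broadcast_tokens: list[bytes] = []  # tokens this device has emitted
        self.heard_tokens: set[bytes] = set()    # tokens heard from nearby devices

    def broadcast(self) -> bytes:
        # Periodically emit a fresh random token carrying no personal identifiers.
        token = secrets.token_bytes(16)
        self.broadcast_tokens.append(token)
        return token

    def hear(self, token: bytes) -> None:
        self.heard_tokens.add(token)

    def check_exposure(self, published_tokens: set[bytes]) -> bool:
        # Matching happens on-device: the server only ever holds the anonymous
        # tokens of consenting diagnosed users, never identities or locations.
        return bool(self.heard_tokens & published_tokens)

alice, bob = Device(), Device()
bob.hear(alice.broadcast())              # Alice and Bob were in proximity
published = set(alice.broadcast_tokens)  # Alice consents to publish after diagnosis
print(bob.check_exposure(published))     # True: Bob is warned; Alice stays anonymous
```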

Google and Apple’s contact tracing infographic emphasizes needing user consent to upload users’ broadcast beacon keys; sourced from Geoawesomeness.

To increase credibility in the technology being used and to ensure that conflicts of interest are avoided, technologists should also make transparent to users which stakeholders were involved in a project’s creation, whether as consultants, developers, or funders. This matters because a user may perceive the reliability of a platform differently if it is created by a governmental or official public health entity rather than a corporation. Amid this pandemic’s infodemic of rampant digital misinformation, ensuring accountability by revealing stakeholder involvement is particularly important.

A snippet of Project Baseline’s Privacy Policy outlines information collected from users being tested through the mass testing initiative.

In a similar vein, it should also be made clear with whom collected data is shared. Private technologist initiatives such as Verily’s Project Baseline, which aims to conduct mass testing and case-mapping across the nation, take steps in their privacy policy to show how and when data will be shared with third parties. Given the sensitivity of such public health data, however, technologists ought to go a step further: tell users exactly who their data is shared with, or let users inquire when third parties use their data. Importantly, this information should be presented in a user-centric manner, avoiding technical jargon as much as possible, so that users can give genuine informed consent.

Covid-19 Intervention: Equity Considerations

The world of contact tracing and case-spread mitigation via diagnosis and identification is a minefield for exacerbating societal inequity: in all forms, these initiatives essentially involve singling out individuals who have the virus and sharing this information with government bodies or other users to encourage differential treatment. Because health is so entrenched in access to infrastructure, the equity dangers of COVID-19 interventions are all the more salient. If technologies do not consider equity-related concerns, they risk exposing extremely sensitive user information or misinforming users in ways that directly affect their health and the lives of those around them.

To ensure that technologies are developed without marginalizing certain groups, technologists should imagine the worst-case scenario for the inequities that could arise as a result of their technology being adopted.

Startups are getting involved in the development of Covid-19 immunity passports as well, which document testing data to allow for increased freedoms to those testing negative or shown to have the virus antibodies; sourced from FT.com.

For example, particularly worrisome is the potential for discrimination posed by initiatives such as immunity passports, which would grant those who have developed COVID-19 antibodies privileges such as returning to in-person workplaces or traveling sooner than those without them. Some experts argue that this kind of differential treatment could have a positive impact by protecting “the most fragile, not marginalizing them,” while others suggest it could wreak havoc on equity within society.[40] Such a passport, currently under consideration in Britain, Italy, and Germany, might benefit some individuals but has drastic implications for equity across populations.

The worst-case impact can be foreseen by looking back at the spread of yellow fever in antebellum New Orleans, where “acclimation” to the fever (surviving it and thus developing antibodies against it) became another layer of rampant discrimination, building a hierarchy on top of existing prejudices and a regime of slavery. Instead of loosening racism and socioeconomic inequality by allowing those who had contracted the disease more opportunity in society, this acclimation privilege only widened the gap: “enslaved people who’d acquired immunity increased their monetary value to their owners by up to 50 percent. In essence, black people’s immunity became white people’s capital.”[41] Meanwhile, wealthy white individuals who were unacclimated could still stay home, socially distance, and access healthcare if they needed it.

The dangers of using technology to discriminate during COVID-19 thus seem salient. While these technologies will likely discriminate by their very function of identifying those with the virus, it is crucial that, if they are developed, efforts are made in tandem to support the most vulnerable. If an immunity passport is implemented, effective governance structures must ensure that those with immunity are not exploited for their ability to perform close-contact work, and that an appropriate social safety net supports those without immunity through continued distancing or isolation. These efforts extend to other interventions as well: if contact tracing is required, identities should not be revealed, and those asked to isolate should be referred to accessible services that can provide healthcare and basic needs in a timely fashion. Furthermore, access to these technological applications must itself be considered in the context of equity. Dubov and Shoptaw state in their explanation of best practices that:

“The mobile intervention may not reach vulnerable groups, if they have no access to mobile phones, or if they cannot navigate an app interface due to language or tech literacy, or if they are worried about the security of their private data. Efforts to implement digital contact-tracing should go hand in hand with determining which groups are likely to be excluded or misrepresented by these tools.” [42]

Finally, even beyond the realm of health-related applications, equity concerns should be kept front of mind. If remote education is delivered through digital channels, for example, technologists should consider whether their services are accessible to all children, or whether they should instead be transparent about their limited ability to serve only a segment of the population. If video-conferencing is used for education, employment, healthcare, and more, efforts should be made to ensure that those with access to technology are not marginalized based on their living spaces, and that populations without access to technology can continue to receive such services. Evidently, for public-interest technology not to exacerbate existing inequities, offline efforts are needed to remedy the inherent lack of access in society that lets this inequity flourish in the first place.

[1] Richard Florida, “Start-Ups Surge in the Great Reset,” The Atlantic, May 21, 2010, https://www.theatlantic.com/business/archive/2010/05/start-ups-surge-in-the-great-reset/57052/.

[2] Ibid.

[3] Jeffrey Brainard, “Scientists Are Drowning in COVID-19 Papers. Can New Tools Keep Them Afloat?” ScienceMag.org, May 13, 2020, https://www.sciencemag.org/news/2020/05/scientists-are-drowning-covid-19-papers-can-new-tools-keep-them-afloat.

[4] Jessi Hempel, “What Happened to Internet.Org, Facebook’s Grand Plan to Wire the World?” Wired, May 17, 2018, https://www.wired.com/story/what-happened-to-facebooks-grand-plan-to-wire-the-world/.

[5] Leo Mirani, “Millions of Facebook Users Have No Idea They’re Using the Internet,” Quartz, February 9, 2015, https://qz.com/333313/milliions-of-facebook-users-have-no-idea-theyre-using-the-internet/.

[6] Jason Koebler, “Human Rights Groups Say Facebook’s Internet.Org ‘Exacerbates the Digital Divide,’” Vice, May 18, 2015, https://www.vice.com/en_us/article/bmj4xz/human-rights-groups-say-facebooks-internetorg-exacerbates-the-digital-divide.

[7] EDRi, “Open Letter to Mark Zuckerberg: Internet.Org vs. Net Neutrality, Privacy and Security,” May 19, 2015, https://edri.org/letter-facebook-internet-org/.

[8] Facebook, “Free Basics: Myths and Facts,” Internet.Org (blog), November 19, 2015, https://info.internet.org/en/blog/2015/11/19/internet-org-myths-and-facts/.

[9] Mirca Madianou, “Technocolonialism: Digital Innovation and Data Practices in the Humanitarian Response to Refugee Crises,” Social Media + Society 5, no. 3 (April 1, 2019): 3, https://doi.org/10.1177/2056305119863146.

[10] EDRi, “Open Letter to Mark Zuckerberg.”

[11] Preeti Mudliar, “Public WiFi Is for Men and Mobile Internet Is for Women: Interrogating Politics of Space and Gender around WiFi Hotspots,” Proceedings of the ACM on Human-Computer Interaction 2, no. CSCW (November 1, 2018): 126:1, https://doi.org/10.1145/3274395.

[12] Samantha Bates, Christopher Bavitz, and Kira Hessekiel, “Zero Rating & Internet Adoption: The Role of Telcos, ISPs, & Technology Companies in Expanding Global Internet Access,” Berkman Klein Center for Internet and Society, November 2017, 7–8, https://dash.harvard.edu/handle/1/33982356.

[13] Sara Morrison, “Iowa Caucus: How the 2020 App Disaster Could Have Been Avoided,” Vox, February 7, 2020, sec. Recode, https://www.vox.com/recode/2020/2/7/21125078/iowa-caucus-2016-mobile-app-2020.

[14] Sydney Ember and Reid J. Epstein, “The 1,600 Volunteers Who Were Supposed to Make the Iowa Caucuses Run Smoothly,” New York Times, February 4, 2020, sec. 2020, https://www.nytimes.com/2020/02/04/us/politics/iowa-caucus-problems.html.

[15] Jessica Taylor, “Iowa Democratic Party: App ‘Coding Issue’ To Blame For Delay In Caucus Results,” NPR.org, February 4, 2020, https://www.npr.org/2020/02/04/802502709/iowa-dem-party-says-delay-due-to-reporting-issue-county-chairs-blame-malfunction.

[16] Ibid.

[17] Kate Payne and Miles Parks, “Despite Election Security Fears, Iowa Caucuses Will Use New Smartphone App,” NPR.org, January 14, 2020, https://www.npr.org/2020/01/14/795906732/despite-election-security-fears-iowa-caucuses-will-use-new-smartphone-app.

[18] Ibid.

[19] Nick Corasaniti, Sheera Frenkel, Shane Goldmacher, and Nicole Perlroth, “App Used to Tabulate Votes in Iowa Is Said to Have Been Inadequately Tested,” Pittsburgh Post-Gazette, February 4, 2020, https://www.post-gazette.com/business/tech-news/2020/02/04/app-used-iowa-democratic-caucus-technical-problems/stories/202002040078.

[20] Michael Kranish, “ORCA, Mitt Romney’s High-Tech Get-out-the-Vote Program, Crashed on Election Day,” Boston Globe, November 9, 2012, https://www.boston.com/uncategorized/noprimarytagmatch/2012/11/09/orca-mitt-romneys-high-tech-get-out-the-vote-program-crashed-on-election-day.

[21] Emily Stewart, “Acronym, the Dark Money Group behind the Iowa Caucuses App Meltdown, Explained,” Vox, February 5, 2020, sec. Recode, https://www.vox.com/recode/2020/2/5/21123009/acronym-tara-mcgowan-shadow-app-iowa-caucus-results.

[22] Lee Fang, “New Details Show How Deeply Iowa Caucus App Developer Was Embedded in Democratic Establishment,” The Intercept, February 4, 2020, https://theintercept.com/2020/02/04/iowa-caucus-app-shadow-acronym/.

[23] Sara Morrison, “The Iowa Caucus Smartphone App Disaster, Explained,” Vox, February 6, 2020, https://www.vox.com/recode/2020/2/4/21122211/iowa-caucus-smartphone-app-disaster-explained.

[24] Stewart, “Acronym, the Dark Money Group,” https://www.vox.com/recode/2020/2/5/21123009/acronym-tara-mcgowan-shadow-app-iowa-caucus-results.

[25] Ibid.

[26] Lachlan Markay and Sam Stein, “Investors Rush to Downplay Ties to Shadow, the Firm Behind Iowa Caucus Clusterf*ck,” The Daily Beast, February 4, 2020, https://www.thedailybeast.com/investors-rush-to-downplay-ties-to-shadow-the-firm-behind-iowa-caucus-clusterfuck.

[27] Sara Morrison, “Iowa Caucus,” https://www.vox.com/recode/2020/2/7/21125078/iowa-caucus-2016-mobile-app-2020.

[28] Ibid.

[29] Ibid.

[30] Ibid.

[31] Zeynep Tufekci, “How a Bad App — Not the Russians — Plunged Iowa Into Chaos,” The Atlantic, February 4, 2020, https://www.theatlantic.com/technology/archive/2020/02/bad-app-not-russians-plunged-iowa-into-chaos/606052/.

[32] Jill Leovy, “The Computer Scientist Who Prefers Paper,” The Atlantic, December 2017, sec. Technology, https://www.theatlantic.com/magazine/archive/2017/12/guardian-of-the-vote/544155/.

[33] Nic Cheeseman, Gabrielle Lynch, and Justin Willis, “Digital Dilemmas: The Unintended Consequences of Election Technology,” Democratization 25, no. 8 (November 17, 2018): 1042, https://doi.org/10.1080/13510347.2018.1470165.

[34] Ibid.

[35] Ibid., 1044.

[36] Ibid.

[37] The Electoral Knowledge Network, “Voter Education and Public Information,” Ace Project, http://aceproject.org/ace-en/topics/em/emia/emia04.

[38] Alex Dubov and Steven Shoptaw, “The Value and Ethics of Using Technology to Contain the COVID-19 Epidemic,” The American Journal of Bioethics (May 18, 2020): 1–5, https://doi.org/10.1080/15265161.2020.1764136.

[39] Ibid., 2.

[40] Jason Horowitz, “In Italy, Going Back to Work May Depend on Having the Right Antibodies,” The New York Times, April 4, 2020, sec. World, https://www.nytimes.com/2020/04/04/world/europe/italy-coronavirus-antibodies.html.

[41] Kathryn Olivarius, “The Dangerous History of Immunoprivilege,” The New York Times, April 12, 2020, sec. Opinion, https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html.

[42] Dubov and Shoptaw, “The Value and Ethics of Using Technology,” 4, https://doi.org/10.1080/15265161.2020.1764136.
