Expanding Frameworks

An economic justice approach to digital privacy

Michele Gilman
Data & Society: Points
7 min read · Nov 6, 2019


Mrs. Silver’s health was deteriorating. At the age of 79, she was suffering from dementia, high blood pressure, and severe arthritis. She needed daily care, and her family was shocked to learn that the state was reducing by two-thirds the hours of home health care that it would provide to Mrs. Silver under the state’s Medicaid program. Her family challenged the assessment in an administrative hearing, where the state revealed that its decision was based in part on the output of an algorithm it purchased from an independent vendor.

Mrs. Silver’s lawyers* were in a bind: How do you cross-examine an algorithm? The state’s expert witness could not identify the factors that went into the algorithm, how they were weighted, or the logic behind the algorithmic outcome. Thus, there was no way to know if there had been an underlying policy change, if the algorithm accurately coded the governing law, or if errors in Mrs. Silver’s data had been fed into the algorithm. The stakes could not have been higher. Without adequate health care, Mrs. Silver’s life was at risk. Without money, she could not buy the additional home care hours she needed.

Digital technologies raise challenges not only to equality, but also to equity.

Low-income people are being adversely impacted by a range of public and private digital technologies, and automated decision-making in social service programs is part of this alarming trend. Thus far, public debate has focused on the serious threat digital technologies pose to civil and political rights. This is vitally important, but it isn’t enough: To fully understand the digital sorting and surveillance faced by low-income people, we also need to assess these impacts through a social and economic rights framework.

In other words, digital technologies raise challenges not only to equality (being treated fairly), but also to equity (having what you need to fulfill your human potential). Illegal discrimination magnifies economic disparities, but equality doctrine alone cannot lead to equity. After all, Mrs. Silver was not being discriminated against — rich people do not turn to the state to meet their health care needs. Rather, she was being denied a core necessity for human existence. Because discrimination law is not about fulfilling substantive guarantees to life’s necessities, it can never do the heavy lifting of eliminating all forms of digital exploitation.

Economic Rights in the Digital Era

In fighting oppression, Americans tend to turn first to rhetoric and rules around civil and political rights. This is not surprising, given our Constitution’s focus on negative rights, that is, protections against government coercion. We have less of a tradition and fewer legal “hooks” to advocate for substantive economic rights. By contrast, the Universal Declaration of Human Rights contains a robust commitment to economic and social rights. It provides that people have a right to work and to “an adequate standard of living … including adequate food, clothing and housing,” as well as health care and education.

Yet as a growing body of research has illustrated, an array of digital technologies implemented in our social services and health care institutions may be undermining these rights. The UN Special Rapporteur on Extreme Poverty and Human Rights recently warned that a “digital welfare dystopia” is emerging, in which “systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish.” In addition, low-income Americans are vulnerable to a range of predatory and discriminatory treatment online.

An algorithm can be technically correct, but heartless.

Based on their digital profiles, low-income people in the United States are targeted online for predatory marketing, such as high-interest payday loans and for-profit educational scams that trap people in debt and undermine their economic stability. At the same time, algorithms are used to exclude low-income people from access to housing, education, mainstream financial services, and employment. For example, colleges are using algorithms to identify potential students who can pay the full cost of college tuition and market to them, thus excluding underprivileged high schoolers from their outreach efforts.

As Mrs. Silver’s story shows, governments are also using algorithms to allocate public benefits, but these systems lack transparency and accountability. In Michigan alone, the automated unemployment insurance system erroneously accused over 20,000 people of fraud, saddling claimants with massive fines that pushed them into financial distress and bankruptcy and, in some cases, drove them to suicide. Likewise, in Arkansas, an algorithmic model used to determine levels of home health care was riddled with errors and oversights, such as determining that a patient should receive less care because they did not have foot problems — the algorithm did not recognize that the patient was an amputee. An algorithm can be technically correct, but heartless.

Poor people experience privacy differently because the stakes for them are often much more dire.

Overlaying these patterns of targeting and exclusion, low-income people are living under heightened surveillance. Public housing authorities are adopting facial recognition technology in the name of safety, but the technology is inaccurate, particularly for people of color and women. And even the most accurate facial recognition technology contributes to the further criminalization of poverty. As one low-income tenant stated in protesting facial recognition, “We should not feel like we’re in a prison to enter our homes.” By contrast, residents of a high-income property may welcome facial recognition technology for the sense of increased security and convenience it provides them. Of course, it is unlikely their biometric data or that of their guests will be fed to local law enforcement. As this example reveals, poor people experience privacy differently because the stakes for them are often much more dire.

Digital Privacy and Economic Justice

Anti-discrimination law is a weak tool to counter these trends. Legal scholars have convincingly explained how laws enacted for an analog world are ill-suited to the highly predictive and often unintentional forms of discrimination in the digital world.

Yet even if anti-discrimination law caught up to the modern era, poor people could still be discriminated against with impunity. Neither the Constitution nor civil rights statutes recognize poverty as a protected class. Thus, a payday lending company can electronically identify and target poor people to trap them in never-ending debt cycles without raising the specter of illegal discrimination.

Anti-discrimination principles simply do not capture the full range of digital harms facing poor people.

Anti-discrimination principles simply do not capture the full range of digital harms facing poor people. Anti-discrimination law is about letting everyone onto the playing field — but it is not about providing people with the uniforms, cleats, or equipment needed to join the game. The ability to obtain a low-skill job with a living wage, predictable hours, and health care benefits is not simply a matter of purging discriminatory employers from the workplace. An entire realm of our economy exploits low-wage workers regardless of their race, ethnicity, or gender.

Privacy law, as it is currently structured in the United States, also fails poor people. The origins of American privacy law derive from a “right to be let alone” conceptualized in the late 19th century by Samuel Warren and Louis Brandeis, affluent legal elites concerned about the prying eyes of the press. This conception of privacy does not help poor people, who must interact regularly with government agencies for social support. They do not need to be left alone; they need to be treated with dignity.

The highly individualized “right to be let alone” has morphed into a neoliberal privacy regime that relies on notice and consent to protect people’s privacy interests.

Over time, the highly individualized “right to be let alone” has morphed into a neoliberal privacy regime that relies on notice and consent to protect people’s privacy interests. This puts the onus on individuals — rather than on businesses or government — to protect their personal data. Yet no one reads the convoluted, take-it-or-leave-it terms offered as a condition of internet access. Low-income people tend to have lower education levels and are thus even less equipped to navigate this complex legal maze. Not surprisingly, a Data & Society survey and report found that low-income people have less confidence in, and greater concerns about, their ability to protect their digital privacy and security.

Accordingly, we must reconceptualize our notions of digital privacy through the lens of economic justice. It is time to bring more tools to the table to fight digital profiling and automated decision-making through coalitions of digital privacy advocates, the civil rights community, and economic justice and anti-poverty activists. Economic justice and data justice are linked. In today’s world, we will not have one without the other.

Michele Gilman is the Venable Professor of Law at the University of Baltimore School of Law, where she directs the Saul Ewing Civil Advocacy Clinic, in which student attorneys represent individuals and community groups in a wide array of civil litigation and law reform projects. She researches and writes about privacy, poverty, economic justice, and feminist legal theory. For the 2019–2020 academic year, she is a faculty fellow at Data & Society exploring the intersection of data privacy law with the concerns of low-income communities.

*The author was a lawyer representing Mrs. Silver, whose name and identifying characteristics have been changed to protect her identity and privacy.
