“We Are Who We Say We Are”

On the datafication of identity and the social safety net

Michele Gilman
Data & Society: Points
7 min read · May 11, 2022


This post is part of a series on the Mindful Digital Welfare State, a collaboration between Postdoctoral Scholar Ranjit Singh and Research Analyst Emnet Tafesse with Data & Society’s AI on the Ground Initiative.

In March 2020, when the pandemic cost Joe Clark his job as a hospital food service worker, he applied online for unemployment insurance. He received benefits for a few weeks, but then payments abruptly stopped, and his account was frozen. The state department of labor posted messages to his account demanding that he verify his identity. At the same time, the system denied him access to his account, flagging him as a case of potential fraud. It was a classic Catch-22: Clark had numerous documents proving his identity, but was unable to upload them or reach a human at the state agency who could help him. The technology that should have served this citizen in his time of need utterly failed. Without any income, Clark could not make rent, and in time he became homeless and suffered a mental breakdown.

In the wake of the pandemic, Joe Clark’s situation was not unique. State unemployment insurance (UI) systems were flooded with applications from almost a quarter of American workers, and state labor agencies struggled to meet demand after years of cutting staff. Criminal syndicates pounced on this chaos and the influx of federal UI funding to file billions of dollars in false claims, often stealing the identities of real workers.

In their efforts to stanch the losses, states imposed a variety of new identity verification requirements on UI claimants and deployed automated fraud detection systems. Identity verification involves taking the data presented by an individual and comparing it against an existing database. But instead of delivering efficiency and accuracy (as tech developers promised), these automated systems wrongfully denied benefits to millions of eligible workers like Clark, and disproportionately harmed Black workers. This nationwide fiasco upended people’s lives, and demonstrates the perils of datafying identity. As one worker locked out of her state’s automated UI system exclaimed in frustration, “We are not hiding who we are. We are who we say we are.”
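In practice, this comparison is often an exact-match lookup against records the agency or its vendor already holds. Below is a minimal sketch of that pattern in Python; the `RECORDS` store, field names, and matching rule are hypothetical simplifications (real systems query DMV records, credit bureaus, and other data brokers), but it shows why a stale address or a name variant can derail an honest claim:

```python
# Minimal sketch of database-matching identity verification.
# RECORDS and the exact-match rule are hypothetical simplifications;
# real systems query DMV records, credit bureaus, and other brokers.

RECORDS = {
    "123-45-6789": {"name": "joe clark", "dob": "1975-03-02",
                    "address": "10 main st"},
}

def verify_identity(claim: dict) -> bool:
    """Return True only if every field the claimant submits exactly
    matches the record on file for the claimed SSN."""
    record = RECORDS.get(claim.get("ssn"))
    if record is None:
        return False  # no record on file -> treated as unverified
    return all(
        claim.get(field, "").strip().lower() == value
        for field, value in record.items()
    )

# A name variant ("Joseph" vs. "joe") fails the exact match, so a
# legitimate claimant is flagged even though he is who he says he is.
claim = {"ssn": "123-45-6789", "name": "Joseph Clark",
         "dob": "1975-03-02", "address": "10 Main St"}
print(verify_identity(claim))  # False -> frozen account, manual review
```

The brittleness of exact matching is one reason people with complete, valid documentation still end up locked out of their accounts.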

Inequity in ID verification

Identity verification systems were designed with privileged users in mind; their developers assumed that claimants would own a computer and know how to use it, be literate, and speak English. Yet for many people who became unemployed during the pandemic, particularly those who were low-income, elderly, disabled, or non-English speaking, these assumptions created problems from the outset.

Many low-income people access the internet via their smartphones, and yet certain UI platforms were fully accessible only on computers. Other unemployed people lacked access to an internet connection of any sort. Due to a persistent digital divide, twenty percent of American adults do not have a smartphone, while twenty-five percent do not have home broadband — and low-income Americans and racial minorities are disproportionately disconnected.

Even with access to a computer and broadband, some claimants, particularly senior citizens and non-English speakers, lacked the digital literacy to follow uploading instructions that were often overly technical and sometimes even conflicting. For instance, onscreen identity verification instructions in Maryland told claimants to crop the images they submitted, a skill many of them lacked.

Certain design features were destined to ensnare innocent people. When several applications use the same address, it can trigger a fraud alert, even though it is common for multi-generational families to share an address (a category that grew during the housing crunch precipitated by the pandemic); a shared address is also a feature of housing settings like homeless shelters. Fraud alert systems also commonly flag ethnic names and names that do not follow typical American naming conventions. All of this means that these systems are tilted toward capturing fraud rather than delivering benefits to those in need.
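To make the problem concrete, here is a hedged sketch of the kind of rules-based screen described above. The thresholds, patterns, and function names are invented for illustration and do not reflect any state's or vendor's actual rules:

```python
import re
from collections import Counter

# Hypothetical rules-based fraud screen; the thresholds and patterns
# are invented for illustration and mirror the design flaws described
# above, not any actual state or vendor system.

ADDRESS_THRESHOLD = 3  # flag any address shared by 3+ claims
NAME_PATTERN = re.compile(r"^[A-Z][a-z]+ [A-Z][a-z]+$")  # "First Last" only

def screen(claims: list[dict]) -> list[dict]:
    """Return claims flagged by the shared-address and name rules."""
    address_counts = Counter(c["address"] for c in claims)
    flagged = []
    for c in claims:
        reasons = []
        if address_counts[c["address"]] >= ADDRESS_THRESHOLD:
            reasons.append("shared address")
        if not NAME_PATTERN.match(c["name"]):
            reasons.append("atypical name format")
        if reasons:
            flagged.append({**c, "reasons": reasons})
    return flagged

# A multi-generational household and names with diacritics or
# apostrophes all trip the rules, despite being perfectly legitimate.
claims = [
    {"name": "Rosa Martínez", "address": "42 Elm St"},
    {"name": "Luis Martínez", "address": "42 Elm St"},
    {"name": "Ana Martínez", "address": "42 Elm St"},
    {"name": "D'Shawn O'Neal", "address": "9 Oak Ave"},
]
for f in screen(claims):
    print(f["name"], "->", f["reasons"])
```

Note what even this toy screen misses: an organized ring filing from many different addresses under conventional names sails through, while an honest multi-generational family is flagged.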

Digital denials of due process

Unclear ID verification processes undermined due process: government assistance was often reduced or denied without the required notice and hearing. Many applicants foundered at the threshold, never getting far enough into the application to generate a denial that would even trigger a hearing.

Moreover, across the country, states failed to clarify the acceptable forms of identity documentation, leaving unfettered discretion in the hands of low-level bureaucrats and system designers. State agencies were making it up as they went along, without legislative or notice-and-comment processes, which are designed to enhance democratic accountability and provide for public input. As one UI contractor put it, “nobody has a definition of ‘fraud,’ or any clear cut process or guidelines to follow…” Identity verification standards continue to change frequently, without notice to claimants, and they continue to lack specificity.

The lack of clear standards is particularly difficult for people who do not have photo IDs. Eleven percent of voters lack photo identification, and this gap widens for minorities and the elderly: twenty-five percent of African Americans lack identification, as do sixteen percent of Hispanics, nineteen percent of Native Americans, and eighteen percent of senior citizens. Members of these groups face numerous barriers to obtaining photo ID, such as lacking the funds to pay for one or living far from an ID-issuing office with limited transportation options. When Alabama sought to resolve a budget shortfall, the state closed thirty-one driver’s license offices, exclusively in poor areas. As identity verification requirements expand, these groups are left behind.

Problematic privatization

The privatization of identity verification raises profound questions about the outsourcing of governmental functions, particularly those that shape the relationship between the citizen and the state. In 2021, over half of US states contracted with private companies to handle identity verification in UI. The main player is a company called ID.me, used in at least twenty-seven states (and by at least ten federal agencies for a variety of federal programs).

Almost immediately after ID.me was rolled out for UI, claimants began reporting difficulty with the technology, venting their frustrations on Twitter and online message boards. ID.me’s technology failed to recognize many people, who then complained of waiting for days and weeks to reach a human “referee.” One frustrated applicant stated that ID.me rejected his video selfie, “didn’t give us a reason, just rejected it. It rejected it three times, and then it locked me out of the system.”

In November 2021, the IRS announced that taxpayers would need to use ID.me to access their tax records and online services. A furious backlash ensued. Critics pointed to user frustrations and lengthy delays, as well as ID.me’s use of facial recognition technology (FRT) to confirm user identities: Extensive research has established that FRT has much higher error rates for non-white people, and particularly for Black women.

Lawmakers on both sides of the aisle opposed the plan, demanding that the IRS abandon the use of FRT. In February 2022, the IRS announced it would no longer require ID.me for identity verification, but the service remains available as an option, and ID.me continues to expand its reach across state and federal bureaucracies. As a result, the biometric data of millions of Americans sits in the hands of a private vendor with no transparency and limited accountability.

Fulfilling the human right to identity

Identity verification software has become more than a tool for assessing eligibility; it has become the definition of eligibility. This is contrary to the remedial purpose of UI benefits and many other government services. Moreover, it undermines identity as a human right. The 1948 Universal Declaration of Human Rights sets forth “the right to recognition everywhere as a person before the law.” Still, over 1.1 billion people in the world today lack official identification. Thus, in 2015 the United Nations made one of its Sustainable Development Goals the requirement that countries “provide legal identity to all including through birth registration, by 2030.” At their best, identity verification processes can promote individual rights and enhance access to state support. At their worst, they can exclude poor and marginalized groups while expanding the reach of the surveillance state.

Where do we go from here? Current identity verification processes used by governments are built on a fraud-first presumption that reflects distrust and “othering” of the poor. That presumption needs to be flipped, with the primary goal being to serve eligible citizens in an equitable and timely manner. This also requires that digital platforms be designed with the needs, interests, and input of users at the forefront. Identity verification requirements must be clear and specific, with alternate, in-person options for people lacking digital access. While preventing fraud is important, these systems require far more sophisticated, carefully piloted analytics for identifying suspected criminal conduct than those currently in use.

Identity verification is a core governmental function, and there are serious ramifications for citizens when platforms fail. Outsourcing this function should be eliminated, and government agencies should regularly audit their identity verification systems for access and accuracy and publicly report the relevant data. They should also work across agencies to streamline identity verification procedures, rather than putting citizens through multiple, conflicting processes each time they interact with the state online. And identity verification should be accomplished without the collection and retention of biometric data.

None of these measures were in place when the state forced Joe Clark to prove his identity. Through its online platform, the state not only stripped him of the financial resources designed to support workers in times of economic calamity; it denied him his very personhood.

A longer version of this piece was published in Minnesota Law Review Headnotes as “Me, Myself, and My Digital Double: Extending Sara Greene’s Stealing (Identity) From the Poor to the Challenges of Identity Verification.”

Michele Gilman

Venable Professor of Law and Associate Dean for Faculty Research and Development, University of Baltimore School of Law and Affiliate, Data & Society.