When verification is also surveillance

EVV devices could intrusively track Medicaid recipients

Jacob Metcalf
Data & Society: Points
Feb 27, 2018


What would it feel like if you needed to verify your identity and physical location with your state government via a GPS-enabled biometric device every time you exercised a civil right? And if you didn’t properly check in, you ran the risk of losing that right — or of losing your health, or even a family member’s life?

What would it feel like if the government then outsourced the responsibility for managing that check-in process to a third-party contractor that required you to use a clunky, custom-made device with a slow UI — and a backend database prone to leaking protected personal health data to other people? What if the recipient of the contract to build those devices and maintain those databases was a major lobbyist behind the law establishing this obligation?

A client in Ohio attempts to use the preferred EVV device, demonstrating confusion about whether the front-facing camera on the device is actually operable.

My guess is that you would be infuriated. I certainly wouldn’t stand for it. Yet this is the scenario being enacted for Medicaid recipients who utilize personal care attendants (PCAs) and in-home health care aides (HCAs).

Buried in the 21st Century Cures Act, signed by President Obama in 2016, is a directive for states to adopt Electronic Visit Verification (EVV) technologies and services that will track and verify the labor provided by caregivers to Medicaid recipients and their families. By electronically logging when and where a caregiver begins and ends a shift, EVV is intended to ensure that the services billed were actually provided, which ostensibly offers some fraud protection to both care recipients and taxpayers.

The core purpose of in-home care assistants is to ensure that differently-abled persons and their families are able to live the fullest life possible in their own community and not in an institution.

At first glance, EVV seems like a simple distributed time clock, but it also represents a deceptively intrusive tracking of the lives of Medicaid recipients. Caregivers for the disabled are required to work flexible hours and in a variety of locations. Recipients of these services are people who may need help getting to a doctor’s appointment one day, or getting around the mall another day, or none at all the next day. Parents of disabled children might need relief one day in order to attend their other child’s recital and need none the next day because Grandma is in town. Furthermore, many caregivers have more than one client and many clients have more than one caregiver. That trip to the mall might involve a shift change at the food court, or attending a recital might involve a shift change back home. And in states that allow family members to be paid as caregivers, the majority of PCAs and HCAs may also be family members.

Due to this wide variety of working conditions, caregivers checking into work is not as simple as punching a time clock. Thus, EVV technologies are intended to verify when and where care assistants provide billable labor. According to the law, states must adopt EVV systems that can verify:

  • Type of service performed;
  • Individual receiving the service;
  • Date of the service;
  • Location of service delivery;
  • Individual providing the services; and
  • Time the service begins and ends.
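To see how little the statute itself pins down, it helps to picture these six elements as the record an EVV system must produce for each visit. The sketch below is purely illustrative (hypothetical field names, not any vendor’s actual schema); note that the law says “location,” which does not by itself mean GPS coordinates.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class VisitRecord:
    """One verified visit, per the six statutory elements (illustrative only)."""
    service_type: str      # type of service performed
    recipient_id: str      # individual receiving the service
    provider_id: str       # individual providing the services
    service_date: str      # date of the service, e.g. "2018-02-27"
    location: str          # location of service delivery; the statute says
                           # "location", not "GPS coordinates"
    start_time: datetime   # time the service begins
    end_time: datetime     # time the service ends
```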

There’s some ambiguity about whether states must adopt a single service provider or can create open standards to which care-giving agencies subscribe. Some states have already started rolling out the devices and the situation is not promising. In Ohio, clients are sent devices produced by the home medical care data services company Sandata (a chief lobbyist for the EVV rule in the 2016 Cures Act); these are CAT-manufactured military-grade cell phones running a custom Android OS. These devices are smartphones with a camera and microphone, although the state of Ohio claims that the cameras have been disabled (without offering further specifics) and that the microphone is only turned on when the recording function is triggered.

Yet the discussion of these devices among clients is full of confusion and distrust about what types of data the devices are technically capable of recording. Conversations in anti-EVV Facebook groups include Medicaid clients who have relied on family members to decompile similar software offered by Sandata to try to get a sense of the technical specifications of the devices. Other users note that they store the devices in the garage, the doghouse, or a drawer to muffle the microphone, reflecting a range of folk theories about how to foil potential surveillance. The most innocent explanation of how these devices work may be true, but the widespread confusion indicates that the relevant government agencies and device manufacturers are not taking seriously the legitimate privacy concerns of the clients. It further indicates that we should doubt these devices were designed using participatory design practices or appropriately field tested.

There is the further issue of how biometric data is used in automated verification systems. The devices in Ohio use voice recordings or electronic signatures for a client to confirm the logged work, although there is no public communication that explains whether voice verification is done using biometric analysis, nor any explanation of when and where that biometric data is stored. In the public prospectus for the EVV system in California, Sandata proposes using biometric facial data to confirm provider and client identities, yet offers no explanation of how, when, and why that data will be used. As we will see, those details matter.

The core purpose of in-home care assistants is to ensure that differently-abled persons and their families are able to live the fullest life possible in their own community, and not in an institution. Together, access to Medicaid, the protections of the Americans with Disabilities Act, and the 1999 Olmstead v. L.C. ruling establish that Americans with disabilities are entitled to receive — as a statutory civil right — independence-sustaining support from caregivers of their choosing in their own homes and communities. This is supported by a somewhat kludgy public policy of treating Medicaid recipients as employers whose employees are paid by the state’s Medicaid office or via a third-party agency.

Elaine Wilson and Lois Curtis, the plaintiffs in Olmstead v L.C., who were held in a Georgia mental health institution despite their ability to live in their community.

So why should we be worried about rules that require caregivers to provide an electronic verification of the labor provided to clients? Because without careful controls and ethical design thinking, surveillance of caregiver labor is also functionally surveillance of care recipients, especially when family members are employed as caregivers. And all evidence points to the likelihood that this surveillance can easily be repurposed as leverage to reduce access to Medicaid. There should be an extremely high burden for requiring technological surveillance as a price for receiving entitlements.

Refractive surveillance: Surveillance is always adaptable to other purposes

It’s rare that data collected for one purpose will be restricted to just that purpose. Solon Barocas and Karen Levy, scholars of the sociology of data, have shown how attempts to monitor consumer behavior within retail stores become a method for also monitoring the activities of employees. This unilaterally changes the power dynamic between employer and employee. For instance, many high-end retailers have adopted “clienteling software” that tracks consumer preferences and history and puts it in the hands of every staff member via a tablet. This software replaces the primary value offered by experienced workers at high-end retailers: an established memory of, and familiarity with, the tastes of regular clients.

In Barocas and Levy’s interpretation, this externalizes the worker’s valuable knowledge and makes each worker more substitutable. Additionally, monitoring customer behavior at a granular scale lets retailers tailor staff schedules to customer activity in order to cut staff expenses, which results in less reliable schedules and income. Thus, electronic tracking of customers changes the power of employees, even though the employees aren’t technically the people being tracked.

This is what they name refractive surveillance. Collecting information about one group can lead to increased control over another group because data is easily repurposable:

This effect of data collection is often overlooked. Debates about consumer privacy have largely missed the fact that firms’ ability to develop a better understanding of consumers also impacts workers’ day-to-day experiences, their job security, and their financial well-being. …

Collecting data about customers, then, can have non-intuitive effects on workers — by potentially reducing their bargaining power, contributing to schedule instability, and subjecting them to new types of evaluation. Our notion of refractive surveillance highlights a very practical need to build data-driven systems that acknowledge and balance the many interests at stake.

When caregivers log into a GPS-enabled EVV device, they also provide a precise and analyzable location history of the clients, and by implication a record of private and Constitutionally-protected behavior. This is one of the tricks of data analytics, especially when enhanced by machine learning techniques: data is never just about the thing you originally think it is; it is always also about what can be mechanically inferred. EVV adds an extra layer of intrusion because it unilaterally alters the labor-management relationship between PCA and client while offloading audit responsibility to the client. EVV establishes distrust as a baseline in a relationship that is fundamentally about trust.

In a computer system, trust is coded as acceptable variance from a norm, and how an EVV system surveils that variance says a lot about how we assume disabled people should live their lives.

Consider the account of EVV from Karin Willison on the disability-rights blog The Mighty. She has cerebral palsy and requires PCA support to pursue her very active life (which includes a delightful wheelchair travel blog with plenty of service dogs). While vendor-produced instructional videos about EVV portray the process as simple, they do not highlight that automated “verification” always includes a check against a baseline.

How does the system know to trust a geolocation input? It checks that input against a baseline set of common locations, such as one’s home, school or church. A client must teach the system what those locations are, and if service is rendered at an “excessive” distance from one of those locations, then the client will have to log on to a web portal or speak on the phone to explain the “exception event.” Yet if a client like Karin is constantly out and about, much of her life will be an exception event. Disability advocates have warned that this type of data surveillance is effectively house arrest. In a computer system, trust is coded as acceptable variance from a norm, and how an EVV system surveils that variance says a lot about how we assume disabled people should live their lives.
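To make the mechanics concrete, here is a minimal sketch of that baseline check, assuming a simple distance threshold against pre-registered locations. The coordinates, threshold, and function names are hypothetical, not drawn from Sandata’s documentation.

```python
import math

# "Common locations" the client has taught the system (lat, lon) -- hypothetical values.
BASELINE_LOCATIONS = [(40.0150, -83.0300),  # home
                      (40.0021, -83.0156)]  # day program

MAX_DISTANCE_KM = 1.0  # hypothetical cutoff for an "acceptable" check-in


def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))


def is_exception_event(checkin_location):
    """A check-in far from every baseline location becomes an "exception event"
    that the client must later justify by phone or web portal."""
    return all(distance_km(checkin_location, base) > MAX_DISTANCE_KM
               for base in BASELINE_LOCATIONS)


# A shift change at the mall food court: flagged, even though care was provided.
print(is_exception_event((39.9612, -82.9988)))  # True -> the client must explain
```

The design choice doing the work here is the baseline-plus-threshold itself: whatever the cutoff is set to, any life lived mostly away from the registered locations becomes, by construction, a stream of exceptions.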

Screenshot of Sandata’s EVV training explaining geographical exception events, courtesy of The Mighty

In order to “verify” service to ensure her companions are not committing fraud, a client must render her movements, behaviors, and relationships as machine-readable signals for a third-party private contractor that will demand she personally justify her activities to them. It therefore becomes clear how coercive a simple “verification” process can be: the state now attaches her statutory civil right to receive medical support as close to her community as possible to mandatory automated surveillance. The client must account for her whereabouts to a corporate call center or website or her companion will not be paid, and she will possibly lose their service and perhaps therefore her independence and health.

How eager are you to teach a corporate system where you spend your days, and then explain why you did something different yesterday? Of course, many of us do so through voluntary services such as Google, but those services are not obligatory passage points for actualizing our civil rights. What if that web form only showed two slots to enter service locations — as the system designed by Tempus for Massachusetts Medicaid does — implying you only go two places regularly in your life? Would you choose not to go to the art museum or the temple because you didn’t want to tell a corporate entity about it? Could you live as an exception event?

In order to “verify” service to ensure her companions are not committing fraud, a client must render her movements, behaviors and relationships as machine-readable signals for a third-party private contractor that will demand she personally justify her activities to them.

Computer systems for sorting the deserving poor from the undeserving poor

Like much surveillance of citizens who make use of their right to public services, this is a punitive model that demands constant justificatory labor from the citizen in exchange for the right to live freely in their community. Virginia Eubanks’s recently released book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s Press, 2018) is an essential guide for anyone seeking to understand how information systems offer leverage for controlling and punishing people who require government assistance (see also her talk at Data & Society). Eubanks offers three core case studies: 1) how the state of Indiana pared down welfare rolls by making their automated systems extremely hard to use and rife with errors; 2) how a county child protective services program makes automated suggestions about which children are likely to require additional interventions; and 3) how homeless services in Los Angeles require people seeking assistance to repeatedly take automated surveys and behavioral histories that often make it less likely that they will receive support.

A common thread through these stories is that data collected by one agency for the specific purpose of helping a person receive benefits from the state will place them at increased lifetime risk of punitive interventions from other agencies. For example, Allegheny County’s Children, Youth, and Families (CYF) agency uses a machine learning algorithm to help human caseworkers triage the most important calls that are made to their service center. When a citizen or a mandatory reporter calls CYF, the algorithm checks the family’s history (possibly going back decades and across several generations) and cross-checks it with the information that came in to assign a risk score indicating how likely it is that the child will be removed from the home in the future. If the risk score goes above a certain threshold, then a caseworker is much more likely to make an in-person visit. There is a clear trend line toward receiving increased (often unwanted and unnecessary) contact with other state agencies simply by virtue of needing any supportive services, even on a temporary or voluntary basis. Even voluntary clients of other CYF services often organize their lives around surveillance management. Such records are now held and analyzed forever, creating a robust account of the travails of family life in poverty that would never be collected about the middle and upper classes.
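Setting aside the modeling details (which Eubanks covers at length), the triage step described above reduces to scoring a call against family history and applying a cutoff. A schematic sketch, with hypothetical field names, model, and threshold:

```python
def triage_call(family_history, incoming_report, risk_model, threshold=0.8):
    """Score-then-threshold triage, schematically. The features, risk_model,
    and threshold are stand-ins, not Allegheny County's actual system."""
    features = {**family_history, **incoming_report}
    risk_score = risk_model(features)  # estimated likelihood of future removal, 0..1
    # Above the cutoff, a caseworker is much more likely to make an in-person visit.
    return "prioritize_in_person_visit" if risk_score >= threshold else "standard_review"


# Example with a dummy model: a long agency history pushes the score over the line.
print(triage_call({"prior_cyf_contacts": 4}, {"report_source": "neighbor_call"},
                  risk_model=lambda f: 0.2 * f.get("prior_cyf_contacts", 0)))
```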

There is a clear trend line toward receiving increased (often unwanted and unnecessary) contact with other state agencies simply by virtue of needing any supportive services, even temporarily and on a voluntary basis.

Similarly, on Los Angeles’ Skid Row, homeless clients are required to provide detailed histories and submit to automated surveys repeatedly to determine their eligibility for housing. Many people have severe histories of chronic problems and receive long-term management plans and supportive housing, while others are only acutely (temporarily) homeless and need a month or two of support to get back on their feet. But a large portion of the homeless population is placed in a purgatory of never receiving appropriate support because their scores are neither adequately chronic nor acute.

What they do receive is increased attention from other agencies, especially law enforcement. Because police forces have become de facto social workers in locations with high nuisance crime rates, law enforcement agencies are granted access to case histories with detailed information about people’s mental health status and other medical histories. For example, Eubanks relays a story of a man hearing for the first time that he has an official history of mental illness in the process of being sentenced for a nuisance crime. These detailed and flawed histories seem to be accessible to interrogation by nearly everyone except the subjects of the history. And they are functionally permanent.

Thus, people enrolled in EVV programs need to proactively seek assurances that their data will not be repurposed by other state agencies for increased surveillance or law enforcement intervention. Although Medicaid status is protected health data, other forms of information that can be used to infer Medicaid status are not. Is client EVV login data, including geolocation data, a protected category or is it governed by much looser laws around surveillance of employees? Will law enforcement be able to know sans warrant when and where a client confirmed service was provided? Will child welfare services have access to geolocation data for minors who utilize PCAs, and how will that data be used? Are biometric markers used by the EVV device strictly protected? If the state requires that biometric data is provided in order to verify service, does the contract with the third-party provider prohibit repurposing of that data?

Design matters

What would these EVV systems look like if we assumed that there are no undeserving care recipients, rather than starting from a position of managing distrust? Both Eubanks and the political theorist Yascha Mounk (in The Age of Responsibility: Luck, Choice, and the Welfare State, Harvard University Press, 2017) argue that technological systems that surveil the poor are typically constructed to provide evidence that their suffering is their own fault. Data becomes leverage for denying benefits to individuals and populations, yet the data is provided through the compliance labor of those very individuals.

Although leadership on this issue ultimately belongs to disability rights advocates and caregiver labor organizations (who are already earning some victories), data and technology scholars can add to the discussion with insight into how these systems are designed and implemented. There are two major warning signs about the design process already visible in these systems:

  1. These systems have been designed and rolled out in such a short time frame that it is impossible for manufacturers to have satisfied the requirements of user-centered design or to have adequately consulted with the user populations using established participatory design practices.
  2. Public communications about the technologies are shockingly vague about technical details, resulting in situations where clients are unsure whether EVV devices have remotely operable cameras or microphones. For example, these documents from Ohio Medicaid and Sandata, and the Ohio law governing EVV, contain neither the term “privacy” nor “biometric.”

This can be read as a signal that the organizations have not adequately accounted for what the client population wants or needs, nor respected clients’ rights to articulate those needs and have them accounted for in design and policy. The short timeline for implementing these technologies is not justified and will result in devices that are miserable to use and unnecessarily invasive. At the very least, implementation of EVV policies should be delayed until privacy and usability concerns can be addressed.

I suggest that people pushing back against EVV in its current form make demands along these lines from their Medicaid agencies and device manufacturers:

  1. The federal law does not require logging precise geolocation data, only “location.” That may be satisfied through many different design choices, such as a simple binary radio button offering “Home/Not Home” as options (see the sketch after this list). Demand design practices that collect minimally invasive data.
  2. “Exception events” triggered by EVV systems must be rare for all users. Default settings that impose additional reporting burdens on even a small number of clients and care providers put an unacceptable cost on the right to community-based care.
  3. No biometric or geolocation data, nor data that can be used to infer Constitutionally-protected activities, should be collected unless it serves a specific, discrete purpose that cannot be accomplished otherwise and adds a clear value to the client and caregiver.
  4. If geolocation data is recorded, it must be disposed of after a reasonable period for auditing, such as 180 days.
  5. Even if geolocation data is logged in the EVV system only when a shift change is triggered, that still creates machine-readable patterns from which behaviors can be inferred. Additionally, the device itself may record GPS regularly in its RAM even if it is not reported. Without technical details this is challenging to determine, so manufacturers and service providers must provide those details and use open source software that allows verification of their claims.
  6. Data collected via EVV devices, particularly geolocation and biometric varieties, must not be repurposed for use by commercial third-parties, including subsidiaries and commercial partners.
  7. Explain clearly and concretely how and when biometric data is collected and analyzed and what limits are placed on its repurposing. Clients and care-providers must be able to request deletion of biometric data from the system.
  8. Data collected by EVV should not be available to law enforcement without warrants, and there should be no pipeline of EVV data to other governmental agencies without anonymization and aggregation.
  9. Medicaid agencies and product manufacturers must explain in clear, concrete terms when and how EVV devices collect visual and audio data. If audio or visual data collection is disabled (disabled is different from not used), explain whether that is done via software or hardware alterations, and explain whether those alterations could ever be reversed.
  10. It is not appropriate for agencies to provide “alternative” reporting methods to deeply flawed default or preferred methods. The default or preferred method should be privacy-preserving and appropriately designed.
  11. Device designers should see widespread fear or misunderstanding about the capabilities of the devices as a harm, even if those fears or misunderstandings are incorrect.
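As a concrete illustration of demands 1 and 4, here is a minimal sketch of a privacy-preserving check-in record and retention rule, assuming a binary location field and a 180-day audit window. All names are illustrative, not a proposal from any vendor or agency.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

AUDIT_RETENTION = timedelta(days=180)  # demand 4: dispose of data after the audit window


@dataclass
class MinimalCheckIn:
    """Demand 1: record 'location' as a binary flag, never coordinates (illustrative)."""
    provider_id: str
    recipient_id: str
    timestamp: datetime
    at_home: bool  # "Home" / "Not Home" radio button; nothing more is stored


def purge_expired(records, now=None):
    """Drop every check-in older than the retention window."""
    now = now or datetime.now()
    return [r for r in records if now - r.timestamp <= AUDIT_RETENTION]
```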

Additionally, consider supporting disability rights activists and care labor activists on this issue. The Facebook group Citizens Against EVV & Geo Tracking of the Disabled! is running a social media and legislative outreach campaign. Some SEIU and AFSCME locals (unions that cover many caregivers) are opposing EVV implementation in California, and an anti-EVV campaign in Massachusetts recently won an implementation delay.

Jake Metcalf is a Researcher in the Law & Ethics in Computational Science initiative at Data & Society Research Institute.

This publication was partially supported by National Science Foundation Grant #1704425. The views expressed do not represent those of the National Science Foundation.

We welcome your feedback. To connect with Data & Society for updates on this issue, contact info at datasociety dot net.
