Despair Privacy:

Casey McCarty
13 min read · Feb 7, 2022


How info about your most intimate mental health vulnerabilities quietly circulates to advertisers, Big Pharma, and predatory rehabs

Over nearly two years of pandemic conditions, with an unprecedented slew of financial insecurity, isolation, illness, and death layered on top of an already deadly opioid crisis, millions of new users each month turned to one of 20 different mental health apps offering tech-driven spins on traditional therapy or wellness management. Thousands sought help in moments of crisis through web- and text-based crisis chat lines, while others searched online for substance abuse resources.

Americans are experiencing a lot of despair, and our digital activity reflects it, creating a treasure trove of lucrative data: data mined, sold, and traded to advertisers for the express purpose of capitalizing on our vulnerable mental states to sell more drugs, products, and services.

We are increasingly aware of just how much our browsers and social media apps know about our habits and proclivities. Big Tech knows what we buy, from our takeout to mundane household essentials to the spicy products we feel too sheepish to buy in person. They even know what we're thinking about buying, based on what we research. Through our smartphones, they know how we physically move around our communities.

We often accept that some intrusion is the cost of the “free” resources we use online, but is there a moral imperative to treat our mental health data differently than the stuff we buy?

Regular news of data breaches and criticism of companies’ lack of privacy transparency constantly renew public debate, but while politicians claim privacy is a bipartisan concern, we don’t appear to be any closer to the type of comprehensive protections that have covered our European counterparts for years.

Privacy of our most sensitive health-related web use isn’t just an abstract protection of our anonymity–it can quite literally be a life or death situation.


HIPAA probably doesn't apply to that app

In the area of data privacy, consumers often presume health-related information is already secured and regulated, largely due to misunderstandings about HIPAA and assumptions that the umbrella of protections covers a lot more than it does.

HIPAA (the Health Insurance Portability and Accountability Act of 1996) sets information privacy standards for what it calls "covered entities": licensed medical providers, insurance companies, and the specific third parties that handle approved outsourced tasks for them, such as billing and payment transactions (find the full text here). The panoply of healthcare-adjacent tech startups offering services via apps or websites are typically not considered "covered entities." And while apps that connect you with a licensed therapist or doctor do have HIPAA coverage at the point of the conversation between you and the clinician, the information you provide to the app or website itself, such as intake questionnaires, surveys, or posts on platform community forums, is not covered.

Beyond communication directly with covered entities, companies have no particular responsibilities regarding your data rights other than what is established in those verbose, pages-long Terms of Service that you probably didn’t read.

Tech companies know you don’t (or can’t) read the Terms

Deloitte reported that 91% of consumers admit they don't bother to read Terms at all (97% of users aged 18–34), and a study found that when users actually viewed Terms of Use or Privacy Policy texts that would have taken average readers 44–49 minutes to read in full, they spent an average of only two minutes on them (Obar & Oeldorf-Hirsch, 2018). Being a stalwart reader of Terms would cost you about 250 hours of your life annually (more than 10 days) to fully review all the Terms the average American is asked to accept in a year.

A predictable reaction to complaints about tech companies' dubious data-sharing and data-monetization practices is a smug scolding: caveat emptor, let the buyer beware, as the saying goes. If you choose not to read, or cannot understand, the implications of the legal documents available to you and use the service anyway, then you don't deserve to complain about what happens to you. Courts certainly seem to agree: ignorance is no excuse, and what consumer privacy protections we do have rely on this "consent" model. As long as you had to tap "I agree," almost anything goes.

This one-dimensional argument, of course, neither captures the power differential between the consumer and a tech giant's legal team nor addresses the lack of available, accessible market alternatives offering more ethical policies.

The argument also tacitly endorses exploiting consumers who lack the privilege to afford the level of education required to reasonably comprehend these policies.

Researchers found that more than 99% of the Terms of 500 top companies were written at a level requiring 14 or more years of education in English; 20% were written at an "academic journal" level or higher (Becher, 2019). More than half of the American population, however, reads at a 7th–8th grade level or below. The American Medical Association and the National Institutes of Health recommend that patient education material be written at a sixth-grade reading level.

Non-compliance with digital accessibility standards for people with disabilities, along with the barriers facing non-native English speakers, dramatically exacerbates this inaccessibility.

Simply put: the overwhelming majority of Terms are functionally incomprehensible to most users, which is very convenient if you want to stuff them with subtle disclaimers hinting at how you can profit from user data.

Informed consent in distress

Now imagine that when that long, complex legal document prompts you to tap or click "I Agree" in order to use the app or service, you are under great emotional distress, such as during a mental health crisis or while having thoughts of suicide or self-harm.

Those were exactly the conditions in question when the notable text-based crisis service provider Crisis Text Line recently divulged that it shares user data with its for-profit spinoff Loris.ai.

While Crisis Text Line (CTL) is operated as a non-profit organization, the for-profit Loris.ai markets its access to CTL data (what CTL calls "the largest mental health data set in the world") to develop its artificial intelligence platform for automated customer service.

Nancy Lublin was both CEO of Crisis Text Line and founder and CEO of Loris.ai when the latter received a $2 million venture seed round in 2018. Lublin stepped down as CEO of CTL in June 2020, according to the organization's post "Update as of January 2022," although CTL notes it "currently owns shares in the company."

In response to the privacy concerns, POLITICO reported the message it received from CTL's general counsel: "Crisis Text Line obtains informed consent from each of its texters…The organization's data sharing practices are clearly stated in the Terms of Service & Privacy Policy to which all texters consent in order to be paired with a volunteer crisis counselor."

The practice may be perfectly legal, but the claim that individuals in crisis, often children or teens, are truly providing "informed consent" in any practical sense of the term to the 50 paragraphs of Crisis Text Line's linked Terms of Service and Privacy Policy (which would take the average non-distressed reader over 20 minutes to read) before being connected to emergency crisis support is tenuous at best. The waters are muddied further when the person in crisis is referred to the service by their state mental health department, school, church, company, or another non-profit under any of hundreds of referral partnerships, thereby conferring an additional layer of trust.

How safe is your anonymized data?

Crisis Text Line, like other companies that share user data, assures users that the data has been anonymized before being sent to third parties. But while international laws such as the European Union's General Data Protection Regulation (GDPR, 2018) set minimum standards for data anonymization, no such federal standards exist in the US.

Anonymization by a private company not covered by HIPAA could mean almost anything. Some anonymization techniques are more robust and harder to “crack” than others.

Data anonymization is the process of stripping personally identifiable information (PII), such as names, addresses, email addresses, and other identity signifiers, from datasets.

Some techniques include:

  • data masking (altering, encrypting, shuffling or substituting characters)
  • pseudonymization (swapping real personal info with fake placeholders)
  • generalization (removing specifics, like deleting the house number but keeping the street name or zip code)
  • data permutation (rearranging the data set so that attribute variables don’t match up to the original record, like swapping a column of data)
  • data perturbation (adding “noise” to the data by changing it in some predictable way, like rounding numbers or multiplying them by a base number)
  • synthesizing data (using an algorithm to create completely new made-up data sets that retain the same patterns)

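As a rough illustration of what a couple of these techniques look like in practice, here is a minimal Python sketch of pseudonymization and generalization applied to a single made-up record. The record, field names, and salted-hash approach are assumptions chosen for illustration; real implementations vary widely.

```python
import hashlib
import secrets

# Toy record with direct identifiers and quasi-identifiers.
# All values are fabricated for illustration.
record = {
    "name": "Jane Doe",
    "email": "jane.doe@example.com",
    "street_address": "1234 Elm St",
    "zip_code": "43215",
    "date_of_birth": "1989-04-12",
    "reported_suicidal_ideation": True,
}

# Pseudonymization: replace direct identifiers with an opaque token.
# A salted hash keeps records for the same person linkable within the
# dataset without exposing the original email or name.
salt = secrets.token_hex(16)
pseudonym = hashlib.sha256((salt + record["email"]).encode()).hexdigest()[:12]

# Generalization: keep only coarse versions of quasi-identifiers,
# e.g. birth year instead of full date, 3-digit ZIP prefix instead of 5.
anonymized = {
    "pseudonym": pseudonym,
    "zip3": record["zip_code"][:3],
    "birth_year": record["date_of_birth"][:4],
    "reported_suicidal_ideation": record["reported_suicidal_ideation"],
}

print(anonymized)
```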
Researchers found the average person in the United States could be correctly identified from an "anonymized" database 81% of the time (Rocher, Hendrickx, & de Montjoye, 2019). You can even use Imperial College London's machine learning-powered tool to see how easy your own identity would be to locate with just your gender, date of birth, and zip code.

As the MIT Technology Review notes, the New York Times used data re-identification techniques to locate nine years of Donald Trump's tax returns, but the same techniques could easily be used for various forms of fraud or coercion.

Medical data has already been re-identified. Using anonymized patient-level health data that the State of Washington sold for $50 at the time, plus publicly available information from newspaper articles, researchers were able to correctly match patients to their health data 43% of the time (Sweeney, 2013).
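An attack like Sweeney's is, at its core, a database join on shared quasi-identifiers. The sketch below shows the idea on fabricated data; the column names and records are assumptions for illustration, not the actual Washington State dataset.

```python
# Toy illustration of a linkage attack: join an "anonymized" health dataset
# with public records on shared quasi-identifiers (ZIP, birth date, sex).
# All data here is fabricated.

anonymized_health_rows = [
    {"zip": "43215", "dob": "1989-04-12", "sex": "F", "diagnosis": "opioid use disorder"},
    {"zip": "98101", "dob": "1975-09-30", "sex": "M", "diagnosis": "major depression"},
]

public_records = [
    {"name": "Jane Doe", "zip": "43215", "dob": "1989-04-12", "sex": "F"},
    {"name": "John Roe", "zip": "98101", "dob": "1975-09-30", "sex": "M"},
]

# If a (zip, dob, sex) combination is unique in both datasets,
# the "anonymous" diagnosis is re-attached to a name.
for row in anonymized_health_rows:
    matches = [
        p for p in public_records
        if (p["zip"], p["dob"], p["sex"]) == (row["zip"], row["dob"], row["sex"])
    ]
    if len(matches) == 1:
        print(f'{matches[0]["name"]} -> {row["diagnosis"]}')
```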

Researchers were also able to de-anonymize records by matching genome sequences to genetic information people voluntarily shared on genealogy sites, which helped match anonymized records even to relatives who hadn't posted their own genetic information online (Gymrek, McGuire, Golan, & Halperin, 2013). That was almost a decade ago. Ancestry.com alone now boasts 30 million records.

Forensic researchers were able to sift through genomic data to track down suspects in a variety of cold cases, including the Golden State Killer, but these lucrative data pools are also attracting hackers, who have breached the DNA databases GEDMatch and MyHeritage (Aldhous, 2020), along with other repositories of sequencing data, like fertility clinics (Vallance, 2021).

The combination of the vast amounts of information publicly available about us, some we've voluntarily disclosed, some published beyond our control, increases the likelihood that poorly anonymized data can be traced back to us. And in cases like Crisis Text Line's, being outed as linked to services associated with addiction or suicide is rife with opportunities for abuse or exploitation: think employers or prospective employers, custody battles, life insurance coverage denials, or plain blackmail.

Telling Facebook every time you go to your therapist

Consumer Reports, along with the privacy research company AppCensus, used an Android phone programmed to monitor app transmissions for seven mental health-related apps, including the popular therapy apps Talkspace and BetterHelp, and found the apps sent robust tracking info to Facebook, Google, and others.

Researchers found the apps' privacy policies were unclear about which information was being shared, and that policies typically disclosed the sharing of data "for research" vaguely enough to cover scientific or academic research as well as marketing and "innovation research," with opt-out instructions particularly difficult to find (Germain, 2021).

These findings are similar to what Jezebel reporters observed when monitoring the data the BetterHelp app sent to dozens of third parties, including Google, Facebook, Snapchat, and the targeted marketing analytics firm MixPanel, which compiles and then sells data insights to more than 35,000 companies.

MixPanel received detail-level information from BetterHelp's intake survey: age, gender, sexual orientation, past therapy use, whether the user reported being spiritual or religious, general self-reported financial status, and history of suicidal ideation. Talkspace also sent metadata to MixPanel.

By downloading their user data from Facebook, reporters confirmed Facebook had retained metadata received from BetterHelp. Facebook received notices every time the app was opened, as well as metadata attached to every message with a therapist, although the data did not include the content of the messages (Osberg & Mehrotra, 2020).
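To make the "metadata, not message content" distinction concrete, an analytics event of this kind generally looks something like the hypothetical sketch below. Every field name and value here is an assumption for illustration, not the actual payload observed from BetterHelp or Talkspace.

```python
import json

# Hypothetical analytics event illustrating what app "metadata" can look like:
# it records that something happened, and when, but not what was said.
# Field names are invented for illustration.
event = {
    "event_name": "message_sent",           # what the user did
    "timestamp": "2022-02-07T14:32:05Z",    # when it happened
    "device_id": "a1b2c3d4-e5f6-7890",      # persistent device identifier
    "app": "example-therapy-app",
    "properties": {
        "recipient_role": "therapist",      # who the message went to
        "message_length_chars": 142,        # how long it was
        # Note what is absent: the text of the message itself.
    },
}

print(json.dumps(event, indent=2))
```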

Please buy more drugs

The US is one of only two countries globally (joined by New Zealand) that allow direct-to-consumer pharmaceutical marketing. Two of the three types of FDA-approved marketing don't require explaining any risks, making those formats perfect for online targeted marketing. Searching for health information or engaging with physical or mental health apps gives advertisers profitable opportunities to get drugs and products in front of the users most likely to be susceptible to their messaging.

With over $6.5 billion in ad spending in 2020 alone (and digital ad spending increasing by 43%), the resulting pharma marketing industry is big business in itself. The top five firms spent an additional $152 million on social media-based ads (Bulik, 2021).

Pharma marketing firms routinely boast of their success in what the industry calls "script lift," the number of new prescriptions sold. Patents have even been issued on the technology supporting pharma marketing demand-side platforms, which help advertisers automate ad buying and bid on particularly lucrative web users, such as those who have been identified as suffering from a condition, in order to improve script lift.
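To illustrate the general mechanism (and not any particular vendor's system), a demand-side platform's bidding logic boils down to something like the sketch below: bid more when a user profile carries an audience segment the advertiser has flagged as high-value. The segment names, prices, and function are entirely hypothetical.

```python
# Hypothetical sketch of demand-side-platform bid logic: raise the bid when a
# user profile carries an audience segment flagged as high-value (for example,
# one inferred from health-related browsing). Segment names and prices are
# invented for illustration; this is not any real platform's code.

BASE_BID_USD = 0.50
SEGMENT_MULTIPLIERS = {
    "inferred_depression": 6.0,
    "inferred_substance_use": 8.0,
    "generic_wellness": 1.5,
}

def bid_for_user(user_segments):
    """Return a bid price based on the most valuable segment attached to the user."""
    multiplier = max(
        (SEGMENT_MULTIPLIERS.get(s, 1.0) for s in user_segments),
        default=1.0,
    )
    return round(BASE_BID_USD * multiplier, 2)

print(bid_for_user({"generic_wellness"}))                            # 0.75
print(bid_for_user({"inferred_substance_use", "generic_wellness"}))  # 4.0
```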

The hawking of prescription medication may seem shocking or ethically dubious to most of the world, which doesn't experience this type of pharmaceutical marketing, but at least those drugs are FDA-approved, unlike the other sorts of unregulated products, supplements, and services that are also advertised to users after they interact with mental health apps or search for help.

What harm can a little targeted marketing do?

One particularly dark path down which targeted marketing can send vulnerable users searching for substance abuse help leads to the industry of profit-driven drug and alcohol recovery centers.

With states under pressure from consumers, families, and insurance companies to investigate facilities over widespread claims of fraud, corruption, and even abuse, task forces often find that recovery business models fly just under regulatory jurisdiction, with half of states offering no regulatory oversight of what is treated as a cottage industry distinct from medical facilities. Researchers found that fewer than a third of the facilities investigated offered any evidence-based care at all (Mann, 2021).

Rehab industry marketing companies deploy an arsenal of tactics to funnel addiction help-seeking web users toward their partner facilities: targeted ad-based marketing, SEO and AdWords manipulation, and Google scams like hijacking business listings on Google Maps and replacing local treatment center contact info with "national hotlines," which are call centers staffed by sales agents who use aggressive pitches to direct prospective clients to whichever facility pays the best commission.

Some call centers even engage in "patient brokering," auctioning patients off to facilities based on their prospective value once searches reveal insurance terms (like "alcohol treatment center Aetna") or less-profitable terms like "Medicaid." Screening calls drill down further into how much customers could self-pay or how much their private insurance would reimburse for add-ons like repeated drug tests, expensive chemical analyses of blood or urine, or genetic testing. In some of the worst cases discovered, criminal "patient brokers" incentivized addicts with cash or even drugs (Ferguson, 2017).

Once clients arrive at the centers, they often find not only that the facilities lack the amenities described in the sales call, but that the services may not involve medical care at all, only group counseling, recreational activities, religious instruction, or program meetings such as Alcoholics Anonymous.

Even uninsured or impoverished addicts searching for help can be scooped up into the industry–often under arrangements that look a lot like indentured servitude to “pay” for their treatment.

Reporters at Reveal, from The Center for Investigative Reporting, identified 300 facilities in 44 states that required patients to work without pay as part of the "treatment," including being subcontracted out to for-profit entities such as retail businesses, farms, refineries, and factories. Reveal found numerous instances of death and serious injury where participant-workers had been poorly trained and sent to perform dangerous work (Walter, 2020).

Federal policy can clear up some of this mess

California and Virginia have implemented their own consumer data protection laws, offering their residents some protection, and more states are considering similar laws, showing some momentum around privacy policy.

The EU's robust privacy law framework, the General Data Protection Regulation (GDPR), is a model for how federal regulation could work not only to protect consumers but also to avoid a compliance nightmare for businesses facing a complex patchwork of state-by-state laws.

The GDPR requires companies to get meaningful consent for data collection, provides standards for how personal data is stored and transferred, defines individuals' rights to access their records and request deletion, and establishes data security standards.

In the US, regulatory fines are usually so small that large companies consider them a routine "cost of doing business," and we rely on civil lawsuits to serve as a deterrent against abusing consumers. Since individuals can rarely afford to go toe to toe with corporate legal teams, class action suits become the main mechanism of redress; while these are great at making money for lawyers, the trickle-down to the people actually harmed can be comically minuscule.

Under the GDPR, regulatory bodies can levy significant fines: up to 4% of a company's global annual revenue or €20 million (about $22.5 million), whichever is greater. And they mean it. In 2021 alone, Amazon was fined €746 million ($840 million), WhatsApp Ireland was fined €225 million ($253 million) for privacy and transparency violations, and Google and Facebook were fined €150 million ($169 million) and €60 million ($67 million), respectively, for non-compliance.
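For a sense of how that ceiling scales, the rule is just a maximum of two quantities; the revenue figures in the sketch below are arbitrary examples, not any specific company's.

```python
# The GDPR fine ceiling described above: the greater of 4% of global annual
# revenue or €20 million. Revenue figures are arbitrary examples.

def gdpr_fine_ceiling_eur(global_annual_revenue_eur):
    return max(0.04 * global_annual_revenue_eur, 20_000_000)

print(gdpr_fine_ceiling_eur(300_000_000))     # smaller firm: the €20M floor applies
print(gdpr_fine_ceiling_eur(50_000_000_000))  # large firm: 4% -> €2,000,000,000
```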

And spoiler alert: having to respect European consumer privacy did not make all those tech giants pack up and go home.

FOUR things you can do right now

  1. Take control of your privacy — if you currently use or previously used mental health apps or services, and don’t want your information shared, pull up their privacy policies and search (Ctrl+F in Chrome) for “opt out” or “delete” language for instructions. For example, Crisis Text Line says “You may request that we delete your Personally Identifiable Information, such as your full name, physical address, zip code, phone number, and texts/message transcripts by messaging the word LOOFAH to us….” If you can’t find this information, email the company for instructions and their opt-out/records deletion policies. You can also download the data Facebook maintains on you.
  2. Consider using privacy browsers for sensitive searches — simply putting browsers in “incognito” mode does not stop tracking. If you would like to research sensitive health or mental health-related resources without web tracking, consider downloading tracker-blocking browsers for these searches. You can also consider various browser plug-ins to reduce your trackability.
  3. Consider supporting privacy rights organizations, such as the Electronic Frontier Foundation; check out their privacy resources here.
  4. Find out if your state is considering privacy protection laws and contact your elected officials to support data privacy rights. Use the International Association of Privacy Professionals’ US State Privacy Legislation Tracker tool here. Find your federal, state, and local elected officials’ contact info here.
stock image of sign that says “privacy please”
Photo by Jason Dent on Unsplash


Casey McCarty

Data Privacy & Protection Manager, criminal justice policy advocate, data nerd, Crisis & Risk Manager.