With connected health technology, data is a safety risk

Despite tensions between data collection and data rights, it does not have to be a zero-sum game

Dena B. Mendelsohn
HumanFirst
5 min read · Jun 30, 2021


It may seem counterintuitive that a company whose mission is centered on the success of connected sensors wants you to know that products that collect personal information create safety risks. For us at HumanFirst, awareness about the risks these products produce opens the door to smart choices — there are options in the marketplace and some organizations have better security and data rights practices than others.

The problem: connected sensor technologies, including wearables and other biosensors, are revolutionizing how pharmaceutical trials are run, how clinical care is provided, and how individuals manage their health. At the same time, they introduce new risks: because the sensitive information they collect can be misused and cause lasting harm, their great power must be coupled with great responsibility.

The good news is that — despite tensions between data collection and data rights — this does not have to be a zero-sum game. There are wearables that are relatively secure with strong data rights.

The bad news is that the lack of comprehensive privacy laws creates uncertainty, and that uncertainty feeds mistrust that can slow technology adoption. Confusion about privacy and security requirements under HIPAA can temper enthusiasm to try new technology with patients.

Not all products will proactively ensure the data rights of users — HumanFirst will help our customers find those that do.

The fewer hurdles decision-makers have to jump, the better it is for everyone. Our “nutrition label” for health-tech security and data rights practices spotlights key details from organizations’ privacy policies and security programs. Soon, it will note which meet HumanFirst’s minimum expectations and suggest action steps for customers interested in products that fall short.

A Data Rights Primer for Wearables in the U.S.

Last year, Jennifer Goldsack and I coauthored Safety First: The nuances of health technology that can hurt your patients; the content still holds true and may help you understand why the way your data — or your patient’s data — is handled can create very real harm.

The current regulatory framework is complicated and incomplete

There is no comprehensive federal privacy law. The most prominent health privacy protection, the Health Insurance Portability and Accountability Act (HIPAA), applies to only a narrow sub-category of health information, not everything you think of as your “health information.” HIPAA was never intended to be a comprehensive health privacy law, and it isn’t one. The law applies only to data created or held by healthcare providers, health insurers and plans, and healthcare “clearinghouses” — a tricky category that, whatever its exact boundaries, does not include connected sensor technologies. The public outcry every time people learn how their health information is actually handled shows how much confusion remains around what HIPAA does and does not do.

On the flipside, healthcare providers, whose workflows have for years operated within the confines of HIPAA, may find themselves stymied by confusion over whether sharing protected health information with patients’ sensors requires a formalized business associate agreement or creates legal liabilities for them. The answer, as explained by two leading health privacy experts in this published article: it depends.

Health privacy laws at the state level are similarly limited, and their jurisdiction stops at state borders. Newer state privacy laws have not yet solved the problem: although a new state-level privacy law — such as the Virginia Consumer Data Protection Act or the California Privacy Rights Act — could elevate protections for sensitive data, and wearable makers could opt to extend those protections to all of their customers rather than only to residents of the legislating state, that has not happened yet.

For individuals, these legal protections are small consolation: they barely create a foundation of data rights.

The most comprehensive privacy law to date may be a connected product’s own privacy policy

A sign of the lag in true privacy legislation is that perhaps the closest thing we have to data protection comes from a 100-year-old law called the Federal Trade Commission (FTC) Act. Section 5 of the Act effectively turns privacy policies into privacy laws tailored to the companies that created them.

It is important to read the privacy policy associated with each connected sensor, both the organization’s website privacy policy and any separate policy for the product itself, because those are the rules that govern how data will be collected, used, and stored.

Unfortunately, privacy policies are not a rock-solid foundation for data rights. Companies that rely on loopholes in their policies mislead consumers and erode public trust. Privacy advocates at the Electronic Frontier Foundation have raised the alarm about such behavior by Google and the ad-tech industry for some time now. In addition, a strong privacy policy today does not ensure a strong policy down the line. Companies can change their privacy policy at any time (though not retroactively), or even transfer your personal information as part of a sale or shift in assets. If you choose a product because of a company’s commitment to privacy, you may feel differently when a big tech conglomerate acquires it. The lack of strong legal privacy protections shifts the burden to consumers, who must maintain vigilance over a staggering array of privacy policies.

Finally, even the most well-intentioned company may inadvertently introduce processes into its product that go against its own privacy policy. Companies use technology built by other companies all the time — building every piece of software from scratch would be inefficient and impractical. Ensuring that each third-party component is not only aligned with company values but also transparently disclosed is complicated and time-consuming.

Data can be collected and used safely

Emerging technologies offer many advantages for researchers, clinicians, and individuals. We recommend that evaluation of each product’s security and data practices be included as part of the selection process when choosing any connected technology. To do that, ask the same questions our experts ask and look for evidence-based responses:

[Screen capture: the HumanFirst security and data practices label, showing links to the relevant website and product privacy policies, with check marks indicating that the privacy policy provides key details.]
  • What data is collected and for what purpose?
  • Who will receive the collected data?
  • Does the organization share the names of third parties that will have access to the data?
  • Is there a reasonable data retention plan, through to complete deletion?
  • Is there a system for individuals to control their own data?

You should be able to find the answers in each product’s privacy policy. A missing privacy policy is itself a red flag that the company is not planning for the safety of your data.
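For teams that evaluate many products at once, the questions above can be captured as a simple checklist in code. This is a hypothetical sketch — the field names and pass/fail logic are illustrative assumptions, not HumanFirst’s actual label format:

```python
# Hypothetical checklist sketch; keys and structure are illustrative
# assumptions, not HumanFirst's actual label implementation.
CHECKLIST = [
    ("data_collected", "What data is collected and for what purpose?"),
    ("recipients", "Who will receive the collected data?"),
    ("third_parties_named", "Are third parties with data access named?"),
    ("retention_plan", "Is there a retention plan through to complete deletion?"),
    ("user_control", "Can individuals control their own data?"),
]

def evaluate_policy(policy: dict) -> dict:
    """Return a pass/fail mark for each checklist item.

    `policy` maps checklist keys to True (evidence found in the
    privacy policy) or False/absent (no evidence found).
    """
    if not policy:
        # No privacy policy at all is a red flag across the board.
        return {key: False for key, _ in CHECKLIST}
    return {key: bool(policy.get(key)) for key, _ in CHECKLIST}

# Example: a policy that names its data and recipients but gives
# no evidence of a retention plan or user controls.
marks = evaluate_policy({"data_collected": True, "recipients": True,
                         "third_parties_named": True})
print(marks["retention_plan"])  # False
```

The point of the sketch is that each answer should be backed by evidence in the written policy; anything the policy does not address counts against the product rather than for it.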

The time to ask these questions is before buying a product or clicking through the acceptance screen when you register your account.

The status quo requires individuals — as well as decision-makers responsible for large purchases, such as for clinical trials or healthcare — to be cautious when choosing a product. Connected sensor technologies always come with security and data rights vulnerabilities. HumanFirst encourages makers of connected sensors to ensure that their products are effective and safe for their users and users’ data. Those that accept this challenge will stand out in the marketplace.
