Privacy Concerns with Medical Wearables and the Internet of Things

Assessing America’s medical data policies in a connected age

Matt Piccolella
13 min read · Aug 20, 2015

On November 7, 2014, Disney Animation Studios released the film Big Hero 6, a superhero movie in which its young star, Hiro, avenges the death of his brother alongside his puffy robotic counterpart, Baymax. Coming from the esteemed “San Fransokyo Institute of Technology,” Baymax is described as a “personal health care assistant”; he senses injuries to people, performs frequent tests to look for possible ailments, and, most importantly, stores lots of data on the people he is meant to “protect,” building a profile on each person so that he may best treat them. In one particular scene, Baymax scans the entire city in order to find a single person, all in the name of “treating” Hiro.

Audiences loved the film; it grossed more than $210 million at theaters and received the Academy Award for Best Animated Feature Film. Beyond the box office, however, the movie offers an interesting perspective on medical technology and on how an increasingly connected world raises incredible privacy concerns.

Cisco describes the “Internet of Things” as “increasing the connectedness of people and things on a scale that once was unimaginable”; this is, in essence, a world in which it is not just your computer that is connected to the internet, but your television, car, refrigerator, and more.

We now live in a world in which connected devices outnumber people three to two, mostly fueled by the “increasing interconnectivity of machines and personal smart devices,” and there are no signs of slowing down. A Morgan Stanley study predicts that the number of devices connected to the “Internet of Things” will grow to 75 billion by 2020, citing the “200 unique consumer devices or equipment that could be connected to the Internet” that have not yet been connected. Many of these connections are seemingly innocuous; if Samsung wants to put a Wi-Fi touchscreen in my fridge so I can more easily restock it using Amazon Fresh or Google Shopping Express, let them.

However, there has been a growing trend in the medical community toward creating “wearable technology” meant to aid in the healthcare of its wearers. And the trend is accelerating; ABI Research predicts that by 2016, annual sales of “wearable wireless medical devices” will grow to 100 million devices and “wearable sports and fitness-related monitoring devices” to 80 million devices. These range from the more ubiquitous and seemingly less invasive Fitbit and Jawbone activity trackers to the Metria Wearable Sensor, a device that collects and transmits a data point every time its wearer breathes.

Here, we will explore the growing privacy concerns that come with a world in which healthcare is increasingly provided by technology. First, we will begin with a brief discussion of the history of medical data and privacy, explaining the general expectation of privacy in regard to medical information that people have come to know. Next, we will outline the medical wearables market, giving an idea of the kinds of devices that are available, the types of data they collect, and what that data can be used for. Finally, we will juxtapose these devices with past conceptions of medical privacy, analyzing the relationship and attempting to provide an outlook for where advances in technology and connectivity will take medical privacy.

Before the age of the internet, medical privacy consisted mostly of one-on-one communications between patient and doctor. The Hippocratic Oath, which doctors take before practicing medicine in the United States, includes the promise “whatever, in connection with my professional service, or not in connection with it, I see or hear, in the life of men, which ought not to be spoken of abroad, I will not divulge, as reckoning that all such should be kept secret.” This traditional doctor-patient relationship makes privacy an almost non-issue. There is a clear “expectation that physicians will hold that special knowledge in confidence and use it exclusively for the benefit of the patient” and a clear opting-in to the information gathered; a patient, for example, knows that a doctor will be gathering his blood pressure when she places a cuff on his upper arm.

There is even a legal aspect to this relationship. US legal doctrine recognizes “doctor-patient privilege,” which is “the right to withhold evidence from discovery and/or the right to refrain from disclosing or divulging information gained within the context of a ‘special relationship.’” So, in the traditional doctor-patient setting, a patient feels compelled to share with a doctor anything that she believes will bring her the best care possible, knowing both the ethical and legal privacy that she is entitled to.

The landscape of this relationship changed in 1996, when President Clinton signed into law the Health Insurance Portability and Accountability Act (HIPAA), a law intended “to make it easier for people to keep health insurance, protect the confidentiality and security of healthcare information and help the healthcare industry control administrative costs.” The law was essentially a healthcare privacy law with a very modern goal: to bring down healthcare costs by reducing fraud, leveraging electronic communications, and establishing standards for exchanging medical data.

Particularly pertinent to our discussion is the HIPAA Privacy Rule, which “provides federal protections for individually identifiable health information held by covered entities and their business associates and gives patients an array of rights with respect to that information.” In addition, the Security Rule “specifies a series of administrative, physical, and technical safeguards for covered entities and their business associates to use to assure the confidentiality, integrity, and availability of electronic protected health information,” creating industry standards for storing and sharing medical data.

In its protection of an “individual’s health information and his/her demographic information,” HIPAA makes an important definition: “protected health information” (PHI) is information that, when looked at, can be used to “tell who the person is”; the law specifically mentions “names…addresses…social security numbers…and photographs,” among other things, as examples of PHI. This definition will prove important later, as we look at the “metadata” of medical care. Equally important are the parties required to comply with HIPAA; the law specifically names three entities that must meet HIPAA standards: “Health Care Providers, Health Care Clearinghouse, and Health Plans,” all of which must “bill or be paid for healthcare.”

More recently, as part of the American Recovery and Reinvestment Act of 2009, President Obama signed into law provisions providing for the “encouraged adoption of electronic medical records by doctors and hospitals,” despite doctors and patients who “worry that their personally identifiable medical data will not be protected.” While the law is certainly much more in line with modern technology, it “calls for the segmentation of sensitive information and use of the minimum a[m]ount of information but it doesn’t explicit[ly] require informed patient consent for many uses of medical information,” seemingly solving one problem while creating others. The American Civil Liberties Union points out that “medical privacy should not become a casualty of the race to set up databases of electronic health records,” perhaps suggesting that the government may not be fully considering the privacy implications of increasing the role of technology in the storage and transmission of medical data.

Enter wearables. A “wearable” is defined as “an electronic technolog[y] or computer that is incorporated into items of clothing and accessories which can comfortably be worn on the body.” Also specifically cited as a qualification of the technology is the inclusion of “communications capability,” which “will allow the wearer access to information in real time,” suggesting an intimate connection between our wearables and the Internet of Things.

A CNET article outlining the ten best wearables of 2015 includes devices such as the Pebble Steel, the Jawbone Up24, and the Fitbit Charge; all but one are intended to be worn on the wrist, and five of the ten are intended to track and transmit fitness data.

The fitness tracker market is dominated by three main competitors; a Business Insider article points out that, of the 3.3 million fitness trackers sold in the United States between April 2013 and March 2014, 67% were sold by Fitbit, 18% by Jawbone, and 11% by Nike. Since the article cites fitness trackers as the devices that will help bring medical wearables into the “mainstream market,” and Fitbit holds a large lead in that market, it makes sense for us to further investigate Fitbit’s collection and usage of data.

Fitbit’s product offerings include a line of wristbands meant to track everything from steps to calories burned to heart rate, as well as a scale, called Aria, that tracks body weight, BMI, and body fat percentage, transmitting the data over the user’s home Wi-Fi network each time the scale is used. Reflecting a growing taste for these smart activity trackers, the company recently saw a wildly successful initial public offering (IPO) that values the company at just over $9 billion.

However, the company has not gone unnoticed in a privacy context. Senator Charles Schumer, in a public statement, condemned the “privacy nightmare that could be created by Fitbit and other wearable fitness trackers”; Schumer specifically mentioned Fitbit, saying that “Personal fitness bracelets and the data they collect on your health, sleep, and location, should be just that — personal,” also claiming, in all caps, that “FITBIT BRACELETS & SMARTPHONE APPS ARE TRACKING USER’S MOVEMENTS AND HEALTH DATA THAT COULD BE SOLD TO THIRD PARTIES.”

A quick look at the Fitbit Terms of Use reveals the definition of “Fitbit Content,” which includes “any text, graphics, images, music, software, audio, video, works of authorship of any kind, and information or other materials that are posted, generated, provided or otherwise made available through the Fitbit Service to you”; it seems that user information, such as steps walked, location, and calories burned, falls clearly into this category of “information” that is “generated” through the “Fitbit Service.” Subsequently, it seems that there is a lot of validity to Senator Schumer’s claims; this “Fitbit Content” is “protected by copyright, trademark,…,and other laws of the United States and foreign countries,” users agree “not to remove, change or obscure” it, and it thus seems to fall clearly into the category of data that Fitbit could sell.

Dana Liebelson of Mother Jones also points out that, according to Fitbit’s Privacy Policy, “at times Fitbit may make certain personal information available to strategic partners that work with Fitbit to provide services to you,” clearly showing an intent by Fitbit to reserve the right to “make available” (i.e. sell) personal information to partners.

While it seems clear that Fitbit has at least a legal right to sell the data it collects, many may feel unperturbed by the data being collected; most people probably wouldn’t mind that a company knows they walked 10,342 steps on February 15, 2015. However, we must now examine the wider array of medical wearables that exist in addition to these ubiquitous fitness trackers. InformationWeek’s “Ten Wearable Health Tech Devices to Watch” contains some devices which would unsettle even the most forgiving of technologists.

For example, AiQ “smart textiles” measure a “user’s heart rate, respiration rate and skin temperature”; Danfoss PolyPower makes sensors that “precisely measure displacement on or close to the human body, such as motions, breathing, swelling, and posture”; and, perhaps most unsettlingly, Imec’s wearable EEG headset tracks “brain and heart activity” and transmits the data “in real time to a receiver located up to 10 miles away from the system.” Information that used to be reserved for doctors’ offices and medical clinics is being measured and transmitted by devices that we wear like a wristwatch, creating immense privacy implications.

Imec’s wearable EEG headset transmits data in real time to a center up to ten miles away.
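
To make concrete what this kind of continuous transmission involves, consider the sketch below. It is purely hypothetical: the endpoint URL, field names, and device ID are invented for illustration and do not reflect Imec’s, Fitbit’s, or any vendor’s actual protocol. What it illustrates is that each “medical” reading typically travels bundled with exactly the kind of identifying metadata (a stable device ID, a timestamp, a location) that makes the data so sensitive.

```python
import json
import time
import urllib.request

# Hypothetical collector endpoint; a placeholder, not a real service.
COLLECTOR_URL = "https://example.com/ingest"

def transmit_reading(device_id: str, heart_rate_bpm: int,
                     lat: float, lon: float) -> None:
    """Send one sensor reading upstream, as a wearable might on each measurement."""
    payload = {
        "device_id": device_id,        # stable ID ties every reading to one wearer
        "timestamp": time.time(),      # when the measurement happened
        "heart_rate_bpm": heart_rate_bpm,
        "location": {"lat": lat, "lon": lon},  # where the wearer was
    }
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # one HTTP POST per reading, in real time

# Example call (commented out, since the endpoint above is a placeholder):
# transmit_reading("band-1234", 72, 40.8075, -73.9626)
```

Even before anyone “sells” anything, the vital sign and the identifying metadata arrive together; stripping the name from such a stream accomplishes little, as the re-identification research discussed below shows.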

Now that we have surveyed the medical wearables landscape, we may examine the relationship these devices have to current medical privacy legislation. Recall that earlier we saw that three entities, “Health Care Providers, Health Care Clearinghouse, and Health Plans,” are required to comply with HIPAA standards, which require protection of PHI. However, it seems that medical wearable companies do not fall into any of these three categories. The one that a company like Fitbit most plausibly falls into is “Health Care Provider”; however, we saw in the legislation that to be considered a health care provider, a company must “bill or be paid for healthcare.” While it is clear that Fitbit is being paid ($99.95 for its flagship Flex), it is unclear whether this is payment for “healthcare.” No service is being rendered, as the data offered to customers is merely collected by hardware, for which consideration is made at the time of purchase.

Thus, it seems that the language of HIPAA is nowhere near up to date enough to handle these cases in which a medical “service” is not provided; it is clear from both the motivation for the act and the legislation itself that it was meant only to cover insurance providers and healthcare providers themselves.

In addition, the distinction between ePHI (electronic PHI), the legislatively protected data that can be used to “tell who a person is,” and medical “metadata” seems to be shrinking. Medical “metadata,” specifically the data that these medical wearables collect (location, heart rate, sleeping patterns, etc.), was most likely not seen by the framers of HIPAA as PHI, but recent studies seem to prove otherwise; a recent MIT study revealed that an algorithm, given a person’s credit-card history, could “identify the unique individual purchasing patterns of 90% of the people involved, even when the data were scrubbed of any names, account numbers or other obvious identifiers.” It seems entirely possible that a person’s location, sleeping patterns, and several other pieces of information, completely scrubbed of names, could be used to uniquely identify them; throw in their posture, breathing, and brain activity, and the concept of individual privacy seems but a dream.
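
The mechanics of this kind of re-identification are simple enough to sketch. The toy example below uses entirely synthetic data, and its fields and bucket sizes are assumptions for illustration, not the MIT study’s method; it simply counts how many users in a “scrubbed” dataset are nonetheless uniquely fingerprinted by three coarse readings of the sort a wearable collects.

```python
import random
from collections import Counter

random.seed(0)

def synthetic_user():
    """Reduce a user to three coarse, name-free 'metadata' readings."""
    return (
        round(random.gauss(7, 1)),             # nightly sleep, rounded to the hour
        int(random.gauss(9000, 2500)) // 500,   # daily steps, in 500-step buckets
        random.randrange(400),                  # coarse location grid cell
    )

users = [synthetic_user() for _ in range(10_000)]

# A user is uniquely identifiable if no one else shares their fingerprint.
counts = Counter(users)
unique = sum(1 for user in users if counts[user] == 1)
print(f"{unique / len(users):.0%} of users are pinned down "
      "by just three scrubbed readings")
```

With only three coarse attributes, a large share of this synthetic population is already unique; each additional signal, such as posture, breathing, or brain activity, multiplies the fingerprint space and shrinks the crowd a person can hide in.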

Perhaps most startling is the public’s lack of concern for their medical data, despite a seemingly widespread privacy concern about wearable technology in general. A 2014 Accenture study shows that “eighty percent of consumers have privacy concerns with wearable Internet of Things (IoT) connected technologies”; despite this, “half of those same consumers said they would be willing to share personal data collected by such devices with third-party retailers when presented with compensation such as a coupon or discount.”

It seems difficult to reconcile the fact that 40% of consumers (half of the 80% who report privacy concerns) would be willing to sacrifice their personal data and privacy for what may ultimately amount to 20% off a Pebble smartwatch. The study, which almost completely discounts the need for personal privacy with wearables, reveals the “gap in consumers’ fears of data privacy and their actual purchasing behavior,” exhorting companies to “focus on specific benefits that sharing data will deliver to consumers.” Perhaps this gap comes from people undervaluing their own data, as consumers seem willing to forgo it quite readily in exchange for even the most modest of discounts or coupons.

When considering this gap in the valuing of personal data, it is even more startling to examine the results of an NPR-Truven Health Analytics Health Poll, which found that a particularly small percentage of people worry about the privacy of their medical data. The poll found that only 16% of people have data privacy concerns about their insurers, and an even smaller 10% have concerns about their employers. Interesting to note is the order of concern, descending from insurer to hospital to doctor to employer. It seems that the closer an entity is to a person, the less that person worries about the privacy of their data; a person is likely quite unfamiliar with his insurer but quite close with his employer. From this we might extrapolate an extreme discomfort with a distant technology company harvesting information about a person. Still, the general disinterest we see here in the privacy of one’s medical data is troubling.

Ultimately, the emergence of medical technology, particularly in the form of wearable devices, has posed and will continue to pose a particularly interesting privacy challenge. Examining the language and framing of HIPAA revealed an out-of-date piece of law that does not account for many of the specific concerns raised by technology companies that gather medical data yet generally fall outside the bill’s classification of a “health care provider.” Additionally, we saw the somewhat tepid response of the 2009 stimulus bill in better securing electronic medical records. After seeing the troubling landscape of medical wearables, ranging from fitness trackers to brain scanners, it seems obvious that some particularly compromising data is available to companies that reserve the right to sell that data to third parties.

Finally, given our findings on the usefulness of medical metadata in uniquely identifying people, the observed public disinterest in protecting medical data perhaps reflects a failing of the government and privacy advocates to show the public that the electronic medical data measured and transmitted by these wearables is of the utmost importance. Perhaps it will take something drastic, such as a massive leak of data from a company like Fitbit, to convince the public of the possible dangers of using these devices; in the meantime, it’s time for Congress to push to update the standards by which electronic medical data is secured. To paraphrase Charles Schumer, perhaps there are just some things that should remain private.

This post has been adapted from an essay I wrote for Professor Steven Bellovin’s “Computers and Society” course at Columbia University.
