“Sensitive” says who?
Authored by Elizabeth M. Renieris on Jan. 19, 2019
Predictions abound for federal privacy legislation in Washington in 2019. Despite the fever-pitch desire to solve the personal data governance challenge, many of the proposed solutions fall into an old trap: a drive toward neat reductionism (perhaps symptomatic of a meta-drive toward certainty in the digital age).
One clear example of this trend is the idea of dividing the personal data world into “sensitive” and “non-sensitive” data (if only it were that easy!). Take the Data Care Act, a bill introduced last December by U.S. Senator Brian Schatz, which imposes a higher standard of care for “sensitive data,” defined by an exhaustive list rather than a qualitative description or definition. Similarly, the Information Transparency and Personal Data Control Act, a proposal from Reps. Suzan DelBene (D-WA) and Hakeem Jeffries (D-NY), establishes two categories of data: “sensitive personal information” and “nonsensitive personal information.” These measures are often well-intentioned but woefully short-sighted (others, like this one from the ITIF, are less well-intentioned but suffer from the same reductionism).
There are at least three problems with this categorical approach to our personal data. First, the idea of predetermined sensitivity, of dividing the personal data world into “sensitive” and “non-sensitive” data, is old thinking, a relic of the past. Under this old thinking, “sensitive data” includes things like your Social Security number (SSN), financial account information, biometric data, geolocation data, and health-related information, among others. But your SSN is readily available on the dark web and no longer secret. And how do we define health-related data when your biological markers are now collected not by entities in the healthcare sector but by private-sector manufacturers of wearables and smart devices? The separation of “sensitive” and “non-sensitive” relies on what these data points signified in the (often distant) past.
Not only is the approach dated, but it’s also ineffective, because future uses of our data are often unforeseeable. On a recent podcast, I heard Whole Foods CEO John Mackey (discussing the company’s recent merger with Amazon) say that the company is interested in opening Amazon Go-type locations that “follow you around the store with sensors to track and scan what you are looking at and what you ultimately purchase.” This is marketed, of course, as a drive toward customer service and convenience (i.e., billing you without the hassle of checkout). But it’s not a stretch to imagine our purchases (and even the things we pick up and consider but ultimately decide not to buy) being fed into real-time health insurance policies and rates, metrics around our carbon footprint, or some kind of social scoring: systems with significant personal consequences.
And we don’t have to imagine it, because it has happened before. Remember when Target predicted a teenage girl’s pregnancy and sent baby-related coupons to her home before she herself knew she was pregnant? It did so through reliable “purchase indicators” of ordinary products like lotion and cotton balls. Research has shown that credit card companies can predict our divorces, affairs, and other events in our personal lives based on our spending behaviors and mundane household purchases. These purchases likely wouldn’t be classified as “sensitive data” under any existing or proposed legal framework, despite their potential to reveal deeply intimate details of our personal lives, reinforcing the short-sightedness of the predetermined, categorical approach.
But broadening the list of what counts as “sensitive” data or information, or even attempting to predict future (or less foreseeable) uses of our data, doesn’t address the third and biggest flaw in all of these proposals: fundamentally, they approach privacy from the wrong perspective. In a self-sovereign world, a world in which there is still a degree of human autonomy, only I can determine what data about me is “sensitive” or “non-sensitive.” As one of my mentors, Doc Searls, likes to say, “privacy is personal.” And if I fully appreciated the technologies that surveillance capitalists have at their disposal, I would deem it ALL sensitive, given the myriad unforeseeable uses they will make of it and the ZERO control I have over those uses.
— — — — —
Originally published here.