What Happens To Your Data When You Die?

Alex Senemar
Jul 27, 2015 · 13 min read


“Right To Be Forgotten”

The race to “cure death” has gripped Silicon Valley. In 2012, Google hired Ray Kurzweil, the ‘futurist’ inventor best known for popularizing the “technological singularity,” a hypothetical moment when machine ‘super-intelligence’ vastly outstrips the capacities of human beings. As Google’s Director of Engineering, Kurzweil’s job is to turn the fantasies of science fiction into consumer products, and Google has invested billions in hopes that his dreams could one day become reality. One notable project, “Calico,” was announced the year after Kurzweil joined Google: a secretive biotech firm researching age-related diseases and developing anti-aging technology. Soon, Kurzweil promises, age and disease will disappear altogether, giving way to “software-based humans” with holographically projected bodies. If Google has its way, mortality will be conquered by human machines.

As the world grows increasingly dependent on internet technology, many human beings have already begun living like ‘cyborgs.’ Your electronics collect detailed information about you every second, on top of the enormous amount of data you share about yourself while navigating the internet; internet corporations harvest that data to assemble detailed profiles of your habits and behaviors, in hopes of more effectively predicting and analyzing human activity. Even in Kurzweil’s most optimistic predictions, human beings will not be able to upload their brains to robot surrogates for at least another thirty years. Let’s assume it will take a bit longer for the elixir of immortality to appear at your local drug store (if this technology is ever affordable to anyone outside the ‘one percent’). Odds are, if you’re reading this article, you are going to die someday. When that day comes, what will happen to your electronic “persona,” the digital doppelganger that has made its life on the internet?

Ray Kurzweil: “Immortality by 2045”

The data that ‘represents’ you on the internet, for the most part, does not belong to you. In fact, it often isn’t even created by you; it’s generated passively, an idle consequence of living in a world in which all ‘things’ can be connected by the internet. So who really owns ‘your’ data? That was the question at the heart of a recent lawsuit in Spain. In 1998, the newspaper La Vanguardia printed an auction notice from the Labour Ministry on page 23, listing property seized by the Social Security Department, with locations, descriptions, and owners. Among the listed properties was a house owned by a young lawyer named Mario Costeja González and his wife, who, at the time, were saddled with debt. Ten years later, La Vanguardia digitized its entire archive; Google’s web crawlers consumed the data, and soon the historical documents could be accessed via search. By that time, González was divorced and had paid off his delinquent bills, but he discovered that the top search result for “Mario Costeja González” was the notice in La Vanguardia announcing the forced sale of his house.

González filed a complaint with the Agencia Española de Protección de Datos (Spain’s data protection authority, which enforces the European Union’s privacy rules) demanding that the article be taken down; because the information in the article was accurate, the complaint was denied under ‘free speech’ protections. So González filed another complaint against Google, insisting that it remove the link from search results; this complaint was approved. Google sued in the Audiencia Nacional (the National High Court), which referred the case to the Court of Justice of the European Union (CJEU), the ‘Supreme Court’ of the E.U. In 2014, the court finally ruled in favor of González, determining that his privacy was protected by the ‘right to be forgotten’: “the right of individuals to have their data no longer processed and deleted when they are no longer needed for legitimate purposes.”

The auction notice published in La Vanguardia.

The decision in González’s case ran contrary to the earlier opinion issued by the court’s own Advocate General. In June 2013, the Advocate General had said that enforcing a ‘right to be forgotten’ would “entail sacrificing pivotal rights such as freedom of expression and information,” by placing search engines in the role of ‘web censor.’ Although Google performs ‘personal data processing,’ he argued, Google itself is not the ‘data controller’: search engine providers merely locate and cache data in order to index it, and cannot actually distinguish ‘personal’ data from ‘non-personal’ data. The judges in González’s case disagreed: in the process of caching and indexing websites, Google interprets, transforms, and sorts the information to determine which results are ‘relevant’; therefore, Google must also be responsible for the content contained in the links.

Google is now tasked with deciding, on a case-by-case basis, which ‘takedown’ requests are valid under the European Union’s privacy laws. The infrastructure for such requests already exists: Google routinely censors links to sites illegally hosting copyrighted content, child pornography, or malware. The difference here is that the company must judge, in each case, whether censoring a link adequately balances the ‘right to be forgotten’ (covering data that is “inadequate, irrelevant, or no longer relevant”) against the right to ‘free speech’ — a much more ambiguous set of concepts. Google was clearly unhappy with the result. Kent Walker, Google’s general counsel, told The New Yorker: “We like to think of ourselves as the newsstand, or a card catalogue…We don’t create the information. We make it accessible. A decision like this, which makes us decide what goes inside the card catalogue, forces us into a role we don’t want.”

But the analogy of a ‘card catalogue’ is a little misleading — after all, a librarian must still decide which books will be stored in the library, and which books will be left out. Google’s PageRank algorithm is a mechanism for regulating the flow of information; it is so effective at interpreting and ‘ranking’ web content that ninety-five percent of all search traffic ends on the first page of Google’s search results (leading to the meme: “The best place to hide a dead body is on page 2.”) The ‘card catalogue’ analogy breaks down even further when you consider the amount of information Google collects from its users in order to improve its services, like personalized search and targeted advertisements. Google is in the business of consuming information, not just ‘making it accessible’ — and they have lofty plans for the data they consume. One day, when the internet’s vast supply of data is fed to Google’s artificial intelligence, will we be celebrating the technological utopia of Ray Kurzweil’s imagination? Or will we be looking back longingly at the days when Google was ‘just’ a “newsstand?”
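To make the ‘regulating’ point concrete: at its core, the original PageRank algorithm models a ‘random surfer’ who follows links and occasionally jumps to a random page, and scores each page by how often the surfer lands there. Here is a minimal sketch of that idea in Python; the toy graph, damping factor, and iteration count are illustrative, and Google’s production ranking system is, of course, vastly more elaborate.

```python
# Minimal PageRank sketch: rank pages by the stationary distribution
# of a "random surfer" who follows links, occasionally jumping to a
# random page. Toy graph and parameters are illustrative only.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    ranks = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        # Every page gets the "random jump" share of rank.
        new_ranks = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for other in pages:
                    new_ranks[other] += damping * ranks[page] / n
            else:             # otherwise, split rank among its links
                share = damping * ranks[page] / len(outlinks)
                for target in outlinks:
                    new_ranks[target] += share
        ranks = new_ranks
    return ranks

toy_web = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))
```

Even in this toy form, the point stands: the ranking is a judgment, encoded in an algorithm, about which pages ‘deserve’ to be seen.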

Jules Polonetsky, executive director of the Future of Privacy Forum, had this to say about the González case: “[F]or the Court to outsource to Google complicated case-specific decisions about whether to publish or suppress something is wrong. Requiring Google to be a court of philosopher kings shows a real lack of understanding about how this will play out in reality.” The concern about censorship is very real — but, from another perspective, it could be said that the titans of Silicon Valley already resemble ‘philosopher kings,’ on a much greater scale than Polonetsky has in mind. As cryptographer Bruce Schneier wrote in his book Data and Goliath: “Our relationship with many of the internet companies we rely on is not a traditional company-customer relationship. That’s primarily because we’re not customers. We’re products those companies sell to their real customers. The relationship is more feudal than commercial… We are tenant farmers for these companies, working on their land by producing data that they in turn sell for profit.”

“Move Fast and Break Things”

Schneier’s comparison of contemporary surveillance to ‘feudalism’ is not exactly historically accurate — but Schneier isn’t an anthropologist; he’s a computer security scholar with a deep understanding of the global surveillance apparatus. The ‘feudalism’ metaphor, like the analogy to ‘philosopher kings,’ illustrates the degree to which our activities are monitored and regulated by alien powers beyond our reach — and in this ‘surveillance society,’ it’s often not possible to ‘opt out.’ It’s important to point out, however, that Silicon Valley’s ‘philosopher kings’ aren’t just interested in ‘profiting’ off of your data: corporate strategists have ideological agendas, ambitious plans for transforming society and shaping the future. As Mark Zuckerberg told potential investors before Facebook’s IPO: “Facebook was not originally created to be a company. It was built to accomplish a social mission — to make the world more open and connected.”

In pursuit of this ‘connected’ world, internet users have become subjects of a massive social experiment — well, many experiments, actually. Like Google, Facebook employs academics trained in sociology and behavioral psychology to study its users and learn more about their habits. Cameron Marlow, head of Facebook’s Data Science Team, sees Facebook’s corporate aims as equivalent to the aims of scientific inquiry: “The biggest challenges Facebook has to solve are the same challenges that social science has,” he told the MIT Technology Review. Facebook’s analysts want to learn why some ideas spread and take hold while others fade away; they want to know how our attitudes and beliefs are affected by the people in our communities; and, eventually, they want to understand how to predict our future actions by observing our past.

For example, in 2012, users were prompted to check a box on their Timeline indicating they were registered organ donors, which triggered a notification to their friends; researchers tracking the organ donor enrollment database discovered that enrollment increased by a factor of 21 in a single day. In 2014, Facebook’s researchers published a psychological study of “how emotions are spread”: they had manipulated the News Feeds of nearly 700,000 randomly selected users, altering the number of ‘positive’ and ‘negative’ posts each saw, to see how these changes affected users’ emotional states. They learned that people tended to ‘mimic’ the sentiments of their News Feed: those who saw more ‘positive’ posts wrote ‘positive’ posts of their own, and those exposed to ‘negative’ vibes responded in kind. Other experiments have more immediately practical applications — for instance, using ‘tags’ on users’ photos to ‘teach’ computers how to recognize faces. Marlow told the Technology Review that the purpose of these studies is to support the ‘well-being’ of Facebook users — and, he hopes, to “advance humanity’s understanding of itself.”
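The mechanics of the emotion study were simple, at least in outline: randomly assign users to an experimental arm, probabilistically withhold posts of one sentiment from their feeds, then measure the sentiment of what they write afterward. A hypothetical sketch of that design follows; the function names, omission probability, and post format are all illustrative (the real study ran on Facebook’s internal systems and scored sentiment with the LIWC word lists).

```python
# Hypothetical sketch of an emotion-contagion experiment's design.
# Everything here is illustrative, not Facebook's actual code.
import random

def assign_condition(user_id, rng=random):
    """Randomly assign a user to one experimental arm."""
    return rng.choice(["reduce_positive", "reduce_negative", "control"])

def filter_feed(posts, condition, omit_probability=0.3, rng=random):
    """Probabilistically omit positive (or negative) posts from a feed."""
    shown = []
    for post in posts:
        sentiment = post["sentiment"]  # "positive", "negative", or "neutral"
        if (condition == "reduce_positive" and sentiment == "positive"
                and rng.random() < omit_probability):
            continue  # withhold this post from the user's News Feed
        if (condition == "reduce_negative" and sentiment == "negative"
                and rng.random() < omit_probability):
            continue
        shown.append(post)
    return shown

# Outcome measure: compare the sentiment of the words users in each
# arm write in their *own* posts during the experiment period.
feed = [{"id": 1, "sentiment": "positive"},
        {"id": 2, "sentiment": "negative"},
        {"id": 3, "sentiment": "neutral"}]
print(filter_feed(feed, assign_condition(user_id=42)))
```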

Facebook’s new motto.

When it started up, Facebook’s motto was “Move Fast and Break Things.” The idea was to “set aside standard, conventional rules,” and not be afraid of failure — the developers wanted to build innovative tools quickly, so they could begin testing them out on users, even if there were still some bugs and things weren’t working ‘perfectly.’ Last year, the motto was changed: “Move Fast With Stable Infrastructure.” “In the past we’ve done more stuff to just ship things quickly and see what happens in the market,” Brian Boland, Facebook’s VP of product ads, told Bloomberg. “Now, instead of just throwing something out there, we’re making sure that we’re getting it right first.” The change in Facebook’s corporate culture reflects its changing relationship with its users; no longer a web novelty, it has become deeply integrated into the infrastructure of the internet, accounting for twenty-five percent of all internet traffic and 500 billion API calls to other applications each day. Zuckerberg’s vision is becoming a reality: an ‘open’ society where every minute detail of our lives can be translated into ‘data,’ stored on a server, and interpreted by machines.

We still haven’t really addressed the question posed in the title of this article: What happens to your data when you die? To be honest, there isn’t a clear answer. The ‘right to be forgotten’ is enforced through the “distributed regulation” that is characteristic of privacy laws today. Rather than implement regulations themselves, governments have ‘deputized’ companies like Google and Facebook to interpret and enforce the laws — where that fails, consumers can resort to litigation to hold the companies accountable. As such, each corporation has its own set of protocols for handling the data of the deceased: Google’s ‘Inactive Account Manager’ lets users designate a beneficiary who will ‘inherit’ access to their account for three months, with an option to download the data before it is deleted permanently; family members can provide Facebook with proof of death (a certificate or obituary) to request deletion or ‘memorialization’ of an account; and so on.

The American Association of Retired Persons (AARP) has published a few guides to ‘preparing your digital estate’ for after your death. They recommend taking an ‘inventory’ of the websites where you have an account, documenting your usernames and passwords, and writing a list of instructions into your “last will and testament” — telling your heirs where your passwords are written down, and what you want them to do with your data. But, in many instances, AARP’s solution is not feasible: it’s a violation of most companies’ terms of service to access another person’s account, dead or alive. In one case in 2005, the mother of a 22-year-old who died in a motorcycle accident got her son’s Facebook password from a friend, and emailed the company to ask if she could log in to his account — within a couple of hours, the password was changed and she was locked out. The mother was forced to sue Facebook for access, and the courts had to figure out for the first time how to handle a person’s “digital assets” upon death.
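As for the ‘inventory’ step, even a simple encrypted file beats a sticky note. Below is a minimal sketch, assuming Python’s cryptography library is installed; the file name, sites, and instructions are illustrative, and actual passwords are better left to a password manager’s emergency-access feature (and, per the terms-of-service caveat above, your heirs may still need the companies’ cooperation).

```python
# A minimal "digital estate" inventory, kept as an encrypted file.
# The file name and entries are illustrative; store the key (or a
# pointer to it) with your will so your executor can decrypt this.
import json
from cryptography.fernet import Fernet

accounts = [
    {"site": "example-mail.com", "username": "me@example.com",
     "instructions": "download the archive, then close the account"},
    {"site": "example-photos.com", "username": "me",
     "instructions": "share the albums with family, keep the account"},
]

key = Fernet.generate_key()  # keep this with your estate documents
token = Fernet(key).encrypt(json.dumps(accounts).encode("utf-8"))

with open("digital_estate.vault", "wb") as f:
    f.write(token)
```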

The mother lost her case: besides the terms-of-service problem, Facebook’s lawyers argued that the Stored Communications Act prohibits sharing personal information, even if a request is made in a person’s will. A related case unfolded the same year: a widow wanted access to her husband’s Gmail account, which contained information necessary to operate the business they ran together; she lobbied senators in her home state of Connecticut, resulting in the first ‘digital remains’ law in the United States, a legal framework for data access by survivors’ families. Since then, four other states have passed similar laws to clarify internet companies’ obligations to the deceased; the Uniform Law Commission has also proposed a Fiduciary Access to Digital Assets Act, which would allow “digital assets” (like Facebook photos, YouTube videos, and e-mail conversations) to be treated as ‘tangible’ assets that can be legally inherited — like photographs or letters found in your parents’ attic.

But these legislative solutions only address a small dimension of the problem of ‘data ownership.’ Although you may get access to some of the ‘assets’ you shared on the internet (photographs, videos, e-mails, etc.), you don’t get to see any of the insights that were generated as a result of analyzing that information — the detailed behavioral profile assembled based on your social media activity, for example, or the population-level insights obtained by analyzing many profiles in aggregate — the ‘product’ that was manufactured using your data as ‘raw material.’ Furthermore, all the remedies described above place the burden on the user to protect their own privacy; unless you plan ahead for how your data will be handled after you die, or your family members are exceptionally persistent (and possibly litigious), it will sit on the companies’ servers in perpetuity. If the data was truly ‘yours,’ this arrangement would be nonsensical — or, as Bruce Schneier put it, “Privacy should be a fundamental right, not a property right.”

John Oliver: “Right to be Forgotten”

Google has refused to enforce the ‘right to be forgotten’ ruling outside of the European Union — meaning that even if your information is censored at google.de, google.fr, and so on, it can still be found at google.com. Their reasoning makes some sense: if information censored in one country must be blacked out across the world, Google attorney Peter Fleischer wrote at Google’s Policy Blog, “We would find ourselves in a race to the bottom. In the end, the Internet would only be as free as the world’s least free place.” But the issue reveals the fundamental limitations of any data privacy system enforced at a ‘state’ or ‘national’ level, in a world of data networks that flow across borders. At this point, whether the data is ‘public’ or ‘private’ — whether it can be seen by anybody, or is hidden behind a password — is irrelevant: in either case, it’s often beyond your control.

With the emergence of more sophisticated corporate surveillance technologies like ‘supercookies,’ and the proliferation of ‘data broker’ firms accumulating information about you in bulk, it has become difficult to know who has access to ‘your’ data, or where that data is being kept. In the “End of Privacy” issue of Science, political economist Abraham Newman speculated about the emergence of ‘data havens,’ analogous to ‘tax havens,’ where companies stash data in jurisdictions with weak privacy protections so they can continue to make use of it even after it has been ‘deleted’ elsewhere. And we haven’t even mentioned the government agencies that may have secretly obtained your information and hidden it somewhere ‘regulators’ can’t see. So, even when legal ‘protections’ are put in place, you have very little say over what happens to ‘your’ information when you die — largely because you have very little control over this information while you’re alive.

As with most of the topics covered on this blog, the future is unclear. Even as some of these ‘policy’ questions get resolved in litigation and legislation, the philosophical questions will remain. Data has its own ‘life’ and activity — for all this talk about ‘ownership,’ “1s” and “0s” don’t actually ‘belong’ to anybody. The best we can hope for is effective means of securing our most sensitive data, and mechanisms of transparency for understanding exactly what information is already ‘out there.’ In this brave new world, we have to worry about spies and corporations taking advantage of our electronic doppelgangers today… but we may also have to consider the anthropologists and economists who will be browsing our e-mails and ‘profiles’ hundreds of years from now, looking for insights into how we lived our lives, digging for clues about the world we inhabited. In the past, being ‘forgotten’ was an inevitability, not a ‘right’ — today, it seems, the internet never forgets.

Check out our app at Sherbit.io
