The right to not be remembered

“Data is the new oil” is a deeply flawed metaphor. In addition to being dehumanizing, it’s flat out misleading.

“Data” is not some dormant material that gets excavated from us; it’s whatever other people have chosen to observe, interpret, and remember about us. Data is not who we are; it’s what falls off of us as we interact with the world.

I prefer to think about data as being like the shadows we cast as we move around:

  • Our shadow is not us, but it is made possible by us.
  • Our shadow changes shape depending on time of day, surroundings, and our position and orientation in the environment.
  • Our shadow is visible for all to see, leaving a fleeting impression in physical space, but does not directly reveal anything about our inner workings.

When we move around in public, our data, like our shadow, is exposed for all to see. But the default social contract is that the people around us aren’t trying to remember these details in the first place. Sure, if other people make it a priority to track our behavior, they can pick up on all sorts of details and patterns: where we are, what we’re wearing, what we’re buying, who we’re with, how we’re communicating, etc. But it’s impolite to stare and downright illegal to watch our every move. What’s more, human memory is full of little errors that offer us plausible deniability and reasonable doubt if and when someone tries to point a finger at us. These memory errors are like little pieces of armor against the fear of a looming permanent record, gossipers, and tattle-tales.

Thanks to social contracts and imperfect memories, when we leave the house and move about the world, we expect that our data is appropriately fleeting and impermanent, just like our shadow.

Unfortunately for our digital data, the opposite has become the status quo…

  • Everywhere we go, we leave a trace.
  • The “default setting” is to be recorded, scrutinized, and gossiped about.
  • 1’s and 0’s don’t afford us the same plausible deniability that comes with the fallibility of human memory.

Our digital selves are now under constant observation, fueled by the incentives of surveillance capitalism. The social contract that’s helped us explore our authentic selves has been broken. But it can be mended… or at least, we have to try.

Common sense privacy

People shouldn’t be forced to constantly wrestle with the ramifications of their actions when they’re online or around connected devices.

Lurking questions like “who does this system think I am (when they show me this ad)?” or “what am I committing to becoming (if I watch this video)?” or “who might be able to re-identify me and steal my identity (if I sign up for this)?” are just too damn existential to be handled by a few swipes on your phone (or a buried link to a jargon-filled privacy policy).

Furthermore, privacy controls for AI-powered systems are fundamentally insufficient if they presume that people are capable of understanding the tradeoffs that go along with each setting of a complex, dynamic, and tightly coupled system. It’d be like asking Amazon users to weigh each online order against the impact it might have on locally-owned neighborhood businesses. We’re simply not used to having to start by distrusting the motives of the big systems we rely on.

So let’s strive for common sense. Privacy by design should mean going beyond the “right to be forgotten” and returning to the social contract we already know and love: the right to not be remembered in the first place.

Earning the right to be trusted

Asking people to entrust us with their data shouldn’t be something taken lightly. We should expect—and even encourage—our users to operate with a healthy dose of skepticism about our intentions.

Here are some recommended questions to ask when setting out to design with privacy as part of your product’s design language:

  • How have we earned the right to be trusted by the user?
  • What is the benefit to the user—what is the “value exchange”—of storing their personal data outside the device of origin?
  • How might we afford users the ability to perform a “test drive” where they can evaluate direct personal benefits before being asked for consent?
  • Have we given users the ability to verify that nothing is being remembered about them?
  • How might the system support an “airlock” capability where personal data disappears unless explicitly authorized to transition from the device of origin?
  • How might consent operate in situ instead of a priori?
  • How might we make users aware whenever their personal data are in use?
  • How will users be able to verify that when they delete their data, it’s actually gone?
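Several of the questions above—the “airlock,” in-situ consent, and verifiable deletion—point toward the same underlying pattern: personal data should be ephemeral by default, persisting only when explicitly authorized. Here is a minimal, hypothetical sketch of that pattern (the class and method names are my own invention, not a real API):

```python
from dataclasses import dataclass, field
import time

@dataclass
class EphemeralStore:
    """Hypothetical 'ephemeral by default' store: records expire
    automatically unless the user explicitly authorizes persistence."""
    ttl_seconds: float = 60.0  # default lifetime of any record
    _records: dict = field(default_factory=dict)

    def remember(self, key, value):
        # Every record starts life as forgettable (the "airlock").
        self._records[key] = (value, time.monotonic() + self.ttl_seconds, False)

    def authorize(self, key):
        # Explicit, in-situ consent: only now does the record persist.
        value, _, _ = self._records[key]
        self._records[key] = (value, None, True)

    def get(self, key):
        value, expires_at, persistent = self._records.get(key, (None, 0.0, False))
        if persistent or (expires_at is not None and time.monotonic() < expires_at):
            return value
        self._records.pop(key, None)  # expired: actually forget it
        return None

    def forget(self, key):
        # Deletion the user can verify: the record is really gone.
        self._records.pop(key, None)

    def is_remembered(self, key):
        return self.get(key) is not None
```

This inverts the usual default: instead of asking users to hunt down and delete what a system has remembered, the system must earn the right to remember at all.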

Head of Design for Ethics & Society at Microsoft / Formerly People + AI Research at Google /
