Stop watching me, Jeremy

Data brokers and the new panopticon

janeruffino
8 min read · Jun 22, 2017


In the main building of University College London, there’s an embalmed corpse in a glass box, dressed in a black suit with an on-trend ruffle down his front. This is Jeremy Bentham, now known as the ‘auto-icon’, and when he died in 1832, he left his body, along with his archive, to UCL. His real, preserved face was considered too creepy to display, so the one you see is made of wax; just about the only ‘real’ part of the body visible today is a clump of hair that peeks out from under his hat. Two years ago, a team of researchers decided to put a surveillance camera inside him. Now this waxy, glassy-eyed approximation of a dead dude is watching you watching him.

Bentham was a philosopher and social reformer whose most famous innovation was the ‘panopticon’, a design for a circular prison with a guard tower at the centre, from which the guards could see in every direction. The prisoners would live with the constant awareness that they were being watched, even if nobody was actively doing the watching, and that awareness alone was supposed to alter their behavior for the better.

We never got panopticons in prisons (although Kilmainham Gaol is pretty close), but we’re increasingly familiar with the feeling that we’re being watched all or most of the time, online and off. We know that people who feel observed are often more compliant, and more conformist. If someone is looking, most of us will do whatever we can to perform in an acceptable fashion, even if we’ve done nothing wrong.

And this has always been the purpose of collecting data at a large scale.

Data collection has always been about power

For example, surveys and mapmaking have always been inextricable from social control and political power, especially as part of so-called civilizing projects, which have almost always started as pen-and-paper, quantitative exercises. Cataloguing a landscape and all the things in it is done by people, and until recently those people were often military representatives, traveling through a place and making sure the people who lived in it knew they were being measured, counted, and watched.

The mere act of altering the optics has often been enough to make people feel observed, and even when they didn’t change their behavior, or actively resisted changing it, the power dynamics had already shifted. In the early years of Google Street View, the vehicles were often met with resistance, usually from residents of well-off neighborhoods who were afraid of maps being used to case their homes.

My favorite example is from the early 1600s, when Richard Bartlett, one of Queen Elizabeth I’s most talented cartographers, traveled through the far northwest of Ireland. This part of County Donegal, then known as Tír Connell, was unmapped, and had long been considered unmappable: it was densely forested, rocky, difficult to traverse, and the locals were more than a little resistant. Maps showed it that way, both because that’s probably how it looked, and also to fill in the blank space of a physically inaccessible place. But this time, the locals let Bartlett make his map, and then they chopped off his head rather than have their landscape ‘discovered’.

My least-favorite examples all have to do with just how much access I’ve had professionally to people’s personal activities, access that was often unnecessary for my job, or at least a discomfiting privacy overreach: spending habits, browsing habits, where you live, a phone number you never gave me. Or the time an investor in a company I worked for suggested we voluntarily monitor customers for potentially suspicious activity (thankfully, I wasn’t the only one to shout that down, emphatically). With very little effort, anyone whose job touches on digital marketing knows a whole lot about you that they didn’t ask for, and that may benefit a company more than it benefits you.

It’s getting a lot chillier, a lot faster

What’s different today from the pre-Internet days of physical mapping is that we know we’re being watched. What we’re not often sure of is where, by whom, or to what end, and it’s causing us to change ourselves in ways we’re not sure we chose. The modern name for this is a ‘chilling effect’, a pernicious form of self-censorship that is difficult to measure but impossible not to notice in ourselves and others. The panopticon analogy doesn’t hold up perfectly, not least because the power dynamics are now so complex that it’s unclear who’s even in charge of the optics. There is no single Big Brother; we’re being watched by an increasing number of people, from an increasing number of perspectives.

Oxford University researcher Jonathon Penney published a study showing pretty compelling evidence of these effects after the Snowden leaks in 2013. He analyzed millions of Wikipedia page views for articles related to a list of 48 keywords that the Department of Homeland Security has specifically associated with terrorism. These terms include things like “chemical weapon” and “improvised explosive device”, but also “nationalism” and “Pakistan”. That’s right: looking something up on one of the ten biggest websites on earth about the sixth most populous country in the world could land you on a watch list.

Penney found that traffic to these articles declined by 20 per cent after June 2013 and remained that way until he stopped collecting data in November 2014. It’s not uncommon for information-seeking behaviors to change temporarily after a major triggering event, but Penney’s study shows something more troubling than even the most worried among us expected: our behavior changed and stayed changed, and at a level so basic that it’s preventing people from seeking factual, peer-sourced information.

One of the reasons previous studies have been met with scepticism is that self-reporting of behavior isn’t seen as reliable, and sample sizes have been too small. There’s also no way to find a control group of internet users who have never been exposed to the threat of surveillance. The pervasiveness of surveillance culture is exactly why it’s impossible to study it in even a marginally objective way.

Penney’s conclusion confirms what we already knew, what’s been known for hundreds of years, what was literally the political underpinning of most of our academic disciplines ending in ‘-ography’. What he did differently was to take a giant sample with a clear before and after, and no self-reporting. Will it be enough to prove that chills exist? And if so, will anyone relevant care?

Big brother, little brother, data broker

Last year, Robert S. Litt, a lawyer for the US Office of the Director of National Intelligence, wrote a terrifyingly confident article in defence of bulk surveillance in the Yale Law Journal. He referred to transparency as the ‘third leg of the stool’ and admitted the agency ‘fell short’. He says the agencies are learning those lessons now, but then goes on to speak about surveillance in positive terms.

We’ve been so repeatedly shocked at the reach and depth of surveillance that it takes a lot more to make headlines in 2017 than it did in 2013. For example, the National Security Agency gathers five billion mobile phone location records per day (yes, you read that right). That’s nearly two trillion pieces of data per year about where people’s mobile phones have been, even if they’ve got their GPS switched off. Are you surprised? Probably not.

And we’re at least as affected, if not more so, by corporate surveillance: the network of ‘little brothers’ that works together to build our digital identities, which then get bought and sold. Privacy, in terms of controlling who watches us, is now so elusive that it’s becoming a commodity, and is on its way to being a preserve of the powerful.

Recently, Tijmen Schep, a technology critic and privacy designer, launched Social Cooling, which outlines, in compelling and simple ways, just how much of an impact this kind of data collection has on us. What are you afraid to search for? What are you worried about typing into a search field, for fear that it might be used against you? Is the worry itself becoming normalized?

Facial recognition software is in use in China for shaming jaywalkers. The Web Summit has said it plans to use it to find out what you’re looking at in the exhibition hall. What will you think twice about looking at? If I look at a crummy startup booth for too long, will I end up in an endless email drip campaign offering me limited-time free offers for all eternity, or will someone from Sales spend months badgering me to ‘hop on a call’?

What happens when companies that collect one type of data start to partner with other companies who use it for something it was never intended for? Cracked Labs published a report on Corporate Surveillance in Everyday Life, which outlines some of the ways that companies work together to do things we didn’t ask for, some of which yield benign-sounding services like ‘personalization’.

Collecting, analyzing and acting on data created by humans isn’t necessarily a bad thing, but once we lose control over how our data is being used, or over the identities these companies create on our behalf, it leads to things like calculating our financial risk based on who our friends are, or sorting us into categories we aren’t aware of and that don’t necessarily benefit us.

It’s still unclear how companies in Europe will implement the new and far more stringent demands of next year’s General Data Protection Regulation, but its aim is to address at least some of this power imbalance between the surveyor and the surveilled.

One small thing we can do to stop offloading responsibility

One of the challenges is that business models, especially for consumer services, rely on generating value through user data, and on proprietary algorithms, which means we don’t always know how much is being collected, or for what reason, and it’s not in a business’s interest to tell us how it makes decisions for or about us.

But it’s possible for people working inside data-collecting companies to push back on some of this. In too many organizations, we assume it’s someone else’s problem to think about what we’re building, collecting, storing, and processing. As of next year, we’ll have a legal responsibility to think about it in Europe, but if the recent past is any guide, a lot of companies will focus on appearing compliant rather than building more responsible services from the ground up.

Is your signup form collecting information that’s a marketing nice-to-have but isn’t needed to provide your service? Do you know whether your app collects passive information, about things like battery life, that can be used to identify an individual phone? How easy is it to disable location services, erase user data, or delete your app entirely?

Instead of going along with policies that enable companies to collect as much data as they can get away with, we can individually and collectively choose to redesign products, signup forms, and even system architectures to collect, process and store as little as we can get away with.
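To make that concrete, here’s a minimal sketch of what ‘collect as little as we can get away with’ can look like at the signup-form level. It isn’t from any particular product, and every name in it is hypothetical; the idea is simply an allowlist, where anything the service doesn’t strictly need is dropped before it’s ever stored.

```typescript
// A minimal sketch of data minimization at signup (all names hypothetical).
// Only the fields the service actually needs are kept; everything else is
// discarded before it can be stored, sold, or leaked.

interface SignupRequest {
  email: string;    // needed to create the account
  password: string; // needed to authenticate
}

// Allowlist approach: build the stored record from a fixed set of fields,
// rather than saving whatever the form happened to send.
function toSignupRequest(raw: Record<string, unknown>): SignupRequest | null {
  const email = typeof raw.email === "string" ? raw.email.trim() : "";
  const password = typeof raw.password === "string" ? raw.password : "";
  if (!email.includes("@") || password.length < 12) {
    return null; // reject incomplete signups instead of guessing
  }
  // Note what's missing: no phone number, no date of birth, no location.
  return { email, password };
}

// Extra fields submitted by an over-eager form never make it past this point.
const stored = toSignupRequest({
  email: "reader@example.com",
  password: "correct-horse-battery-staple",
  phone: "+1 555 0100",      // dropped
  dateOfBirth: "1990-01-01", // dropped
});
console.log(stored); // { email: "reader@example.com", password: "..." }
```

It’s a toy example, but the design choice is the point: if a field isn’t explicitly needed, it never reaches storage, and there’s nothing to hand to a data broker later.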

Bentham’s panopticon seems almost quaint in comparison to the growing partnerships between data-driven companies. And even Bentham believed that the panoptic view shouldn’t be limited to prisons, but that people in power should also be subject to challenge and forced transparency, if social reform (as he saw it) was to have any positive effect.

But even if we’ve moved far beyond Bentham’s ideas, we need to stop treating things like ‘data-driven personalization’ as neutral, or even benign, and we can’t pretend that it’s someone else’s problem.
