The user agreement has become a potent symbol of our asymmetric relationship with technology firms. For most of us, it’s our first interaction with a given company. We sign up and are asked to read the dreaded user agreement — a process that we know carries complex and inconveniently detrimental implications for using the service, but one that we choose to ignore. Our privacy hangs in the balance, yet we skim to the end of those tedious terms and conditions just so we can share that photo, or send a group message, or update our operating system…

It’s not our fault. These agreements aren’t designed in a way that would allow us to properly consider the risks we’re taking. Tech companies have no incentive to change them. Lawmakers don’t seem to know what the alternatives are. But that doesn’t change the reality: User agreements are a legal and ethical trap, and they betray the trust of users from the very start.

The Accident of User Agreements

In the late 1990s, when lawmakers started looking for ways to protect people’s data, user agreements were a convenient place to start. Regulators were still handling the young internet market delicately and came to rely on a system of “notice and choice” to protect individuals’ privacy. The idea was that if users were given notice of a company’s data practices and chose to continue using the service, then they consented to those practices. In this way, lawmakers decided, the privacy of users was respected because the users were in control.

And here is where the modern approach to data protection began to fail.

Whatever function the privacy policy was originally meant to serve, it became the default system for informing users about how their data would be used. It was convenient for companies that this approach stuck — it allowed them to transfer many of the risks of data processing to users, while maintaining their increasingly lucrative data-based advertising businesses. To this day, these agreements largely exist to legally protect companies and not to fully inform users in an intelligible way.

Those notice-and-choice principles, developed at the advent of the commercial database, have now come to underpin almost every data-protection regime in the world.

Arguments against privacy policies have been made for years, but there’s been little progress. That’s because lawmakers and companies still think of privacy in terms of control.

The recent debate about personal data, triggered by the Facebook and Cambridge Analytica revelations, is long overdue, and with it the scrutiny of user agreements. At April’s congressional hearing on the matter, lawmakers focused on Facebook’s user agreement and associated privacy policies because those documents are the only ones that formally explain the company’s business model and represent the user’s relationship with Facebook. The agreement is meant to inform users about Facebook’s intentions with their data and act as the mechanism that gives the company permission to proceed.

In his testimony, Mark Zuckerberg highlighted that “the first line of our Terms of Service say that you control and own the information and content that you put on Facebook…you own [your data] in the sense that you chose to put it there, you could take it down anytime, and you completely control the terms under which it’s used.”

This narrative is common in Silicon Valley, where every tech company has conceptualized privacy in terms of control over the data collected, how it is used, and where it goes. Google and Microsoft emphasize control in their terms of use, as well as in promoting their privacy dashboards. The idea is that if you are given options about your data, then companies must be doing their part for privacy. That might sound great, but it’s exactly what allows services to turn people into data spigots.

Companies always give you the option to “allow” them to collect and process your information. But because these businesses depend upon users selecting the “permission” option, their incentive is to use every possible strategy to engineer your consent: Some companies gloss over privacy-protective options; others make consent seem quite attractive, keep asking or nudging you for permission, or (in some countries) make consent a condition of using the service or its full range of features.

Privacy policies play right into this control narrative. Companies that have short and simple privacy policies can say that more length and complexity will just confuse people and deprive them of meaningful control. Companies that have long, complex user agreements can say that short and simple privacy policies don’t have enough information for people to meaningfully act on.

Either way, the notion of privacy as control benefits companies by allowing them to leverage an illusion of agency via terms and settings to keep the data engine humming.

The questions put to Zuckerberg during that hearing were premised on the assumption that boilerplate user agreements are informative for users. That’s absurd, not least because it assumes that users read them. Acknowledging this, Congress seemed stuck on the issue of whether agreements should be simpler and more accessible or longer and more comprehensive.

Senator Brian Schatz told Zuckerberg that with terms of service at 3,200 words and a privacy policy at 2,700 words, “people really have no earthly idea of what they’re signing up for.” Later, Senator Chuck Grassley told Zuckerberg, “Facebook collects massive amounts of data from consumers, including content, networks, contact lists, device information, location, and information from third parties, yet your data policy is only a few pages long.”

Not even the people making the rules can agree on what these agreements should look like.

Death by Complexity

The long, comprehensive user agreement is easy to mock. Facebook’s full-length privacy policy is 2,731 words, which would take most people more than 10 minutes to read, though comprehension is another matter altogether. Academics have estimated that it would take users 25 days to read every agreement on every site they’ve visited.
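
For a rough sense of where that 10-minute figure comes from, here is a back-of-the-envelope sketch of the arithmetic; the 250-words-per-minute reading speed is an assumed average, not a number reported in the piece.

```python
# Back-of-the-envelope reading-time estimate.
# Word counts come from the article; the reading speed is an assumed average.
PRIVACY_POLICY_WORDS = 2_731    # Facebook's full-length privacy policy
TERMS_OF_SERVICE_WORDS = 3_200  # Facebook's terms of service, per Senator Schatz
WORDS_PER_MINUTE = 250          # assumed average adult reading speed

def reading_minutes(word_count: int, wpm: int = WORDS_PER_MINUTE) -> float:
    """Estimate how many minutes it takes to read `word_count` words."""
    return word_count / wpm

print(f"Privacy policy alone: ~{reading_minutes(PRIVACY_POLICY_WORDS):.0f} minutes")
print(f"Policy plus terms: ~{reading_minutes(PRIVACY_POLICY_WORDS + TERMS_OF_SERVICE_WORDS):.0f} minutes")
```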

Policies are so broad as to be meaningless. Facebook’s terms say the company collects almost everything you expose to it, from “things you do and information you provide” and “your networks and connections” to “information from third-party companies.”

The availability of knowledge doesn’t necessarily translate into meaningfully informed decisions. Lawmakers favor mandated disclosures, like the warnings on cigarette boxes, because they are cheap and counterbalance “information disparity” — the reality that companies often know much more than consumers regarding the wisdom of a decision. But in this context, users are being asked to consider the privacy implications of each post they create — an impossibly complex calculation to make about future risks and consequences.

Oversimplifying Risk

The modern data ecosystem is mind-bogglingly complex, with many different kinds of information collected in many different ways, stored in many different places, processed for many different functions, and shared with many other parties. All that nuance gets glossed over when companies try to simplify and shorten information; the risks are hidden or made to seem more benign through abstraction.

The ambiguous language of Facebook’s data policy makes it hard for most of us to assess the risks of our data being shared with an abstract “other partner.” Did we anticipate the possibility that 87 million Facebook users would have their information improperly shared with an academic who scraped data from an online quiz and provided it to a dubious data broker who weaponized the data against people in a way that was corrosive to autonomy and democracy?

The terms don’t meaningfully cover that eventuality — but how could they, with a situation that’s so complex and enormous in scope? And what other manifold risks are we failing to contemplate when reading these kinds of vague disclosures?

Despite this, lawmakers have begun to push for more simplified data policies, including through the EU’s newly enacted General Data Protection Regulation (GDPR). One proposal by a member of the California Assembly would have limited privacy policies to 100 words.

Privacy Policies Are About Control

We miss a lot when we think of privacy in terms of control. First, notions of individual control don’t fit well with privacy as a collective value.

“Data privacy is not like a consumer good, where you click ‘I accept’ and all is well,” wrote scholar Zeynep Tufekci. “Data privacy is more like air quality or safe drinking water, a public good that cannot be effectively regulated by trusting in the wisdom of millions of individual choices.”

Thinking of privacy in terms of control can diminish the role of trust online. Trust is the essential ingredient of safe, sustainable, and productive relationships, and control undermines that by putting people in adversarial positions with companies.

When companies ask for our personal information, they are asking us to trust them to keep us safe. But the system is set up so that when people take “control” by consenting to data practices, they often end up giving permission to companies to act recklessly, with little or no responsibility to look out for the data subject. Companies betray the trust of users — who probably don’t even realize what they are agreeing to — under the guise of respecting users’ autonomy.

The path forward is to create rules that don’t require or even expect users to read these agreements. Our rules should make companies trustworthy regardless of the control we are given. This means making the relationship between users and the companies entrusted with their data a fiduciary one.

In other words, because it is virtually impossible for us to be fully informed of data risks and exert control at scale, our rules should make sure companies cannot unreasonably favor their own interests at our expense. They should owe us nondelegable duties of reasonable care and loyalty by default. Binding trust rules would move us past much of the hand-wringing about the design and substance of privacy policies, which is born out of a concern that these agreements leave people exposed.

In Defense of Privacy Policies

I’m not advocating that we ditch privacy policies entirely, because they can be indispensable transparency and regulatory tools. But they are designed for lawyers, regulators, journalists, advocates, investors, and industry.

They can’t benefit the humble user, and pretending otherwise supports the larger fallacy that it’s our responsibility to police Facebook and the rest of the industry. That’s simply too big an expectation.

Lawmakers should instead direct their attention to the structure, aesthetics, and functionality of the services themselves. How platforms like Facebook are designed and the signals that interfaces, buttons, and symbols give off shape people’s behavior and expectations more than any tucked-away boilerplate. Padlock icons, privacy settings, and badges all act as invitations for users to trust companies, services, and technologies. Certain kinds of casual and playful web designs downplay the risks of disclosure online. Ephemeral media like Snapchat can make it seem like images disappear, even when that’s not really true. Online services are usually built to make you feel safe sharing — even when you’re not.

Privacy policies are useful governance documents. But users should be protected regardless of what these policies say or how long or clear they are. Tech companies shouldn’t be allowed to launder the risk of disclosure onto users by engineering permission for dubious data practices. And if platforms like Facebook want to invite the trust of users, they should be required to respect the faith we place in them.