How We Think About Privacy Matters
Privacy isn’t everything. But it also isn’t nothing.
What is privacy?
Like the Aristotelian conception of the “good life,” privacy is a notoriously nebulous concept. It differs for each of us. Perhaps it is simply, as Judge Cooley once called it, the “right to be let alone.” In their famous law review article, “The Right to Privacy,” Samuel Warren and Louis Brandeis tackled this issue in relation to property rights and the common law. They argued that:
The design of the law must be to protect those persons with whose affairs the community has no legitimate concern, from being dragged into an undesirable and undesired publicity and to protect all persons, whatsoever their position or station, from having matters which they may properly prefer to keep private, made public against their will. (emphasis added)
Expectations of privacy differ based on context. In my home, with curtains drawn and doors locked, it can be reasonably said that my expectation of privacy is at its pinnacle. Upon leaving my cloistered environment, however, my expectation of privacy diminishes substantially. Once I step out onto the street, I enter the public domain and have willingly subjected myself to a degree of scrutiny and surveillance I would otherwise have avoided. Of course, this is not to suggest that I have no expectation of privacy in public.
Privacy is amorphous. What you consider private and worth protecting from the prying eyes of others is unlikely to be the same as what I would. Rather than debating the metaphysics of defining privacy, we should ask what types of privacy we feel a sense of moral entitlement to. Delimiting the scope of privacy is a far better starting point for thinking about our interests in it.
There are many facts or bits of information about each of us. Our names. Where we were born. How much we weigh. The intimate emotional details of our personal relationships. What we are doing or saying or thinking at any given time. What we read. What we like to look at on the Internet. And so on. To make a claim to privacy is to make a claim of control over these kinds of facts or information about us. When we make this kind of claim to control, we’re setting ourselves up as gatekeepers to this information. If somebody wants access to it, we think they need our consent. Otherwise, access constitutes a wrongful breach — an invasion — of this special domain of facts. For the most part, social convention determines what we can reasonably expect to fall inside of the charmed circle. There are special circumstances when I can insist on withholding my name. But I can’t reasonably expect others to refrain from using my name to identify me to others. That’s basically what a name is for.
The trouble with building an idea of privacy rights atop conventional expectations of privacy is that our expectations are based to a significant extent on technology. But today’s most pressing controversies about privacy arise precisely because technology has advanced so fast and our conventional expectations haven’t caught up. The way those conventions shake out will depend to a large extent on the rules we do or don’t put in place to regulate access to personal information.
The use of new technologies may violate current privacy conventions while also promising to deliver momentous benefits. For example, delivery drones will, as a matter of course, be able to see activity on private property that otherwise would have been entirely invisible. But it’s crucial to recognize the possibility that our interests in enjoying the benefits of delivery drones may ultimately outweigh our interests in, say, not being caught on camera sunbathing in the buff. If privacy expectations built around older technologies are allowed to rule, we may lose out on the benefits of innovation.
It works the other way around, too. If it becomes technologically trivial to, for example, listen in on lovers’ quarrels inside people’s bedrooms, and steps aren’t taken to prevent this sort of eavesdropping, we may well stop expecting those conversations to be private. But that doesn’t mean that we weren’t entitled to keep them private all along. Some fear that the government’s semi-secret use of new surveillance technology is like this, reshaping our expectations in a way that runs roughshod over our legitimate interests.
The question then is what, given current and foreseeable technological possibilities, our expectations of privacy ought to be. To answer that question, we need to get clearer about the nature of our interests in privacy.
Our interests in privacy are largely a matter of the likely consequences for our lives, both positive and negative, when we give some of it away. It’s necessary to weigh the risks against the possible benefits. But the risks and benefits of granting (or refraining from restricting) access to personal information depend on who we’re granting it to. That’s why it’s important to distinguish between interests in privacy with respect to the government, private individuals, and corporations.
The state, in classic Weberian terms, is the institution that holds a legal monopoly on the use of force. It is constrained (or ought to be) in its ability to invade the private precincts of life because it has the ability to use that force to coerce, imprison, and revoke the rights of citizens under narrowly tailored rules. It is for that reason that the Fourth Amendment requires both probable cause and particularity before permitting agents of the state to intrude upon the privacy of individual citizens.
The interests we have in restraining the state’s access to our privacy are significant. Governments throughout history have regularly used intelligence and law enforcement as a means of quashing dissent, blackmailing political rivals, and disincentivizing the most marginalized members of society from voicing concerns in political affairs. Conversely, the benefits of permitting government to peer deeper into our private lives rest on the tenuous assertion that it permits the state to keep us all more secure. That may be true, to a point, though the extent to which the tradeoffs favor security over civil liberties is a highly contested debate. We certainly shouldn’t dismiss the possible benefits out of hand, but we should embrace a healthy skepticism of delegating substantial surveillance powers to the state.
These protections are baked into the Constitution precisely because of the need for the strongest legal safeguards against government overreach into the private lives of citizens. If the state is to hold a legitimate monopoly on the use of force, the checks on that power need to be strongly in favor of restraint, judicial oversight, and congressional review. The nature of the state’s authority is also the chief reason why rules governing government surveillance ought to be so much more stringent than those governing the private sector.
Returning to Warren and Brandeis’s earlier piece, they noted that “the right to privacy ceases upon the publication of the facts by the individual, or with his consent.” Once an individual consents to relinquishing his or her privacy, the matter would appear to be settled. It seems like an easy case. In the modern era, however, what makes such determinations of consent so difficult is the question of what information about ourselves we own. Is it merely that information that can identify me? Or is it any information about me, which could, if released into the wild, cause me harm above a certain threshold? Further, can facts about me be owned at all? These are tricky questions. Rather than assess the fundamental question of property in facts and reputation, it is better to examine what actually constitutes a “harm” resulting from a violation of privacy.
My friend and colleague Will Rinehart summed up these issues quite nicely in a recent post for the American Action Forum. He notes that “people will often state a preference for privacy, and yet will be very willing to trade information for little to nothing. These harms seem to be relatively costless.” How courts determine limitations on the use of certain types of data will have massive reverberations throughout the Internet economy. As Rinehart notes, this is important because “data is the asset on which companies are built,” and “data use limitations translate into limits to innovation.” (The issue of legally actionable privacy harms is an interesting conversation currently raging over proposed FCC privacy rules. For greater detail on these issues, I highly recommend reading comments submitted by the American Action Forum and the International Center for Law and Economics.)
Perhaps, then, the ideal solution to preventing such harms lies in crafting an affirmative, “opt-in” system for relinquishing data online. After all, if information tied to the individual is to be monetized by private firms, it only seems reasonable that one should have to consent to its distribution, ex ante. Unfortunately, reasonable though this type of regime may sound, it has its costs.
As Rinehart discusses in a more recent post, opt-in regimes — for ISPs in particular — tend to increase the costs of goods and services provided to consumers. Although his case study focuses on opt-in service offerings made by the telephone company US West, costs associated with opt-in systems have similarly deleterious impacts on consumer welfare and innovation across other sectors:
In other industries where opt-in regimes have been imposed, studies have found higher costs and slowed innovation. A 2000 Ernst & Young study of financial institutions found that these mandates cost the entire industry $56 billion. For charities, the cost of compliance with an opt-in privacy law would have been nearly 21% of their total revenue. In Europe, the implementation of restricted information sharing rules decreased the efficacy of advertising by 65 percent relative to the rest of the world, cutting off the lifeblood of Internet startups. The cost of privacy regulation is one of the reasons why Europe lags in startups.
Although we may sometimes view the advertisements fueled by our data as petty annoyances, those business models have been the primary drivers of online economic growth over the past quarter century. As Larry Downes discusses in a 2013 research paper:
Advertisements are offers. Those that are perceived as “ads” are offers that are at least slightly off. But an ad for the right product or service, offered at the right time to the right person at the right price, isn’t an ad at all. It’s a deal.
In order for that type of targeted advertisement to work most effectively, Downes continues, “suppliers need the preferences, habits, and transactions of large numbers of users, which are consolidated, mined, and analyzed to find patterns and common behavior.” Google, Facebook, and other Internet firms rely on this data in order to provide all of us with services we value for the functional price of zero dollars and zero cents. Of course, it isn’t technically free: it still requires that we provide tacit consent for relinquishing a certain degree of privacy. Is that a fair tradeoff? The growth of the Internet economy, which is primarily driven by this model, suggests that for most people the answer is an emphatic “yes.” But it’s not simply pecuniary interests at play.
Consider that the growth of large data sets, fed by individuals providing their information to third parties, is also a major driver of artificial intelligence research, genome mapping, and efforts to prevent the outbreak of diseases before they become epidemics. Big data, which is to say everyone’s data in aggregate, can have a profound impact on the betterment of mankind, to say nothing of the anfractuous feedback loops that drive technological progress.
None of this is to contend that privacy harms are trivial. Rather, the key takeaway is simply that there are tradeoffs — significant tradeoffs — to be considered when attempting to tailor narrow constraints on what firms can and cannot do with acquired information, as well as the means by which they acquire that data. The benefits of granting corporate actors access to some of our personal information are well-documented. Though there are certainly situations in which harms materialize, it is important to bifurcate our understanding of what constitutes a privacy harm emanating from private actors, as opposed to the harms stemming from government surveillance.
Whereas our interests in curbing expansive government surveillance rest on fears of unwarranted repression, the interests associated with reining in private data collection seem largely predicated on baseless fears of potential abuse that seldom materializes. That is to say, the benefits of a flexible and more tolerant disposition towards private firms accumulating our private information are vast, while the costs seem minimal. This does not mean that regulations or statutory mandates should be entirely disregarded as potential solutions to corporate overreach or market failures. It is merely to suggest that a far more temperate regulatory and legal disposition should be taken when examining the privacy interests at stake with corporations accumulating data about individuals. In the hierarchy of concerns and costs associated with data collection, corporations and private firms should fall a distant second to the concerns towards government surveillance.
As long as consumers retain the option of refraining from using such services, or have the ability to opt-out, privacy-conscious individuals and those with a higher threshold for revealing personal information can both benefit in the modern digital economy.
The government’s ability to wield a legal monopoly on force compels us to constrain its ability to violate privacy through strong legal protections. Corporations also possess special powers and limited liability protections that can create adverse incentives to abuse individuals’ privacy, which may at times justify legal protections. In many situations, however, corporations can be appropriately constrained through user agreements, individual education and empowerment, and prevailing social norms. Our privacy interests vis-à-vis our fellow citizens, by contrast, are far more heavily reliant on social norms than on statutory remedies.
Though most states and localities have statutes protecting the privacy of individuals, the law is not always the most effective ex ante deterrent to such practices. For example, surreptitiously surveilling one’s neighbor in a state of undress is undoubtedly invasive, but the costs of being caught are not confined to legal recourse. In fact, such actions are more likely to be dissuaded by fear of being punched in the face. Or, barring the fear of violence, the knowledge that being caught engaging in impropriety would result in social shaming (e.g., being branded the neighborhood peeping tom) can be a powerful mechanism deterring socially undesirable behavior. Egregious violations of sanctified privacy norms often result in social excommunication, which for most people, as social beings, is usually a fairly effective deterrent.
What about new technologies and their potential to violate prevailing conventions surrounding privacy? Are these concerns as pronounced in our associations with other individuals as they are with corporations or the government? Again, social norms actually tend to keep pace with new technologies that impact those precincts of our lives where privacy is paramount. Take the camera, for example. Nowadays, cameras are everywhere, in part due to the proliferation of smartphone technologies, and we have come to adjust our expectations of photography accordingly. (How many times have you been caught in a photo unexpectedly, only to find it posted to social media later?)
It wasn’t always this way. In fact, the Warren-Brandeis “Right to Privacy” article mentioned earlier was, according to the traditional account, spurred by contempt for the perceived invasiveness of press photography: Samuel Warren was reportedly infuriated that newspaper photographers had shown up to capture society events involving his family. These days, one expects images of socialite events to be captured and posted in the newspaper. In the late 19th century, however, such was not the prevailing norm.
Times change, as do technology and social norms. Though our expectations of what constitutes the sacrosanct boundaries of private life fluctuate, certain expectations have held strong. Though cameras have proliferated and we have less expectation of privacy in public, we still hold fast to a strong desire for privacy in our own homes, especially when in a state of undress. The same is true in the gym locker room. Most everyone in the locker room has a camera on their phone, but strong informal norms governing the appropriate use of this technology usually provide a powerful incentive to refrain from snapping pictures.
Social norms tend to govern our expectations of privacy with regard to other individuals quite well. As technology continues advancing, it seems likely people will continue to adapt accordingly, as long as the benefits appear to outweigh the costs of incorporating new technologies into our day-to-day lives.
A formal definition of privacy continues to elude us, and probably always will. But we can contextualize its value, though valuations may differ from person to person. By recognizing those areas in which we have an interest in privacy, we can better formalize an understanding of when and how it should be prioritized in relation to other values. By differentiating the harms that can materialize when it is violated by government as opposed to private actors, we can more appropriately understand the costs and benefits in different situations.
Understanding the key tradeoffs we make when deciding how much of that privacy to give away is important. That’s also why it’s important to make clear distinctions between the types of harms associated with government surveillance, the “surveillance” we willingly subject ourselves to by private firms, and those violations committed by our fellow citizens. Strong legal and regulatory guidelines may be appropriate in one situation but not another. Those same guidelines may also be better applied to one type of institution over others.
There have always been and always will be differences as to how each of us defines privacy. So, to paraphrase Hunter S. Thompson, let us raise our glasses and make a toast to privacy: whatever it is, and wherever it happens to be.
Originally published at niskanencenter.org.