By now we all know that in 2012, Facebook conducted a study where it set out to affect users’ emotional states by manipulating which status updates they saw in their feeds. For a week, a small percentage (but a large number) of users saw fewer updates with “positive” words, while others had posts with “negative” words suppressed. A control group simply lost their friends’ updates at random. We don’t know that Facebook stopped doing things like this; we just know what they told us about the experiment last week in Proceedings of the National Academy of Sciences.
People have been angry about this. People are always angry when Facebook does one thing or another. But they have not been talking seriously about leaving Facebook. I think we should talk seriously about leaving Facebook.
I think most people on Facebook have a sense of what leaving would cost. They know that it would be difficult, and isolating; many people have a difficult time with the idea of disconnecting even for a week. It can be hard to really reckon the cost of remaining on Facebook until you leave. But as someone who has left, I can tell you that the cost of being on Facebook, the cost of handing over your connections with the people you love, is real.
Why I left, and what it was like
When I left Facebook at the beginning of 2011, I disconnected in one click from the roughly 100 people with whom Facebook had been my only form of contact. I felt the need to leave quickly, and I don’t know if my wall post telling people I was leaving even showed up in everyone’s feed, so there are lots of connections that I have just lost and never recovered: Susan, who was just about to get married; Shannon, who had become one hundred times more vibrant since she’d left the social strictures of our suburban high school; Jack, who was still singing in choirs.
Do you remember, in 2009 and 2010, when Facebook was changing their privacy settings at what felt like a pretty rapid clip? In December of 2009, Facebook eliminated the option to keep your friends list private.
Facebook founder Mark Zuckerberg announced that batch of changes in an open letter. He never mentioned, in this letter, that Facebook had eliminated an important privacy option. Instead he wrote that Facebook is “focused on giving you the tools you need to share and control your information.” He wrote that Facebook’s “work to improve privacy continues.” He wrote that users would have “even more control of their information.” He wrote that Facebook had created a “simpler model for privacy control.”
Whatever Mark Zuckerberg tried to gain by eliminating the option to keep friends lists private (and then obscuring this elimination while giving the illusion of transparency), he tried to gain it at my personal expense.
By the time I realized that friends lists could no longer be hidden from public view, it was too late. In that window of time before I had realized my vulnerability, a family member I was friends with on Facebook searched for, and began a Facebook correspondence with, a dangerous person from my past.
I had not had contact with this person in years, and he had not been able to find me, perhaps in large part because he didn’t know my new last name. I had also kept my social media activities with mutual acquaintances under wraps. But suddenly this person was flooding my non-Facebook e-mail inbox with messages that were by turns cheery and sweet, and then so threatening that my whole body shook. Two computers that had never accessed my personal website before, with IP addresses in this person’s home city and state, began crawling through every page and clicking on every link in my site. This person even called my then-husband’s place of work, looking for me. I was constantly nauseated from fear.
When I told a friend of mine this news, he advised me that staying on Facebook but keeping my wall private was probably not a sufficient safety strategy. He told me he’d been able to exploit Facebook security loopholes that let him view wall content that had been marked “private.” That’s when I decided I had to get off Facebook in a hurry, even if it meant leaving those hundred or so people behind.
The loss of all those Facebook connections hit me especially hard immediately after I left. I craved the calming presence and chatter of friends to distract me from my panic and soothe my exhaustion. But this was exactly the comfort I could not have, because it was no longer safe for me to be in the virtual plaza in which all my friends were hanging out.
Most of my family understood why I had to disconnect—but also, most of my family was still on Facebook, having conversations, sharing photos, and creating inside jokes that I am to this day left out of. Other family members were hurt, and my relationship with one in particular—the one through whom the person from my past was able to find my new last name—has never recovered.
Our fight and eventual estrangement played out like this: I accused this family member of negligence. They insisted that they’d always used the highest security settings. I again accused them of bad judgment and negligence, and they said they were hurt and didn’t want a relationship with me anymore.
It took me months to realize that there was no need for us to have had this particular argument at all. They probably thought their friends list was private, but the privacy settings were changing all the time. It didn’t have to be so confusing, and it shouldn’t have been.
Here’s what shakes me the most now: Facebook’s privacy changes laid the foundation for this breach of my personal privacy, and its doublespeak facilitated it. So how was Facebook’s complicity so invisible to me that I didn’t assign it any blame? That is, why did I accuse my family member of negligence, instead of accusing Facebook of negligence?
Facebook should have made it impossible for someone who loves me to have unknowingly put me at such risk. Facebook should have made it impossible for me to unknowingly put myself at such risk. But that wasn’t a priority. What I was taking for granted as just the way things were was actually just the way Facebook wanted things to be.
Facebook wants us to think that it aims to strengthen our connections with the people we love, but this claim is just as much doublespeak as that 2009 open letter. Facebook wants to strengthen our relationship with Facebook, using our friendships as vectors.
I’m reminded here of viruses, which, as Wikipedia points out, can only replicate inside the living cells of other organisms. Facebook benefits when this relationship remains invisible. When we make the mistake that I made—when we forget that Facebook is using our friendships as hosts, and not the other way around—our forgetting is very convenient for Facebook.
Imagine, for a moment, that you must quit using Facebook forever, starting right now. No more posting to Facebook or checking Facebook for the rest of your life. But don’t worry, you can still e-mail all those friends. Does that make you feel panicky? If you’re panicky, it’s a clue. Maybe you’ve been on Facebook for most of your life, so this kind-of-addicted feeling seems normal to you. It’s not normal.

I was talking with a woman in her 50s this weekend, who said to me, “I wish I could quit Facebook but it’s so addictive: ‘Oh, this person said this, that person said that, and oh, this person is taking boating lessons, let’s look at all the pictures of the boat,’ and then before I know it two hours have passed and I don’t even KNOW the person taking boating lessons!” This is what it feels like when your connections with a platform are being strengthened, as opposed to the connections with the people you love: you can spend two hours on Facebook looking at the boating lessons of people you don’t even know. This is very convenient for Facebook.
Facebook benefited from this same kind of convenient invisibility when I argued with the family member who had inadvertently made me vulnerable. I accused my relative, not Facebook, of negligence; my relative, not Facebook, was hurt by my accusations. This meant one more person, and one more person’s circle of friends, whom Facebook keeps, but whom I have lost. This is convenient for Facebook.
There’s no way for me to know, with 100 percent certainty, that the dangerous person from my past found my new last name (which led to being able to access so much of my current contact and biographical information) through Facebook. Maybe this person guessed that I had changed my name, and then correctly guessed the state and county in which I had filed the paperwork, and then called a clerk at the correct courthouse and got them to do a records search, because all that paperwork is on literal and actual paper. It seems unlikely, but it is possible. Here’s a question, though: why can’t I know? When I was scared for my safety, why couldn’t I know for sure who had accessed my page, and when? Why was that not a setting I was allowed to toggle? Someone set up that system to encourage looking without being seen, because looking, and then more looking, and then more looking, strengthens the looker’s relationship with the platform, no matter what their intentions are for the people they’re looking at. It’s very convenient for Facebook.
Consider the clusters of friends who were unknowingly part of the “emotional contagion” experiment. Consider that one of these users might have had a desperately hard day during that experiment, and might have posted an update to their wall hoping for some comfort from friends, only to feel ignored when the update got few or no responses (because their friends who were in the study might have not seen the message). This user would not have felt manipulated by Facebook—they would have felt ignored by their friends, and then craved more comfort, which they may well have gone looking for on Facebook again. It would have been very convenient for Facebook.
These days, when I read about Facebook wanting to be the platform on which people in developing countries access the internet, my stomach churns: this would be so convenient for Facebook.
Why don’t we kick Facebook out of our friendships?
I think we can expect, if we keep trusting Facebook, to keep having our trust abused. We have no reason not to expect this, and yet we’ve been letting Facebook stay in our most intimate relationships. Facebook has so far succeeded in convincing us that we have to let it stay so that we might keep our loved ones close. It does not have to be this way.
I’ve also been thinking about the psychiatrist Robert Jay Lifton in all of this. He studied how it’s possible to get group members to think and act in ways that are detrimental to themselves, but beneficial to those who are manipulating conditions. I realize that I run the risk of being alarmist here, but listen: Facebook is about milieu control. I think it’s worth considering what else might be happening alongside milieu control, especially because I think Facebook has given itself the mandate to extract what it can from people when they are most in need of human connection, and to offer a receptive audience to the most compelling bidders. I want us to be sensitized to the kinds of patterns and dangers that have a lot of precedent. I quote from Kathleen Taylor’s summary of Lifton’s eight markers of thought reform in Brainwashing: The Science of Thought Control (page 17 of the 2004 edition):
1. Milieu control: Control of an individual’s communication with the external world, hence his or her perceptions of reality
2. Mystical manipulation: Evoking certain patterns of behavior and emotion in such a way that they seem to be spontaneous
3. The demand for purity: The belief that elements outside the chosen group should be eliminated to prevent them from contaminating the minds of group members
4. The cult of confession: The use of an insistence on confession to minimize individual privacy
5. Sacred science: Viewing the ideology’s basic dogmas as both morally unchallengeable and scientifically exact, thus increasing their apparent authority
6. Loading the language: Compressing complex ideas into brief, definitive-sounding phrases, “thought-terminating clichés”
7. The primacy of doctrine over person: The idea that a dogma is more true and more real than anything experienced by an individual human being
8. The dispensing of existence: The right to control the quality of life and eventual fate of both group members and non-members
I’m not ready to call Facebook a totalitarian thought reform organization. But I am startled to find more alignment between Facebook’s behavior and these markers than I thought I would.
Okay, but what are our options?
I want us to excise Facebook from our relationships. Honestly. But when we do that, I don’t want it to feel like an emergency. I want us to be able to get rid of Facebook because we are building something better.
I have been pretty happy on Twitter. I have been pretty happy with e-mails and snail mail and old-fashioned chat rooms with friends. Diaspora is a thing, though I don’t know if anyone is actually on it. I make art that requires people to be in the same room with one another to experience it, and being in the same room is a thing too. This is a meager start of a brainstorm.
Here’s a question: whom would we trust to anchor us? Which person or group of people would we most trust to be ethical, generous keepers of gathering places? Do we need lots of little nodes as opposed to one big one? Who would be transparent instead of trying to be invisible? Who would want us to be fully informed and safe? Who would work to make sure that people would be able to leave the platform without losing the connections they built there, should they feel unhappy or unsafe on the platform?
What kinds of things would we have to know about the people running a social platform to know they wouldn’t start acting like Facebook, or worse?
In writing this essay, I’ve even become almost paralyzingly confused about the implications of publishing it. The thing we’ve learned to hope for, when sharing something we’ve created online, is that it goes, well, viral. We, as individuals, gain cachet when something we make goes viral, but that cachet rests on infrastructures that I distrust as a matter of course. What “going viral” means is that something has been shared widely across a platform that displays a count of user behavior. This means that every time anything we do goes viral, our personal and heartfelt work legitimizes the platform it spread across.
What do the platforms we legitimize with our personal and heartfelt work have to do to earn our trust? Right now, not enough. It all feels like a shady bargain.