Why I’m leaving Facebook

(Quoting from my own recent Facebook status update, wherein I announce my departure.)

donblair
Jul 7, 2014

Hi Folks! It’s been a really fun ride, but I’m going to quit Facebook after learning about the recent ‘emotion study’ they conducted. If you’d like to know more about that study, search online for news about the ‘Facebook emotion study’. In particular, I found a recent NYTimes essay by Lanier compelling.

If you’re also considering quitting Facebook, you can easily download your contacts, the photos you’ve posted, and some other FB-related material by following their instructions here.

If you’d like to reach me, my email is donblair@pvos.org; and I’m going to channel (wisely or not) most of my Facebook energy into posting on Twitter: @donwblair

Please understand that I still respect your decision to remain on Facebook if you choose to; for what it’s worth, here’s my rationale for leaving:

When I’ve told friends recently that I’m bothered by the idea that Facebook, as a communications platform, has been systematically filtering the messages I’ve been exchanging with my friends in order to manipulate our attitudes and behaviors, the general response I’ve received has been along these lines: “Well, of course they’re doing that! They’re essentially an advertising company. You signed a user agreement that allows for such things; it’s perfectly legal behavior on their part.”

Their behavior may be technically legal, and I may have been naive to assume that such studies were *not* occurring; but I feel that Facebook’s recent trespasses have crossed a threshold for me.

As I think many others regularly do, I’ve been treating the Facebook ‘news feed’ as a community bulletin board: a place to post ideas, announcements, and pictures, intending that my Facebook friends might see them. While I knew that some algorithm must determine ‘who sees which feed items when’, what Facebook did in their recent study was the equivalent of preferentially showing me the ‘bad news’ items that had been posted on the community bulletin board — with the express intention of trying to make me feel less happy. This is more or less the equivalent of the post office deciding to withhold letters that contain good news in order to make me depressed.

Whether or not they are naive to do so, I believe that most Facebook users, like me, assume that their news feed, as a communications channel, is not biased in this way. Most users understand that some of the items in their news feeds are ads; and most users likely understand that their news feed items are filtered so that the ‘most recent’ or ‘most relevant’ items are prioritized; but most of us assume that the posts we see from friends are not systematically filtered by qualities such as their ‘emotional valence’.
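To make concrete what I mean by filtering on ‘emotional valence’, here is a purely hypothetical sketch in Python: the word lists, function names, and scoring below are my own inventions for illustration, since Facebook’s actual ranking system is, of course, proprietary.

```python
# Purely hypothetical illustration of 'filtering by emotional valence';
# the word lists and scoring are invented, not Facebook's actual (proprietary) system.

NEGATIVE = {"sad", "angry", "awful", "lonely", "depressed"}
POSITIVE = {"happy", "great", "wonderful", "excited", "grateful"}

def valence(post):
    """Crude sentiment score: positive words add, negative words subtract."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def biased_feed(posts):
    """Show friends' most negative posts first: the kind of hidden bias at issue."""
    return sorted(posts, key=valence)

posts = [
    "Feeling sad and lonely today",
    "Great news, we got the grant!",
    "Just a regular Tuesday",
]
for p in biased_feed(posts):
    print(valence(p), p)  # the most negative post floats to the top
```

The point of the sketch is only that such a filter is trivially easy to build and completely invisible to the people whose messages pass through it.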

Aside: the fact that millions of people are using a communications platform that may, on a whim, filter their messages in this way seems to me to be far more dangerous even than the usual concerns about breaches of privacy on FB. I think I could perhaps be convinced that I could give up my privacy and still be a happy person; it has long been suggested by some in Silicon Valley that ‘you no longer have privacy; you’re better off without it; get over it.’ In fact, it could be argued that any one of the things that most of us consider instrumental to happiness — health, wealth, security, friendship, family, and romance, for example — isn’t, in itself, really *necessary* for happiness. There are good examples of people who seem to have achieved happiness without doing well in some, or even all, of these areas. In the end, however, I think most of us can agree that what really *does* matter to us is our happiness. Happiness is what we hope for; it is what we wish our friends to have. What Facebook has attempted to do with their recent study is to negatively impact the happiness of their users *directly*. And I value my happiness, and that of my friends, far more than I value the content I’m able to exchange on any communications platform. I want to be able to trust any communications platform I use not to *systematically* attempt to make me or my interlocutors *less happy*. If that requires taking the time to send a handwritten letter or an individual email, or waiting long enough to deliver the news in person, I’d much rather make sure that any good news I’d like to deliver to my friends does, in fact, reach them, and that it won’t be arbitrarily suppressed as part of some psychology experiment. I’m not sure how FB’s behavior relates to censorship; but prima facie, the subtle manipulation of a communications channel that most consider to be free from such manipulations seems to me even more worrisome than overt censorship.

And so: I’m going to explore other means of facilitating communication with my friends: email (if I can ensure that the service is not filtering my emails by content), letters (remember letters?), and in-person conversation. I am going to look into simple ways of holding private conversations electronically: encryption is becoming easier, and practicing secure, encrypted online communication seems as though it would address many of the recent concerns that have emerged around privacy, oversight, and data manipulation online.
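As a small example of how low the barrier to encrypted communication has become, here is a minimal sketch using Python’s `cryptography` package; the choice of library and the message text are just assumptions for illustration, and any modern encryption toolkit would serve equally well.

```python
# Minimal symmetric-encryption sketch using the 'cryptography' package
# (pip install cryptography). The key must be shared with the recipient
# over some trusted channel beforehand.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # secret shared between sender and recipient
f = Fernet(key)

token = f.encrypt(b"Good news! See you at the cafe on Sunday.")
print(token)                      # ciphertext: unreadable to any intermediary

print(f.decrypt(token).decode())  # the recipient recovers the original message
```

This is symmetric encryption, so it sidesteps the harder problem of exchanging keys; for email, public-key tools in the PGP/GPG family are the more usual route. The point is simply that the tooling is no longer exotic.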

I’ll wait a day or two before deleting my account, so that this post, and my email / Twitter contact info, are broadcast on the ‘news feed’ — if, in fact, they will be.

Hope to see you / talk to you soon!

Cheers,
Don

———-

[several FB friends commented on this update — below is my reply to their comments …]

Thanks for the thoughtful replies, folks! I’ll miss these sorts of discussions, which are one of the beautiful aspects of a communications platform like Facebook. I also worry that I’ll find it more difficult to keep in touch with y’all! I share your concerns that ‘filtering’ inevitably occurs via other online media (and in our searches, our news, etc.), and I’m not sure how to address that, myself; perhaps full transparency around the filtering algorithms (‘this is how we rank the items in your feed’) would go most of the way towards making me feel more comfortable. I’d love to discuss this issue more with all of you, especially those of you who are involved in academic research with human subjects, as I’m sure you’ve had to consider the ethics around such research. As for Duncan Watts’ recent essay on the controversy — I whole-heartedly agree with Watts when he suggests that more research around social networks should be conducted, and published. But I think Watts is mistaken in portraying the bulk of those who are outraged by Facebook’s ‘emotion’ study as “insist[ing] that the behavior of humans and societies is somehow an illegitimate subject for the scientific method”, and in suggesting that the many critics of the study are “attack[ing] the pursuit of knowledge for knowledge’s sake”. On my reading, Watts seems to frame the choice between endorsing Facebook’s research methodology, or not, as a “choice between ignorance and understanding”. I think this take is wrong on two important counts.

First: I think Watts misreads the negative reaction among the ‘lay public’ to Facebook’s emotion study as a negative reaction to studying social networks scientifically; my impression from the general reaction to such studies in the press is that most people embrace the insights that come from research into online social networks, and that most people are generally enthusiastic about scientific approaches to researching human behavior; of all the scientific disciplines, the results of psychology and social science seem to me to be among the most popular and well-received outside the walls of academia.

Second, and more importantly: I don’t think it’s useful for Watts to frame, as I think he does, the concerns around Facebook’s ‘emotion’ study as a choice between ‘ignorance and understanding’, as if it were not possible to endorse cutting-edge research methods in anthropology, sociology, political science, and psychology while at the same time condemning research practices in those fields that are unskillful, of little scientific merit, and/or unethical. There are of course many historical examples of profoundly unethical scientific research methods that produced knowledge that has been usefully incorporated into the present-day scientific canon; it is a dangerous conflation of perspectives to paint critics of such unethical methods as ‘anti-science’.

But in any case: the more I read about Facebook’s internal research culture, the more convinced I am that a boycott of their network is warranted, at least pending significant changes in their approach: the accounts one reads of the attitude and methods used by Facebook’s researchers — see, for example, this article in the Wall Street Journal — suggest that Facebook’s ‘Data Science Team’ conducts the majority of its research with very little internal oversight or review, and virtually no external transparency or review, despite what may be an admirable and unusual-in-the-industry record of publishing the outcomes of some of their research. Historically, it has been demonstrated repeatedly that researchers in any given field are sometimes not very good at discerning unethical aspects of their own research methods — which is why oversight and review are so important, and why research institutions insist that scientists in their employ submit any research involving human subjects to an ethical review board.

I don’t doubt that the Facebook research team includes some very intelligent and capable people; I find it likely that their team also includes researchers who have extensive training in relevant best practices — even some with training in the intricate details of e.g. going through an IRB process. But the lack of any significant oversight for their research, combined with what I imagine must be strong incentives to carry out such research (strong business incentives, and/or enhanced academic reputations among their collaborators) with rapid turnaround and at a massive scale, makes me sufficiently worried about the ethics of Facebook’s past, current, and future research practices that I feel more comfortable opting out of their system.

In line with what’s been written in the comment thread above, I’m not yet sure I’ll find a better option for internet-based social networking; perhaps a service that was fully ‘open source’, revealing all of its filtering algorithms, and had a user agreement in line with my values? Anyway, I want to be clear: I ❤ social science research, and I ❤ social science researchers — and I especially ❤ those clever computational methods applied to social networks that provide novel and useful insights into human behavior; but, at this point, I feel better disconnecting from the network, and publicly registering my reasons for doing so — even though it’s going to feel lonely at first, and I’ll miss you all! I do have hope for social interaction outside FB, though — e.g., perhaps I’ll end up engaging more people in conversation while sitting in a cafe, or on the subway …

———————-

[Final comment I posted on that same FB thread before leaving, in response to friends who suggested that I stay and ‘help make FB better’:]

Ah! You’re all too kind, and I do generally endorse the ‘why don’t you stay and fight to make it better?’ sentiment — but I don’t think it actually applies here. “Stay and fight” would make sense if we had formed our own community organization … or if this were, for example, an email list (thanks for the good suggestions in that regard from some of you).

But FB isn’t a platform we’ve created ourselves; it’s not a governmental service, or an institution over which we have some political control through representation or a simple vote; it’s a corporation, and it’s providing us with a ‘free’ service in exchange for the legal right to manipulate our interactions online, however it sees fit, towards its goals of a) convincing us to purchase particular products and b) selling information about us to business interests. FB is legally obliged, as a corporation, to pursue its business model; we are not going to convince FB to stop manipulating our behavior, as this manipulation is a key component of their business model. Our only control over FB’s mode of operation might be to refuse to engage with FB until it adopts, say, more meaningful and transparent research and privacy policies; but I no longer feel that even these changes would be enough to assuage my anxiety at being manipulated in the ways that I likely would continue to be by FB.

I would like to see whether we can find a way to communicate with one another as effectively as FB allows, but in ways that are not so grossly manipulated by external interests. Nearly half of my news feed, and a third of the screen real estate on my FB page, is now taken up with targeted advertising; this advertising is there because quite sophisticated folks, with solid scientific research to back them up, believe that these ads have a strong effect on my psyche. And now FB is moving beyond merely presenting targeted ads based on the information I share — it is filtering the *messages I intended to exchange with my network* in order to further manipulate my behavior, and assess my psychology.

Certainly, any speech act or publication can be viewed as a ‘manipulation’, in some sense; but FB’s filtering of the information exchanged on their platform has now resulted in an extreme and insidious form of distortion — and, since I was a frequent FB user, this has been a distortion of one of the main communication channels upon which I’ve been relying. It is a distortion that occurs with no accountability, and in support of a narrow business agenda that I don’t myself share. The only option, as I see it, is to leave FB.

I have hope that we can all maintain meaningful social contact with one another without subjecting ourselves to these sorts of distortions: from what I can gather, humans were able to lead full and meaningful lives before FB, and I trust that we all still can ☺ Ping me at donblair@pvos.org, and keep in touch!
