Facebook’s Going to Be OK, but Science Is Taking a Hit

Thoughts on why the Facebook emotional contagion study blew up and why it matters

Scott Robertson
Jul 9, 2014

I am a researcher in the area of sociotechnical systems. As part of my university job I study what people do on Facebook and Twitter. Lately, my research community has become involved in an imbroglio over a controversial study on emotional contagion done by Facebook. I offered my initial impression to a local tech blogger and I ventured a few comments on friends’ Facebook pages, but I haven’t written about it in any detail. Now, in a recent blog post, Stanford researcher Michael Bernstein has called on those of us in this research community to be more vocal. In fact, he charged that our silence has let others control the story, and he’s right. I am always talking about how scientists should play a larger role in public discourse and, by the way, the National Science Foundation advocates that its Principal Investigators (I am one of them) figure out how to make their work understood by a wider audience too. So, I’m stepping into it on the Facebook study.

Background

On 17 June 2014, the extremely prestigious Proceedings of the National Academy of Sciences (PNAS) published a study titled “Experimental evidence of massive-scale emotional contagion through social networks.” The study was conducted by a researcher at Facebook and two researchers then at Cornell University. In the study, Facebook manipulated the content of 689,003 users’ news feeds by filtering emotional posts, so that some users saw relatively more positive posts and others relatively more negative posts. The researchers found that people who saw relatively more positive posts subsequently produced more positive posts themselves, and people who saw relatively more negative posts produced more negative posts. In other words, as measured by words in and words out, the researchers demonstrated in a social network context and by experimental control (as opposed to an observational study) the already well-known phenomenon dubbed emotional contagion.
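To make “words in and words out” concrete, here is a minimal sketch of word-list sentiment scoring, the general technique behind such measurements. The word lists and the emotion_percentages() helper are toy assumptions of mine for illustration; the actual study relied on an established psycholinguistic dictionary of positive and negative terms, not these lists.

```python
# A minimal sketch (not the study's code): score a post by the percentage of
# positive and negative words it contains. Toy word lists for illustration only.

POSITIVE = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE = {"sad", "angry", "awful", "terrible", "lonely"}

def emotion_percentages(post: str) -> tuple[float, float]:
    """Return (% positive words, % negative words) for a single post."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 100 * pos / len(words), 100 * neg / len(words)

print(emotion_percentages("So happy and excited about the weekend!"))
# roughly (28.6, 0.0): two positive words out of seven
```

Aggregate percentages like these across millions of posts, compare the groups, and you have roughly the kind of outcome measure the paper reports.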

Something of a firestorm followed concerning the ethics of this study. The main concerns are:

1. The researchers had no business manipulating the news feeds of users for the purposes of an experiment without their informed consent.

2. The researchers had no business attempting to manipulate users’ emotions without their informed consent.

3. Informed consent involves at a minimum that participants be told what will happen to them and given the choice to opt out.

The researchers said in the article that Facebook’s Data Use Policy, to which all users agree, covers the informed consent issue. They further said that Cornell University’s Institutional Review Board (IRB) had determined that the project was conducted “by Facebook, Inc. for internal purposes” and that it therefore “did not fall under Cornell’s Human Research Protection Program.” Nonetheless, PNAS issued an “Editorial Expression of Concern” about the study subsequent to its publication.

A pretty comprehensive lineup of posts and articles about the ethical issues and other aspects of the situation can be found here, with a sample of voices from the research community (as opposed to the press) here, here, here, here, here, here, here, and here (and there are a lot more).

The Big Deal

I’d like to bring something new to this debate by asking two questions: “Why did this blow up so much?” and “What difference does it make?”

First, there are two entities involved here: a company and a university. This matters because a company like Facebook can, quite frankly, do whatever it wants so long as it is legal. A university, on the other hand, is bound by stricter and loftier standards. The fact that a company did this, particularly Facebook, is part of why it blew up so much. The reason is that the company is situated in a cultural milieu currently embroiled in significant debates about privacy and the power of corporations. The fact that a university did this is part of why it matters so much. The reason is that academia and the scientific enterprise in general lie at the edge of the aforementioned culture battle and are currently engaged in separate struggles for their legitimacy.

The Company

Let’s be honest, this experiment was absolutely acceptable as far as corporate research goes according to current American cultural standards. If Facebook wants to, it can decide to censor all negative content from every user’s news feed tomorrow, or censor all positive content, or only allow Hobby Lobby stories to get through, or ban all political material, or only allow posts about the Buddha and Alice Cooper, etc. In fact, the use of algorithms for selecting the content that appears in users’ news feeds is at the heart of Facebook’s business and you can bet your PayPal account that Facebook does a lot of research on this problem, including testing different algorithms on subsets of their users.

Every company with the resources conducts research on its customers aimed at increasing satisfaction and expanding business. Cereal companies place one box design on shelves in store A and a different box design in store B and then they see how much cereal they sell in the two stores. Music streaming companies use one algorithm to select music for group A and a different algorithm to select music for group B and then measure the number of likes they get in the two groups. Virtually all Internet companies test new interface designs on subgroups of their customers before they release them to all of their customers. This goes on all the time, often without any customer’s permission and, by and large, without anyone complaining. Often, we even tell ourselves that these studies are to our advantage because the companies are making the products we use better. So what’s different in the case of Facebook?
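For concreteness, here is a minimal sketch of the A/B pattern just described: split customers into variants, expose each group to a different design, and compare an outcome. The variant labels, the metric, and the simulated purchase rates are all hypothetical.

```python
# Minimal sketch of an A/B test: split customers into two variants, expose each
# group to a different design, and compare an outcome metric. All names and
# rates here are hypothetical, for illustration only.
import hashlib
import random
from statistics import mean

def assign_variant(user_id: int) -> str:
    # Deterministic hash split so the same customer always sees the same variant.
    digest = hashlib.sha256(f"box-design-test:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

outcomes = {"A": [], "B": []}
for user_id in range(10_000):
    variant = assign_variant(user_id)
    purchased = random.random() < (0.10 if variant == "A" else 0.12)  # simulated data
    outcomes[variant].append(1 if purchased else 0)

print({v: round(mean(xs), 3) for v, xs in outcomes.items()})  # e.g. {'A': 0.1, 'B': 0.12}
```

The deterministic hash split is a common choice because the same customer always lands in the same group without any extra bookkeeping.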

One thing that’s different is that this was presented as a research study about a social phenomenon. After all, PNAS does not publish the results of internal consumer studies. As such, some people feel that they have been used in a way to which they are unaccustomed. It seems there is a cultural agreement in our society that a magic ring exists around consumer research such that it is OK to be studied surreptitiously as consumers but not OK to be studied for any other purpose. Within Facebook this research probably is useful as a consumer study that could be relevant, for example, to using measures of emotional sentiment in algorithms. But to the public it sounds like some kind of evil manipulation of their emotions by scientists who just want to know if they can do it and, honestly, when you read the paper’s abstract it is not reassuring: “These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.”

Another thing that’s different is the degree of intrusion and the general cultural context of pervasive privacy penetration. The cereal box example above seems quaint when contrasted with the more intrusive practice of using customer loyalty cards to track very personal buying habits and patterns of individual shoppers. Add to this the widespread practice of aggregating personal data from different tracking systems and the result is that the modern consumer exists in a surveillance web that is pervasive and inescapable. It’s true that many adults who use Facebook, and everyone reading this, know that their demographics, friend networks, clicking history, buying habits, and even their very words are the property of Facebook and that Facebook can use them as it pleases. But now perhaps Facebook is figuring out a way to gauge emotional states and then turn around and manipulate them?! This scares people and feeds the general anxiety about the state of affairs in our society.

Of course, emotional manipulation is exactly what advertising does. An ad is meant to influence your mood in a way that is far beyond what Facebook ever imagined. In fact, ads are tested all the time to see how effective they are in putting you in the right mood to, of all things, part with your money. This can include not only making you happy, but also making you feel guilty (Did you buy that life insurance?), inadequate (Did you buy that makeup? that car?), or anxious (Did you sign up for that home security system?). Guess what, Facebook may now be able to tell when you are feeling happy, sad, guilty, inadequate, or anxious from your comments in real time, allowing them to hit you up with exactly the right product at the right moment. From Facebook’s point of view it is better not to present you with products you’re not in the mood for, but it is arguable whether this makes the user experience better and, anyway, once again it is creepy (to use a word that appears in many articles on this topic) to most people. It is reminiscent of the days of subliminal advertising and A Clockwork Orange, and we all know where that leads.

It seems as if Facebook has at least walked up to a line where what they might be able to do to measure and manipulate mood and the appropriateness of how they use that information no longer balance each other. Nonetheless, this is absolutely within their prerogative as a company and you would expect them to conduct research on this very capability.

Finally, an interesting twist on this is Facebook’s “bad boy” reputation. They are famous for pissing off their users and getting away with it. To tell you the truth, I think it has become part of the company’s brand, possibly to their marketing advantage. At first it was almost innocent, with undesired changes in the design of the news feed interface. Widespread complaining did nothing to change the endless updates. It escalated with the addition of advertising to the news feed, and then again with the use of users’ pictures and names in ads to their friends. Now Facebook is fighting about the right to eavesdrop through the microphones of mobile devices. At each step, alongside the chorus of complaining has been a parallel message that “we’ll all get over it and nothing will change,” which has been absolutely true. Maybe we even like Facebook better because they’re so bad, and then we are ashamed of ourselves. So to some extent I believe that this debate is also fed by a kind of company-customer abuse cycle and the simple, if transient, joy of Facebook bashing followed by renewed loyalty.

It occurs to me that much of the above seems like an apologia for Facebook. Personally, I utterly deplore the surveillance society that we have built for ourselves, an environment in which Facebook plays but a small part. If I had my way, I would make it illegal to offer discounts to customers in exchange for turning their personal buying habits into a commodity for someone else — which, switched around, is the same thing as making people pay to maintain their privacy. Personally, I think that owning a person’s behavioral data and then buying and selling what you know about the activities of citizens in the marketplace is close to owning, buying and selling people and should be outlawed as a violation of human rights. But my personal feelings are so far from the mainstream as to be nearly irrelevant. In the identity-sucking, nonstop-eavesdropping, ego-marketing society that we have built and in which we live, Facebook didn’t even come close to going out of bounds for a company conducting internal research. They can legitimately say, “Hey, that was nothing,” and I suspect that this cultural learned helplessness is also one of the driving forces behind our anger.

The University

Now let’s turn our attention to Cornell. Universities have an Institutional Review Board (IRB) that is responsible for reviewing all research by their employees that involves humans, animals, biological entities, or tissues of various kinds. Universities do this because of their commitment to the ethical principles involved in such research and also because they are required to if they want to accept funds from the U.S. government. Two of the researchers were at Cornell when the study was conducted and they correctly filed with Cornell’s IRB. Cornell’s IRB decided that the study was basically Facebook’s problem and none of Cornell’s business, at least in part because the manipulation had already been completed when the IRB application was filed and the data already existed.

At my university I can’t sneeze around another person without consulting the IRB. Joking aside, I have to file paperwork with the IRB even to get a finding that my research is exempt from its purview, and even then I still have to follow rules for informed consent or explain how it is impossible to obtain consent. I have conducted approved studies on Facebook behavior where I did not obtain informed consent on the condition that I observe only behavior that is public to all Facebook users. Even under those circumstances, I am required to anonymize all data that I collect and anything that appears in a publication. In other words, if you as a Facebook user put a rant on the news feed of a public figure that is visible to all users, then I as a researcher can use that rant as a data point in a study only as long as I don’t reveal, or even record, information that could identify you. It sounds like this was essentially where Cornell’s IRB thought the emotional contagion study fell. But notice that in this circumstance I don’t manipulate anything at all. I just observe, and that turns out to be a pretty critical distinction.
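To make the anonymization requirement concrete, here is a minimal sketch of the pseudonymization step it implies; the field names, the keyed hash, and the study key are illustrative assumptions, not a description of any particular project’s pipeline.

```python
# Minimal sketch of pseudonymizing public posts before anything is recorded:
# replace identifying fields with a keyed one-way hash. The field names and
# the study key are hypothetical, for illustration only.
import hashlib
import hmac

STUDY_KEY = b"replace-with-a-study-specific-secret"

def pseudonym(identifier: str) -> str:
    return hmac.new(STUDY_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:12]

def record_public_comment(author_name: str, comment_text: str) -> dict:
    # Store only a pseudonym and the text needed for analysis, never the name.
    return {"author": pseudonym(author_name), "text": comment_text}

print(record_public_comment("Jane Q. Public", "This policy is outrageous!"))
```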

I have conducted other studies in which I have been more obtrusive, for example looking at the behavior and content in personal Facebook news feeds. But I have very significant restrictions placed on me in such cases. First of all, I have to do this in a laboratory and obtain informed consent. I have to get volunteers and then explain to them exactly what I plan to do. I have to get each volunteer’s permission to see anything in their personal news feed or even to watch them use their personal news feed. If I were going to change anything about their experience, even something simple like whether they would see pictures or not see pictures, I would have to explain that they were in an experiment and that the Facebook information they were looking at might appear different to different people in the experiment. Most important, I would have to tell them that they could leave the experiment at any time with no penalty. I would have to tell them that if they want me to discard all their data then I will do it. When the experiment was over, I would have to explain exactly what the experiment was about. If I had manipulated what they saw, I would have to tell them exactly how and I would have to explain what effects I thought it might have. I would have to secure their data so that it could not be seen by others and destroy it after a specified period of time.

If the Cornell researchers had been applying to the IRB to conduct the study from scratch, they would have had to meet all of the above requirements. But the researchers were asking the IRB if they could participate in the analysis and interpretation of data that had already been collected by the company. I’ll put it right out there and say that if I were in the same position and my IRB said “no worries,” I might have made the same misstep. But I do think it was a misstep.

Again, instead of second guessing and exploring all of the ethical issues here, let’s ask why it matters that they weren’t more carefully considered by the academic IRB. Some have pointed out that the controversy imperils the cooperation between corporate and academic research labs. With this kind of heat, Facebook and other companies, which own the data after all, might just “go dark” and conduct their internal studies without anyone outside of the company knowing. That would be unfortunate, but I think there is a bigger issue at stake. Specifically, by not doing the right thing, the researchers put the reputation of the academy, and of the scientific enterprise in general, at risk.

We live in a time of unprecedented anti-scientific rhetoric and inexcusable scientific ignorance that serves as a Petri dish for charlatans and hucksters. Climate scientists are characterized as liars who are only out for grant money. Biologists and geologists are tarred as atheists who want to turn children away from God. Behavioral and social scientists are targeted as prurient busybodies who study irrelevant issues at best and culturally corrosive issues at worst. The connection between data and reality is broken for many in our society, and this disconnect is being exploited. Our universities are the only defense and we cannot afford to have them perceived as careless or nonchalant at best, or as purveyors of unethical research studies at worst.

I am not much concerned about the potential harm done to unwitting participants in the emotional contagion study. Why? Because I do trust the researchers and the research tradition. Comparing this to the Milgram experiments on obedience or the Stanford prison experiments is hyperbole, and don’t forget that those studies (from the Pleistocene epoch, I think) provided the genesis for IRBs in the behavioral sciences. I have read the study and my feeling is that the manipulation of positive and negative posts had a minimal impact on participants. The term “emotional contagion” might be hyperbole in this case too. But the public should not have to trust them or me, and the public will not trust them or me. Witness lawsuits already forming on the horizon, one of which charges that “[t]he company purposefully [sic] messed with people’s minds.” Is that how we want the reputation of academia to end up? If we don’t rigorously apply our ethical principles, then that’s exactly where we academic researchers are headed.

Reflection

It is fascinating that the Facebook study refers to two recent articles which used data from the famous Framingham Heart Study to explore emotional contagion. One study looked at measures of happiness and the other at depression, examining whether these states of mind spread through networks of people who knew each other. (Yes: in both cases the states spread through friend networks out to three degrees of separation.) The Framingham Heart Study is legend, and yet its parallels with the Facebook study are interesting. The Framingham researchers collected a truly breathtaking panoply of information about several thousand people over three decades, including information about their everyday behaviors, moods, beliefs, relationships, eating and drinking habits, smoking, exercise, mental states, and a variety of health measures including sexually transmitted diseases. The data is archived and available for new studies on issues that were completely unforeseen, and therefore not agreed to, at the time of data collection. Yet there is no ethical quandary. Why? Because everyone in the original study volunteered to participate with the express intention that the data would be mined for patterns. If participants wanted to drop out, that was fine. Participants knew that they were in a study and why they were in the study. They were not given different things to eat or drink, told whether to smoke or not, instructed on how much to exercise, told happy or sad things, etc.

And Facebook actually could have done the same thing. They maintain a vast repository of data about their users’ posts over which, arguably, their users have ceded control by agreeing to the Data Use Policy when they joined. They could have looked at the happy and sad words that people’s friends were using and then observed, unobtrusively, whether happy and sad words emerged in response — an observational study. But that’s not what happened. A little change in the experimental design, the manipulation of the mood words, made a big difference to the ethics of the study and created an outcry.
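To make the contrast concrete, here is a minimal sketch of what such an observational analysis could look like: no feed is altered, and the only question is whether the tone of a user’s friends’ posts correlates with the tone of the user’s own later posts. The word list, data layout, and function names are toy assumptions, not the method of any published study.

```python
# Minimal sketch of an observational design: nothing in any feed is altered;
# we only correlate the tone of friends' posts with the tone of the user's
# subsequent posts. Toy word list and data layout, for illustration only.
from statistics import correlation  # requires Python 3.10+

POSITIVE = {"happy", "great", "love", "wonderful", "excited"}

def positive_rate(posts: list[str]) -> float:
    words = [w.strip(".,!?").lower() for post in posts for w in post.split()]
    if not words:
        return 0.0
    return 100 * sum(w in POSITIVE for w in words) / len(words)

def contagion_correlation(users: dict) -> float:
    """users: user_id -> {"friend_posts": [...], "later_posts": [...]}."""
    friend_tone = [positive_rate(u["friend_posts"]) for u in users.values()]
    own_tone = [positive_rate(u["later_posts"]) for u in users.values()]
    return correlation(friend_tone, own_tone)  # Pearson r, with no manipulation
```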

Many people are worried about the participants and the company, but I am worried about neither. Instead, I am afraid that this incident has tarnished academia at a time when we can least afford it. I am a frequent Facebook user and I harbor no illusions about the privacy of my data or the manipulation of my news feed. For Facebook I suppose this is another bump in the road which has little impact on its business. But for science I fear that it is an opening for our opponents. It won’t take influential people like Senator Tom Coburn long to move from ridiculing scientific studies they don’t like, especially those in the behavioral and social sciences, to claiming that researchers are dangerous and reckless maniacs. I hope we take this crisis as an opportunity for critical reflection.

How It Might Have Been Different

The emotional contagion study could have been done in a laboratory with volunteers and mocked-up news feeds. I’ll bet it would have worked, and all ethical requirements could have been met easily, but there certainly would have been criticism of the experiment’s ecological validity. In other words, people would have said it was too fake to tell us anything believable. The advantage Facebook has is that they can experiment on real users doing real things in real time, which most people agree is OK if the company is testing font sizes or interface designs, but many feel is creepy (there’s that word again) and borderline if they are looking at emotional contagion.

If they want to do this kind of research, then the onus is on the company to construct a research sandbox that is ethical in terms of the larger scientific community. I have worked at large companies that conduct user research, and in all cases they maintained a pool of willing study participants and obtained informed consent before every experiment. No doubt Facebook has such a pool for studies in which people come to their labs or participate in their surveys, but they should establish such a pool consisting of users who are willing to be subjected to unannounced manipulations of their ongoing experience and who consent to be observed in situ for the purpose of these experiments. I’ll bet that if Facebook asked users to opt in to a research pool of this type they would get tens of thousands of volunteers from whom they could derive perfectly acceptable and unbelievably large representative samples of any kind they pleased. Nobody needs 689,003 participants in their study. A study involving 10,000 participants would be a jaw-dropper in most behavioral science circles, and there is no reason to imagine you would need more.

This pool of participants, or a subset thereof, could receive an online informed consent form before each study, or maybe just once as a blanket consent, which states that from time to time they might become part of a research experiment in which their Facebook experience will be modified from the norm and their usage data collected for purposes outside of the normal business purposes of the company. In this form they could be reassured that their data would be kept anonymous. They could be informed that an internal team had considered whether they might be harmed in any way and determined that they would not be. When an experiment ends they should be told that they have been part of it and the goals and methods should be explained to them. Most importantly, they should be offered a check box in which to indicate their agreement and consent, along with an option to “unconsent” at any time.
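A minimal sketch of what the opt-in pool and its “unconsent” option could look like follows; the class and method names are hypothetical, not anything Facebook actually exposes.

```python
# Minimal sketch of an opt-in research pool: a user is only eligible for a
# feed-manipulation experiment if they have actively consented, and they can
# withdraw at any time. Names are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class ResearchPool:
    consented: set[int] = field(default_factory=set)

    def opt_in(self, user_id: int) -> None:
        self.consented.add(user_id)

    def opt_out(self, user_id: int) -> None:
        self.consented.discard(user_id)  # honored at any time, no penalty

    def eligible_for_experiment(self, user_id: int) -> bool:
        return user_id in self.consented

pool = ResearchPool()
pool.opt_in(42)
assert pool.eligible_for_experiment(42)
pool.opt_out(42)
assert not pool.eligible_for_experiment(42)
```

The important property is that eligibility is checked against the consent registry at assignment time, so withdrawing really does remove someone from any future experiment.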

With that, Facebook could take part in scientific discourse without the taint of ethical compromise and academic researchers who work with them could explain to their IRBs how all ethical concerns were being addressed. (I’m pretty sure my next application to the IRB to conduct a Facebook study will get some special attention.) I see no downside to this, and I suggest it as a model for all sociotechnical research conducted in cooperation with corporate entities that involves manipulation of the ongoing experience of users.

I gratefully acknowledge and thank Dr. Mara Miller for her insights on this matter and input to the article.


Scott Robertson

Professor and Department Chair, Information and Computer Sciences, University of Hawaii at Manoa. I study HCI, sociotechnical systems, and digital government.