The Destructive Silence of Social Computing Researchers

Why are others running the Facebook ethics discussion for us? And a note on why hammering offline ethics onto online experimentation is misguided.

Michael Bernstein
4 min read · Jul 7, 2014

Facebook Data Science and Cornell published a study. The experiment suppressed status updates that matched LIWC positive or negative emotion categories in users' news feeds. This manipulation caused people to produce more updates matching the emotion categories that remained relatively amplified in their news feed ranking.
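To make the manipulation concrete, here is a minimal sketch of what a condition like this could look like in code. The function names and the toy word lists are my own illustration; the study used the proprietary LIWC dictionaries and Facebook's internal ranking system, neither of which appears here.

```python
import random

# Toy stand-ins for the proprietary LIWC emotion dictionaries (illustrative only).
POSITIVE_WORDS = {"happy", "love", "great", "wonderful"}
NEGATIVE_WORDS = {"sad", "hate", "awful", "terrible"}

def matches_category(post_text, lexicon):
    """True if any word in the post appears in the given emotion lexicon."""
    return any(word in lexicon for word in post_text.lower().split())

def filter_feed(candidate_posts, suppressed_lexicon, omission_prob=0.5):
    """Probabilistically omit posts matching the suppressed emotion category,
    mimicking a per-condition down-weighting of emotional content."""
    feed = []
    for post in candidate_posts:
        if matches_category(post, suppressed_lexicon) and random.random() < omission_prob:
            continue  # suppressed: this post is left out of the user's ranked feed
        feed.append(post)
    return feed

# A user in a hypothetical "reduced positivity" condition sees fewer positive posts:
feed = filter_feed(["I love this!", "Such an awful day", "Meeting at 3pm"],
                   suppressed_lexicon=POSITIVE_WORDS)
```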

And then the internet exploded.

Except, that is, for us. Nope, the people who typically publish this kind of experimental and design research in social computing have been totally silent. There are lots of insightful analyses and critiques from communication scholars, sociologists, policy folks, and other really smart researchers. In fact, here’s a meticulously maintained list of scholars’ reactions. But I can see only two posts from anyone who is a regular attendee of any SIGCHI or social computing conference, where this kind of research regularly appears (plus off-list additions that weren’t on the list as of my writing, and ones since I posted this, yay! #1, #2, #3, #4, #5).

This silence is incredibly dangerous. Here is what is happening: the media talks to the people who seem to be running the conversation, amplifying their voices. When national conversations happen and IRBs want to update their policies, they will contact those same people. We will be cut out of the conversation, and our own research will get shamed, normed, or edited out of existence. This is a very bad thing for research.

So, social computing researchers, I need to hear from you. You’ve built sociotechnical systems and run experiments on them. You’ve data mined social computing platforms. You may have even organized an ethics workshop at CSCW. Tell us all what you think.

2/24/19: I’m updating this now, nearly five years after publishing the original piece. I was wrong. At the time, the evidence that I had seen suggested that most people were aware that Facebook was filtering and ranking their news feed. The ensuing discussion made clear I was totally wrong about this conclusion. Without people understanding that the news feed was being algorithmically ranked and filtered, I now agree with those who question the nature of informed consent here. I continue to feel that we need to design modern methods of informed consent that will allow science, engineering, product, and society to collaborate effectively.

My original opinion from the article now follows:

Having located my mouth, I will now proceed to put my money in the same place. I think the general framework of the study was fine ethically. Hammering ethical protocols designed for laboratory studies onto internet experimentation is fundamentally misguided.

Informed consent seems to be the crux of the issue. Should we require it? There are many forms: opting in for each study, a one-time “opt in to science!” button on each site, or recruiting via advertisement. What about debriefing afterwards?

Regardless of the moral imperatives, let me start by saying, as a designer of social systems for research, that any such requirement would have an incredibly chilling effect on social systems research. IRB protocols are not the norm in online browsing, so users are extremely wary of them. Have you ever tried putting a consent form inline on your social site? I have, and I can tell you that it drives away a large proportion of interested people who would probably want to participate if the interface were different. It looks scary. It’s opt-in, and defaults are powerful. Forget that it’s there to protect people: it makes the entire site look like something underhanded is going on. “I just came to check out this site, I don’t want to be agreeing to anything weird.” It’s the wrong metaphor for today.

Just as we consider human subjects protocols relative to the expected standard of care (e.g., a reasonable expectation of privacy), we need the same consideration of internet applications’ user expectations. I have no reasonable expectation of transparency into a site’s manipulations. If I expect Google’s search results to be changing under my feet every time I search, then you shouldn’t need informed consent to run a study that changes the search results. Facebook’s news feed likewise adds new predictive features continuously. Now, not everyone shares this viewpoint. But, I think this is an empirical question, and I would much rather fix that situation than kludge a protocol onto every web site I visit ten times in the month before the CHI deadline.

But classical laboratory and ethnomethodological informed consent should not be thoughtlessly pasted into online environments. Let’s not be paternalistic and righteously declare what everyone needs to be protected from. Surprise: social computing researchers and computational social scientists are trained in the Belmont Report too, and we understand and believe in the value of human subjects guidelines. But we need to rethink our assumptions about how human subjects research plays out in an environment where thousands of online experiments are run every day by product managers at Google, Facebook, Starbucks, Microsoft, and the Obama campaign. Let’s take a user-centered approach and understand what people’s expectations are. Then let’s design a solution that maximizes the benefits and minimizes the risks of our investigations. Because…isn’t that what we are supposed to be doing in the first place?
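For a sense of how lightweight those product experiments are, here is a generic sketch of the deterministic bucketing that commonly assigns users to conditions. It is not any particular company’s system; the function and experiment names are my own illustration.

```python
import hashlib

def assign_condition(user_id, experiment_name, conditions=("control", "treatment")):
    """Deterministically bucket a user into an experimental condition by hashing,
    so the same user always sees the same variant across requests."""
    digest = hashlib.sha256(f"{experiment_name}:{user_id}".encode()).hexdigest()
    return conditions[int(digest, 16) % len(conditions)]

# Every page load silently lands the user in one arm of the experiment:
variant = assign_condition(user_id="12345", experiment_name="ranking_tweak_v2")
```

The point of the sketch is that assignment happens silently on every request; no consent dialog sits anywhere in the loop, which is exactly the expectation gap worth studying empirically.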

Note: Before I started as an Assistant Professor of Computer Science at Stanford, I spent six months as a postdoctoral scholar on Facebook’s Data Science team. I have ongoing collaborations with the team. My opinion on the Facebook study is more correlative of my time with the Facebook Data Science team than causal — in other words, Facebook didn’t brainwash me, but keep in mind that I was the kind of person who was inclined to join the Data Science team in the first place.
