Hacking the Facebook news problem

News organisations and citizens need to stay constructive and keep looking for ways to make Facebook a more informative space.

Evangeline
Global Editors Network

--

A parodic headline from The Onion

Amid growing concerns about Facebook’s role in spreading misinformation and polarising its users, recent months have seen lively debate about how the platform could reform itself. According to Frédéric Filloux, Facebook needs to keep its users on its services for as long as possible to sustain its pageview-based business. It has no objective interest in exposing them to news that contradicts their beliefs and could make them leave the site. Its algorithm is built to maintain users “in the warm comfort of the cosy environment (they) created click after click”. That might be why we should not expect too much change from within Facebook.

But with Facebook by far the dominant social media news source, we should not accept it remaining as siloed, opaque and misleading as it is right now. News organisations and citizens need to stay constructive and keep looking for ways to make Facebook a more informative space. The following are some recent experiments.

Combating misinformation

During the US presidential election, Facebook was widely criticised for its role in spreading misinformation. Craig Silverman showed that in the final three months of the US presidential campaign, top fake election news stories generated more total engagement on Facebook than top election stories from 19 major news outlets combined. As talk about fake news has boomed, so have proposed solutions. Many of them rely on fact-checking.

This graph presents the results of an experiment conducted by Nyhan and Reifler (2010) in which they asked participants to read a mock news article about President Bush’s claim that his tax cuts “helped increase revenues to the Treasury.” A random group of participants read a version of the story that included a correction noting that Bush’s tax cuts “were followed by an unprecedented three-year decline in nominal tax revenues, from $2 trillion in 2000 to $1.8 trillion in 2003.”

Could these solutions prove effective? Some researchers have found that correcting audience misperceptions can be ineffective or, worse, can backfire and make people cling to their views even more strongly. They have shown that audiences are generally prone to rejecting or ignoring statements that undermine their pre-existing beliefs. Political beliefs “seem to be closely linked to people’s worldviews and may be accordingly difficult to dislodge without threatening their identity or sense of self,” writes Brendan Nyhan.

However, some recent studies seem to challenge these claims. In 2016, after analysing 8,100 respondents’ reactions to 36 factual corrections on various topics, Thomas Wood and Ethan Porter concluded that “by and large, citizens heed factual information, even when such information challenges their partisan and ideological commitments.” Wood and Porter, along with Nyhan and Jason Reifler, conducted similar research on the US presidential election. They focused on Trump’s Republican convention speech in July, where he cherry-picked data to paint an alarming picture of homicide trends, even though, in reality, violent crime increased in 2015 versus 2014 but remained significantly lower than in previous years.

When participants in the study read a news article about Trump’s speech that included F.B.I. statistics indicating that crime had “fallen dramatically and consistently over time”, their misperceptions about crime declined compared to those who had read the article without the correction. Trust in Trump’s misleading conclusions about crime declined among both Trump supporters (from 77 percent to 45 percent) and Clinton supporters (from 43 percent to 32 percent). These results might seem to contradict Nyhan’s previous studies: no “backfire” effect could be found, and the correction proved effective in reducing misinformation in each partisan group. However, we should note that misperceptions persisted among a sizeable minority (34% in Trump’s camp and 32% in Clinton’s camp).

In light of these studies, it is with careful optimism that we should welcome the recent flurry of fact-checking tools. Among these many experiments, one favourably impressed me. At the Público Editors Lab (a journalism hackathon held in Lisbon in October 2016), Público prototyped Verifacto, a Chrome extension that enables users to access fact-checks without having to leave the article or the social media feed they are reading. Once implemented, Verifacto could work on any website. The extension would also allow users to highlight fishy claims they see anywhere online and submit them to the newsroom without leaving the pages they are on.

A fact-checked quote from a news article. A similar mechanism could work on a Facebook news feed.
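To make the mechanism concrete, here is a minimal sketch of how such in-page annotation could work, written as a browser content script in TypeScript. It only illustrates the principle described above; the claim list, the example URL and the styling are placeholders, not Verifacto’s actual implementation, which would presumably fetch its fact-checks from the newsroom’s own service.

```typescript
// Hypothetical content script: scan the page for known dubious claims and
// wrap each match in a highlighted link to its fact-check, so the reader
// never has to leave the page. The claim list is hard-coded placeholder
// data; a real tool would fetch it from the newsroom's own service.

interface FactCheck {
  claim: string;   // exact wording of the dubious claim to look for
  verdict: string; // e.g. "False" or "Misleading"
  url: string;     // link to the full fact-check
}

const factChecks: FactCheck[] = [
  {
    claim: "tax cuts helped increase revenues to the Treasury",
    verdict: "Misleading",
    url: "https://example.org/fact-checks/tax-cuts-revenue",
  },
];

function annotatePage(checks: FactCheck[]): void {
  // Collect all text nodes first so DOM mutations don't disturb the walk.
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  const textNodes: Text[] = [];
  while (walker.nextNode()) {
    textNodes.push(walker.currentNode as Text);
  }

  for (const node of textNodes) {
    for (const check of checks) {
      const start = node.data.indexOf(check.claim);
      if (start === -1) continue;

      // Wrap the matched claim in a highlighted link to the fact-check.
      const range = document.createRange();
      range.setStart(node, start);
      range.setEnd(node, start + check.claim.length);
      const mark = document.createElement("a");
      mark.href = check.url;
      mark.title = `Fact-check: ${check.verdict}`;
      mark.style.backgroundColor = "#fff3b0";
      range.surroundContents(mark);
      break; // at most one annotation per text node keeps the sketch simple
    }
  }
}

annotatePage(factChecks);
```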

The most obvious limitation of this project lies in its format. Practically speaking, the people who are exposed to the most fake news stories are unlikely to be proactive about installing a fact-checking extension. But for different reasons, this prototype could be a source of inspiration.

First, contrary to some publications that keep fact-checks buried in dedicated sections of their websites, Verifacto provides fact-checks on the very page you are reading. It is designed to debunk false claims as soon as the reader sees them. If the fact-checking team is quick enough, readers can see the correction before the falsehood has a chance to take hold. According to Lucas Graves, author of Deciding What’s True: The Rise of Political Fact-Checking in American Journalism, it is more effective to correct misinformation immediately, as research has shown that, once established, a mistaken belief is very hard to dislodge.

When you click on the fact-checked content, you have access to a curated list of fact-checks from different sources.

Secondly, Verifacto could also be very effective because it bases its fact-checking assessment on a collection of sources from different media outlets. Indeed, as Nyhan and Reifler wrote, fact-checking is more powerful when fact-checkers can show there is a consensus on the question that crosses partisan and ideological lines. Since readers are more receptive to sources that share their ideology, the Verifacto team would need to take care to include a wide range of sources in their fact-checking curation.

Lastly, Verifacto’s claim-submission feature is very user-friendly. At a time when news organisations are looking to strengthen ties with their audience, both to build more trust and to attract more readers, tailoring fact-checking to users’ needs is a great idea. Istinomer, a Serbian fact-checking website, has run an experiment similar to Verifacto’s with promising results: it increased communication with its community and provided the publication with new material to analyse.

Casting Some Light on Moderation

Facebook’s content moderation challenges are not limited to its role in spreading fake news. Facebook has also repeatedly come under fire over the way it moderates users’ posts.

When a user flags a post on Facebook, the post is routed to Facebook’s “community operations team”. It is assigned to a moderator who has, on average, ten seconds to decide whether or not to remove it. In order to stress-test the system, NPR flagged 200 comments that could appear controversial. They discovered that Facebook moderators made inconsistent decisions and numerous mistakes: Facebook took down comments that could be considered acceptable speech on the platform but did not remove some truly violent content, such as a call to shoot cops.

The comment in light grey was flagged by NPR but was not removed by Facebook’s moderation team

This investigation highlights issues that call for further enquiry. It is based on a small sample of comments handpicked by NPR, and it would be interesting to see whether there would be similar results with bigger samples and in different countries. But the methodology of this investigation may not be easily replicated for bigger samples, as it relies mostly on very time-consuming manual work.

How can newsrooms more systematically investigate Facebook comment moderation? A team from Der Standard addressed this question at the Süddeutsche Zeitung Editors Lab in Munich. They created Facebook Monitor, a prototype that tracks which comments disappear from the pages on which it is installed. It builds a database of deleted comments that contains not only the comment itself but also information about its sender, its timestamp and its number of likes.
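The core of such a monitor can be sketched in a few lines: snapshot the comments under a post at regular intervals and log anything that was present in the previous snapshot but is missing from the current one. The sketch below is an assumption about how a tool like this might work, not Der Standard’s actual code; fetchComments is a hypothetical stub standing in for whatever API call or scraper supplies the data.

```typescript
// Sketch of the deletion-tracking idea: snapshot the comments under a post
// at regular intervals and log anything present in the previous snapshot
// but missing from the current one. fetchComments() is a stub standing in
// for whatever actually supplies the data (an API call or a scraper).

interface Comment {
  id: string;
  message: string;
  senderName: string;
  createdTime: string; // ISO timestamp of when the comment was posted
  likeCount: number;
}

interface DeletedComment extends Comment {
  noticedMissingAt: string; // when the monitor first saw the comment gone
}

// Placeholder: replace with a real data source for the monitored page.
async function fetchComments(postId: string): Promise<Comment[]> {
  return [];
}

const deletedLog: DeletedComment[] = [];

async function checkForDeletions(
  postId: string,
  previous: Map<string, Comment>
): Promise<Map<string, Comment>> {
  const current = new Map<string, Comment>();
  for (const c of await fetchComments(postId)) {
    current.set(c.id, c);
  }

  // Any comment we saw before that is no longer returned counts as removed.
  for (const [id, comment] of previous) {
    if (!current.has(id)) {
      deletedLog.push({ ...comment, noticedMissingAt: new Date().toISOString() });
    }
  }
  return current; // becomes "previous" for the next polling round
}

// Usage: poll a post every five minutes and report how many removals
// have been recorded so far.
async function monitor(postId: string, intervalMs = 5 * 60 * 1000) {
  let snapshot = new Map<string, Comment>();
  setInterval(async () => {
    snapshot = await checkForDeletions(postId, snapshot);
    console.log(`${deletedLog.length} deleted comments logged for post ${postId}`);
  }, intervalMs);
}

// monitor("hypothetical-post-id");
```

Keeping the sender, timestamp and like count alongside each removed comment, as the prototype does, is what makes the kind of analysis described below possible.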

A few weeks after the Editors Lab, Der Standard used this tool to analyse the pages of the two main Austrian presidential candidates. In a widely read article, they disclosed that over a period of three weeks, approximately 24,700 comments were posted on the Facebook pages of the two candidates, Norbert Hofer (far right) and Alexander Van der Bellen (independent). Around 3,400 of these 24,700 comments were removed (around 13%). Should we interpret these high numbers as attempts to keep the conversation civil and constructive or as attempts to censor debate?

Examining the content of the deleted comments, Der Standard noticed that, on Norbert Hofer’s page, most of them (54.6%) could be characterised as “factual criticism” (containing no insults and backed by arguments). Only 9.5% of these deleted comments were insults. On Alexander Van der Bellen’s page, however, the largest share of the removed comments were insults (30.1%), though 21% could still be considered “factual criticism”.

The content of the deleted comments highlights what debates the candidates are trying to stifle and which voices they want to silence. For instance, it is interesting to see that, on Van der Bellen’s page, comments criticising his decision to run as an independent have been deleted. The same goes for one comment describing him as “hugely supported by the Green party”. These deletions might be interpreted as a sign of the candidate’s desire to distance himself from his former party and more generally from his anti-capitalist past in order to attract moderate voters. It is also interesting to see the previous positions that the candidates are trying to hide: for instance, Van der Bellen’s hesitations on the TTIP or Hofer’s editorial role in the sexist publication “For a Free Austria”.

It remains unknown who deleted a given comment: it could be the administrator of the page, the commenter themselves or Facebook’s moderation team. It is, however, possible to guess who did the deleting by looking at the nature of the comment: Facebook’s focus is on hate speech, whereas page admins and commenters are more likely to delete milder criticism. It is also possible to infer who deleted a comment from the time it was deleted, for instance by trying to track the working patterns of the page moderator. To check whether commenters had removed their own comments, Der Standard contacted some of them directly via Facebook. (None of them claimed to have deleted their own comment.)

Despite these limitations, Facebook Monitor is a great tool for making Facebook’s moderation system more transparent. At a time when Facebook is accused of censorship more than ever, it gives journalists the opportunity to investigate not only Facebook’s own restrictions on free speech but also the less-covered moderation carried out by Facebook page admins.

Diversifying the News Feed

Half a decade ago, Eli Pariser coined the term “filter bubble” to describe how personalisation-driven algorithmic systems, such as the one used by Facebook, were sheltering us from opposing viewpoints, reinforcing our pre-existing beliefs and thus leading us to feel like we occupy separate realities from our political opponents. In the aftermath of the US presidential election, this concept saw renewed popularity and triggered the development and rediscovery of tools that could help us widen our personal horizons.

A Chrome extension to diversify your news feed

After the election, Krishna Kaliannan, a New York-based engineer and entrepreneur, created EscapeYourBubble, a Chrome extension that injects our Facebook news feed with articles challenging our worldview.

When installing the extension, users pick which side they want to know more about: Republicans or Democrats. They choose the party outside their bubble, and the extension then overlays a news article expressing the “other” perspective onto their news feed once per visit to Facebook.

Screenshot from my Facebook feed
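The injection mechanism itself is simple enough to sketch. The snippet below shows one plausible way a content script could do it; the curated story list and the feed selector are assumptions made for illustration, not the extension’s actual data or markup.

```typescript
// Illustrative sketch of the injection mechanism: once per visit, prepend
// one story from the chosen "other side" to the top of the news feed.
// The curated story list and the feed selector are assumptions made for
// this example, not EscapeYourBubble's actual data or markup.

interface Story {
  title: string;
  url: string;
  outlet: string;
}

// The perspective the user asked to see more of, chosen at install time.
type Perspective = "republican" | "democrat";

const curated: Record<Perspective, Story[]> = {
  republican: [
    { title: "Placeholder conservative-leaning story", url: "https://example.com/r", outlet: "Example Herald" },
  ],
  democrat: [
    { title: "Placeholder liberal-leaning story", url: "https://example.com/d", outlet: "Example Tribune" },
  ],
};

function injectStory(perspective: Perspective): void {
  // '[role="feed"]' is a guess at the feed container and may well change.
  const feed = document.querySelector('[role="feed"]');
  if (!feed) return;

  const pool = curated[perspective];
  const story = pool[Math.floor(Math.random() * pool.length)];

  // Build a clearly labelled card and place it at the top of the feed.
  const card = document.createElement("div");
  card.style.cssText = "padding:12px;margin:8px 0;border:2px solid #c0392b;border-radius:6px;";
  const label = document.createElement("strong");
  label.textContent = "Outside your bubble: ";
  const link = document.createElement("a");
  link.href = story.url;
  link.target = "_blank";
  link.rel = "noopener";
  link.textContent = `${story.title} (${story.outlet})`;
  card.append(label, link);

  feed.prepend(card); // one injected story per visit, as the extension does
}

injectStory("republican");
```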

Having tested it myself, I found that it effectively exposed me to new points of view. For example, this unexpected portrait of a Muslim, immigrant, female Trump supporter would probably have been drowned in my constant flow of news without the extension.

Bursting readers’ filter bubble does not necessarily mean exposing them to sources they would not otherwise read. In my experience, the extension provided me mostly with articles that differed from my usual consumption but came from my usual sources (such as The Washington Post or The Chicago Tribune). I think it proved more effective this way as I was less tempted to discard them as biased.

Kaliannan’s goal is to create more empathy and understanding. And, to a certain extent, it worked for me. But I must also admit that the goal of the extension as described on its website leaves me uneasy: “We need to be more accepting of the views of our political opponents. And we need to understand that, at the end of the day, we are all trying to do what is best for our country.” Such a position could obviously be criticised as naive. But we also need to be aware that such sentiments can be dangerous, as they might end up normalising racist or misogynist positions.

A side-by-side look at the Facebook political news filter bubble

Last May, The Wall Street Journal developed an experiment to open our eyes to the “filter bubble” phenomenon. It built “Blue feed, Red feed”, a tool that gives users a side-by-side look at two live streams about divisive topics such as guns, ISIS, Donald Trump or abortion. The blue feed draws from publications favoured by very liberal Facebook users, according to a 2015 Facebook study, while the red feed pulls from sources favoured by very conservative users.

Screenshot of “Blue feed, Red feed” on 11 January 2017

“Blue feed, Red feed” has received wide acclaim, as it offers a rare opportunity to see, side by side, posts that would likely never be found in the same news feed. It casts light on a media ecosystem that struggles to agree on a common set of facts and helps us understand what “post-truth” means.

We should keep in mind that neither of these feeds is intended to reflect an actual individual’s news feed. The tool does not show sources like the Wall Street Journal or most of its biggest competitors, because their content is shared by Facebook users broadly across the political spectrum. Liberals and conservatives are likely to be exposed to more non-partisan discourse than “Blue feed, Red feed” suggests. Even if they do not share the same stories or interpret them in the same way, both sides sometimes rely on the same sources, which shows that there can still be common spaces online.
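A rough way to picture how the two columns are populated, under the assumption that each outlet carries an audience-alignment score as in the 2015 study, is a simple threshold split; the outlets, scores and cutoff below are invented for illustration, not the study’s actual values.

```typescript
// Illustrative threshold split: each outlet carries an average
// audience-alignment score (negative = shared mostly by self-identified
// liberals, positive = by conservatives). The outlets, scores and the
// 0.6 cutoff are made up for this example.

interface Source {
  name: string;
  alignment: number; // -1 (very liberal audience) .. +1 (very conservative audience)
}

const sources: Source[] = [
  { name: "Example Left Weekly", alignment: -0.8 },
  { name: "Example Centrist Daily", alignment: 0.1 },
  { name: "Example Right Report", alignment: 0.75 },
];

const CUTOFF = 0.6;

// Only strongly aligned outlets feed each column; broadly shared outlets
// fall in the middle and appear in neither, which is why mainstream
// papers are absent from both feeds.
const blueFeedSources = sources.filter(s => s.alignment <= -CUTOFF);
const redFeedSources = sources.filter(s => s.alignment >= CUTOFF);

console.log({ blueFeedSources, redFeedSources });
```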

We also need to be cautious about the Facebook study on which “Blue feed, Red feed” is based. The study only measured the 9% of Facebook users who report their political affiliation, and it is reasonable to assume that they do not accurately represent the whole Facebook population. As Eli Pariser notes, they are “perhaps more partisan or more activist-y” than the average Facebook reader. Lastly, the study is based on results from 7 July 2014 to 7 January 2015 and, given the swift changes in the US political and media landscape and Facebook’s own changes, we can assume that it is already a bit dated.

A tool for publishers to expand their audience’s horizons

Back in 2014, at a hackathon organised by GEN and the BBC, The Financial Times developed a totally different approach to the “filter bubble” issue. They not only took into consideration the role of social media in magnifying our self-segregation tendencies but, in an interesting exercise in self-criticism, also looked at the way their own “recommended reads” section could exacerbate the phenomenon. They also adopted a more nuanced perspective on the filter bubble, focusing not only on big political divides but also on the micro filter bubbles isolating people with different cultural interests.

With their prototype Blind Spot, their objective was to alert FT subscribers to FT stories they might otherwise miss because they fall outside their immediate interests. It offers each subscriber a list of articles on topics they do not usually read about. Blind Spot draws on a subscriber’s reading history and also estimates what they have been exposed to by looking at the news articles that have been shared on Twitter or Facebook.
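Stripped to its core, the logic is a set difference between the topics a subscriber has been exposed to and the topics the FT has recently covered. The sketch below is a plausible reconstruction under that assumption, with illustrative field names and data rather than the FT’s actual prototype.

```typescript
// Sketch of the Blind Spot idea under stated assumptions: compare the
// topics a subscriber has read (or has likely seen via social shares)
// with the week's output, and surface articles on topics that do not
// appear in that history. Field names and data are illustrative only.

interface Article {
  id: string;
  headline: string;
  topic: string; // e.g. "emerging-markets", "opera", "uk-politics"
}

function blindSpot(
  recentArticles: Article[],
  readingHistory: Article[],
  sociallySeen: Article[],
  maxSuggestions = 5
): Article[] {
  // Topics the subscriber has already been exposed to, directly or via shares.
  const seenTopics = new Set([...readingHistory, ...sociallySeen].map(a => a.topic));

  // Anything on an unfamiliar topic is a candidate "blind spot" read.
  return recentArticles
    .filter(a => !seenTopics.has(a.topic))
    .slice(0, maxSuggestions);
}

// Usage with made-up data: the subscriber reads politics, so the opera
// piece is the one surfaced.
const suggestions = blindSpot(
  [
    { id: "1", headline: "Placeholder opera review", topic: "opera" },
    { id: "2", headline: "Placeholder politics story", topic: "uk-politics" },
  ],
  [{ id: "3", headline: "Previously read politics story", topic: "uk-politics" }],
  []
);
console.log(suggestions.map(a => a.headline)); // ["Placeholder opera review"]
```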

Even though this prototype was never implemented, the idea is worth revisiting to add more serendipity to the news discovery experience. In an era of increasingly personalised news, we need to find new ways to awaken readers’ curiosity, foster exchanges and facilitate new social connections.

Conclusion

Our enthusiasm for these experiments must be tempered with a healthy dose of humility. As Danah Boyd, Principal Researcher at Microsoft Research and founder of Data & Society, puts it, addressing issues like fake news and biased content “is going to require a cultural change about how we make sense of information, whom we trust, and how we understand our own role in grappling with information. Quick and easy solutions may make the controversy go away, but they won’t address the underlying problems.” How can we better inform citizens in the age of Facebook? This is an incredibly complex issue that extends beyond journalism and can only be addressed by a long-term effort conducted in collaboration with educators, social scientists and citizens.

But keeping that in mind, we cannot but encourage initiatives such as those presented here, for they indicate ways in which we can gain agency over Facebook, create reflective spaces within it and not let this crucial space for news be ruled only by its opaque algorithm.

Laura Boldrini — President of Italy’s lower house of parliament

“If a newspaper publishes a fake news story that Facebook then amplifies, the initial responsibility is of course the newspaper’s. It should verify its sources. That said, Facebook needs to do more than just limit itself to proclamations and good intentions.” — (International Business Times, 13 February 2017)

Jay Rosen — New York University

“They have enshrined the individual user’s choices as a more important filter factor than anything like ‘exposure to mixed points of view,’ or a ‘rounded sense of the debate.’ That’s just another example of how Facebook is taking over the cultural territory journalists once held and bringing different priorities to it.” — (CNN, 30 June 2016)

Erich Sommerfeldt — University of Maryland

“Organizations who delete negative Facebook comments are perceived as less honest, less genuine and less trustworthy than organizations who simply respond to the negative comments” — (The Washington Post, 8 February 2017)

Ron Darvin — University of British Columbia

“It’s so easy, within a matter of seconds, for things that aren’t entirely true to go straight into your Facebook newsfeed, and without that critical lens, our kids will not be learning to sift through all these and find what legitimate knowledge is.” — (Vancouver Sun, 20 January 2017)

Quotes brought to you by Storyzy
