The politics of platforms

Mark Bunting
Jan 27, 2018

--

All surveys make unspoken, and sometimes unconscious, assumptions. As German rocket scientist Wernher von Braun reportedly put it, “research is what I am doing when I don’t know what I am doing.” Facebook intended its two-question trust survey to help the social network promote “high quality news that helps build a sense of common ground” based on “having the community determine which sources are broadly trusted.” But it walked into a wall of outrage; like von Braun, many commentators alleged the company didn’t know what it was doing. The survey’s unspoken assumptions — about choice of methodology, validity of data, interpretation and value of findings — were extensively picked apart.

To recap, the survey asks users if they recognise a number of websites, and then asks how much they trust them on a five-point scale from ‘entirely’ to ‘not at all’. Only respondents’ answers about publications they recognise are counted. Adam Mosseri, Facebook’s head of News Feed, has explained that trust scores are factored (it’s not clear exactly how) to give greater prominence to publishers who are trusted by a lot of people with different reading habits. So if you’re lucky enough to be well-rated by both Economist readers and Breitbart fans, you’ll get a bigger boost than publishers who only appeal to a narrow audience. Facebook is sampling users at random to complete the survey, although only in the US so far; samples are checked for representativeness and adjusted if necessary (criteria and weights not specified). Finally, survey data are combined with information from other sources (again, not clear what, but presumably Facebook behavioural data).
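Facebook hasn’t disclosed the formula, so any reconstruction is guesswork. Purely as an illustration, here is a minimal Python sketch of one way “trusted by a lot of people with different reading habits” could be operationalised: average the ratings from users who recognise a publisher, then boost publishers whose trusters read a wide spread of other outlets. The data, the diversity measure and the weighting are all invented for the example and do not reflect Facebook’s actual inputs or weights.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey responses: what each user reads (a crude proxy for
# "reading habits") and their 1-5 trust ratings for outlets they recognise.
# These names and numbers are made up for illustration only.
respondents = [
    {"reads": {"economist", "nyt"}, "trust": {"economist": 5, "nyt": 4}},
    {"reads": {"breitbart"},        "trust": {"economist": 4, "breitbart": 5}},
    {"reads": {"nyt", "breitbart"}, "trust": {"nyt": 3, "breitbart": 4}},
]

def broad_trust_scores(respondents):
    """Average trust per publisher, boosted when the people who trust it
    have dissimilar reading habits (one illustrative reading of 'trusted
    by people with different reading habits')."""
    ratings = defaultdict(list)   # publisher -> list of 1-5 ratings
    audiences = defaultdict(set)  # publisher -> union of trusters' reading lists
    for r in respondents:
        for publisher, rating in r["trust"].items():  # recognised outlets only
            ratings[publisher].append(rating)
            audiences[publisher].update(r["reads"])
    scores = {}
    for publisher, rs in ratings.items():
        diversity = len(audiences[publisher])          # crude breadth measure
        scores[publisher] = mean(rs) * (1 + 0.1 * diversity)
    return scores

print(broad_trust_scores(respondents))
```

On this toy weighting, an outlet rated well by readers with little overlap in what else they read scores higher than one rated equally well by a narrow, homogeneous audience, which is the intuition Mosseri described, however Facebook actually implements it.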

There were two flavours of criticism. Some did not object to the idea of Facebook consulting users, but argued this survey was the wrong approach. Others challenged the idea of community trust ratings in principle, fearing they will lead to populism and partisanship, manipulation, and a suppression of the values of quality news.

These are understandable and predictable concerns. Even if, as Reuters Institute research director Rasmus Kleis Nielsen said, some of the concerns smack of elitism, it’s not surprising such an apparently simplistic approach has received flak.

But there are many different ways of skinning the news quality cat. Many different concepts of ‘trust’, and many views about the best way of assessing it. Many different views about whether trust is really what Facebook ought to be optimising for. And it’s this diversity that highlights a bigger challenge for Facebook, one that gets to the heart of its business model and shows why these controversies will never go away.

If you repeat a lie often enough it becomes politics. Photo by Brian Wertheim

Platform rules

All platforms have to balance the interests of different users to build productive and profitable environments. They’re what economists call multi-sided markets, and they must keep all sides of their market more or less sweet. In this instance, the relevant ‘sides’ are publishers, users and advertisers.

Platforms achieve balance with rules that govern how users interact. These rules are written down in Community Standards, terms of use and commercial policies. But they’re also — as Lawrence Lessig described nearly twenty years ago — inherent in the code that constructs platform environments and the algorithms that govern information flows.

For the most part, platforms’ rules optimise for their own commercial objectives. But that doesn’t mean they can set any rules they like. If they take too great a share of value, they risk driving one or more sides of their markets away — assuming, that is, that those constituencies have somewhere else to go.

So far, maybe uncontroversial. The problems arise when platforms’ rules have wider social and democratic consequences — for the quality of public debate, funding of quality journalism, freedom of expression and freedom from repression. Facebook’s rules about news, where Twitter draws the line between argument and abuse, YouTube’s removal of violent videos — all of these will inevitably be controversial. From the platforms’ perspective, many of these issues are externalities; from a civic perspective, they raise profound concerns. The problem may not be the rules themselves, but the opportunities they create for users to skew the market — for example, Russian agents buying ads to disrupt democratic electoral processes, or, as Zeynep Tufekci’s recent essay describes, trolls and extremists exploiting open platforms to swamp the marketplace of ideas. Most, if not all, of the online content controversies of the past couple of years can be seen as an apparent conflict between private governance and public good.

So platforms’ online content problems are governance challenges, not editorial failures, despite the nagging persistence of the ‘platforms are publishers’ meme. They’re responding with new rules: automated content identification tools, more moderators, user bans and sanctions.

But platforms have two problems: the issues will never stop coming; and they’ll never be able to achieve consensus about whether they’re doing enough to address them. As Simon Jenkins wrote last year: “Not a day passes without apocalyptic wails against the internet. It promotes paedophilia, grooming, bullying, harassment, trolling, humiliation, intrusion, false accusation and libel. It aids terrorism, cyberwarfare, political lying, fake news, state censorship, summary injustice.”

Failures in content moderation are inevitable. Harmful content will evade even the most sophisticated algorithms, and legal material will be inadvertently blocked. There will always be disagreement about whether platforms have done enough to mitigate risks, and in doing so struck an appropriate balance between protecting users from harm and avoiding censorship. Every independent expert consulted will know exactly what to do — and all their prescriptions will be subtly different.

When there’s no consensus about the right thing to do, legitimacy has to be achieved some other way. And I believe that comes from how platforms design their rules and make their decisions: did they follow due process, weigh all the pros and cons, take account of unintended consequences? In this light, Facebook’s problems with the news survey seem to be less about the details of its chosen approach, and more about the way it went about it: with insufficient information, lack of clarity about its intentions, and no commitments to transparency about the outcomes.

Then there are wider questions to consider: why did Facebook decide to prioritise quality news in the first place? It’s a laudable objective, but is it more important than, say, the effects of its ongoing Explore feed experiment in Cambodia, or its frequent changes to policies on paying news providers? How did it decide? What part was played by pressure from news organisations, some of it no doubt self-interested? What’s the burden of proof for Facebook to intervene in its market in this way?

Initiatives like these show that Facebook is not a purely neutral platform. It’s positive that it’s taking greater responsibility for addressing the potential harms that arise from its use. But it gives the appearance of addressing each new controversy reactively, on a case-by-case basis. And when it changes its rules it provides little detail about why, what it hopes to achieve, and how it will measure success. I believe it would benefit from being more transparent and systematic about its governance strategy, and in further work I will consider what that could look like. Information markets are too important to be governed by megaphone diplomacy, outrage and response-under-duress.

--

Mark Bunting

Digital strategy and media policy advisor. Wine geek and devoted dad. Also at @buntms