Facebook, the EU, and Election Integrity
The social network has been developing a process for preserving election integrity. While the process is more transparent, it may also prove counterproductive
In an election concluding on May 26th, voters from all European Union member states cast their ballots this week to elect their parliamentary representatives. Each successive parliamentary election has seen steadily declining turnout, falling from 62% in the 1979 elections to 42.2% in 2014. But in light of many pressing issues over the past five years (rising populism, the migrant crisis, economic uncertainty, and the UK’s pending exit from the EU, to name a few), this year’s parliamentary election may generate more interest than usual from an electorate of over 420m voters.
However, content can take different forms, so the response must too; from the FT:
Content that violates Facebook’s policies, such as hate speech or voter suppression including wrong polling data or venues, is immediately taken down when flagged, but the majority of the propaganda that Facebook sees does not violate its rules so is left on the platform, but fact-checked or suppressed by reducing the ‘relevance score’ of a post.
Herein lies Facebook’s strategy for fighting misinformation, leveraging automated systems as a ‘check’ on all content that does not infringe its Community Standards. Nathaniel Gleicher, the cybersecurity chief at Facebook, contends that the social media giant is effectively fighting false news with one hand tied behind its back. He argues that social networks “are built to build communities and scale them, and you can do it without hate speech or bullying or [threats of] violence. So if we focused only on content we would be limited.”
In response to its perceived failure in monitoring the spread of misinformation over the past few years, Facebook has set up an election monitoring effort in its European headquarters in Dublin to combat the manipulation of the election’s outcome and potential foreign interference. The social media giant has gathered a team of data scientists, policy experts, engineers, and cyber-security officers to “take down content, proactively and at scale” from the platform. Higher standards of transparency around political advertising are only the latest effort in Facebook’s quest to prevent election-meddling; from Anika Geisel at Facebook:
To run electoral ads about highly debated or important issues related to the European Parliament Elections, advertisers will be required to confirm their identity and include additional information about who is responsible for their ads. While the vast majority of ads on Facebook are run by legitimate organizations, we know that there are bad actors that try to misuse our platform.
When you click on the ‘paid for by’ disclaimer, you will be taken to the Ad Library. The library will share information on the ad’s performance, like range of spend and impressions, as well as demographics of who saw it — like age, gender, and location. The library is completely searchable and can be accessed by anyone in the world regardless of whether they have a Facebook account or not at facebook.com/adlibrary.
This rollout extends beyond campaign ads to include issue ads: relevant, important, or highly debated topics like get-out-the-vote campaigns, ballot initiatives, or referendums. To prevent foreign interference in the EU Parliamentary elections, Facebook requires that all political advertisers go through a country-specific authorization process, submitting documents and undergoing technical checks that confirm their identity and location.
One shortcoming of this framework is the seemingly arbitrary process by which issues of national importance are chosen, with no clear way to determine whether the parameters are too restrictive or too broad. Facebook’s current list of the EU’s top issues — which it admits is subject to change — includes six topics: immigration, civil and social rights, political values, security and foreign policy, the economy, and environmental politics. In contrast, the equivalent list for the US contains more than twice as many issues, underscoring how opaquely political issue ads are chosen and defined.
This also raises questions over which ads should be identified as political. Richard Allan, the VP of Global Policy Solutions at Facebook, argues that creating a process for authorizing advertisers — confirming identity and location through documentation and technical checks — will help prevent abuse and interference in the European Parliament elections. But this ‘trial and error’ approach frustrates ad buyers, who contend that corporate responsibility campaigns (Pride Month, Earth Day) get caught up in the new checks and delayed as a result.
Facebook goes on to provide concrete steps in its broader fight against fake news:
We have a three-step approach to improving the quality and authenticity of stories in News Feed. First, we remove content that violates our Community Standards, which helps protect the safety and security of the platform. Then, for content that does not directly violate our Community Standards, but still undermines the authenticity of the platform, we reduce its distribution by demoting it in the News Feed. Finally, we inform people by giving them more context on the information they see in News Feed. These context units are an example of a product where we give people additional information, by sharing more details on the article and the publisher.
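As an illustration, the three-step approach quoted above can be sketched as a simple pipeline. The field names, the 90% demotion factor, and the flag logic below are hypothetical, chosen only to make the remove/reduce/inform flow concrete; Facebook's actual systems are far more complex.

```python
# Hypothetical sketch of a remove/reduce/inform moderation pipeline.
# All field names and the demotion factor are illustrative assumptions.

def moderate(post):
    """Return the action taken on a post dict and its updated state."""
    if post.get("violates_community_standards"):
        # Step 1: content violating Community Standards is removed outright.
        return "removed", None
    updated = dict(post)
    if post.get("flagged_as_misleading"):
        # Step 2: borderline content is demoted by cutting its relevance score.
        updated["relevance_score"] = post["relevance_score"] * 0.1
        # Step 3: a context unit (fact-checks, publisher details) is attached.
        updated["context_unit"] = True
        return "demoted", updated
    return "kept", updated

action, post = moderate({"relevance_score": 1.0, "flagged_as_misleading": True})
```

The key design point the quote makes is that only step 1 deletes anything; steps 2 and 3 leave content on the platform while shaping how widely and with how much context it is seen.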
Context units are Facebook’s attempt at giving its users the tools to make informed decisions about stories in their News Feed. A given story gains credibility when readers notice certified third-party fact-checkers or Related Articles. But several external factors also shape a reader’s judgment of trustworthiness, such as whether they recognize a publisher’s name or whether friends and family read the same source. Context units may thus have the opposite of their intended effect, creating filter bubbles and entrenching users in their existing beliefs.
Facebook’s reverse network effects
On platforms with network effects, a growing user base can lead to a higher value of the service provided. The utility of Facebook’s platform skyrocketed as more users signed up, creating a near-monopoly that gained in importance with successive acquisitions (Instagram, WhatsApp), with the social media giant effectively absorbing the user base of those networks. According to Indian entrepreneur Sangeet Paul Choudary, network effects can also work in reverse when the scaling process effectively devalues a network for a few reasons; from WIRED:
1. Connection: New users joining the online community may lower the quality of interactions and increase noise/spam through unsolicited connection requests.
2. Content: The network may fail to manage the abundance of content created on it and may fail to scale the curation of content created and the personalization of content served.
3. Clout: The network may get inadvertently biased towards early users and promote them over users who join later.
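The ‘connection’ item above has a well-known forward counterpart: Metcalfe’s law, a common stylization (assumed here for illustration; neither Choudary nor WIRED invoke it) under which a network’s value grows with the number of possible user pairs, and hence roughly quadratically in the user base.

```python
# Metcalfe-style illustration (a modeling assumption, not from the article):
# value is taken as proportional to the number of distinct user pairs,
# n*(n-1)/2, so it grows roughly quadratically in the user base.

def potential_connections(n_users: int) -> int:
    """Number of distinct pairs among n_users."""
    return n_users * (n_users - 1) // 2

# Doubling users roughly quadruples potential connections; reverse network
# effects set in when added noise outweighs this gain in connectivity.
ratio = potential_connections(2000) / potential_connections(1000)
```

On this stylization, each new user adds value superlinearly; Choudary’s point is that the same scaling cuts the other way once noise, unmanaged content, and early-user bias grow faster than useful connections.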
Choudary argues that Facebook’s value proposition initially focused on connection — as its earliest mission statement put it, to “make the world more open and connected” — but later moved into content with the introduction of the News Feed, and eventually clout with the addition of the subscriber feature. This makes a business model like Facebook’s particularly vulnerable to mismanagement should its content curation system fail to scale properly. Ahead of the EU elections, renewed efforts to consolidate processes for regulating content are being met with skepticism from policymakers who are considering regulating the platform outright.
Margrethe Vestager, the EU Commissioner for Competition, stated last week that breaking up the world’s largest social network should only be a solution of “last resort”, since it may stifle innovation more than regulation would. Instead of pushing to split up the company, Commissioner Vestager chose a different tack: asking for access to data. She concedes that this approach would have its own problems and require significant resources from government actors, but argues it might prevent monopolistic abuse that could distort markets.
But the prospect of regulation becomes more daunting when considering that Facebook Inc. is more than just a social network. When CEO Mark Zuckerberg was grilled by Congress in 2018 over the company’s near-monopoly position, he contended that Facebook “certainly doesn’t feel like” a monopoly, yet its staunchest competitors (including Twitter) provide services which only overlap with a portion of Facebook’s offerings. The number of monthly active users on the social network has continued to grow over the past few years, largely undeterred by its scandals; from Statista:
The chart above neatly illustrates Choudary’s argument that failing to manage content can be a big part of reverse network effects. With the acquisitions of Instagram and WhatsApp, Facebook reached a combined monthly active user base of around 5.5bn across its apps by late 2017. But by absorbing these massive user bases, Facebook will invariably have more trouble managing and curating content. It remains to be seen whether increasing noise on the platform will lower the quality of interaction enough to deplete value for existing users.
When the New York Times reported in January that Facebook plans to integrate the underlying technical infrastructure of its Messenger services with WhatsApp and Instagram, the objective Facebook gave was to keep users engaged within a single ecosystem and improve end-to-end encryption. But an unstated advantage of this move is to show that Facebook can ‘self-regulate’ and is taking action to fight election interference through interoperability; from the NYT:
The services will continue to operate as stand-alone apps, but their underlying technical infrastructure will be unified, said four people involved in the effort. That will bring together three of the world’s largest messaging networks, which between them have more than 2.6 billion users, allowing people to communicate across their platforms for the first time.
The move has the potential to redefine how billions of people use the apps to connect with one another while strengthening Facebook’s grip on users, raising antitrust, privacy and security questions.
In a recent op-ed, Facebook co-founder Chris Hughes argued that breaking up Facebook would spur competition, protect privacy, and make the social network more accountable to its users. In Hughes’ view, recent efforts to make Facebook’s services interoperable across its platforms are a response to criticism of how the company manages speech. Nick Clegg, the former Deputy Prime Minister of the UK who recently joined Facebook as VP for Global Affairs and Communications, contends instead that Facebook is operating “under more regulation now than at any point in the history of the company” (not a difficult feat given the lack of any regulation in the past), and that dismantling the social network would be a misguided attempt at holding it accountable.
Election security initiatives have also failed to extend to the social media giant’s other platforms. A recent report released by the online activist organization Avaaz investigated posts submitted by thousands of WhatsApp users across Spain prior to last month’s general election. The study concluded that 9.6 million Spanish voters (around 26.1% of the electorate) were exposed to false, misleading, racist, or hateful posts. But in contrast with Facebook, WhatsApp has received little attention or scrutiny for the spread of misinformation despite its growing popularity in many EU countries — and since the application is encrypted, it is impossible to determine how far each item spreads.
Another issue stems from Facebook’s decision to block pan-EU political campaigns on its network. European officials have reached out to Nick Clegg over the past few months complaining about existing Facebook rules, which require that anyone running campaign ads on the social network have a registered office in the country where the ads run. They argue that these rules neglect the supranational nature of the European Parliament elections; from TechCrunch:
It means EU institutions are in the strange position of not being able to run Facebook ads for their own pan-EU election everywhere across the region. ‘This runs counter to the nature of EU institutions. By definition, our constituency is multinational and our target audience are in all EU countries and beyond,’ the EU’s most senior civil servants pointed out in a letter to the company last month.
This issue impacts not just EU institutions and organizations advocating for particular policies and candidates across EU borders, but even NGOs wanting to run vanilla ‘get out the vote’ campaigns Europe-wide — leading a number to accuse Facebook of breaching their electoral rights and freedoms.
In the end, there is little regulation covering what the appropriate response to cyber-threats on social networks should be. In contrast with Facebook’s ‘raw manpower’ approach, companies like Twitter have invested their resources into automating content moderation processes. These efforts are bound to come under increasing scrutiny as the European Commission demands full transparency and detailed breakdowns on how Facebook aims to mitigate election interference — probably to little avail.