A Ban on Political Ads: Who’s right, Jack Dorsey or Mark Zuckerberg?
Is greater transparency or an outright ban the best way to tackle misinformation online?
With an impending General Election in the UK and the on-going Democratic primaries in the US, the role of social media in politics is bound to be a hot topic of debate.
Yesterday, Twitter CEO Jack Dorsey announced he would be banning all political advertising from his platform. In his view, political support “should be earned, not bought.”
Unsurprisingly, Mark Zuckerberg has come under fire following this announcement, with demands to know whether Facebook would also implement such a ban.
Zuckerberg appears to have doubled down on Facebook’s policy and confirmed the platform would continue to run political ads. In a teleconference with journalists he said: “In a democracy, I don’t think it’s right for private companies to censor politicians or the news.” Instead the company has committed to investing in greater transparency about who is paying for ads and how much is being spent.
These responses are reflective of two common approaches to dealing with seemingly intractable problems — both of which are gaining ground more broadly. So the question is: who’s right?
Banning vs More Transparency
In the wake of the Cambridge Analytica scandal, where user data was misused for political advertising, more attention has been placed on the power social media platforms have to impact the outcomes of democratic elections.
Documentaries like The Great Hack attempt to tell the story about the extent to which personal data can be used to target users and ultimately influence their voting behaviour, regardless of the veracity of the information shared.
In Dorsey’s view, banning political advertising stops candidates and political campaign groups from paying to reach users with highly targeted and optimised messaging — a practice that effectively reduces users’ agency by force-feeding them content.
Dorsey said on Twitter: “it‘s not credible for us to say: ‘We’re working hard to stop people from gaming our systems to spread misleading info, buuut if someone pays us to target and force people to see their political ad…well…they can say whatever they want!’”
Critics of Dorsey have likened his approach to quashing freedom of speech, reminiscent of the ‘cancel’ culture that former President Obama recently criticised as lazy activism. President Trump’s campaign team immediately denounced the move as “yet another attempt to silence conservatives.” However, even Facebook employees have argued that ‘freedom of speech’ is not the same as ‘paid speech’.
Others point out that the proportion of revenue Twitter receives from political advertising is relatively small, which enables them to take such a bold step.
On the other hand, Zuckerberg’s approach has been to introduce new measures aimed at shining a light on who is financing political ads. The company has implemented new rules that include a mandatory ID verification process for anyone wanting to run political ads. However, this still requires individuals and organisations to self-report whether the ads they are running are political.
These new measures do not, however, include any fact-checking of political ads.
Critics of Zuckerberg have accused the company of choosing profit over the public good. And Facebook employees have even urged the leadership to go further: explicitly tackling misinformation in political ads, restricting targeting, improving the visual design of ads and introducing spend caps.
Zuckerberg himself confirmed that “ads from politicians” make up less than 0.5% of their revenue. But it is not clear whether this figure contains all revenue from political ads or just those from candidates. Facebook currently does not require ID verification for issue-based ads.
So if it’s not about the money, as ever, it must be about the data.
The root of the problem
The root problem that these approaches are trying to tackle is the spread of false information. While both approaches have merits, neither is a comprehensive approach to dealing with misinformation. Misinformation can be deliberately spread on both platforms — albeit the option to pay for that reach in the political realm on Twitter has been removed.
In Dorsey’s view, political ads are only part of a wider set of new challenges to public discourse, but they carry the additional risk of affecting votes and millions of lives. A blanket ban, however, misses the mark: it tars all political campaigns with the same brush and does nothing to address the misinformation spread by automated accounts, or by President Trump himself.
For Zuckerberg, companies should err on the side of greater expression, trusting that transparency about who is buying ads will help users decide for themselves. But while transparency about who is paying for ads is a step in the right direction, it fails to deal explicitly with the underlying problem.
What can be done?
Social media platforms undoubtedly wield a significant amount of power to influence public discourse. And as a result, accepted wisdom is that they should also have a responsibility to ensure the quality of that public discourse (although some would still argue that private companies should not have this responsibility).
The number of scandals associated with misinformation continues to grow; misinformation has not only affected elections but has also incited violence in other parts of the world.
Yet putting all the onus on private companies negates the responsibility that legislators have to set the ground rules on funding and transparency.
While the UK has strict laws governing broadcasters’ news reports and political advertising during election campaigns, the same laws do not apply to social media platforms. Political adverts in print media have to be watermarked, yet no such requirement is set for digital media. A cynic might argue there is a reason for this; regardless, if legislators want to preserve the integrity of our elections, action is essential.
As technology evolves, and as the velocity and sophistication of machine-learning-based optimisation, micro-targeting and deepfakes increase, the spread of misinformation will remain an issue we face.
Tackling the root of the problem lies beyond simply targeting the platforms on which misinformation is spread; it also touches areas like education and community organisation. How can we better prepare citizens and the electorate to interrogate the information they receive?
We’d probably all be a little better off getting outside our bubbles.
Impact on UK election
The UK is set to head to the polls on the 12th December in the hope of breaking the Brexit impasse. Social media ads have played an increasingly important role over the last two elections.
The Conservative Party is set to be hit the hardest by Twitter’s ban on political advertising — they spent more on the platform than any other party in the last election. Historically, however, they have outspent their rivals on every platform, and that probably won’t change. The Party has hired Sean Topham and Ben Guerin to run the campaign’s digital media strategy. The pair used to work for Lynton Crosby’s CTF Partners, where they oversaw a controversial professional disinformation network run through Facebook, which had paying clients including the Saudi government and anti-cycling groups.
Despite the spending gap, Labour were considered to have outflanked the Conservatives with their social media strategy at the last election. This included using targeted ads to motivate their voter base, rather than to attack the Conservatives.
But as the BBC’s Political Editor points out, the impact of Twitter’s announcement may not be so earth-shattering, as most political content is produced in the hope that it gets shared for free.