Democracy and datasets

(Image caption: Rupert Murdoch app-lauding. APP-lauding. I’m wasted on you people.)
One of the “largest and most sophisticated datasets in the country” — including the TV viewing, internet and phone records of 13 million households — could be misused for political purposes if Rupert Murdoch is allowed to proceed with his plan to buy out Sky, six members of the House of Lords claim in a letter to the Observer.

As our lives are increasingly recorded in one way or another, the amount of information held about individuals is expanding enormously, and so is the potential for misuse of that information.

But there’s something about the concerns in this article, and the subsequent tweets from its author, Carole Cadwalladr, that doesn’t quite feel right to me. In short, I think they focus on the symptom (and we might care particularly because it’s Murdoch), not the underlying problem.

The risks highlighted in the article are:

  • Sky could abuse this dataset to harm democracy (e.g. by targeting political ads)
  • Sky could abuse this dataset to harm its competitors (e.g. by slowing down the loading of their pages/channels), and therefore further harm democracy (e.g. by pushing viewers increasingly towards its own content, which might then become more extreme/Fox-ish)

There are terrifying potential consequences for democracy when anyone (Murdoch or otherwise) holds huge datasets that could be used to give you very different information from me.

This is not a new problem: Daily Mail readers have always had a different set of facts than Guardian readers, for example. But in the past the harmful impact of this has been reduced in two ways:

  1. People have always mixed, at work or socially, with others holding different views
  2. The BBC has provided a common core of facts (and probably forced some media outlets to moderate their own coverage).

There’s a case that, as more of our interaction happens through social media — where we select whom we see news from, and (at least with Facebook) the platform selects whom we see news from, magnifying our own selections — this ‘mixing’ of views and opinions has reduced. After a certain point this becomes self-perpetuating: maybe you do come across people with other views, but you ignore them and just assume they’re some combination of stupid and evil. Or if you do engage, they assume the same of you.

I also wonder if this same dynamic is reducing trust in the BBC: people’s views are massively reinforced (see the previous paragraph), and so anyone deviating from them — whether in interpretation or in fact — does not just disagree, but must be wrong or biased. No-one’s views are right 100% of the time, but if you’re so inflexible in your views that any opposing fact is proof of bad faith, you’ll quickly stop trusting whoever gave you those facts altogether. And everyone will be able to poke holes in the BBC occasionally; if that leads to everyone mistrusting the BBC, we’ll completely lose that second mitigating factor as well.

If that happens, our political discourse becomes a hideous self-perpetuating cycle that will tend to destructive extremes.

This is all inadvertent: Facebook and Twitter are not trying to polarise society, or to push a particular agenda. They’re trying to give people the information they’ll engage with, and if that happens to be information that is untrue but accords with their beliefs — so they’ll like it, click on it, share it — well, that’s just too bad.

If you introduce deliberate message-manipulation, however, especially through targeted political ads that can’t be challenged by others because they never see them, you have a hugely dangerous situation.

There are separate questions here. One is: what if the information in those adverts is simply untrue? That should be heavily regulated, in political adverts as well as in the media. Currently it isn’t, in any particularly effective way, and this is already damaging democracy. If we don’t solve it, it will only get worse.

But focusing on the very narrow set of ‘blatant lies’ won’t get you very far. Many, perhaps most, lies are ‘soft lies’: the harm is done by missing context, unfair interpretation, disputed facts, or particular information being made salient or hidden. It’s astonishingly hard to regulate for that beyond a requirement to offer a ‘right to reply’, and that’s open to abuse in any case: it just squats at the end of the article where few people pay it any notice. After all, when you’ve just read sixteen paragraphs of damning information, is a 30-word rebuttal going to counteract that? And ultimately, well, they would say that, wouldn’t they?

I don’t have a solution to this, beyond: regulate the hell out of actual lies; have an impartial, binding system for resolving disputes of interpretation and the like; and protect the BBC, in both existence and reputation, as best you can. The difficulty, of course, is that the media, well, mediates, and will not be particularly keen on being regulated. But that’s a political difficulty.

The second problem, thankfully, is a bit easier to solve, at least in principle. This is that Sky might slow down access to certain websites to control what people see (either for political ends — increasing the bubble effect — or to harm its competitors). You can and should deal with that through a) competition law and b) enforcement against abuse of the network. Access to information must be ‘producer-blind’, with heavy penalties for infringement.

Tl;dr: without effective regulation of facts (including arbitration), protection of the BBC, and enforcement against misuse of networks, we should all be massively concerned about the impact of data and targeting on democracy. But it’s not because he’s Murdoch.