Dis/misinformation: less talking, more doing
An interview with Fergus Bell, coordinator of The Global Council to Build Trust in Media and Fight Misinformation.
It seems like we’re seeing new anti-misinformation initiatives popping up every day. Can you feel your head spinning?
On Thursday 12 April, The Global Council to Build Trust in Media and Fight Misinformation (breathe), was announced at the International Journalism Festival. Yet another initiative, you ask? In the minds of its founders, it isn’t. (Disclaimer: the Global Editors Network is one of the cofounders.) This one is a council, and its mission is to build connections between the many, many anti-misinformation initiatives that continue to pop up, in order to make sure that none of their valuable work is duplicated, overlooked, or lost.
The Council was set up by the Ethical Journalism Network, European Broadcasting Union, Global Editors Network, Global Forum for Media Development, Online News Association, and the World Editors Forum within the World Association of Newspapers and News Publishers (WAN-IFRA).
We caught up with Fergus Bell, coordinator of the Council, to find out what drove him to set up this effort, whether voice interfaces can engender trust, and why we should spend less time defining ‘fake news’ and more time fixing the problem.
Behind the scenes with Bell
(The below is based on a conversation with Fergus Bell. Edited for clarity and brevity.)
Collaborate, don’t replicate
I think the right time for the Council is now because there are currently so many misinformation initiatives in progress: the aim is to organise the flow of information from one initiative to another.
In order to move forward, we need to build on what has already been done. That means picking up where others have left off and learning from what others have already achieved. If we continue to have the same conversations in different parts of the world, and continue to tackle the same problem in many different ways, all at the same time without realising it, then we’re not being efficient.
The Council itself therefore isn’t trying to fight ‘fake news’. It’s more about the power of bringing together as many as possible of the voices, individuals, and organisations that are trying to do so, in order to get the best overall picture of the industry’s efforts to fight misinformation.
In some places people have seen an incentive to create specific initiatives to combat dis/misinformation, while in other places people just get on with it. The latter might not have the funding to create new initiatives, or they haven’t had the time to think thoroughly about how such an initiative might look; they have to deal with misinformation right now. I’ve done some verification training in Romania, which I found to be really inspiring. The journalists there don’t have the funding to set up new initiatives or create new products. They have been used to misinformation for decades; it’s the way their journalism ecosystem has worked and they have had to tackle it. For these journalists, it’s not about making a fuss; it’s not about coming up with something complex that needs an immense amount of funding, but about finding ways, with what they have, to solve the problem of misinformation right now. And these methods have been built into their workflows over many, many years.
Defining happiness won’t wipe out sadness
More time and effort on defining ‘fake news’ will never solve the actual problem. It’s like saying that if we define happiness we can stop everyone from ever being sad. We don’t need to define it to fight the battle. This analogy came up when I asked a room of students what ‘fake news’ is. Everyone gave a different answer and everyone started to argue with one another. Some said ‘fake news’ is propaganda. Others said it was deliberately deceiving or manipulating data. Others said it’s not giving an entire view of something. Is writing a CV ‘fake news’ because you only include the good bits, then?
Everyone has a slightly different idea of what ‘fake news’ is, because everyone’s point of contact with it has been so different. Generally it’s about trust, misinformation, and disinformation. But do we really need to define it any more than that?
All of the time that we spend trying to define it is time taken away from trying to fix it. So let’s stop trying to define it. We know specific case studies, we know specific examples of misinformation and how it spreads. Let’s tackle each as it comes up, rather than trying to put everything under one whole definition, because we’re never going to agree on one. What we might be able to agree on are some of the solutions, because these can be a lot more solid.
If the future of news is talking to machines, what does it mean for dis/misinformation?
I don’t yet know enough about how information is shared on Google Home and Alexa-enabled devices. One of my main concerns is the inability to pick from many options when you’re searching for information. If you’re searching using a computer, you search Google, and it will give you a whole page of search results. If you search for something using a smart speaker, you’re only going to get one result.
At the moment we’re very concerned with filter bubbles and with algorithms delivering content. Who is making the choice about the content that is being delivered to you on your smart speaker? It’s true that a lot of the content on these devices at the moment is opt-in: you choose the news service that you want. But the technology is expanding all the time. The things that you can do with voice assistants, and the points of access to various parts of information, are growing constantly.
On the other hand, I was talking to someone from the Credibility Coalition the other day and they were telling me that they believe voice is actually a credibility indicator: voice over text may be a way of encouraging trust in content. I don’t know the reason for this, but maybe it’s because voice is more natural; it’s very intuitive. I am a big fan of it: it’s connected to my account, it is where I am, and it talks to me.
Dis/misinformation and the law
My personal view on this is that regulation is only going to go so far in fixing the problem. It’s a huge, huge issue. It’s going to require several fundamental shifts in the way that journalists work, in the way that news organisations work, and in the ways that audiences understand what they are seeing, reading, and hearing.
Governments have to try to do something, and that’s why some of these laws are being discussed, if they haven’t already been implemented.
But legislation is very specific. It’s therefore difficult, in my opinion, to have specific legislation around something that is as hard to define as ‘fake news’.
There are lots of media literacy projects happening at the moment. The BBC just launched a programme where they go into schools to teach critical thinking. It’s not necessarily the job of the news industry to tell people what to read or where to read it. I think that could rightly be seen as patronising.
What news organisations can do is to be more transparent: ‘This is how we reached this conclusion’, or ‘This is how we cover this story’, or ‘This is why we’re presenting this as fact’. This gives the audience the ability to make up their own minds, rather than us telling them what to think or believe.
The Washington Post, for example, has just implemented short bios at the end of every story to detail the expertise or the experience that the writer of that story has. This allows the audience to couple that information with what they have read, and it allows them to make up their own mind about whether they believe the story or not.
As journalists, we have to really think about verification. I don’t necessarily think that we have to start by distrusting someone, but I think that we should always start with verification. I would not report something unless I had put it through a verification process, regardless of my opinion.
I personally believe that the majority of people in the whole world are not out to tell lies or tell untruths. I still somewhat believe in innocent until proven guilty as long as the process of proving either way is the first thing that you do.
The verification process
- Look at the source: Could this person have been in that place at that time? Could they have witnessed the event? Could they have captured that video which you believe relates to the event?
- Look at the content: Does the video they shared with you confirm the details of the event? Did it actually happen? Is the video showing what the person claims it shows?
- Talk to external sources: Ask other people who witnessed the event to see if their account matches what the video is showing.
- Seek help from experts, authorities, and other witnesses to add context to the story.
Once you’ve determined that: yes, that person was there at that time; yes, they captured the event; yes, you can use the content they gave you; and yes, the event did happen and you’ve confirmed it from multiple sources, only then can you consider something verified — and only then can you run the story.
Fergus Bell is the founder of Dig Deeper Media, a newsroom and media consultancy specialising in strategies for digital newsgathering, newsroom workflows, UGC, and verification.