How to create a better Internet — part one

Ed Thomas · Published in Factmata
Mar 12, 2019 · 10 min read


At Factmata, our mission is huge: to give everyone a better understanding of online content — creating a better Internet for everyone to enjoy.

Over the past few months, I’ve read hundreds of papers and articles about the spread of misinformation and hateful content online. I’ve reviewed dozens of tools claiming to help users sort good content from bad. And I can’t help thinking they are getting us no closer to the answer.

So I started, as always, with research.

I wanted to understand whether there were any big insights we were missing that would help us achieve this mission. And there were.

The study made it clear to me why the simple ‘nutritional labels’ that emerged in 2018 aren’t working, why platform tools are broken, and how we need to approach the problem in a different way.

Here’s a quick summary.

This study was conducted with surveys of 200 users in the USA and UK, across a balanced mix of ages and genders, followed by a series of eight 1:1 interviews.

1. The problem is Facebook

It was clear from our early research, and existing studies, that the problem with misinformation and hate speech lies primarily in social networks, and platforms for user-generated content.

Although our participants use the full spectrum of social media, when we ask them where they see hateful content or hate speech, it’s always the same answer: Facebook.

“I see hateful stuff on Facebook and Instagram, from friends-of-friends. Hateful responses to political posts.”

“Post-Trump people are comfortable spouting racist views on Facebook, and in person.”

“Bullying. We think it’s just an issue with kids, but a lot of adults do it. On Facebook.”

The other two platforms that cropped up frequently were Instagram and YouTube, specifically for how quickly the comments descend into hate.

Some platforms were seen more favourably by participants: Twitter is seen as less of an issue because each user’s feed is their own curation of voices, and Reddit is continually praised for its simple but effective downvoting feedback system.

“I get connected with people on Facebook who I wouldn’t normally be friends with. Whereas with Twitter, I choose who I follow.”

“Reddit is good because the bad stuff will get Downvoted.”

“I like Reddit — Downvoting is more benign than Reporting.”

That supports our assumption: we must tackle this problem by assessing content items individually. Other organisations pursuing the same mission, like Newsguard, take a much simpler approach and classify whole domains. But that doesn’t help when the majority of the problem lies on the same few social platforms, the same few domains.

While we applaud the approach these organisations are taking, we decided never to base a rating on a subjective human judgement of a publication’s history. That would be unfair and biased, and also unscalable for the millions of new websites that might appear in the future.
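To make the distinction concrete, here is a minimal sketch in Python of the two units of analysis. Everything in it is illustrative: the function names, trigger phrases and ratings are assumptions made for the example, not Factmata’s actual models or data.

```python
# Minimal sketch (hypothetical names, not Factmata's real API) of the two
# units of analysis: rating a whole domain versus assessing each content item.

from urllib.parse import urlparse

# Domain-level: one static, human-curated judgement covers every post on a site.
DOMAIN_RATINGS = {"example-tabloid.com": "low credibility"}

def rate_by_domain(url: str) -> str:
    """Return the rating of the whole domain, regardless of what the item says."""
    return DOMAIN_RATINGS.get(urlparse(url).netloc, "unrated")

def assess_item(text: str) -> dict:
    """Toy per-item check standing in for trained classifiers.

    The point is only that the input is the item's own text, so two posts on
    the same platform can receive completely different assessments.
    """
    triggers = ["miracle cure", "they don't want you to know"]
    hits = [t for t in triggers if t in text.lower()]
    return {"suspicious": bool(hits), "matched_phrases": hits}

# Every post on facebook.com gets the same (uninformative) domain rating...
print(rate_by_domain("https://facebook.com/groups/some-post"))  # -> "unrated"
# ...whereas per-item assessment can tell one post from another.
print(assess_item("Doctors hate this miracle cure."))
print(assess_item("The council meets on Tuesday to discuss the budget."))
```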

2. The problem is mobile

Social media use, and news-reading, are both activities people engage with on their mobile devices.

“I’m mostly on mobile: it’s for social. The computer is for work.”

“I’ll use my phone. The laptop is work, the phone is for brain breaks.”

Participants spend an average of 2–3 hours per day reading content online, much of it on mobile. This is especially true for reading the news, which they primarily consume through a news aggregator, the most popular being Apple’s News app. Second to this, people learn their news through their own social feeds, and far less often through publishers’ own websites. Of these, CNN, the New York Times and the BBC are the most popular.

3. Trust is complicated

Most participants say they stick with the same news sources they prefer, and with the same familiar topics of interest. This familiarity with authors, publishers and issues means people are willing to trust the content they read from these sources with little doubt. And they are happy to trust content when it’s in line with their view of the world.

This is a particular problem for regular readers of biased sources: they’re comfortable with them and unwilling to question them. Indeed, many say they can distinguish false content on their own, without any help. They don’t want a more rounded, objective view.

Fortunately, there are others who are more investigative. Those who want to get the whole story. These are the groups we’ll engage with first, to help fight this problem.

4. It’s about sharing

When receiving content from others, participants are keen to understand: is this legitimate? The reasons are self-centred. Firstly, they don’t want to fall for a hoax, or be seen falling victim to a piece of misinformation. Secondly, they want to verify a story before they share it, to ensure they share content that is genuine and won’t damage their social currency.

That’s why we’re keen to understand ways to discourage arbitrary sharing, to make false, hateful and malicious content less prone to amplification.

5. Views differ with age

Older participants, 55 and over, had different views from the younger groups: they said they were not affected by racism, sexism, homophobia or transphobia, and didn’t express any desire to see it reduced. They do care more about fake news, and are more susceptible to it, often relying on visual signals of truth, such as a logo that appears legitimate.

6. It’s not about scores

Some tools for checking credibility give scores or classifications. Participants want to know how these are calculated, and why they should be trusted. Scores attract a high level of scepticism, and people want to see evidence to back these up. Seeing a ‘bad’ classification for a site they trust makes people react negatively toward the system giving the score, not towards the site itself.

For us, this further reinforces the belief that tools providing simple classifications and scores are not helping to solve the problem, but often exacerbating it. Rating an article as credible or not won’t help.

The right approach is to make the classification transparent, explaining the process and highlighting passages of text that led to specific decisions.
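As a rough illustration of what that could look like, here is a small Python sketch of a transparent assessment. The field names, the single hand-written check and the verdict wording are all assumptions for the example; a real system would back the same structure with trained classifiers and fact-checking sources.

```python
# A sketch of a "transparent" result, as opposed to a bare score. All names and
# checks are illustrative assumptions, not Factmata's actual schema or models.

from dataclasses import dataclass, field

@dataclass
class Evidence:
    passage: str      # the exact span of text that triggered the signal
    signal: str       # which check fired, e.g. "unnamed sources"
    explanation: str  # a plain-English reason the reader can evaluate themselves

@dataclass
class TransparentAssessment:
    verdict: str                                   # a readable judgement, not a number
    evidence: list[Evidence] = field(default_factory=list)
    further_reading: list[str] = field(default_factory=list)  # fact-checks, opposing views

def explain(article_text: str) -> TransparentAssessment:
    """Toy rule-based stand-in for real classifiers: every verdict is backed by
    the passages and reasons that produced it, so readers can judge the
    judgement instead of being asked to trust an opaque number."""
    evidence = []
    for sentence in article_text.split("."):
        if "sources say" in sentence.lower():
            evidence.append(Evidence(
                passage=sentence.strip(),
                signal="unnamed sources",
                explanation="The claim is attributed to unnamed sources and cannot be verified.",
            ))
    verdict = "needs caution" if evidence else "no issues found by these checks"
    return TransparentAssessment(verdict=verdict, evidence=evidence,
                                 further_reading=["https://www.snopes.com/"])

result = explain("Sources say the vote was rigged. Officials have denied this.")
print(result.verdict)
for e in result.evidence:
    print(f'- {e.signal}: "{e.passage}" ({e.explanation})')
```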

Even better, we need to educate readers and encourage their critical thinking — providing subtle ways to improve their behaviour over time. Providing fact-checking sources, websites offering an opposing view, and opinions from credible individuals will all help, and allow people to become confident exploring new sources of information.

7. People want to take part

More than just consuming information about content, participants want to actively take part in the mission. The most inspiring finding from our study was the genuine desire of people to fight this issue: they want to feel part of an inclusive community, tackling the problem together.

The study highlighted how people focus on people, not content. Participants’ concerns are never about the post itself, but about the person behind it. Who is writing this? What is their intention? Are their views aligned with mine?

What we’re solving is, ultimately, not a content problem — but a social problem.

8. Current tools don’t work

A large group of participants already spend time engaging with hateful content or misinformation in an effort to reduce it. They take one of three approaches:

Responding in public

“Mostly I Reply, to people who are misinformed or wrong. I engage by replying or commenting in public.”

“I try restraint, but if others are responding I jump in and unleash my opinion.”

They often post factual links to back up their response.

“On Facebook, I will comment with a Snopes link.”

“I hop in and say ‘that’s not true’, here is an article to prove it.”

Responding in private

“If it’s someone I know personally, I Private Message them. Non-confrontational. In a submissive way. It will get them to listen.”

“I Private Message the person who posted, asking for a chat. Some ignore. Talking works in some cases.”

Using a platform’s own tools

Those who try using these tools often find they require too much effort, or are just too difficult.

“Reporting is too much work for a social post.”

“I might use Facebook Report function for hate speech, but there’s no option for ‘factually inaccurate’”

The overall response from participants was a feeling of futility. Whatever route they choose, they don’t trust the platforms to do anything helpful, and feel that their own actions won’t make a difference. This problem was even worse among minority groups, who were keen to make an effort to reduce hateful content online, but none of whom felt their actions were having a positive effect.

9. Bad actors should face consequences

After I spoke with participants about their motivation for engaging with bad content, it was apparent people have three very different goals. The first is that people who post bad content should face consequences for doing so.

“I want to stop the sick people out there, making teens feel bad about themselves.”

“I would like to see more aggressive ways to block someone’s ability to share hate.”

“I want repeat offenders to be banned — like a driving license.”

Participants are keen to correct others, or to see them face some form of punishment, especially those who post hateful content. At the most extreme, this could mean silencing their voice or even shaming them in public. A less aggressive approach would be for them to face some consequence: some reduction in others’ ability to hear their hateful views. There’s a sense of injustice among participants that people can post hurtful content, hiding behind their online profiles, and face no punishment.

Another variant of this motivation is the desire for others to change and realise their error, or to be forced to see beyond their filter bubble.

“I want to change the mind of someone who influences others. It will affect several.”

Participants who are motivated in this way are burdened by the reality that they can’t change someone’s mind, and by further feelings of futility:

“Some people don’t think they are doing anything wrong.”

“I can’t change someone’s biased belief.”

“I’ve lost faith in Facebook’s ability to actually do anything about the problem.”

And they feel their success rate is low:

“Their response is usually ignoring me. No response. Indifference.”

“More people jump into the conversation, and no-one sees my comment.”

“If you think your goal is to change the mind of the original poster, you won’t win.”

Participants said they want an environment in which they can report bad content and voice criticism without it escalating into a hateful, biased debate. They want to create an environment in which people fight for their reputation, and feel a sense of ownership over the content they spread.

10. Friends should be protected

The second motivation is for people to protect and educate others around them.

“I want my friends to read things that are true and accurate.”

“I want to add contrast to a discussion, give facts, so people don’t take something at face value.”

“I want to educate the person, and ensure other people take on the correct message.”

As well as protecting their friends from seeing false or hateful content, they want proof their work is having an impact — they want to see that bad content is being removed. This group is driven by social good, the need to help others and live within a better community.

11. People just want to be heard

The third, and most common, motivation for participants is the need for their voice to be heard, or some variant of this.

“I want to influence other readers.”

“If I can change one person it’s OK. But if I can change public opinion, that’s more important.”

“If it’s a controversial piece, I want to say: you’re all on Team A, well I’m on Team B.”

These needs encompass the desire to be perceived well by others engaging in the conversation:

“It’s never about winning, but voicing my opinion — don’t leave with a negative lump in my throat.”

“Healthy debate. An argument, like in real life. I don’t need to win, just voice my opinion.”

“Replying is more satisfying than reporting. Perception of others is more important than right or wrong.”

Another big motivator for this group of participants is the need for validation. They want to understand if the wider population agrees with their view on any given piece of content.

The strong underlying motivation behind these actions is a person’s desire to broadcast their own opinions. They are performing to an audience. People’s behaviour in public is very different from that in private.

Summary

People currently see lots of hateful content and misinformation, usually on Facebook, and usually on mobile devices. It’s a problem that can’t be solved by tools available today. To change a person’s view on what they trust is a big challenge and one we’ll solve through education and providing context, not by presenting simple classifications or scores.

People are keen to be involved in our mission, and to help solve this problem together. Their current ways of tackling it are futile: public commenting, private messaging and limited platform tools.

Some are motivated by the need to do good for the world. Some are keen to know that purveyors of bad content will face consequences for their actions. Most want their efforts to be visible, to be perceived well by others, be seen as having superior knowledge, or to win a social challenge.

Get early access

If you found this interesting, you’d probably love to get early access to the tools we’re developing. Alternatively, you can email me: ed.thomas@factmata.com
