A Democratic Problem Requires a Democratic Solution

Andrew Pierce
Published in Extra Newsfeed · 3 min read · Nov 10, 2017

Facebook, Twitter, and Google testifying to Congress on Russian ads | Photo by Chip Somodevilla/Getty Images

There is obviously a Fake News problem. And no, I am not talking about the “Fake News Networks,” as Trump calls them. I am talking about actual fake news, whether it is stories created by the Russians to sow division in our country or stories shared by your grandma on Facebook of the kind once reserved for emails with a subject line that started with “FWD:FWD:FWD:FWD:FWD:”.

Large tech companies are being held accountable not only by their users but by the US government for the recent malicious use of their platforms. They are being grilled by Congress with questions like “How did you not know Russians were using your system for propaganda?”, “Why can’t you stop Fake News from spreading?”, and “Don’t you know what’s fake and what’s not?”. Sure, to a certain extent these companies could be doing better, but the problem is not only a huge technical challenge but a moral one. Should tech companies be responsible for controlling the content that is shared, even if that content is damaging to our country?

There was a NY Times article today about what Facebook is doing to combat fake news created by anti-abortion groups. The article explains that Facebook is only tackling “obvious” Fake News. I am 100% for removing fake articles from Facebook, but can they really fact-check every article? Are there guidelines that say what is “obviously” fake and what is not?

It is a very slippery slope to start restricting the content your users see, especially for a company that supports Net Neutrality; restricting content is everything Net Neutrality stands against. So what is the solution? How can Facebook remain neutral while protecting its users, and our country, from Fake News and propaganda?

Unfortunately I don’t have the solution, but I have an idea of what one could look like. While there is a slice of the population that is susceptible to this kind of propaganda, there is also a large part of the community that can identify Fake News pretty easily. Why not utilize the user base to identify Fake News? Let the users dictate what is fake and what isn’t.

Implement a flag that users can click to identify posts as fake news, along with some metric that shows users how many times a post has been flagged. This way Facebook doesn’t dictate what is real and what is fake; it simply facilitates a system for the users to do so. Then if you see a Facebook post with the headline “How Hillary Clinton caught Parkinson’s from a secret underground sex trafficking ring run by Bill Clinton” but you also see on the post “This article has been flagged 1 Million times,” you may think: huh, maybe this post is fake.
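To make the idea concrete, here is a minimal sketch in Python of what such a flagging mechanism could look like. Everything here (the names, the threshold, the warning text) is invented for illustration; it is not how Facebook actually works.

```python
from dataclasses import dataclass, field

# A hypothetical sketch of crowd-sourced flagging; all names and
# thresholds here are invented for illustration.

FLAG_DISPLAY_THRESHOLD = 1000  # start showing the warning after this many flags


@dataclass
class Post:
    headline: str
    flagged_by: set = field(default_factory=set)  # user ids who flagged this post

    def flag(self, user_id: str) -> None:
        """Record a fake-news flag; a set limits each user to one flag per post."""
        self.flagged_by.add(user_id)

    @property
    def flag_count(self) -> int:
        return len(self.flagged_by)

    def warning_label(self) -> str:
        """Surface the crowd's verdict next to the post instead of removing it."""
        if self.flag_count >= FLAG_DISPLAY_THRESHOLD:
            return f"This article has been flagged {self.flag_count:,} times"
        return ""


post = Post("How Hillary Clinton caught Parkinson's ...")
for i in range(1500):
    post.flag(f"user-{i}")
print(post.warning_label())  # This article has been flagged 1,500 times
```

The key design choice is that the platform only counts and displays flags; it never renders its own verdict on the content.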

The counterargument to this is that the rise of Fake News was driven by the users themselves, so why should we trust the users to dictate what is fake and what’s not? That is a tricky problem, but I think there could be a workaround. Maybe allow users to opt in as trusted reviewers and show them posts suspected of being fake, letting people see posts being shared outside of their normal interests.
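Again purely as a sketch (the queue, the topic sets, and the threshold are all hypothetical, not any real platform’s API), the trusted-reviewer idea might route suspected posts to opted-in users whose usual interests do not overlap the post’s topics, so the review comes from outside the post’s echo chamber:

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of the opt-in reviewer idea; fields and
# thresholds are invented for illustration.

REVIEW_THRESHOLD = 50  # flags needed before a post enters the review queue


@dataclass
class FlaggedPost:
    headline: str
    topics: set       # e.g. {"politics"}
    flag_count: int


def review_queue(posts, reviewer_interests, sample_size=5, seed=None):
    """Pick suspected posts from OUTSIDE the reviewer's usual interests."""
    outside = [
        p for p in posts
        if p.flag_count >= REVIEW_THRESHOLD and not (p.topics & reviewer_interests)
    ]
    rng = random.Random(seed)
    return rng.sample(outside, min(sample_size, len(outside)))


# A politics-heavy reviewer gets shown a flagged health story, not more politics.
posts = [
    FlaggedPost("Miracle cure doctors don't want you to know", {"health"}, 120),
    FlaggedPost("Shocking new poll numbers", {"politics"}, 80),
]
print(review_queue(posts, reviewer_interests={"politics"}, seed=1))
```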

None of these are completely thought-out solutions, but these are the kinds of solutions tech companies need to think about. Morally they have a responsibility to protect their users from propaganda, but they also have a responsibility to remain Net Neutral. Fake News is one of the biggest challenges facing our democracy today, and it requires a democratic solution.
