Why I think Facebook is not going far enough to address Fake News
Facebook has started to flag some "Fake News" articles that appear in your feed, but this will do little to help society avoid making poorly informed choices based on the "news" people see there.
I've given the flagging system a very quick test. And I mean quick, so this is not exhaustive. I suspect that my findings would be more worrying if I dug a little deeper. Here's my take:
Facebook’s approach provides consumers with a false sense of security
- It doesn’t capture most known Fake News sites. Why not? I mean, don’t they know infowars.com is fake?
- Facebook allows known Fake News websites to have a Profile page on Facebook — in fact, many have “Verified” accounts. Why?
- "Disputed" is the wrong term in my opinion. These websites are confirmed as Fake. They are not being "disputed", and they are not up for debate; they are proven to be Fake. This gives consumers a false sense of security: non-flagged content may be assumed to be true by even more people.
- Facebook is working with too few organizations. There is a need for a central source/API that all fact checkers can use to publish new flags/labels, so they are recognized by sites like Facebook as quickly as possible. Still, this is a great start.
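To make the central-source idea above concrete, here is a minimal sketch of what a shared fact-check registry could look like. Everything here is hypothetical — the `FactCheckFlag` fields and the `FlagRegistry` class are illustrative assumptions, not any real fact-checking API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FactCheckFlag:
    # Hypothetical record a fact checker would publish to a shared registry.
    url: str       # article or site being flagged
    verdict: str   # e.g. "fake", "hoax", "fabricated" -- not "disputed"
    checker: str   # organization that issued the flag
    evidence: str  # link to the published fact check

class FlagRegistry:
    """Hypothetical central registry that platforms like Facebook could query."""

    def __init__(self):
        self._flags = {}

    def publish(self, flag: FactCheckFlag) -> None:
        # Any participating fact checker can add a flag for a URL.
        self._flags.setdefault(flag.url, []).append(flag)

    def lookup(self, url: str) -> list:
        # A platform checks an article before surfacing it in a feed.
        return self._flags.get(url, [])

registry = FlagRegistry()
registry.publish(FactCheckFlag(
    url="example-fake-news.test/story",
    verdict="fabricated",
    checker="Example Fact Checkers",
    evidence="example-factcheck.test/report/123",
))
print([f.verdict for f in registry.lookup("example-fake-news.test/story")])
```

The point of a shared registry like this is that a flag published by any one fact checker becomes visible to every platform that queries it, instead of each platform maintaining its own short partner list.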
Here’s what Facebook needs to do:
- Change their flag from “Disputed” to “Fake” or “Hoax” or “Fabricated” — or something similar.
- Flag all known (confirmed by the sources that they use) Fake News sites/articles.
- Remove from Facebook itself all profile pages that belong to known Fake News site owners, as that's where a large percentage of the crap comes from.
- Remove Fake News from their suggested posts.
- Include a flag for Alt-right content such as breitbart.com.
There’s a lot more that can be done to improve the existing system, but this would be a great first release.