Fact Checked! Are automated fact checkers the solution to fake news? (Part 1)
Maybe you’ve seen something like this:
Or came across this on your feed:
Features that flag questionable content are a recent phenomenon. But before platforms started labeling misleading information, fact-checking began on independent websites, then leveled up into automated fact-checking (AFC) systems. Most AFC initiatives are independent, non-profit organizations that develop tools to combat potentially false content online.
So, how do AFC systems actually work?
Put simply, AFC technologies are a type of artificial intelligence that find factual claims, verify them, and correct them in real time.
Step 1: Identification
An AFC tool starts by scanning content, from text to live speech, to find “checkable claims.” It separates check-worthy claims from opinions and other miscellaneous statements using machine learning and natural language processing (NLP), a branch of AI that interprets human language.
For example, ClaimBuster was trained on roughly 20,000 sentences from US presidential debates to differentiate checkable claims from non-checkable ones using NLP and machine learning.
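Real systems like ClaimBuster use supervised models trained on thousands of labeled sentences. As a rough illustration of the identification step only (a toy heuristic, not ClaimBuster’s actual method), a sketch might flag numeric statements as check-worthy and filter out obvious opinions:

```python
import re

def is_checkable(sentence: str) -> bool:
    """Toy stand-in for a trained claim-detection model.

    Flags a sentence as check-worthy if it contains a number --
    a crude proxy for the statistical claims real AFC classifiers
    are trained to spot -- and skips obvious opinion statements.
    """
    opinion_markers = ("i think", "i believe", "in my opinion")
    lowered = sentence.lower()
    if any(marker in lowered for marker in opinion_markers):
        return False  # opinions are not checkable claims
    # digits (counts, percentages, years) suggest a factual claim
    return bool(re.search(r"\d", sentence))

print(is_checkable("The US GDP rose 4.3% in 1998"))        # True
print(is_checkable("I think the economy is doing great"))  # False
```

A production classifier would use learned features rather than a regex, but the task is the same: split a stream of sentences into “worth checking” and “skip.”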
Step 2: Verification
Once a claim is found, the AFC tool has to verify whether it’s true. The claim can be compared against previous fact checks within the AFC system, cross-checked against external libraries from other AFC efforts, or verified against official databases. Some AFC tools merely label the claim as true/false, while others rate it on a scale.
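As a minimal sketch of the “compare against previous fact checks” route (the claim store, verdicts, and similarity threshold below are invented for illustration), an incoming claim can be matched to earlier checks by word overlap:

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Hypothetical store of previously fact-checked claims and verdicts
PREVIOUS_CHECKS = {
    "the us gdp rose 4.3% in 1998": "true",
    "wyoming has the smallest population in the us": "true",
}

def verify(claim: str, threshold: float = 0.5) -> str:
    """Return the verdict of the closest previous check,
    or 'unverified' if nothing is similar enough."""
    best = max(PREVIOUS_CHECKS, key=lambda prev: jaccard(claim, prev))
    if jaccard(claim, best) >= threshold:
        return PREVIOUS_CHECKS[best]
    return "unverified"

print(verify("The US GDP rose 4.3% in 1998"))  # matches a stored check
print(verify("The moon is made of cheese"))    # no close match
```

Real systems use far richer matching (embeddings, entity linking), but the shape is the same: retrieve the nearest known claim, reuse its verdict, and fall back to “unverified” when nothing matches.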
Step 3: Correction
Once a claim has been labeled as untrue, it needs to be corrected. Bot-generated fact checks can appear instantly, or this step can be handed off to human fact-checkers in newsrooms.
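That hand-off between bot and newsroom can be sketched as a simple routing rule (the confidence threshold and review queue here are assumptions for illustration, not any particular system’s design):

```python
def route_correction(claim: str, verdict: str, confidence: float,
                     human_queue: list) -> str:
    """Publish an automatic correction when the system is confident;
    otherwise defer the claim to human fact-checkers."""
    if confidence >= 0.9:
        return f"FACT CHECK: '{claim}' is rated {verdict}."
    human_queue.append(claim)  # low confidence -> newsroom review
    return "queued for human review"

queue = []
print(route_correction("The US GDP rose 10% in 1998", "false", 0.95, queue))
print(route_correction("The economy improved last year", "false", 0.4, queue))
print(queue)  # claims awaiting human fact-checkers
```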
Seems pretty straightforward, right? In actual practice, the waters get a little murkier. Fortunately, researchers and practitioners largely agree on what AFC does well and where it falls short.
The Good News:
- AFC tools can scrape and verify vast quantities of content in seconds, instead of hours.
- They can assess simple, factual statements fairly well. These are statements that pair a noun with a numerical value, like “The US GDP rose 4.3% in 1998”.
- They are great assistants to human fact checkers who need to streamline their work! Identifying potential claims to investigate saves hours of work for journalists and newsrooms.
The Bad News:
- To fact check, there need to be… well, facts. AFC tools rely on access to reliable, human-compiled sources of data and information. When that access falls through, their efficacy hits a roadblock.
- NLP is not completely up to speed with human speech. Humans speak in roundabout ways: we refer back to previous statements, imply things, and have linguistic differences. AI just isn’t advanced enough yet to keep up and extract checkable claims. “Relative” statements that link multiple nouns prove troublesome, like “the state with the smallest population in the US is Wyoming”.
- Verification of a claim might rely on understanding context, combining information from multiple sources, and having a sensitivity to implications. Again, AI isn’t up to the task.
- Fact-checking implies there is one ground truth, but many claims don’t fall on a true/false binary.
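The gap between a simple statement and a “relative” one shows up clearly in code. Using invented reference data (illustrative numbers only), checking the simple claim is one direct lookup, while the relative claim forces a comparison across every entity in the dataset:

```python
# Hypothetical reference data -- illustrative numbers only
GDP_GROWTH = {1998: 4.3}
POPULATION = {"Wyoming": 576_851, "Vermont": 643_077, "Alaska": 733_391}

# Simple claim: "The US GDP rose 4.3% in 1998" -> a single lookup
simple_claim_true = abs(GDP_GROWTH[1998] - 4.3) < 0.01

# Relative claim: "Wyoming has the smallest population in the US" ->
# must aggregate over the whole dataset before answering
relative_claim_true = min(POPULATION, key=POPULATION.get) == "Wyoming"

print(simple_claim_true, relative_claim_true)
```

Multiply that by missing data, ambiguous entity names, and claims that need several sources combined, and it becomes clear why relative statements trip up current systems.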
AFC technology still relies heavily on human intervention, and this is likely to remain the case for the foreseeable future. Full Fact, the UK’s most notable fact checker, states on its website:
“There are a lot of people who say that artificial intelligence and machine learning is a panacea, but we have been at the front lines of fact checking since 2010…Humans aren’t going anywhere anytime soon — and nor would we want them to be.”
When it comes down to it, the journalistic process is too nuanced to be handed over to still-developing AI. AFC tools are designed only to empower human fact checkers by prioritizing claims and whittling down repeated checks.
Misinformation is an urgent concern, but AFC tools should not be mistaken for a solution to fake news, even if their scale and speed increase. Rooting out the major purveyors of fake news, like bots, and examining how a platform’s infrastructure helps spread misinformation is a step in the right direction.
humanID is a new anonymous online identity that blocks bots and social media manipulation. If you care about privacy and protecting free speech, consider supporting humanID at www.human-id.org, and follow us on Twitter & LinkedIn.
All opinions and views expressed are those of the author, and do not necessarily reflect the position of humanID.