Tell me how an algorithm will “verify sources”?
You’re suggesting that an algorithm can determine not just what’s fake (untrue) but also what’s “hyper-partisan”?
And that’s just for starters. You’re suggesting it will generalize those atomic judgements, eventually making a binary ruling on each source, flipping some to “unverified.” This is the Internet we’re talking about. There are tens of thousands of sources, with new ones appearing every day.
Thousands of years of philosophy have not produced a clear way to know what’s true. Most philosophers consider it impossible. You can’t program it. You need the judgement of a human editor to get close.
You imply that crowd curation will play a part. The crowd already curates. It’s called the share button. When I get a share from crazy Aunt Sally, she’s the primary source. I have already “un-verified” Aunt Sally as a source. I can even block her stories from my feed. (True story, name changed.) The same masses that are blamed for sharing all the fake stuff will be the curators. What’s changed?
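The curation described above already amounts to a trivial per-reader filter. Here is a minimal sketch of that idea; the names and stories are invented for illustration, echoing the author’s “Aunt Sally” example:

```python
# Hypothetical sketch: the "un-verifying" readers already do is a personal block list.
blocked_sources = {"Aunt Sally"}  # invented name, following the author's example

feed = [
    {"source": "Aunt Sally", "story": "Miracle cure doctors hate!"},
    {"source": "Local Paper", "story": "Council votes on budget today"},
]

# Keep only stories whose source the reader has not blocked.
curated = [item for item in feed if item["source"] not in blocked_sources]
```

Nothing about this requires a central algorithm to rule on truth: each reader maintains their own list, which is exactly the share-button curation the paragraph describes.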
And tell me how you program a “hyper-partisan” ruling. Tens of millions of people consider MSNBC to be hyper-partisan. Tens of millions of other people consider Fox News to be hyper-partisan. Every day The New York Times and Washington Post publish arguably untrue statements presented as facts. Much more of what they publish is indisputably partisan, and not just the op-eds. Just when does partisan become “hyper”? This is as subjective as you can possibly get.
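To see the problem, sketch what such a ruling would have to look like in code. Everything here is hypothetical: the sources, the scores, and above all the cutoff. Any binary “hyper-partisan” label ultimately hides an arbitrary threshold like the one below:

```python
# Hypothetical sketch: a binary "hyper-partisan" ruling reduces to an arbitrary cutoff.
# The scores below are invented; no real scoring method is implied.

partisan_scores = {  # imagined 0-to-1 "partisanship" scores
    "Source A": 0.62,
    "Source B": 0.71,
    "Source C": 0.79,
}

HYPER_THRESHOLD = 0.7  # who picks this number, and why not 0.65 or 0.75?

def is_hyper_partisan(source: str) -> bool:
    """A binary ruling collapses a continuous, contested judgement into one bit."""
    return partisan_scores[source] >= HYPER_THRESHOLD
```

Move the threshold a few hundredths and Source B flips categories; the subjectivity never goes away, it just gets buried in a constant.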
The idea that an algorithm could neutrally “verify sources” is an epic fever dream.