True, But Proposed Solutions Are Useless
Like you, I’ve taught at the university level for decades. People are not going to become critical thinkers on their own. Nor, given the pace of life and the firehose of information, are they going to triple-check content before sharing it. And every journalist is too busy trying to figure out how to survive.
You referenced Howard Rheingold, who is a friend and does know a lot about this problem. But he also knows the problem is far deeper and wider than the current handwringing about fake news.
The base issue is not today’s media platforms and their failings on fake news. It is the direction and nature of civilization and the asymmetry of knowledge. In this respect, everyone on the planet now shares this problem. It either gets dramatically better or dramatically worse, because the half-steps now being discussed are meaningless: they merely kick the can down the road.
What’s clear is that we’re moving toward ambient knowledge systems. For these to succeed in the market, they MUST provide reliable content. Said differently, if you think reading fake, erroneous news is bad now, what will you think when your environment is whispering in your ear not only the news but information about anything you deem important?
And as for those like FB &amp; G that want to give us an algorithm to fix the problem, the only way they ultimately succeed is with AI systems that literally understand spoken and written language, its nuances and intent. Of course, when that happens, and it will, we’ll have a different problem: power-sharing with machines that may or may not want to be truthful.
So, short of reverting to authoritarian retraining camps or pausing the pace of civilization, we need an institutionalized human-machine solution. My Medium publication, A Passion to Evolve, has a number of articles that address these topics in detail. For example: