“Fast News Nation”: How to Find Truth in a World of Misinformation

Cameron Stow
Published in Valence Labs
5 min read · Jun 27, 2023


On May 22, 2023, reports of an explosion outside the gates of the Pentagon in the US began to circulate. Social media platforms were ablaze with the news, and an image purporting to depict a fiery blast at the Pentagon was being shared by verified Twitter accounts. Stock prices briefly dipped as uncertainty spread: was America under attack? Twitter users scrambled to unearth more information and identify possible attackers. It soon emerged, however, that it was all a false alarm. The image that had ignited the flurry of reports and the ensuing panic was a fake: an AI-generated illusion of a massive smoke plume near the Pentagon.

The phenomenon of fake or photoshopped images and misinformation is nothing new. However, the easy availability of tools like ChatGPT and Midjourney now enables users to generate ultra-realistic images like the one above within seconds. But is this an indictment of AI tools? Absolutely not. Rather, we now live in a culture that is quick to share information and slow to verify its accuracy. The fact that the image in question was AI-generated is secondary to the larger issue: its portrayal as a real event. Media outlets failed to exercise due diligence in verifying the image's source, illustrating how a single narrative can kindle a wildfire of misinformation across every channel.

So what is the solution? If we cannot rely on media outlets to authenticate the information or content they broadcast, who, or what, can we trust? This article proposes a way to instill transparency into content publication and tracking across media outlets. It underscores the urgency of media and publisher verification and outlines how these outlets can bolster consumer trust.

Current Impact of Misinformation: How Trusting Are Consumers?

According to the 2023 Edelman Trust Barometer Global Report¹, only 43% of people trust the media in the United States. Gallup reports that 38% of people have “no trust at all” in newspapers, TV, and radio.² With the rise of reporting across social media, political bias, and rapid technological innovation, it is imperative that media outlets and consumers find ways to bring back trust in the information being shared. One way to accomplish this is to add transparency to the publishing process, offering readers a way to verify source material, interviews, images, and even changes being made to the published media.

The BBC recently launched its BBC Verify³ initiative, saying, “BBC Verify comprises about 60 journalists who will form a highly specialized operation with a range of forensic investigative skills and open source intelligence capabilities at their fingertips.” They say these journalists will “be fact-checking, verifying video, countering disinformation, analyzing data and — crucially — explaining complex stories in the pursuit of truth.” This initiative is an excellent step in the right direction, but traditional journalistic documentation can still be altered after the fact and can carry bias that readers may reasonably find misleading.

Our Solution: Full Transparency

Our previous articles discussed the power of leveraging Decentralized Identifiers (DIDs) and Certificates of Authenticity (COAs) to authenticate and verify people and assets on-chain. Combining these two innovations unlocks added security and transparency for users across many verticals. The ability not only to verify an asset but to track its lineage opens up possibilities to protect users from more than just fraud and theft. It can protect users from something we continue to face on a daily basis: misinformation.
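To make that combination concrete, the pieces involved can be pictured as two small records: a decentralized identity for a publisher or author, and a certificate that points back to those identities and, via a parent link, to earlier versions of the asset. The field names and `did:example` identifiers below are illustrative assumptions, not Valence's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Simplified, hypothetical records. Real DID documents follow the W3C
# Decentralized Identifiers data model; the exact COA schema used by
# Valence/AliceNet is not reproduced here.

@dataclass
class DecentralizedIdentity:
    did: str         # e.g. "did:example:valence-labs"
    public_key: str  # key used to check signatures made by the holder

@dataclass
class CertificateOfAuthenticity:
    asset_hash: str                   # fingerprint of the certified asset
    issuer_did: str                   # DID of the verified publisher
    author_did: str                   # DID of the individual author
    issued_at: str                    # publication timestamp (ISO 8601)
    parent_coa: Optional[str] = None  # previous version, forming lineage
```

Because each certificate can reference the one it supersedes, walking the `parent_coa` chain reconstructs the full revision history of a piece of content.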

Unlike BBC Verify, our approach to verifying content involves leveraging blockchain technology and verifiable credentials, creating a trustless system offering full transparency into the lineage of the information provided. Valence and AliceNet create verifiable proofs in COAs and seamlessly integrate these with existing platforms. Take, for example, Verizon’s Full Transparency Initiative. Verizon partnered with AliceNet to bring transparency to their newsroom. Their site says, “Our Full Transparency initiative sends a clear message that transparency and accountability should be top priorities for all organizations. It should also speak to their entire audience base — from employees and customers to investors and journalists.” If organizations want to build trust with their audience, they must first be transparent and deserving of that trust. This can be accomplished with our COA technology.

How a Full Transparency Model Works

The Verizon Full Transparency Initiative was established with the aim of offering content publishers a blockchain-powered authentication system for all forms of media or content. In light of recent advancements in AI, the prevalence of deepfakes, and instances of false representation, public trust in the media is declining. Existing media verification and revision-tracking mechanisms are proving inadequate and fail to provide the level of security necessary to restore consumer confidence. This initiative enables content publishers to authenticate their organizations and their approved publishers, thereby ensuring the authenticity and verifiability of all published content. This is achieved through digital decentralized identities and certificates of authenticity. Organizations can use this system to assign each piece of content a unique certificate (COA) linked to their verified publisher and author identities before recording it on the blockchain.
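As a rough sketch of that flow, issuing a certificate amounts to fingerprinting the content, binding the fingerprint to the publisher's and author's identities with a timestamp, and signing the bundle before it is anchored on-chain. The sketch below is a simplification under stated assumptions: an HMAC stands in for the asymmetric signature a real system would use, and the `did:` identifiers and key are invented for illustration.

```python
import hashlib
import hmac
import json
import time

def issue_coa(content: bytes, publisher_did: str, author_did: str,
              publisher_key: bytes) -> dict:
    """Create a certificate-of-authenticity-style record for `content`.

    Illustrative only: a production system would sign with the
    publisher's private key (e.g. Ed25519) and write the record to a
    blockchain; HMAC-SHA256 stands in for that signature here.
    """
    record = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "publisher": publisher_did,
        "author": author_did,
        "published_at": int(time.time()),
    }
    # Sign a canonical (sorted-key) serialization of the record.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(publisher_key, payload,
                                   hashlib.sha256).hexdigest()
    return record

coa = issue_coa(b"<article body>", "did:example:newsroom",
                "did:example:reporter", publisher_key=b"demo-secret")
```

Anyone holding the publisher's verification key can recompute the signature over the same canonical payload and confirm that the record was issued by the claimed publisher and has not been altered since.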

COAs unlock the potential to verify and authenticate data, media, digital or physical assets, and more. By leveraging COA technology, we offer readers transparency into the important information of every piece of content published. We can verify the organization or publisher, author, time of publication, and more. By doing so, readers know that the information they receive is valid and has not been tampered with.
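The reader-side check is simple in principle: recompute the content's fingerprint and compare it against the hash frozen into the certificate at publication time. A minimal sketch, assuming the COA is available as a plain record with a `content_hash` field:

```python
import hashlib

def content_matches_coa(content: bytes, coa: dict) -> bool:
    """True iff `content` is byte-for-byte what the COA certified."""
    return hashlib.sha256(content).hexdigest() == coa["content_hash"]

article = b"Pentagon 'explosion' image confirmed to be AI-generated."
coa = {"content_hash": hashlib.sha256(article).hexdigest()}

assert content_matches_coa(article, coa)             # untouched copy passes
assert not content_matches_coa(article + b"!", coa)  # any edit is detected
```

Because a cryptographic hash changes unpredictably with even a single-byte edit, a silently altered caption, quote, or timestamp will fail this check.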

Other Industry Use Cases

The use cases for verification systems like Full Transparency expand beyond the media landscape. Art, supply chain, big data, science, academia, and even healthcare can all benefit from a transparent verification system offering individuals and institutions valuable insight into the lineage and veracity of information.

With initiatives like Full Transparency, users can be sure that the content they engage with is secure and verifiable. We must create a system of transparency that offers users insight into when content is created, who created it, and when it is edited or altered in any way. Only then can we bridge the chasm of trust between the media and consumers.
