Can Censorship Make Things Worse?
The case for “Pre-Bunking” in a post-truth world.
We have reached a tipping point in our society, one that causes us all to question what and whom we can trust. We are also starting to ask about the dark forces and motivations behind inauthentic narratives. The topics of misinformation and disinformation have flooded the airwaves and the internet, yet little has been said about how to stop the hemorrhaging of reason. People are experiencing frustration, disbelief, and vulnerability, and it’s time for it to end. The only way that can happen is for individuals to stand up and disrupt the manipulation that is occurring.
To take action, we must first think of misinformation as a disease that spreads like a virus. Once a contagion takes hold, it wreaks havoc as it passes from host to host. From the Black Plague to the 1918 flu pandemic at the end of World War I, outbreaks gather momentum with each person they take down. What is needed, both to curb misinformation and to inform viewers of its sources and the incentives behind the lies, is a kind of “information vaccine.” The goal is to empower everyone to recognize falsehoods and make the personal choice to be inoculated, or to continue spreading the contagion.
It has to be noted that the misinformation problem has moved beyond its initial stage and reached pandemic proportions. To address this, we must understand that there isn’t a single solution but a set of steps that need to be taken. Psychologists have found that even when fake news is called out, it sticks once it enters long-term memory. Instead of waiting for false narratives to launch and then trying to fact-check them, we need proactive “pre-bunking”: preemptive strikes that help the human brain process information accurately.
In this internet-driven age, people receive mass quantities of information every day, and let’s face it: most are too lazy to bother fact-checking. A majority of the populace hides behind their keyboards and shares information without knowing where it came from or whether its root purpose is to manipulate their thinking.
Bring Accountability Back
One of social media’s big successes hasn’t been its original purpose of “connecting people,” but how the platforms have built their trillion-dollar business models. While many feel that free and open exchange is a good thing, they forget that at no time in history have organizations consistently chosen what’s best for society over the potential for profit. Such is the case for each of the social media platforms. Their shareholders expect consistent growth, even if that means giving carte blanche to propaganda dressed up as “news and information.” Once nefarious individuals and countries figured this out, it was open season to sign up and pay for outright lies that have affected every aspect of life. Without accountability, the money continued to flow into the social media companies, and since these platforms are the primary source of information for many, it was a perfect formula for brainwashing.
After months of protest and even appearing before the House Committee, Mark Zuckerberg and Facebook are finally making changes. But are these really the changes that are needed?
In 2018, Facebook instituted a list of policies for their platform:
* Expanding our fact-checking program to new countries
* Expanding our test to fact-check photos and videos
* Increasing the impact of fact-checking by using new techniques, including identifying duplicates and using Claim Review
* Taking action against new kinds of repeat offenders
* Improving measurement and transparency by partnering with academics
It’s questionable whether these initiatives are genuine efforts or public-relations exercises, and whether they are being executed properly. For example, Snopes recently left Facebook’s fact-checking program over differences. Moreover, Facebook is only one segment of the social media problem, and the rest are moving at a snail’s pace to address the manipulation of the public with misinformation. In 2018, Google, Facebook, and Twitter announced an agreement to fight fake news in the EU. While there are lofty plans to meet the requirements, including establishing a “code of conduct,” there haven’t been any reports on results.
YouTube has been at the heart of some of the worst misinformation campaigns, and its business model is probably the worst of all. It depends on long viewing sessions and will take viewers deeper into a rabbit hole that has nothing to do with their interests until they are watching propaganda and even violence. A recent example is the white supremacist/nationalist shooter in New Zealand, who watched YouTube videos that contributed to his radicalization.
We cannot point only to social media, as mainstream media has crafted its own business models around ratings-driven profits rather than accurate reporting. The saddest example came during the 2016 presidential campaign, when then-candidate Donald Trump was given more air time because of his outrageous claims and rhetoric. News organizations of the past would have called him out on his debunked claims and given balanced air time to all of the candidates.
Mainstream media’s willingness to whitewash the news reaffirmed the lack of integrity in reporting and was the final blow to public trust.
Censorship Creates Martyrs
In March 2019, Facebook announced the removal and ban of white nationalist and white separatist content on its platforms.
Our team just returned from a week in Auckland discussing the state of misinformation with our partners at Centrality.AI, where we were interviewed by researchers in the misinformation space.
The actions of the social media companies are suspect, as they begin a path of censorship. If other platforms follow suit, they will demonstrate that they aren’t interested in devoting the time and effort to create a real solution, but are taking the easy way out by banning specific topics, people, and searches. Other tech companies such as Pinterest, Instagram, and YouTube are joining the banning bandwagon, and it’s not the answer.
Censorship isn’t just a “slippery slope”; it throws down a gauntlet that makes people dig in on their ideologies and hold their positions while shouting “bias” at anything they disagree with. Removing access to content will drive deeper and potentially more radical movements, relocating people to nefarious and dangerous websites such as 4chan and 8chan, where the worst of society will amplify bad intentions. When tech companies begin banning, the knee-jerk effect is to create martyrs and rally supporters to the causes of the censored.
“Pre-bunking” through alerts, identification, and source acknowledgment empowers the individual and is our approach to address the growing threat of the so-called “post-truth” world.
- Wasim Khaled / CEO @ Blackbird.AI
The tech companies refused to take this stand when they were asked to do so in international situations. There have been repeated ideological battles with countries such as China, where everyone from Google to Facebook was required to make drastic changes to their product offerings. For tech companies, and social media in particular, to make a complete 180-degree turnaround and embrace censorship and banning at this juncture simply means that they aren’t interested in helping move the internet to a more positive place; they are putting a band-aid on a broken arm.
We have witnessed what happens when topics are banned. Even if the initial steps are successful, it doesn’t take long for things to go wrong. Individuals with personal agendas start making demands, and the banning continues down a never-ending rabbit hole.
Will there be any end to censorship?
Who decides what is right or wrong?
What criteria will they use?
To address the topic of “trust” in what we see, read, and understand, we are in dire need of tools more sophisticated than those being used by the purveyors of misinformation. The last few years allowed Russia’s Internet Research Agency (IRA) to ramp up programs that had been in place for a very long time. We have seen proof that they interfered with our election process in a full-on war of misinformation and lies. Now the rest of the world is aware, and it’s believed that other countries, such as Iran and China, are launching attacks of their own. We stand at the edge of a cliff, and we have the choice to jump over or to turn around and take action.
Addressing misinformation is a vast undertaking, but it can be done. Information needs to be “pre-bunked” before it enters the stream of communication. This means building technology tools to analyze each article, post, and meme. Videos demand even deeper analysis, as deepfakes are becoming so sophisticated that only higher-level technology can tell whether they are real or AI-generated.
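To make the idea of “pre-bunking” analysis concrete, here is a minimal, purely illustrative sketch of a content scorer. Every keyword, domain, and threshold below is a made-up placeholder, and the heuristics are far simpler than the AI analysis the real systems described here would use:

```python
# Illustrative only: a toy "pre-bunk" scorer using simple heuristics.
# All keywords, domains, and weights are hypothetical placeholders.

SENSATIONAL_TERMS = {"shocking", "exposed", "they don't want you to know", "miracle"}
KNOWN_SOURCES = {"apnews.com", "reuters.com"}  # placeholder allowlist

def prebunk_score(text: str, source_domain: str) -> float:
    """Return a 0..1 suspicion score; higher means more likely misinformation."""
    text_lower = text.lower()
    hits = sum(term in text_lower for term in SENSATIONAL_TERMS)
    score = min(hits * 0.25, 0.75)           # sensational language raises suspicion
    if source_domain not in KNOWN_SOURCES:   # unrecognized source raises suspicion
        score += 0.25
    return min(score, 1.0)

# Example usage
print(prebunk_score("Shocking cure exposed!", "example-blog.net"))  # 0.75
print(prebunk_score("Central bank raises rates.", "reuters.com"))   # 0.0
```

A production system would replace these keyword counts with trained models over text, images, and video, but the shape is the same: score content before it spreads, then surface that score to the reader.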
We are in a social realm where artificial intelligence can bring improved context to our lives. We use AI search tools to get answers to questions, expand our minds and horizons, and explore alternatives. What we DON’T want is for AI to make decisions and choices for us. An example of an incredibly useful AI tool that has caused this problem is GPS. We rely on GPS to get us from point A to point B, and we have created a condition where, as people, we no longer know how to get around on our own. AI can actually reduce critical thinking, depending on how it is used.
When we also hand true-or-false, real-or-fake judgments over to AI algorithms, we risk losing our critical thinking altogether.
Some tech companies are currently making attempts at identifying false information, but we need to dig further. As we have seen in fake narratives, purveyors add small tidbits of truth and then wind in their own lies so that the viewer or reader believes the whole. Because these individuals or groups have their own devious purposes, we need to identify the “sources” of the information, including articles, blogs, memes, and videos.
Under no circumstances do we ever want to relinquish our personal ability to choose what to read, see, or view. However, when technology tools take us beyond standard fact-checking and into the “who, what, where, and why,” each person can make an informed choice about whether to continue. As these tools become part of everyday life, they will continue to gain new abilities to keep us informed about the viability and truthfulness of information. Over time, the purveyors should lose their ability to influence and profit, and will turn to alternative approaches. The result will be a society that slowly regains confidence in the information it receives, the memes it shares, and the videos it views.
Blackbird.AI is addressing the misinformation problem head-on with solutions that give individuals the choice of being informed. We provide “Credibility Labels” that flag information that might be suspicious, as well as sources that could have an ulterior agenda for their message.
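As one way to picture what a credibility label might carry, here is a hypothetical sketch of such a record. The field names and summary format are assumptions for illustration, not Blackbird.AI’s actual schema:

```python
# Illustrative only: one possible shape for a "credibility label" attached
# to a piece of content. Field names are hypothetical, not a real schema.
from dataclasses import dataclass, field

@dataclass
class CredibilityLabel:
    source: str                  # who published the content
    source_known: bool           # is the publisher an identified entity?
    signals: list = field(default_factory=list)  # e.g. "sensational language"

    def summary(self) -> str:
        """Render a human-readable label a reader could see alongside content."""
        status = "identified" if self.source_known else "unverified"
        flags = ", ".join(self.signals) if self.signals else "none"
        return f"Source {self.source} ({status}); signals: {flags}"

label = CredibilityLabel("example-blog.net", False, ["sensational language"])
print(label.summary())
```

The point of such a label is exactly what the article argues for: the reader still decides what to consume, but with the source and warning signals made visible.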
We believe that every person wants to be in control of the decisions they make. Our tools aim to identify misinformation while ensuring that everyone can choose what they want to read and see, just with superior context. The Blackbird.AI philosophy is to unmask sources and data uncertainty and give people the tools to make their own decisions.
This Is Why We Fight,
We are fighting the war against misinformation to create a more empowered, critically thinking society.
To find out more about our team and the Blackbird.AI Mission, visit us at www.blackbird.ai