2019 is the year we finally realized that our Silicon Valley heroes can’t save us.
Too Big to Fight Misinformation
“Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing.”
- George Orwell
Social media giants, including Facebook, Twitter, and YouTube, were well aware of the misinformation and lies being spread across their platforms. Their business models were built on unfettered engagement, and the sheer volume of that engagement was making them billions in ad revenue. It is fair to say this behavior would have continued were it not for public and government outrage; even then, consequences were initially denied rather than admitted. Over the last months, each of these companies has put up a good front to appease its accusers, but instead of taking action that mattered in the fight against information disorder, they have stumbled and failed. Their inner fight pits what is right against what is profitable.
Rather than address the issue of nefarious sources designed to manipulate the viewer, social media platforms had a free-for-all selling demographic data to whoever had the money for it. Every decision was made for the shareholders, which left the companies turning a blind eye to the massive volume of lies being crafted to change personal and public opinion. To say that things spun “out of control” is an understatement. It wasn’t until the cries of anger over the deliberate manipulation of the 2016 campaign and election were heard that the social media companies were called to task by the press and governments.
Anyone who has worked in a business environment knows that there are individuals experienced in damage control. They can review what has happened, recommend methods of change, and then craft just the right message to soothe the cuts and bruises of those most affected. This doesn’t mean the “methods” will correct the issues. What it does mean is that they usually follow the path of least resistance, which equates to the quickest and most cost-effective option. In the case of the social media companies, initially this was the “report” button; now they have transitioned to censorship.
Once you walk down the road of censorship, those in control take on more power than they should be allowed. Social media staff become the administrators of their personal ideas of what you can and cannot see, and those adamant about pushing their own agenda become martyrs for their cause. Questions begin to arise as to where censorship starts, but more importantly, where it ends. At what point does something get censored over a difference of opinion rather than a difference of facts?
“The only valid censorship of ideas is the right of people not to listen.”
- Tom Smothers
Failed Promises Started in the EU
Under the pressure of threatened regulation, in October 2018 the major internet platform providers Google, Facebook, and Twitter established a voluntary code of conduct to be implemented in the EU. Their promised goal was to reduce the threat posed by fraudulently purchased political advertisements and the posting of “fake news” articles, and the effort was meant to serve as a template for the 2020 elections in America. While this sounds like a grand and honorable direction, it was fraught with pie-in-the-sky promises and little in the way of results. By January 2019, there had been only minor follow-through in preparation for the elections happening across Europe in May. The situation was so bad that the European Commission issued a joint statement calling the social media platforms out for failing to provide policies, details, benchmarks, and reporting, and for lacking sufficient resources to meet the obligations they had committed to.
The report cited that Facebook had not provided details on its policies and efforts regarding political advertisement placement, had not delivered the promised Europe-wide archive for “issue and political advertising,” and had not supplied the number of fake accounts removed due to “malicious activities targeting specifically the European Union.”
In the case of Google, the reporting supplied to the EC did detail actions the company had set in place to improve its advertisement oversight for targeting EU citizens, but the EU was not satisfied with the way Google measured its results. The data given was vague, without clarity as to what type of actions were taken to address the disinformation. While Google did issue a new policy on election advertising in January 2019, including a transparency report, there was no evidence that the company had done anything to implement it.
Twitter was probably the worst of the three companies, as it didn’t even bother to supply a report to the EU. Instead, on February 19, 2019, Twitter announced plans to expand its political advertising transparency report. The company did release five additional state-backed information campaign data sets, publicly downloadable, that included posts from accounts connected to campaigns by Russia, Iran, Bangladesh, and Venezuela. However, Twitter gave no details on its methods of measuring progress in identifying such activities.
The bottom line is that the commissioners are requiring full compliance from these major internet organizations by the May 2019 European Parliament elections and will be reviewing and assessing all actions within 12 months. If the commissioners find that the companies have not complied satisfactorily, they will propose further measures, including regulatory ones.
Although YouTube wasn’t included on this list from the EU commissioners, it is owned by Google, and YouTube has been a significant contributor to misinformation, propaganda, and the radicalization of many extremist groups, including recruitment for Islamic extremism. Instead of stepping up to the plate and exercising due diligence, the company fought against the passage of the EU’s Article 13, which was designed to help curb misinformation. The industry lobby’s recommendation to require user-uploaded content platforms to install upload filters to detect infringing content before it becomes publicly available was adopted as part of a rewritten Article 13. Apparently this didn’t make YouTube happy, as it sent out blistering announcements that the amendments to Article 13 “threaten to ‘shut down’ the ability of millions of people to upload content to sites like YouTube,” could prevent EU users from viewing content that is already live on the platform, and threaten “hundreds of thousands” of jobs.
Censorship is NOT the Answer
In an attempt to appear as though the internet platforms are doing something “for your own good,” Facebook and Twitter have taken some steps that are supposedly designed to help stop the false representations that have been manipulating the public. Facebook added a new third-party fact-checking organization; additional requirements and labels for “issues” ads and political candidates; user ratings to sort out false news; and a page info and ad insights section designed to inform users where the managers of a page are located and which additional names the page might have adopted.
Twitter has incorporated new API restrictions that limit mass actions by bots, a historical problem on the platform, as well as a new badges-and-tools area that provides more transparency for political content. It has also been removing fake profiles and bots at an increased rate. The problem with each of these actions is that part of them is rooted in censorship, while another layer requires additional steps that users must take to differentiate and identify fake news. Neither works in today’s internet setting: according to a Knight Foundation report, Twitter accounts continued to be linked to conspiracy and fake news publishers, and over 80% of the accounts found to have spread misinformation during the 2016 election are still active, releasing millions of tweets each day.
While the actions of the internet companies might make a small dent, none of these steps make it easy for users to understand what they are seeing, reading, or watching, and most users will simply not take the time to investigate and research. Outright censoring puts the power to decide what we all see, hear, and read in the hands of the internet companies, and that is a slippery slope with no end. It should be thoroughly understood that any attempted “fixes” for misinformation will never be allowed to work against these companies’ business model for profit.
We have entered an age when the technologies that we are designing must offer additional layers of identification and control so that we can help to ensure that we win the battle against misinformation. People have an innate desire to know the truth and are becoming outraged at the idea that others are manipulating them for profit and personal agendas.
We think we’ve found a way to help.
We believe that each individual needs a way to immediately identify what they are viewing and where it was originally sourced. We have designed sophisticated technology that provides context, with validated proof, for the topics, images, sites, and sources that we suspect of spreading misinformation.
Our goal is to make sure that individuals and organizations can make personal choices about what they want to see and know, and we arm you with the tools to help in those decisions. We work to “pre-bunk,” exposing those who have ulterior motives, instead of drawing hard lines between fact and fiction. We know that censorship creates an entirely new monster that will slowly eat away at the freedom of choice that is the basic foundation of our democracy, and that a binary “credibility label” often digs people even deeper into their belief systems.
As our proof points accumulate, the purveyors of fake news, deepfake AI, false and misleading memes, and websites designed to harm will lose the profit they so eagerly desire. Each individual needs the ability to be in control and stop the manipulation.
This Is Why We Fight
We are fighting the war against misinformation to create a more empowered, critically thinking society.
To find out more about our team and the Blackbird.AI Mission, visit us at www.blackbird.ai
Stay tuned as we launch a demo that will show you how to consume your news with guidance while maintaining and strengthening your critical thinking.