Misinformation and fake news

Will it ever end?

Aroshi Ghosh
Student Spectator
Dec 8, 2020


On November 7th, most major news outlets, including the AP, Fox, and CNN, announced the results of the 2020 presidential election, and most people seemed to breathe a sigh of relief that the uncertainty was over.

Campaign lawn signs were tucked back into garages, and the Almaden community seemed to have returned to its usual peaceful state. However, a surge of fake news began appearing on social communication channels like WhatsApp, Facebook, Reddit, and YouTube, claiming without any proof or evidence that the election had been stolen. This misinformation took on a more sinister overtone when some alternative media outlets politicized mask-wearing and denied its efficacy against COVID-19 by equating the pandemic to the flu. Even the neighborhood community app, NextDoor, is inadvertently helping to spread misinformation about the pandemic and other conspiracy theories.

People who were disappointed because their candidate had lost jumped on the bandwagon and began forwarding character assassinations to friends and family in a bid to discredit the democratic process and deflect blame for the American response to the pandemic.

Fake news and free speech

While free speech is a founding principle of a thriving civic life and democracy, we must recognize that misinformation, or fake news, especially when instigated through technology, does not fall into the same category. In fact, the notion of free speech, which is protected by the First Amendment, is often misused to justify spreading these alternate realities through vague, unconfirmed media clips or partisan outlets.

Unfortunately, the mediums used to disseminate fake information to gullible people, and the craftiness of its presentation, drown out the saner voices and pit neighbors against each other. The burden of separating the wheat from the chaff falls to the general public. In most cases, people do not have the technical knowledge to identify how these fake news outlets operate or how they leverage the emotions of ordinary citizens to promote their agenda.

As citizens, it is our duty to educate ourselves on the nuances of ideology and issues and not let misinformation agitate us or muddy the waters. While one may argue that traditional media should not be the gatekeeper of news, alternative media outlets that do not have a standard to uphold are more often guilty of sensationalizing news headlines in their quest for eyeballs on questionable content.

How does fake news operate?

When fake news is repeated often enough, it starts to appear to be the truth. It can seem as if we have lost all common sense, but we may simply be victims of propaganda and technology. Fake text generators, sometimes called “synthetic text” or “readfakes,” use artificial intelligence algorithms to create emotionally charged messages voicing seemingly logical concerns, which are then unleashed on an unsuspecting audience. Such software targets people of specific beliefs based on their online presence and social media profiles. Additionally, media platforms can now scale and reach a large audience with the click of a button.

During recent testimony at a House Intelligence Committee hearing, Jack Clark, policy director of OpenAI, said that fake media generated using artificial intelligence language models like GPT-3 “has the potential to impersonate people who have created a lot of text online and may easily create troll-grade propaganda for social networks”. Unlike Google’s autocomplete, which uses predictive text to offer one-word suggestions for completing sentences, GPT-3 can generate entire paragraphs in a given style.

While the intended purpose of the technology is to enable better interaction between computers and human beings, it can easily be leveraged for more nefarious purposes, generating artifacts such as fake Reddit threads, short stories, poems, and restaurant reviews. Deep fake images and videos generated using machine learning have also been used for propaganda, making people appear to say and do things that they never did in reality.
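To get a feel for how even a crude “synthetic text” generator can mimic a style, here is a minimal sketch using a simple Markov chain over word pairs. This is not how GPT-3 works (GPT-3 is a vastly larger neural network), and the training snippet is my own invented illustration, but it shows the basic idea of producing new text that echoes the patterns of its source:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the source text."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=12, seed=0):
    """Walk the chain from a start word to produce text in the source's style."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# A tiny made-up "training" snippet; real generators train on huge corpora.
sample = ("the election was stolen they say the election was rigged "
          "they say the count was wrong")
chain = build_chain(sample)
print(generate(chain, "the"))
```

Fed enough real posts, even this toy model starts to echo their vocabulary and cadence, which hints at why far more capable neural models are so effective at producing plausible-sounding propaganda.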

These technologies usually trigger people by generating content on controversial topics like election fraud or immigration. So, while technology may be responsible for creating and spreading misinformation, it is not only a technological problem, because human beings are equally active in circulating misinformation within their social groups.

How does fake news harm us?

While fake news may not succeed in changing people’s minds, hearts, and fundamental beliefs, it does have the potential to cause disharmony and violence. Misinformation and fake news are not only the “next global political threat”; they also spill into other areas like health, finance, culture, history, and lifestyle.

Consider a few examples: people already fatigued by the limitations imposed by the pandemic disregard the need for consistent precautions; an individual invests his life savings based on fake financial forecasts and trends; or frustration fuels the belief that certain ethnic communities are responsible for spreading the virus, making them targets of hate crimes. In all these cases, fake news causes real harm, and it is up to us to discourage the spread of misinformation by not only educating ourselves but also possibly using technology to counter it.

Fake news impacts not only the elderly but also young people, who are perhaps more aware of how digital technology operates. While older people may be more prone to unknowingly sharing and circulating fake news, all people are ultimately vulnerable regardless of age, because fake news is geared to target the individual’s emotions.

So, how do we counter fake news?

We can leverage technology to spot fake news. Many social media companies, like Facebook and Twitter, are combating misinformation through artificial intelligence. Facebook’s partnership with fact-checkers such as Snopes, and Google’s Jigsaw unit, which builds tools to detect toxic language, are examples of technology-driven solutions. However, though machine learning-driven fact-checkers can identify fake stories and combat bots, corporations are only motivated to keep their own platforms clear of fake information to avoid legal liability. Ultimately, it requires numerous people across the world to verify the multiple sources and contexts from which information is generated. No single entity can take responsibility for cleaning up the plethora of misinformation available online.
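Production systems like Jigsaw’s use large trained neural models and human review, but the core idea of automatically flagging suspect posts can be sketched with a toy keyword scorer. Everything below, including the red-flag phrase list and the threshold, is purely illustrative and is not any platform’s actual method:

```python
# Toy misinformation "red flag" scorer. Real fact-checking pipelines use
# trained classifiers and human reviewers; this only illustrates the idea
# of scoring a post and routing high scorers to review.
RED_FLAGS = [
    "stolen election",
    "they don't want you to know",
    "share before it's deleted",
    "miracle cure",
]

def flag_score(post: str) -> int:
    """Count how many red-flag phrases appear in a post (case-insensitive)."""
    text = post.lower()
    return sum(phrase in text for phrase in RED_FLAGS)

def needs_review(post: str, threshold: int = 1) -> bool:
    """Queue the post for human fact-checking if it scores at the threshold."""
    return flag_score(post) >= threshold

print(needs_review("BREAKING: stolen election proof, share before it's deleted!"))
# prints: True
```

Even this toy example shows why automation alone is insufficient: a phrase list is trivial to evade by rewording, which is exactly why platforms pair models with human fact-checkers.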



Art, technology, politics, and games as a high school student sees it