Literit: The Internet Literacy Program to Educate the Public

Morgan Gaffney
reimaginingjournalism
7 min read · Dec 7, 2020

The Problem

In an age of technological advancement in news through social media, fake news spreads more easily than ever. Studies like the MIT study covered in The Atlantic have found that on these platforms, fake news can actually spread faster than real news. People aren’t predisposed to fact-check everything they see, and the addictive, short-attention-span nature of social media is not conducive to fact checking. What other forms of news cannot get away with in terms of misinformation, social media excels at.

A bot farm

This news infrastructure has led to the rise of a new black-market industry around the world: bot farms. Technologically savvy individuals can create fake social media accounts in massive quantities and push whatever information they want into millions of people’s feeds. One person in a room full of computers can convince millions of people of a complete lie, one that could affect how they think on a fundamental level. We saw this happen in our 2016 election, when Russia spread countless pieces of false information about the presidential race through platforms like Facebook. Articles like this one from Wired show that Russian meddling had a significant impact on the race, preying on confirmation bias and lazy fact checking. The spread of false information through social media is a real threat to our democracy, and it has happened right under our noses. We saw what happened in Mexico’s elections, where bot farms held significant sway. In 2018, a month before the midterms, Twitter revealed that 9 million tweets had been posted by a single Russian troll farm. If attacks like these are able to take root in our news infrastructure, the damage to our democracy could be irreparable.

An image that depicts the dangers of bot accounts

There are many different ways to pursue this goal. In coming to a decision, we cycled through ideas like developing programs and systems within social media platforms that look for specific qualities of bot accounts, and building quicker, easier fact-checking tools that the public can use to make sure what they are reading is reputable.

Our Solution

Our company aims to change the path the internet has been taking by educating the public about false information, bot farms, and bot accounts.

Our intended audience is the general public, but the program would begin as part of elementary and high school curricula, so the initial change would start with younger generations. It would also be implemented in workplaces the same way other types of training are, and programs would be open to people of all ages who did not have the opportunity to learn internet literacy in school. Because the internet is now an integral part of modern society, it is important to teach people from a very young age as they grow up in an era of misinformation, giving them the tools and skills to properly use and navigate the internet throughout their lives. Like the subjects children already learn in school, such as social studies, math, and English, this would become a required, regular program, because internet literacy skills are increasingly important in today’s society. We would also encourage states to work it into the curricula of their public universities. If we can build the habit of constantly questioning information on the internet in people from a young age, we can begin to curb the problem.

While it would not be required, we would also offer our curriculum to the 60+ community, people who are at the highest risk of being tricked or manipulated online due to their limited experience with the internet, including courses at retirement homes. We recognize that we would have to walk a fine line in deciding which online activities are morally right or wrong, but we think this program could change the way our world interacts and thinks online, which could be the answer to many of our current problems.

A quote encompassing the issue we wish to tackle

Statement of Change

It is clear now more than ever that social media has become a dangerous place for misinformation. These platforms have leaped ahead of our ability to control them or understand the threat they pose. Bot farms and other methods of spreading misinformation have seized an alarming amount of control over internet news. It’s time to flip that on its head. Our product aims to make this dangerous online world a whole lot safer. By bringing lessons on how to navigate the cluttered world of social media into our schools and workplaces, we can make bot farms and coordinated misinformation campaigns obsolete. People will know the difference between a genuine news report and a fake one pushed onto Twitter’s trending page by millions of bots, and they will build the habit of fact-checking any important news-related post they see. Online and in-person courses will also be offered for certification in teaching online literacy. Additionally, by shedding light on the dark side of news on social media, we can push people back to reliable, fact-checked news sources. If we can condition everyone from a young age to understand this, misinformation on the internet can become a problem of the past. After all, we have seen what can happen when politicians are able to sell themselves to the American people using lies and misinformation.

Outtakes

1. Develop programs and systems within social media platforms that look for specific qualities of bot accounts, such as a lack of posts, an empty bio, or an automated-looking username; in other words, non-personalized accounts. Social media companies take down bot accounts all the time, but they do not notify the people who came in contact with those accounts. A system that flags potential bot accounts and gives users the option to report them would help: if people report these accounts more often, platforms will look into them more often, and they can be taken down at faster rates. In general, social media platforms should also be more transparent about bot farms instead of taking down accounts under the radar.
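To make the idea concrete, here is a minimal sketch of how a platform might score accounts on the non-personalized traits listed above (no posts, empty bio, automated-looking username). The `Account` class, the digit-suffix rule, and all thresholds are illustrative assumptions, not any platform’s real detection logic, which is far more sophisticated.

```python
import re
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    bio: str
    post_count: int

def bot_score(account: Account) -> int:
    """Count how many non-personalized traits an account shows.

    Higher scores mean more bot-like; a real system would weigh
    many more signals and tune thresholds against labeled data.
    """
    score = 0
    if account.post_count == 0:            # no posting history
        score += 1
    if not account.bio.strip():            # empty bio
        score += 1
    # Auto-generated handles often end in a long run of digits
    if re.search(r"\d{6,}$", account.username):
        score += 1
    return score

# A fresh account with no bio and a numeric-suffix handle
suspect = Account(username="user84712093", bio="", post_count=0)
print(bot_score(suspect))  # prints 3: all three traits flagged
```

An account scoring above some threshold could then trigger the user-facing warning and report option described above, rather than being silently removed.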

2. Create courses, either in schools or in community programs, that teach online literacy to young kids and older generations alike. It might sound ridiculous, but providing people with a foundation for how to exist on the internet is going to become increasingly important. It will help them spot fake accounts and recognize when what they are reading is not reputable.

3. Create quicker, easier fact-checking tools that the masses can use to make sure what they are reading is reputable. There is clear demand for an application that takes a social media post and checks whether it has been reported on anywhere else or is clearly misinformation. This could also be built directly into apps like Twitter and Facebook. If double-checking what they are reading is made more convenient for people, many more of them will do it.
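As a rough illustration of what such a tool could do under the hood, the sketch below compares a post against a hypothetical list of claims that fact checkers have already debunked, using simple word overlap. The claim list, the overlap measure, and the 0.6 threshold are all assumptions for the example; a real tool would query live fact-checking databases and use far better text matching.

```python
import re

def claim_overlap(post: str, claim: str) -> float:
    """Fraction of the claim's words that also appear in the post
    (a very rough similarity measure for illustration only)."""
    post_words = set(re.findall(r"[a-z]+", post.lower()))
    claim_words = set(re.findall(r"[a-z]+", claim.lower()))
    if not claim_words:
        return 0.0
    return len(post_words & claim_words) / len(claim_words)

def check_post(post: str, debunked_claims: list[str],
               threshold: float = 0.6) -> list[str]:
    """Return any known-false claims the post closely matches."""
    return [c for c in debunked_claims
            if claim_overlap(post, c) >= threshold]

# Hypothetical entry from a database of already-debunked claims
debunked = ["millions of ballots were thrown in a river"]
hits = check_post("BREAKING: millions of ballots thrown in a river!",
                  debunked)
# hits contains the matching debunked claim, so the app could warn the reader
```

Built into a share sheet or a browser extension, a check like this could surface a warning before the user reshares the post, which is exactly the convenience argument made above.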

Resources

Meet the Team

From left to right: Olivia Weiss, Oliver Glass, Tyler Foy, Caleb Peck, and Morgan Gaffney
