Reading 10

Noelle Rosa
3 min read · Nov 5, 2018

Fake news refers to the continued posting and sharing of information meant to delude or incite people in some way. I think the term is used pretty broadly nowadays to include content that might even be true at its core but is titled or labeled in a misleading way. In my own life, I have found that fake news is annoying but not yet dangerous. I think I am educated enough to know not to believe everything I see and read on social media, so I look to other outlets when something seems questionable; I understand, though, that this is not the case for all of society, which is what makes this content dangerous. Whether it falls into the annoying bucket or the truly dangerous one, I think technology companies can and should monitor and suppress fake news, because the alternative is people like myself abandoning their products altogether.

I generally don’t see a great deal of fake news on my social media feeds because I make a concerted effort to unfriend or mute people who share content I don’t want to see. I don’t go on Facebook to see political tirades, so I try to limit my connections accordingly. Still, on occasion I will see a shared article that seems fishy, but I can’t say I do much about it.

I do very much believe that Facebook and Twitter bear some level of responsibility for how their platforms are being used. As Rashmi Sinha explains: “Even if you’re not responsible for what happened on the ground, I’d take it as a responsible leader to try to adapt my software to any harmful patterns it is being used for,” she said. “At the end of the day it’s the mob that lynched that is responsible, not WhatsApp. But if you want the software to be used and people to have a long-term relationship with it, there can’t be disillusionment or problems with the platform.” I think the key here is striving for a long-term relationship with consumers. The more people see and hear about tragedies driven by the content on these sites, the fewer will be willing to engage, refer their friends, or allow their children to participate. Social media platforms revolve around user experience, and feed content is part of that. Strictly from a business perspective, it makes sense for them to find inflammatory and unpleasant content and take it down.

I am a little less confident that news aggregators such as Facebook and Google need to monitor fake news, but I think a lot of the same principles apply. When it comes to news, they may need to be more careful about what counts as over the line or fake, but I think anything that cannot pass a fact check, or that is clearly inflammatory or hateful, should be filtered out. Though they may be news aggregators, they are still platforms that consumers will abandon if they feel they are being misled or shown distasteful content.

It has been proven that fake news, and foreign governments, played a role in the 2016 election. As noted in the New York Post article, we cannot know to what extent, or whether it genuinely affected the outcome, but we do know that there was some level of social media manipulation. For this reason I think the current focus on social media is more than justified. As Professor Weninger explains, “The ideas we’re talking about apply equally to liberals and conservatives, Yankees fans and Red Sox fans.” Social media tools can be manipulated to help or hurt any group, any cause, and any person. Though the major focus right now is political, people need to realize that this is a pervasive issue that goes beyond the Democratic and Republican parties.

I try my best to get my news from a diversity of sources, both in medium and in position on the political spectrum. I think people have gotten too comfortable reading articles and viewpoints that reflect the beliefs they already hold, which in no way educates us or broadens our social understanding. When I read a compelling article about some hot topic, I do my best to find an argument for the contrary position. I don’t think the rise of fake news means we are doomed to a “post-fact” world. I think tech companies are going to have to keep working to monitor the content on their sites, or people are going to abandon them. I think people are going to become increasingly educated about the risks of believing everything they see online. I think that ultimately there will be a solution to this problem; I’m just not sure when.
