How the laws unfairly protect big social media companies

Rhys Wastell
SI 410: Ethics and Information Technology
6 min read · Feb 18, 2023

What started as a way for a 12-year-old girl to pass time after a sleepover turned into a horrific experience that no one should have to endure. The girl, who will not be named due to privacy concerns, encountered a pedophile on the popular website Omegle. A relationship grew to the point where she was sending the man, whom she had met over the internet, explicit photos up until she turned 15. The man was eventually arrested and is now serving an eight-year prison sentence. However, a current lawsuit against Omegle has sparked a discussion over whether social media platforms are liable for what happens on their sites. Currently, sites are protected from lawsuits over the content users post on them through Section 230; however, most sites treat this protection as an excuse to do little to no moderation of content.

Section 230 of the Communications Decency Act was originally passed in response to a defamation lawsuit brought by Stratton Oakmont, the firm made famous by the movie The Wolf of Wall Street, against an internet service provider, Prodigy. Prodigy was found liable because it had moderated posts before, which led the court to treat it as a publisher. After that decision, Congress stepped in by creating and passing Section 230, which allows websites to moderate their platforms as they see fit without fear of being held legally liable. Since its passage, large websites such as Facebook, Twitter, and Omegle have risen, presenting entirely new problems the government did not foresee.

The biggest of these problems is how websites use Section 230 to avoid numerous civil lawsuits. Websites have years of precedent in their favor, as judges struggle to decide whether something a user posts or does on a site is simply that user exercising their right to free speech online, or whether the site itself has severe design flaws. Examples of lawsuits that have been thrown out due to Section 230 include a harassment case filed against the app Grindr and a lawsuit against Google claiming YouTube helped radicalize a perpetrator of the 2015 Paris attacks. While allowing these websites to moderate as they see fit may sound like a good idea, companies have come to rely on the clause to get away with ethically unsafe practices. In 2018, the strength of Section 230 was slightly reduced by a change known as FOSTA-SESTA, which removed its protection from federal sex-trafficking laws.

I can recall learning in school about what was then referred to as internet safety, or in other words, how to behave on the internet. However, everything taught at that time pointed in the opposite direction from how the internet was actually evolving, namely the advice to never interact with strangers online. That advice grew harder to follow as more and more social media companies whose primary goal is to increase connections between strangers began to operate. The simple fact is that even a few years after Section 230 was passed, the internet was still changing so rapidly that far more dangers emerged in a few years than had existed before.

Returning to the Omegle case, the most interesting aspect of it is that Omegle is being sued under a theory of product liability, meaning the plaintiffs argue that Omegle's design itself is at fault. The argument is similar to buying a child's toy with a small piece that consistently breaks off and creates a choking hazard: a defect in the design for which the company is liable. This approach essentially bypasses Section 230 of the Communications Decency Act, which protects social media platforms from being sued for what is posted on them. The argument that Omegle has a design flaw is quite strong, too, as there are reports raising this issue dating all the way back to 2013 (Tidy, Joe. “Omegle: Suing the Website that Matched me with my Abuser”). “They’ve made it actually easier to navigate, easier to jump in. It was a bit difficult last time. More warnings were available, but now you can primarily sign in anyplace, from your phone or wherever you might want to,” said Charleston Police Officer Doug Gallucio after investigating Omegle in early 2013, when initial claims surfaced of pedophiles using the website to find victims (Jacobs, Harve. “New Concerns over Pedophile Paradise”). Simply put, Omegle is one of the worst examples of what an unsupervised, unregulated platform can do to our society.

While Omegle isn’t the only platform with these ethical and potential legal issues, it does serve as an example of how much protection companies felt they had thanks to Section 230. For years Omegle was aware of the issues its site faced, and instead of implementing new features to keep users safe, it simply added a warning on the loading screen. Had there been an actual threat of legal action, a real solution might have been implemented. Additionally, moderation of the site was severely lacking, allowing graphic pornography, violent extremism, and hate speech to circulate freely. Graphic pornography is so rampant that jokes about the number of dicks you will see while browsing Omegle are a constant.

Up to this point the main offender I have discussed is Omegle, but other companies have also been using Section 230 as a shield, even if they haven’t been as blatant in their disregard for their users. The other big examples are Facebook and Twitter, which struggle to moderate all the misinformation posted to their platforms. If these companies knew that the way they moderate would be scrutinized, would that significantly change how they do it? Most companies prioritize moderating content that is already being talked about by people outside the site. I can recall most platforms not having a problem with someone like Andrew Tate until there was an outcry against him; until that point, his presence drove higher engagement because people kept trying to denounce him in the replies to his videos. Additionally, a large amount of misinformation is spread by bots on these platforms, and by spreading these narratives the bots increase engagement on the site.

Considering that the main goal of any company is to bring in a profit, there is an issue at hand when companies profit off events that hurt society. A prime example is that when Jamal Khashoggi was killed, a botnet was deployed on Twitter to downplay Saudi Arabia’s involvement in the killing. That botnet increased engagement on the platform, generating profit for the company. Other, more benign uses of bots on a social media platform include artificially inflating the number of users on the site, or simply making one user seem more popular by giving them a large following of fake accounts.

The Omegle case will certainly be cited in the future when the Supreme Court inevitably takes a case involving Section 230, but some other important cases have already happened. One in particular, involving Snapchat and its speed filter, was ruled against Snapchat. The court agreed that the filter encouraged users to reach faster and more dangerous speeds than they otherwise would have. That decision showed companies can be held liable when their products severely and repeatedly damage society, but that case had clear real-world harms to point to outside the digital space. Currently, the main issue facing the most notable platforms is the misinformation posted to them, which does not have as clear an effect on society. That lack of a clear effect could itself be traced back to more misinformation convincing members of society there is no problem.

While Section 230 does help the free flow of information on the web, in its current state it acts more as a barrier that helps companies continue unethical practices without fear of legal repercussions. The fact that a company such as Twitter can allow large amounts of misinformation to spread with little to no effort to stop it, yet cannot be held responsible for the consequences of that misinformation, is alarming. As consumers, we should demand transparency from the companies we use on a daily basis. Currently, these companies have a way to remain less transparent about how they moderate their sites, among many other aspects of their operations. While this debate rages on, the Supreme Court gears up to hear cases regarding Section 230, decisions that could lead to its demise or its reinforcement and shape the future of the internet.

Links for sources used:

https://www.bbc.com/news/technology-64618791

https://www.eff.org/issues/cda230

https://www.theverge.com/2022/7/14/23216386/omegle-lawsuit-section-230-district-ruling

https://www.nytimes.com/2020/05/28/business/section-230-internet-speech.html

https://www.cnbc.com/2023/02/21/supreme-court-justices-in-google-case-hesitate-to-upend-section-230.html

https://www.cna.org/our-media/indepth/2021/04/social-media-bots-and-section-230

https://www.techdirt.com/2020/08/10/section-230-isnt-why-omegle-has-awful-content-getting-rid-230-wont-change-that/
