Should We Assign Morality To AI?

Hailee Beth
Thoughts Of A 20-Something
3 min read · Feb 6, 2023

Twitter is bad, Facebook is corrupt, TikTok takes advantage of people, Snapchat is good. Why are these computer programs, running on algorithms, expected to follow our human understanding of morality?

Bad things happen on the internet. That's not new information to anyone here, and I'm not trying to debate it. People use the internet, especially social media, to do and say crazy and heinous things. But that's just it: PEOPLE use the internet to do those things. The computer and the algorithm don't act on their own to traffic minors or spread hate speech; they only do what humans designed them to do, which is promote popular content and offer a platform with about as much free speech as you can get.

When something dangerous, newsworthy for the wrong reasons, or simply not socially acceptable happens online, all it takes is one person with a large enough following commenting on it, and suddenly everyone is searching for the context behind the comment. That leads to thousands or millions of views, which tells the algorithm the content is popular and desirable, so it pushes it to more people as recommended content. The algorithm didn't consciously decide, "Oh, I think everyone should see this violent video promoting the overthrow of the U.S. government," after the insurrection on January 6, 2021. It only logged the data showing that millions of people were searching for that content to get context for the news stories they were watching. The algorithm would then show similar content: other videos of the event, more content from the people who posted from the event and who may have been promoting forms of hate speech on their pages, and so on. Calling Twitter "bad" because it showed you this content is not a helpful observation, because it's simply not true. It's only doing what it was designed to do; because PEOPLE decided to post and popularize harmful content, the algorithm takes notice of those views.
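To make that mechanism concrete, here is a minimal toy sketch in Python of engagement-driven ranking. It is purely illustrative and not any platform's actual code; the post fields, example numbers, and score weights are all made up. The point is that the score only counts views, likes, and shares, and never looks at what the content actually says.

```python
# Toy sketch of engagement-driven ranking (illustrative only, not any
# platform's real recommender). The score reflects raw engagement, so
# harmful-but-popular posts rise exactly the same way harmless ones do.
from dataclasses import dataclass


@dataclass
class Post:
    title: str
    views: int = 0
    likes: int = 0
    shares: int = 0


def engagement_score(post: Post) -> int:
    # Weighted sum of engagement signals; nothing here inspects what the
    # post says or whether it is harmful. The weights are invented.
    return post.views + 3 * post.likes + 5 * post.shares


def recommend(posts: list[Post], k: int = 3) -> list[Post]:
    # Push the most-engaged posts to more feeds, amplifying whatever
    # people are already clicking on.
    return sorted(posts, key=engagement_score, reverse=True)[:k]


feed = [
    Post("Cat video", views=1_200, likes=300, shares=20),
    Post("Violent clip everyone wants context on", views=2_000_000, likes=50_000, shares=80_000),
    Post("Local news recap", views=900, likes=40, shares=5),
]

for post in recommend(feed):
    print(engagement_score(post), post.title)
```

Run it and the violent clip tops the recommendations simply because it has the most raw engagement; the ranking has no concept of "good" or "bad" content.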

This also ties in with my last blog post about YouTube's autoplay feature and how it radicalizes viewers. I'd like to correct that statement here: YouTube is not radicalizing anyone. The content people choose to post, and then give their attention to until it becomes popular, even when they are "hate-watching," is what radicalizes them. Autoplay uses an algorithm to recommend popular videos loosely based on the content you already consume. In short, you and I and every other internet user created this problem, and we continue to be the cause every time a video of a person dying or a hateful tweet goes viral.
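As a rough illustration, here is a toy Python sketch of how an autoplay-style recommender might pick the next video: it favors videos that share topics with what you just watched and breaks ties by raw popularity. The tags, titles, and view counts are invented, and this is in no way YouTube's actual system; the point is that a "hate-watch" counts exactly the same as any other view.

```python
# Toy sketch of an autoplay-style "next video" pick (illustrative only).
# Prefer videos that overlap in topic with what was just watched, then
# break ties by popularity. Watching out of outrage still counts as a view.

def next_video(just_watched: dict, candidates: list[dict]) -> dict:
    def score(video: dict) -> tuple[int, int]:
        topic_overlap = len(video["tags"] & just_watched["tags"])
        return (topic_overlap, video["views"])
    return max(candidates, key=score)


watched = {"title": "Political rant", "tags": {"politics", "outrage"}, "views": 10_000}
library = [
    {"title": "Calm policy explainer", "tags": {"politics"}, "views": 5_000},
    {"title": "Angrier political rant", "tags": {"politics", "outrage"}, "views": 2_000_000},
    {"title": "Cooking tutorial", "tags": {"food"}, "views": 50_000},
]

print(next_video(watched, library)["title"])  # the angrier rant wins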

What can we do about it? We can refuse to engage with the content, and that includes looking it up to get context on a news story. This only works if everyone does it together, but if we simply do not intentionally engage with, or even lay eyes on, this content, the algorithm won't spread it. It's that simple. In theory, of course.


I am a graduating senior studying strategic communication at High Point University. I mainly write about women's rights, with a few extra thoughts sprinkled in.