Ethical Pitfalls of Generative AI

How AI Can Hurt Us

Visualwebz
7 min read · Mar 1, 2024

Generative artificial intelligence algorithms, and their use as tools, are inherently unethical. But why? Isn't artificial intelligence supposed to be some futuristic thing that helps us all? Why wouldn't helping ourselves with these tools be helpful? First, artificial intelligence is just a set of algorithms that learn from data sets to simulate humans and what they might do. These algorithms let us delegate tedious tasks that aren't worth a human worker's time to computers, freeing up that time. Now, this all sounds great, and that would likely be a correct assumption! Typical artificial intelligence is incredible and allows us to do a lot of good for society, such as helping with cancer care and various other issues. The real problems arrive when we get to generative AI, the form of artificial intelligence that is actively hurting us and will continue to do so.

https://medium.com/@bradleydeacon/mastering-the-art-of-ai-your-guide-to-power-prompting-in-the-age-of-automation-a2f416efbb99

What is generative AI?

So, to begin with, what is generative AI? Generative AI is artificial intelligence whose sole goal is, basically, to create something. That something can be text or images, or even voices or videos, and the output is generally determined by a "prompt": an input specifying what the user wants. For example, someone might type in "cat in a meadow," and the algorithm would output "exactly" that, a cat in a meadow. Because it is simply an algorithm, there tend to be many distortions.
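To make that prompt-to-output flow concrete, here is a minimal, illustrative sketch using the open-source Hugging Face diffusers library; the model checkpoint is an assumption for the example, and this is not a description of how any particular commercial generator is built.

```python
# A minimal, illustrative prompt-to-image sketch using the open-source
# Hugging Face "diffusers" library. The model checkpoint is an assumption
# for illustration, not how any particular commercial product works.
from diffusers import StableDiffusionPipeline

# Load a publicly released text-to-image model (assumed checkpoint name).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")  # assumes a GPU is available

# The "prompt" is just a plain-text description of the desired output.
prompt = "cat in a meadow"
image = pipe(prompt).images[0]
image.save("cat_in_a_meadow.png")
```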

Distortions of generated images can include blurry backgrounds, messes of amalgamated flesh, incorrect finger counts, and the list continues. For text, the primary use is currently AI chatbots. You type in a message, the algorithm responds, and the conversation goes back and forth.
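To show how thin that back-and-forth layer is in practice, here is a minimal chat loop against OpenAI's public Python API; the model name is an assumption for the sketch, and this is not a description of ChatGPT's internals.

```python
# A minimal back-and-forth chat loop using OpenAI's public Python client.
# The model name is an assumption for illustration; this is not how
# ChatGPT itself is built, only how the public API is typically called.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable
history = []       # the full conversation is re-sent on every turn

while True:
    user_text = input("You: ")
    if not user_text:
        break
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Bot:", answer)
```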

Looping back to generated images quickly, it is essential to note that these distortions have gradually become less prevalent. This makes it much harder to differentiate artificially generated art from human-made art.

Usage of Generative AI Tools

So, what are generative AI tools being used for? The main uses are text generation (chatbots, articles, and translations), images, voices, and, even more recently, videos. For translation, algorithms can process text in real time and then translate it, which is very convenient and allows easier communication across languages.
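For a sense of how little code real-time translation takes today, here is a minimal sketch using the open-source Hugging Face transformers library; the specific model name is an assumption for the example.

```python
# A minimal machine-translation sketch using the open-source Hugging Face
# "transformers" library. The model name is an assumption for illustration.
from transformers import pipeline

# Load a publicly released English-to-French translation model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

result = translator("Generative AI raises serious ethical questions.")
print(result[0]["translation_text"])
```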

  • Image generation — It's used for exactly that: AI images and AI art. Images are closer to reality, resembling pictures or photographs; art is less realistic and leans toward drawings and such. An example could be a company using generated art for promotional material. This is also very convenient; you put in a prompt and you have artwork.
  • Articles and text — A prompt will return whatever you requested, with relatively good accuracy. Of course, since it is still just an algorithm, it runs into distortions similar to image-based algorithms. Here, those show up as weird grammar, straight-up misinformation, or the AI simply freaking out and going on insane tangents. The most widely used text-based generator here is ChatGPT.
  • Voices — You input a voice sample, and you can create audio that sounds like the sampled voice. This is very convenient if you wish to avoid paying a voice actor but still want to use their voice. It's very weird, very gross.
  • Videos — This is a far more recent development. OpenAI recently announced a new model for generated video called Sora. Sora takes prompts and turns them into videos. This is very convenient, of course: type in "dogs walking" and you get a video of dogs walking, marvelous. This, too, would have had to be trained on people's videos, once again taken without permission, without anyone being asked, nothing. OpenAI does not say whether it did this, but the company hides behind fair use and says it must train its models.

Data Training

All of this is very convenient. But now, the issues. To begin with, for artificial intelligence to work, it needs to be trained on data. When it comes to text, this is obvious: just scan the internet, articles, and so on, and you have all the data you could ever wish for. For art, it's the same: scrape the internet and lift people's art and photographs, and you have your data. However, these datasets use people's work, the overwhelming majority of the time, without permission. Most artists are already against people reposting their artwork; taking their work and mushing it into amalgams with others' is, frankly, insane. OpenAI, the largest generative AI company, has stated that creating these tools would be impossible without copyrighted material. This amounts to taking people's work, and as such, simply using these artificial intelligence generators is nearly equivalent to stealing.
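To give a sense of how little effort that kind of collection takes, here is a minimal, illustrative scraping sketch using the common requests and BeautifulSoup libraries; the URL is a placeholder assumption, and real training pipelines are vastly larger, though not different in spirit.

```python
# A minimal, illustrative scraping sketch using the common "requests" and
# "BeautifulSoup" libraries. The URL is a placeholder assumption; real
# training pipelines are vastly larger, but the basic idea is the same:
# fetch pages and harvest whatever images and text they contain.
import requests
from bs4 import BeautifulSoup

page = requests.get("https://example.com/gallery")  # placeholder URL
soup = BeautifulSoup(page.text, "html.parser")

# Collect every image URL and caption on the page, with no check for
# who made the work or whether they consented to its reuse.
for img in soup.find_all("img"):
    print(img.get("src"), img.get("alt"))
```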

Sora

On to further issues: Sora, OpenAI's video generation model, will be incredibly problematic. The ability to generate videos is exciting, for sure. Why not create those true electric sheep? They don't need to be a machine's dream anymore; they're reality. Faking criminal activity is also a very real future, however. People will be accused of crimes they never committed, framed with fabricated footage just to see their reputation ruined. Horrible videos of other kinds will spread, too, which is not good. The ability to create videos already exists, and generated videos do not need to exist and probably shouldn't.

https://openai.com/sora

Conflicts with Current Jobs

The next issue is that these generative artificial intelligence tools will take jobs, if they haven't already. Duolingo, a widely used website and app for learning languages, recently laid off a number of its workers as it moved toward AI-driven approaches. Those are real people who had their jobs taken because the company wished to maximize profits. While this is, of course, a predictable turn of events, it is most certainly not ethical. Some companies have even started using AI tools to make art for promotional material. Loki, a television series by Disney, used a promo poster that showed all the trademarks of generated art, suggesting this was likely another case of a job being lost where an artist could have been hired. Artists are already in an incredibly tough spot financially most of the time, and companies being able to skip hiring them by using tools built off their art is not ethical.

Possible Job Openings

You could most certainly say, "Doesn't this open up opportunities for 'AI artists' to have jobs in the space?" You would probably be correct, but is that a good thing? Taking the humanity out of creative jobs and tasks hardly sounds better, especially since it is all built on stolen information and material that does not belong to the people using it. Why not leave the tedious, everyday tasks to algorithms and leave the artistic work for people to do? Additionally, generated articles already show up in newsfeeds, or simply when googling something important. These are further jobs taken from people who can write.

Data Scraping

Everything about generative AI should also raise concerns about anything you post being scraped for an AI algorithm. You could post a video of yourself dancing, and it might be part of a model now. Artists should not be forced to choose between posting their work and having it stolen. A new technology called Nightshade was recently created, allowing artists to "poison" their work to mess with scrapers. But again, this should not be something people are forced to worry about; so many people's entire careers are online, so many people's entire families are online, so many people's whole lives are online, and being able to scrape all of this is absurd.

https://www.datamation.com/big-data/data-scraping/

Energy

All of this doesn't even mention energy. While certainly a bit of a nitpicky thing, we don't yet know exactly what the current iterations of something like ChatGPT use energy-wise, but the least optimized image models have been estimated to use roughly as much energy per generated image as it takes to charge a phone. Generating text uses a lot less, but that is still a lot of energy to spend on something that could have simply been a job for a person with artistic experience. Especially if many images are being generated, the amount of energy used is most certainly something to be concerned about in the future. And even with how little we currently know about it, we have already seen that technologies like cryptocurrency have environmentally damaging effects due to the sheer amount of energy they consume.
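As a rough back-of-the-envelope sketch (both figures below are assumptions for illustration, not measurements of any real system), scaling that per-image estimate up shows why the concern compounds quickly:

```python
# Back-of-the-envelope energy estimate. Both figures below are assumptions
# for illustration only, not measured values for any specific model.
PHONE_CHARGE_KWH = 0.012      # assumed energy to fully charge a phone
IMAGES_PER_DAY = 1_000_000    # assumed number of images generated per day

# If one image costs roughly one phone charge of energy (the "least
# optimized" case described above), the daily total adds up fast.
daily_kwh = PHONE_CHARGE_KWH * IMAGES_PER_DAY
print(f"Daily energy: {daily_kwh:,.0f} kWh")          # 12,000 kWh/day
print(f"Yearly energy: {daily_kwh * 365:,.0f} kWh")   # ~4.4 million kWh/yr
```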

Rapid Growth

On top of everything, generative AI capabilities have grown extremely fast. Just around a year ago, all it could make was bizarre creations of melting colors, but now it produces pictures of far better quality, sometimes almost impossible to tell apart from real ones. This speed looks impressive at first, and while it is undoubtedly remarkable, it is not good. Because of this rapid progress, companies have been practically chasing generative AI, using it as a reason to lay off artists and article writers. For the reasons already discussed, there is no justification for generative AI to exist in its current form, given its thieving nature and the removal of human involvement from what should be a humanity-filled field. It is wholly unethical and should not have gotten to this point.

Final Thoughts

Generative artificial intelligence tools are fundamentally unethical in how they are used now. They exist as a way for large companies to make more money while paying fewer workers, as a novelty for society at large, or as a way for con artists to make money from generated art. More traditional AI should advance instead, rather than continuously churning out slop to benefit large companies and people who refuse to better their skills and would rather leech off those who do.


Visualwebz

A Seattle web design and online marketing agency that delivers high-end websites. A passion for web development and SEO.