Thesis on AI

P4ssw0rd
Mr. Plan ₿ Publication
Aug 9, 2024

I found this French thesis in my Notion.

Are you afraid of AI? Me? No, even though I have no reason to trust it.

Indeed, there are plenty of examples of AI gone wrong:

Take Elaine Herzberg, a 49-year-old woman who was struck and killed by a self-driving Uber in Tempe, Arizona, in 2018. Or consider the data leaks at OpenAI, a company specializing in AI. Then there's the generation of explicit images with DALL-E and Midjourney, AIs that create images from simple text prompts, and of ultra-realistic videos with Sora. Actors are protesting, fearing for their jobs, along with many translators, voice actors, and teachers. But the loudest protests come from artists, who complain that AI copies their work, imitating their styles, both graphic and musical.

All signs point to AI being more likely to do harm than good.

This is not very reassuring, especially when you hear that, according to Arte, Bing, Microsoft’s AI, once told a user that if they tried to harm it, it would retaliate in self-defense.

But very few people actually understand how AI works.

LLMs, or Large Language Models, only generate text. Nothing more. Examples include the famous ChatGPT, LLaMA, and Mistral. They are all trained on massive datasets: books, forums, video transcripts, and the internet in general. As for Bing, Microsoft's AI, it simply pulled that threatening response from its terabytes of data. It isn't conscious, isn't inherently good or bad, and doesn't think about what it says. The information it provides can be wrong, just as if I said, "the bell will ring in two seconds": one, two... There, what I said wasn't true, and I didn't really mean it. It's the same with AI.
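The idea that a language model "simply pulls responses from its data" can be illustrated with a toy sketch: count which word follows which in a tiny training text, then generate by sampling from those counts. This is a drastic simplification of my own (real LLMs use neural networks over tokens, not word counts), and the corpus here is invented for illustration — but it shows the core point: the program produces plausible-looking text with no understanding of what it says.

```python
import random
from collections import defaultdict

# Tiny invented "training corpus" (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn which words tend to follow which word.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=5, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:  # no known continuation: stop
            break
        word = random.choice(options)  # pick a word seen after this one
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The output is always built from fragments of the training text, which is why it can sound fluent while being false — the model has no notion of truth, only of what tends to come next.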

So what's the point of using them? AIs can help us in our daily lives. For instance, I've used ChatGPT to get ideas for decorating my room. I've also used Perplexity, an AI that searches the web much like Microsoft's Copilot. And I've used Endel, an AI that generates music to help me quickly reach a flow state, where my concentration is at its peak. These are just a few of the countless possibilities AI offers, like running an AI locally on your PC so your data never leaves it, or on your phone for a smoother, deeper experience with this now-ubiquitous device.

This has allowed companies to thrive. Nvidia, the largest graphics card maker, saw its stock soar until it became the most valuable company in the world, worth $3.4 trillion. Then there's Mistral, a small French company that is the main competitor to Meta's LLaMA. And there are innovative projects like the Rabbit R1 or Humane's AI Pin, which both aim to replace smartphones with AI-powered devices.

But how do we explain all the flaws in this technology? First, data security at OpenAI: the breach was a real blow to the company, forcing it to rethink its security to avoid a repeat of that bad publicity. It's also worth noting that when Elaine Herzberg was struck in 2018, it happened in a very dark area where many accidents occur, and she was crossing outside the crosswalk: the perfect storm for an accident. Moreover, image generation with DALL-E and Midjourney is becoming safer, with "white hat" hackers probing these AIs so developers can fix vulnerabilities. But while some issues can be explained, others are irreversible, like the replacement of certain jobs, the controversies with artists, and the spread of fake news.

In reality, AI is like fire — it’s a tool. Yes, it can and has caused “fires,” but we must remember that fire is also the reason we’re here today. And AI may just be the fire of tomorrow.

Translated by ChatGPT-4o

Thank you for reading 🤗
