Artificial Disinformation: Can Chatbots Destroy Trust on the Internet?

Welcome to the age of free deception

Nabil Alouani
Geek Culture


Image via Midjourney and Canva.

Soon after ChatGPT came out, a clock started ticking inside my head. It’s like when you see a bolt of lightning slice through the sky and realize it’s only a matter of time before you hear the bang.

“If these systems aren’t used to create propaganda and misinformation yet, I don’t know what certain governments are doing with their time,” ex-Google engineer Blake Lemoine said. “We’re letting the engineering get ahead of the science. We’re building a thing that we literally don’t understand.”

Lemoine’s last sentence may be hinting at sentience, but even with sentience out of the picture, chatbots remain a black box we can’t fully understand, let alone control. Chatbots rely on Machine Learning models, and Machine Learning models, in a sense, write their own rules.

Unlike traditional algorithms, ML systems don’t require you to write every single instruction for them to follow. Instead, you show your system a bunch of examples of what you want to achieve and ask it to find patterns and replicate them. It’s an iterative process where your system “learns” from feedback until it arrives at the desired outcome.
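That feedback loop can be sketched in a few lines of Python. This is a deliberately tiny illustration, not how chatbots are actually built: a "model" with a single adjustable weight looks at example input/output pairs, measures its error, and nudges the weight until its predictions match the pattern hidden in the data. All names and numbers here are made up for the sake of the example.

```python
# Examples of "what you want to achieve": inputs paired with desired
# outputs. The hidden pattern is simply y = 2x.
examples = [(1, 2), (2, 4), (3, 6)]

weight = 0.0          # the model's single parameter, initially a blind guess
learning_rate = 0.05  # how strongly each error nudges the weight

# The iterative loop: predict, compare to the target, adjust, repeat.
for step in range(200):
    for x, target in examples:
        prediction = weight * x
        error = prediction - target           # the feedback signal
        weight -= learning_rate * error * x   # nudge the weight to shrink the error

print(round(weight, 2))  # settles near 2.0 -- the pattern "learned" from examples
```

Nobody wrote the rule "multiply by 2" into the program; the system converged on it by itself. Scale that single weight up to hundreds of billions of them, and you get the black-box quality Lemoine is pointing at: we can see that the system works, but not why any particular weight ended up where it did.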
