The Dark Side of ChatGPT — Why You Should Avoid Relying on AIs like ChatGPT

Jerren Gan
Published in Geek Culture
7 min read · Feb 25, 2023


Hint: it lies and sounds really confident when it does

Photo by Jonathan Kemper on Unsplash

Recently, with the release of ChatGPT, Microsoft’s integration of the chatbot into Bing, and Google’s beta testing of Bard, everyone is buzzing with excitement about a future where automated chatbots and text-generating AIs are commonplace tools that everyone uses.

And that excitement is completely understandable.

I was extremely excited about their release too. I actively use DALL·E now to generate article cover images. I jumped at the opportunity to explore and try out ChatGPT when it was released (before concluding that it wouldn’t be able to take my job as a writer). I wrote scripts to fine-tune GPT-2 models and trained them reasonably well to talk about movies (and, less successfully, tried to create a model that talks like me).

So you know, I get the hype. I was on the hype train too.

But the more I used, tested, and read about these systems, the more I realized that relying on these AIs is extremely dangerous.

And at least in their current iteration, I’ve concluded that these AI technologies are tools we cannot adopt yet.

The AIs Can Lie Convincingly

Systems Engineer and Physicist | Writing about the environment, mental health, science, and how all of them come together to create society as we know it.