ChatGPT’s Fabrications Make Me Feel Like There Are Alternate Realities

What my semi-deep dive into the tech’s capabilities and inner workings has taught me about its “hallucination” problem

Ronni Souers
The Generator

--

Photo by Airam Dato-on: https://www.pexels.com/photo/sand-dark-dirty-outdoors-15940011/

Like so many other writers, I’ve tried to approach ChatGPT with some enthusiasm. I know the tech is here to stay and that it would be silly to resist it when its many applications could benefit me and my workflow.

I went through all the typical phases. At first, I was in absolute awe at ChatGPT’s abilities. It could do anything.

After a while, its limitations started to become more visible. Our honeymoon phase was nearing its end.

Now, after encountering so many inaccuracies delivered in confident, fluent language, I feel completely disillusioned. I finally understand, more than ever, that ChatGPT’s “competence” is an illusion. ChatGPT is not a knowledge model with the ability to validate its own responses, after all; it’s a large language model (more on this later). I originally approached it with a huge degree of naivety, and only after experiencing its inadequacies firsthand did I start researching how it actually works.
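To make that distinction concrete: at its core, a language model predicts the next word from statistics over its training text. The toy sketch below (a simple bigram model over two made-up sentences, nothing like ChatGPT’s actual scale or architecture) shows how fluent-sounding output can emerge with no notion of truth at all:

```python
import random
from collections import defaultdict

# Toy "training data" -- the model only ever echoes these statistics.
corpus = "the moon is made of rock . the moon is made of cheese .".split()

# Build a bigram table: for each word, the words that follow it.
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def generate(start, length=6, seed=0):
    """Sample a fluent-looking continuation, one token at a time."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        out.append(random.choice(bigrams[out[-1]]))
    return " ".join(out)

print(generate("the"))
```

Depending on the random seed, it will happily assert that the moon is made of cheese. The model has no way to check which continuation is true, only which is statistically plausible given what it has seen.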

The large language model’s drawbacks are made clear right on OpenAI’s site. One limitation listed says that the AI…
