Why Is ChatGPT Making Up Facts? And How To Deal With That?

Agata Cupriak
4 min read · Feb 17, 2023


Created with DALL-E

Ever since OpenAI released ChatGPT to the public, the chatbot has become an object of fascination, apocalyptic visions, and various prophecies. Quite quickly, swept up in delight at the ease with which ChatGPT responds to queries, people announced the end of search engines. Tremble, Google, no one will ever pay to be on the first page again. However, it soon turned out that the bot fabricates facts! So what now?

For Google’s sake!

In general, using ChatGPT is more difficult (even if more fun) than using a search engine. For complex tasks, it soon turns out that you have to teach it a lot before you get anything valuable. Most answers are very simple. Too simple. Worse yet, sometimes they are false.

I started typing extensive instructions (so-called prompts), giving outlines, main points, translating, pasting model texts, and even ending commands with "please"… and it turned out that I need to know exactly what I want and first feed the algorithm the right input (and it is voracious) to get a satisfactory output. When writing expert texts on tech, I honestly couldn't get beyond general, generic answers… I really tried everything and eventually went back to Google. Not to mention that in a search engine I can verify the truth of claims and the correctness of data much faster. It got to the point that I googled ChatGPT's answers to check them anyway. Quite the paradox!

Page not found

Finally, I installed a Chrome plugin (AIPRM; try it, it's free) that lets you choose from dozens of ready-made prompt templates. What a relief! I got to work and… broke down again. I needed data on cloud computing usage in Poland, and ChatGPT just made it up. It gave me suspicious-looking numbers, the title of a report (supposedly conducted by Deloitte, very cheeky!), and even a link to it. Only, this report did not exist, and the link, although in a real domain, led to a 404 page. I told the bot that the reports did not exist and that I would like real data; it apologized and did the same thing again. And again, and again.

I asked ChatGPT how many companies in Poland use the cloud and requested the source of the data. It gave me a made-up number, a fake report name, and a non-existent link.
I replied that this study does not exist and asked for real data. The chatbot apologized and… made up four more reports.

At first, I was annoyed and asked myself where all the admiration comes from; it is impossible to work like this. THIS is going to replace me? Seriously? Then I came to the conclusion that my thinking about this tool was simply wrong. I needed to change my perspective and stop the tug of war.

Logic is not enough

Try to understand before you criticize, I told myself. So I started to delve into how ChatGPT works. Large Language Models, such as GPT-3, are trained on vast amounts of internet data and can generate human-like text. Simply put, the answer is generated using a process called autoregressive modeling, where the model predicts the next word in the sequence based on the words that came before it. What is statistically plausible to artificial intelligence is not necessarily true. That's why the answers may not always be accurate: humans use background knowledge and common sense to choose text which fits the situation, while the model only continues patterns.
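To make "predicting the next word from the words before it" concrete, here is a deliberately tiny sketch: a bigram model built from a toy corpus (the corpus, function names, and parameters are my own illustrative assumptions, not anything from OpenAI's actual system, which is vastly larger and more sophisticated). The point it demonstrates is exactly the problem above: the model picks the statistically likeliest continuation, with no notion of whether the resulting sentence is true.

```python
from collections import Counter, defaultdict

# Toy stand-in for training data (an assumption for illustration only;
# real LLMs are trained on billions of words, not one sentence).
corpus = "the cloud is popular the cloud is growing the cloud is popular".split()

# Count which word follows which: the crudest possible form of
# "predict the next word based on the words that came before it".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=5):
    """Autoregressively extend a prompt: each step appends the most
    frequent continuation seen in the corpus, true or not."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break  # no continuation ever observed for this word
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # fluent-sounding output, never fact-checked
```

Running this prints "the cloud is popular the cloud": perfectly fluent locally, and "popular" wins only because it appeared more often than "growing", not because anyone verified it. That, in miniature, is why a made-up Deloitte report can come out looking so convincing.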

ChatGPT is for sure a game changer, and the next version will probably knock our socks off, but we should not think of this tool as an alternative to search engines; it is a complement. Maybe Microsoft's combining of the Bing search engine with ChatGPT is the right direction?

Read: How do humans exceed AI? And how to use that superpower?

Human Thinking still has a future

We shouldn’t use ChatGPT as an encyclopedia. It’s a great tool to inspire, organize, summarize, rewrite, and transform content. Generating content with it is now much simpler, but human editors still need to review and fact-check the output for accuracy. I see great potential for such bots in customer service, online education, entertainment, translation, and content marketing.

Finally, it is worth remembering that writing is not just the process of typing and arranging sentences. It requires many hours, if not many days, of research: collecting information from various sources, comparing, eliminating, verifying, looking for correlations, reasoning that requires knowledge of context, and catching nuances. This is a thought process which the chatbot has not yet replaced, at least for more complex topics. But does it have to? Maybe instead of trying hard to eliminate human contribution, it’s better to think about how to use AI to support our superpowers and improve things we already know and use.

And do you see a role for such bots in your industry? What part of the work would you like to give them, and what part would you leave for yourself? And why?

Was it interesting and useful? Clap, share your thoughts, and follow me!
