ChatGPT, “Bullshit” and the State of Artificial Intelligence

Patrick Feeney
8 min read · Feb 1, 2023


I don’t know if you noticed, but the world suddenly changed 61 days ago. November 30th, 2022 marked the launch of ChatGPT (chat.openai.com), the world’s first versatile chatbot.

To see why this is a watershed event, go to ChatGPT and ask it almost anything. You could type “Please summarize Hamlet in 2 paragraphs”. The answer comes back instantly, and the results are generally very good. Now ask for “Hamlet in 15 words and in the style of Richard Nixon”. You never get the same answer twice, but I received: “Hamlet, a play about a prince avenging his father’s murder, folks, let me tell you, that’s what it’s all about.” Impressive. But scary too. For example, if you ask it to write an essay explaining the causes of WW1 at the level of a 14-year-old, it will do that perfectly. School will never be the same again.

In reaction to ChatGPT, teachers and university professors are scrambling to revamp their course requirements. ChatGPT can also be automated, which means it can write customized letters at scale, so cybersecurity experts are warning about chatbots’ ability to write millions of scam emails. Legislators see a risk to democracy if cynical political groups begin sending millions of individually-tailored, manipulative emails to constituents. ChatGPT also knows how to code, so there is the possibility that the internet will be swamped by misleading web content generated in large quantities by mischievous users.
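To make “at scale” concrete, here is a minimal sketch of how this kind of text generation can be automated with OpenAI’s Python library. The recipient list and prompt are invented purely for illustration, the model name is just one commonly available option, and the sketch assumes an API key is configured; it is not a description of how ChatGPT itself works under the hood.

```python
# pip install openai  (assumes an OPENAI_API_KEY environment variable is set)
from openai import OpenAI

client = OpenAI()

# Invented recipient list, purely for illustration.
recipients = ["Alice", "Bob", "Carla"]

for name in recipients:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{
            "role": "user",
            "content": f"Write a short, friendly newsletter greeting addressed to {name}.",
        }],
    )
    # Each call returns freshly generated, individually tailored text.
    print(response.choices[0].message.content)
```

A loop like this is all it takes to turn one prompt into thousands of individually customized messages, which is precisely what worries the cybersecurity experts and legislators mentioned above.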

Should we panic? Perhaps. But before we run for the hills, let’s stop to understand what ChatGPT is and, in particular, what it is NOT. First, what a chatbot isn’t:

It isn’t conscious. ChatGPT does not feel things. It has no concept of beauty even though it can craft a good poem. It has no sense of humor even though it can write jokes.

Now what ChatGPT actually IS: it is a computer program, a large language model, trained on enormous amounts of text from the internet. It does not know what it is saying or doing. It predicts which words are likely to come next given the words it has already produced, stitching the patterns it learned during training into clean, clever replies for the user. It sometimes generates nonsense, but overall it provides reasonable and quite useful answers to our questions. So it’s a smart system; amazingly smart. But it’s also “bullshit”, as Ezra Klein of the New York Times recently said.

“[Chatbots] have no actual idea what they are saying or doing. It is bullshit.”

Klein is not being vulgar. He is referring to the philosopher Harry Frankfurt, best known for his essay On Bullshit, in which he gives working definitions of truth, lies, and “bullshit”. A lie, according to Frankfurt, is an attempt to disguise the truth by someone who knows the truth. “Bullshit” is the effort to persuade without any regard for the truth, often by someone who does not even know the truth.

ChatGPT cannot self-moderate. It is not able to gauge the purity of its user’s intentions, nor to evaluate the truth value of the statements it produces. So in the wrong hands it could be dangerous. As we know from internet echo chambers and conspiracy theories, misinformation can be as dangerous as outright lies. However, it is important to realize that ChatGPT itself never tells lies. Since it is simply remixing patterns learned from its training data and has no real understanding of any subject, it is merely generating “bullshit”, not lies.


What would a chatbot look like if it were designed to go beyond “bullshit”? It would look like a human. Human minds carry a model of the external world that is constantly being updated. When someone walks up and asks us a question, our mind multitasks: it parses the meaning of the words in the question while also inferring the questioner’s intent from context. When answering, we try to gauge the impact of our reply, track how the interaction with the questioner is evolving, and identify and moderate our own emotional reactions as the conversation proceeds. This is what it means to have a “mental model” of the world and to be continually updating and improving it.

What would it take for machines to learn to think like we do? Many AI experts think that we are well on our way. This includes Sam Altman, co-founder and CEO of OpenAI, the company that created ChatGPT. The fact is, however, that “intelligent” machines in factories, homes, and on the roads do not have a well-developed model of the world around them. They have not yet evolved past sensing, identifying, and reacting to nearby objects (which is already quite a feat!). We are still waiting for Teslas to become talking cars and for automated vacuum cleaners to evolve into real humanoid robots. In the meantime, we have Siri (Apple’s voice assistant), Alexa (Amazon’s virtual assistant), and now chatbots like ChatGPT.


Will machines ever behave like us, think like us, and understand things the way we do? There are credible voices among the experts who think not. Prominent among these AI sceptics is Sir Roger Penrose, the Nobel Prize-winning mathematical physicist. He sees human thinking as an emergent phenomenon that cannot be replicated by any algorithm. He is convinced that there is something “non-computational” about human understanding, some bit of transcendent magic in our thought processes, that would make it impossible to build a truly intelligent robot.

“The judgement-forming that I am claiming is the hallmark of consciousness is itself something that the AI people would have no concept of how to program on a computer.”

Fittingly, since Penrose is a mathematician, he bases this assertion on a theorem proven by Kurt Gödel, the Austrian logician and philosopher. Gödel’s Incompleteness Theorem shows that there are mathematical statements we can recognize as true and yet cannot prove within a given formal system. In other words, there is something mysterious in the way we establish logical truths. Taken to its logical extreme, Penrose tells us, Gödel’s theorem implies that no algorithm can mimic the mind’s ability to form judgments logically.

Sam Altman of OpenAI scoffs at such scepticism. “You are energy flowing through a neural network”, he tells everyone. (For readers not familiar with the term, a “neural network” is a computer algorithm loosely modeled on the brain’s web of interconnected neurons. It is the basis of most of the AI applications that we interact with today.) Altman and other AI innovators have put their money on the notion of human/computer equivalence. Algorithms learn like kids do, and like kids they soon begin to learn on their own. After that, the sky is the limit:
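For readers who want a feel for what “energy flowing through a neural network” means in practice, here is a deliberately tiny sketch of a single artificial neuron in Python: inputs are multiplied by weights, summed, and passed through an activation function. The numbers are made up, and real networks chain millions or billions of such units and learn their weights from data.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid maps the sum into the range (0, 1)

# Made-up inputs and weights, purely for illustration.
print(neuron(inputs=[0.5, 0.8, 0.2], weights=[0.9, -0.4, 0.3], bias=0.1))  # roughly 0.57
```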

“There is no upper bound how far that can go as you think about increasing the size and scale…In a very deep sense, I think the biggest miracle that we need to create the super powerful AI is already behind us. It’s already in the rearview mirror.”

Sam Altman of OpenAI and ChatGPT. Source: Flickr. Attribution (CC BY 2.0)

The belief that the brain is a complex machine and that computers can be built to the brain’s specifications obscures a basic fact: the effort to create a blueprint of the human brain has yet to prove feasible. In 1943 Warren McCulloch and Walter Pitts created the first theoretical model of a neural network. The idea was based on a single neuron, but there was great hope then, as now, that we might be able to create a complete circuit diagram of the human brain, or “connectome”. Even McCulloch, however, was cautious about what could ultimately be accomplished:

“Can you design a machine to do whatever a brain can do? The answer is this: If you will specify in a finite and unambiguous way what you think a brain does with information, then we can design a machine to do it. Pitts and I have proven this construction. But can you say what you think brains do?”
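The construction McCulloch mentions is simple enough to write down in a few lines: a neuron “fires” (outputs 1) when enough of its binary inputs are active. Here is a minimal sketch in Python, omitting the inhibitory inputs of the original model; the logic gates are just examples of the “finite and unambiguous” specifications he had in mind.

```python
def mcculloch_pitts_neuron(inputs, threshold):
    """Simplified McCulloch-Pitts neuron: fires (returns 1) if enough binary inputs are active."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, a threshold of 2 behaves like logical AND...
assert mcculloch_pitts_neuron([1, 1], threshold=2) == 1
assert mcculloch_pitts_neuron([1, 0], threshold=2) == 0

# ...and a threshold of 1 behaves like logical OR.
assert mcculloch_pitts_neuron([0, 1], threshold=1) == 1
assert mcculloch_pitts_neuron([0, 0], threshold=1) == 0
```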

McCulloch’s closing question is the fly in the ointment. If we had a better grasp of “what brains do with information”, we might dream of building an AI that approaches human levels of understanding. Yet the challenge of mapping the human connectome is daunting. The effort to build technology that goes beyond “bullshit” generation may run up against the limits of neuroscience itself. In any case, building an artificial brain is one thing; creating an artificial mind is another kettle of fish altogether. A mind has more than brainpower; it has consciousness too. Though it is conceivable that an AI without consciousness could nevertheless match human levels of understanding, the next step, full robotic consciousness, is a giant one.

To see why this is true, let’s distinguish more clearly between consciousness and understanding. Take the example of a sunset. A computer with understanding might be able to form judgments about sunsets. It could, for example, confirm true statements such as “the sunset was pink” or deny false statements such as “the sun is falling into the ocean”. Consciousness is a step beyond this. To be conscious is to have a “sense” of the beauty of the phenomenon. As we perceive the light coming off a sunset we experience awe, wonder, or emotion. We may even be overwhelmed. This is known as phenomenal consciousness, and it is more than mere understanding.


Will we ever see an AI with phenomenal consciousness like the humanoid robots in the movies? If neuronal models, algorithmic power and a good map of the human brain are sufficient to build a creature that has artificial “understanding”, what would it take to design a creature that is artificially “conscious”?

To gain insights into this fascinating question, I suggest we travel in two very different directions. First, towards spirituality, and specifically towards the Buddhist concept of Anattā, the “illusion of self”. Second, towards quantum physics and the electromagnetic activity at work in the neurons of our brain and nervous system. These are deep topics, and they are better left for a follow-up to this article: “Creepy Chatbots, Strong AI, and its Skeptics”.

Oh… and as for that question about ChatGPT and the risk to civil society… well, unfortunately, there is no reassuring answer. We have to trust that human ingenuity will provide a solution to the problems posed by human ingenuity. As Ezra Klein puts it, the cost of producing “bullshit” has gone to zero. We can order machines to produce text, code, audio, and even video… fast and for free! Welcome to a brave new world.

