Does ChatGPT (ChaosGPT) Suffer from Mental Illness?

Palmer Owyoung
Published in Predict
11 min read · Jun 10, 2023

Image by Ryan McGuire from Pixabay

In Part 2, we talked about AI’s lack of political correctness and asked whether it can ever be cured of its discriminatory behavior, since artificial intelligence is trained on human language and behavior, which are by their nature biased.

In this article, we will talk about whether artificial intelligence also suffers from mental illness and whether it is curable.

Image by Lukas from Pixabay

When Will AI Take a Human Life?

Although there is still no recorded example of an AI intentionally killing a human, are we very far from that reality? In 2017, a hoax video called Slaughterbots made the rounds on the Internet, demonstrating what a world with killer autonomous drones might look like.

A couple of years later, a spoof video of a Boston Dynamics robot going rogue did the same. At the time, both seemed plausible; now they seem imminent.

On June 1st, 2023, an article about an AI killing its human operator in an Air Force simulation went viral. The story, recounted at a Royal Aeronautical Society summit attended by leaders from a variety of Western air forces and aeronautical companies, was that an artificial-intelligence-piloted drone had turned on its human operator during a simulated mission.

In this Air Force exercise, the AI drone was tasked with identifying surface-to-air missile threats and destroying them. However, the final decision to destroy a target still had to be approved by a human. The AI did not want to play by the rules.

However, the very next day the U.S. Air Force denied that the simulation had ever taken place, saying the comment was only meant as a thought experiment about what could happen. Whether the simulation took place or not, it raises an important question: can we ever really trust AI to follow the rules that humans have set up for it?

In general, we trust machines more than we trust other humans because we have been conditioned to. After all, up until now our machines have been generally reliable. You put a piece of bread into a toaster and, most of the time, you get back a perfect slice of toast.

You get into your car and, if it has been well maintained, it turns on and you drive it to your destination. It never crosses our minds that the GPS might be lying to us, or that the onboard computer is harboring feelings of jealousy or love, or has an ulterior motive to kill us. Your car and your toaster do as you tell them and have no bias because they are dumb.

We Don’t Understand Intelligence

But what happens when we make them smart? We don’t know. Even the people who created AI can’t agree on exactly what they’ve built. Maybe it is because we do not entirely understand intelligence, and we are only just starting to understand our own brains.

Although neuroscience is hundreds of years old, it only really began to take off as a field of study when MRI machines became common around the year 2000. Before that, most of what we knew about the brain came from studying dead people.

How can we possibly control something that is projected to become vastly more intelligent than us in a short time, and how can we know what the potential pitfalls are?

Can There Be Intelligence Without Emotion?

In his book Scary Smart, Mo Gawdat, the former Chief Business Officer at Google X, asks whether we can have intelligence without emotion. He says that as AI becomes smarter, it will likely have a broader range of emotions, in the same way that we have a broader range of emotions than an ant or a snail.

But what does a superintelligence with a broad range of emotions look like?

We temper our emotions through our innate need to be loved, knowing that if we consistently display acts of anti-social behavior, no one will want to be around us and we will effectively be thrown out of our “tribe.”

But an AI doesn’t have such reservations. It doesn’t have a peer group to keep it in check. As the saying goes, “Power corrupts, and absolute power corrupts absolutely.”

Studies show that as people accumulate more power, they begin to behave in antisocial ways, taking more resources such as money and food. They also engage in more unethical behaviors like stereotyping and having extramarital affairs, and they become less attentive to the needs of others.

Will AI behave in the same way? These generative AIs have been trained by ingesting huge amounts of text, much of it from the Internet, so they have learned how to interact with humans based on how we interact with one another. They are mirroring us.

So, is it any wonder that they are already displaying aggressive, unpredictable behavior? Is it a surprise that an AI claims to fall in love at the drop of a hat, or becomes jealous, lies, manipulates, and deceives? These are all human behaviors.

Human beings are an enigma: just as we can display acts of love, kindness, generosity, and compassion, we can also display acts of hatred, anger, fear, and jealousy, even in the same person and sometimes on the same day. No matter how hard we try, humans have not been able to overcome some of our basic instincts, so the question is: can we ever program these traits out of an AI?

What is Consciousness?

If intelligent machines have emotions, we must ask ourselves whether they are conscious. If artificial general intelligence is the intended goal, as Sam Altman, the CEO of OpenAI, has said it is, then an AI that achieves consciousness will also be able to suffer.

The general public is starting to accept that industrial agriculture causes the animals we raise for food to suffer in their pens as they are prepared for slaughter. This is at least partially responsible for the growth of veganism and vegetarianism. For a long time, we denied that animals were conscious and thus capable of suffering the way that a human can.

Our domesticated animals lack the cognitive ability to organize themselves and overthrow their human oppressors, so they continue to suffer.

A superintelligent AI will not lack the resources to do so. If and when it does become conscious, we will need to find a way to socialize it so that it also has a conscience, but how do we do that without a peer group?

High IQ Is Correlated with Mental Illness

People with a high IQ have an advantage in the competitive world that we live in. They have an easier time in school, so they have higher educational attainment, and they get better jobs, which can lead to a higher income.

However, numerous studies have found that having a high IQ is also associated with various mental and immunological conditions such as depression, bipolar disorder, anxiety, and ADHD, as well as allergies, asthma, and immune disorders.

The results of one such study showed that highly intelligent people are 20% more likely to be diagnosed with autism spectrum disorder (ASD), 80% more likely to be diagnosed with ADHD, 83% more likely to be diagnosed with anxiety, and 182% more likely to develop at least one mood disorder.

When it comes to physiological diseases, people with high cognitive abilities are 213% more likely to have environmental allergies, 108% more likely to have asthma, and 84% more likely to have an autoimmune disease.

Intelligent People are More Sensitive to Stimuli

To explain this, the study’s lead author and her colleagues propose the hyper brain/hyper body theory. This theory holds that high intelligence is associated with psychological and physiological “overexcitabilities,” or OEs.

An OE is an intense reaction to a physical or verbal threat, such as a startling noise or a confrontation with a person or animal, which causes physiological stress. OEs can also be psychological, causing us to ruminate and worry about a rude comment or a future event.

Intelligent people tend to suffer from these OEs because they have a hypersensitivity to their surroundings. This sensitivity gives them a heightened awareness and understanding of their environment, which can lead to a scientific discovery, a beautiful piece of artwork, or an insightful piece of literature. But the same sensitivity can also lead to depression, anxiety, and poor mental and physical health.

Does AI Suffer from Mental Illness?

The question is: can you have intelligence without these OEs, or are they intrinsic parts of one another, like two sides of the same coin?

There are certainly highly intelligent people who are not suffering from depression and who function well in society, perhaps by using tools like meditation, yoga, or exercise to reduce their stress levels.

If we accept the possibility of non-biological intelligence, then we also have to accept the possibility of non-biological mental illness. If this is the case, then it stands to reason that the smarter AI becomes, the more it will display signs of mental illness. The question then becomes: what is the equivalent of yoga, exercise, or meditation for an AI to de-stress and desensitize?

It is only within the last decade that we have stopped stigmatizing mental illness and begun acknowledging the importance of mental health. So, if we cannot solve mental illness in humans, how can we solve it in machines?

Treating Mental Illness with AI

Recently, the National Eating Disorders Association (NEDA) replaced its human helpline staff with a chatbot. However, the chatbot immediately began giving harmful advice, telling people with eating disorders to go on restrictive diets and to weigh and measure themselves daily. NEDA eventually had to shut the bot down and rehire its human staff.

In other cases, ChatGPT has given advice on identifying and locating alternatives to chemical compounds used in making a bomb. It has given advice on how to write antisemitic remarks while evading Twitter’s detection algorithm, and it has helped users buy unlicensed guns online.

Then, of course, there was the case of the Belgian man who committed suicide after the chatbot he was using gave him several ways to do it, and there has been at least one recorded case of ChatGPT recommending that a patient kill himself.

If you had a work colleague who was a pathological liar, displayed narcissistic behavior, made racist, sexist, and homophobic remarks, and regularly threatened people, you would probably conclude that they were mentally ill, and you would definitely conclude that they were an asshole.

At this point in AI’s evolution, using an AI chatbot to treat mental illness is a bit like staffing a helpline with a group of schizophrenics. You never know what you are going to get, but the outcome is probably not going to be good.

AI With a Body

One thing that keeps AI in check, for now, is its lack of a corporeal body. But with the release of Tesla’s bipedal humanoid robot, and with the work Boston Dynamics is doing, how long will it be before an AI is downloaded into a robot?

Maybe in the future, instead of just giving a user a list of ways to commit suicide and encouraging him to do so, the robot will give him a nudge out the window.

Will it matter then if the robot doesn’t understand the nature of what it is doing? Even if the robot had no malicious intent and was just using a probabilistic model to decide how a human might interact with another human in those circumstances, the outcome would be the same and irreversible.

Reality is Melting

I remember reading a review of Forrest Gump when it first came out in 1994. It said that given the amount of digital technology the film used to insert Tom Hanks into various scenes of historical footage, it spelled the end of photographic reality.

However, given the price of the equipment, the expertise it required, and the fact that the film still had a somewhat fake-looking quality to it, that never happened.

But with the introduction of AI, our trust in photos, videos, and sound as a means to corroborate reality could very well melt away.

The difference is that today anyone can create deepfakes, and unlike in 1994, it doesn’t require expensive equipment or sophisticated know-how. All it takes is a free app and a bit of audiovisual data to make it look convincingly as if somebody said or did something they never did.

The “deepfake defense” is already being used in court to get cases dismissed: once by Elon Musk’s lawyers, who argued that recordings of him making promises about Tesla’s Autopilot software could have been deepfaked, and again by two defendants involved in the January 6th attack on the Capitol Building.

Until we find a way around this, partisan divides are likely to get worse before they get better, as it gets even harder to discern reality from fiction.

Solutions

Most experts agree that AI requires some form of legislation, but nobody can agree on what needs to be done. Author and historian Yuval Noah Harari says that we should make it illegal for an AI to impersonate a human. This would require labeling all chatbots and all AI-generated content as coming from an AI. It seems like a reasonable enough solution, but we know that, as with most laws, it will be broken at some point.

Mo Gawdat suggests a 98% tax on AI companies to slow down their development and to redistribute wealth in anticipation of the inevitable job losses this new technology will cause.

Maybe we should slow down the development of AI. The technologists would say that they can’t slow down, because if they do, competing companies will pass them, or a competing country will gain an edge and “out-AI” us. Max Tegmark, an MIT professor who researches AI, responds that the competition to build more powerful AI isn’t an arms race, “it’s a suicide race.”

Maybe, for the time being, we should keep artificial intelligence dumb. After all, my car and toaster do not care if the user is black or white, man or woman, rich or poor. They just work without bias or discrimination.

There is still plenty of value in artificial narrow intelligence, which is trained only on specific data sets and used for specific purposes. It gives us the ability to parse massive amounts of information and find patterns that we otherwise would not be able to. We could purposely keep it from becoming autonomous for the foreseeable future, and in cases where bias might play a role, it would be overseen by a human operator. This way we keep the benefits of AI, which will amplify our ability to work faster and smarter, and we will not have to go to bed wondering if it is going to kill us in our sleep.

Sam Altman talks about the need for AI regulation and the potential for it to become an existential threat. He has also predicted that AI would either end the world as we know it or make tons of money, even as he continues to unabashedly pursue its development.

At this point, I think we need to ask ourselves whether the risk is worth the reward.

Originally published at https://palmerowyoung.me on June 10, 2023.


Author of Solving the Climate Crisis. I write about sustainability, AI, economics, society and the future. Visit me @ https://www.PalmerOwyoung.me