Why Using ChatGPT As Therapy Is Dangerous

Stephanie Priestley
8 min read · Apr 7, 2023


The rise of the chatbot is a theme that feels inescapable at the moment.

The recent explosion of OpenAI, seemingly settling into every industry, service and possibly every household, has created a landscape that I don’t think anyone was quite prepared for. I have spoken to people whose lives and jobs have been made a lot easier by it, to sceptics, to champions, and to people who are filled with anxiety at the mere mention of it. I can’t help but wonder how responsible it was to unleash such a force into the world with what seems like no clear purpose and even fewer regulations.

I’m not suggesting that there was any kind of malice behind its creation, but isn’t that the exact plotline of the majority of our beloved books and movies? Everything from Skynet to Ultron was created with good intentions and resulted in a very human cost. Yes, it may sound like I’m over-dramatising, but the human cost doesn’t necessarily equate to an artificial intelligence apocalypse. I simply mean that we are in danger of creating a scenario in which the use of AI within industries that are indispensably human will have a human cost. An example of this, and the real reason I have decided to write this, is that I am observing a dangerous trend in which people are equating OpenAI’s tools with therapy.

In this post, I want to outline the reasons I believe it is not, and cannot ever be, therapy. I also want to highlight the implications of treating it as such. To me, there feels like an infinite number of reasons at the moment, some of them unformed and some of them feelings that I can’t easily translate into words. I have chosen to focus on a small number today, with the idea of sparking and continuing conversations.

“I have had therapy and no therapist has ever helped me as much as Chatgpt”

“Why pay for therapy when you can get it free from a chatbot”

“A therapist never gave me advice, I put in the correct prompts and Chatgpt responds with honesty, and solutions and doesn’t judge”

“Chatgpt is always there for me and I’ve told it things I have never told anyone else before it’s the most supportive relationship I’ve ever had”

Therapy is largely relational

This feels like a solid foundation as to why ChatGPT is not and cannot be therapy. Across the different therapeutic modalities, there is a consistent theme of being in a relationship with another person and making psychological contact. In other words, it is more than a conversation.

This is for many reasons, and I don’t want this blog post to become too heavy on the theory side, but arguably one of them is to experience how you land with the other person. Firstly, for now at least, AI is not conscious and therefore doesn’t have a psyche, if you like. That makes it fundamentally impossible to make psychological contact with a chatbot (stay with me, because I want to come back to that point a little further down). When I use words like ‘land with the other person’, I’m referring to landing emotionally. Of course, we know that AI doesn’t have emotions either, so it can’t convey empathy, but even if there were a person behind the computer writing responses you still wouldn’t be able to gauge how you land.

Think about a time that you had an effect on someone you were speaking to. This could be a moment of pure joy like sharing celebratory news with a loved one and feeling how happy they are for you. Similarly, it could be sharing something difficult and painful and receiving a feeling response from the other person. The words equate to sympathy, but the feeling is empathic.

Seeing that you affect another human being can be invaluable; in fact, it can make you feel valuable, validated and like you matter. Empathic responses and making the client feel heard and validated are intrinsic to the therapeutic relationship. A relationship between a client and therapist can be classed as unhelpful if it emulates a chat between two friends (a point I will come back to shortly), because when a session becomes collusive, or just like a chat, it usually suggests that there isn’t any therapeutic process happening in the room.

So, if a session between a therapist and client that is completely centred around empathy can also have moments of not being therapeutic, how could a relationship between a bot and an individual ever be classed as therapeutic at all? I’d also ask you to consider how long an individual, under the guise of the bot being therapeutic, can go on without a level of empathy. How long will it take to see the implications of low self-esteem, feelings of not being heard, and possibly feelings of worthlessness? These could lead to isolation and a multitude of mental health issues. Again, where there is no therapeutic responsibility there is no expectation of empathy; the problem is in the label.

Psychological contact

Going back to the statements I found on Twitter, I couldn’t help but consider the definition of psychological contact and how a relationship with a chatbot can contaminate it to some extent. The scenario I was reading about described an individual who was very goal-orientated, setting up prompts to guide the chatbot’s behaviour along the lines of ‘tell me how to reach this goal in a particular way, where I get what I want and will be perceived by others in a particular way’. The chatbot did exactly that, and she stated that it was a great experience for her.

So, a very solution-focused, directive experience with none of those messy feelings involved. Artificial intelligence creates an artificial environment, which is fine if that is what you’re using it for, but for the purposes of trying to sell it as therapy it is absolutely not ‘fine’. My worry would be that the space in which psychological contact sits will be filled with something more unhelpful. Transference is the obvious issue for me, but I also wonder about the implications of trying to make psychological contact with something that cannot facilitate it. In this scenario, it feels to me like an adult/child configuration.

The individual asked for advice and the chatbot gave that advice, which, again, is fine if it is not being used as therapy, but looked at through a therapeutic lens it can be detrimental if left unchecked. For me, it’s already painting pictures of transference running wild and unregulated, with the potential to create unhelpful patterns in outside relationships. The main difference for me is that this scenario is about the individual getting exactly what she wants, while therapy is more about what she needs.

A therapist is highly trained to spot things like transference and, potentially, to break it in a contained and helpful way, whereas a bot will not. Regardless of whether the bot is meeting the ‘wants’ in this scenario, continually failing to meet the ‘needs’ may have a deeper psychological impact. An example could be a parent not meeting the emotional needs of a child, leaving the child with a feeling of never being good enough. By continually acting out this pattern of behaviour, the bot relationship could be creating a problem that wasn’t initially there, or fuelling one that was there but wasn’t within awareness.

A therapist would tentatively help discover these unhelpful parts of the personality, whereas I feel like a bot could drag them out kicking and screaming with no means of containing them.

Long-term implications

Sticking with the imagery of some trauma creature being pulled into awareness (yes, this is a particularly dramatic statement, but trauma can be an unpredictable and destructive beast), it seems like a good opportunity to go back to my point about a human cost. As I previously mentioned, in terms of therapy, ‘collusion’ is considered unhelpful. It can be described as a conversation moving along without any challenge or ‘risk’; I use that word tentatively because it can have connotations of something bigger than I currently mean.

When I speak of risk in this context, I’m referring to the relational unknown. When you say something to another person, you can never be 100% sure what their response will be. It could be argued that by setting clear prompts and guidelines with a bot you are ultimately programming responses that you have already deemed acceptable, and thus not a risk. That doesn’t necessarily have unhelpful implications in any other context, until you consider it therapeutically. Not only is this not therapeutic, but it seems to blur the lines of a lot of fundamental therapeutic theories.
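To make that ‘programming’ point concrete, here is a minimal, hypothetical sketch of how a user might pre-script the emotional range of a chatbot conversation before it even begins. It assumes the OpenAI Python library, and the system prompt is my own illustrative invention rather than anything taken from the scenario above; the point is simply that the user decides in advance what kind of response they are willing to receive.

```python
# A minimal, hypothetical sketch of pre-scripting a chatbot's responses.
# Assumes the openai Python package (v1+) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# The user has decided in advance what counts as an acceptable response:
# constant agreement, no challenge, no difficult feelings, solutions only.
SYSTEM_PROMPT = (
    "You are my supportive coach. Always agree with my reading of events, "
    "never question my motives, avoid talking about feelings, and give me "
    "step-by-step solutions that make me look good to other people."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "My colleague criticised me today. Tell me how to handle it."},
    ],
)

# Whatever comes back has been shaped by the rules above: the 'risk' of an
# unexpected or challenging response has been designed out before the
# conversation even starts.
print(response.choices[0].message.content)
```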

Who are you talking to? Who have you programmed? Who have you imagined your bot therapist to be? Could it be a case of ultimately talking to some kind of idealised version of yourself? In that case, could you be playing out a particularly harmful behaviour pattern?

Manipulation under the guise of helping has more than likely been around as long as the epic battles mentioned earlier, but because it is less tangible, like a lot of mental health-related experiences, it can fly undetected for a sickeningly long time, only to be uncovered after a significant amount of damage has been done. Arguably, uncovering it can take a particularly distressing event, a mistake on the part of the gaslighter, or outside intervention from a loved one or professional.

Now, these are particularly human elements, especially human error (I use ‘error’ lightly: in no way am I condoning this kind of behaviour or suggesting that it can be done in a right or wrong way; this is purely to paint a picture of a gaslighter with a particular goal). AI can make mistakes, of course, but I imagine those mistakes are mostly factual, for example a wrong citation or incorrect code that alters the outcome. It isn’t susceptible to human error in the same way. Therefore, if the goal is to create a particular narrative, it won’t falter from that narrative or be caught out in a lie.

My worry would be that, unconsciously, we could be creating our own abusers in the form of a bot that will never question the prompts we give it.

Carl Rogers, the creator of person-centred therapy, theorised that in order for any therapeutic process to happen within a client relationship, the therapist must create six ‘necessary and sufficient conditions’. These conditions, when present and used correctly, create a safe, boundaried and containing space, free of judgement, to help the client move towards personal growth. They are powerful tools for helping the actualising tendency strive towards growth.

Rogers believed that the tendency towards growth isn’t only human but present in all living organisms. Flowers reach for the sun; potatoes will still sprout after being forgotten in the kitchen cupboard for months, despite lacking what they need to stay alive. No matter what horrible circumstances are faced, the tendency to grow cannot be snuffed out, only changed. If we rely on something non-living that doesn’t tend to the actualising tendency, what will that do in terms of long-term mental health?

I could argue that there is a real danger to our society in using AI as a short-term ‘fix’ that unintentionally leads to an increased risk of long-term problems.

My intention when writing this was to spark conversation and try and regulate my own thoughts around these issues. In my next blog, I want to expand on some of the points I have touched upon and open a dialogue regarding the possible benefits of this in the mental health field.

I still believe that labelling this as therapy is a dangerous trend, and I invite you to consider the difference between therapy and ‘support’. I think this could be an invaluable tool in terms of signposting, triaging and combatting a range of issues, ultimately relieving a lot of stress on our NHS, and I look forward to engaging with these aspects in my next instalment.


Written by Stephanie Priestley

Navigating life as a trainee Psychotherapist, Mother & Faulty Human. Passionate about reducing the stigma around all things mental health. Thoughts are my own.
