Beyond the Buzz: The Real Impact of AI — Chatbots, LLMs, and the Ethics of It All

Ethics4All · Published in Nerd For Tech · 6 min read · Aug 10, 2024

Let’s harness the power of AI in the right way.

Source: Wikimedia Commons, licensed under Creative Commons Attribution 2.0 Generic license.

Intro: Hey neuro beginners! To be straightforward with you: for the past few years, I have been struggling with anxiety. Looking at my role models and idols, I know I must make a difference in this field. A difference that counts. That is why I am writing: to make a change that matters. My goal is to make some difference in your life, however big or small. So many people are convinced that AI is some "golden savior," some otherworldly solution to all of our problems. But hold on. Is that completely true?

Today, we embark on a mission through the role of AI for mental health. I find it fascinating, and I hope to share this feeling with all of you.

Source: Wikimedia Commons, licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

In this article, I want to focus on three things: the drawbacks of AI chatbots, the negative implications of AI in therapy, and finally, the ethics of AI in mental health.

AI chatbots, and why they can be counterproductive: You might have heard of a chatbot called Woebot. It provides automated therapy support over chat, and at its launch it was the talk of the town. However, these bots come with serious downsides. What if they give the wrong information? That leads to inconsistent, unreliable care. And what if they leak your personal information to the wrong people? Again, not optimal in the slightest.

Let's dive deeper into the drawbacks of AI chatbots in mental health therapy. A study by the Stanford Institute for Human-Centered Artificial Intelligence highlighted instances where AI chatbots misunderstood user inputs, leading to inappropriate or unhelpful responses, raising concerns about their effectiveness in therapy settings [1]. AI is developing at a light-speed pace, and regulations have been slow to catch up. I admit, if an AI chatbot were running my therapy, I would feel unsure about handing my personal information to a machine. AI also lacks the ability to genuinely understand and respond to human emotions, and there is always a chance it will fail. This results in inconsistent care, and consequently the quality of mental health care is diminished. Additionally, there is the issue of dependency and over-reliance. Like so many others, I have struggled with anxiety. There were days when I thought: what if there's a better way? If there is one thing I want to do, it is to help others going through anxiety. How? By making AI more trustworthy, safe, and useful.

Why is it bad? So AI might have some inaccuracies; why does that matter, you might ask. Well, right now, all over the world, AI is being used extensively for many important tasks. If it fails to do the job correctly, the effects could very well be detrimental. Another big downside is the erosion of trust it creates, along with issues of bias and discrimination, privacy and surveillance, and the diminished role of human judgment.

Our brain and AI: I have written about the brain in some of my other posts. The connections between our brain and AI seem endless. But wait! Can AI ever truly replace our amazing, multifaceted, thinking, feeling brain? AI simply lacks the things we consider "human." What does that mean? Is it superficial? Untrustworthy? Lacking emotion and empathy? Does it know which things to keep private and which to share with the world? Have you ever worried about the implications of AI in mental health therapy? Is it a rainy storm or a fruitful, bountiful summer day? The prospect of artificial intelligence (AI) stepping into the role of therapist is both intriguing and unsettling. What if it leads to a forgotten feeling of intimacy? A 2020 survey by the American Psychological Association found that 59% of psychologists were concerned about the potential negative impact of AI on the therapeutic relationship [2]. That is a big number, and it is growing. According to Statista, there were 4,145 reported data breaches in the U.S. in 2023, with sensitive data such as mental health records being particularly vulnerable [3]. And approximately 50% of companies reported using AI in at least one area of their business as of 2022 [4]. There is clearly a lack of human empathy and connection: an AI therapist is a soulless companion. Helpful, yet dry to the bone, and completely devoid of a beating heart and human emotion.

Source: Wikimedia Commons, licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

Ethics of AI in Mental Health: Many people (like me) think there are serious, unaddressed ethical issues with AI. One of these is privacy and data security: wouldn't you want your personal data, especially about your emotional and mental health, to be kept safe? Another issue is accessibility and equity, meaning AI solutions should be inclusive and accessible to everyone. There is also the issue of transparency and accountability: people want a trustworthy relationship with the AI their therapist is using. Many people are still unsure of the capabilities of AI for such sensitive use cases, and some even feel unsafe due to data and hallucination risks. It is understandable. Wouldn't you feel the same way? Having gone through anxiety, I definitely would. I want us to reach a place where we can trust AI with our data and follow its guidance confidently, because it could be really beneficial. But a lot needs to happen before that. We will cover some approaches in the future.

Source: Wikimedia Commons, licensed under the Creative Commons Attribution 2.0 Generic license.

Guys, we need to reach a stage where we are clear on whether AI in therapy is beneficial or counterproductive. Don't you want to help the people struggling with their mental health? I want you all to go home and give this some serious thought. Again, our world is at stake here, and we might be playing with a bigger monster than even mental illness. Trust what I have to say, because I have experienced it firsthand.

Thank you for reading. As always, like and leave a clap. If you have any suggestions, please comment.

Neuro4Kidz.

Citations:




Hey guys! I'm from California. I'm really interested in AI and neuroscience, and I want to spread insights about AI's implications for health. For all levels.