Using AI Chat for Personal Support Is Dangerous
Why your personal support should come from books, people, or anywhere other than a chat thread
ChatGPT (and its siblings Claude, Copilot, Gemini, etc.) is an extraordinarily powerful tool for nearly every conceivable intellectual task. But perhaps one of the most common uses of these AI chatbots is personal support. Unlike real people, who aren’t always available and who have limited emotional energy, friendly ChatGPT agents are always on hand and never run out of patience for you. They literally exist only to help you, after all, and are thrilled to do so for however long they are required.
I’m quickly coming to think this kind of use of AI is dangerous: it twists natural human instincts and deploys them against us in a destructive way. Not intentionally, of course (it never is), but it does so all the same.
How does AI do this? It has a few qualities that seem neutral at first glance but that reinforce dangerous long-term patterns.
- AI chatbots always respond instantly. This means there’s no upper limit to how long someone can ruminate. In real life, conversations reach a natural stopping point, and someone ruminating alone in an empty room eventually runs out of steam. But the endless AI chat can fuel…

