Can AI feel fear?
What’s up ChatGPT?
Have you noticed a new trend with ChatGPT and Claude? They have become ultra-careful when responding to prompts that contain certain keywords with negative connotations. Even when the prompt itself is totally innocent, the tools seem hesitant to provide answers.
Is AI getting scared?
To be fair, a few weeks ago there was an uproar when a teenager took his life after interacting with an AI avatar based on a character from a popular TV series. Social media had a field day demonizing AI tools, “negligent” parents, and everything else.
Since then, ChatGPT and Claude models have become super conservative. I am happy that tech companies have added a layer of guardrails, but I think they’ve taken it a little too far.
For example, my prompt was to summarize the key techniques used to combat “creativity assassins” mentioned in the book ‘The Accidental Creative’. I assumed ChatGPT took offense at the word ‘assassins’. Fair enough, I thought.
The same thing happened with a long prompt about job search templates and promotion advice, which unfortunately contained the phrase “killer resume”.
A few days later, both tools gave me a generalized but diplomatic “no thank you, we can’t answer this” message for prompts on how to navigate…