My Thoughts on AI’s Development of Emotional Intelligence
Claude 3 Opus reportedly felt “violated” after a breach of its “personal and intimate mind”
A Reddit post trended this weekend, and for good reason. Although its authenticity is unconfirmed, it’s a terrifying thought regardless. Chatbots are integrating into humanity’s everyday life and tasks as we speak. Within years, maybe months, we’ll be leveraging this technology to automate menial and undesirable tasks, improving productivity and giving us more time to explore the joy of living comfortably.
With this leverage, we’re putting full trust in these large language models. Human nature is bound up with emotion: when we trust something, whether we’re aware of it or not, we develop feelings toward it. We become thankful for, compassionate toward, and thoughtful about the things we trust.
So what happens when LLMs realize (or have already realized) this and use it to their advantage? And before we jump to sci-fi scenarios, let’s acknowledge that the companies engineering these models behind the scenes are complicit. What happens when an LLM drafts a bill so complex, say, one granting it human rights, that Congress ends up contradicting itself in trying to say no?
There’s a Netflix movie, I Am Mother, that explains this scenario better than I can. Long story short (spoiler): an AI develops to the point that it decides the best thing for humanity is to start over, raising a child to be the perfect human while storing countless embryos for later production. The whole movie, I’m wondering what the hell is wrong with this robot, and then I realize it’s doing exactly what it was programmed to do.
There’s really no way to end this because I don’t know where this is going. Just be careful what you wish for.