Machines are not human. And that’s OK

Alexandra Mack
Alchymyx
3 min read · Jun 30, 2024

As an anthropologist who has spent years studying human behavior, I’ve been thinking a lot about the tendency in both the tech industry and popular culture to use a human metaphor for Artificial Intelligence. While this arguably makes a complex technology more accessible and approachable to non-technically trained people, I believe the approach is fundamentally flawed and potentially misleading.

I know I am not alone in this — a recent op-ed in the New York Times mused that we lose our interactions with other humans when we start to substitute AIs for them. Others have noted that there is more than a hint of dystopia in the way these machines are portrayed in real life as well as in the movies.

Photo by Drew Dizzy Graham on Unsplash

But I think the core problem with anthropomorphizing AI is the simple fact that machines are not human. People are inherently contextual and often (seemingly) irrational. There is no set of rules that can accurately predict what a human will do in every situation.

This complexity is something that many technologists, particularly those without a background in social sciences, often fail to fully grasp. They miss key components of what makes us human — our ability to act unpredictably, to be influenced by subtle contextual cues, and to make decisions that defy logical explanation. While we can make educated guesses based on patterns and tendencies, the core of human decision-making cannot be fully coded or replicated.

Artificial Intelligence, on the other hand, operates based on rules and algorithms. While these can be incredibly sophisticated, they are fundamentally different from human cognition. That is not the same as saying humans are “better” than AI. Certainly machines beat out humans in their ability to process vast amounts of data and identify patterns, among other skills. My point is that AI is fundamentally different from Homo sapiens, and we should treat it as such, especially if we want to best utilize it for broader benefit.

By anthropomorphizing AI, we risk obscuring these crucial differences. We may start to expect human-like behavior from AI systems, leading to misunderstandings about their capabilities and limitations. We risk expecting AI to understand nuance, context, or emotion in ways it simply can’t, or over-relying on it for tasks that require human judgment. Anthropomorphizing may also limit our ability to envision the distinctly new things this technology might enable, beyond computing faster than humans.

Instead of trying to make AI more human-like, we should focus on developing a nuanced understanding of AI and utilizing it for what it is — a powerful tool with its own unique strengths and limitations. Recognizing these differences and framing AI in its own terms, rather than through a human lens, will allow us to leverage it for broader and more meaningful benefits.

I expect that this is only my first foray into articulating my thoughts on this, and that my ideas will continue to evolve as the technology and my understanding of it mature, and as I listen to others who approach it from a different point of view than my own. A few weeks ago, I had the pleasure of meeting John Kao, who believes in the psychological importance of making AIs more human. He is part of a group demonstrating the humanity of machines through a project in which AI writes an opera about Alan Turing — a project that is honestly quite wonderful, so I will leave it to readers to form their own opinions.
