Technical & Human Problems With Anthropomorphism & Technopomorphism
Anthropomorphism is the attribution of human traits, emotions, and intentions to non-human entities (OED). It has been used in storytelling from Aesop to Zootopia, and people debate its impact on how we view gods in religion and animals in the wild. Those debates are out of scope for this short piece.
When it comes to technology, anthropomorphism is certainly more problematic than it is useful. Here are two examples:
- Consider how artificial intelligence is often described as working like a human brain, which is not how AI works. This leads people to misunderstand its potential uses, to apply it in inappropriate ways, and to overlook applications where it could provide more value. Ines Montani has written an excellent summary of AI’s PR problem.
- More importantly, anthropomorphism contributes to our fear of progress, which often leads to full-blown technopanics. We are currently in a technopanic brought about by the explosion of development in automation and data science. Physically, these machines are often depicted as bipedal, even though two legs are hardly the most effective form of mobility for a killing machine. Regarding intent, superintelligent machines are imagined as a threat not just to employment but to our survival as a species. This assumes that such machines would treat Homo sapiens much as Homo sapiens has treated the other species on this planet.
Historically, we have used technology to achieve both selfish and altruistic goals. Overwhelmingly, however, technology has brought human civilization to a point at which we are the most peaceful and healthy we have ever been. To continue on this path, we must design machines to function in ways that make the best use of their machine-like abilities.
Technopomorphism is the attribution of technological characteristics to human traits, emotions, intentions, or biological functions. Think of how a thought process may be described as cogs turning in a machine, or someone’s capacity for work as bandwidth.
A Google search for the term “technopomorphism” returns only 40 results, and it is not listed in any online dictionary. However, I think the term is useful because it helps us stay mindful of our differences from machines.
It’s natural for humans to use imagery we do understand to describe things we don’t yet understand, like consciousness. Combined with our innate fear of dying, this leads us to imagine ways of deconstructing and reconstructing ourselves as immortal beings or as one with technology (the singularity). This is problematic for at least two reasons:
- It restricts our understanding of new discoveries about ourselves to a narrow set of machine-shaped forms.
- It often leads to teaching and training humans to function as machines, which is not the best use of our potential as humans.
Pearson colleague Paul Del Signore asked via Twitter, “Would you say making AI speak more human-like is a successful form of anthropomorphism?” This brings to mind a third major problem with anthropomorphism: the uncanny valley. While adding humanlike interactions can contribute to good UX, too much (but not quite enough) similarity to a human can result in frustration, discomfort, and even revulsion.
It is increasingly important that we understand how humans can best work with technology for the sake of learning. In the age of exponential technologies, that which makes us most human will be the most highly valued for employment and the most rewarding for personal enrichment.