On AI Anthropomorphism: Commentary by Ron Wakkary

by Ron Wakkary (Simon Fraser University, Canada)

Published in Human-Centered AI · Apr 19, 2023


Editor’s note: This article is a commentary on “On AI Anthropomorphism,” written by Ben Shneiderman (University of Maryland, US) and Michael Muller (IBM Research, US). We have reproduced the commentary in its original form, sent as an email to Michael Muller, with only minor edits to reference formatting.

I largely agree with Michael’s point that we need to examine human relations to AI and that anthropomorphism can be helpful (unavoidable, even) in this regard, though it comes with limitations and is only one of many possible relations with AI. Here, I think looking to philosophies of technology, taking a broader look, is helpful, as their central focus is on the relations between humans and technology and how these relations shape the world. The issue that jumps out at me in the debate is that we simply don’t know enough about the effects that have arisen and will arise from our use of AI. Anthropomorphism, as was discussed, is largely unavoidable in how humans relate to the world, and while there are clear pitfalls, it has been put to good use to make our relations with nonhuman entities more accountable. For example, there is a movement to grant natural systems like rivers “personhood” so they can be represented in our human legal system. The latest is the Mutehekau Shipu river in Quebec, which has been granted multiple rights, including the right to flow, to maintain biodiversity, to be free from pollution, and to sue anyone who infringes on these rights (Nerberg, 2022). Other rivers, such as the Whanganui in New Zealand and the Amazon in Colombia, were previously granted legal personhood. All were pursued by local Indigenous nations and groups to protect the rivers from further human development. Corporations were granted corporate personhood in the late 1700s, which gives them certain rights but also makes them liable and accountable in the legal system, able to be sued or criminally punished in most jurisdictions.

Seeing AI anthropomorphically helps us to be concrete about the effects and harms we most want to prevent. Is it okay for a chatbot to refer to itself as “I”? This is deceptive, but far less so than a chatbot’s convincing but fake capacity to have a meaningful conversation. Should a line be drawn there? I’m not sure, but I am sure that I don’t want an AI system to be granted legal personhood. And I do want us to enforce the legal personhood of the corporations that own these systems and hold them accountable for intellectual property and copyright infringements, fraud, disinformation, and racial and gender discrimination. Now, I have taken a broader view of anthropomorphism than either Ben or Michael, but there is value in exploring relations broadly: it forces us to include the wider context and the others who also become part of the relations with an AI system, and that wider context foregrounds implications.

Anthropomorphism has its limits; as Ben argues, it can cause us to overlook important nonhuman technological aspects and possibilities. Here, I think philosophies of technology are helpful in understanding anthropomorphism and other relations. The philosopher Don Ihde examines the role of technological artifacts in phenomenologically mediating between us and the world. To unpack the mediation of technology, he describes different ways in which technology shapes us and is shaped by us (Ihde, 1990). This can usefully be applied both to AI and to accountable interfaces for AI. For example, anthropomorphism is what he calls an alterity relation, in which we enter into a dialogue with machines that act as an “other,” distinct from us but with clear human traits. This relation is the most dominant one governing HCI interfaces (see Verbeek, 2015), which to me includes direct manipulation to navigate presented options, a crude form of dialogue on the same continuum as the much smoother and more mediated natural language interface of ChatGPT. And so, among the possible ways of understanding ChatGPT, alterity or anthropomorphism is one. Another relation is an embodied relation, in which technology is close to the body and tends to disappear as a result, like the effect of wearing glasses or contact lenses. What is an accountable interface for an embodied AI system, like the Apple Watch? The watch uses AI to interpret heart rates for cardiac dysfunction and body temperature for ovulation, but given its embodied relation, the technology appears to make our biological activity totally transparent, yet the activity is mediated by AI in ways that are unclear. Who or what, then, is accountable for healthcare and lifestyle decisions? And how do we design such interfaces, haptic and minimally visual, so that they not only communicate but also make clear the mediation and inferences of AI in the process? The embodiment of AI systems is only going to increase in the future.

Another of Ihde’s relations is hermeneutic, of which a good example is a mercury thermometer that you read and interpret to understand air or bodily temperatures. In this example, the interface gives you a basis for making a judgment, for interpreting yourself or your surroundings. A mercury thermometer puts the temperature on a scale, which is helpful for determining the severity of a fever. It is in context, so you can assess how hot you feel against what the thermometer says. Yet the interfaces of Apple Watch apps are given as face-value readings when they require interpretation. Further, we are not given the necessary resources to interpret them well. With aplomb, ChatGPT offers up generic facts alongside the most phenomenal hallucinations of information, making it challenging to interpret accurately. This relation shows the necessity for AI interfaces to reveal the mediation and to present data in ways that support accurate interpretation by us, rather than portraying such systems as presenting well-researched or transparent facts.

Lastly, Ihde describes another relation as background, in which technology operates in the background in ways that affect us but are at best at the periphery of our attention. In the case of AI, I can see an interface that brings this relation to our attention: for example, with every computation or each query, the CO2 emissions generated by the data centers are presented to the user. Or I can imagine that each time we post on Reddit, we are made aware of the likelihood that the post will become part of a language model, revealing the background collection, training, and creation of data sets. As I said, there are many possible relations and more yet to be defined, each of which can help us understand the effects of technology. For example, Peter-Paul Verbeek offers several relations that he refers to as “cyborg,” as they account for augmented reality, technological implants, and the combined capacity of human-technology hybrids (Verbeek, 2008). Bruno Latour has long argued that there is a shared agency in the technological relations that bind us in human-nonhuman assemblages, a view that has a lot to say about the combined relation of humans and AI (Latour, 1993). And recently, Ashley Shew has examined the relations between technologies, ableism, and AI (Shew, 2020), and Jesse Josua Benjamin has sought to rework the underlying philosophy of technological mediation based on AI systems (Benjamin, 2023).

No doubt the sheer number and variety of possible relations is complex, but then again we should not underestimate nor unduly simplify the complexities of the technologies we design with. To add to the complexity, technological relations are largely seen as dynamic and fluid, quickly shifting from one stable position to another (Ihde, 1990; Latour, 2007; Bennett, 2010). This is why the Apple Watch can at one moment feel like a part of your body, to the point of escaping your perception, and at the next shift to an externalized visual display that needs to be read, like a book, to interpret the data hermeneutically. But these shifting relations inform or condition each other. As Robert Rosenberger points out, the same artifact can shape two very different political realities (Rosenberger, 2014). For example, he cites how public benches with armrests appear to most as a public amenity, whereas for the unhoused or homeless, the same artifact is clearly about excluding them from what is understood to be public space. As many of us now know, the same political inclusions and exclusions have been shown to operate in AI-based facial recognition systems that enforce racial bias (Buolamwini and Gebru, 2018).

The complexities by which we are entangled with technologies are what make technologies, and in particular AI, so powerful in shaping our worlds, and as such, despite the challenges, we need to examine human relations to AI. However, the idea of knowing technological relations is only partially about having a theory that can be applied to designing interfaces; more so, it is a tool for understanding the effects of AI and its user interfaces. With accountability, it is the effects of AI that need to be taken seriously and in turn used to inform the design of AI interfaces. Yet we are in a time and a place in which the exact opposite is occurring. In the rush to market, in what is essentially a poorly designed grand experiment that would never pass most research ethics board (REB) reviews, ChatGPT 3.5 was released as an open beta to collect data on use in order to improve it further and make it safer! It was an open beta, or a research study with little accountability, that is estimated to have reached 100 million users within two months of its release (Hu, 2023).

So where to start? Here I differ from Ben, who emphasizes the intentions of designers and their decisions, e.g., the decision to use pronouns or the designer’s goal of making AI act subservient to human needs and values. We need goals in designing, but goals that are not sufficiently informed by the effects of what we design are simply intentions and will not get us very far. Ihde refers to this as the designer fallacy, described as the false “notion that a designer can design into a technology, its purposes and uses” (Ihde, 2008, p. 51). This aligns with Edward Tenner’s understanding of unintended consequences (Tenner, 1997) and, in HCI, with Paul Dourish’s ideas of appropriation (Dourish, 2003). What these have in common is that the measure of the design of a technology is its effects, not its goals. What is unique about Ihde is his argument that the challenge in designing to purpose is that technologies and materials are not neutral. Technology is an actor that shapes outcomes and effects in ways that are not readily perceivable, let alone predictable. This underscores for Ihde, and I find this very applicable to AI, the need to examine the different relations with AI as a matter of effects that can then be brought into the design process.

This need to see AI as relations, and to attend to the effects of those relations for greater accountability, was argued very recently in an editorial for Wired magazine by Sasha Luccioni in response to the open letter from the Future of Life Institute calling for a six-month moratorium on AI research (Luccioni, 2023). Luccioni takes the authors of the letter to task for arguing about future dangers of AI while overlooking currently known ill effects. For Luccioni, focusing on the effects of the present is the path forward to determining what a successful AI is. She is not alone in critiquing the open letter; Emily Bender and Timnit Gebru both fired back at it as well. Bender and Gebru are co-authors of the “stochastic parrots” paper, which examines in detail the current effects of present-day AI that have gone unaddressed (Bender et al., 2021). Not only do they argue for the need to address issues with present-day AI, but they articulate this as a series of relations. For example, they cite the environmental cost of large language models (LLMs) in terms of CO2 emissions from data centers as a consequence of inefficient algorithms and the data size of LLMs. Further, they link this environmental harm to known climate justice issues, in which people who have been marginalized or who live in the Global South bear the greatest burden of climate change.

Akin to feminist Standpoint Theory (e.g., Harding, 1986), Bender and colleagues argue that the stakeholders at risk of being most negatively impacted constitute another key relation, and that frameworks exist to include such stakeholders. In considering how these effects, illuminated through AI’s relations to the world, can be used in the design of AI, they argue for “pre-mortems” to anticipate failures and harms before release (which arguably was done with GPT-4 through a system card (OpenAI, 2023)) and for curation of the data set. Here we can see a slower, more judicious approach to AI that could be implemented at the level of designing AI interfaces as well. This approach, in which a slower development of AI technology is driven by the mitigation of present-day as well as future effects, is at the heart of Timnit Gebru’s “Slow AI” (see Strickland, 2022). This raises the larger issue, not only for AI research and development (including the interfaces) but for technology in general, of how we go about conducting research and science with technology that is increasingly complex and “non-neutral.” The challenge at hand is prescribing the kinds of technologies we want, based on values and equity, in light of the complexities and unknowns of the very technologies, let alone of how we put them to use. This is where I believe Michael is going: moving the discussion past the particular and somewhat limited harm of anthropomorphic design to the larger question of what interdependencies and relations form around our technologies and the way we design them. How do we ask that question and pursue those answers to inform what we should do in design?

However, unlike Michael, I’m more wary of discovery as the prevailing rationale for pursuing knowledge. Here I shift back toward Ben and the need for accountability in the choices we make. And here I point to the need for what the philosopher Isabelle Stengers calls “slow science,” which precedes and aligns with Slow AI and its overall concern with the effects and relations that the products of science create (Stengers, 2018). For Stengers, slow science resists the overwhelming force and mobilization of scientific practices, which, like technology research, are market-driven (including the need to get grants and publish) and oriented toward short-term solutions. Resistance means questioning research objectives that exclude or override agendas concerning the marginalized, the more-than-human world, and the prevention of systems that reproduce and amplify past harms.

I strongly resonate with Michael’s imperative to examine the different (known and unknown) relations with AI, but also with Ben’s aim of ensuring accountability. But accountability is a matter of the effects of technological relations. How does a relation between me and a technology affect the world around us? Who or what else is brought into relation (willingly or otherwise) with that technology, and to what effect?

References

  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).
  • Benjamin, J. J. (2023). Machine Horizons: Post-Phenomenological AI Studies. Doctoral dissertation, University of Twente, Enschede, The Netherlands.
  • Bennett, J. (2010). Vibrant Matter: A Political Ecology of Things. Durham, NC: Duke University Press.
  • Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (pp. 77–91). PMLR.
  • Dourish, P. (2003). The appropriation of interactive technologies: Some lessons from placeless documents. Computer Supported Cooperative Work (CSCW), 12, 465–490.
  • Harding, S. (1986). The Science Question in Feminism. 1st edition. Ithaca: Cornell University Press.
  • Hu, K. (2023). ChatGPT sets record for fastest-growing user base — analyst note. Reuters, February 2, 2023. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
  • Ihde, D. (1990). Technology and the Lifeworld: From Garden to Earth. Indiana University Press.
  • Ihde, D. (2008). The Designer Fallacy and Technological Imagination. In Philosophy and Design, 51–59. Springer, Dordrecht. https://doi.org/10.1007/978-1-4020-6591-0_4.
  • Latour, B. (1993). We Have Never Been Modern. Cambridge, MA: Harvard University Press.
  • Latour, B. (2007). Reassembling the Social: An Introduction to Actor-Network-Theory. 1st edition. Oxford: Oxford University Press.
  • Nerberg, S. (2022). I Am Mutehekau Shipu: A River’s Journey to Personhood in Eastern Quebec. Canadian Geographic, April 8, 2022. https://canadiangeographic.ca/articles/i-am-mutehekau-shipu-a-rivers-journey-to-personhood-in-eastern-quebec/
  • OpenAI. (2023). GPT-4 System Card. March 23, 2023. https://cdn.openai.com/papers/gpt-4-system-card.pdf
  • Rosenberger, R. (2014). Multistability and the agency of mundane artifacts: From speed bumps to subway benches. Human Studies, 37, 369–392.
  • Shew, A. (2020). Ableism, technoableism, and future AI. IEEE Technology and Society Magazine, 39(1), 40–85.
  • Stengers, I. (2018). Another Science Is Possible: A Manifesto for Slow Science. Translated by Stephen Muecke. Cambridge, UK: Polity.
  • Tenner, E. (1997). Why Things Bite Back: Technology and the Revenge of Unintended Consequences. Knopf Doubleday Publishing Group.
  • Verbeek, P. P. (2008). Cyborg intentionality: Rethinking the phenomenology of human–technology relations. Phenomenology and the Cognitive Sciences, 7(3), 387–395.
  • Verbeek, P. P. (2015). Beyond interaction: A short introduction to mediation theory. Interactions, 22(3), 26–31.



Ron Wakkary is Professor in the School of Interactive Arts + Technology at Simon Fraser University, and Professor in Industrial Design at Eindhoven University of Technology.