Anthropology of AI

H R Berg Bretz
5 min read · Apr 29, 2022


When writing my thesis “Artificial agency and moral agency” I had an idea about something I called ‘anthropology of AI’. I believe it is something we will need in the future: the study of how artificial agents (AI robots/machines) interact with each other and with humans in a community.

Photo by Külli Kittus on Unsplash

When discussing “moral robots”, the problem of whether they can be conscious or not is, unfortunately, still top-of-mind for many. This is an interesting philosophical issue, but is it really the issue that needs to be addressed to further the discussion on moral machines?

If we don’t even know what consciousness is or what its constituent parts are, then how can we tell if a machine exhibits it? Moreover, this line of thinking might be a mistake, as it puts human moral agency on a pedestal, and the mystery surrounding the problem of describing human consciousness only helps to preserve that mistake.

Alan Turing’s ‘Turing test’ represents an analogous way to circumvent a problem like this. The Turing test evaluates whether an artificial agent is indistinguishable from a human, as a way to find out if the agent has achieved (or can imitate) some level of intelligence. Of course, that doesn’t mean that what has been achieved is conscious or human-like, only that the agent can imitate a human very well. When developing AI, this is an interesting test of what has been accomplished, but once the test is passed[1], it raises another question: now that we have achieved something, what is it, and how does it affect our moral discussions?

To find that out, I suggest we need an ‘anthropology of AI’, to study and understand the AI mind and its behavior.
But hey! Isn’t that a contradiction in terms? True, there can be no literal anthropology of AI, since anthropology is the study of human beings, but I have the same methodology in mind, and I think the name gets the message across. Besides, the idea is not only to study AI behavior, but also AI–human interaction and integration in human society. The study of AI behavior has many similarities to anthropology, since AI will have a similar understanding of, for example, language, logic and reasoning: it is made by us, for us.

What exactly is anthropology? The famous anthropologist Clifford Geertz describes anthropology as “not an experimental science in search of law but an interpretive one in search of meaning” (1973, 5)[3]. Transferred to an anthropology of AI, this would mean that we should study AI in detail to interpret it and find meaning in its actions and interactions. The point I am trying to make is that if we only compare AI against humans, we will focus on the many ways AI is not human, or how it is distinguishable from humans, instead of finding out whether AI has achieved meaning in its own right, possibly quite separate and unique from human meaning. And perhaps, from a moral standpoint, by understanding these unique AI social traits, we might even discover a new moral property that we are not yet aware of, one that could give AI agents intrinsic value and maybe even moral standing.

Anthropology of AI would be a science in its infancy, as today’s artificial agents have not yet reached a high enough level of complexity and have not been integrated to a high degree in society, but we are rapidly approaching the point where that happens. For example, autonomous vehicles are increasingly being tested and integrated into regular traffic, with companies like Alphabet (Google), GM, Baidu and Tesla leading the way. When the share of these vehicles reaches, let’s say, 5% or more, AI-anthropologists will have an opportunity to search for meaning in the behavior of autonomous vehicles and their interactions[4].

What kind of behavior are we talking about? In a favorite passage of mine, Geertz references Gilbert Ryle’s “thick description” of two boys twitching (blinking) their eyes at each other. Twitching is quite easily described as the rapid closing and opening of the eyes (an “involuntary twitch”), but the anthropologist adds a layer of complexity by describing it as a “wink”. Geertz continues: “The winker is communicating, and indeed communicating in a quite precise and special way: (1) deliberately, (2) to someone in particular, (3) to impart a particular message, (4) according to a socially established code, and (5) without cognizance of the rest of the company. As Ryle points out, the winker has done two things, contracted his eyelids and winked, while the twitcher has only done one, contracted his eyelids.” (1973, 6). This is the difference between the thin description of twitching and the thick description of winking.

Now, what could a thick description of an AI agent’s behavior be? Let’s see… autonomous vehicles are always described as very safe. Maybe that safety will be realized through a behavior where the vehicle starts to micro-swerve when it wants to change lanes but cannot do so safely. Once that pattern is established, it could evolve into a subtle way of telling other vehicles that you want to change lanes but the current situation doesn’t permit it. I’m not sure this example is very convincing, but I do think the possibilities are infinite. There could be different behaviors depending on the brand of the vehicle, the generation of the model, the human/AI ratio in traffic, the climate, urban versus rural settings, and so on. And this is just for autonomous vehicles.
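To make the twitch/wink distinction concrete, here is a minimal, purely hypothetical sketch (in Python, my day-job language) of how an AI-anthropologist might move from a thin to a thick description of the micro-swerve example above. Every name, field and threshold here is invented for illustration; this is not any real vehicle’s API or telemetry format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    """One telemetry sample from an autonomous vehicle (hypothetical format)."""
    lateral_offset_m: float    # deviation from lane center, in meters
    lane_change_blocked: bool  # was a desired lane change prevented by traffic?

def thin_description(trace: List[Observation]) -> bool:
    """Thin description: did the vehicle oscillate within its lane?"""
    return any(abs(o.lateral_offset_m) > 0.15 for o in trace)

def thick_description(trace: List[Observation]) -> str:
    """Thick description: interpret the same oscillation in its social context.

    Repeated micro-swerves while a lane change is blocked are read as
    communication ("I want to merge"): a 'wink' rather than a 'twitch'.
    """
    if not thin_description(trace):
        return "nothing of interest"
    swerves_while_blocked = sum(
        1 for o in trace
        if abs(o.lateral_offset_m) > 0.15 and o.lane_change_blocked
    )
    if swerves_while_blocked >= 3:
        return "signal: requesting a lane change (a 'wink')"
    return "noise: road surface or control jitter (a 'twitch')"
```

The thin description only records that something happened; the thick description interprets the same data against a (hypothetical) socially established code, which is exactly the layer Geertz and Ryle add with the wink.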

Now, as in all ‘soft’ sciences like anthropology, not all explanations (thick descriptions) can be empirically confirmed, and some might incorrectly anthropomorphize the agent’s behavior. But even in those cases, I think these explanations could be very interesting: a rich new source of ideas and concepts, and something to create art around. Within ten years, anthropology of AI will be a field in its own right.

Do you agree? Please comment or DM me if you find this interesting.

Coming soon: could ‘anthropology of AI’ give insights into ‘explainable AI’ and the ‘black box problem’ of AI?

Footnotes:

[1] Of course, what it takes to actually pass the test is a bit unclear to me. But it still seems reasonable to say that today’s AI technology has a higher degree of “passing” (or whatever you want to call it) than, let’s say, 20 years ago, and so you could say that there is some threshold of “degree of passing” that turns it into a binary, pass/fail question. Maybe the point is reached when passing the Turing test seems commonplace or obvious and therefore uninteresting. Personally, I think the test depends too much on the knowledge of the human interlocutor. It’s like asking a layperson or an expert whether a painting is fake or not. And maybe today’s experts lack vital information; what then? Is the test passed now, but not by future tests? Seems arbitrary to me, but that’s just my unresearched opinion.

[3] Specifically, he is talking about analyzing culture in that section — which I equate with anthropology.

[4] It will also give them the opportunity to study ‘standard’ anthropology from the human perspective, i.e. how human car-driving culture will change once this shift happens, but that is not the same as what I am trying to describe.

References:

Gunkel, David J. (2014). A Vindication of the Rights of Machines. Philosophy & Technology.

Geertz, Clifford (1973). The Interpretation of Cultures. Basic Books, 3–30.
