Body demarcation of AI

H R Berg Bretz
6 min read · Nov 26, 2023


When discussing AI, we often forget to demarcate the body of the AI, something that I believe can have a big impact on discussions around AI, especially moral discussions. Here is what I mean by this:

When referring to a specific AI, what exactly constitutes its body, and how is it demarcated from its surrounding environment? At first glance, the answer might simply be the 'whole body' of the autonomous vehicle or robot.

But if an autonomous vehicle is remotely controlled, 'where' is the AI? In the server where the AI software is running? Does it include the physical car, or is the car merely a tool that the AI uses to perform the task at hand, for example to transport a person from A to B?

A robot in a crowd — dall·e 2

This is a new problem, since humans are much more distinct in this sense than AI artifacts are. A human is much more easily defined and separated from its environment. Sure, we can lose limbs or clinically replace body parts and thereby change where our body ends and the environment begins, but that isn't so complicated to explain and understand, mostly because we are living biological organisms and it's quite hard to add to or subtract from our bodies. For an artifact, it isn't.

If a human picks up a hammer, the hammer is still not part of the human. If a robot picks up a hammer, it's not as clear whether that hammer is now part of the robot. If the robot has a 'hand' and that hand clutches the hammer, I'd say we have the same situation as a human picking up a hammer. But if the robot instead has an arm interface, where it can attach many different types of arms, and one of them is a 'hammer arm', then it is much less clear where the robot ends and the hammer begins.

Another difference is that the human hand consists of biological material and the hammer doesn’t, while the robot hand could be made from the exact same material as the hammer.

This, per se, isn’t a big problem. There are many different types of AI artifacts, and these types can be very different from each other – but that only makes the discussions on this topic more complex, just as we have very complex discussions around human culture and societies, divided geographically, by class, ethnicity, gender and so on.

The problem is that this hasn't really permeated our language around AI yet. I believe it's because the subject is still new: we are still defining it, dissecting it, and understanding it, and these discussions are only slowly reaching a larger audience. And the technology itself hasn't reached a level where it actually affects millions of everyday lives. ChatGPT could be one of those technologies, but then again, are any of today's instances of AI actually "intelligent", or are we just watching the very first steps in that direction? Personally, I don't think that matters much. I understand the criticism that a lot of what is referred to as AI today is really just the same algorithms with the latest update, but since I firmly believe that 'real' AI will soon be achieved, it's just a matter of time before these discussions become more relevant.

Anyway, this is where I think the Anthropology of AI is important: to really understand what the technology is and how AI artifacts will change us and our culture.

So, the point is that we need to start adding that complexity when talking about AI, to create language for the different types of artifacts.

My feeling is that in a lot of moral discussions, AI is treated as a human-like artifact, an individual becoming conscious. Movies like "I, Robot" and "2001" easily come to mind, where an artifact gains consciousness and rebels against its human masters. The much more recent "The Creator" is also guilty of this, where the super AI is symbolized as a child.

Moral discussions about humans revolve around free will and around right and wrong choices made by autonomous, clearly demarcated entities. I believe this kind of thinking has simply been transferred to AI artifacts when they are discussed in moral situations. I must admit that I am guilty of the same thing in my thesis, Artificial agency and moral agency: an argument for morally accountable artifacts. But how do we evolve that moral language for AI artifacts?

Well, here are a few things to consider:

  1. We have to take the 'Borg' factor into account: the Borg is an alien life form from Star Trek in which cyborgs are linked to a hive mind. I believe this is analogous to a back-end server that controls a fleet of autonomous vehicles, such as drones or autonomous mining excavators (see the first sketch after this list). Where are the decisions made? What parts does the entity consist of?
  2. Seamless communication: Humans communicate mostly through language. This communication is very slow and is often misunderstood or ignored. For an AI, however, there is no real difference between external and internal communication. This blurs the line between two artifacts being separate or being parts of the same whole, since the connection between two artifacts can be as fast, and work in the same way, as the communication inside either one of them.
  3. Speed of communication: In movies, robots often talk to each other using verbal communication. If they did, the only reason would be for the humans nearby to understand what they are saying. Even if they did communicate verbally, they would have the capacity to do it so quickly that we wouldn't understand a word. At these high rates of communication, 'intelligence' can be distributed via communication. Where is the intelligence stored? Where does it originate?
  4. Type of entity: What form does ChatGPT have? Does it have a physical body? Is it one artifact? Software bots are easy to produce since they don't require any specific hardware, so they might become the most common form of AI. Is the AI the software? What separates one piece of software from another?
  5. Reproduction: Since the AI is all digital, reproducing copies is very simple. The learning period can also be very short because of the incredible speed at which it can operate. This makes it very different from humans, who all spend a long time as underdeveloped children. For an AI, there is no need for individual learning: once one AI is trained, you can make a snapshot of its mind and let new instances start from there (see the second sketch after this list). The concept of individuality changes completely if a mind can be copied, transferred and duplicated easily.
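To make the 'Borg' factor in point 1 a little more concrete, here is a minimal, purely illustrative Python sketch (all class and variable names are invented for the example, not taken from any real system) of a back-end server that makes every decision for a fleet of vehicles. The vehicles only report sensor readings and execute commands, which is exactly why it becomes hard to say whether the 'AI' is the server, one vehicle, or the whole fleet:

```python
# Illustrative toy only: a central "hive" makes all decisions; the vehicles
# are thin bodies that sense and act but hold no intelligence of their own.

class Vehicle:
    """A physical body in the field: sensors and actuators, no decisions."""
    def __init__(self, vehicle_id):
        self.vehicle_id = vehicle_id

    def read_sensors(self):
        # In reality: cameras, lidar, GPS. Here: a stub observation.
        return {"id": self.vehicle_id, "obstacle_ahead": False}

    def execute(self, command):
        print(f"{self.vehicle_id} executing: {command}")


class HiveController:
    """The back-end server: all decisions for the whole fleet are made here."""
    def __init__(self, fleet):
        self.fleet = fleet

    def tick(self):
        for vehicle in self.fleet:
            observation = vehicle.read_sensors()
            command = "brake" if observation["obstacle_ahead"] else "continue"
            vehicle.execute(command)


fleet = [Vehicle("excavator-1"), Vehicle("drone-7")]
HiveController(fleet).tick()
```

Is the morally relevant entity here the controller process, one excavator, or the controller together with every body it currently commands? The code does not answer that; it only makes the question visible.

Point 5 can be illustrated in the same hedged way. The sketch below is not anyone's actual training pipeline; it only shows the idea of 'reproduction' by snapshotting a trained agent's parameters and spawning new instances from that snapshot, so that every copy starts life already 'educated':

```python
import copy

class Agent:
    """Toy agent whose entire 'mind' is a dictionary of learned parameters."""
    def __init__(self, parameters=None):
        self.parameters = parameters or {"experience": 0}

    def learn(self, episodes):
        # Stand-in for an expensive, time-consuming training process.
        self.parameters["experience"] += episodes

# One agent does the slow learning once...
original = Agent()
original.learn(episodes=1_000_000)

# ...then its mind is snapshotted and duplicated; no copy needs a childhood.
snapshot = copy.deepcopy(original.parameters)
clones = [Agent(copy.deepcopy(snapshot)) for _ in range(10)]

print(all(clone.parameters == original.parameters for clone in clones))  # True
```

Whether the ten clones are ten individuals, one distributed individual, or something in between is exactly the demarcation question this text is about.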

These are examples of factors that change the starting point of moral situations where AI is involved. It is easy for us to take our previous understanding of morality and simply apply it to AI artifacts, when it is unclear what we are actually applying it to, since so many properties of the artifact differ from those of humans. It is much easier to talk about Ava (from the movie Ex Machina), an AI that becomes aware of its own existence and is clearly demarcated from its environment, than about the actual AI that will soon exist around us.

I almost titled this text "The demarcation problem of AI", but I changed it when I realized that this isn't really a problem; it is rather a recognition that these matters require much more intricate and elaborate explanations than the mere transference of these ideas from humans to AI artifacts.

We need to create a new language that takes into account all these factors and then investigate how this will relate to human morality.



H R Berg Bretz

Philosophy student writing a master's thesis on criteria for AI moral agency. Software engineer of twenty years.