AI machines as moral agents, Avoiding definitions and the Turing test (part 8)

H R Berg Bretz
4 min read · Jan 22, 2022


In Part 7 I summarized different suggestions for defining minimal agency. But Floridi and Sanders avoid defining it, and so does Alan Turing's well-known 'Turing test'. Is it simply too hard?

For a mission statement, see Part 1 — for an index, see the Overview.

Photo by Possessed Photography on Unsplash

3.3. Avoiding definitions and the Turing test

As noted in the last section, Floridi and Sanders are reluctant to define agency because it is simply too hard to do. I want to comment briefly on this, since it suggests that they believe Barandiaran et al's project cannot succeed. Instead of defining agency, Floridi and Sanders use the concept of levels of abstraction to avoid the problem, much like the Turing test avoids the problem of defining intelligence. By expanding on this analogy, I hope to weaken the force of Floridi and Sanders' stance of not defining agency.

Alan Turing argued that the question "Can machines think?" should be replaced by "Are there imaginable digital computers which would do well in the imitation game?" (1950, p. 8). Floridi and Sanders describe how Turing avoided the problem of 'defining' intelligence by first fixing some parameters (in this case a dialogue conducted via a computer interface, with response time taken into account) and then establishing the necessary and sufficient conditions for a computing system to count as intelligent at that level of abstraction: the imitation game (2004, p. 353). This is a practical approach, because it is more convenient to answer that question than the original philosophical question of how to define thinking. The idea is simple and brilliant at the same time: instead of taking the long and complicated road of trying to explain something vague and complex, it bypasses the question by simply saying that if you cannot detect any difference between the parties in the imitation game, there is no difference. This is not only a practical way to avoid the problem; it also forces you to question what is actually meant by the original question. However, it does not answer that question, it only highlights the original question's problems.

If we apply this analogy to the task of defining agency, the question is: why is the philosophical approach deemed so hard? One answer could be the association with intentionality. Intentionality is such a puzzling and complicated notion that if agency is tied to it, defining agency becomes just as complicated. If that association also invites implications of consciousness, this might be a further reason why defining agency is a burdensome task. With Barandiaran et al's minimal agent, those burdens are lifted, and the task does not seem as problematic as before. If this is the case, then the fact that Floridi and Sanders prefer the method of levels of abstraction (of which the Turing test is an instance) could be a symptom of something else. It could be a symptom of what is wrong with mistakenly claiming that intentionality and consciousness are necessary for agency[1].
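To make the structure of the imitation game concrete, here is a small sketch of it as a protocol. The sketch and all its names (ask, identify_human, human_reply, machine_reply) are my own illustration, not anything from Turing's or Floridi and Sanders' texts; the point is only to show where the level of abstraction is fixed.

```python
import random

def imitation_game(ask, identify_human, human_reply, machine_reply, rounds=5):
    """Toy sketch of the imitation game's structure (illustrative names only).

    ask() -> a question posed by the judge (text).
    identify_human(transcripts) -> index of the party the judge believes is human.
    human_reply / machine_reply: question (text) -> reply (text).
    The judge only ever sees text, never the parties themselves.
    """
    parties = [human_reply, machine_reply]
    random.shuffle(parties)            # hide which hidden party is which
    transcripts = [[], []]

    for _ in range(rounds):
        question = ask()
        for i, reply in enumerate(parties):
            transcripts[i].append((question, reply(question)))

    guess = identify_human(transcripts)
    return parties[guess] is machine_reply   # True if the machine fooled the judge
```

Nothing in this protocol asks what thinking is; it only checks whether a judge can tell the parties apart at the chosen level of abstraction, a text-only dialogue.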

To sum up this chapter, minimal agency has three conditions: (i) individuality, which separates the agent from its environment, (ii) asymmetry, which shows that the agent has the ability to manage or exert forces in its environment, and (iii) normativity, which shows that the agent acts according to a purpose. The agent's actions are driven by what can be described by the instrumental attribution of the belief/desire model: the agent uses its beliefs to achieve its desires, and if it does not achieve them, it has failed in some way. This minimal agency is in line with how we use 'agent' in everyday language (excluding causal agency) and with how we psychologically attribute agency through our fast, intuitive system. There is also some reluctance in philosophical discussions concerning artificial intelligence to define agency; Floridi and Sanders in particular resist this. One reason could be an overly demanding concept of agency, one associated with intentionality and consciousness. I argued that this resistance is analogous to how Turing avoided answering whether machines can think, which was partly a pragmatic way to sidestep a difficult question. If we accept minimal agency over intentionality agency, then the problem that the very definition of agency nullifies 'artificial agency' is averted. Let us then turn to the distinction between mere agency and moral agency.

Comments:

The Turing test doesn't care whether the agent has real consciousness or just advanced software; as long as a human is fooled by it, the task is complete. That is basically my argument: why is it so important to know whether the agent is conscious or not? In many respects, that is not what matters. And I'm a firm believer that in the future it is going to be much harder to know whether an agent is human or not. But I personally have a problem with the Turing test: what does it mean to be convinced that a chatbot is human? I think a lot of people have been fooled, but only because they didn't know better. In a sense it is a very arbitrary measure. Someone who knows how a specific chatbot is programmed could probably distinguish it from a human with ease, while someone else might not.

Next part here!

Footnotes:

[1] This does not mean that Floridi and Sanders' levels of abstraction have no merit. On the contrary, the method seems to have many merits, but that is irrelevant to my argument that it could be a symptom of why they consider defining agency so problematic.
