Robots with quantum minds: From the psychology of fiction to the physical roots of biology (2)

Symphilosopher
Mar 14, 2023


written by Juliet Jiayuan Chen and Johan F. Hoorn

A non-technical introduction to:

Ho, J. K. W., & Hoorn, J. F. (2022). Quantum affective processes for multidimensional decision-making. Nature: Scientific Reports, 12, 20468. doi: 10.1038/s41598-022-22855-0. Available from https://www.nature.com/articles/s41598-022-22855-0

Cite the current introductory paper as:

Hoorn, J. F. (December 7, 2022). Robots with quantum minds: from the psychology of fiction to the physical roots of biology [Essay]. Preprints 2022, 2022120114. doi: 10.20944/preprints202212.0114.v1

To keep the reading manageable, the original paper was cut into four parts, which will be posted successively. Part 1 can be found here: https://medium.com/@symphilosopher/robots-with-quantum-minds-from-the-psychology-of-fiction-to-the-physical-roots-of-biology-1-8a9fff1b7343.

This is Part 2, going into the psychology and communication of encountering AI-driven avatars and robots. Part 3 will discuss some computational aspects.

4. Psychology and communication

The second word in Artificial Intelligence is intelligence. Although psychology still struggles with its definition (see Falck, 2020), many artificers of digital devices do not shy away from calling their systems smart; they call them electronic brains, cellular automata, networks of neurons, and indeed, intelligent. The advantage of psychology and communication over the humanities is that they put their theories to the test, and so 'identification' with a character, like the 'intelligence' of that fictional creature, is seen as an attribution by the observer rather than a quality of the virtual being or the artificial device itself. Moreover, identification seems to be an extreme state on a scale of being involved (e.g., feeling sympathy). Not everybody identifies with every character all of the time. Additionally, one may be aloof or indifferent, bored really, so involvement is counterbalanced by a feeling of emotional distance (cf. antipathy). The other aspect of 'identification' would be similarity, the extent to which character and observer are alike, in personality or with respect to life's vicissitudes, being in the same situation.

Central to emotion psychology is the degree to which something impacts a person's goals and concerns (e.g., McRae & Gross, 2020). Psychologists call that relevance: how important something is to an individual. The direction of affect is called valence: whether you hold positive and/or negative expectations for the near or far future about that (un)important person you encountered or event that just happened.

The psychologists and communication experts who dedicate themselves to the use of technology indicate that people discern certain affordances in a device: action possibilities that facilitate or inhibit achieving certain goals (cf. Wang, Wang, & Tang, 2018). The wheels on a robot signal that it cannot walk and that it needs an even plane to ride on. Movable lips would indicate that a robot can talk. Its perceived intelligence suggests whether an artificial system can advise the user on a mortgage or could beat the world champion at a game of Go. Once evaluated for their relevance and valence, affordances trigger intentions to use the technology in a certain way or to put it aside.

Being involved with a character while keeping a certain distance from it and, if it is interactive like a game character or a social robot, being willing to use the character to achieve personal goals (e.g., do a financial transaction, have a conversation) would lead to an overall level of satisfaction: I like the virtual therapist, I dislike the health coach.

Whereas the humanities excel in setting up grand theories that capture the depth and variety of meaning, the social sciences are strong on examining hypotheses by running experiments and other empirical tests, probing the external validity of our thinking. In doing so, bits and pieces of theory are accepted or rejected with higher or lower likelihood, while relationships between dimensions (e.g., a morally better character is regarded as more relevant) are quantified: a function may be written that tells how much the one increases with an increase in the other.
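To give a feel for what such a function looks like, here is the simplest possible form, a straight line. The variable names echo the example above, and the equation is illustrative only, not a fitted model from our studies:

```latex
% Illustrative sketch only (not an estimate from our studies):
% a linear function relating two psychological dimensions.
\[
  \text{relevance} \;=\; \beta_0 + \beta_1 \cdot \text{ethics} + \varepsilon
\]
% beta_1 says how much perceived relevance rises with each unit
% increase in the character's perceived moral goodness; the error
% term epsilon collects everything the straight line leaves unexplained.
```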

Following this reasoning, over a period of about 20 years, we validated the theory of (Interactively) Perceiving and Experiencing Fictional Characters for movie characters, media figures, office assistants, game characters, and virtual tour guides (Hoorn & Konijn, 2003; Konijn & Hoorn, 2005; Van Vugt, Hoorn, Konijn, & De Bie Dimitriadou, 2006; Van Vugt, Hoorn, & Konijn, 2009), for speed-dating avatars (Hoorn, Konijn, & Pontier, 2018), and for social robots in various settings (e.g., a robot doctor, a robot grandchild) (Hoorn, Konijn, Germans, Burger, & Munneke, 2015). We found the functions that tell how much these psychological dimensions suppress or enhance one another, running from ethics and affordances through relevance and valence up to involvement, distance, and use intentions.

The problem is that across studies, the ups and downs may be more or less the same, and which psychological dimension addresses which is by and large comparable, but none of it is very exact. There is quite some room for variation. The equations shift with every sample taken.

Psychology and communication often present research results in the form of regression equations. For example, if something is threatening, fear increases by that much on average. Then they draw a circle with an arrow to another circle and add the correlation or a beta weight. That's about it. And although they are psychologists, the premise is mathematical, not psychological. Why regression? Why not something else? Many researchers don't ask that question. To what extent is 'regression' what the brain does? In all those psychological models, a regression is written for every arrow in the path model, but the values differ per study. A regression equation estimates the relationship between two or more variables, where one variable (threat) is thought to predict the other (fear), but the percentage of variance explained may differ from study to study and is sometimes as little as 30%, which means that 70% remains unexplained. So yes, there is a lot of room for guesstimation, and that measure of 'chance' can be approached in different ways. We'll get back to that later when we discuss 'fuzziness' and 'quantum probability.'
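As a minimal sketch of why those percentages wobble (the numbers below are invented for demonstration, not data from any of our studies), one can simulate the 'threat predicts fear' example a few times over and watch the betas and the explained variance shift with every fresh sample:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical data: 'threat' ratings predicting 'fear' ratings on a
# 7-point scale, with lots of noise, as is typical of psychological data.
for study in range(1, 4):  # three simulated 'studies' (samples)
    n = 200
    threat = rng.uniform(1, 7, n)
    fear = 0.8 * threat + rng.normal(0, 2.0, n)  # true slope 0.8 plus noise

    # Ordinary least squares: fear = b0 + b1 * threat
    b1, b0 = np.polyfit(threat, fear, 1)

    # R^2: the proportion of variance in 'fear' that 'threat' explains
    pred = b0 + b1 * threat
    r2 = 1 - np.sum((fear - pred) ** 2) / np.sum((fear - fear.mean()) ** 2)

    print(f"study {study}: fear = {b0:.2f} + {b1:.2f}*threat, R^2 = {r2:.2f}")

# Each 'study' yields different betas and an R^2 near .30: roughly 70% of
# the variance in fear stays unexplained, and the equation shifts per sample.
```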

Mind you, academically it may make perfect sense to draw distinctions in a theoretical model that cannot be traced empirically in the answers participants provide. Social scientists tend to revise their theories into simplified versions because 'the data have spoken,' but it may just be that lay people do not think about such things as profoundly as academics do, or are unaware of them. What we need is general theory that is socially stratified or psychologically specified. 'Eppur si muove' ('and yet it moves'), after all.

… to be continued …

Next time, Part 3 enters the realm of computing, where theories of psychology and communication with robots should be translated into computer-readable terms.

·

Symphilosophers

陳佳媛 Ella-Jenna Oosterglorenwoud (Chen)

Research Assistant in Social Robotics with an interest in Philosophy of Mind

Laboratory for Artificial Intelligence in Design (AiDLab)

https://www.linkedin.com/in/symphilosopher/

Symphilosopher@gmail.com

·

洪約翰 Johan F. Hoorn

PhD (D.Litt.), PhD (D.Sc.)

Interfaculty full professor of Social Robotics

The Hong Kong Polytechnic University, Dept. of Computing and School of Design

www.linkedin.com/in/man-of-insight

jf.hoorn@gmail.com

·

References (2)

Falck, S. (2020). The psychology of intelligence. London: Routledge.

Hoorn, J. F., & Konijn, E. A. (2003). Perceiving and experiencing fictional characters: An integrative account. Japanese Psychological Research, 45(4), 250–268.

Hoorn, J. F., Konijn, E. A., Germans, D. M., Burger, S., & Munneke, A. (2015). The in-between machine: The unique value proposition of a robot or why we are modelling the wrong things. In S. Loiseau, J. Filipe, B. Duval, & J. van den Herik (Eds.), Proceedings of the 7th International Conference on Agents and Artificial Intelligence (ICAART) Jan. 10–12, 2015. Lisbon, Portugal (pp. 464–469). Lisbon, PT: ScitePress.

Hoorn, J. F., Konijn, E. A., & Pontier, M. A. (2018). Dating a synthetic character is like dating a man. International Journal of Social Robotics, 1–19. doi: 10.1007/s12369-018-0496-1

Konijn, E. A., & Hoorn, J. F. (2005). Some like it bad. Testing a model for perceiving and experiencing fictional characters. Media Psychology, 7(2), 107–144.

McRae, K., & Gross, J. J. (2020). Emotion regulation. Emotion, 20(1), 1.

Van Vugt, H. C., Hoorn, J. F., Konijn, E. A., & De Bie Dimitriadou, A. (2006). Affective affordances: Improving interface character engagement through interaction. International Journal of Human-Computer Studies, 64(9), 874–888. doi: 10.1016/j.ijhcs.2006.04.008

Van Vugt, H. C., Hoorn, J. F., & Konijn, E. A. (2009). Interactive engagement with embodied agents: An empirically validated framework. Computer Animation and Virtual Worlds, 20, 195–204. doi: 10.1002/cav.312

Wang, H., Wang, J., & Tang, Q. (2018). A review of application of affordance theory in information systems. Journal of Service Science and Management, 11(1), 56.

