Robots with quantum minds. From the psychology of fiction to the physical roots of biology (3)

Symphilosopher
7 min read · Mar 27, 2023


A non-technical introduction to:

Ho, J. K. W., & Hoorn, J. F. (2022). Quantum affective processes for multidimensional decision-making. Nature: Scientific Reports, 12, 20468. doi: 10.1038/s41598-022-22855-0. Available from https://www.nature.com/articles/s41598-022-22855-0

Cite the current introductory paper as:

Hoorn, J. F. (December 7, 2022). Robots with quantum minds: from the psychology of fiction to the physical roots of biology [Essay]. Preprints 2022, 2022120114. doi: 10.20944/preprints202212.0114.v1.

To keep the reading doable, the original paper was divided into four parts, which are posted successively.

Part 1 is available here: https://medium.com/@symphilosopher/robots-with-quantum-minds-from-the-psychology-of-fiction-to-the-physical-roots-of-biology-1-8a9fff1b7343

Part 2: https://medium.com/@symphilosopher/robots-with-quantum-minds-from-the-psychology-of-fiction-to-the-physical-roots-of-biology-2-da3f7c4f9e9b

This is Part 3, explaining why it is important to formalize theory and what that does for developing AI-driven avatars and robots.

5. Computer science

The third concept in Artificial Intelligence is formal modeling. Computer scientists do not care much about what average users think of their models. Often that is too bad, as when a self-driving car runs over a jaywalker because it cannot handle exceptions to the rule, but there is an upside to this mentality as well. Before anything, computer scientists are logicians and mathematicians. Whereas the humanities sketch the grand conception and social scientists fight over the empirical validity of opposing findings, mathematics is so abstract that it prompts us to unify our diversified theorizing. Formal modeling supports the internal verification of a theory: it detects flaws in the logic and pinpoints unspecified variables whose meaning seems obvious to social scientists but which computers cannot work with unless you spell it out in unmistakable terms.

5.1 Why formalization?

For example, Media Equation theory (ME) (e.g., Złotowski, Sumioka, Eyssel, Nishio, Bartneck, & Ishiguro, 2018; Lee‐Won, Joo, & Park, 2020) and Computers Are Social Actors (CASA) (e.g., Gambino, Fox, & Ratan, 2020) say that humans apply social scripts to media (TV, computers, robots) as if they were real people (cf. blaming your computer for doing it wrong). Note the as if, because that takes those media straight into the realm of fiction (see Hoorn, 2020a; 2020b). The as if also indicates a level of similarity, not equality, and because ME and CASA do not provide any equations for their predictions (how similar should a medium be before we apply social scripts to it?), the very name of Media Equation is misleading. Because ME and CASA say little about equations, we may assume the simplest, which is a linear regression. The more social cues a system supplies, the more it is treated like a human being (social robots more so than TV sets): y = ax + b. The level of perceived human-likeness y grows with the number of social cues x at a rate of a, starting from some baseline b (the human-likeness response when the machine provides no cues at all).

As a counterpoint, Uncanny Valley theory (UV) states that there is a dip in the appreciation of human-likeness (e.g., Zhang, Zhang, Du, Qi, & Liu, 2020). If robots become lifelike but not yet good enough, people get scared. UV does not offer equations either but shows a graph that goes up, then down when the robot is just not good enough, and higher up again when it approaches near-humanness. That curve needs an equation with at least a cubed x in it; otherwise you do not get that wave form.

Yet another theory is singularity (e.g., Kurzweil, 2016), positing that one day AI and robots will outsmart human beings and supersede them, a process that would happen at an ‘exponential rate.’ Here, human-likeness is not about applying social scripts or about eerie appearance or behavior, but about intelligence, the commodity humans attribute to other humans, to animals, or to artificial systems. The term exponential gives us a good grip on the growth curve, as it means that human-likeness (i.e., intelligence) is attributed to machines according to a^x.

Through formalization, three completely different theories have become fully comparable. They all make predictions about human-likeness but in different ways: ME and CASA predict a linear relation (~ x), and maybe a quadratic one (~ x²) (a parabola or inverted U) if there is an optimum (which they do not mention); UV expects a dual extremum, a curve that goes up, down, and higher up again, representable in a power series with at least cubic behavior (~ x³); and singularity assumes an exponential growth curve (a^x). Mathematics shows that three theories, diverse at surface level, are manifestations of the same underlying assumption: more humanlike cues lead to higher perceived human-likeness (ME, CASA), but there may be an optimum (UV), or there is no end to it (singularity).
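As a minimal sketch of this comparison, the three curve shapes can be written as functions of the number of social cues x. The coefficient values below are illustrative assumptions chosen only to produce the shapes described above; they are not taken from any of the cited theories.

```python
# Three candidate curves mapping number of social cues (x) to
# perceived human-likeness (y). Coefficients are illustrative only.

def media_equation(x, a=0.5, b=0.1):
    """ME/CASA: linear growth, y = a*x + b."""
    return a * x + b

def uncanny_valley(x):
    """UV: a cubic curve with a dip (up, down, higher up).
    Polynomial chosen purely to exhibit that wave form."""
    return 0.05 * x**3 - 0.6 * x**2 + 2.0 * x

def singularity(x, a=1.5):
    """Singularity: exponential growth, y = a**x."""
    return a ** x

# Print the three predictions side by side for 0..8 cues.
for x in range(9):
    print(x, round(media_equation(x), 2),
          round(uncanny_valley(x), 2),
          round(singularity(x), 2))
```

Running the loop shows the point of the formalization: the linear and exponential curves only ever rise, whereas the cubic curve rises, dips (the valley), and then rises again.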

5.2 Different logics

In classical logic, things are black or white, true or false, on or off, 1 or 0. That hardly represents the way humans naturally think, feel, and behave, which oftentimes is more ambiguous and vaguer than either like or dislike. There is another kind of logic that can deal, to a degree, with the intermediate values between true and false. Fuzzy logic works with percentages or ‘membership functions’ by which the value of ‘how you feel’ may lie between 0 and 1, the grey zone. Instead of saying that a person is good or no good, with fuzzy logic we can assign a degree of goodness to a character. Thus, the verdict does not have to be ‘totally good’ or ‘entirely not good.’ A character that is deemed ‘somewhat good’ now comes within range of computational assessment (cf. Raghuvanshi & Perkowski, 2010).
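A minimal sketch of such a graded judgment in code: the membership function below, with its breakpoints, is a hypothetical illustration of the idea, not the model used in the cited work.

```python
def goodness(score):
    """Fuzzy membership: maps a character judgment score (0-10)
    to a degree of 'goodness' between 0 and 1.
    Below 3 -> not good (0); above 7 -> fully good (1);
    in between -> a graded, 'somewhat good' membership."""
    if score <= 3:
        return 0.0
    if score >= 7:
        return 1.0
    return (score - 3) / 4  # linear ramp through the grey zone

print(goodness(2))  # 'entirely not good'
print(goodness(5))  # 'somewhat good': degree 0.5
print(goodness(9))  # 'totally good'
```

The in-between values are exactly what classical (two-valued) logic cannot express: a character scoring 5 is neither good nor not good but good to degree 0.5.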

We took the validated theory Interactively Perceiving and Experiencing Fictional Characters and translated it into a mathematical model using fuzzy logic (Hoorn, Pontier, & Siddiqui, 2012; Hoorn, Baier, Van Maanen, & Wester, 2021). In this way, the computer could mimic how humans assess mediated others (humans online, chatbots, etc.). After implementation as software, dubbed Silicon Coppélia, we tested our system by having girls speed-date an avatar that was driven by our AI or by real boys. Our AI passed the Turing Test (under both frequentist and Bayesian statistics): Silicon Coppélia’s performance was indiscernible from that of a human being (Hoorn, Konijn, & Pontier, 2018).

… to be continued …

Next time, Part 4 speculates about the possibility of quantum states in the brain, which might explain why people can have ambiguous feelings, vague emotions, or think that they should not cry while the tears roll from their eyes. Would quantum probability be the way robots may ‘understand’ human beings?

·

Symphilosophers

陳佳媛 Ella-Jenna Oosterglorenwoud (Chen)

Research Assistant in Social Robotics with an interest in Philosophy of Mind

Laboratory for Artificial Intelligence in Design (AiDLab)

https://www.linkedin.com/in/symphilosopher/

Symphilosopher@gmail.com

·

洪約翰 Johan F. Hoorn

PhD(D. Litt.), PhD(D. Sc.)

Interfaculty full professor of Social Robotics

The Hong Kong Polytechnic University, Dept. of Computing and School of Design

www.linkedin.com/in/man-of-insight

jf.hoorn@gmail.com

·

References (3)

Gambino, A., Fox, J., & Ratan, R. A. (2020). Building a stronger CASA: Extending the computers are social actors paradigm. Human-Machine Communication, 1, 71–85. doi: 10.30658/hmc.1.5

Hoorn, J. F. (2020a). Theory of robot communication: I. The medium is the communication partner. International Journal of Humanoid Robotics, 17(6), 2050026. doi: 10.1142/S0219843620500267

Hoorn, J. F. (2020b). Theory of robot communication: II. Befriending a robot over time. International Journal of Humanoid Robotics, 17(6), 2050027. doi: 10.1142/S0219843620500279

Hoorn, J. F., Baier, T., Van Maanen, J. A. N., & Wester, J. (2021). Silicon Coppélia and the formalization of the affective process. IEEE Transactions on Affective Computing, x(x), 1–24. doi: 10.1109/TAFFC.2020.3048587

Hoorn, J. F., Konijn, E. A., & Pontier, M. A. (2018). Dating a synthetic character is like dating a man. International Journal of Social Robotics, 1–19. doi: 10.1007/s12369-018-0496-1

Hoorn, J. F., Pontier, M. A., & Siddiqui, G. F., (2012). Coppélius’ concoction: Similarity and complementarity among three affect-related agent models. Cognitive Systems Research, 15–16, 33–49. doi:10.1016/j.cogsys.2011.04.001 [Elsevier’s top 25 most cited cognitive systems research article, 2015]

Kurzweil, R. (2016). Superintelligence and singularity (pp. 146–170). In S. Schneider (Ed.), Science fiction and philosophy: from time travel to superintelligence. New York: Wiley Blackwell.

Lee‐Won, R. J., Joo, Y. K., & Park, S. G. (2020). Media Equation. The International Encyclopedia of Media Psychology (pp. 1–10). New York: John Wiley & Sons. doi: 10.1002/9781119011071.iemp0158

Raghuvanshi, A., & Perkowski, M. A. (2010). Fuzzy quantum circuits to model emotional behaviors of humanoid robots. In Proceedings of the IEEE Congress on Evolutionary Computation July 18–23, 2010. Barcelona, Spain (pp. 1–8). Piscataway, NJ: IEEE. doi: 10.1109/CEC.2010.5586038

Zhang, J., Li, S., Zhang, J. Y., Du, F., Qi, Y., & Liu, X. (2020, July). A literature review of the research on the Uncanny Valley. Lecture Notes in Computer Science (LNCS, vol. 12192), International Conference on Human-Computer Interaction (HCII ‘20). Cross-Cultural Design. User Experience of Products, Services, and Intelligent Environments (pp. 255–268). Cham, CH: Springer.

Złotowski, J., Sumioka, H., Eyssel, F., Nishio, S., Bartneck, C., & Ishiguro, H. (2018). Model of dual anthropomorphism: The relationship between the Media Equation effect and implicit anthropomorphism. International Journal of Social Robotics, 10, 701–714. doi: 10.1007/s12369-018-0476-5
